Cohort analysis is a pervasive activity in web analytics. One divides users
into groups according to specific criteria and tracks their behavior over time.
Despite its extensive use in practice, cohort analysis has received little attention in the academic literature on evaluating online user behavior. This work introduces an unsupervised, non-parametric approach to grouping Internet users based on their activities; canonical methods from marketing and engineering-based techniques underperform in comparison. COHORTNEY is the first machine-learning-based cohort analysis algorithm with a robust theoretical justification.
|
We study ancient Ricci flows which admit asymptotic solitons in the sense of
Perelman. We prove that the asymptotic solitons must coincide with Bamler's
tangent flows at infinity. Furthermore, we show that Perelman's
$\nu$-functional is uniformly bounded on such ancient solutions; this fact
leads to logarithmic Sobolev inequalities and Sobolev inequalities. Lastly, as
an important tool for the proofs of the above results, we also show that, for a
complete Ricci flow with bounded curvature, the bound on the Nash entropy
depends only on the local geometry around an $H_n$-center of its base point.
|
A deep equilibrium model uses implicit layers, which are implicitly defined
through an equilibrium point of an infinite sequence of computation. It avoids
any explicit computation of the infinite sequence by finding an equilibrium
point directly via root-finding and by computing gradients via implicit
differentiation. In this paper, we analyze the gradient dynamics of deep
equilibrium models with nonlinearity only on weight matrices and non-convex
objective functions of weights for regression and classification. Despite
non-convexity, convergence to a global optimum at a linear rate is guaranteed
without any assumption on the width of the models, allowing the width to be
smaller than the output dimension and the number of data points. Moreover, we
prove a relation between the gradient dynamics of the deep implicit layer and
the dynamics of a trust-region Newton method applied to a shallow explicit layer. This
mathematically proven relation, along with our numerical observations, suggests
the importance of understanding the implicit bias of implicit layers and an open
problem on the topic. Our proofs deal with implicit layers, weight tying and
nonlinearity on weights, and differ from those in the related literature.
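For intuition, here is a minimal sketch (Python/NumPy, not the paper's code) of the generic deep-equilibrium mechanics described above: the equilibrium is found by fixed-point iteration, and gradients are obtained by implicit differentiation through a single linear solve. The linear layer $z = Az + Ux$, the contraction assumption on $A$, and the toy loss are illustrative assumptions, not the model class analyzed in the paper.

```python
import numpy as np

def deq_forward(A, U, x, tol=1e-10, max_iter=1000):
    """Find the equilibrium z* = A @ z* + U @ x by fixed-point iteration.
    Assumes the spectral radius of A is < 1 so the iteration converges."""
    z = np.zeros(A.shape[0])
    for _ in range(max_iter):
        z_new = A @ z + U @ x
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z

def deq_backward(A, dL_dz):
    """Implicit differentiation: with z* = A z* + U x, the adjoint is
    u = (I - A)^{-T} dL/dz*, obtained without unrolling the iteration."""
    n = A.shape[0]
    return np.linalg.solve((np.eye(n) - A).T, dL_dz)

# toy usage
rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((4, 4)) / np.sqrt(4)  # contraction (roughly)
U = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
z_star = deq_forward(A, U, x)
u = deq_backward(A, np.ones(4))  # adjoint for a toy loss L = sum(z*)
dL_dU = np.outer(u, x)           # gradient w.r.t. U via the chain rule
```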
|
Owing to its low vapor pressure, low toxicity, and high thermal and electrical
conductivities, eutectic Ga-In (EGaIn) has shown great potential for smart
material applications in flexible devices, cooling in micro-devices,
self-healing reconfigurable materials, and actuators. For such applications,
EGaIn is maintained above its melting point, below which it undergoes
solidification and complex phase separation. A scientific understanding of the
structural and compositional evolution during thermal cycling could help
further assess the application range of Ga and other low-melting-point fusible
alloys. Here, we use an integrated suite of cryogenically enabled advanced
microscopy and microanalysis techniques to better understand phase separation and (re)mixing
processes in EGaIn. We reveal an overlooked thermal-stimulus-response behavior
for frozen mesoscale EGaIn at cryogenic temperatures, with a sudden volume
expansion observed during in-situ heat-cycling, associated with the
immiscibility between Ga and In during cooling and the formation of metastable
Ga phases. These results emphasize the importance of the kinetics of
rejuvenation, and open new paths for EGaIn in sensor applications.
|
Deep learning has achieved impressive performance on many tasks in recent
years. However, it is often not enough for deep neural networks to provide only
point estimates. For high-risk tasks, we need to assess the reliability of model
predictions, which requires quantifying the uncertainty of the predictions and
constructing prediction intervals. In this paper, we explore uncertainty in deep
learning to construct prediction intervals. We comprehensively consider two
categories of uncertainty: aleatoric uncertainty and epistemic uncertainty. We
design a special loss function that enables us to learn uncertainty without
uncertainty labels; we only need to supervise the regression task itself. The
aleatoric uncertainty is learned implicitly from the loss function, while the
epistemic uncertainty is accounted for through ensembling. Our method couples
the construction of prediction intervals with uncertainty estimation.
Impressive results on some publicly available datasets show that the
performance of our method is competitive with other state-of-the-art methods.
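As a hedged illustration of how a loss can learn aleatoric uncertainty without explicit uncertainty labels, and how an ensemble captures epistemic uncertainty, consider the Gaussian negative log-likelihood with a predicted variance head; this is a common construction in the literature, not necessarily the exact loss of this paper, and the function names are ours.

```python
import torch

def heteroscedastic_nll(mean, log_var, target):
    """Gaussian negative log-likelihood with a predicted variance head.
    Minimizing this w.r.t. (mean, log_var) lets the network learn the
    aleatoric (data) noise level without any uncertainty labels."""
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2
            + 0.5 * log_var).mean()

def ensemble_predict(means, log_vars):
    """Epistemic uncertainty via an ensemble of M independently trained models:
    predictive variance = average aleatoric variance + spread of member means."""
    mu = means.mean(dim=0)                        # (batch,)
    aleatoric = torch.exp(log_vars).mean(dim=0)   # average predicted variance
    epistemic = means.var(dim=0, unbiased=False)  # variance across members
    return mu, aleatoric + epistemic
```

A prediction interval can then be formed around the predictive mean using the square root of the total (aleatoric plus epistemic) variance.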
|
We improve on a construction of Mestre--Shioda to produce some families of
curves $X/\mathbb{Q}$ of record rank relative to the genus $g$ of $X$. Our
first main result is that for any integer $g \geqslant 8$ with $g \equiv 2
\pmod 3$, there exist infinitely many genus $g$ hyperelliptic curves over
$\mathbb{Q}$ with at least $8g+32$ $\mathbb{Q}$-points and Mordell--Weil rank
$\geqslant 4g + 15$ over $\mathbb{Q}$. Our second main theorem is that if $g+1$
is an odd prime and $K$ contains the $(g+1)$-th roots of unity, then there exist
infinitely many genus $g$ hyperelliptic curves over $K$ with Mordell--Weil rank
at least $6g$ over $K$.
|
Simultaneous localization and mapping (SLAM) is a fundamental capability
required by most autonomous systems. In this paper, we address the problem of
loop closing for SLAM based on 3D laser scans recorded by autonomous cars. Our
approach utilizes a deep neural network exploiting different cues generated
from LiDAR data for finding loop closures. It estimates an image overlap
generalized to range images and provides a relative yaw angle estimate between
pairs of scans. Based on such predictions, we tackle loop closure detection and
integrate our approach into an existing SLAM system to improve its mapping
results. We evaluate our approach on sequences of the KITTI odometry benchmark
and the Ford campus dataset. We show that our method can effectively detect
loop closures surpassing the detection performance of state-of-the-art methods.
To highlight the generalization capabilities of our approach, we evaluate our
model on the Ford campus dataset while using only KITTI for training. The
experiments show that the learned representation is able to provide reliable
loop closure candidates, even in unseen environments.
|
Due to its high lethality amongst the elderly, the safety of nursing homes
has been of central importance during the COVID-19 pandemic. With test
procedures becoming available at scale, such as antigen or RT-LAMP tests, and
increasing availability of vaccinations, nursing homes might be able to safely
relax prohibitory measures while controlling the spread of infections (meaning
an average of one or fewer secondary infections per index case). Here, we
develop a detailed agent-based epidemiological model for the spread of
SARS-CoV-2 in nursing homes to identify optimal prevention strategies. The
model is microscopically calibrated to high-resolution data from nursing homes
in Austria, including detailed social contact networks and information on past
outbreaks. We find that the effectiveness of mitigation testing depends
critically on the timespan between test and test result, the detection
threshold of the viral load for the test to give a positive result, and the
screening frequencies of residents and employees. Under realistic conditions
and in absence of an effective vaccine, we find that preventive screening of
employees only might be sufficient to control outbreaks in nursing homes,
provided that turnover times and detection thresholds of the tests are low
enough. If vaccines that are moderately effective against infection and
transmission are available, control is achieved if 80% or more of the
inhabitants are vaccinated, even if no preventive testing is in place and
residents are allowed to have visitors. Since these results strongly depend on
vaccine efficacy against infection, retention of testing infrastructures,
regular voluntary screening, and sequencing of virus genomes are advised to
enable early identification of new variants of concern.
|
Membership inference (MI) attacks affect user privacy by inferring whether
given data samples have been used to train a target learning model, e.g., a
deep neural network. There are two types of MI attacks in the literature, i.e.,
those with and without shadow models. The success of the former heavily depends
on the quality of the shadow model, i.e., the transferability between the
shadow and the target; the latter, given only blackbox probing access to the
target model, cannot make an effective inference of unknowns, compared with MI
attacks using shadow models, due to the insufficient number of qualified
samples labeled with ground truth membership information.
In this paper, we propose an MI attack, called BlindMI, which probes the
target model and extracts membership semantics via a novel approach, called
differential comparison. The high-level idea is that BlindMI first generates a
dataset with nonmembers via transforming existing samples into new samples, and
then differentially moves samples from a target dataset to the generated,
non-member set in an iterative manner. If the differential move of a sample
increases the set distance, BlindMI considers the sample a non-member, and vice
versa.
BlindMI was evaluated by comparing it with state-of-the-art MI attack
algorithms. Our evaluation shows that BlindMI improves F1-score by nearly 20%
when compared to state-of-the-art on some datasets, such as Purchase-50 and
Birds-200, in the blind setting where the adversary does not know the target
model's architecture and the target dataset's ground truth labels. We also show
that BlindMI can defeat state-of-the-art defenses.
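The differential-comparison step can be illustrated with a toy sketch (ours, not the authors' code); the RBF-kernel MMD used as the set distance, the function names, and the data layout are assumptions for illustration only.

```python
import numpy as np

def set_distance(S1, S2, gamma=1.0):
    """Simple RBF-kernel MMD^2 between two sets of model output vectors."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(S1, S1).mean() + k(S2, S2).mean() - 2 * k(S1, S2).mean()

def differential_comparison(target_probs, nonmember_probs):
    """Move each target sample into the generated non-member set and check
    whether the distance between the two sets grows (per the rule above,
    a growing distance indicates a non-member)."""
    members = []
    for i in range(len(target_probs)):
        rest = np.delete(target_probs, i, axis=0)
        moved = np.vstack([nonmember_probs, target_probs[i:i + 1]])
        base = set_distance(rest, nonmember_probs)
        after = set_distance(rest, moved)
        members.append(after <= base)  # distance grew -> likely non-member
    return np.array(members)
```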
|
We propose a novel high dynamic range (HDR) video reconstruction method with
new tri-exposure quad-Bayer sensors. Thanks to the larger number of exposure
sets and their spatially uniform deployment over a frame, they are more robust
to noise and spatial artifacts than previous spatially varying exposure (SVE)
HDR video methods. Nonetheless, the motion blur from longer exposures, the
noise from short exposures, and inherent spatial artifacts of the SVE methods
remain huge obstacles. Additionally, temporal coherence must be taken into
account for the stability of video reconstruction. To tackle these challenges,
we introduce a novel network architecture that divides and conquers these
problems. In order to better adapt the network to the large dynamic range, we
also propose LDR-reconstruction loss that takes equal contributions from both
the highlighted and the shaded pixels of HDR frames. Through a series of
comparisons and ablation studies, we show that the tri-exposure quad-Bayer
sensor, combined with our solution, is a better capture option than previous
methods, particularly for scenes with a larger dynamic range and objects in motion.
|
We derive a limiting absorption principle on any compact interval in
$\mathbb{R} \backslash \{0\}$ for the free massless Dirac operator, $H_0 =
\alpha \cdot (-i \nabla)$ in $[L^2(\mathbb{R}^n)]^N$, $n \geq 2$,
$N=2^{\lfloor(n+1)/2\rfloor}$, and then prove the absence of singular
continuous spectrum of interacting massless Dirac operators $H = H_0 +V$, where
$V$ decays like $O(|x|^{-1 - \varepsilon})$.
Expressing the spectral shift function $\xi(\,\cdot\,; H,H_0)$ as normal
boundary values of regularized Fredholm determinants, we prove that for
sufficiently decaying $V$, $\xi(\,\cdot\,;H,H_0) \in C((-\infty,0) \cup
(0,\infty))$, and that the left and right limits at zero, $\xi(0_{\pm};
H,H_0)$, exist.
Introducing the non-Fredholm operator $\boldsymbol{D}_{\boldsymbol{A}} =
\frac{d}{dt} + \boldsymbol{A}$ in
$L^2\big(\mathbb{R};[L^2(\mathbb{R}^n)]^N\big)$, where $\boldsymbol{A} =
\boldsymbol{A_-} + \boldsymbol{B}$, $\boldsymbol{A_-}$, and $\boldsymbol{B}$
are generated in terms of $H, H_0$ and $V$, via $A(t) = A_- + B(t)$, $A_- =
H_0$, $B(t)=b(t) V$, $t \in \mathbb{R}$, assuming $b$ is smooth, $b(-\infty) =
0$, $b(+\infty) = 1$, and introducing $\boldsymbol{H_1} =
\boldsymbol{D}_{\boldsymbol{A}}^{*} \boldsymbol{D}_{\boldsymbol{A}}$,
$\boldsymbol{H_2} = \boldsymbol{D}_{\boldsymbol{A}}
\boldsymbol{D}_{\boldsymbol{A}}^{*}$, one of the principal results in this
manuscript expresses the $k$th resolvent regularized Witten index
$W_{k,r}(\boldsymbol{D}_{\boldsymbol{A}})$ ($k \in \mathbb{N}$, $k \geq \lceil
n/2 \rceil$) in terms of spectral shift functions as \[
W_{k,r}(\boldsymbol{D}_{\boldsymbol{A}}) = \xi(0_+; \boldsymbol{H_2},
\boldsymbol{H_1}) = [\xi(0_+;H,H_0) + \xi(0_-;H,H_0)]/2. \] Here
$L^2(\mathbb{R};\mathcal{H}) = \int_{\mathbb{R}}^{\oplus} dt \, \mathcal{H}$
and $\boldsymbol{T} = \int_{\mathbb{R}}^{\oplus} dt \, T(t)$ abbreviate direct
integrals.
|
The one-loop contributions to the trilinear neutral gauge boson couplings
$ZZV^\ast$ ($V=\gamma,Z,Z'$), parametrized in terms of one $CP$-conserving
form factor $f_5^{V}$ and one $CP$-violating form factor $f_4^{V}$, are calculated in
models with $CP$-violating flavor changing neutral current couplings mediated
by the $Z$ gauge boson and an extra neutral gauge boson $Z'$. Analytical
results are presented in terms of both Passarino-Veltman scalar functions and
closed form functions. Constraints on the vector and axial couplings of the $Z$
gauge boson $\left|g_{{VZ}}^{tu}\right|< 0.0096$ and
$\left|g_{{VZ}}^{tc}\right|<0.011$ are obtained from the current experimental
data on the $t\rightarrow Z q$ decays. It is found that in the case of the
$ZZ\gamma^\ast$ vertex the only non-vanishing form factor is $f_5^{\gamma}$,
which can be of the order of $10^{-3}$, whereas for the $ZZZ^\ast$ vertex both
form factors $f_5^{Z}$ and $f_4^{Z}$ are non-vanishing and can be of the order
of $10^{-6}$ and $10^{-5}$, respectively. Our estimates for $f_5^{\gamma}$ and
$f_5^{Z}$ are smaller than those predicted by the standard model, where
$f_4^{Z}$ is absent up to the one loop level. We also estimate the $ZZ{Z'}^{*}$
form factors arising from both diagonal and non-diagonal $Z'$ couplings within
a few extension models. It is found that in the diagonal case $f_{5}^{Z'}$ is
the only non-vanishing form factor and its real and imaginary parts can be of
the order of $10^{-1}-10^{-2}$ and $ 10^{-2}-10^{-3}$, respectively, with the
dominant contributions arising from the light quarks and leptons. In the
non-diagonal case $f_{5}^{Z^\prime}$ can be of the order of $10^{-4}$, whereas
$f_4^{Z'}$ can reach values as large as $10^{-7}-10^{-8}$, with the largest
contributions arising from the $Z'tq$ couplings.
|
Large-scale sky surveys have played a transformative role in our
understanding of astrophysical transients, only made possible by increasingly
powerful machine learning-based filtering to accurately sift through the vast
quantities of incoming data generated. In this paper, we present a new
real-bogus classifier based on a Bayesian convolutional neural network that
provides nuanced, uncertainty-aware classification of transient candidates in
difference imaging, and demonstrate its application to the datastream from the
GOTO wide-field optical survey. Not only are candidates assigned a
well-calibrated probability of being real, but also an associated confidence
that can be used to prioritise human vetting efforts and inform future model
optimisation via active learning. To fully realise the potential of this
architecture, we present a fully-automated training set generation method which
requires no human labelling, incorporating a novel data-driven augmentation
method to significantly improve the recovery of faint and nuclear transient
sources. We achieve competitive classification accuracy (FPR and FNR both below
1%) compared against classifiers trained with fully human-labelled datasets,
whilst being significantly quicker and less labour-intensive to build. This
data-driven approach is uniquely scalable to the upcoming challenges and data
needs of next-generation transient surveys. We make our data generation and
model training codes available to the community.
|
The weighted $k$-server problem is a natural generalization of the $k$-server
problem in which the cost incurred in moving a server is the distance traveled
times the weight of the server. Even after almost three decades since the
seminal work of Fiat and Ricklin (1994), the competitive ratio of this problem
remains poorly understood, even on the simplest class of metric spaces -- the
uniform metric spaces. In particular, in the case of randomized algorithms
against the oblivious adversary, neither a better upper bound than the doubly
exponential deterministic upper bound, nor a better lower bound than the
logarithmic lower bound of unweighted $k$-server, is known. In this article, we
make significant progress towards understanding the randomized competitive
ratio of weighted $k$-server on uniform metrics. We cut down the triply
exponential gap between the upper and lower bound to a singly exponential gap
by proving that the competitive ratio is at least exponential in $k$,
substantially improving on the previously known lower bound of about $\ln k$.
|
Recently, many authors have embraced the study of certain properties of
modules such as projectivity, injectivity and flatness from an alternative
point of view. Rather than saying a module has a certain property or not, each
module is assigned a relative domain which, somehow, measures to which extent
it has this particular property. In this work, we introduce a new and fresh
perspective on flatness of modules. However, we will first investigate a more
general context by introducing domains relative to a precovering class $\mathcal{X}$. We
call these domains $\mathcal{X}$-precover completing domains. In particular, when $\mathcal{X}$
is the class of flat modules, we call them flat-precover completing domains.
This approach allows us to provide a common framework for a number of classical
notions. Moreover, some known results are generalized and some classical rings
are characterized in terms of these domains.
|
Cometary outbursts offer a valuable window into the composition of comet
nuclei with their forceful ejection of dust and volatiles in explosive events,
revealing the interior components of the comet. Understanding how different
types of outbursts influence the dust properties and volatile abundances, so as
to better distinguish signatures attributable to primordial composition from
features that result from processing, is an important task best undertaken with
a multi-instrument approach. The European Space Agency
\textit{Rosetta} mission to 67P/Churyumov-Gerasimenko carried a suite of
instruments capable of carrying out this task in the near-nucleus coma with
unprecedented spatial and spectral resolution. In this work we discuss two
outbursts that occurred on November 7, 2015 and were observed by three instruments
on board: the Alice ultraviolet spectrograph, the Visual Infrared and Thermal
Imaging Spectrometer (VIRTIS), and the Optical, Spectroscopic, and Infrared
Remote Imaging System (OSIRIS). Together the observations show that mixed gas
and dust outbursts can have different spectral signatures representative of
their initiating mechanisms, with the first outburst showing indicators of a
cliff collapse origin and the second more representative of fresh volatiles
being exposed via a deepening fracture. This analysis opens up the possibility
of remote spectral classification of cometary outbursts with future work.
|
The search for suitable materials for solid-state stationary storage of green
hydrogen is pushing the implementation of efficient renewable energy systems.
This involves rational design and modification of cheap alloys for effective
storage in mild conditions of temperature and pressure. Among many
intermetallic compounds described in the literature, TiFe-based systems have
recently regained vivid interest as materials for practical applications since
they are low-cost and they can be tuned to match required pressure and
operation conditions. This work aims to provide a comprehensive review of
publications involving chemical substitution in TiFe-based compounds for
guiding compound design and materials selection in current and future hydrogen
storage applications. Mono- and multi-substituted compounds modify TiFe
thermodynamics and are beneficial for many hydrogenation properties. They will
be reviewed and discussed in depth, with a focus on manganese substitution.
|
We report a series of measurements of the effect of an electric field on the
frequency of the ultranarrow linewidth $^7F_0 \rightarrow$ $^5D_0$ optical
transition of $\rm Eu^{3+}$ ions in an $\rm Y_2SiO_5$ matrix at cryogenic
temperatures. We provide linear Stark coefficients along two dielectric axes
and for the two different substitution sites of the $\rm Eu^{3+}$ ions, with an
unprecedented accuracy, and an upper limit for the quadratic Stark shift. The
measurements, which indicate that the electric field sensitivity is a factor of
seven larger for site 1 relative to site 2 for a particular direction of the
electric field, are of direct interest both in the context of quantum
information processing and laser frequency stabilization with rare-earth doped
crystals, in which electric fields can be used to engineer experimental
protocols by tuning transition frequencies.
|
We investigate the lattice regularization of $\mathcal{N} = 4$ supersymmetric
Yang-Mills theory, by stochastically computing the eigenvalue mode number of
the fermion operator. This provides important insight into the non-perturbative
renormalization group flow of the lattice theory, through the definition of a
scale-dependent effective mass anomalous dimension. While this anomalous
dimension is expected to vanish in the conformal continuum theory, the finite
lattice volume and lattice spacing generically lead to non-zero values, which
we use to study the approach to the continuum limit. Our numerical results,
comparing multiple lattice volumes, 't Hooft couplings, and numbers of colors,
confirm convergence towards the expected continuum result, while quantifying
the increasing significance of lattice artifacts at larger couplings.
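For context, a scale-dependent effective mass anomalous dimension is commonly extracted from the scaling of the Dirac-operator mode number; schematically (a standard relation in such lattice studies, not a formula quoted from this work),
\[
\nu(\Omega) \;\propto\; \Omega^{\,4/(1+\gamma_*(\Omega))},
\]
so that deviations of the fitted $\gamma_*(\Omega)$ from the expected continuum value quantify the finite-volume and finite-spacing artifacts discussed above.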
|
We use the data from Gaia Early Data Release 3 (EDR3) to study the kinematic
properties of Milky Way globular clusters. We measure the mean parallaxes and
proper motions (PM) for 170 clusters, determine the PM dispersion profiles for
more than 100 clusters, uncover rotation signatures in more than 20 objects,
and find evidence for radial or tangential PM anisotropy in a dozen richest
clusters. At the same time, we use the selection of cluster members to explore
the reliability and limitations of the Gaia catalogue itself. We find that the
formal uncertainties on parallax and PM are underestimated by 10-20% in dense
central regions even for stars that pass numerous quality filters. We explore
the spatial covariance function of systematic errors, and determine a lower
limit on the uncertainty of average parallaxes and PM at the level of 0.01 mas and
0.025 mas/yr, respectively. Finally, a comparison of mean parallaxes of
clusters with distances from various literature sources suggests that the
parallaxes (after applying the zero-point correction suggested by Lindegren et
al. 2021) are overestimated by $0.01 \pm 0.003$ mas. Despite these caveats, the
quality of Gaia astrometry has been significantly improved in EDR3 and provides
valuable insights into the properties of star clusters.
|
We consider the task of minimizing the sum of smooth and strongly convex
functions stored in a decentralized manner across the nodes of a communication
network whose links are allowed to change in time. We solve two fundamental
problems for this task. First, we establish the first lower bounds on the
number of decentralized communication rounds and the number of local
computations required to find an $\epsilon$-accurate solution. Second, we
design two optimal algorithms that attain these lower bounds: (i) a variant of
the recently proposed algorithm ADOM (Kovalev et al., 2021) enhanced via a
multi-consensus subroutine, which is optimal in the case when access to the
dual gradients is assumed, and (ii) a novel algorithm, called ADOM+, which is
optimal in the case when access to the primal gradients is assumed. We
corroborate the theoretical efficiency of these algorithms by performing an
experimental comparison with existing state-of-the-art methods.
|
A model based on the classic non-interacting Ehrenfest urn model with
two urns is generalized to $M$ urns with the introduction of interactions for
particles within the same urn. As the inter-particle interaction strength is
varied, phases of different levels of non-uniformity emerge and their
stabilities are calculated analytically. In particular, coexistence of locally
stable uniform and non-uniform phases connected by first-order transition
occurs. The phase transition threshold and energy barrier can be derived
exactly together with the phase diagram obtained analytically. These analytic
results are further confirmed by Monte Carlo simulations.
|
Recently, Balliu, Brandt, and Olivetti [FOCS '20] showed the first
$\omega(\log^* n)$ lower bound for the maximal independent set (MIS) problem in
trees. In this work we prove lower bounds for a much more relaxed family of
distributed symmetry breaking problems. As a by-product, we obtain improved
lower bounds for the distributed MIS problem in trees.
For a parameter $k$ and an orientation of the edges of a graph $G$, we say
that a subset $S$ of the nodes of $G$ is a $k$-outdegree dominating set if $S$
is a dominating set of $G$ and if in the induced subgraph $G[S]$, every node in
$S$ has outdegree at most $k$. Note that for $k=0$, this definition coincides
with the definition of an MIS. For a given $k$, we consider the problem of
computing a $k$-outdegree dominating set. We show that, even in regular trees
of degree at most $\Delta$, in the standard $\mathsf{LOCAL}$ model, there exists a
constant $\epsilon>0$ such that for $k\leq \Delta^\epsilon$, for the problem of
computing a $k$-outdegree dominating set, any randomized algorithm requires at
least $\Omega(\min\{\log\Delta,\sqrt{\log\log n}\})$ rounds and any
deterministic algorithm requires at least $\Omega(\min\{\log\Delta,\sqrt{\log
n}\})$ rounds.
The proof of our lower bounds is based on the recently highly successful
round elimination technique. We provide a novel way to do simplifications for
round elimination, which we expect to be of independent interest. Our new proof
is considerably simpler than the lower bound proof in [FOCS '20]. In
particular, our round elimination proof uses a family of problems that can be
described by only a constant number of labels. The existence of such a proof
for the MIS problem was believed impossible by the authors of [FOCS '20].
|
In the present paper, we will study geometric properties of harmonic mappings
whose analytic and co-analytic parts are (shifted) generating functions of
completely monotone sequences.
|
The number of cyber attacks has increased tremendously in the last few years.
This has resulted in both human and financial losses at the individual and
organizational levels. Recently, cyber-criminals have been leveraging new skills
and capabilities by employing anti-forensics activities, techniques, and tools
to cover their tracks and evade detection. Consequently, cyber-attacks are
becoming more efficient and more sophisticated. Therefore, traditional
cryptographic and non-cryptographic solutions and access control systems are no
longer enough to prevent such attacks, especially in terms of acquiring evidence
for attack investigation. Hence, well-defined, sophisticated, and advanced
forensic investigation tools are highly needed to track down cyber criminals and
to reduce the number of cyber crimes. This paper reviews the different forensics
and anti-forensics methods,
tools, techniques, types, and challenges, while also discussing the rise of the
anti-anti-forensics as a new forensics protection mechanism against
anti-forensics activities. This would help forensics investigators to better
understand the different anti-forensics tools, methods and techniques that
cyber criminals employ while launching their attacks. Moreover, the limitations
of the current forensics techniques are discussed, especially in terms of
issues and challenges. Finally, this paper presents a holistic, literature-based
view of the forensics domain and aims to help fellow researchers in their quest
to further understand digital forensics.
|
We discuss the characteristics of the patterns of the vascular networks in a
mathematical model for angiogenesis. Based on recent in vitro experiments, this
mathematical model assumes that the elongation and bifurcation of blood vessels
during angiogenesis are determined by the density of endothelial cells at the
tip of the vascular network, and describes the dynamical changes in vascular
network formation using a system of simultaneous ordinary differential
equations. The pattern of formation strongly depends on the supply rate of
endothelial cells by cell division, the branching angle, and also on the
connectivity of vessels. By introducing reconnection of blood vessels, the
statistical distribution of the size of islands in the network is discussed
with respect to bifurcation angles and elongation factor distributions. The
characteristics of the obtained patterns are analysed using multifractal
dimension and other techniques.
|
Famous double-slit or double-path experiments, implemented in a Young's or
Mach-Zehnder interferometer, have confirmed the dual nature of quantum matter.
When a stream of photons, neutrons, atoms, or molecules passes through two
slits, either wave-like interference fringes build up on a screen, or a
particle-like which-path distribution can be ascertained. These quantum objects
exhibit both wave and particle properties but exclusively, depending on the way
they are measured. In an equivalent Mach-Zehnder configuration, the object
displays either wave or particle nature in the presence or absence of a
beamsplitter, respectively, that represents the choice of which-measurement.
Wheeler further proposed a gedanken experiment, in which the choice of
which-measurement is delayed, i.e. determined after the object has already
entered the interferometer, so as to exclude the possibility of predicting
which-measurement it will confront. The delayed-choice experiments have enabled
significant demonstrations of genuine two-path duality of different quantum
objects. Recently, a quantum controlled version of delayed-choice was proposed
by Ionicioiu and Terno, by introducing a quantum-controlled beamsplitter that
is in a coherent superposition of presence and absence. It represents a
controllable experiment platform that can not only reveal wave and particle
characters, but also their superposition. Moreover, a quantitative description
of the two-slit duality relation was initiated in Wootters and Zurek's seminal
work and formalized by Greenberger et al. as $D^2 + V^2 \leq 1$, where $D$ is the
distinguishability of which-path information and $V$ is the contrast visibility
of the interference. In this regard, getting which-path information exclusively
reduces the interference visibility, and vice versa. This double-path duality
relation has been tested in pioneering experiments and recently in delayed-choice
measurements.
|
We introduce BumbleBee, a transformer model that generates MIDI music data. We
tackle the issue of applying transformers to long sequences by implementing a
longformer generative model that uses dilated sliding windows to compute the
attention layers. We compare our results to those of the music transformer and
long short-term memory (LSTM) models to benchmark our results. This analysis is
performed using piano MIDI files, in particular the JSB Chorales dataset, which
has already been used in other research works (Huang et al., 2018).
|
It is well known that the Newton method may not converge when the initial
guess does not belong to a specific quadratic convergence region. We propose a
family of new variants of the Newton method with the potential advantage of
having a larger convergence region as well as more desirable properties near a
solution. We prove quadratic convergence of the new family, and provide
specific bounds for the asymptotic error constant. We illustrate the advantages
of the new methods by means of test problems, including two and six variable
polynomial systems, as well as a challenging signal processing example. We
present a numerical experimental methodology which uses a large number of
randomized initial guesses for a number of methods from the new family, in turn
providing advice as to which of the methods employed is preferable to use in a
particular search domain.
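For reference, the classical Newton step that the proposed variants modify is sketched below in Python; the sketch is generic, and the toy system is ours, not one of the paper's test problems.

```python
import numpy as np

def newton_step(F, J, x):
    """One classical Newton step: x_new = x - J(x)^{-1} F(x).
    The variants studied in the paper alter this update (not shown here)."""
    return x - np.linalg.solve(J(x), F(x))

# toy usage on F(x, y) = (x^2 + y^2 - 1, x - y)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
x = np.array([1.0, 0.5])
for _ in range(10):
    x = newton_step(F, J, x)
# x converges to (1/sqrt(2), 1/sqrt(2)) from this initial guess
```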
|
We study the log-rank conjecture from the perspective of point-hyperplane
incidence geometry. We formulate the following conjecture: Given a point set in
$\mathbb{R}^d$ that is covered by constant-sized sets of parallel hyperplanes,
there exists an affine subspace that accounts for a large (i.e.,
$2^{-{\operatorname{polylog}(d)}}$) fraction of the incidences. Alternatively,
our conjecture may be interpreted linear-algebraically as follows: Any rank-$d$
matrix containing at most $O(1)$ distinct entries in each column contains a
submatrix of fractional size $2^{-{\operatorname{polylog}(d)}}$, in which each
column contains one distinct entry. We prove that our conjecture is equivalent
to the log-rank conjecture.
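For context, the log-rank conjecture referenced above is commonly stated as follows (a standard formulation, included only for the reader's convenience):
\[
\mathrm{D^{cc}}(f) \;\le\; \operatorname{polylog}\bigl(\operatorname{rank}_{\mathbb{R}}(M_f)\bigr),
\]
where $\mathrm{D^{cc}}(f)$ is the deterministic communication complexity of a two-party Boolean function $f$ and $M_f$ is its communication matrix.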
Motivated by the connections above, we revisit well-studied questions in
point-hyperplane incidence geometry without structural assumptions (i.e., the
existence of partitions). We give an elementary argument for the existence of
complete bipartite subgraphs of density $\Omega(\epsilon^{2d}/d)$ in any
$d$-dimensional configuration with incidence density $\epsilon$. We also
improve an upper-bound construction of Apfelbaum and Sharir (SIAM J. Discrete
Math. '07), yielding a configuration whose complete bipartite subgraphs are
exponentially small and whose incidence density is $\Omega(1/\sqrt d)$.
Finally, we discuss various constructions (due to others) which yield
configurations with incidence density $\Omega(1)$ and bipartite subgraph
density $2^{-\Omega(\sqrt d)}$.
Our framework and results may help shed light on the difficulty of improving
Lovett's $\tilde{O}(\sqrt{\operatorname{rank}(f)})$ bound (J. ACM '16) for the
log-rank conjecture; in particular, any improvement on this bound would imply
the first bipartite subgraph size bounds for parallel $3$-partitioned
configurations which beat our generic bounds for unstructured configurations.
|
In-pipe robots are promising solutions for condition assessment, leak
detection, water quality monitoring, and a variety of other tasks in pipeline
networks. Smart navigation is an extremely challenging task for these robots as
a result of the highly uncertain and disturbance-prone operating environment.
Wireless communication to control these robots during operation is not feasible
if the pipe material is metal, since radio signals are destroyed in the pipe
environment; hence, this challenge remains unsolved. In this paper, we
introduce a method for smart navigation for our previously designed in-pipe
robot [1] based on particle filtering and a two-phase motion controller. The
robot is given the map of the operation path with a novel approach and the
particle filtering determines the straight and non-straight configurations of
the pipeline. In the straight paths, the robot follows a linear quadratic
regulator (LQR) and proportional-integral-derivative (PID) based controller
that stabilizes the robot and tracks a desired velocity. In non-straight paths,
the robot follows the trajectory that a motion trajectory generator block plans
for the robot. The proposed method is a promising solution for smart navigation
without the need for wireless communication and is capable of inspecting long
distances in water distribution systems.
|
We consider topologically twisted $\mathcal{N}=2$, $SU(2)$ gauge theory with
a massive adjoint hypermultiplet on a smooth, compact four-manifold $X$. A
consistent formulation requires coupling the theory to a ${\rm Spin}^c$
structure, which is necessarily non-trivial if $X$ is non-spin. We derive
explicit formulae for the topological correlation functions when $b_2^+\geq 1$.
We demonstrate that, when the ${\rm Spin}^c$ structure is canonically
determined by an almost complex structure and the mass is taken to zero, the
path integral reproduces known results for the path integral of the
$\mathcal{N}=4$ gauge theory with Vafa-Witten twist. On the other hand, we
reproduce results from Donaldson-Witten theory after taking a suitable infinite
mass limit. The topological correlators are functions of the UV coupling
constant $\tau_{\rm uv}$ and we confirm that they obey the expected $S$-duality
transformation laws. The holomorphic part of the partition function is a
generating function for the Euler numbers of the matter (or obstruction) bundle
over the instanton moduli space. For $b_2^+=1$, we derive a non-holomorphic
contribution to the path integral, such that the partition function and
correlation functions are mock modular forms rather than modular forms. We
comment on the generalization of this work to the large class of
$\mathcal{N}=2$ theories of class $S$.
|
This paper studies constrained text generation, which is to generate
sentences under certain pre-conditions. We focus on CommonGen, the task of
generating text based on a set of concepts, as a representative task of
constrained text generation. Traditional methods mainly rely on supervised
training to maximize the likelihood of target sentences. However, global
constraints such as common sense and coverage cannot be incorporated into the
likelihood objective of the autoregressive decoding process. In this paper, we
consider using reinforcement learning to address the limitation, measuring
global constraints including fluency, common sense and concept coverage with a
comprehensive score, which serves as the reward for reinforcement learning.
Besides, we design a guided decoding method at the word, fragment and sentence
levels. Experiments demonstrate that our method significantly increases the
concept coverage and outperforms existing models in various automatic
evaluations.
|
This study focuses on the synthesis of FeRh nanoparticles via pulsed laser
ablation in liquid and on controlling the oxidation of the synthesized
nanoparticles. Formation of monomodal $\gamma$-FeRh nanoparticles was confirmed
by transmission electron microscopy (TEM) and their composition confirmed by
atom probe tomography (APT). On these particles, three major contributors to
oxidation were analysed: 1) dissolved oxygen in the organic solvents, 2) the
bound oxygen in the solvent and 3) oxygen in the atmosphere above the solvent.
The decrease of oxidation for optimized ablation conditions was confirmed
through energy-dispersive X-ray (EDX) and M\"ossbauer spectroscopy.
Furthermore, the time dependence of oxidation was monitored for dried FeRh
nanoparticles powders using ferromagnetic resonance spectroscopy (FMR). By
magnetophoretic separation, B2-FeRh nanoparticles could be extracted from the
solution and characteristic differences of nanostrand formation between
$\gamma$-FeRh and B2-FeRh nanoparticles were observed.
|
A number of industrial applications, such as smart grids, power plant
operation, hybrid system management or energy trading, could benefit from
improved short-term solar forecasting, addressing the intermittent energy
production from solar panels. However, current approaches to modelling the
cloud cover dynamics from sky images still lack precision regarding the spatial
configuration of clouds, their temporal dynamics and physical interactions with
solar radiation. Benefiting from a growing number of large datasets,
data-driven methods are being developed to address these limitations with promising
results. In this study, we compare four commonly used Deep Learning
architectures trained to forecast solar irradiance from sequences of
hemispherical sky images and exogenous variables. To assess the relative
performance of each model, we used the Forecast Skill metric based on the smart
persistence model, as well as ramp and time distortion metrics. The results
show that encoding spatiotemporal aspects of the sequence of sky images greatly
improved the predictions with 10 min ahead Forecast Skill reaching 20.4% on the
test year. However, based on the experimental data, we conclude that, with a
common setup, Deep Learning models tend to behave just as a 'very smart
persistence model', temporally aligned with the persistence model while
mitigating its most penalising errors. Thus, even though such events are
captured by the sky cameras, models often miss fundamental events causing large
irradiance changes, such as clouds obscuring the sun. We hope that our work will contribute to a
shift of this approach to irradiance forecasting, from reactive to
anticipatory.
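For reference, the Forecast Skill metric mentioned above is commonly defined relative to the smart persistence baseline as
\[
\mathrm{FS} \;=\; 1 - \frac{\mathrm{RMSE}_{\mathrm{model}}}{\mathrm{RMSE}_{\mathrm{smart\ persistence}}},
\]
so that, under this standard definition, a Forecast Skill of 20.4% corresponds to a 20.4% reduction in RMSE over the smart persistence model.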
|
Recent technological advances allowed the coherent optical manipulation of
high-energy electron wavepackets with attosecond precision. Here we
theoretically investigate the collision of optically-modulated pulsed electron
beams with atomic targets and reveal a quantum interference associated with
different momentum components of the incident broadband electron pulse, which
coherently modulates both the elastic and inelastic scattering cross sections.
We show that the quantum interference has a high spatial sensitivity at the
level of Angstroms, offering potential applications in high-resolution
ultrafast electron microscopy. Our findings are rationalized by a simple model.
|
Autonomous vehicles have great potential for application in both civil and
military fields, and have become a focus of research with the rapid
development of science and the economy. This article presents a brief review of
learning-based decision-making technology for autonomous vehicles, since such
technology is significant for the safer and more efficient operation of autonomous vehicles.
Firstly, the basic outline of decision-making technology is provided. Secondly,
related works on learning-based decision-making methods for autonomous
vehicles are reviewed and compared with classical decision-making
methods. In addition, applications of decision-making methods in existing
autonomous vehicles are summarized. Finally, promising research topics in the
future study of decision-making technology for autonomous vehicles are
prospected.
|
This paper investigates the cooperative planning and control problem for
multiple connected autonomous vehicles (CAVs) in different scenarios. In the
existing literature, most of the methods suffer from significant problems in
computational efficiency. Besides, as the optimization problem is nonlinear and
nonconvex, it typically poses great difficulty in determining the optimal
solution. To address this issue, this work proposes a novel and completely
parallel computation framework by leveraging the alternating direction method
of multipliers (ADMM). The nonlinear and nonconvex optimization problem in the
autonomous driving problem can be divided into two manageable subproblems; and
the resulting subproblems can be solved by using effective optimization methods
in a parallel framework. Here, the differential dynamic programming (DDP)
algorithm is capable of addressing the nonlinearity of the system dynamics
rather effectively; and the nonconvex coupling constraints with small
dimensions can be approximated by invoking the notion of semi-definite
relaxation (SDR), which can also be solved in a very short time. Due to the
parallel computation and efficient relaxation of nonconvex constraints, our
proposed approach effectively realizes real-time implementation and thus also
extra assurance of driving safety is provided. In addition, two transportation
scenarios for multiple CAVs are used to illustrate the effectiveness and
efficiency of the proposed method.
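For concreteness, the generic ADMM splitting underlying such a framework (a textbook form, with the paper's DDP and SDR steps standing in for the two subproblem solvers) alternates
\[
\begin{aligned}
x^{k+1} &= \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\bigl\|Ax + Bz^{k} - c + u^{k}\bigr\|_2^2,\\
z^{k+1} &= \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\bigl\|Ax^{k+1} + Bz - c + u^{k}\bigr\|_2^2,\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c,
\end{aligned}
\]
for a problem $\min_{x,z} f(x)+g(z)$ subject to $Ax+Bz=c$, with scaled dual variable $u$ and penalty $\rho>0$; each subproblem can then be handled by a dedicated solver, which is what the proposed framework exploits by using DDP for the nonlinear dynamics and SDR for the nonconvex coupling constraints.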
|
A black-box optimization algorithm such as Bayesian optimization finds
extremum of an unknown function by alternating inference of the underlying
function and optimization of an acquisition function. In a high-dimensional
space, such algorithms perform poorly due to the difficulty of acquisition
function optimization. Herein, we apply quantum annealing (QA) to overcome this
difficulty in continuous black-box optimization. As QA specializes in
optimization of binary problems, a continuous vector has to be encoded to
binary, and the solution of QA has to be translated back. Our method has the
following three parts: 1) Random subspace coding based on axis-parallel
hyperrectangles from continuous vector to binary vector. 2) A quadratic
unconstrained binary optimization (QUBO) defined by acquisition function based
on nonnegative-weighted linear regression model which is solved by QA. 3) A
penalization scheme to ensure that the QA solution can be translated back. It
is shown in benchmark tests that its performance using D-Wave Advantage$^{\rm
TM}$ quantum annealer is competitive with a state-of-the-art method based on
the Gaussian process in high-dimensional problems. Our method may open up new
possibilities for quantum annealing and other QUBO solvers, including the
quantum approximate optimization algorithm (QAOA) on gate-based quantum
computers, and expand their range of application to continuous-valued problems.
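Part 1 of the method (random subspace coding) can be sketched as follows; this is an illustrative Python realization of axis-parallel-hyperrectangle coding under our own assumptions, with hypothetical function names, a fixed box domain, and uniform sampling that are not taken from the paper.

```python
import numpy as np

def random_hyperrectangles(d, m, low, high, rng):
    """Draw m axis-parallel hyperrectangles inside the box [low, high]^d."""
    a = rng.uniform(low, high, size=(m, d))
    b = rng.uniform(low, high, size=(m, d))
    return np.minimum(a, b), np.maximum(a, b)

def encode(x, lo, hi):
    """Binary code: bit i = 1 iff the continuous point x lies inside
    hyperrectangle i."""
    return np.all((x >= lo) & (x <= hi), axis=1).astype(int)

rng = np.random.default_rng(0)
lo, hi = random_hyperrectangles(d=5, m=64, low=-1.0, high=1.0, rng=rng)
z = encode(np.zeros(5), lo, hi)  # 64-bit code for the point x = 0
```

The resulting binary vector is the kind of object the QUBO in part 2 would operate on.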
|
Internet of quantum blockchains (IoB) will be the future Internet. In this
paper, we make two new contributions to IoB: developing a block based quantum
channel networking technology to handle its security modeling in face of the
quantum supremacy and establishing IoB based FinTech platform model with
dynamic pricing for stable digital currency. The interaction between our new
contributions is also addressed. In doing so, we establish a generalized IoB
security model by quantum channel networking in terms of both time and space
quantum entanglements with quantum key distribution (QKD). Our IoB can interact
with general structured things (e.g., supply chain systems) having online
trading and payment capability via stable digital currency and can handle
vector-valued data streams requiring synchronized services. Thus, within our
designed QKD, a generalized random number generator for private and public keys
is proposed by a mixed zero-sum and non-zero-sum resource-competition pricing
policy. The effectiveness of this policy is justified by diffusion modeling
with approximation theory and numerical implementations.
|
Several efficient distributed algorithms have been developed for
matrix-matrix multiplication: the 3D algorithm, the 2D SUMMA algorithm, and the
2.5D algorithm. Each of these algorithms was independently conceived and they
trade-off memory needed per node and the inter-node data communication volume.
The convolutional neural network (CNN) computation may be viewed as a
generalization of matrix-multiplication combined with neighborhood stencil
computations. We develop communication-efficient distributed-memory algorithms
for CNNs that are analogous to the 2D/2.5D/3D algorithms for matrix-matrix
multiplication.
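For context, the per-process communication volumes of these matrix-multiplication algorithms for an $n \times n$ product on $P$ processes are the well-known
\[
W_{\mathrm{2D}} = \Theta\!\bigl(n^2/\sqrt{P}\bigr), \qquad
W_{\mathrm{2.5D}} = \Theta\!\bigl(n^2/\sqrt{cP}\bigr), \qquad
W_{\mathrm{3D}} = \Theta\!\bigl(n^2/P^{2/3}\bigr),
\]
where $c$ is the memory replication factor of the 2.5D algorithm; these are quoted here as standard matrix-multiplication results, with the analogous bounds for CNNs being the subject of the work itself.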
|
In subsurface multiphase flow simulations, poor nonlinear solver performance
is a significant runtime sink. The system of fully implicit mass balance
equations is highly nonlinear and often difficult to solve for the nonlinear
solver, generally Newton(-Raphson). Strong nonlinearities can cause Newton
iterations to converge very slowly. This frequently results in time step cuts,
leading to computationally expensive simulations. Much literature has looked
into how to improve the nonlinear solver through enhancements or safeguarding
updates. In this work, we take a different approach; we aim to improve
convergence with a smoother finite volume discretization scheme which is more
suitable for the Newton solver.
Building on recent work, we propose a novel total velocity hybrid upwinding
scheme with weighted average flow mobilities (WA-HU TV) that is unconditionally
monotone and extends to compositional multiphase simulations. Analyzing the
solution space of a one-cell problem, we demonstrate the improved properties of
the scheme and explain how it leverages the advantages of both phase potential
upwinding and arithmetic averaging. This results in a flow subproblem that is
smooth with respect to changes in the sign of phase fluxes, and is well-behaved
when phase velocities are large or when co-current viscous forces dominate.
Additionally, we propose a WA-HU scheme with a total mass (WA-HU TM)
formulation that includes phase densities in the weighted averaging.
The proposed WA-HU TV consistently outperforms existing schemes, yielding
benefits from 5\% to over 50\% reduction in nonlinear iterations. The WA-HU TM
scheme also shows promising results, in some cases leading to even greater
efficiency. However, WA-HU TM can occasionally also lead to convergence issues.
Overall, based on the current results, we recommend the adoption of the WA-HU
TV scheme as it is highly efficient and robust.
|
We provide a sufficient condition for the monogamy inequality of multi-party
quantum entanglement of arbitrary dimensions in terms of the entanglement of
formation (EoF). Based on the classical-classical-quantum (ccq) states whose quantum
parts are obtained from the two-party reduced density matrices of a three-party
quantum state, we show that the additivity of the mutual information of the ccq
states guarantees the monogamy inequality of the three-party pure state in
terms of EoF. After illustrating the result with some examples, we generalize
our result of three-party systems into any multi-party systems of arbitrary
dimensions.
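In the usual notation, the three-party monogamy inequality in terms of EoF discussed above is commonly written as
\[
E_f\bigl(\rho_{A|BC}\bigr) \;\ge\; E_f\bigl(\rho_{AB}\bigr) + E_f\bigl(\rho_{AC}\bigr),
\]
where $E_f$ denotes the entanglement of formation and $\rho_{AB}$, $\rho_{AC}$ are the two-party reduced states; this standard form is included only as a reminder of the inequality whose sufficient condition is established here.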
|
Four dimensional scanning transmission electron microscopy (4D STEM) records
the scattering of electrons in a material in great detail. The benefits offered
by 4D STEM are substantial, with the wealth of data it provides facilitating,
for instance, high-precision, electron-dose-efficient phase imaging via
center-of-mass or ptychography-based analysis. However, the requirement that a
2D image of the scattering be recorded at each probe position has long placed a
severe bottleneck on the speed at which 4D STEM can be performed. Recent
advances in camera technology have greatly reduced this bottleneck, with the
detection efficiency of direct electron detectors being especially well suited
to the technique. However even the fastest frame driven pixelated detectors
still significantly limit the scan speed which can be used in 4D STEM, making
the resulting data susceptible to drift and hampering its use for low dose beam
sensitive applications. Here we report the development of the use of an event
driven Timepix3 direct electron camera that allows us to overcome this
bottleneck and achieve 4D STEM dwell times down to 100~ns, orders of magnitude
faster than what has been possible with frame-based readout. We characterise
the detector for different acceleration voltages and show that the method is
especially well suited for low dose imaging and promises rich datasets without
compromising dwell time when compared to conventional STEM imaging.
|
The ejecta velocities of type-Ia supernovae (SNe Ia), as measured by the Si
II $\lambda 6355$ line, have been shown to correlate with other supernova
properties, including color and standardized luminosity. We investigate these
results using the Foundation Supernova Survey, with a spectroscopic data
release presented here, and photometry analyzed with the SALT2 light-curve
fitter. We find that the Foundation data do not show significant evidence for
an offset in color between SNe Ia with high and normal photospheric velocities,
with $\Delta c = 0.005 \pm 0.014$. Our SALT2 analysis does show evidence for
redder high-velocity SN Ia in other samples, including objects from the
Carnegie Supernova Project, with a combined sample yielding $\Delta c = 0.017
\pm 0.007$. When split on velocity, the Foundation SN Ia also do not show a
significant difference in Hubble diagram residual, $\Delta HR = 0.015 \pm
0.049$ mag. Intriguingly, we find that SN Ia ejecta velocity information may be
gleaned from photometry, particularly in redder optical bands. For
high-redshift SN Ia, these rest-frame red wavelengths will be observed by the
Nancy Grace Roman Space Telescope. Our results also confirm previous work that
SN Ia host-galaxy stellar mass is strongly correlated with ejecta velocity:
high-velocity SN Ia are found nearly exclusively in high-stellar-mass hosts.
However, host-galaxy properties alone do not explain velocity-dependent
differences in supernova colors and luminosities across samples. Measuring and
understanding the connection between intrinsic explosion properties and
supernova environments, across cosmic time, will be important for precision
cosmology with SNe Ia.
|
This paper presents a formally verified quantifier elimination (QE) algorithm
for first-order real arithmetic by linear and quadratic virtual substitution
(VS) in Isabelle/HOL. The Tarski-Seidenberg theorem established that the
first-order logic of real arithmetic is decidable by QE. However, in practice,
QE algorithms are highly complicated and often combine multiple methods for
performance. VS is a practically successful method for QE that targets formulas
with low-degree polynomials. To our knowledge, this is the first work to
formalize VS for quadratic real arithmetic including inequalities. The proofs
necessitate various contributions to the existing multivariate polynomial
libraries in Isabelle/HOL. Our framework is modularized and easily expandable
(to facilitate integrating future optimizations), and could serve as a basis
for developing practical general-purpose QE algorithms. Further, as our
formalization is designed with practicality in mind, we export our development
to SML and test the resulting code on 378 benchmarks from the literature,
comparing to Redlog, Z3, Wolfram Engine, and SMT-RAT. This identified
inconsistencies in some tools, underscoring the significance of a verified
approach for the intricacies of real arithmetic.
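As a reminder of what quantifier elimination produces (a textbook instance, not an output of the formalized VS procedure), eliminating the quantifier from a simple quadratic constraint over the reals gives
\[
\exists x\, \bigl(x^2 + b\,x + c \le 0\bigr) \;\Longleftrightarrow\; b^2 - 4c \ge 0,
\]
an equivalent quantifier-free formula in the remaining variables $b$ and $c$.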
|
Benefiting from large-scale pre-training, we have witnessed significant
performance boost on the popular Visual Question Answering (VQA) task. Despite
rapid progress, it remains unclear whether these state-of-the-art (SOTA) models
are robust when encountering examples in the wild. To study this, we introduce
Adversarial VQA, a new large-scale VQA benchmark, collected iteratively via an
adversarial human-and-model-in-the-loop procedure. Through this new benchmark,
we discover several interesting findings. (i) Surprisingly, we find that during
dataset collection, non-expert annotators can easily attack SOTA VQA models
successfully. (ii) Both large-scale pre-trained models and adversarial training
methods achieve far worse performance on the new benchmark than on the standard
VQA v2 dataset, revealing the fragility of these models while demonstrating the
effectiveness of our adversarial dataset. (iii) When used for data
augmentation, our dataset can effectively boost model performance on other
robust VQA benchmarks. We hope our Adversarial VQA dataset can shed new light
on robustness study in the community and serve as a valuable benchmark for
future work.
|
We report on the discovery and validation of a two-planet system around a
bright (V = 8.85 mag) early G dwarf (1.43 $R_{\odot}$, 1.15 $M_{\odot}$, TOI
2319) using data from NASA's Transiting Exoplanet Survey Satellite (TESS).
Three transit events from two planets were detected by citizen scientists in
the month-long TESS light curve (sector 25), as part of the Planet Hunters TESS
project. Modelling of the transits yields an orbital period of \Pb\ and radius
of $3.41 _{ - 0.12 } ^ { + 0.14 }$ $R_{\oplus}$ for the inner planet, and a
period in the range 19.26-35 days and a radius of $5.83 _{ - 0.14 } ^ { + 0.14
}$ $R_{\oplus}$ for the outer planet, which was only seen to transit once. Each
signal was independently statistically validated, taking into consideration the
TESS light curve as well as the ground-based spectroscopic follow-up
observations. Radial velocities from HARPS-N and EXPRES yield a tentative
detection of planet b, whose mass we estimate to be $11.56 _{ - 6.14 } ^ { +
6.58 }$ $M_{\oplus}$, and allow us to place an upper limit of $27.5$
$M_{\oplus}$ (99 per cent confidence) on the mass of planet c. Due to the
brightness of the host star and the strong likelihood of an extended H/He
atmosphere on both planets, this system offers excellent prospects for
atmospheric characterisation and comparative planetology.
|
Cell polarization underlies many cellular processes, such as differentiation,
migration, and budding. Many living cells, such as budding yeast and fission
yeast, use cytoskeletal structures to actively transport proteins to one
location on the membrane and create a high density spot of membrane-bound
proteins. Yet, the thermodynamic constraints on filament-based cell
polarization remain unknown. We show by mathematical modeling that cell
polarization requires detailed balance to be broken, and we quantify the
free-energy cost of maintaining a polarized state of the cell. Our study
reveals that detailed balance can be broken not only via the active transport of
proteins along filaments, but also via a chemical modification cycle, which allows
detailed balance to be broken by the shuttling of proteins between the
filament, membrane, and cytosol. Our model thus shows that cell polarization
can be established via two distinct driving mechanisms, one based on active
transport and one based on non-equilibrium binding. Furthermore, the model
predicts that the driven binding process dissipates orders of magnitude less
free energy than the transport-based process to create the same membrane spot.
Active transport along filaments may be sufficient to create a polarized
distribution of membrane-bound proteins, but an additional chemical
modification cycle of the proteins themselves is more efficient and less
sensitive to the physical exclusion of proteins on the transporting filaments,
providing insight into the design principles of the Pom1/Tea1/Tea4 system in
fission yeast and the Cdc42 system in budding yeast.
|
The sequential quantum random access code (QRAC) allows two or more decoders
to obtain a desired message with higher success probability than the best
classical bounds by appropriately modulating the measurement sharpness. Here,
we propose an entanglement-assisted sequential QRAC protocol which can enable
device-independent tasks. By relaxing the equal sharpness and mutually unbiased
measurement limits, we widen the sharpness modulation region from a
one-dimensional interval to a two-dimensional triangle. Then, we demonstrate
our scheme experimentally and observe success probabilities more than 27 standard deviations above the
classical bound even when both decoders perform approximately projective
measurements. We use the observed success probability to quantify the
connection among sequential QRAC, measurement sharpness, measurement
biasedness, and measurement incompatibility. Finally, we show that our protocol
can be applied to sequential device-independent randomness expansion and our
measurement strategy can enhance the success probability of decoding the entire
input string. Our results may promote a deeper understanding of the
relationship among quantum correlation, quantum measurement, and quantum
information processing.
|
We introduce a new method for Estimation of Signal Parameters based on
Iterative Rational Approximation (ESPIRA) for sparse exponential sums. Our
algorithm uses the AAA algorithm for rational approximation of the discrete
Fourier transform of the given equidistant signal values. We show that ESPIRA
can be interpreted as a matrix pencil method applied to Loewner matrices. These
Loewner matrices are closely connected with the Hankel matrices which are
usually employed for signal recovery. Due to the construction of the Loewner
matrices via an adaptive selection of index sets, the matrix pencil method is
stabilized. ESPIRA achieves similar recovery results for exact data as ESPRIT
and the matrix pencil method but with less computational effort. Moreover,
ESPIRA strongly outperforms ESPRIT and the matrix pencil method for noisy data
and for signal approximation by short exponential sums.
|
For a certain function $J(s)$ we prove that the identity
$$\frac{\zeta(2s)}{\zeta(s)}-\left(s-\frac{1}{2}\right)J(s)=\frac{\zeta(2s+1)}{\zeta(s+1/2)}$$
holds in the half-plane $\mathrm{Re}(s)>1/2$ and that both sides of the equality are
analytic in this half-plane.
|
To survive a learning management system (LMS) implementation, an understanding
of the needs of the various stakeholders is necessary. The goal of every LMS
implementation is to ensure the use of the system by instructors and students
to enhance teaching and communication thereby enhancing learning outcomes of
the students. If the teachers and students do not use the system, the system is
useless. This research is motivated by the importance of identifying and
understanding various stakeholders involved in the LMS implementation process
in order to anticipate possible challenges and identify critical success
factors essential for the effective implementation and adoption of a new LMS
system. To this end, we define the term stakeholder. We conducted a stakeholder
analysis to identify the key stakeholders in an LMS implementation process. We
then analyze their goals and needs, and how they collaborate in the
implementation process. The findings of this work will provide institutions of
higher learning an overview of the implementation process and useful insights
into the needs of the stakeholders, which will in turn ensure an increase in
the level of success achieved when implementing an LMS.
|
Let $(\Omega, \mathcal{M}, \mu)$ be a measure space. In this paper we
establish the set of G\^ateaux differentiability points of the usual norm of
$L^{1}(\Omega, \mu)$ and the corresponding derivative formulae at each point of
this set.
|
Haptic feedback is critical in a broad range of
human-machine/computer-interaction applications. However, the high cost and low
portability/wearability of haptic devices remain an unresolved issue, severely
limiting the adoption of this otherwise promising technology. Electrotactile
interfaces have the advantage of being more portable and wearable due to their
reduced actuator size, as well as benefiting from lower power consumption and
manufacturing cost. The use of electrotactile feedback has been explored in
human-computer interaction and human-machine-interaction for facilitating
hand-based interactions in applications such as prosthetics, virtual reality,
robotic teleoperation, surface haptics, portable devices, and rehabilitation.
This paper presents a systematic review and meta-analysis of electrotactile
feedback systems for hand-based interactions in the last decade. We categorize
the different electrotactile systems according to their type of stimulation and
implementation/application. We also present and discuss a quantitative
aggregation of the findings, so as to offer a high-level overview of the
state of the art and suggest future directions. Electrotactile feedback was
successful in rendering and/or augmenting most tactile sensations, eliciting
perceptual processes, and improving performance in many scenarios, especially
in those where the wearability/portability of the system is important. However,
knowledge gaps, technical drawbacks, and methodological limitations were
detected, which should be addressed in future studies.
|
The linear contextual bandit literature is mostly focused on the design of
efficient learning algorithms for a given representation. However, a contextual
bandit problem may admit multiple linear representations, each one with
different characteristics that directly impact the regret of the learning
algorithm. In particular, recent works showed that there exist "good"
representations for which constant problem-dependent regret can be achieved. In
this paper, we first provide a systematic analysis of the different definitions
of "good" representations proposed in the literature. We then propose a novel
selection algorithm able to adapt to the best representation in a set of $M$
candidates. We show that the regret is indeed never worse than the regret
obtained by running LinUCB on the best representation (up to a $\ln M$ factor).
As a result, our algorithm achieves constant regret whenever a "good"
representation is available in the set. Furthermore, we show that the algorithm
may still achieve constant regret by implicitly constructing a "good"
representation, even when none of the initial representations is "good".
Finally, we empirically validate our theoretical findings in a number of
standard contextual bandit problems.
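For context, the sketch below shows a minimal LinUCB update for a single linear representation, the base learner that the selection algorithm builds on and is compared against. The callables `contexts` and `rewards_fn`, the exploration weight `alpha`, and the horizon are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def linucb(contexts, rewards_fn, d, alpha=1.0, T=1000):
    """Minimal LinUCB for one linear representation of dimension d (sketch).

    contexts:   callable t -> array of shape (K, d) with per-arm features
    rewards_fn: callable (t, arm) -> observed reward
    """
    A = np.eye(d)          # ridge-regularized design matrix
    b = np.zeros(d)        # accumulated reward-weighted features
    for t in range(T):
        X = contexts(t)                     # (K, d) arm features at round t
        A_inv = np.linalg.inv(A)
        theta = A_inv @ b                   # ridge estimate of the parameter
        ucb = X @ theta + alpha * np.sqrt(np.einsum('kd,de,ke->k', X, A_inv, X))
        arm = int(np.argmax(ucb))           # optimistic arm choice
        r = rewards_fn(t, arm)
        x = X[arm]
        A += np.outer(x, x)                 # rank-one update of the design matrix
        b += r * x
    return theta
```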
|
This paper develops a novel approach to necessary optimality conditions for
constrained variational problems defined in generally incomplete subspaces of
absolutely continuous functions. Our approach involves reducing a variational
problem to a (nondynamic) problem of constrained optimization in a normed space
and then applying the results recently obtained for the latter class using
generalized differentiation. In this way, we derive necessary optimality
conditions for nonconvex problems of the calculus of variations with velocity
constraints under the weakest metric subregularity-type constraint
qualification. The developed approach leads us to a short and simple proof of
the first-order necessary optimality conditions for such and related problems
in broad spaces of functions, including those of class $C^k$.
|
Satellite imagery analytics have numerous human development and disaster
response applications, particularly when time series methods are involved. For
example, quantifying population statistics is fundamental to 67 of the 231
United Nations Sustainable Development Goals Indicators, but the World Bank
estimates that over 100 countries currently lack effective Civil Registration
systems. To help address this deficit and develop novel computer vision methods
for time series data, we present the Multi-Temporal Urban Development SpaceNet
(MUDS, also known as SpaceNet 7) dataset. This open source dataset consists of
medium resolution (4.0 m) satellite imagery mosaics, which includes 24 images
(one per month) covering >100 unique geographies, and comprises >40,000 km$^2$ of
imagery and exhaustive polygon labels of building footprints therein, totaling
over 11M individual annotations. Each building is assigned a unique identifier
(i.e. address), which permits tracking of individual objects over time. Label
fidelity exceeds image resolution; this "omniscient labeling" is a unique
feature of the dataset, and enables surprisingly precise algorithmic models to
be crafted. We demonstrate methods to track building footprint construction (or
demolition) over time, thereby directly assessing urbanization. Performance is
measured with the newly developed SpaceNet Change and Object Tracking (SCOT)
metric, which quantifies both object tracking as well as change detection. We
demonstrate that despite the moderate resolution of the data, we are able to
track individual building identifiers over time. This task has broad
implications for disaster preparedness, the environment, infrastructure
development, and epidemic prevention.
|
This paper studies fixed step-size stochastic approximation (SA) schemes,
including stochastic gradient schemes, in a Riemannian framework. It is
motivated by several applications, where geodesics can be computed explicitly,
and their use accelerates crude Euclidean methods. A fixed step-size scheme
defines a family of time-homogeneous Markov chains, parametrized by the
step-size. Here, using this formulation, non-asymptotic performance bounds are
derived, under Lyapunov conditions. Then, for any step-size, the corresponding
Markov chain is proved to admit a unique stationary distribution, and to be
geometrically ergodic. This result gives rise to a family of stationary
distributions indexed by the step-size, which is further shown to converge to a
Dirac measure, concentrated at the solution of the problem at hand, as the
step-size goes to 0. Finally, the asymptotic rate of this convergence is
established, through an asymptotic expansion of the bias, and a central limit
theorem.
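As a hedged illustration of a fixed step-size Riemannian scheme with explicit geodesics, the sketch below runs noisy Riemannian gradient descent for the Rayleigh quotient on the unit sphere; the objective, noise model, and step-size are assumptions chosen for illustration, not the setting analyzed in the paper.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: follow the geodesic from x along tangent v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def riemannian_sgd_rayleigh(A, eta=0.05, T=5000, seed=0):
    """Fixed step-size Riemannian SGD minimizing x^T A x on the unit sphere.

    With a constant step-size eta, the iterates form a time-homogeneous Markov
    chain parametrized by eta, as in the abstract's formulation.
    """
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    for _ in range(T):
        noise = rng.standard_normal((d, d))
        A_t = A + 0.05 * (noise + noise.T)           # noisy symmetric observation of A
        egrad = 2.0 * A_t @ x                        # Euclidean gradient of x^T A_t x
        rgrad = egrad - (x @ egrad) * x              # project onto the tangent space at x
        x = sphere_exp(x, -eta * rgrad)              # geodesic (exponential map) step
    return x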
|
We consider the lattice analog of a recently proposed continuum model for the
propagation of one- and two-photon states in a random medium. We find that
there is localization of single photons in an energy band centered at the
resonant energy of the atoms. Moreover, there is also localization of photons
at arbitrarily large energies. For the case of two photons, there is
localization in an energy band centered at twice the resonant frequency.
|
Many-electron problems pose some of the greatest challenges in computational
science, with important applications across many fields of modern science.
Fermionic quantum Monte Carlo (QMC) methods are among the most powerful
approaches to these problems. However, they can be severely biased when
controlling the fermionic sign problem using constraints, as is necessary for
scalability. Here we propose an approach that combines constrained QMC with
quantum computing tools to reduce such biases. We experimentally implement our
scheme using up to 16 qubits in order to unbias constrained QMC calculations
performed on chemical systems with as many as 120 orbitals. These experiments
represent the largest chemistry simulations performed on quantum computers
(more than doubling the size of prior electron correlation calculations), while
obtaining accuracy competitive with state-of-the-art classical methods. Our
results demonstrate a new paradigm of hybrid quantum-classical algorithm,
surpassing the popular variational quantum eigensolver in terms of potential
towards the first practical quantum advantage in ground state many-electron
calculations.
|
In recent years, graph theoretic considerations have become increasingly
important in the design of HPC interconnection topologies. One approach is to
seek optimal or near-optimal families of graphs with respect to a particular
graph theoretic property, such as diameter. In this work, we consider
topologies which optimize the spectral gap. In particular, we study a novel HPC
topology, SpectralFly, designed around the Ramanujan graph construction of
Lubotzky, Phillips, and Sarnak (LPS). We show combinatorial properties, such as
diameter, bisection bandwidth, average path length, and resilience to link
failure, of SpectralFly topologies are better than, or comparable to, similarly
constrained DragonFly, SlimFly, and BundleFly topologies. Additionally, we
simulate the performance of SpectralFly topologies on a representative sample
of physics-inspired HPC workloads using the Structure Simulation Toolkit
Macroscale Element Library simulator and demonstrate considerable benefit to
using the LPS construction as the basis of the SpectralFly topology.
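As a small illustration of the spectral-gap criterion underlying the design (a $d$-regular graph is Ramanujan when every nontrivial adjacency eigenvalue has modulus at most $2\sqrt{d-1}$), the hedged sketch below measures the gap of a given regular graph; the random regular graph below stands in for an actual LPS construction, which is not implemented here.

```python
import numpy as np
import networkx as nx

def spectral_gap_report(G):
    """Spectral gap and Ramanujan check for a connected d-regular graph G (sketch)."""
    degs = {deg for _, deg in G.degree()}
    assert len(degs) == 1, "expects a regular graph"
    d = degs.pop()
    eigs = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))[::-1]  # descending spectrum
    nontrivial = eigs[np.abs(eigs) < d - 1e-9]     # drop the trivial +/- d eigenvalues
    lam = np.max(np.abs(nontrivial))               # largest nontrivial eigenvalue modulus
    return {
        "degree": d,
        "spectral_gap": float(d - eigs[1]),
        "ramanujan_bound": 2 * np.sqrt(d - 1),
        "is_ramanujan": bool(lam <= 2 * np.sqrt(d - 1) + 1e-9),
    }

# A random 3-regular graph stands in here for an actual LPS Ramanujan graph.
print(spectral_gap_report(nx.random_regular_graph(3, 20, seed=1)))
```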
|
We have developed a simulation tool to model self-limited processes such as
atomic layer deposition and atomic layer etching inside reactors of arbitrary
geometry. In this work, we have applied this model to two standard types of
cross-flow reactors: a cylindrical reactor and a model 300 mm wafer reactor,
and explored both ideal and non-ideal self-limited kinetics. For the
cylindrical tube reactor the full simulation results agree well with analytic
expressions obtained using a simple plug flow model, though the presence of
axial diffusion tends to soften growth profiles with respect to the plug flow
case. Our simulations also allowed us to model the output of in-situ techniques
such as quartz crystal microbalance and mass spectrometry, providing a way of
discriminating between ideal and non-ideal surface kinetics using in-situ
measurements. We extended the simulations to consider two non-ideal
self-limited processes: soft-saturating processes characterized by a slow
reaction pathway, and processes where surface byproducts can compete with the
precursor for the same pool of adsorption sites, allowing us to quantify their
impact on the thickness variability across 300 mm wafer substrates.
|
Single image super-resolution (SISR) algorithms reconstruct high-resolution
(HR) images with their low-resolution (LR) counterparts. It is desirable to
develop image quality assessment (IQA) methods that can not only evaluate and
compare SISR algorithms, but also guide their future development. In this
paper, we assess the quality of SISR generated images in a two-dimensional (2D)
space of structural fidelity versus statistical naturalness. This allows us to
observe the behaviors of different SISR algorithms as a tradeoff in the 2D
space. Specifically, SISR methods are traditionally designed to achieve high
structural fidelity but often sacrifice statistical naturalness, while recent
generative adversarial network (GAN) based algorithms tend to create more
natural-looking results but lose significantly on structural fidelity.
Furthermore, such a 2D evaluation can be easily fused to a scalar quality
prediction. Interestingly, we find that a simple linear combination of a
straightforward local structural fidelity and a global statistical naturalness
measures produce surprisingly accurate predictions of SISR image quality when
tested using public subject-rated SISR image datasets. Code of the proposed
SFSN model is publicly available at \url{https://github.com/weizhou-geek/SFSN}.
|
Strongly confined NaVO$^+$ segregation and its thermo-responsive
functionality at the interface between simple sputter-deposited amorphous
vanadium oxide thin films and soda-lime glass was substantiated in the present
study by in-situ temperature-controlled Time of Flight Secondary Ion Mass
Spectrometry (ToF-SIMS). The obtained ToF-SIMS depth profiles provided
unambiguous evidence for a reversible transformation that caused systematic
switching of the NaVO$^+$/ Na$^+$ and Na$^+$/ VO$^+$ intensities upon cycling
the temperature between 25 $^\circ$C and 340 $^\circ$C. Subsequently, NaVO
complexes were found to be reversibly formed (at 300 $^\circ$C) in vanadium
oxide diffused glass, leading to thermo-responsive electrical behaviour of the
thin film glass system. This new segregation- and diffusion-dependent
multifunctionality of NaVO$^+$ points towards applications as an advanced
material for thermo-optical switches, in smart windows, or in thermal sensors.
|
Going beyond the simplified gluonic cascades, we introduce both gluon and
quark degrees of freedom for partonic cascades inside the medium. We then solve
the set of coupled evolution equations numerically with splitting kernels
calculated for static, exponential, and Bjorken expanding media to arrive at
medium-modified parton spectra for quark and gluon initiated jets. Using these,
we calculate the inclusive jet $R_{AA}$ where the phenomenologically driven
combinations of quark and gluon jet fractions are included. Then, the rapidity
dependence of the jet $R_{AA}$ is examined. We also study the path-length
dependence of jet quenching for different types of expanding media by
calculating the jet $v_{2}$. Additionally, we qualitatively study the
sensitivity of observables to the time of the onset of the quenching for the
case of Bjorken media. All calculations are compared with recently measured
data.
|
We present a novel decision tree-based synthesis algorithm of ranking
functions for verifying program termination. Our algorithm is integrated into
the workflow of CounterExample Guided Inductive Synthesis (CEGIS). CEGIS is an
iterative learning model where, at each iteration, (1) a synthesizer
synthesizes a candidate solution from the current examples, and (2) a validator
accepts the candidate solution if it is correct, or rejects it providing
counterexamples as part of the next examples. Our main novelty is in the design
of a synthesizer: building on top of a usual decision tree learning algorithm,
our algorithm detects cycles in a set of example transitions and uses them for
refining decision trees. We have implemented the proposed method and obtained
promising experimental results on existing benchmark sets of (non-)termination
verification problems that require synthesis of piecewise-defined lexicographic
affine ranking functions.
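The sketch below spells out the generic CEGIS loop described above; `synthesize` and `validate` are placeholders for the paper's decision-tree learner with cycle detection and for the validator, respectively.

```python
def cegis(synthesize, validate, initial_examples, max_iters=100):
    """Generic CEGIS loop (sketch).

    synthesize(examples) -> candidate ranking function, or None if infeasible
    validate(candidate)  -> (True, [])                 if the candidate is correct
                            (False, counterexamples)   otherwise
    """
    examples = list(initial_examples)
    for _ in range(max_iters):
        candidate = synthesize(examples)       # learn from the current example transitions
        if candidate is None:
            return None                        # no ranking function in the search space
        ok, counterexamples = validate(candidate)
        if ok:
            return candidate                   # verified ranking function
        examples.extend(counterexamples)       # refine with counterexample transitions
    raise RuntimeError("CEGIS budget exhausted")
```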
|
In the past few years we have seen great advances in object perception
(particularly in 4D space-time dimensions) thanks to deep learning methods.
However, they typically rely on large amounts of high-quality labels to achieve
good performance, which often require time-consuming and expensive work by
human annotators. To address this we propose an automatic annotation pipeline
that generates accurate object trajectories in 3D space (i.e., 4D labels) from
LiDAR point clouds. The key idea is to decompose the 4D object label into two
parts: the object size in 3D, which is fixed over time for rigid objects, and
the motion path describing the evolution of the object's pose through time.
Instead of generating a series of labels in one shot, we adopt an iterative
refinement process where online generated object detections are tracked through
time as the initialization. Given the cheap but noisy input, our model produces
higher quality 4D labels by re-estimating the object size and smoothing the
motion path, where the improvement is achieved by exploiting aggregated
observations and motion cues over the entire trajectory. We validate the
proposed method on a large-scale driving dataset and show a 25% reduction of
human annotation efforts. We also showcase the benefits of our approach in the
annotator-in-the-loop setting.
|
Gastric diffuse-type adenocarcinoma represents a disproportionately high
percentage of cases of gastric cancers occurring in the young, and its relative
incidence seems to be on the rise. Usually it affects the body of the stomach,
and presents shorter duration and worse prognosis compared with the
differentiated (intestinal) type adenocarcinoma. The main difficulty
encountered in the differential diagnosis of gastric adenocarcinomas occurs
with the diffuse-type. As the cancer cells of diffuse-type adenocarcinoma are
often single and inconspicuous in a background of desmoplasia and inflammation, it
can often be mistaken for a wide variety of non-neoplastic lesions including
gastritis or reactive endothelial cells seen in granulation tissue. In this
study we trained deep learning models to classify gastric diffuse-type
adenocarcinoma from WSIs. We evaluated the models on five test sets obtained
from distinct sources, achieving receiver operating characteristic (ROC) area under the
curves (AUCs) in the range of 0.95-0.99. The highly promising results
demonstrate the potential of AI-based computational pathology for aiding
pathologists in their diagnostic workflow.
|
The mechanism of bacterial cell size control has been a mystery for decades,
which involves the well-coordinated growth and division in the cell cycle. The
revolutionary modern techniques of microfluidics and the advanced live imaging
analysis techniques allow long-term observations and high-throughput analysis
of bacterial growth at the single-cell level, promoting a new wave of quantitative
investigations of this puzzle. Seizing this opportunity, this theoretical study
aims to clarify the stochastic nature of bacterial cell size control under the
assumption of the accumulation mechanism, which is favoured by recent
experiments on species of bacteria. Via the master equation approach with
properly chosen boundary conditions, the distributions concerned in cell size
control are estimated and are confirmed by experiments. In this analysis, the
inter-generation Green's function is analytically evaluated as the key to
bridge two kinds of statistics used in batch-culture and mother machine
experiments. This framework allows us to quantify the noise level in growth and
accumulation according to experimental data. As a consequence of non-Gaussian
noises of the added sizes, the non-equilibrium nature of bacterial cell size
homeostasis is predicted, whose biological meaning requires further
investigation.
|
We investigate the modification of the Higgs signals from vector boson fusion
at the LHC arising from higher-dimensional effective operators involving
quarks, electroweak gauge bosons and the 125-GeV scalar discovered in 2012.
Taking a few of the admissible dimension-6 operators as illustration, we work
within the framework of the Standard Model Effective Field Theory (SMEFT) and
identify kinematic variables that can reflect the presence of such effective
operators. The useful variables turn out to be the geometric mean of the
transverse momenta of the two forward jets produced in VBF and the rapidity
difference between the two forward jets. We identify the shift in event
population caused by the effective operators in the plane spanned by the above
kinematic variables. Minimum values of the Wilson coefficients of the chosen
dimension-6 operators are identified, for which they can be probed at the
$3\sigma$ level in the high luminosity run of the LHC at 14 TeV. Projected
exclusion limits on some of the couplings, obtained from our analysis, can
significantly improve the limits on such couplings derived from electroweak
precision data.
|
Which classes can be learned properly in the online model? -- that is, by an
algorithm that at each round uses a predictor from the concept class. While
there are simple and natural cases where improper learning is necessary, it is
natural to ask how complex must the improper predictors be in such cases. Can
one always achieve nearly optimal mistake/regret bounds using "simple"
predictors?
In this work, we give a complete characterization of when this is possible,
thus settling an open problem which has been studied since the pioneering works
of Angluin (1987) and Littlestone (1988). More precisely, given any concept
class C and any hypothesis class H, we provide nearly tight bounds (up to a log
factor) on the optimal mistake bounds for online learning C using predictors
from H. Our bound yields an exponential improvement over the previously best
known bound by Chase and Freitag (2020).
As applications, we give constructive proofs showing that (i) in the
realizable setting, a near-optimal mistake bound (up to a constant factor) can
be attained by a sparse majority-vote of proper predictors, and (ii) in the
agnostic setting, a near-optimal regret bound (up to a log factor) can be
attained by a randomized proper algorithm.
A technical ingredient of our proof which may be of independent interest is a
generalization of the celebrated Minimax Theorem (von Neumann, 1928) for binary
zero-sum games. A simple game which fails to satisfy Minimax is "Guess the
Larger Number", where each player picks a number and the larger number wins.
The payoff matrix is an infinite triangular matrix. We show this is the only obstruction:
if a game does not contain triangular submatrices of unbounded sizes then the
Minimax Theorem holds. This generalizes von Neumann's Minimax Theorem by
removing requirements of finiteness (or compactness), and captures precisely
the games of interest in online learning.
|
Frequently, population studies feature pyramidally-organized data represented
using Hierarchical Bayesian Models (HBM) enriched with plates. These models can
become prohibitively large in settings such as neuroimaging, where a sample is
composed of a functional MRI signal measured on 64 thousand brain locations,
across 4 measurement sessions, and at least tens of subjects. Even a reduced
example on a specific cortical region of 300 brain locations features around 1
million parameters, hampering the usage of modern density estimation techniques
such as Simulation-Based Inference (SBI) or structured Variational Inference
(VI). To infer parameter posterior distributions in this challenging class of
problems, we designed a novel methodology that automatically produces a
variational family dual to a target HBM. This variational family, represented
as a neural network, consists of the combination of an attention-based
hierarchical encoder feeding summary statistics to a set of normalizing flows.
Our automatically-derived neural network exploits exchangeability in the
plate-enriched HBM and factorizes its parameter space. The resulting
architecture reduces by orders of magnitude its parameterization with respect
to that of a typical SBI or structured VI representation, while maintaining
expressivity. Our method performs inference on the specified HBM in an amortized
setup: once trained, it can readily be applied to a new data sample to compute
the parameters' full posterior. We demonstrate the capability and scalability of
our method on simulated data, as well as a challenging high-dimensional brain
parcellation experiment. We also open up several questions that lie at the
intersection between SBI techniques, structured Variational Inference, and
inference amortization.
|
We introduce a convergent finite difference method for solving the optimal
transportation problem on the sphere. The method applies to both the
traditional squared geodesic cost (arising in mesh generation) and a
logarithmic cost (arising in the reflector antenna design problem). At each
point on the sphere, we replace the surface PDE with a Generated Jacobian
equation posed on the local tangent plane using geodesic normal coordinates.
The discretization is inspired by recent monotone methods for the
Monge-Amp\`ere equation, but requires significant adaptations in order to
correctly handle the mix of gradient and Hessian terms appearing inside the
nonlinear determinant operator, as well as the singular logarithmic cost
function. Numerical results demonstrate the success of this method on a wide
range of challenging problems involving both the squared geodesic and the
logarithmic cost functions.
|
As the use of artificial intelligence (AI) in high-stakes decision-making
increases, the ability to contest such decisions is being recognised in AI
ethics guidelines as an important safeguard for individuals. Yet, there is
little guidance on how AI systems can be designed to support contestation. In
this paper we explain that the design of a contestation process is important
due to its impact on perceptions of fairness and satisfaction. We also consider
design challenges, including a lack of transparency as well as the numerous
design options that decision-making entities will be faced with. We argue for a
human-centred approach to designing for contestability to ensure that the needs
of decision subjects, and the community, are met.
|
The authors reply to the Comment arXiv:2104.03770 by P. Canfield et al.
|
Machine learning (ML) has been widely used for efficient resource allocation
(RA) in wireless networks. Although superb performance is achieved on small and
simple networks, most existing ML-based approaches are confronted with
difficulties when heterogeneity occurs and network size expands. In this paper,
specifically focusing on power control/beamforming (PC/BF) in heterogeneous
device-to-device (D2D) networks, we propose a novel unsupervised learning-based
framework named heterogeneous interference graph neural network (HIGNN) to
handle these challenges. First, we characterize diversified link features and
interference relations with heterogeneous graphs. Then, HIGNN is proposed to
empower each link to obtain its individual transmission scheme after limited
information exchange with neighboring links. It is noteworthy that HIGNN is
scalable to wireless networks of growing sizes with robust performance after
being trained on small-sized networks. Numerical results show that compared with
state-of-the-art benchmarks, HIGNN achieves much higher execution efficiency
while providing strong performance.
|
Hierarchical clustering is a stronger extension of one of today's most
influential unsupervised learning methods: clustering. The goal of this method
is to create a hierarchy of clusters, thus constructing cluster evolutionary
history and simultaneously finding clusterings at all resolutions. We propose
four traits of interest for hierarchical clustering algorithms: (1) empirical
performance, (2) theoretical guarantees, (3) cluster balance, and (4)
scalability. While a number of algorithms are designed to achieve one or two of
these traits at a time, none achieves all four.
Inspired by Bateni et al.'s scalable and empirically successful Affinity
Clustering [NeurIPS 2017], we introduce Affinity Clustering's successor,
Matching Affinity Clustering. Like its predecessor, Matching Affinity
Clustering maintains strong empirical performance and uses Massively Parallel
Communication as its distributed model. Designed to maintain provably balanced
clusters, we show that our algorithm achieves good, constant factor
approximations for Moseley and Wang's revenue and Cohen-Addad et al.'s value.
We show Affinity Clustering cannot approximate either function. Along the way,
we also introduce an efficient $k$-sized maximum matching algorithm in the MPC
model.
|
The learning of a new language remains to this date a cognitive task that
requires considerable diligence and willpower, recent advances and tools
notwithstanding. In this paper, we propose Broccoli, a new paradigm aimed at
reducing the required effort by seamlessly embedding vocabulary learning into
users' everyday information diets. This is achieved by inconspicuously
switching chosen words encountered by the user for their translation in the
target language. Thus, by seeing words in context, the user can assimilate new
vocabulary without much conscious effort. We validate our approach in a careful
user study, finding that the efficacy of the lightweight Broccoli approach is
competitive with traditional, memorization-based vocabulary learning. The low
cognitive overhead is manifested in a pronounced decrease in learners' usage of
mnemonic learning strategies, as compared to traditional learning. Finally, we
establish that language patterns in typical information diets are compatible
with spaced-repetition strategies, thus enabling an efficient use of the
Broccoli paradigm. Overall, our work establishes the feasibility of a novel and
powerful "install-and-forget" approach for embedded language acquisition.
|
Recently, graph neural networks (GNNs) have achieved remarkable performances
for quantum mechanical problems. However, a graph convolution can only cover a
localized region, and cannot capture long-range interactions of atoms. This
behavior is contrary to theoretical interatomic potentials, which is a
fundamental limitation of the spatial based GNNs. In this work, we propose a
novel attention-based framework for molecular property prediction tasks. We
represent a molecular conformation as a discrete atomic sequence combined with
atom-atom distance attributes, named Geometry-aware Transformer (GeoT). In
particular, we adopt a Transformer architecture, which has been widely used for
sequential data. Our proposed model trains sequential representations of
molecular graphs based on globally constructed attentions, maintaining all
spatial arrangements of atom pairs. Our method does not suffer from cost-intensive
computations, such as angle calculations. The experimental results on
several public benchmarks and visualization maps verify that keeping the
long-range interatomic attributes can significantly improve the model's
predictability.
|
Many machine learning problems can be formulated as minimax problems such as
Generative Adversarial Networks (GANs), AUC maximization and robust estimation,
to mention but a few. A substantial amount of studies are devoted to studying
the convergence behavior of their stochastic gradient-type algorithms. In
contrast, there is relatively little work on their generalization, i.e., how
the learning models built from training examples would behave on test examples.
In this paper, we provide a comprehensive generalization analysis of stochastic
gradient methods for minimax problems under both convex-concave and
nonconvex-nonconcave cases through the lens of algorithmic stability. We
establish a quantitative connection between stability and several
generalization measures both in expectation and with high probability. For the
convex-concave setting, our stability analysis shows that stochastic gradient
descent ascent attains optimal generalization bounds for both smooth and
nonsmooth minimax problems. We also establish generalization bounds for both
weakly-convex-weakly-concave and gradient-dominated problems.
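For concreteness, the sketch below implements plain stochastic gradient descent ascent, the algorithm whose stability is analyzed above, on a toy strongly-convex-strongly-concave saddle problem; the noise level, step-size, and objective are illustrative assumptions rather than the settings studied in the paper.

```python
import numpy as np

def sgda(grad_x, grad_y, x0, y0, eta=0.05, n_steps=2000, noise=0.1, seed=0):
    """Stochastic gradient descent ascent with noisy gradient oracles (sketch)."""
    rng = np.random.default_rng(seed)
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    for _ in range(n_steps):
        gx = grad_x(x, y) + noise * rng.standard_normal(x.shape)  # noisy partial gradient in x
        gy = grad_y(x, y) + noise * rng.standard_normal(y.shape)  # noisy partial gradient in y
        x = x - eta * gx      # descent step for the min player
        y = y + eta * gy      # ascent step for the max player
    return x, y

# Toy strongly-convex-strongly-concave problem f(x, y) = 0.5|x|^2 + x.y - 0.5|y|^2,
# whose unique saddle point is the origin.
print(sgda(lambda x, y: x + y, lambda x, y: x - y, [1.0], [1.0]))
```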
|
Actively tunable optical filters based on chalcogenide phase-change materials
(PCMs) are an emerging technology with applications across chemical
spectroscopy and thermal imaging. The refractive index of an embedded PCM thin
film is modulated through an amorphous-to-crystalline phase transition induced
through thermal stimulus. Performance metrics include transmittance, passband
center wavelength (CWL), and bandwidth; ideally monitored during operation (in
situ) or after a set number of tuning cycles to validate real-time operation.
Measuring these metrics in real time is challenging.
Fourier-transform infrared spectroscopy (FTIR) provides the gold-standard for
performance characterization, yet is expensive and inflexible -- incorporating
the PCM tuning mechanism is not straightforward, hence in situ electro-optical
measurements are challenging. In this work, we implement an open-source
MATLAB-controlled real-time performance characterization system consisting of
an inexpensive linear variable filter (LVF) and mid-wave infrared camera,
capable of switching the PCM-based filters while simultaneously recording in
situ filter performance metrics and spectral filtering profile. These metrics
are calculated through pixel intensity measurements and displayed on a
custom-developed graphical user interface in real-time. The CWL is determined
through the spatial position of intensity maxima along the LVF's longitudinal axis.
Furthermore, plans are detailed for a future experimental system that further
reduces cost, is compact, and utilizes a near-infrared camera.
|
We study an optimization problem related to the approximation of given data
by a linear combination of transformed modes. In the simplest case, the
optimization problem reduces to a minimization problem well-studied in the
context of proper orthogonal decomposition. Allowing transformed modes in the
approximation renders this approach particularly useful to compress data with
transported quantities, which are prevalent in many flow applications. We prove
the existence of a solution to the infinite-dimensional optimization problem.
Towards a numerical implementation, we compute the gradient of the cost
functional and derive a suitable discretization in time and space. We
demonstrate the theoretical findings with three challenging numerical examples.
|
With time, machine learning models have increased in their scope,
functionality and size. Consequently, the increased functionality and size of
such models require high-end hardware both to train and to provide inference
after the fact. This paper aims to explore the possibilities within the domain
of model compression, to discuss the efficiency of combining various levels of
pruning and quantization, and to propose a quality measurement metric to
objectively decide which combination is best in terms of minimizing the
accuracy delta and maximizing the size reduction factor.
|
We consider the classic problem of computing the Longest Common Subsequence
(LCS) of two strings of length $n$. While a simple quadratic algorithm has been
known for the problem for more than 40 years, no faster algorithm has been
found despite an extensive effort. The lack of progress on the problem has
recently been explained by Abboud, Backurs, and Vassilevska Williams [FOCS'15]
and Bringmann and K\"unnemann [FOCS'15] who proved that there is no
subquadratic algorithm unless the Strong Exponential Time Hypothesis fails.
This has led the community to look for subquadratic approximation algorithms
for the problem.
Yet, unlike the edit distance problem for which a constant-factor
approximation in almost-linear time is known, very little progress has been
made on LCS, making it a notoriously difficult problem also in the realm of
approximation. For the general setting, only a naive
$O(n^{\varepsilon/2})$-approximation algorithm with running time
$\tilde{O}(n^{2-\varepsilon})$ has been known, for any constant $0 <
\varepsilon \le 1$. Recently, a breakthrough result by Hajiaghayi, Seddighin,
Seddighin, and Sun [SODA'19] provided a linear-time algorithm that yields a
$O(n^{0.497956})$-approximation in expectation; improving upon the naive
$O(\sqrt{n})$-approximation for the first time.
In this paper, we provide an algorithm that in time $O(n^{2-\varepsilon})$
computes an $\tilde{O}(n^{2\varepsilon/5})$-approximation with high
probability, for any $0 < \varepsilon \le 1$. Our result (1) gives an
$\tilde{O}(n^{0.4})$-approximation in linear time, improving upon the bound of
Hajiaghayi, Seddighin, Seddighin, and Sun, (2) provides an algorithm whose
approximation scales with any subquadratic running time $O(n^{2-\varepsilon})$,
improving upon the naive bound of $O(n^{\varepsilon/2})$ for any $\varepsilon$,
and (3) instead of only in expectation, succeeds with high probability.
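For reference, the simple quadratic algorithm mentioned above is the textbook dynamic program sketched here.

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(n^2) dynamic program for the length of the LCS of a and b."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1            # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one character
    return dp[n][m]

assert lcs_length("ABCBDAB", "BDCABA") == 4   # e.g. "BCBA"
```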
|
Game semantics is a denotational semantics presenting compositionally the
computational behaviour of various kinds of effectful programs. One of its
celebrated achievements is to have obtained full abstraction results for
programming languages with a variety of computational effects, in a single
framework. This is known as the semantic cube or Abramsky's cube, which for
sequential deterministic programs establishes a correspondence between certain
conditions on strategies (''innocence'', ''well-bracketing'', ''visibility'')
and the absence of matching computational effects. Outside of the sequential
deterministic realm, there are still a wealth of game semantics-based full
abstraction results; but they no longer fit in a unified canvas. In particular,
Ghica and Murawski's fully abstract model for shared state concurrency (IA)
does not have a matching notion of pure parallel program; we say that
parallelism and interference (i.e. state plus semaphores) are entangled. In
this paper we construct a causal version of Ghica and Murawski's model, also
fully abstract for IA. We provide compositional conditions, parallel innocence
and sequentiality, respectively banning interference and parallelism, and
leading to four full abstraction results. To our knowledge, this is the first
extension of Abramsky's semantic cube programme beyond the sequential
deterministic world.
|
Microfabricated ion-trap devices offer a promising pathway towards scalable
quantum computing. Research efforts have begun to focus on the engineering
challenges associated with developing large-scale ion-trap arrays and networks.
However, increasing the size of the array and integrating on-chip electronics
can drastically increase the power dissipation within the ion-trap chips. This
leads to an increase in the operating temperature of the ion-trap and limits
the device performance. Therefore, effective thermal management is an essential
consideration for any large-scale architecture. Presented here is the
development of a modular cooling system designed for use with multiple
ion-trapping experiments simultaneously. The system includes an extensible
cryostat that permits scaling of the cooling power to meet the demands of a
large network. Following experimental testing on two independent ion-trap
experiments, the cooling system is expected to deliver a net cooling power of
111 W at ~70 K to up to four experiments. The cooling system is a step towards
meeting the practical challenges of operating large-scale quantum computers
with many qubits.
|
In this paper, we mainly show that generalized hyperharmonic number sums with
reciprocal binomial coefficients can be expressed in terms of classical
(alternating) Euler sums, zeta values and generalized (alternating) harmonic
numbers.
|
We demonstrate a method for the simultaneous determination of the
thermoelectric figure of merit of multiple materials by means of the lock-in
thermography (LIT) technique. This method is based on the thermal analyses of
the transient temperature distribution induced by the Peltier effect and Joule
heating, which enables high-throughput estimation of the thermal diffusivity,
thermal conductivity, volumetric heat capacity, Seebeck or Peltier coefficient
of the materials. The LIT-based approach has high reproducibility and
reliability because it offers sensitive noncontact temperature measurements and
does not require the installation of an external heater. By performing the same
measurements and analyses while applying an external magnetic field, the
magnetic field and/or magnetization dependences of the Seebeck or Peltier
coefficient and thermal conductivity can be determined simultaneously. We
demonstrate the validity of this method by using several ferromagnetic metals
(Ni, Ni$_{95}$Pt$_{5}$, and Fe) and a nonmagnetic metal (Ti). The proposed
method will be useful for materials research in thermoelectrics and spin
caloritronics and for investigation of magneto-thermal and
magneto-thermoelectric transport properties.
|
We use holographic methods to show that photons emitted by a strongly coupled
plasma subject to a magnetic field are linearly polarized regardless of their
four-momentum, except when they propagate along the field direction. The
gravitational dual is constructed using a 5D truncation of 10-dimensional type
IIB supergravity, and includes a scalar field in addition to the constant
magnetic one. In terms of the geometry of the collision experiment that we
model, our statement is that any photon produced there has to be in a single
polarization state, the one parallel to the reaction plane.
|
Label the vertices of the complete graph $K_v$ with the integers $\{ 0, 1,
\ldots, v-1 \}$ and define the length of the edge between $x$ and $y$ to be
$\min( |x-y| , v - |x-y| )$. Let $L$ be a multiset of size $v-1$ with
underlying set contained in $\{ 1, \ldots, \lfloor v/2 \rfloor \}$. The
Buratti-Horak-Rosa Conjecture is that there is a Hamiltonian path in $K_v$
whose edge lengths are exactly $L$ if and only if for any divisor $d$ of $v$
the number of multiples of $d$ appearing in $L$ is at most $v-d$.
We introduce "growable realizations," which enable us to prove many new
instances of the conjecture and to reprove known results in a simpler way. As
examples of the new method, we give a complete solution when the underlying set
is contained in $\{ 1,4,5 \}$ or in $\{ 1,2,3,4 \}$ and a partial result when
the underlying set has the form $\{ 1, x, 2x \}$. We believe that for any set
$U$ of positive integers there is a finite set of growable realizations that
implies the truth of the Buratti-Horak-Rosa Conjecture for all but finitely
many multisets with underlying set $U$.
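The divisibility condition in the conjecture is easy to check mechanically; the sketch below does so for a given order $v$ and length multiset $L$ (it tests only the stated necessary condition, not the existence of a realization).

```python
from collections import Counter

def bhr_condition(v, lengths):
    """Check the Buratti-Horak-Rosa divisibility condition: for every divisor d
    of v, the number of multiples of d in the length multiset is at most v - d.
    Assumes len(lengths) == v - 1."""
    counts = Counter(lengths)
    for d in range(1, v + 1):
        if v % d == 0:
            multiples = sum(c for length, c in counts.items() if length % d == 0)
            if multiples > v - d:
                return False
    return True

# A realizable example: the path 0,1,...,6 in K_7 uses six edges of length 1.
assert bhr_condition(7, [1, 1, 1, 1, 1, 1])
```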
|
While most simulations of the epoch of reionization have focused on
single-stellar populations in star-forming dwarf galaxies, products of binary
evolution are expected to significantly contribute to emissions of
hydrogen-ionizing photons. Among these products are stripped stars (or helium
stars), which have their envelopes stripped through interactions with binary
companions, leaving an exposed helium core. Previous work has suggested these
stripped stars can dominate the LyC photon output of high-redshift low
luminosity galaxies. Other sources of hard radiation in the early universe
include zero-metallicity Population III stars, which may have similar SED
properties to galaxies with radiation dominated by stripped star emissions.
Here, we use two metrics (the power-law exponent over wavelength intervals
240-500 \r{A}, 600-900 \r{A}, and 1200-2000 \r{A}, and the ratio of total
luminosity in FUV wavelengths to LyC wavelengths) to compare the SEDs of
simulated galaxies with only single-stellar evolution, galaxies containing
stripped stars, and galaxies containing Population III stars, with four
different IMFs. We find that stripped stars significantly alter the SEDs in the
LyC range of galaxies at the epoch of reionization. SEDs in galaxies with
stripped stars present have lower power-law indices in the LyC range and lower
FUV to LyC luminosity ratios. These differences in SEDs are present at all
considered luminosities ($M_{UV} > -15$, AB system), and are most pronounced
for lower luminosity galaxies. We also find that SEDs of galaxies with stripped
stars and Pop III stars are distinct from each other for all tested IMFs.
|
In canonical quantum gravity, the presence of spatial boundaries naturally
leads to boundary quantum states, representing quantum boundary conditions
for the bulk fields. As a consequence, quantum states of the bulk geometry
need to be upgraded to wave-functions valued in the boundary Hilbert space:
the bulk becomes a quantum operator acting on boundary states. We apply this to
loop quantum gravity and describe spin networks with 2d boundary as
wave-functions mapping bulk holonomies to spin states on the boundary. This
sets the bulk-boundary relation in a clear mathematical framework, which allows
us to define the boundary density matrix induced by a bulk spin network state
after tracing out the bulk degrees of freedom. We address the question of bulk
reconstruction and prove a boundary-to-bulk universal reconstruction procedure,
to be understood as a purification of the mixed boundary state into a pure bulk
state. We further perform a first investigation into the algebraic structure of
induced boundary density matrices and show how correlations between bulk
excitations, i.e. quanta of 3d geometry, get reflected into the boundary
density matrix.
|
In this article we provide a framework for the study of Hecke operators
acting on the Bredon (co)homology of an arithmetic discrete group. Our main
interest lies in the study of Hecke operators for Bianchi groups. Using the
Baum-Connes conjecture, we can transfer computations in Bredon homology to
obtain a Hecke action on the $K$-theory of the reduced $C^{*}$-algebra of the
group. We show the power of this method giving explicit computations for the
group $SL_2(\mathbb{Z}[i])$. In order to carry out these computations we use an
Atiyah-Segal type spectral sequence together with the Bredon homology of the
classifying space for proper actions.
|
The enhanced star forming activity, typical of starburst galaxies, powers
strong galactic winds expanding on kpc scales and characterized by bubble
structures. Here we discuss the possibility that particle acceleration may take
place at the termination shock of such winds. We calculate the spectrum of such
particles and their maximum energy, that turns out to be in the range $\sim
10-10^2$ PeV for typical values of the parameters. Cosmic rays accelerated at
the termination shock can be advected towards the edge of the wind bubble and
eventually escape into extragalactic space. We also calculate the flux of gamma
rays and neutrinos produced by hadronic interactions in the bubble as well as
the diffuse flux resulting from the superposition of the contribution of
starburst galaxies on cosmological scales. Finally we compute the diffuse flux
of cosmic rays from starburst bubbles and compare it with existing data.
|
Nearly three decades ago, Bar-Noy, Motwani and Naor showed that no online
edge-coloring algorithm can edge color a graph optimally. Indeed, their work,
titled "the greedy algorithm is optimal for on-line edge coloring", shows that
the competitive ratio of $2$ of the na\"ive greedy algorithm is best possible
online. However, their lower bound required bounded-degree graphs, of maximum
degree $\Delta = O(\log n)$, which prompted them to conjecture that better
bounds are possible for higher-degree graphs. While progress has been made
towards resolving this conjecture for restricted inputs and arrivals or for
random arrival orders, an answer for fully general \emph{adversarial} arrivals
remained elusive.
We resolve this thirty-year-old conjecture in the affirmative, presenting a
$(1.9+o(1))$-competitive online edge coloring algorithm for general graphs of
degree $\Delta = \omega(\log n)$ under vertex arrivals. At the core of our
results, and of possible independent interest, is a new online algorithm which
rounds a fractional bipartite matching $x$ online under vertex arrivals,
guaranteeing that each edge $e$ is matched with probability $(1/2+c)\cdot x_e$,
for a constant $c>0.027$.
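For contrast with the new algorithm, the naive greedy baseline discussed above is sketched here for edge arrivals: each arriving edge receives the smallest color unused at its endpoints, which requires at most $2\Delta-1$ colors and is therefore 2-competitive.

```python
def greedy_online_edge_coloring(edges):
    """Naive greedy online edge coloring under edge arrivals (sketch).

    Each arriving edge gets the smallest color not already used at either
    endpoint, so at most 2*Delta - 1 colors are ever needed.
    """
    used = {}        # vertex -> set of colors already incident to it
    coloring = {}
    for u, v in edges:
        taken = used.setdefault(u, set()) | used.setdefault(v, set())
        c = next(c for c in range(len(taken) + 1) if c not in taken)
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring

print(greedy_online_edge_coloring([(0, 1), (1, 2), (2, 0), (0, 3)]))
```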
|
The vacancy concentration at finite temperatures is studied for a series of
(CoCrFeMn)$_{1-x_\mathrm{Ni}}$Ni$_{x_\mathrm{Ni}}$ alloys by grand-canonical
Monte-Carlo (MC) simulations. The vacancy formation energies are calculated
from a classical interatomic potential and exhibit a distribution due to the
different chemical environments of the vacated sites. In dilute alloys, this
distribution features multiple discrete peaks, while concentrated alloys
exhibit a unimodal distribution as there are many different chemical
environments of similar vacancy formation energy. MC simulations using a
numerically efficient bond-counting model confirm that the vacancy
concentration even in concentrated alloys may be calculated by the established
Maxwell-Boltzmann equation weighted by the given distribution of formation
energies. We calculate the variation of the vacancy concentration as a function of Ni
content in the (CoCrFeMn)$_{1-x_\mathrm{Ni}}$Ni$_{x_\mathrm{Ni}}$ system and demonstrate the
excellent agreement between the thermodynamic model and the results from the
grand-canonical Monte-Carlo simulations.
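A hedged sketch of the weighted Maxwell-Boltzmann estimate is given below: the equilibrium vacancy concentration is taken as the Boltzmann factor averaged over a sampled distribution of formation energies. The formation-entropy prefactor is neglected, and the example energies are purely illustrative, not values from the interatomic potential used in the paper.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def vacancy_concentration(formation_energies_eV, T_kelvin):
    """Equilibrium vacancy concentration from a distribution of formation
    energies: the Maxwell-Boltzmann factor averaged over the sampled local
    chemical environments (entropy prefactor omitted)."""
    E = np.asarray(formation_energies_eV, dtype=float)
    return np.mean(np.exp(-E / (K_B * T_kelvin)))

# Illustrative unimodal distribution of formation energies around ~1.8 eV.
rng = np.random.default_rng(0)
energies = rng.normal(loc=1.8, scale=0.15, size=10_000)
print(vacancy_concentration(energies, T_kelvin=1200))
```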
|
Remote photoplethysmography (rPPG), a family of techniques for monitoring
blood volume changes, may be especially useful for widespread contactless
health monitoring using face video from consumer-grade visible-light cameras.
The COVID-19 pandemic has caused the widespread use of protective face masks.
We found that occlusions from cloth face masks increased the mean absolute
error of heart rate estimation by more than 80\% when deploying methods
designed on unmasked faces. We show that augmenting unmasked face videos by
adding patterned synthetic face masks forces the model to attend to the
periocular and forehead regions, improving performance and closing the gap
between masked and unmasked pulse estimation. To our knowledge, this paper is
the first to analyse the impact of face masks on the accuracy of pulse
estimation and offers several novel contributions: (a) a 3D CNN-based method
designed for remote photoplethysmography in the presence of face masks, (b) two
publicly available pulse estimation datasets acquired from 86 unmasked and 61
masked subjects, (c) evaluations of handcrafted algorithms and a 3D CNN trained
on videos of unmasked faces and with masks synthetically added, and (d) data
augmentation method to add a synthetic mask to a face video.
|
This work mainly addresses continuous-time multi-agent consensus networks
where an adverse attacker affects the convergence performances of said
protocol. In particular, we develop a novel secure-by-design approach in which
a network manager monitors the system and broadcasts encrypted
tasks (i.e. hidden edge weight assignments) to the agents involved. Each agent
is then expected to decode the received codeword containing data on the task
through appropriate decoding functions by leveraging advanced security
principles, such as objective coding and information localization. Within this
framework, a stability analysis is conducted for showing the robustness to
channel tampering in the scenario where part of the codeword corresponding to a
single link in the system is corrupted. A trade-off between objective coding
capability and network robustness is also pointed out. To support these
novelties, an application example on decentralized estimation is provided.
Moreover, such an investigation of robust stability is also extended to the
discrete-time domain. Further numerical simulations are given to validate the
theoretical results in both time domains.
|
Imputation is a popular technique for handling missing data. We consider a
nonparametric approach to imputation using the kernel ridge regression
technique and propose consistent variance estimation. The proposed variance
estimator is based on a linearization approach which employs the entropy method
to estimate the density ratio. The root-n consistency of the imputation
estimator is established when a Sobolev space is utilized in the kernel ridge
regression imputation, which enables us to develop the proposed variance
estimator. Synthetic data experiments are presented to confirm our theory.
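A minimal sketch of kernel ridge regression imputation is given below, using scikit-learn's KernelRidge on synthetic data; the kernel choice, regularization, and missingness mechanism are illustrative assumptions, and the proposed linearization-based variance estimator is not implemented here.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def krr_impute(X, y, observed_mask, alpha=1.0, gamma=None):
    """Impute missing responses with kernel ridge regression fitted on the
    observed units, then estimate the population mean from the completed data
    (a sketch of the imputation estimator only)."""
    model = KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma)
    model.fit(X[observed_mask], y[observed_mask])            # fit on respondents only
    y_imputed = np.where(observed_mask, y, model.predict(X))  # fill in nonrespondents
    return y_imputed.mean(), y_imputed

# Synthetic example: responses missing completely at random.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
observed = rng.uniform(size=500) < 0.7
print(krr_impute(X, y, observed)[0])
```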
|