We explicitly compute the homology groups with coefficients in a field of
characteristic zero of cocyclic subgroups of even Artin groups of FC-type. We
also give some partial results in the case when the coefficients are taken in a
field of prime characteristic.
|
We prove that for all positive integers $n$ and $k$, there exists an integer
$N = N(n,k)$ satisfying the following. If $U$ is a set of $k$ direction vectors
in the plane and $\mathcal{J}_U$ is the set of all line segments in direction
$u$ for some $u\in U$, then for every $N$ families $\mathcal{F}_1, \ldots,
\mathcal{F}_N$, each consisting of $n$ mutually disjoint segments in
$\mathcal{J}_U$, there is a set $\{A_1, \ldots, A_n\}$ of $n$ disjoint segments
in $\bigcup_{1\leq i\leq N}\mathcal{F}_i$ and distinct integers $p_1, \ldots,
p_n\in \{1, \ldots, N\}$ satisfying that $A_j\in \mathcal{F}_{p_j}$ for all
$j\in \{1, \ldots, n\}$. We generalize this property from segments whose underlying lines lie in $k$ fixed directions to $k$ families of simple curves satisfying certain conditions.
|
L. Moret-Bailly constructed families $\mathfrak{C}\rightarrow \mathbb{P}^1$
of genus-2 curves with supersingular Jacobians. In this paper, we first classify
the reducible fibers of a Moret-Bailly family using linear algebra over a
quaternion algebra. The main result is an algorithm that exploits properties of
two reducible fibers to compute a hyperelliptic model for any irreducible fiber
of a Moret-Bailly family.
|
This paper presents a method for gaze estimation from face images. We train several gaze estimators adopting four different network architectures, including an architecture designed for gaze estimation (i.e., iTracker-MHSA) and three originally designed for general computer vision tasks (i.e., BoTNet, HRNet, ResNeSt). Then, we select the best six estimators and ensemble their predictions through a linear combination. The method ranks first on the leaderboard of the ETH-XGaze Competition, achieving an average angular error of $3.11^{\circ}$ on the ETH-XGaze test set.
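As a hedged sketch of the ensembling step described above, the snippet below fits linear-combination weights for several gaze estimators by least squares on a validation split; the array shapes and the fitting criterion are illustrative assumptions, not the competition entry's exact procedure.

```python
import numpy as np

# Sketch of the ensembling step: fit linear-combination weights for
# several gaze estimators by least squares on a validation split.
# Shapes and the fitting criterion are illustrative assumptions.
def fit_ensemble_weights(preds, gt):
    """preds: (n_models, n_samples, 2) per-model (pitch, yaw) predictions;
    gt: (n_samples, 2) ground-truth gaze angles."""
    n_models = preds.shape[0]
    A = preds.reshape(n_models, -1).T            # (n_samples*2, n_models)
    b = gt.reshape(-1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)    # min_w ||A w - b||^2
    return w

def ensemble_predict(preds, w):
    return np.tensordot(w, preds, axes=1)        # weighted sum -> (n_samples, 2)
```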
|
In a separable Hilbert space $X$, we study the controlled evolution equation
\begin{equation*} u'(t)+Au(t)+p(t)Bu(t)=0, \end{equation*} where $A\geq-\sigma
I$ ($\sigma\geq0$) is a self-adjoint linear operator, $B$ is a bounded linear
operator on $X$, and $p\in L^2_{loc}(0,+\infty)$ is a bilinear control.
We give sufficient conditions in order for the above nonlinear control system
to be locally controllable to the $j$th eigensolution for any $j\geq1$. We also
derive semi-global controllability results in large time and discuss
applications to parabolic equations in low space dimension. Our method is
constructive and all the constants involved in the main results can be
explicitly computed.
|
In the cuprates, one-dimensional chain compounds provide a unique opportunity
to understand the microscopic physics due to the availability of reliable
theories. However, progress has been limited by the inability to controllably
dope these materials. Here, we report the synthesis and spectroscopic analysis
of the one-dimensional cuprate Ba$_{2-x}$Sr$_x$CuO$_{3+\delta}$ over a wide
range of hole doping. Our angle-resolved photoemission experiments reveal the
doping evolution of the holon and spinon branches. We identify a prominent
folding branch whose intensity fails to match predictions of the simple Hubbard
model. An additional strong near-neighbor attraction, which may arise from
coupling to phonons, quantitatively explains experiments for all accessible
doping levels. Considering structural and quantum chemistry similarities among
cuprates, this attraction will play a similarly crucial role in the high-$T_C$
superconducting counterparts.
|
Antonio Colla was a meteorologist and astronomer who made sunspot
observations at the Meteorological Observatory of the Parma University (Italy).
He carried out his sunspot records from 1830 to 1843, just after the Dalton
Minimum. We have recovered 71 observation days for this observer.
Unfortunately, many of these records are qualitative and we could only obtain
the number of sunspot groups and/or single sunspots from 25 observations.
However, we highlight the importance of these records because Colla is not included in the sunspot group database as an observer and, therefore, neither are his sunspot observations. According to the number of groups, the sunspot observations made by Colla are similar to those of several observers of his time. For common observation days, only Stark significantly recorded more groups than Colla. Moreover, we have calculated the sunspot areas and positions from Colla's sunspot drawings, concluding that both the areas and positions recorded by this observer seem unrealistic. Therefore, Colla's drawings should be interpreted as sketches that include reliable information on the number of groups, while the information on sunspot areas and positions should not be used for scientific purposes.
|
All current approaches for statically enforcing differential privacy in
higher order languages make use of either linear or relational refinement
types. A barrier to adoption for these approaches is the lack of support for
expressing these "fancy types" in mainstream programming languages. For
example, no mainstream language supports relational refinement types, and
although Rust and modern versions of Haskell both employ some linear typing
techniques, they are inadequate for embedding enforcement of differential
privacy, which requires "full" linear types a la Girard. We propose a new type
system that enforces differential privacy, avoids the use of linear and
relational refinement types, and can be easily embedded in mainstream richly
typed programming languages such as Scala, OCaml and Haskell. We demonstrate such an embedding in Haskell, illustrate its expressiveness on case studies, and prove that our type-based enforcement of differential privacy is sound.
|
We develop a multicomponent lattice Boltzmann (LB) model for the 2D
Rayleigh--Taylor turbulence with a Shan-Chen pseudopotential implemented on
GPUs. In the immiscible case this method is able to accurately overcome the
inherent numerical complexity caused by the complicated structure of the
interface that appears in the fully developed turbulent regime. Accuracy of the
LB model is tested both for early and late stages of instability. For the
developed turbulent motion we analyze the balance between different terms
describing variations of the kinetic and potential energies. Then, we analyze the role of the interface in the energy balance, as well as the effects of the vorticity induced by the interface on the energy dissipation. Statistical
properties are compared for miscible and immiscible flows. Our results can also
be considered as a first validation step to extend the application of LB model
to 3D immiscible Rayleigh-Taylor turbulence.
|
We present a geometrically exact nonlinear analysis of elastic in-plane beams
in the context of finite but small strain theory. The formulation utilizes the
full beam metric and obtains the complete analytic elastic constitutive model
by employing the exact relation between the reference and equidistant strains.
Thus, we account for the nonlinear strain distribution over the thickness of a
beam. In addition to the full analytical constitutive model, four simplified
ones are presented. Their comparison provides a thorough examination of the
influence of a beam's metric on the structural response. We show that the
appropriate formulation depends on the curviness of a beam at all
configurations. Furthermore, the nonlinear distribution of strain along the
thickness of strongly curved beams must be considered to obtain a complete and
accurate response.
|
Autoencoders as tools behind anomaly searches at the LHC have the structural
problem that they only work in one direction, extracting jets with higher
complexity but not the other way around. To address this, we derive classifiers
from the latent space of (variational) autoencoders, specifically in Gaussian
mixture and Dirichlet latent spaces. In particular, the Dirichlet setup solves
the problem and improves both the performance and the interpretability of the
networks.
|
We reconsider work of Elkalla on subnormal subgroups of 3-manifold groups,
giving essentially algebraic arguments that extend to the case of $PD_3$-groups
and group pairs. However, the argument relies on an $L^2$-Betti number
hypothesis which has not yet been shown to hold in general.
|
Crystals and other condensed matter systems described by density waves often
exhibit dislocations. Here we show, by considering the topology of the ground
state manifolds (GSMs) of such systems, that dislocations in the density phase
field always split into disclinations, and that the disclinations themselves
are constrained to sit at particular points in the GSM. Consequently, the
topology of the GSM forbids zero-energy dislocation glide, giving rise to a
Peierls-Nabarro barrier.
|
In this paper, we present a model for semantic memory that allows machines to
collect information and experiences to become more proficient with time. After semantic analysis of the sensory and other related data, the processed information is stored in a knowledge graph, which is then used to comprehend work instructions expressed in natural language. This imparts cognitive behavior to industrial robots, enabling them to execute the required tasks in a deterministic manner. The paper outlines the architecture of the system along with an implementation of the proposal.
|
Magnetic Resonance Fingerprinting (MRF) is a method to extract quantitative
tissue properties such as T1 and T2 relaxation rates from arbitrary pulse
sequences using conventional magnetic resonance imaging hardware. MRF pulse
sequences have thousands of tunable parameters which can be chosen to maximize
precision and minimize scan time. Here we perform de novo automated design of
MRF pulse sequences by applying physics-inspired optimization heuristics. Our
experimental data suggests systematic errors dominate over random errors in MRF
scans under clinically-relevant conditions of high undersampling. Thus, in
contrast to prior optimization efforts, which focused on statistical error
models, we use a cost function based on explicit first-principles simulation of
systematic errors arising from Fourier undersampling and phase variation. The
resulting pulse sequences display features qualitatively different from
previously used MRF pulse sequences and achieve fourfold shorter scan time than
prior human-designed sequences of equivalent precision in T1 and T2.
Furthermore, the optimization algorithm has discovered the existence of MRF
pulse sequences with intrinsic robustness against shading artifacts due to
phase variation.
|
The 5G mobile network brings several new features that can be applied to
existing and new applications. High reliability, low latency, and high data
rate are some of the features which fulfill the requirements of vehicular
networks. Vehicular networks aim to provide safety for road users and several
additional advantages such as enhanced traffic efficiency and in-vehicle
infotainment services. This paper summarizes the most important aspects of
NR-V2X, which is standardized by 3GPP, focusing on sidelink communication. The
main part of this work belongs to the 3GPP Rel-16, which is the first 3GPP
release for NR-V2X, and the work/study items of the future Rel-17.
|
Existing emotion-aware conversational models usually focus on controlling the
response contents to align with a specific emotion class, whereas empathy is the ability to understand and be concerned about the feelings and experiences of others.
Hence, it is critical to learn the causes that evoke the users' emotion for
empathetic responding, a.k.a. emotion causes. To gather emotion causes in
online environments, we leverage counseling strategies and develop an
empathetic chatbot to utilize the causal emotion information. On a real-world
online dataset, we verify the effectiveness of the proposed approach by
comparing our chatbot with several SOTA methods using automatic metrics,
expert-based human judgements as well as user-based online evaluation.
|
We investigate an M/M/1 queue operating in two switching environments, where
the switch is governed by a two-state time-homogeneous Markov chain. This model allows us to describe a system that is subject to regular operating phases
alternating with anomalous working phases or random repairing periods. We first
obtain the steady-state distribution of the process in terms of a generalized
mixture of two geometric distributions. In the special case when only one kind
of switch is allowed, we analyze the transient distribution, and investigate
the busy period problem. The analysis is also performed by means of a suitable
heavy-traffic approximation which leads to a continuous random process. Its
distribution satisfies a partial differential equation with randomly
alternating infinitesimal moments. For the approximating process we determine
the steady-state distribution, the transient distribution and a
first-passage-time density.
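As a minimal illustration of the kind of steady-state law described above, the sketch below evaluates a generalized mixture of two geometric distributions, where the mixing weight may lie outside $[0,1]$; the parameter values are placeholders, not the paper's expressions.

```python
import numpy as np

# Generalized mixture of two geometric pmfs: the weight c may fall
# outside [0, 1] as long as the result stays a valid pmf. The values
# of c, rho1, rho2 below are placeholders, not the paper's formulas.
def mixed_geometric_pmf(n, c, rho1, rho2):
    return c * (1 - rho1) * rho1**n + (1 - c) * (1 - rho2) * rho2**n

n = np.arange(200)
p = mixed_geometric_pmf(n, c=1.4, rho1=0.7, rho2=0.5)
assert np.all(p >= 0) and abs(p.sum() - 1.0) < 1e-9  # valid pmf
```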
|
Signomials are obtained by generalizing polynomials to allow for arbitrary
real exponents. This generalization offers great expressive power, but has
historically sacrificed the organizing principle of ``degree'' that is central
to polynomial optimization theory. We reclaim that principle here through the
concept of signomial rings, which we use to derive complete convex relaxation
hierarchies of upper and lower bounds for signomial optimization via sums of
arithmetic-geometric exponentials (SAGE) nonnegativity certificates. The
Positivstellensatz underlying the lower bounds relies on the concept of
conditional SAGE and removes regularity conditions required by earlier works,
such as convexity and Archimedeanity of the feasible set. Through worked
examples we illustrate the practicality of this hierarchy in areas such as
chemical reaction network theory and chemical engineering. These examples
include comparisons to direct global solvers (e.g., BARON and ANTIGONE) and the
Lasserre hierarchy (where appropriate). The completeness of our hierarchy of
upper bounds follows from a generic construction whereby a Positivstellensatz
for signomial nonnegativity over a compact set provides for arbitrarily strong
outer approximations of the corresponding cone of nonnegative signomials. While
working toward that result, we prove basic facts on the existence and
uniqueness of solutions to signomial moment problems.
|
For a random walk in a uniformly elliptic and i.i.d. environment on $\mathbb
Z^d$ with $d \geq 4$, we show that the quenched and annealed large deviations
rate functions agree on any compact set contained in the boundary $\partial
\mathbb{D}:=\{ x \in \mathbb R^d : |x|_1 =1\}$ of their domain which does not
intersect any of the $(d-2)$-dimensional facets of $\partial \mathbb{D}$,
provided that the disorder of the environment is~low~enough. As a consequence,
we obtain a simple explicit formula for both rate functions on $\partial
\mathbb{D}$ at low disorder. In contrast to previous works, our results do not
assume any ballistic behavior of the random walk and are not restricted to
neighborhoods of any given point (on the boundary $\partial \mathbb{D}$). In
addition, our~results complement those in [BMRS19], where, using different
methods, we investigate the equality of the rate functions in the interior of
their domain. Finally, for a general parametrized family of environments,
we~show that the strength of disorder determines a phase transition in the
equality of both rate functions, in the sense that for each $x \in \partial
\mathbb{D}$ there exists $\varepsilon_x$ such that the two rate functions agree
at $x$ when the disorder is smaller than $\varepsilon_x$ and disagree when it is larger. This further confirms the idea, introduced in [BMRS19], that the
disorder of the environment is in general intimately related with the equality
of the rate functions.
|
In this paper, we consider graphon particle systems with heterogeneous
mean-field type interactions and the associated finite particle approximations.
Under suitable growth (resp. convexity) assumptions, we obtain uniform-in-time
concentration estimates, over finite (resp. infinite) time horizon, for the
Wasserstein distance between the empirical measure and its limit, extending the
work of Bolley--Guillin--Villani.
|
It is well known that entanglement can benefit quantum information processing
tasks. Quantum illumination, when first proposed, was surprising because entanglement's benefit survives entanglement-breaking noise. Since then, many efforts have been devoted to studying quantum sensing in noisy scenarios. The
applicability of such schemes, however, is limited to a binary quantum
hypothesis testing scenario. In terms of target detection, such schemes
interrogate a single polarization-azimuth-elevation-range-Doppler resolution
bin at a time, limiting the impact to radar detection. We resolve this
binary-hypothesis limitation by proposing a quantum ranging protocol enhanced
by entanglement. By formulating a ranging task as a multiary hypothesis testing
problem, we show that entanglement enables a 6-dB advantage in the error
exponent against the optimal classical scheme. Moreover, the proposed ranging
protocol can also be utilized to implement a pulse-position modulated
entanglement-assisted communication protocol. Our ranging protocol reveals
entanglement's potential in general quantum hypothesis testing tasks and paves
the way towards a quantum-ranging radar with a provable quantum advantage.
|
In this paper, channel estimation techniques and phase shift design for
intelligent reflecting surface (IRS)-empowered single-user multiple-input
multiple-output (SU-MIMO) systems are proposed. Among four channel estimation
techniques developed in the paper, the two novel ones, single-path approximated
channel (SPAC) and selective emphasis on rank-one matrices (SEROM), have low
training overhead to enable practical IRS-empowered SU-MIMO systems. SPAC is
mainly based on parameter estimation by approximating IRS-related channels as
dominant single-path channels. SEROM exploits IRS phase shifts as well as
training signals for channel estimation and easily adjusts its training
overhead. A closed-form solution for IRS phase shift design is also developed
to maximize spectral efficiency where the solution only requires basic linear
operations. Numerical results show that SPAC and SEROM combined with the
proposed IRS phase shift design achieve high spectral efficiency even with low
training overhead compared to existing methods.
|
Our Galaxy and the nearby Andromeda galaxy (M31) are the most massive members
of the Local Group, and they seem to be a bound pair, despite the uncertainties
on the relative motion of the two galaxies. A number of studies have shown that
the two galaxies will likely undergo a close approach in the next 4$-$5 Gyr. We
used direct $N$-body simulations to model this interaction to shed light on the
future of the Milky Way - Andromeda system and for the first time explore the
fate of the two supermassive black holes (SMBHs) that are located at their
centers. We investigated how the uncertainties on the relative motion of the
two galaxies, linked with the initial velocities and the density of the diffuse
environment in which they move, affect the estimate of the time they need to
merge and form ``Milkomeda''. After the galaxy merger, we follow the evolution
of their two SMBHs up to their close pairing and fusion. With the fiducial set of parameters, we find that the Milky Way and Andromeda will have their closest
approach in the next 4.3 Gyr and merge over a span of 10 Gyr. Although the time
of the first encounter is consistent with other predictions, we find that the
merger occurs later than previously estimated. We also show that the two SMBHs
will spiral in the inner region of Milkomeda and coalesce in less than 16.6 Myr
after the merger of the two galaxies. Finally, we evaluate the
gravitational-wave emission caused by the inspiral of the SMBHs, and we discuss
the detectability of similar SMBH mergers in the nearby Universe ($z\leq 2$)
through next-generation gravitational-wave detectors.
|
With the emerging needs of creating fairness-aware solutions for search and
recommendation systems, a daunting challenge exists of evaluating such
solutions. While many of the traditional information retrieval (IR) metrics can
capture the relevance, diversity and novelty for the utility with respect to
users, they are not suitable for inferring whether the presented results are
fair from the perspective of responsible information exposure. On the other
hand, various fairness metrics have been proposed but they do not account for
the user utility or do not measure it adequately. To address this problem, we
propose a new metric called Fairness-Aware IR (FAIR). By unifying standard IR
metrics and fairness measures into an integrated metric, this metric offers a
new perspective for evaluating fairness-aware ranking results. Based on this metric, we develop an effective ranking algorithm that jointly optimizes user utility and fairness. The experimental results show that our FAIR metric can highlight results with good user utility and fair information exposure. We show how FAIR relates to existing metrics and demonstrate the effectiveness of our FAIR-based algorithm. We believe our work opens up a new direction of pursuing a computationally feasible metric for evaluating and implementing fairness-aware IR systems.
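A toy sketch of the unification idea follows: one hypothetical way to blend an nDCG-style utility score with an exposure-fairness penalty into a single number. The combination rule and weights are assumptions for illustration, not the paper's FAIR definition.

```python
import numpy as np

# Toy combination of a utility metric and a fairness measure into one
# score. The blend below (alpha * nDCG + (1 - alpha) * exposure term)
# is a hypothetical illustration, not the paper's FAIR definition.
def toy_fair_score(relevance, groups, target_share, alpha=0.5):
    """relevance: graded relevance of the ranked list (top first);
    groups: 0/1 protected-group membership; target_share: desired
    exposure share of group 1."""
    discount = 1.0 / np.log2(np.arange(2, len(relevance) + 2))
    dcg = (relevance * discount).sum()
    idcg = (np.sort(relevance)[::-1] * discount).sum() + 1e-12
    utility = dcg / idcg                          # nDCG-style utility
    exposure1 = (groups * discount).sum() / discount.sum()
    fairness = 1.0 - abs(exposure1 - target_share)
    return alpha * utility + (1 - alpha) * fairness

rel = np.array([3.0, 2.0, 3.0, 0.0, 1.0, 2.0])
grp = np.array([0, 1, 0, 1, 1, 0])
print(toy_fair_score(rel, grp, target_share=0.5))
```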
|
Real-Time Networks (RTNs) provide latency guarantees for time-critical applications and aim to support different traffic categories via various scheduling mechanisms. These scheduling mechanisms rely on precise network performance measurements to dynamically adjust the scheduling strategies. Machine Learning (ML) offers an iterative procedure to measure network performance, and Network Calculus (NC) can calculate bounds for the main performance indexes, such as latencies and throughputs, in an RTN for use by ML. Thus, integrating ML and NC improves overall calculation efficiency. This paper provides a survey of different approaches to Real-Time Network performance measurement via NC as well as ML, and presents their results, dependencies, and application scenarios.
|
Given a bipartite graph with bipartition $(A,B)$ where $B$ is equipartitioned
into $k\ge2$ blocks, can the vertices in $A$ be picked one by one so that at
every step, the picked vertices cover roughly the same number of vertices in
each of these blocks? We show that, if each block has cardinality $m$, the
vertices in $B$ have the same degree, and each vertex in $A$ has at most $cm$
neighbors in every block where $c>0$ is a small constant, then there is an
ordering $v_1,\ldots,v_n$ of the vertices in $A$ such that for every
$j\in\{1,\ldots,n\}$, the numbers of vertices with a neighbor in
$\{v_1,\ldots,v_j\}$ in every two blocks differ by at most $\sqrt{2(k-1)c}\cdot
m$. This is related to a well-known lemma of Steinitz, and partially answers an
unpublished question of Scott and Seymour.
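A naive greedy heuristic illustrating the goal of such an ordering (not the proof's construction, and with no worst-case guarantee) might look like the following sketch.

```python
# Naive greedy heuristic for the balanced ordering: repeatedly pick the
# unused vertex of A that minimizes the spread (max - min) of per-block
# cover counts. This only illustrates the goal; it is not the proof's
# construction and carries no worst-case guarantee.
def greedy_balanced_order(A, blocks, neighbors):
    """A: vertices to order; blocks: list of sets partitioning B;
    neighbors[v]: set of B-vertices adjacent to v."""
    covered, order, remaining = set(), [], set(A)
    while remaining:
        def spread(v):
            cov = covered | neighbors[v]
            counts = [len(cov & blk) for blk in blocks]
            return max(counts) - min(counts)
        v = min(remaining, key=spread)
        remaining.remove(v)
        covered |= neighbors[v]
        order.append(v)
    return order

blocks = [{'a', 'b'}, {'c', 'd'}]
neighbors = {0: {'a'}, 1: {'c'}, 2: {'b', 'd'}, 3: {'a', 'c'}}
print(greedy_balanced_order([0, 1, 2, 3], blocks, neighbors))
```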
|
Software verification may yield spurious failures when environment
assumptions are not accounted for. Environment assumptions are the expectations
that a system or a component makes about its operational environment and are
often specified in terms of conditions over the inputs of that system or
component. In this article, we propose an approach to automatically infer
environment assumptions for Cyber-Physical Systems (CPS). Our approach improves
the state-of-the-art in three different ways: First, we learn assumptions for
complex CPS models involving signal and numeric variables; second, the learned
assumptions include arithmetic expressions defined over multiple variables;
third, we identify the trade-off between soundness and informativeness of
environment assumptions and demonstrate the flexibility of our approach in
prioritizing either of these criteria.
We evaluate our approach using a public domain benchmark of CPS models from
Lockheed Martin and a component of a satellite control system from LuxSpace, a
satellite system provider. The results show that our approach outperforms
state-of-the-art techniques on learning assumptions for CPS models, and
further, when applied to our industrial CPS model, our approach is able to
learn assumptions that are sufficiently close to the assumptions manually
developed by engineers to be of practical value.
|
A fully discrete and fully explicit low-regularity integrator is constructed
for the one-dimensional periodic cubic nonlinear Schr\"odinger equation. The
method can be implemented by using fast Fourier transform with $O(N\ln N)$
operations at every time level, and is proved to have an $L^2$-norm error bound
of $O(\tau\sqrt{\ln(1/\tau)}+N^{-1})$ for $H^1$ initial data, without requiring
any CFL condition, where $\tau$ and $N$ denote the temporal stepsize and the
degrees of freedom in the spatial discretisation, respectively.
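For context, a generic Lie-splitting step for the periodic cubic NLS illustrates the $O(N\ln N)$ FFT cost per time level; this is a standard scheme for smooth data, not the fully explicit low-regularity integrator constructed in the paper.

```python
import numpy as np

# Generic Lie-splitting step for the periodic cubic NLS
#   i u_t + u_xx = |u|^2 u  on [0, 2*pi),
# shown only to illustrate the O(N ln N) FFT cost per time level.
# This is a standard scheme for smooth data, NOT the low-regularity
# integrator constructed in the paper.
def splitting_step(u, tau, k):
    u = np.fft.ifft(np.exp(-1j * tau * k**2) * np.fft.fft(u))  # linear flow
    return u * np.exp(-1j * tau * np.abs(u)**2)                # nonlinear flow

N = 256
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)     # integer wavenumbers
u = 1.0 / (2.0 + np.cos(x))          # smooth periodic initial datum
for _ in range(100):
    u = splitting_step(u, tau=1e-3, k=k)
```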
|
Isotropic hyper-elasticity, together with the equilibrium equation and the usual boundary conditions, is formulated directly on the body B, a three-dimensional compact and orientable manifold with boundary equipped with a mass measure. The Pearson-Sewell-Beatty pressure potential is formulated in an intrinsic geometric manner. It is shown that Poincar{\'e}'s formula, extended to infinite dimension, provides, in a straightforward manner, the optimal (non-holonomic) constraints for such a pressure potential to exist.
|
Rare-earth titanates are Mott insulators whose magnetic ground state --
antiferromagnetic (AFM) or ferromagnetic (FM) -- can be tuned by the radius of
the rare-earth element. Here, we combine phenomenology and first-principles
calculations to shed light on the generic magnetic phase diagram of a
chemically-substituted titanate on the rare-earth site that interpolates
between an AFM and a FM state. Octahedral rotations present in these
perovskites cause the AFM order to acquire a small FM component -- and
vice-versa -- removing any multi-critical point from the phase diagram.
However, for a wide parameter range, a first-order metamagnetic transition line
terminating at a critical end-point survives inside the magnetically ordered
phase. Similarly to the liquid-gas transition, a Widom line emerges from the
end-point, characterized by enhanced fluctuations. In contrast to metallic
ferromagnets, this metamagnetic transition involves two symmetry-equivalent and
insulating canted spin states. Moreover, instead of a magnetic field, we show
that uniaxial strain can be used to tune this transition to zero-temperature,
inducing a quantum critical end-point.
|
Machine learning models $-$ now commonly developed to screen, diagnose, or
predict health conditions $-$ are evaluated with a variety of performance
metrics. An important first step in assessing the practical utility of a model
is to evaluate its average performance over an entire population of interest.
In many settings, it is also critical that the model makes good predictions
within predefined subpopulations. For instance, showing that a model is fair or
equitable requires evaluating the model's performance in different demographic
subgroups. However, subpopulation performance metrics are typically computed
using only data from that subgroup, resulting in higher variance estimates for
smaller groups. We devise a procedure to measure subpopulation performance that
can be more sample-efficient than the typical subsample estimates. We propose
using an evaluation model $-$ a model that describes the conditional
distribution of the predictive model score $-$ to form model-based metric (MBM)
estimates. Our procedure incorporates model checking and validation, and we
propose a computationally efficient approximation of the traditional
nonparametric bootstrap to form confidence intervals. We evaluate MBMs on two
main tasks: a semi-synthetic setting where ground truth metrics are available
and a real-world hospital readmission prediction task. We find that MBMs
consistently produce more accurate and lower variance estimates of model
performance for small subpopulations.
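A simplified sketch of the model-based idea follows: borrow strength across groups by averaging a fitted evaluation model's predicted losses over a small subgroup instead of the raw subsample losses. The linear evaluation model and synthetic data are illustrative assumptions, not the authors' MBM procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Sketch: estimate a subgroup's mean loss either from the subsample
# alone (high variance for small groups) or from an "evaluation model"
# fit on the whole population and averaged over the subgroup. The
# linear model and synthetic data are illustrative assumptions; in
# practice the evaluation model must itself be checked and validated.
def subgroup_loss(X, loss, in_group):
    naive = loss[in_group].mean()                  # subsample estimate
    eval_model = LinearRegression().fit(X, loss)   # evaluation model
    model_based = eval_model.predict(X[in_group]).mean()
    return naive, model_based

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
loss = X @ np.array([0.5, -0.2, 0.1]) + 1.0 + rng.normal(scale=0.1, size=5000)
in_group = X[:, 0] > 1.5                           # a small subgroup
print(subgroup_loss(X, loss, in_group))
```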
|
We introduce a novel approach, the Cosmological Trajectories Method (CTM), to
model nonlinear structure formation in the Universe by expanding
gravitationally-induced particle trajectories around the Zel'dovich
approximation. A new Beyond Zel'dovich approximation is presented, which
expands the CTM to leading second-order in the gravitational interaction and
allows for post-Born gravitational scattering. In the Beyond Zel'dovich
approximation we derive the exact expression for the matter clustering power
spectrum. This is calculated to leading order and is available in the CTM
MODULE. We compare the Beyond Zel'dovich approximation power spectrum and
correlation function to other methods including 1-loop Standard Perturbation
Theory (SPT), 1-loop Lagrangian Perturbation Theory (LPT) and Convolution
Lagrangian Perturbation Theory (CLPT). We find that the Beyond Zel'dovich
approximation power spectrum performs well, matching simulations to within
$\pm{10}\%$, on mildly non-linear scales, and at redshifts above $z=1$ it
outperforms the Zel'dovich approximation. We also find that the Beyond
Zel'dovich approximation models the BAO peak in the correlation function at
$z=0$ more accurately, to within $\pm{5}\%$ of simulations, than the Zel'dovich
approximation, SPT 1-loop and CLPT.
|
In this study we analyze linear combinatorial optimization problems where the
cost vector is not known a priori, but is only observable through a finite data
set. In contrast to the related studies, we presume that the number of
observations with respect to particular components of the cost vector may vary.
The goal is to find a procedure that transforms the data set into an estimate
of the expected value of the objective function (which is referred to as a
prediction rule) and a procedure that retrieves a candidate decision (which is
referred to as a prescription rule). We aim at finding the least conservative
prediction and prescription rules, which satisfy some specified asymptotic
guarantees. We demonstrate that the resulting vector optimization problems
admit a weakly optimal solution, which can be obtained by solving a particular
distributionally robust optimization problem. Specifically, the decision-maker
may optimize the worst-case expected loss across all probability distributions
with given component-wise relative entropy distances from the empirical
marginal distributions. Finally, we perform numerical experiments to analyze
the out-of-sample performance of the proposed solution approach.
|
Word Sense Disambiguation (WSD) is a long-standing task in Natural Language Processing (NLP) that aims to automatically identify the most relevant meaning of a word in a given context. Developing standard WSD test collections is an important prerequisite for developing and evaluating WSD systems in the language of interest. Although many WSD test
collections have been developed for a variety of languages, no standard
All-words WSD benchmark is available for Persian. In this paper, we address
this shortage for the Persian language by introducing SBU-WSD-Corpus, as the
first standard test set for the Persian All-words WSD task. SBU-WSD-Corpus is
manually annotated with senses from the Persian WordNet (FarsNet) sense
inventory. To this end, three annotators used SAMP (a tool for sense annotation
based on FarsNet lexical graph) to perform the annotation task. SBU-WSD-Corpus
consists of 19 Persian documents in different domains such as Sports, Science,
Arts, etc. It includes 5892 content words of Persian running text and 3371
manually sense annotated words (2073 nouns, 566 verbs, 610 adjectives, and 122
adverbs). Providing baselines for future studies on the Persian All-words WSD
task, we evaluate several WSD models on SBU-WSD-Corpus. The corpus is publicly
available at https://github.com/hrouhizadeh/SBU-WSD-Corpus.
|
In this article, we calculate the density of primes $\mathfrak{p}$ for which
the $\mathfrak{p}$-th Fourier coefficient $C^*(\mathfrak{p}, f)$ (resp.,
$C(\mathfrak{p}, f)$) of a primitive Hilbert modular form $f$ generates the
coefficient field $F_f$ (resp., $E_f$), under certain conditions on the images
of $\lambda$-adic residual Galois representations attached to $f$. Then, we
produce some examples of primitive forms $f$ satisfying these conditions. Our
work is a generalization of \cite{KSW08} to primitive Hilbert modular forms.
|
Adding propositional quantification to the modal logics K, T or S4 is known
to lead to undecidability but CTL with propositional quantification under the
tree semantics (tQCTL) admits a non-elementary Tower-complete satisfiability
problem. We investigate the complexity of strict fragments of tQCTL as well as
of the modal logic K with propositional quantification under the tree
semantics. More specifically, we show that tQCTL restricted to the temporal
operator EX is already Tower-hard, which is unexpected as EX can only enforce
local properties. When tQCTL restricted to EX is interpreted on N-bounded trees
for some N >= 2, we prove that the satisfiability problem is AExpPol-complete;
AExpPol-hardness is established by reduction from a recently introduced tiling
problem, instrumental for studying the model-checking problem for interval
temporal logics. As consequences of our proof method, we prove Tower-hardness
of tQCTL restricted to EF or to EXEF and of the well-known modal logics such as
K, KD, GL, K4 and S4 with propositional quantification under a semantics based
on classes of trees.
|
We are interested in the nature of the spectrum of the one-dimensional
Schr\"odinger operator $$
- \frac{d^2}{dx^2}-Fx + \sum_{n \in \mathbb{Z}}g_n \delta(x-n)
\qquad\text{in } L^2(\mathbb{R}) $$ with $F>0$ and two different choices of
the coupling constants $\{g_n\}_{n\in \mathbb{Z}}$. In the first model $g_n
\equiv \lambda$ and we prove that if $F\in \pi^2 \mathbb{Q}$ then the spectrum
is $\mathbb{R}$ and is furthermore absolutely continuous away from an explicit
discrete set of points. In the second model $g_n$ are independent random
variables with mean zero and variance $\lambda^2$. Under certain assumptions on
the distribution of these random variables we prove that almost surely the
spectrum is $\mathbb{R}$ and it is dense pure point if $F < \lambda^2/2$ and
purely singular continuous if $F> \lambda^2/2$.
|
We report the discovery of a 'folded' gravitationally lensed image,
'Hamilton's Object', found in a HST image of the field near the AGN SDSS
J223010.47-081017.8 ($z=0.62$). The lensed images are sourced by a galaxy at a
spectroscopic redshift of 0.8200$\pm0.0005$ and form a fold configuration on a
caustic caused by a foreground galaxy cluster at a photometric redshift of
0.526$\pm0.018$ seen in the corresponding Pan-STARRS PS1 image and marginally
detected as a faint ROSAT All-Sky Survey X-ray source. The lensed images
exhibit properties similar to those of other folds where the source galaxy
falls very close to or straddles the caustic of a galaxy cluster. The folded
images are stretched in a direction roughly orthogonal to the critical curve,
but the configuration is that of a tangential cusp. Guided by morphological
features, published simulations and similar fold observations in the
literature, we identify a third or counter-image, confirmed by spectroscopy.
Because the fold-configuration shows highly distinctive surface brightness
features, follow-up observations of microlensing or detailed investigations of
the individual surface brightness features at higher resolution can further
shed light on kpc-scale dark matter properties. We determine the local lens
properties at the positions of the multiple images according to the
observation-based lens reconstruction of Wagner et al. (2019). The analysis is
in accordance with a mass density which hardly varies on an arc-second scale (6
kpc) over the areas covered by the multiple images.
|
Over-the-air computation (OAC) is a promising technique to realize fast model
aggregation in the uplink of federated edge learning. OAC, however, hinges on
accurate channel-gain precoding and strict synchronization among the edge
devices, which are challenging in practice. As such, how to design the maximum
likelihood (ML) estimator in the presence of residual channel-gain mismatch and
asynchronies is an open problem. To fill this gap, this paper formulates the
problem of misaligned OAC for federated edge learning and puts forth a whitened
matched filtering and sampling scheme to obtain oversampled, but independent,
samples from the misaligned and overlapped signals. Given the whitened samples,
a sum-product ML estimator and an aligned-sample estimator are devised to
estimate the arithmetic sum of the transmitted symbols. In particular, the
computational complexity of our sum-product ML estimator is linear in the
packet length and hence is significantly lower than the conventional ML
estimator. Extensive simulations on the test accuracy versus the average
received energy per symbol to noise power spectral density ratio (EsN0) yield
two main results: 1) In the low EsN0 regime, the aligned-sample estimator can
achieve superior test accuracy provided that the phase misalignment is
non-severe. In contrast, the ML estimator does not work well due to the error
propagation and noise enhancement in the estimation process. 2) In the high
EsN0 regime, the ML estimator attains the optimal learning performance
regardless of the severity of phase misalignment. On the other hand, the
aligned-sample estimator suffers from a test-accuracy loss caused by phase
misalignment.
|
We investigate the properties of the glass phase of a recently introduced
spin glass model of soft spins subjected to an anharmonic quartic local
potential, which serves as a model of low temperature molecular or soft
glasses. We solve the model using mean field theory and show that, at low
temperatures, it is described by full replica symmetry breaking (fullRSB). As a
consequence, at zero temperature the glass phase is marginally stable. We show
that in this case, marginal stability comes from a combination of both soft
linear excitations -- appearing in a gapless spectrum of the Hessian of linear
excitations -- and pseudogapped non-linear excitations -- corresponding to
nearly degenerate two level systems. Therefore, this model is a natural
candidate to describe what happens in soft glasses, where quasi localized soft
modes in the density of states appear together with non-linear modes triggering
avalanches and conjectured to be essential to describe the universal
low-temperature anomalies of glasses.
|
For a rooted cluster algebra $\mathcal{A}(Q)$ over a valued quiver $Q$, a
\emph{symmetric cluster variable} is any cluster variable that belongs to a
cluster associated with a quiver $\sigma (Q)$, for some permutation $\sigma$.
The subalgebra of $\mathcal{A}(Q)$ generated by all symmetric cluster variables
is called the \emph{symmetric mutation subalgebra} and is denoted by
$\mathcal{B}(Q)$. In this paper, we identify the class of cluster algebras that
satisfy $\mathcal{B}(Q)=\mathcal{A}(Q)$, which contains almost every quiver of
finite mutation type. In the process of proving the main theorem, we provide a classification of quiver mutation classes based on their weights. Some
properties of symmetric mutation subalgebras are given.
|
We investigate the State-Controlled Cellular Neural Network (SC-CNN)
framework of Murali-Lakshmanan-Chua (MLC) circuit system subjected to two
logical signals. By exploiting the attractors generated by this circuit in
different regions of phase-space, we show that the nonlinear circuit is capable
of producing all the logic gates, namely OR, AND, NOR, NAND, Ex-OR and Ex-NOR
gates available in digital systems. Further the circuit system emulates
three-input gates and Set-Reset flip-flop logic as well. Moreover, all these
logical elements and flip-flop are found to be tolerant to noise. These
phenomena are also experimentally demonstrated. Thus, our investigation into realizing all logic gates and a memory latch in a nonlinear circuit system paves the way to replace or complement existing technology with a limited amount of hardware.
|
Convolutional neural networks (CNNs) are able to attain better visual
recognition performance than fully connected neural networks despite having far fewer parameters, owing to their parameter-sharing principle. Hence, modern
architectures are designed to contain a very small number of fully-connected
layers, often at the end, after multiple layers of convolutions. It is
interesting to observe that we can replace large fully-connected layers with
relatively small groups of tiny matrices applied on the entire image. Moreover,
although this strategy already reduces the number of parameters, most of the
convolutions can be eliminated as well, without suffering any loss in
recognition performance. However, there is no solid recipe to detect this
hidden subset of convolutional neurons that is responsible for the majority of
the recognition work. Hence, in this work, we use the matrix characteristics
based on eigenvalues in addition to the classical weight-based importance
assignment approach for pruning to shed light on the internal mechanisms of a
widely used family of CNNs, namely residual neural networks (ResNets), for the
image classification problem using CIFAR-10, CIFAR-100 and Tiny ImageNet
datasets.
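One plausible eigenvalue-based importance score in the spirit described above is sketched below, under the assumption that a layer can be summarized by the spectrum of its unrolled kernel matrix; this is an illustration, not necessarily the paper's exact criterion.

```python
import numpy as np

# One plausible eigenvalue-based importance score for a convolutional
# layer (a sketch, not necessarily the paper's exact criterion): unroll
# the kernel tensor into a matrix W and examine the spectrum of W W^T.
# A layer whose spectrum is dominated by a few large eigenvalues does
# low-rank work and is a natural pruning candidate.
def spectral_effective_rank(kernel):
    """kernel: (out_ch, in_ch, kh, kw) weight tensor as a numpy array."""
    W = kernel.reshape(kernel.shape[0], -1)     # (out_ch, in_ch*kh*kw)
    eig = np.clip(np.linalg.eigvalsh(W @ W.T), 0.0, None)
    p = eig / eig.sum()
    # effective rank = exp(spectral entropy); small => prunable layer
    return float(np.exp(-(p * np.log(p + 1e-12)).sum()))

kernel = np.random.randn(64, 32, 3, 3)
print(spectral_effective_rank(kernel))  # near full rank for random weights
```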
|
Access to informative databases is a crucial part of notable research
developments. In the field of domestic audio classification, there have been
significant advances in recent years. Although several audio databases exist,
these can be limited in terms of the amount of information they provide, such
as the exact location of the sound sources, and the associated noise levels. In
this work, we detail our approach to generating an unbiased synthetic domestic audio database, consisting of sound scenes and events, emulated in both quiet and noisy environments. Data is carefully curated such that it reflects issues commonly faced in a dementia patient's environment, and recreates scenarios that could occur in real-world settings. Similarly, the generated room impulse response is based on a typical one-bedroom apartment at the Hebrew SeniorLife Facility. As a result, we present an 11-class database containing excerpts of clean and noisy signals of 5 seconds duration each, uniformly sampled at 16 kHz. Our baseline model, using Continuous Wavelet Transform scalograms and AlexNet, yielded a weighted F1-score of 86.24 percent.
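A minimal sketch of the scalogram front end follows, assuming the PyWavelets library and an illustrative Morlet wavelet with placeholder scales (the paper's exact preprocessing may differ).

```python
import numpy as np
import pywt  # PyWavelets

# Sketch of the scalogram front end: a Continuous Wavelet Transform of
# a 5-second, 16 kHz excerpt whose magnitude image could be fed to an
# AlexNet-style classifier. The Morlet wavelet and the scale range are
# placeholder choices, not necessarily the paper's exact setup.
fs = 16_000
t = np.arange(5 * fs) / fs
clip = np.sin(2 * np.pi * 440 * t)       # placeholder audio excerpt

scales = np.arange(1, 128)
coef, freqs = pywt.cwt(clip, scales, 'morl', sampling_period=1 / fs)
scalogram = np.abs(coef)                 # (127, 80000) time-scale image
```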
|
Deuterium diffusion is investigated in nitrogen-doped homoepitaxial ZnO
layers. The samples were grown under slightly Zn-rich growth conditions by
plasma-assisted molecular beam epitaxy on m-plane ZnO substrates and have a nitrogen content [N] varied up to $5\times10^{18}$ at.cm$^{-3}$ as measured by secondary ion mass spectrometry (SIMS). All were exposed to a radio-frequency deuterium plasma for 1 h at room temperature. Deuterium diffusion is observed in all epilayers, while its penetration depth decreases as the nitrogen concentration increases. This is strong evidence of a diffusion mechanism limited by the
trapping of deuterium on a nitrogen-related trap. The SIMS profiles are
analyzed using a two-trap model including a shallow trap, associated with a
fast diffusion, and a deep trap, related to nitrogen. The capture radius of the
nitrogen-related trap is determined to be 20 times smaller than the value expected for nitrogen-deuterium pairs formed by Coulombic attraction between D$^+$ and nitrogen-related acceptors. The (N$_2$)$_{\mathrm{O}}$ deep donor is proposed as the deep trapping site for deuterium and accounts well for the small capture radius and
the observed photoluminescence quenching and recovery after deuteration of the
ZnO:N epilayers. It is also found that this defect is by far the N-related
defect with the highest concentration in the studied samples.
|
This paper considers a pursuit-evasion scenario among three agents -- an
evader, a pursuer, and a defender. We design cooperative guidance laws for the
evader and the defender team to safeguard the evader from an attacking pursuer.
Unlike differential games, optimal control formulations, and other heuristic
methods, we propose a novel perspective on designing effective nonlinear
feedback control laws for the evader-defender team using a time-constrained
guidance approach. The evader lures the pursuer on the collision course by
offering itself as bait. At the same time, the defender protects the evader
from the pursuer by exercising control over the engagement duration. Depending
on the nature of the mission, the defender may choose to take an aggressive or
defensive stance. Such consideration widens the applicability of the proposed
methods in various three-agent motion planning scenarios such as aircraft
defense, asset guarding, search and rescue, surveillance, and secure
transportation. We use a fixed-time sliding mode control strategy to design the
control laws for the evader-defender team and a nonlinear finite-time
disturbance observer to estimate the pursuer's maneuver. Finally, we present
simulations to demonstrate favorable performance under various engagement
geometries, thus vindicating the efficacy of the proposed designs.
|
We present a class of diffraction-free partially coherent beams each member
of which is comprised of a finite power, non-accelerating Airy bump residing on
a statistically homogeneous, Gaussian-correlated background. We examine
free-space propagation of soft apertured realizations of the proposed beams and
show that their evolution is governed by two spatial scales: the coherence
width of the background and the aperture size. The relative magnitude of these factors determines the practical range of propagation distances over which the novel beams can withstand diffraction. The proposed beams can find applications in imaging and optical communications through random media.
|
Unsupervised person re-identification (Re-ID) aims to match pedestrian images
from different camera views in an unsupervised setting. Existing methods for
unsupervised person Re-ID are usually built upon the pseudo labels from
clustering. However, the quality of clustering depends heavily on the quality
of the learned features, which are overwhelmingly dominated by the colors in
images especially in the unsupervised setting. In this paper, we propose a
Cluster-guided Asymmetric Contrastive Learning (CACL) approach for unsupervised
person Re-ID, in which cluster structure is leveraged to guide the feature
learning in a properly designed asymmetric contrastive learning framework. To
be specific, we propose a novel cluster-level contrastive loss to help the siamese network effectively mine the invariance in feature learning with respect to the cluster structure, both within and between different data augmentation views. Extensive experiments conducted on three benchmark
datasets demonstrate superior performance of our proposal.
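As a generic sketch of cluster-level contrastive supervision (an InfoNCE-style prototype loss, not the exact CACL objective), each embedding can be pulled toward its own cluster centroid and pushed away from the others:

```python
import torch
import torch.nn.functional as F

# Generic cluster-prototype contrastive loss: an InfoNCE-style sketch
# of cluster-level supervision, not the exact CACL objective. Each
# embedding is pulled toward its own cluster centroid and pushed away
# from the other centroids.
def cluster_contrastive_loss(z, cluster_ids, tau=0.1):
    """z: (n, d) L2-normalized embeddings; cluster_ids: (n,) pseudo labels."""
    n_clusters = int(cluster_ids.max().item()) + 1
    centroids = torch.stack([
        F.normalize(z[cluster_ids == c].mean(dim=0), dim=0)
        for c in range(n_clusters)
    ])                                   # (n_clusters, d)
    logits = z @ centroids.T / tau       # scaled cosine similarities
    return F.cross_entropy(logits, cluster_ids)

z = F.normalize(torch.randn(256, 128), dim=1)
ids = torch.arange(256) % 8              # placeholder pseudo labels
print(cluster_contrastive_loss(z, ids))
```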
|
The power conserving interconnection of port-thermodynamic systems via their
power ports results in another port-thermodynamic system, while the same holds
for the rate of entropy increasing interconnection via their entropy flow
ports. Control by interconnection of port-thermodynamic systems seeks to
control a plant port-thermodynamic system by the interconnection with a
controller port-thermodynamic system. The stability of the interconnected
port-thermodynamic system is investigated by Lyapunov functions based on
generating functions for the submanifold characterizing the state properties as
well as additional conserved quantities. A crucial tool is the use of canonical point transformations on the symplectized thermodynamic phase space.
|
Graph neural networks (GNN) have been proven to be mature enough for handling
graph-structured data on node-level graph representation learning tasks.
However, the graph pooling technique for learning expressive graph-level
representation is critical yet still challenging. Existing pooling methods
either struggle to capture the local substructure or fail to effectively
utilize high-order dependency, thus diminishing the expression capability. In this paper, we propose HAP, a hierarchical graph-level representation learning framework that is adaptively sensitive to graph structures, i.e., HAP clusters local substructures while incorporating high-order dependencies. HAP utilizes a novel cross-level attention mechanism MOA to naturally focus more on the close neighborhood while effectively capturing the higher-order dependencies that may contain crucial information. It also learns a global graph content GCont that extracts the graph pattern properties so that the pre- and post-coarsening graph content remains stable, thus providing global guidance in graph coarsening. This innovation also facilitates generalization across graphs with the same form of features. Extensive experiments on fourteen datasets show that HAP significantly outperforms twelve popular graph pooling methods on the graph classification task with a maximum accuracy improvement of 22.79%, and exceeds the performance of state-of-the-art graph matching and graph similarity learning algorithms by over 3.5% and 16.7%, respectively.
|
We demonstrate the potential of dopamine-modified 0.5(Ba$_{0.7}$Ca$_{0.3}$)TiO$_3$-0.5Ba(Zr$_{0.2}$Ti$_{0.8}$)O$_3$ filler incorporated poly-vinylidene fluoride (PVDF) composites, prepared by the solution cast method, as both flexible energy storage and harvesting devices. The introduction of dopamine for filler surface functionalization acts as a bridging element between the filler and the polymer matrix, resulting in better filler dispersion and an improved dielectric loss tangent (<0.02), along with a dielectric permittivity ranging from 9 to 34, which is favorable for both energy harvesting and storage. Additionally, a significantly low DC conductivity ($<10^{-9}$ ohm$^{-1}$cm$^{-1}$) was achieved for all composites, leading to improved breakdown strength and charge accumulation capability. A maximum breakdown strength of 134 kV/mm and a corresponding energy storage density of 0.72 J/cm$^3$ were obtained for a filler content of 10 wt%. The improved energy harvesting performance was characterized by a maximum piezoelectric charge constant (d$_{33}$) of 78 pC/N and an output voltage (V$_{\mathrm{out}}$) of 0.84 V, along with a maximum power density of 3.46 $\mu$W/cm$^3$, for a filler content of 10 wt%. Thus, the results show that the 0.5(Ba$_{0.7}$Ca$_{0.3}$)TiO$_3$-0.5Ba(Zr$_{0.2}$Ti$_{0.8}$)O$_3$/PVDF composite has the potential for energy storage and harvesting applications simultaneously, which can significantly suppress the excess energy loss that arises when utilizing different materials.
|
The conformal module of conjugacy classes of braids is an invariant that
appeared earlier than the entropy of conjugacy classes of braids, and is inversely proportional to the entropy. Using the relation between the two invariants, we give a short conceptual proof of an earlier result on the conformal module. Mainly, we consider situations when the conformal module of
conjugacy classes of braids serves as obstruction for the existence of
homotopies (or isotopies) of smooth objects involving braids to the respective
holomorphic objects, and present theorems on the restricted validity of
Gromov's Oka principle in these situations.
|
We study the asymptotic properties of the stochastic Cahn-Hilliard equation
with the logarithmic free energy by establishing different dimension-free
Harnack inequalities according to various kinds of noises. The main
characteristics of this equation are the singularities of the logarithmic free energy at $1$ and $-1$ and the conservation of the mass of the solution in its
spatial variable. Both the space-time colored noise and the space-time white
noise are considered. For the highly degenerate space-time colored noise, the
asymptotic log-Harnack inequality is established under the so-called essentially elliptic conditions, and the Harnack inequality with power is established for the non-degenerate space-time white noise.
|
In this article, for positive integers $n\geq m\geq 1$, the parameter spaces
for the isomorphism classes of the generic point arrangements of cardinality
$n$, and the antipodal point arrangements of cardinality $2n$ in the Euclidean space $\mathbb{R}^m$ are described using the totally nonzero Grassmannian $Gr^{tnz}_{mn}(\mathbb{R})$. A stratification
$\mathcal{S}^{tnz}_{mn}(\mathbb{R})$ of the totally nonzero Grassmannian
$Gr^{tnz}_{mn}(\mathbb{R})$ is mentioned and the parameter spaces are
respectively expressed as quotients of the space
$\mathcal{S}^{tnz}_{mn}(\mathbb{R})$ of strata under suitable actions of the
symmetric group $S_n$ and the semidirect product group $(\mathbb{R}^*)^n\rtimes
S_n$. The cardinalities of the space $\mathcal{S}^{tnz}_{mn}(\mathbb{R})$ of
strata and of the parameter spaces $S_n\backslash
\mathcal{S}^{tnz}_{mn}(\mathbb{R}), ((\mathbb{R}^*)^n\rtimes S_n)\backslash
\mathcal{S}^{tnz}_{mn}(\mathbb{R})$ are enumerated in dimension $m=2$.
Interestingly enough, the enumerated value of the isomorphism classes of the
generic point arrangements in the Euclidean plane is expressed in terms of the
number theoretic Euler-totient function. The analogous enumeration questions
are still open in higher dimensions for $m\geq 3$.
|
There is growing interest in hydrogen (H$_2$) use for long-duration energy
storage in a future electric grid dominated by variable renewable energy (VRE)
resources. Modelling the role of H$_2$ as grid-scale energy storage, often
referred as "power-to-gas-to-power (P2G2P)" overlooks the cost-sharing and
emission benefits from using the deployed H$_2$ production and storage assets
to also supply H$_2$ for decarbonizing other end-use sectors where direct
electrification may be challenged. Here, we develop a generalized modelling
framework for co-optimizing energy infrastructure investment and operation
across power and transportation sectors and the supply chains of electricity
and H$_2$, while accounting for spatio-temporal variations in energy demand and
supply. Applying this sector-coupling framework to the U.S. Northeast under a
range of technology cost and carbon price scenarios, we find a greater value of
power-to-H$_2$ (P2G) versus P2G2P routes. P2G provides flexible demand
response, while the extra cost and efficiency penalties of P2G2P routes make
the solution less attractive for grid balancing. The effects of sector-coupling
are significant, boosting VRE generation by 12-55% with both increased
capacities and reduced curtailments and reducing the total system cost (or
levelized costs of energy) by 6-14% under 96% decarbonization scenarios. Both
the cost savings and emission reductions from sector coupling increase with
H$_2$ demand for other end-uses, more than doubling for a 96% decarbonization scenario as H$_2$ demand quadruples. Moreover, we find that the deployment of carbon capture and storage is more cost-effective in the H$_2$ sector because of the lower cost and higher utilization rate. These findings highlight the
importance of using an integrated multi-sector energy system framework with
multiple energy vectors in planning energy system decarbonization pathways.
|
Massive multiple-input multiple-output (MIMO) is a key technology for
improving the spectral and energy efficiency in 5G-and-beyond wireless
networks. For a tractable analysis, most of the previous works on Massive MIMO
have been focused on the system performance with complex Gaussian channel
impulse responses under rich-scattering environments. In contrast, this paper
investigates the uplink ergodic spectral efficiency (SE) of each user under the
double scattering channel model. We derive a closed-form expression of the
uplink ergodic SE by exploiting the maximum ratio (MR) combining technique
based on imperfect channel state information. We further study the asymptotic
SE behaviors as a function of the number of antennas at each base station (BS)
and the number of scatterers available at each radio channel. We then formulate
and solve a total energy optimization problem for the uplink data transmission
that aims at simultaneously satisfying the required SEs from all the users with
limited data power resources. Notably, our proposed algorithms can cope with the congestion issue that appears when at least one user is served with a lower SE than requested. Numerical results illustrate the effectiveness of the closed-form
ergodic SE over Monte-Carlo simulations. Besides, the system can still provide
the required SEs to many users even under congestion.
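For intuition, a Monte-Carlo estimate of an uplink ergodic SE of the generic form $\mathrm{E}[\log_2(1+\mathrm{SINR})]$ with MR combining can be sketched for a toy i.i.d. Rayleigh setup; this is not the paper's double scattering model or its closed-form expression.

```python
import numpy as np

# Monte-Carlo sketch of an uplink ergodic SE, E[log2(1 + SINR)], with
# maximum ratio (MR) combining in a toy single-cell setup with i.i.d.
# Rayleigh channels and perfect CSI. This is a generic illustration,
# not the paper's double scattering model or its closed-form SE.
rng = np.random.default_rng(1)
M, K, snr = 64, 4, 1.0          # BS antennas, users, per-user SNR
trials, se = 2000, 0.0
for _ in range(trials):
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    v = H[:, 0]                  # MR combining vector for user 0
    sig = snr * np.abs(v.conj() @ H[:, 0]) ** 2
    interf = snr * sum(np.abs(v.conj() @ H[:, k]) ** 2 for k in range(1, K))
    noise = np.linalg.norm(v) ** 2
    se += np.log2(1 + sig / (interf + noise))
print(se / trials)               # ergodic SE estimate in bit/s/Hz
```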
|
High-order implicit shock tracking is a new class of numerical methods to
approximate solutions of conservation laws with non-smooth features. These
methods align elements of the computational mesh with non-smooth features to
represent them perfectly, allowing high-order basis functions to approximate
smooth regions of the solution without the need for nonlinear stabilization,
which leads to accurate approximations on traditionally coarse meshes. The
hallmark of these methods is the underlying optimization formulation whose
solution is a feature-aligned mesh and the corresponding high-order
approximation to the flow; the key challenge is robustly solving the central
optimization problem. In this work, we develop a robust optimization solver for
high-order implicit shock tracking methods so they can be reliably used to
simulate complex, high-speed, compressible flows in multiple dimensions. The
proposed method integrates practical robustness measures into a sequential
quadratic programming method, including dimension- and order-independent
simplex element collapses, mesh smoothing, and element-wise solution
re-initialization, which prove to be necessary to reliably track complex
discontinuity surfaces, such as curved and reflecting shocks, shock formation,
and shock-shock interaction. A series of nine numerical experiments --
including two- and three-dimensional compressible flows with complex
discontinuity surfaces -- are used to demonstrate: 1) the robustness of the
solver, 2) that the meshes produced are high-quality and track continuous,
non-smooth features in addition to discontinuities, 3) that the method achieves
the optimal convergence rate of the underlying discretization even for flows
containing discontinuities, and 4) that the method produces highly accurate
solutions on extremely coarse meshes relative to approaches based on shock
capturing.
|
Ben Reichardt showed in a series of results that the general adversary bound
of a function characterizes its quantum query complexity. This survey seeks to
aggregate the background and definitions necessary to understand the proof.
Notable among these are the lower bound proof, span programs, witness size, and
semi-definite programs. These definitions, in addition to examples and detailed
expositions, serve to give the reader a better intuition of the graph-theoretic
nature of the upper bound. We also include an application of this result to
lower bounds on De Morgan formula size.
|
In video object tracking, there exist rich temporal contexts among successive
frames, which have been largely overlooked in existing trackers. In this work,
we bridge the individual video frames and explore the temporal contexts across
them via a transformer architecture for robust object tracking. Different from
classic usage of the transformer in natural language processing tasks, we
separate its encoder and decoder into two parallel branches and carefully
design them within the Siamese-like tracking pipelines. The transformer encoder
promotes the target templates via attention-based feature reinforcement, which
benefits high-quality tracking model generation. The transformer decoder
propagates the tracking cues from previous templates to the current frame,
which facilitates the object searching process. Our transformer-assisted
tracking framework is neat and trained in an end-to-end manner. With the
proposed transformer, a simple Siamese matching approach is able to outperform
the current top-performing trackers. By combining our transformer with the
recent discriminative tracking pipeline, our method sets several new
state-of-the-art records on prevalent tracking benchmarks.
|
Thoracic disease detection from chest radiographs using deep learning methods
has been an active area of research in the last decade. Most previous methods
attempt to focus on the diseased organs of the image by identifying spatial
regions responsible for significant contributions to the model's prediction. In
contrast, expert radiologists first locate the prominent anatomical structures
before determining if those regions are anomalous. Therefore, integrating
anatomical knowledge within deep learning models could bring substantial
improvement in automatic disease classification. This work proposes an
anatomy-aware attention-based architecture named Anatomy X-Net, which
prioritizes the spatial features guided by the pre-identified anatomy regions.
We leverage a semi-supervised learning method using the JSRT dataset containing
organ-level annotation to obtain the anatomical segmentation masks (for lungs
and heart) for the NIH and CheXpert datasets. The proposed Anatomy X-Net uses
the pre-trained DenseNet-121 as the backbone network with two corresponding
structured modules, the Anatomy Aware Attention (AAA) and Probabilistic
Weighted Average Pooling (PWAP), in a cohesive framework for anatomical
attention learning. Our proposed method sets new state-of-the-art performance
on the official NIH test set with an AUC score of 0.8439, proving the efficacy
of utilizing the anatomy segmentation knowledge to improve the thoracic disease
classification. Furthermore, the Anatomy X-Net yields an averaged AUC of 0.9020
on the Stanford CheXpert dataset, improving on existing methods and
demonstrating the generalizability of the proposed framework.
|
Neural architecture search (NAS) is a recent methodology for automating the
design of neural network architectures. Differentiable neural architecture
search (DARTS) is a promising NAS approach that dramatically increases search
efficiency. However, it has been shown to suffer from performance collapse,
where the search often leads to detrimental architectures. Many recent works
try to address this issue of DARTS by identifying indicators for early
stopping, regularising the search objective to reduce the dominance of some
operations, or changing the parameterisation of the search problem. In this
work, we hypothesise that performance collapses can arise from poor local
optima around typical initial architectures and weights. We address this issue
by developing a more global optimisation scheme that is able to better explore
the space without changing the DARTS problem formulation. Our experiments show
that our changes in the search algorithm allow the discovery of architectures
with both better test performance and fewer parameters.
|
[Zhang, ICML 2018] provided the first decentralized actor-critic algorithm
for multi-agent reinforcement learning (MARL) that offers convergence
guarantees. In that work, policies are stochastic and are defined on finite
action spaces. We extend those results to offer a provably-convergent
decentralized actor-critic algorithm for learning deterministic policies on
continuous action spaces. Deterministic policies are important in real-world
settings. To handle the lack of exploration inherent in deterministic policies,
we consider both off-policy and on-policy settings. We provide the expression
of a local deterministic policy gradient, decentralized deterministic
actor-critic algorithms and convergence guarantees for linearly-approximated
value functions. This work will help enable decentralized MARL in
high-dimensional action spaces and pave the way for more widespread use of
MARL.
|
The Force Concept Inventory (FCI) can be used as an assessment tool to
measure the gains in a cohort of students. In this study it was given pre- and
post-instruction to first-year mechanics students (N=256) at the University of
Johannesburg. From these results we examine the
effect of switching mid-semester from traditional classes to online classes, as
imposed by the COVID-19 lockdown in South Africa. Overall gains and student
perspectives indicate no appreciable difference of gain, when bench-marked
against previous studies using this assessment tool. When compared with 2019
grades, the 2020 semester grades do not appear to be greatly affected.
Furthermore, initial statistical analyses also indicate a gender difference in
mean gains in favour of females at the 95% significance level (for paired data,
N=48). A survey of the students also indicated that most were aware of their
conceptual performance in physics, and that the main constraint on their
studies was the difficulty associated with being online. As such, the change in
pedagogy and the stresses of lockdown do not appear to have depressed FCI gains
or grades.
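As a concrete illustration of the gain metric, the sketch below computes the
normalized (Hake) gain commonly used with FCI data and compares mean gains
between two sub-cohorts; the study's exact gain formula and the synthetic
scores are assumptions for illustration only.

```python
# Hedged sketch: normalized (Hake) gain per student and a two-sample test
# on mean gains; scores are hypothetical percentages, not the study's data.
import numpy as np
from scipy import stats

def normalized_gain(pre, post):
    """g = (post - pre) / (100 - pre), computed per student."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (post - pre) / (100.0 - pre)

rng = np.random.default_rng(0)
pre_f, post_f = rng.uniform(20, 60, 48), rng.uniform(30, 75, 48)  # females
pre_m, post_m = rng.uniform(20, 60, 48), rng.uniform(25, 70, 48)  # males

g_f, g_m = normalized_gain(pre_f, post_f), normalized_gain(pre_m, post_m)
t, p = stats.ttest_ind(g_f, g_m, equal_var=False)  # 95% level: p < 0.05
print(f"mean gain (F) = {g_f.mean():.3f}, (M) = {g_m.mean():.3f}, p = {p:.3f}")
```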
|
A Neural Machine Translation model is a sequence-to-sequence converter based on
neural networks. Existing models use recurrent neural networks to construct
both the encoder and decoder modules. In alternative research, the recurrent
networks were replaced by convolutional neural networks to capture the
syntactic structure of the input sentence and decrease the processing time.
We combine the strengths of both approaches by proposing a
convolutional-recurrent encoder that captures the context information as well
as the sequential information from the source sentence. Word embedding and
position embedding of the source sentence are performed prior to the
convolutional encoding layer, which is essentially an n-gram feature extractor
capturing phrase-level context information. The rectified output of the
convolutional encoding layer is added to the original embedding vector, and the
sum is normalized by layer normalization. The normalized output is given as a
sequential input to the recurrent encoding layer that captures the temporal
information in the sequence. For the decoder, we use the attention-based
recurrent neural network. A translation task on the German-English dataset
verifies the efficacy of the proposed approach, which achieves higher BLEU
scores than the state of the art.
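For concreteness, a minimal PyTorch sketch of the encoder just described is
given below; the hidden sizes, kernel width, and the choice of a GRU for the
recurrent layer are illustrative assumptions, not the paper's exact
configuration.

```python
# Minimal sketch of the convolutional-recurrent encoder (assumed sizes).
import torch
import torch.nn as nn

class ConvRecurrentEncoder(nn.Module):
    def __init__(self, vocab_size, d_model=256, kernel_size=3, max_len=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        # The 1-D convolution acts as an n-gram (here 3-gram) feature extractor.
        self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                              padding=kernel_size // 2)
        self.norm = nn.LayerNorm(d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, tokens):                       # tokens: (batch, seq)
        pos = torch.arange(tokens.size(1), device=tokens.device)
        x = self.word_emb(tokens) + self.pos_emb(pos)          # embeddings
        c = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h = self.norm(x + c)        # residual add, then layer normalization
        out, _ = self.rnn(h)        # sequential (temporal) information
        return out                  # consumed by an attention-based decoder

enc = ConvRecurrentEncoder(vocab_size=10000)
print(enc(torch.randint(0, 10000, (2, 20))).shape)  # torch.Size([2, 20, 256])
```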
|
A sample of 1.3 mm continuum cores in the Dragon infrared dark cloud (also
known as G28.37+0.07 or G28.34+0.06) is analyzed statistically. Based on their
association with molecular outflows, the sample is divided into protostellar
and starless cores. Statistical tests suggest that the protostellar cores are
more massive than the starless cores, even after temperature and opacity biases
are accounted for. We suggest that the mass difference indicates core mass
growth since their formation. The mass growth implies that massive star
formation may not have to start with massive prestellar cores, depending on the
core mass growth rate. The impact of this growth on the relation between the
core mass function and the stellar initial mass function remains to be explored.
|
In this paper, we investigate the algebraic nature of the value of a higher
Green function on an orthogonal Shimura variety at a single CM point. This is
motivated by a conjecture of Gross and Zagier in the setting of higher Green
functions on the product of two modular curves. In the process, we study
analogues of harmonic Maass forms in the setting of Hilbert modular forms, and
obtain results concerning the arithmetic of their holomorphic part Fourier
coefficients. As a consequence, we confirm the conjecture of Gross and Zagier
under a mild condition on the discriminant of the CM point.
|
There is a strong consensus that combining the versatility of machine
learning with the assurances given by formal verification is highly desirable.
It is much less clear what verified machine learning should mean exactly. We
consider this question from the (unexpected?) perspective of computable
analysis. This allows us to define the computational tasks underlying verified
ML in a model-agnostic way, and show that they are in principle computable.
|
Value functions are central to Dynamic Programming and Reinforcement Learning
but their exact estimation suffers from the curse of dimensionality,
challenging the development of practical value-function (VF) estimation
algorithms. Several approaches have been proposed to overcome this issue, from
non-parametric schemes that aggregate states or actions to parametric
approximations of state and action VFs via, e.g., linear estimators or deep
neural networks. Relevantly, several high-dimensional state problems can be
well-approximated by an intrinsic low-rank structure. Motivated by this and
leveraging results from low-rank optimization, this paper proposes different
stochastic algorithms to estimate a low-rank factorization of the $Q(s, a)$
matrix. This is a non-parametric alternative to VF approximation that
dramatically reduces the computational and sample complexities relative to
classical $Q$-learning methods that estimate $Q(s,a)$ separately for each
state-action pair.
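A schematic sketch of the core idea follows: the $Q$-matrix is replaced by two
thin factors updated stochastically from temporal-difference errors. This is a
generic illustration under assumed rank and step sizes, not necessarily one of
the paper's specific algorithms.

```python
# Sketch: Q(s, a) ~= L[s] @ R[a], with both factors updated by SGD on the
# temporal-difference error of each observed transition (s, a, r, s').
import numpy as np

n_states, n_actions, rank = 100, 10, 4      # assumed problem sizes
alpha, gamma = 0.05, 0.95                   # step size, discount factor
rng = np.random.default_rng(1)
L = 0.1 * rng.standard_normal((n_states, rank))
R = 0.1 * rng.standard_normal((n_actions, rank))

def q(s, a):
    return L[s] @ R[a]

def td_update(s, a, r, s_next):
    target = r + gamma * max(q(s_next, b) for b in range(n_actions))
    delta = target - q(s, a)
    # Simultaneous gradient step on both factors.
    L[s], R[a] = L[s] + alpha * delta * R[a], R[a] + alpha * delta * L[s]
```

Storing the factors costs O((|S|+|A|) x rank) instead of O(|S| x |A|), which is
where the claimed computational and sample savings come from.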
|
Low-noise frequency conversion of single photons is a critical tool in
establishing fibre-based quantum networks. We show that a single photonic
crystal fibre can achieve frequency conversion by Bragg-scattering four-wave
mixing of source photons from an ultra-broad wavelength range by engineering a
symmetric group velocity profile. Furthermore, we discuss how pump tuning can
mitigate realistic discrepancies in device fabrication. This enables a single
highly adaptable frequency conversion interface to link disparate nodes in a
quantum network via the telecoms band.
|
In this paper, we establish the existence and uniqueness of Ricci flow that
admits an embedded closed convex surface in $\mathbb{R}^3$ as metric initial
condition. The main point is the construction of a family of smooth Ricci flows
starting from smooth convex surfaces whose metrics converge uniformly to the
metric of the initial surface in the intrinsic sense.
|
We derive the stellar-to-halo mass relation (SHMR), namely $f_\star\propto
M_\star/M_{\rm h}$ versus $M_\star$ and $M_{\rm h}$, for early-type galaxies
from their near-IR luminosities (for $M_\star$) and the position-velocity
distributions of their globular cluster systems (for $M_{\rm h}$). Our
individual estimates of $M_{\rm h}$ are based on fitting a dynamical model with
a distribution function expressed in terms of action-angle variables and
imposing a prior on $M_{\rm h}$ from the concentration-mass relation in the
standard $\Lambda$CDM cosmology. We find that the SHMR for early-type galaxies
declines with mass beyond a peak at $M_\star\sim 5\times 10^{10}M_\odot$ and
$M_{\rm h}\sim 10^{12}M_\odot$ (near the mass of the Milky Way). This result is
consistent with the standard SHMR derived by abundance matching for the general
population of galaxies, and with previous, less robust derivations of the SHMR
for early types. However, it contrasts sharply with the monotonically rising
SHMR for late types derived from extended HI rotation curves and the same
$\Lambda$CDM prior on $M_{\rm h}$ as we adopt for early types. The SHMR for
massive galaxies varies more or less continuously, from rising to falling, with
decreasing disc fraction and decreasing Hubble type. We also show that the
different SHMRs for late and early types are consistent with the similar
scaling relations between their stellar velocities and masses (Tully-Fisher and
Faber-Jackson relations). Differences in the relations between the stellar and
halo virial velocities account for the similarity of the scaling relations. We
argue that all these empirical findings are natural consequences of a picture
in which galactic discs are built mainly by smooth and gradual inflow,
regulated by feedback from young stars, while galactic spheroids are built by a
cooperation between merging, black-hole fuelling, and feedback from AGNs.
|
Recently, the experimental discovery of high-$T_c$ superconductivity in
compressed hydrides H$_3$S and LaH$_{10}$ at megabar pressures has triggered
searches for various superconducting superhydrides. It was experimentally
observed that thorium hydrides, ThH$_{10}$ and ThH$_9$, are stabilized at much
lower pressures compared to LaH$_{10}$. Based on first-principles
density-functional theory calculations, we reveal that the isolated Th
frameworks of ThH$_{10}$ and ThH$_9$ have relatively more excess electrons in
interstitial regions than the La framework of LaH$_{10}$. Such interstitial
excess electrons easily participate in the formation of the anionic H cages
surrounding the metal atoms. The resulting Coulomb attraction between cationic Th
atoms and anionic H cages is estimated to be stronger than the corresponding
one of LaH$_{10}$, thereby giving rise to larger chemical precompressions in
ThH$_{10}$ and ThH$_9$. Such a formation mechanism of H clathrates can also be
applied to another experimentally synthesized superhydride CeH$_9$, confirming
the experimental evidence that the chemical precompression in CeH$_9$ is larger
than that in LaH$_{10}$. Our findings demonstrate that interstitial excess
electrons in the isolated metal frameworks of high-pressure superhydrides play
an important role in generating the chemical precompression of H clathrates.
|
The fluid flow along the Riga plate with the influence of magnetic force in a
rotating system has been investigated numerically. The governing equations have
been derived from Navier-Stokes equations. Applying the boundary layer
approximation, the appropriate boundary layer equations have been obtained. By
using the usual transformation, the governing equations have been transformed
into coupled dimensionless non-linear partial differential equations, which
have been solved numerically by an
explicit finite difference scheme. The simulated results have been obtained by
using MATLAB R2015a. Also the stability and convergence criteria have been
analyzed. The effects of several parameters on the primary velocity, secondary
velocity, and temperature distributions, as well as on the local shear stress
and Nusselt number, have been shown graphically.
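To illustrate the structure of such an explicit scheme (not the actual
transformed Riga-plate equations, whose coefficients are not reproduced here),
a schematic 1-D momentum/energy update might look as follows; the decaying
forcing term and the Prandtl number are assumptions.

```python
# Schematic explicit finite-difference update (illustrative only).
import numpy as np

ny, dy, dt, steps = 101, 0.1, 0.001, 2000
u = np.zeros(ny)            # primary velocity profile
T = np.zeros(ny)            # temperature profile
u[0], T[0] = 1.0, 1.0       # wall (plate) boundary conditions

y = np.arange(1, ny - 1) * dy
for _ in range(steps):
    d2u = (u[2:] - 2 * u[1:-1] + u[:-2]) / dy**2
    d2T = (T[2:] - 2 * T[1:-1] + T[:-2]) / dy**2
    u[1:-1] += dt * (d2u + 0.5 * np.exp(-y))   # decaying EM force (assumed)
    T[1:-1] += dt * d2T / 0.71                 # Prandtl number 0.71 (assumed)

# Explicit schemes are stable only for small enough dt/dy**2, which is what
# the stability and convergence analysis constrains.
print(u[:5].round(4), T[:5].round(4))
```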
|
Mobile app developers use paid advertising campaigns to acquire new users,
and they need to know the campaigns' performance to guide their spending.
Determining the campaign that led to an install requires that the app and
advertising network share an identifier that allows matching ad clicks to
installs. Ad networks use the identifier to build user profiles that help with
targeting and personalization. Modern mobile operating systems have features to
protect the privacy of the user. The privacy features of Apple's iOS 14
require all apps to explicitly obtain system permission for tracking, instead
of asking the user to opt out of tracking as before. If the user does not allow
tracking, the identifier for advertisers (IDFA) required for attributing the
installation to the campaign is not shared. The lack of an identifier for the
attribution changes profoundly how user acquisition campaigns' performance is
measured. For users who do not allow tracking, a new feature still allows
campaign performance to be followed. The app can set an integer, the so-called
conversion value, for each user, and the developer can get the number of
installs per conversion value for each campaign. This paper investigates the
task of distributing revenue to advertising campaigns using the conversion
values. Our contributions are to formalize the problem, find the theoretically
optimal revenue attribution function for any conversion value schema, and show
empirical results on past data of a free-to-play mobile game using different
conversion value schemas.
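As a hedged sketch of the task, one natural (not necessarily optimal)
attribution function estimates the expected revenue per conversion value from
historical tracked users and credits each campaign accordingly; all names and
numbers below are hypothetical.

```python
# Sketch: credit each campaign with installs * E[revenue | conversion value].
# The paper derives the optimal such function; this shows only the basic idea.
from collections import defaultdict

def expected_revenue_by_cv(history):
    """history: iterable of (conversion_value, revenue) from past users."""
    totals, counts = defaultdict(float), defaultdict(int)
    for cv, rev in history:
        totals[cv] += rev
        counts[cv] += 1
    return {cv: totals[cv] / counts[cv] for cv in totals}

def attribute_revenue(installs_per_cv, e_rev):
    """installs_per_cv: {campaign: {conversion_value: n_installs}}."""
    return {camp: sum(n * e_rev.get(cv, 0.0) for cv, n in cvs.items())
            for camp, cvs in installs_per_cv.items()}

history = [(0, 0.0), (1, 0.5), (1, 1.5), (3, 4.0)]        # hypothetical
installs = {"campaign_A": {0: 100, 1: 40}, "campaign_B": {1: 10, 3: 5}}
print(attribute_revenue(installs, expected_revenue_by_cv(history)))
```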
|
We show that for Lebesgue almost all $d$-tuples $(\theta_1,\ldots,\theta_d)$,
with $|\theta_j|>1$, any self-affine measure for a homogeneous non-degenerate
iterated function system $\{Ax+a_j\}_{j=1}^m$ in ${\mathbb R}^d$, where
$A^{-1}$ is a diagonal matrix with the entries $(\theta_1,\ldots,\theta_d)$,
has power Fourier decay at infinity.
|
We present ALMA [C II] 158 $\mu$m line and far-infrared (FIR) continuum
emission observations toward HSC J120505.09$-$000027.9 (J1205$-$0000) at $z =
6.72$ with the beam size of $\sim 0''.8 \times 0''.5$ (or 4.1 kpc $\times$ 2.6
kpc), the most distant red quasar known to date. Red quasars are modestly
reddened by dust, and are thought to be in rapid transition from an obscured
starburst to an unobscured normal quasar, driven by powerful active galactic
nucleus (AGN) feedback which blows out a cocoon of interstellar medium (ISM).
The FIR continuum of J1205$-$0000 is bright, with an estimated luminosity of
$L_{\rm FIR} \sim 3 \times 10^{12}~L_\odot$. The [C II] line emission is
extended on scales of $r \sim 5$ kpc, greater than the FIR continuum. The line
profiles at the extended regions are complex and broad (FWHM $\sim 630-780$ km
s$^{-1}$). Although we cannot conclusively identify the nature of this extended
structure, possible explanations include (i) companion/merging galaxies and
(ii) massive AGN-driven outflows. In case (i), the companions are
modestly star-forming ($\sim 10~M_\odot$ yr$^{-1}$), but are not detected by
our Subaru optical observations ($y_{\rm AB,5\sigma} = 24.4$ mag). In case
(ii), our lower limit on the cold neutral outflow rate is $\sim 100~M_\odot$
yr$^{-1}$. The outflow kinetic energy and momentum are both much smaller than
those predicted by energy-conserving wind models, suggesting that the AGN
feedback in this quasar is not capable of completely suppressing its star
formation.
|
Baryon production is studied within the framework of quantized fragmentation
of the QCD string. Baryons appear in the model in a fairly intuitive way, with
the help of causally connected string breakups. A simple helical approximation
of the QCD flux tube, with parameters constrained by the mass spectrum of light
mesons, is sufficient to reproduce the masses of light baryons.
|
The minimal flavor structures for both quarks and leptons are proposed to
address fermion mass hierarchy and flavor mixings by bi-unitary decomposition
of the fermion mass matrix. The real matrix ${\bf M}_0^f$ fully accounts for
the family mass hierarchy, which is expressed by a close-to-flat matrix
structure. The left-handed unitary phase ${\bf F}_L^f$ provides the
origin of CP violation in quark and lepton mixings, which can be explained as a
quantum effect between Yukawa interaction states and weak gauge states. The
minimal flavor structure is realized by just 10 parameters without any
redundancy, corresponding to 6 fermion masses, 3 mixing angles and 1 CP
violation in the quark/lepton sector. This approach provides a general flavor
structure independent of the specific quark or lepton flavor data. We verify
the validity of the flavor structure by reproducing quark/lepton masses and
mixings. Some possible scenarios that yield the flavor structure are also
discussed.
|
Our goal is to develop a flux limiter for the Flux-Corrected Transport method
applied to a nonconservative convection-diffusion equation. For this, we consider a
hybrid difference scheme that is a linear combination of a monotone scheme and
a scheme of high-order accuracy. The flux limiter is computed as an approximate
solution of a corresponding optimization problem with a linear objective
function. The constraints of this optimization problem are derived from
inequalities that hold for the monotone scheme and are imposed on the hybrid
scheme. Our numerical results with flux limiters obtained as exact and as
approximate solutions of the optimization problem are in good agreement.
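The sketch below illustrates the idea on a 1-D periodic example: the limiters
maximize a linear objective subject to bound constraints on the corrected
solution. The specific constraints and the conservative flux form are
assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: FCT-style flux limiters as the solution of a linear program.
import numpy as np
from scipy.optimize import linprog

def fct_limiters(u_mono, anti_flux, u_min, u_max):
    """Maximize sum(theta) s.t. 0 <= theta <= 1 and the corrected solution
    u_mono[i] + theta[i]*A[i] - theta[i+1]*A[i+1] stays in [u_min, u_max];
    anti_flux[i] is the antidiffusive flux through the left face of cell i."""
    n = len(u_mono)
    D = np.zeros((n, n))
    for i in range(n):
        D[i, i] += 1.0
        D[i, (i + 1) % n] -= 1.0          # periodic face differences
    M = D * anti_flux                      # u_new = u_mono + M @ theta
    # Bounds u_min <= u_mono + M @ theta <= u_max as A_ub @ theta <= b_ub.
    A_ub = np.vstack([M, -M])
    b_ub = np.concatenate([u_max - u_mono, u_mono - u_min])
    res = linprog(c=-np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
    return res.x

u_mono = np.array([0.0, 0.2, 1.0, 0.9, 0.1])
A = np.array([0.0, 0.05, -0.1, 0.08, 0.0])
theta = fct_limiters(u_mono, A, u_mono.min(), u_mono.max())
print(np.round(theta, 3))
```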
|
A scalable system for real-time analysis of electron temperature and density
based on signals from the Thomson scattering diagnostic, initially developed
for and installed on the NSTX-U experiment, was recently adapted for the Large
Helical Device (LHD) and operated for the first time during plasma discharges.
During its initial operation run, it routinely recorded and processed signals
for four spatial points at the laser repetition rate of 30 Hz, well within the
system's rated capability of 60 Hz. We present examples of data collected from
this initial run and describe subsequent adaptations to the analysis code to
improve the fidelity of the temperature calculations.
|
The DARWIN observatory is a proposed next-generation experiment to search for
particle dark matter and other rare interactions. It will operate a 50 t liquid
xenon detector, with 40 t in the time projection chamber (TPC). To inform the
final detector design and technical choices, a series of technological
questions must first be addressed. Here we describe a full-scale demonstrator
in the vertical dimension, Xenoscope, with the main goal of achieving electron
drift over a 2.6 m distance, which is the scale of the DARWIN TPC. We have
designed and constructed the facility infrastructure, including the cryostat,
cryogenic and purification systems, the xenon storage and recuperation system,
as well as the slow control system. We have also designed a xenon purity
monitor and the TPC, with the fabrication of the former nearly complete. In a
first commissioning run of the facility without an inner detector, we
demonstrated the nominal operational reach of Xenoscope and benchmarked the
components of the cryogenic and slow control systems, demonstrating reliable
and continuous operation of all subsystems over 40 days. The infrastructure is
thus ready for the integration of the purity monitor, followed by the TPC.
Further applications of the facility include R&D on the high voltage
feedthrough for DARWIN, measurements of electron cloud diffusion, as well as
measurements of optical properties of liquid xenon. In the future, Xenoscope
will be available as a test platform for the DARWIN collaboration to
characterise new detector technologies.
|
Due to their broad application to different fields of theory and practice,
generalized Petersen graphs $GPG(n,s)$ have been extensively investigated.
Despite the regularity of generalized Petersen graphs, determining an exact
formula for the diameter is still a difficult problem. Beenker and Van Lint
proved that if the circulant graph $C_n(1,s)$ has diameter $d$, then
$GPG(n,s)$ has diameter at least $d+1$ and at most $d+2$. In this
paper, we provide necessary and sufficient conditions so that the diameter of
$GPG(n,s)$ is equal to $d+1,$ and sufficient conditions so that the diameter of
$GPG(n,s)$ is equal to $d+2.$ Afterwards, we give exact values for the diameter
of $GPG(n,s)$ for almost all cases of $n$ and $s.$ Furthermore, we show that
there exists an algorithm computing the diameter of generalized Petersen graphs
with running time $O(\log n)$.
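For reference, a brute-force check of the diameter (not the $O(\log n)$
algorithm of the paper) can be written directly from the standard definition
of $GPG(n,s)$:

```python
# Build GPG(n, s) and compute its diameter by BFS from every vertex.
from collections import deque

def gpg_diameter(n, s):
    # Vertices 0..n-1: outer cycle u_i; n..2n-1: inner vertices v_i.
    adj = [[] for _ in range(2 * n)]
    for i in range(n):
        adj[i] += [(i + 1) % n, (i - 1) % n, n + i]          # u_i
        adj[n + i] += [i, n + (i + s) % n, n + (i - s) % n]  # v_i
    diam = 0
    for src in range(2 * n):
        dist = [-1] * (2 * n)
        dist[src], queue = 0, deque([src])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if dist[y] < 0:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        diam = max(diam, max(dist))
    return diam

print(gpg_diameter(5, 2))  # 2: GPG(5,2) is the Petersen graph
```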
|
In many astrophysical applications, the cost of solving a chemical network
represented by a system of ordinary differential equations (ODEs) grows
significantly with the size of the network, and can often represent a
significant computational bottleneck, particularly in coupled chemo-dynamical
models. Although standard numerical techniques and complex solutions tailored
to thermochemistry can somewhat reduce the cost, more recently, machine
learning algorithms have begun to attack this challenge via data-driven
dimensional reduction techniques. In this work, we present a new class of
methods that take advantage of machine learning techniques to reduce complex
data sets (autoencoders), the optimization of multi-parameter systems (standard
backpropagation), and the robustness of well-established ODE solvers to
explicitly incorporate time-dependence. This new method allows us to find a
compressed and simplified version of a large chemical network in a
semi-automated fashion that can be solved with a standard ODE solver, while
also enabling interpretability of the compressed, latent network. As a proof of
concept, we tested the method on an astrophysically-relevant chemical network
with 29 species and 224 reactions, obtaining a reduced but representative
network with only 5 species and 12 reactions, and a 65x speed-up.
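A conceptual PyTorch sketch of this architecture is given below: an autoencoder
compresses the abundances, a small network supplies the latent time derivative,
and gradients flow through an explicit integrator. All layer sizes and the RK4
step are illustrative assumptions rather than the paper's configuration.

```python
# Conceptual sketch: autoencoder + latent ODE, trained by backpropagation.
import torch
import torch.nn as nn

n_species, n_latent = 29, 5

encode = nn.Sequential(nn.Linear(n_species, 32), nn.Tanh(), nn.Linear(32, n_latent))
decode = nn.Sequential(nn.Linear(n_latent, 32), nn.Tanh(), nn.Linear(32, n_species))
latent_rhs = nn.Sequential(nn.Linear(n_latent, 32), nn.Tanh(), nn.Linear(32, n_latent))

def rk4_step(z, dt):
    # One explicit Runge-Kutta step in the compressed (latent) space.
    k1 = latent_rhs(z)
    k2 = latent_rhs(z + 0.5 * dt * k1)
    k3 = latent_rhs(z + 0.5 * dt * k2)
    k4 = latent_rhs(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def predict(x0, n_steps, dt=1e-3):
    """Encode initial abundances, evolve in latent space, decode back."""
    z = encode(x0)
    for _ in range(n_steps):
        z = rk4_step(z, dt)
    return decode(z)

x0 = torch.rand(4, n_species)   # batch of initial abundances
# Placeholder target for illustration; real training would compare decoded
# trajectories to abundances produced by the full 29-species network.
loss = nn.functional.mse_loss(predict(x0, 10), x0)
loss.backward()                 # standard backpropagation
```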
|
Short Read Alignment Mapping Metrics (SRAMM) is an efficient and versatile
command-line tool providing additional short-read mapping metrics, filtering,
and graphs. Short-read aligners report a MAPping Quality (MAPQ), but these methods
generally are neither standardized nor well described in literature or software
manuals. Additionally, third party mapping quality programs are typically
computationally intensive or designed for specific applications. SRAMM
efficiently generates multiple different concept-based mapping scores to
provide for an informative post alignment examination and filtering process of
aligned short reads for various downstream applications. SRAMM is compatible
with Python 2.6+ and Python 3.6+ on all operating systems. It works with any
short read aligner that generates SAM/BAM/CRAM file outputs and reports 'AS'
tags. It is freely available under the MIT license at
http://github.com/achon/sramm.
|
We aim to give more insight into adiabatic evolution concerning the occurrence
of anti-crossings and their link to the minimum spectral gap $\Delta_{min}$. We
study in detail adiabatic quantum computation applied to a specific
combinatorial problem called weighted max $k$-clique. We give a clear intuition
for the parametrization introduced by V. Choi and explain why that
characterization is not general enough. We show that the instantaneous vectors
involved in the anti-crossing vary abruptly across it, making the instantaneous
ground state hard to follow during the evolution. This result leads us to
relax the parametrization so that it becomes more general.
|
The q-Levenberg-Marquardt method is an iterative procedure that blends the
q-steepest-descent and q-Gauss-Newton methods. When the current solution is far
from the correct one, the algorithm acts as the q-steepest-descent method;
otherwise it acts as the q-Gauss-Newton method. A damping parameter is used to
interpolate between these two methods. The q-parameter is used to escape from
local minima and to speed up the search process near the optimal solution.
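A hedged numerical sketch of this interpolation follows, with the Jacobian
built from Jackson q-derivatives, $D_q f(x) = (f(qx)-f(x))/((q-1)x)$; the
damping schedule and the handling of the q-parameter are simplified
assumptions rather than the full method.

```python
# Sketch: Levenberg-Marquardt steps with a q-derivative Jacobian.
import numpy as np

def q_jacobian(residual, x, q=0.9):
    """Columnwise q-derivative approximation of the Jacobian of residual()."""
    r0, n = residual(x), len(x)
    J = np.zeros((len(r0), n))
    for j in range(n):
        xq = x.copy()
        xq[j] *= q
        J[:, j] = (residual(xq) - r0) / ((q - 1.0) * x[j])
    return J

def q_levenberg_marquardt(residual, x, lam=1e-2, iters=50, q=0.9):
    for _ in range(iters):
        r, J = residual(x), q_jacobian(residual, x, q)
        # lam large -> q-steepest-descent-like; lam small -> q-Gauss-Newton.
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        x_new = x + step
        if np.sum(residual(x_new) ** 2) < np.sum(r ** 2):
            x, lam = x_new, lam * 0.5   # accept: move toward Gauss-Newton
        else:
            lam *= 2.0                  # reject: more steepest-descent-like
    return x

# Example: fit y = exp(a t) to data; converges to a ~ 1.7.
t = np.linspace(0.1, 1, 20)
y = np.exp(1.7 * t)
print(q_levenberg_marquardt(lambda p: np.exp(p[0] * t) - y, np.array([0.5])))
```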
|
For a complete graph $K_n$ of order $n$, an edge-labeling $c:E(K_n)\to \{
-1,1\}$ satisfying $c(E(K_n))=0$, and a spanning forest $F$ of $K_n$, we
consider the problem of minimizing $|c(E(F'))|$ over all isomorphic copies $F'$
of $F$ in $K_n$. In particular, we ask under which additional conditions there
is a zero-sum copy, that is, a copy $F'$ of $F$ with $c(E(F'))=0$.
We show that there is always a copy $F'$ of $F$ with $|c(E(F'))|\leq
\Delta(F)+1$, where $\Delta(F)$ is the maximum degree of $F$. We conjecture
that this bound can be improved to $|c(E(F'))|\leq (\Delta(F)-1)/2$ and verify
this for $F$ being the star $K_{1,n-1}$. Under some simple necessary
divisibility conditions, we show the existence of a zero-sum $P_3$-factor, and,
for sufficiently large $n$, also of a zero-sum $P_4$-factor.
|
Deepfakes have raised serious concerns about the authenticity of visual
content. Prior works revealed the possibility of disrupting deepfakes by adding
adversarial perturbations to the source data, but we argue that the threat has
not yet been eliminated. This paper presents MagDR, a mask-guided detection and
reconstruction pipeline for defending deepfakes from adversarial attacks. MagDR
starts with a detection module that defines a few criteria to judge the
abnormality of the output of deepfakes, and then uses it to guide a learnable
reconstruction procedure. Adaptive masks are extracted to capture the change in
local facial regions. In experiments, MagDR defends three main tasks of
deepfakes, and the learned reconstruction pipeline transfers across input data,
showing promising performance in defending both black-box and white-box
attacks.
|
We propose a variational autoencoder architecture to model both ignorable and
nonignorable missing data using pattern-set mixtures as proposed by Little
(1993). Our model explicitly learns to cluster the missing data into
missingness pattern sets based on the observed data and missingness masks.
Underpinning our approach is the assumption that the data distribution under
missingness is probabilistically semi-supervised by samples from the observed
data distribution. Our setup trades off the characteristics of ignorable and
nonignorable missingness and can thus be applied to data of both types. We
evaluate our method on a wide range of data sets with different types of
missingness and achieve state-of-the-art imputation performance. Our model
outperforms many common imputation algorithms, especially when the amount of
missing data is high and the missingness mechanism is nonignorable.
|
In this paper, we study linear filters to process signals defined on
simplicial complexes, i.e., signals defined on nodes, edges, triangles, etc. of
a simplicial complex, thereby generalizing filtering operations for graph
signals. We propose a finite impulse response filter based on the Hodge
Laplacian, and demonstrate how this filter can be designed to amplify or
attenuate certain spectral components of simplicial signals. Specifically, we
discuss how, unlike in the case of node signals, the Fourier transform in the
context of edge signals can be understood in terms of two orthogonal subspaces
corresponding to the gradient-flow signals and curl-flow signals arising from
the Hodge decomposition. By assigning different filter coefficients to the
associated terms of the Hodge Laplacian, we develop a subspace-varying filter
which enables more nuanced control over these signal types. Numerical
experiments are conducted to show the potential of simplicial filters for
sub-component extraction, denoising and model approximation.
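A minimal numpy sketch on a toy complex (one filled triangle) shows the
construction: the Hodge Laplacian on edges and an FIR filter applied to an
edge signal. The filter coefficients are arbitrary placeholders.

```python
# Sketch: FIR filter H = sum_k h[k] * L1^k on edge signals, where
# L1 = B1.T @ B1 + B2 @ B2.T is the Hodge Laplacian built from the
# node-edge (B1) and edge-triangle (B2) incidence matrices.
import numpy as np

B1 = np.array([[-1, -1,  0],       # nodes x edges: e1=(0,1), e2=(0,2), e3=(1,2)
               [ 1,  0, -1],
               [ 0,  1,  1]])
B2 = np.array([[1], [-1], [1]])    # edges x triangles: boundary e1 - e2 + e3

L_low = B1.T @ B1                  # gradient-related ("lower") part
L_up = B2 @ B2.T                   # curl-related ("upper") part
L1 = L_low + L_up                  # Hodge Laplacian on edges

def fir_filter(h, L, x):
    """Apply H = sum_k h[k] * L^k to the edge signal x."""
    y, Lk = np.zeros_like(x, dtype=float), np.eye(L.shape[0])
    for hk in h:
        y += hk * (Lk @ x)
        Lk = Lk @ L
    return y

x = np.array([1.0, 0.5, -0.2])     # a signal on the three edges
print(fir_filter([1.0, -0.3, 0.05], L1, x))
# A subspace-varying filter would use separate coefficients on L_low and L_up.
```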
|
In this paper we address the explainability of web search engines. We propose
two explainable elements on the search engine result page: a visualization of
query term weights and a visualization of passage relevance. The idea is that
search engines that indicate to the user why results are retrieved are valued
higher by users and gain user trust. We deduce the query term weights from the
term gating network in the Deep Relevance Matching Model (DRMM) and visualize
them as a doughnut chart. In addition, we train a passage-level ranker with
DRMM that selects the most relevant passage from each document and shows it as
snippet on the result page. Next to the snippet we show a document thumbnail
with this passage highlighted. We evaluate the proposed interface in an online
user study, asking users to judge the explainability and assessability of the
interface. We found that users judge our proposed interface to be
significantly more explainable and easier to assess than a regular search
engine result page.
However, they are not significantly better in selecting the relevant documents
from the top-5. This indicates that the explainability of the search engine
result page leads to a better user experience. Thus, we conclude that the
proposed explainable elements are promising as visualization for search engine
users.
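For illustration, the doughnut-chart visualization of query term weights can
be sketched in a few lines of matplotlib; the terms and weights below are
hypothetical, and in the actual interface they would come from DRMM's term
gating network.

```python
# Sketch: doughnut chart of (hypothetical) query term weights.
import matplotlib.pyplot as plt

terms = ["quantum", "error", "correction", "codes"]
weights = [0.42, 0.13, 0.28, 0.17]           # gating weights, sum to 1

fig, ax = plt.subplots()
ax.pie(weights, labels=terms, autopct="%.0f%%",
       wedgeprops=dict(width=0.4))           # width < 1 makes it a doughnut
ax.set_title("Query term weights")
plt.show()
```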
|
Simulating time evolution of quantum systems is one of the most promising
applications of quantum computing and also appears as a subroutine in many
applications such as Green's function methods. In the current era of NISQ
machines we assess the state of algorithms for simulating time dynamics with
limited resources. We propose the Jaynes-Cummings model and extensions to it as
useful toy models to investigate time evolution algorithms on near-term quantum
computers. For these simple models, direct Trotterisation of the time
evolution operator produces deep circuits, requiring coherence times out of
reach on current NISQ hardware. We therefore test two alternative responses to
this problem: variational compilation of the time evolution operator, and
variational quantum simulation of the wavefunction ansatz. We demonstrate
numerically to what extent these methods are successful in time evolving this
system. The costs in terms of circuit depth and number of measurements are
compared quantitatively, along with other drawbacks and advantages of each
method. We find that computational requirements for both methods make them
suitable for performing time evolution simulations of our models on NISQ
hardware. Our results also indicate that variational quantum compilation
produces more accurate results than variational quantum simulation, at the cost
of a larger number of measurements.
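To make the Trotterisation concrete, the numpy sketch below evolves the
Jaynes-Cummings model with a first-order splitting on a truncated Fock space;
the parameters, truncation, and the classical matrix-exponential stand-in for
circuit gates are all illustrative assumptions.

```python
# First-order Trotter evolution of the Jaynes-Cummings Hamiltonian
# H = wc a^dag a + (wa/2) sz + g (a^dag s- + a s+), truncated Fock space.
import numpy as np
from scipy.linalg import expm

n_max, wc, wa, g = 8, 1.0, 1.0, 0.1              # resonant, assumed values
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)   # annihilation operator
sz = np.diag([1.0, -1.0])                        # atomic basis: (|e>, |g>)
sm = np.array([[0.0, 0.0], [1.0, 0.0]])          # sigma-minus, |g><e|
I_f, I_a = np.eye(n_max), np.eye(2)

H_free = wc * np.kron(a.T @ a, I_a) + 0.5 * wa * np.kron(I_f, sz)
H_int = g * (np.kron(a.T, sm) + np.kron(a, sm.T))

def trotter_evolve(psi0, t, n_steps):
    """Apply (e^{-i H_free dt} e^{-i H_int dt})^n_steps to psi0."""
    dt = t / n_steps
    U = expm(-1j * H_free * dt) @ expm(-1j * H_int * dt)
    psi = psi0.copy()
    for _ in range(n_steps):
        psi = U @ psi
    return psi

psi0 = np.zeros(2 * n_max, dtype=complex)
psi0[0] = 1.0                                    # |0 photons> x |atom excited>
psi = trotter_evolve(psi0, t=5.0, n_steps=100)
# Resonant vacuum Rabi oscillation: P_e(t) ~ cos^2(g t).
print(abs(psi[0])**2, np.cos(g * 5.0)**2)
```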
|
In this paper we give a systematic review of the theory of Gibbs measures of
the Potts model on Cayley trees (developed since 2013) and discuss many
applications of the Potts model to real-world situations, mainly in biology and
physics, with examples including alloy behavior, cell sorting, financial
engineering, flocking birds, flowing foams, image segmentation, medicine, and
sociology.
|
We introduce a new class of commutative noetherian DG-rings which generalizes
the class of regular local rings. These are defined to be local DG-rings
$(A,\bar{\mathfrak{m}})$ such that the maximal ideal $\bar{\mathfrak{m}}
\subseteq \mathrm{H}^0(A)$ can be generated by an $A$-regular sequence. We call
these DG-rings sequence-regular DG-rings, and make a detailed study of them.
Using methods of Cohen-Macaulay differential graded algebra, we prove that the
Auslander-Buchsbaum-Serre theorem about localization generalizes to this
setting. This allows us to define global sequence-regular DG-rings, and to
introduce this regularity condition to derived algebraic geometry. It is shown
that these DG-rings share many properties of classical regular local rings, and
in particular we are able to construct canonical residue DG-fields in this
context. Finally, we show that sequence-regular DG-rings are ubiquitous, and in
particular, any eventually coconnective derived algebraic variety over a
perfect field is generically sequence-regular.
|
Tissues are characterized by layers of functional units such as cells and
extracellular matrix (ECM). Nevertheless, how dynamics at interlayer interfaces
help transmit cellular forces in tissues remains overlooked. Here, we
investigate a multi-layer system where a layer of epithelial cells is seeded
upon an elastic substrate in contact with a hard surface. Our experiments show
that, upon a cell extrusion event in the cellular layer, long-range wave
propagation emerges in the substrate only when the two substrate layers are
weakly attached to each other. We then derive a theoretical model which
quantitatively reproduces the wave dynamics and explains how frictional sliding
between substrate layers helps propagate cellular forces at a variety of
scales, depending on the stiffness, thickness, and slipperiness of the
substrate. These results highlight the importance of interfacial friction
between layers in transmitting mechanical cues in tissues in vivo.
|
This paper proposes a differentiable robust LQR layer for reinforcement
learning and imitation learning under model uncertainty and stochastic
dynamics. The robust LQR layer can exploit the advantages of robust optimal
control and model-free learning. It provides a new type of inductive bias for
stochasticity and uncertainty modeling in control systems. In particular, we
propose an efficient way to differentiate through a robust LQR optimization
program by rewriting it as a convex program (i.e. semi-definite program) of the
worst-case cost. Based on recent work on using convex optimization inside
neural network layers, we develop a fully differentiable layer for optimizing
this worst-case cost, i.e., we compute the derivative of a performance measure
w.r.t. the model's unknown parameters, model uncertainty, and stochasticity
parameters. We demonstrate the proposed method on imitation learning and
approximate dynamic programming on stochastic and uncertain domains. The
experimental results show that the proposed method can optimize robust policies
under uncertain situations and achieve significantly better performance than
existing methods that do not model uncertainty directly.
|
Recent work on graph generative models has made remarkable progress towards
generating increasingly realistic graphs, as measured by global graph features
such as degree distribution, density, and clustering coefficients. Deep
generative models have also made significant advances through better modelling
of the local correlations in the graph topology, which have been very useful
for predicting unobserved graph components, such as the existence of a link or
the class of a node, from nearby observed graph components. A complete
scientific understanding of graph data should address both global and local
structure. In this paper, we propose a joint model for both as complementary
objectives in a graph VAE framework. Global structure is captured by
incorporating graph kernels in a probabilistic model whose loss function is
closely related to the maximum mean discrepancy (MMD) between the global
structures of the reconstructed and the input graphs. The ELBO objective
derived from the model regularizes a standard local link reconstruction term
with an MMD term. Our experiments demonstrate a significant improvement in the
realism of the generated graph structures, typically by 1-2 orders of magnitude
in graph structure metrics, compared to leading graph VAE and GAN models. Local
link reconstruction improves as well in many cases.
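For reference, a numpy sketch of the squared MMD between two sets of feature
vectors with an RBF kernel is shown below; in the model these vectors would be
graph-kernel descriptors of generated and input graphs, and the random data
here are placeholders.

```python
# Sketch: biased (V-statistic) estimate of squared MMD with an RBF kernel.
import numpy as np

def mmd2(X, Y, sigma=1.0):
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
gen_feats = rng.normal(0.0, 1.0, (64, 16))   # generated-graph descriptors
ref_feats = rng.normal(0.5, 1.0, (64, 16))   # input-graph descriptors
print(mmd2(gen_feats, ref_feats))
```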
|
All-solid-state batteries are claimed to be the next-generation battery
system, in view of their safety and high energy densities. A new advanced,
multiscale-compatible, and fully three-dimensional model for solid
electrolytes is presented in this note. The response of the electrolyte is
studied in depth theoretically and numerically, analyzing the equilibrium and
steady-state behaviors, the limiting factors, and the most relevant
constitutive parameters according to a sensitivity analysis of the model.
|
Unmanned aerial vehicles serving as aerial base stations (UAV-BSs) can be
deployed to provide wireless connectivity to ground devices in events of
increased network demand, points-of-failure in existing infrastructure, or
disasters. However, it is challenging to conserve the energy of UAVs during
prolonged coverage tasks, considering their limited on-board battery capacity.
Reinforcement learning (RL) approaches have previously been used to improve
the energy utilization of multiple UAVs; however, a central cloud controller is
assumed to have complete knowledge of the end-devices' locations, i.e., the
controller periodically scans and sends updates for UAV
decision-making. This assumption is impractical in dynamic network environments
with UAVs serving mobile ground devices. To address this problem, we propose a
decentralized Q-learning approach, where each UAV-BS is equipped with an
autonomous agent that maximizes the connectivity of mobile ground devices while
improving its energy utilization. Experimental results show that the proposed
design significantly outperforms the centralized approaches in jointly
maximizing the number of connected ground devices and the energy utilization of
the UAV-BSs.
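The decentralized design can be sketched as one tabular Q-learning agent per
UAV-BS; the state, action, and reward definitions below (e.g., a
connectivity-minus-energy reward) are illustrative placeholders rather than
the paper's exact formulation.

```python
# Sketch: one autonomous epsilon-greedy Q-learning agent per UAV-BS.
import random
from collections import defaultdict

class UAVAgent:
    def __init__(self, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)   # (state, action) -> value estimate
        self.n_actions, self.alpha, self.gamma, self.eps = n_actions, alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[(state, a)])

    def learn(self, s, a, reward, s_next):
        best_next = max(self.q[(s_next, b)] for b in range(self.n_actions))
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best_next - self.q[(s, a)])

# Each UAV optimizes a local reward trading off connectivity and energy,
# e.g. reward = connected_devices - beta * energy_used (a modeling assumption).
agents = [UAVAgent(n_actions=5) for _ in range(3)]
```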
|