Edge-localized stationary states of the focusing nonlinear Schrödinger
equation on a general quantum graph are considered in the limit of large mass.
Compared to previous works, we include arbitrary multi-pulse positive
states which asymptotically approach a composition of N solitons, each
sitting on a bounded (pendant, looping, or internal) edge. We not only prove
that such states exist in the limit of large mass, but also compute the
precise Morse index (the number of negative eigenvalues in the corresponding
linearized operator). In the case of the edge-localized N-soliton states on the
pendant and looping edges, we prove that the Morse index is exactly N. The
technical novelty of this work is achieved by avoiding elliptic functions (and
related exponentially small scalings) and closing the existence arguments in
terms of the Dirichlet-to-Neumann maps for relevant parts of the given graph.
|
We investigate a generalization of Binet's factorial series in the parameter
$\alpha$ \[ \mu(z) = \sum_{m=1}^{\infty}\frac{b_{m}(\alpha)}{\prod_{k=0}^{m-1}(z+\alpha+k)} \]
for the Binet function \[ \mu(z) = \log\Gamma(z) - \left(z-\frac{1}{2}\right)\log z
+ z - \frac{1}{2}\log(2\pi). \] After a brief review of the Binet
function $\mu\left( z\right) $, several properties of the Binet polynomials
$b_{m}\left( \alpha\right) $ are presented. We compute the corresponding
factorial series for the derivatives of the Binet function and apply those
series to the digamma and polygamma functions. We compare Binet's generalized
factorial series with Stirling's \emph{asymptotic} expansion and demonstrate by
a numerical example that, with the same number of terms evaluated, the Binet
generalized factorial series with an optimized value of $\alpha$ can beat the
best possible accuracy of Stirling's expansion. Finally, we extend Binet's
method to factorial series of Laplace transforms.
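A minimal numerical sketch of this comparison, assuming only the definition of $\mu(z)$ above and the classical coefficients $B_{2n}/(2n(2n-1))$ of Stirling's expansion (the Binet polynomials $b_m(\alpha)$ are not reproduced here, so the factorial series itself is not evaluated):

```python
import math

def binet_mu(z):
    """Binet function mu(z), computed directly from log Gamma."""
    return math.lgamma(z) - (z - 0.5) * math.log(z) + z - 0.5 * math.log(2 * math.pi)

# Leading coefficients B_{2n}/(2n(2n-1)) of Stirling's asymptotic expansion:
# mu(z) ~ 1/(12z) - 1/(360z^3) + 1/(1260z^5) - 1/(1680z^7) + ...
STIRLING_COEFFS = [1 / 12, -1 / 360, 1 / 1260, -1 / 1680]

def stirling_mu(z, n_terms):
    return sum(c / z ** (2 * k + 1) for k, c in enumerate(STIRLING_COEFFS[:n_terms]))

z = 3.0
exact = binet_mu(z)
print(f"mu({z}) = {exact:.12f}")
for n in range(1, 5):
    print(f"Stirling with {n} term(s): error = {abs(stirling_mu(z, n) - exact):.2e}")
```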
|
The representation dimension of a finite group $G$ is the minimal dimension
of a faithful complex linear representation of $G$. We prove that the
representation dimension of any finite group $G$ is at most $\sqrt{|G|}$ except
if $G$ is a $2$-group with elementary abelian center of order $8$ and all
irreducible characters of $G$ whose kernel does not contain $Z(G)$ are fully
ramified with respect to $G/Z(G)$. We also obtain bounds for the representation
dimension of quotients of $G$ in terms of the representation dimension of $G$,
and discuss the relation of this invariant with the essential dimension of $G$.
|
We compute the scaling dimensions of a family of fixed-charge operators at
the infrared fixed point of the $O(N)$ model featuring cubic interactions in
$d=6-\epsilon$ for arbitrary $N$ to leading and subleading order in the charge
but to all orders in the couplings. The results are used to analyze the
conjectured equivalence with the $O(N)$ model displaying quartic interactions
at its ultraviolet fixed point. This is performed by comparing the cubic model
scaling dimensions against the known large $N$ results for the quartic model
and demonstrating that they match. Our results reinforce the conjectured
equivalence and further provide novel information on the finite $N$ physics
stemming from the computations in the cubic model just below 6 dimensions.
|
This paper addresses the degraded discrete-time Poisson wiretap channel
(DT--PWC) in an optical wireless communication system based on intensity
modulation and direct detection. Subject to nonnegativity, peak- and
average-intensity as well as bandwidth constraints, we study the
secrecy-capacity-achieving input distribution of this wiretap channel and prove
it to be unique and discrete with a finite number of mass points, one of them
located at the origin. Furthermore, we establish that every point on the
boundary of the rate-equivocation region of this wiretap channel is also
obtained by a unique and discrete input distribution with finitely many mass
points. In general, the number of mass points of the optimal distributions is
greater than two. This is in contrast with the degraded continuous-time PWC
when the signaling bandwidth is not restricted and where the secrecy capacity
and the entire boundary of the rate-equivocation region are achieved by binary
distributions. Furthermore, we extend our analysis to the case where only an
average-intensity constraint is active. For this case, we find that the secrecy
capacity and the entire boundary of the rate-equivocation region are attained
by discrete distributions with countably \textit{infinite} number of mass
points, but with finitely many mass points in any bounded interval.
|
We generalize solid-state tight-binding techniques for the spectral analysis
of large superconducting circuits. We find that tight-binding states can be
better suited for approximating the low-energy excitations than charge-basis
states, as illustrated for the interesting example of the current-mirror
circuit. The use of tight binding can dramatically lower the Hilbert space
dimension required for convergence to the true spectrum, and allows for the
accurate simulation of larger circuits that are out of reach of charge-basis
diagonalization.
|
Diffuse phonon scattering strongly affects the phonon transport through a
disordered interface. The often-used diffuse mismatch model assumes that
phonons lose memory of their origin after being scattered by the interface.
Using mode-resolved atomic Green's function simulations, we demonstrate that
diffuse phonon scattering by a single disordered interface cannot make a phonon
lose its memory, and thus the applicability of the diffuse mismatch model is
limited. An analytical expression for the diffuse scattering probability based on
the continuum approximation is also derived and shown to work reasonably well
at low frequencies.
|
Understanding who blames or supports whom in news text is a critical research
question in computational social science. Traditional methods and datasets for
sentiment analysis are, however, not suitable for the domain of political text
as they do not consider the direction of sentiments expressed between entities.
In this paper, we propose a novel NLP task of identifying directed sentiment
relationships between political entities from a given news document, which we
call directed sentiment extraction. From a million-scale news corpus, we
construct a dataset of news sentences where sentiment relations of political
entities are manually annotated. We present a simple but effective approach for
utilizing a pretrained transformer, which infers the target class by posing
multiple question-answering tasks and combining their outcomes. We demonstrate
the utility of our proposed method for social science research questions by
analyzing positive and negative opinions between political entities in two
major events: the 2016 U.S. presidential election and COVID-19. The newly proposed
problem, data, and method will facilitate future studies on interdisciplinary
NLP methods and applications.
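An illustrative sketch of the question-answering formulation (the probe templates, model, and combination rule below are hypothetical stand-ins, not the paper's exact setup):

```python
from transformers import pipeline

qa = pipeline("question-answering")  # generic extractive QA model

def directed_sentiment(sentence, source, target):
    """Infer the directed sentiment from `source` to `target` by posing
    several QA probes over the sentence and combining the answers."""
    probes = {"positive": f"Whom does {source} support?",
              "negative": f"Whom does {source} blame?"}
    scores = {}
    for label, question in probes.items():
        out = qa(question=question, context=sentence)
        # Credit a label only when the extracted span actually names the target.
        scores[label] = out["score"] if target.lower() in out["answer"].lower() else 0.0
    return max(scores, key=scores.get), scores

sentence = "Senator Smith blamed Governor Lee for the slow pandemic response."
print(directed_sentiment(sentence, "Senator Smith", "Governor Lee"))
```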
|
If S is a structure over (a concrete category over) metric spaces, we say
that a group G has property BS if any action on an S-space has bounded orbits.
Examples of such structures include metric spaces, Hilbert spaces, CAT(0) cube
complexes, connected median graphs, trees or ultra-metric spaces. The
corresponding properties BS are respectively Bergman's property, property FH
(which, for countable groups, is equivalent to the celebrated Kazhdan's
property (T)), property FW (both for CAT(0) cube complexes and for connected
median graphs), property FA and uncountable cofinality (cof $\neq\omega$).
Our main result is that for a large class of structures S, the wreath product
$G\wr_XH$ has property BS if and only if both $G$ and $H$ have property BS and
$X$ is finite. On one hand, this encompasses in a general setting previously
known results for properties FH and FW. On the other hand, this also applies to
Bergman's property. Finally, we also obtain that $G\wr_XH$ has cof
$\neq\omega$ if and only if both $G$ and $H$ have cof $\neq\omega$ and $H$ acts
on $X$ with finitely many orbits.
|
We present here results from a survey of intervening C IV absorbers at $z <
0.16$ conducted using 223 sightlines from the Hubble Spectroscopic Legacy
Archive. Most systems (83%) out of the total sample of 69 have simple
kinematics with 1 or 2 C IV components. In the 22 C IV systems with well
constrained H I column densities, the temperatures from the $b$-values imply
predominantly photoionized plasma ($T\leq 10^5$ K) and non-thermal dynamics.
These systems also have solar or higher metallicities. We obtain a C IV line
density of $d\mathcal{N}/dX = 5.1\pm 1.0$ for $\log
[N(C~IV)~(cm^{-2})]\geq12.9$, and $\Omega_{C~IV}=(8.01\pm 1.62) \times 10^{-8}$
for $12.9 \leq \log [N(C~IV)~(cm^{-2})] \leq 15.0$. The C IV bearing diffuse
gas in the $z < 0.16$ Universe has a metallicity of
$(2.07~{\pm}~0.43)~\times~10^{-3}$ Z$_{\odot}$, an order of magnitude higher than
the metal abundances in the IGM at high redshifts ($z \gtrsim 5$), and
consistent with the slow build-up of metals in the diffuse circum/intergalactic
space with cosmic time. For $z<0.015$ (complete above $L>0.01L^\star$), the
Sloan Digital Sky Survey provides tentative evidence of a declining covering
fraction for strong C IV ($N>10^{13.5}~cm^{-2}$) with $\rho$ (impact parameter)
and $\rho/R_\mathrm{vir}$. However, the increase at high separations suggests
that strong systems are not necessarily coincident with such galaxies. We also
find that strong C IV absorption at $z<0.051$ is not coincident with galaxy
over-dense regions complete for $L>0.13L^\star$.
|
Network anomaly detection aims to find network elements (e.g., nodes, edges,
subgraphs) with significantly different behaviors from the vast majority. It
has a profound impact on a variety of applications ranging from finance and
healthcare to social network analysis. Due to the prohibitive labeling cost,
existing methods are predominantly developed in an unsupervised manner.
Nonetheless, the anomalies they identify may turn out to be data noises or
uninteresting data instances due to the lack of prior knowledge on the
anomalies of interest. Hence, it is critical to investigate and develop
few-shot learning for network anomaly detection. In real-world scenarios, a few
labeled anomalies are often readily accessible on similar networks from the
same domain as the target network, yet most existing works fail to
leverage them and merely focus on a single network. Taking advantage of this
potential, in this work, we tackle the problem of few-shot network anomaly
detection by (1) proposing a new family of graph neural networks -- Graph
Deviation Networks (GDN) that can leverage a small number of labeled anomalies
for enforcing statistically significant deviations between abnormal and normal
nodes on a network; and (2) equipping the proposed GDN with a new cross-network
meta-learning algorithm to realize few-shot network anomaly detection by
transferring meta-knowledge from multiple auxiliary networks. Extensive
evaluations demonstrate the efficacy of the proposed approach on few-shot or
even one-shot network anomaly detection.
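A minimal sketch of the deviation idea at the core of GDN, in the spirit of deviation networks: anomaly scores of normal nodes are pulled toward a Gaussian reference while labeled anomalies are pushed a margin away (the exact GDN loss and its GNN scoring backbone are not reproduced here):

```python
import torch

def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    """scores: (N,) anomaly scores from a GNN; labels: (N,), 1 = labeled anomaly.
    Normal nodes are pulled toward the reference mean; labeled anomalies are
    pushed at least `margin` standard deviations above it."""
    ref = torch.randn(n_ref)                      # reference scores ~ N(0, 1)
    dev = (scores - ref.mean()) / (ref.std() + 1e-8)
    loss = (1 - labels) * dev.abs() + labels * torch.clamp(margin - dev, min=0)
    return loss.mean()

scores = torch.randn(8, requires_grad=True)
labels = torch.tensor([0., 0., 0., 1., 0., 0., 1., 0.])
deviation_loss(scores, labels).backward()         # usable as a training objective
print(scores.grad)
```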
|
We compute the expected sensitivity on measurements of optical depth to
reionization for a ground-based experiment at Teide Observatory. We simulate
polarized partial sky maps for the GroundBIRD experiment at the frequencies 145
and 220 GHz. We perform fits for the simulated maps with our pixel-based
likelihood to extract the optical depth to reionization. The noise levels of
polarization maps are estimated as 110 $\mu\mathrm{K~arcmin}$ and 780 $
\mu\mathrm{K~arcmin}$ for 145 and 220 GHz, respectively, by assuming a
three-year observing campaign and sky coverages of 0.537 for 145 GHz and 0.462
for 220 GHz. Our sensitivities for the optical depth to reionization are found
to be $\sigma_\tau$=0.030 with the simulated GroundBIRD maps, and
$\sigma_\tau$=0.012 by combining with the simulated QUIJOTE maps at 11, 13, 17,
19, 30, and 40 GHz.
|
With the growing awareness of fairness in machine learning and the
realization of the central role that data representation has in data processing
tasks, there is an obvious interest in notions of fair data representations.
The goal of such representations is that a model trained on data under the
representation (e.g., a classifier) will be guaranteed to respect some fairness
constraints.
Such representations are useful when they can be fixed for training models on
various different tasks and also when they serve as data filtering between the
raw data (known to the representation designer) and potentially malicious
agents that use the data under the representation to learn predictive models
and make decisions.
Many recent research papers strive to provide tools for achieving
these goals.
However, we prove that this is basically a futile effort. Roughly stated, we
prove that no representation can guarantee the fairness of classifiers for
different tasks trained using it; even the basic goal of achieving
label-independent Demographic Parity fairness fails once the marginal data
distribution shifts. More refined notions of fairness, like Odds Equality,
cannot be guaranteed by a representation that does not take into account the
task specific labeling rule with respect to which such fairness will be
evaluated (even if the marginal data distribution is known a priori).
Furthermore, except for trivial cases, no representation can guarantee Odds
Equality fairness for any two different tasks, while allowing accurate label
predictions for both.
While some of our conclusions are intuitive, we formulate (and prove) crisp
statements of such impossibilities, often contrasting with impressions conveyed by
many recent works on fair representations.
|
We construct a bi-Hamiltonian structure for the holomorphic spin Sutherland
hierarchy based on collective spin variables. The construction relies on
Poisson reduction of a bi-Hamiltonian structure on the holomorphic cotangent
bundle of GL(n,C), which itself arises from the canonical symplectic structure
and the Poisson structure of the Heisenberg double of the standard GL(n,C)
Poisson--Lie group. The previously obtained bi-Hamiltonian structures of the
hyperbolic and trigonometric real forms are recovered on real slices of the
holomorphic spin Sutherland model.
|
We introduce the concept of a Gr\"obner nice pair of ideals in a polynomial
ring and we present some applications.
|
The use of the nuclear spins surrounding electron spin qubits as quantum
registers and long-lived memories opens the way to new applications in quantum
information and biological sensing. Hence, there is a need for generic and
robust forms of control of the nuclear registers. Although adiabatic gates are
widely used in quantum information, they can become too slow to outpace
decoherence. Here, we introduce a technique whereby adiabatic gates arise from
the dynamical decoupling protocols that simultaneously extend coherence. We
illustrate this pulse-based adiabatic control for nuclear spins around NV
centers in diamond. We obtain a closed-form expression from Landau-Zener theory
and show that it reliably describes the dynamics. By identifying robust Floquet
states, we show that the technique enables polarisation, one-shot flips and
state storage for nuclear spins. These results introduce a new control paradigm
that combines dynamical decoupling with adiabatic evolution.
|
We prove that in the Cayley graph of any braid group modulo its center
$B_n/Z(B_n)$, equipped with Garside's generating set, the axes of all
pseudo-Anosov braids are strongly contracting. More generally, we consider a
Garside group $G$ of finite type with cyclic center. We prove that in the
Cayley graph of $G/Z(G)$, equipped with the Garside generators, the axis of any
Morse element is strongly contracting. As a consequence, we prove that Morse
elements act loxodromically on the additional length graph of $G$.
|
We establish the existence and uniqueness of weak and renormalized solutions
to a degenerate, hypoelliptic Mean Field Games system with local coupling. An
important step is to obtain $L^{\infty}$-bounds for solutions to a degenerate
Fokker-Planck equation via a De Giorgi-type argument. In particular, we show
existence and uniqueness of weak solutions to Mean Field Games systems with
Lipschitz Hamiltonians. Furthermore, we establish existence and uniqueness of
renormalized solutions for Hamiltonians with quadratic growth. Our approach
relies on the kinetic regularity of hypoelliptic equations obtained by Bouchut
and the work of Porretta on the existence and uniqueness of renormalized
solutions for the Mean Field Game system, in the non-degenerate setting.
|
The design of machines and algorithms capable of learning in a dynamically
changing environment has become an increasingly topical problem with the
increase of the size and heterogeneity of data available to learning systems.
As a consequence, the key issue of Continual Learning has become that of
addressing the stability-plasticity dilemma of connectionist systems, as they
need to adapt their model without forgetting previously acquired knowledge.
Within this context, rehearsal-based methods, i.e., solutions where the
learner exploits a memory to revisit past data, have proven to be very effective,
leading to state-of-the-art performance. In our study, we propose an
analysis of the memory quantity/quality trade-off adopting various data
reduction approaches to increase the number of instances storable in memory. In
particular, we investigate complex instance compression techniques such as deep
encoders, but also trivial approaches such as image resizing and linear
dimensionality reduction. Our findings suggest that the optimal trade-off is
severely skewed toward instance quantity, where rehearsal approaches with
several heavily compressed instances easily outperform state-of-the-art
approaches with the same amount of memory at their disposal. Further, in high
memory configurations, deep approaches extracting spatial structure combined
with extreme resizing (of the order of $8\times8$ images) yield the best
results, while in memory-constrained configurations where deep approaches
cannot be used due to their memory requirements during training, Extreme Learning
Machines (ELM) offer a clear advantage.
|
Let $V$ be a finite set. Let $\mathcal{K}$ be a simplicial complex with its
vertices in $V$. In this paper, we discuss some differential calculus on $V$.
We construct some generalized homology groups of $\mathcal{K}$ by using the
differential calculus on $V$. Moreover, we define a co-simplicial complex to be
the complement of a simplicial complex in the complete hypergraph on $V$. Let
$\mathcal{L}$ be a co-simplicial complex with its vertices in $V$. We construct
some generalized cohomology groups of $\mathcal{L}$ by using the differential
calculus on $V$.
|
In stationary subspace analysis (SSA) one assumes that the observable
p-variate time series is a linear mixture of a k-variate nonstationary time
series and a (p-k)-variate stationary time series. The aim is then to estimate
the unmixing matrix which transforms the observed multivariate time series onto
stationary and nonstationary components. In the classical approach, multivariate
data are projected onto stationary and nonstationary subspaces by minimizing a
Kullback-Leibler divergence between Gaussian distributions, and the method only
detects nonstationarities in the first two moments. In this paper we consider
SSA in a more general multivariate time series setting and propose SSA methods
which are able to detect nonstationarities in mean, variance and
autocorrelation, or in all of them. Simulation studies illustrate the
performance of the proposed methods, and it is shown that especially the method
that detects all three types of nonstationarities performs well in various time
series settings. The paper is concluded with an illustrative example.
|
Graph neural networks (GNNs) are a powerful architecture for tackling graph
learning tasks, yet have been shown to be oblivious to eminent substructures
such as cycles. We present TOGL, a novel layer that incorporates global
topological information of a graph using persistent homology. TOGL can be
easily integrated into any type of GNN and is strictly more expressive in terms
of the Weisfeiler-Lehman graph isomorphism test. Augmenting GNNs with our layer
leads to improved predictive performance for graph and node classification
tasks, both on synthetic data sets (which can be classified by humans using
their topology but not by ordinary GNNs) and on real-world data.
|
This paper introduces the Gene Mover's Distance, a measure of similarity
between a pair of cells based on their gene expression profiles obtained via
single-cell RNA sequencing. The underlying idea of the proposed distance is to
interpret the gene expression array of a single cell as a discrete probability
measure. The distance between two cells is hence computed by solving an Optimal
Transport problem between the two corresponding discrete measures. In the
Optimal Transport model, we use two types of cost function for measuring the
distance between a pair of genes. The first cost function exploits a gene
embedding, called gene2vec, which is used to map each gene to a high
dimensional vector: the cost of moving a unit of mass of gene expression from a
gene to another is set to the Euclidean distance between the corresponding
embedded vectors. The second cost function is based on a Pearson distance among
pairs of genes. In both cost functions, the more two genes are correlated, the
lower their distance. We exploit the Gene Mover's Distance to solve two
classification problems: the classification of cells according to their
condition and according to their type. To assess the impact of our new metric,
we compare the performance of a $k$-Nearest Neighbor classifier using
different distances. The computational results show that the Gene Mover's
Distance is competitive with the state-of-the-art distances used in the
literature.
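A minimal sketch of this computation with the POT optimal-transport library (the random vectors below are stand-ins for gene2vec embeddings and real expression profiles):

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def gene_movers_distance(expr_a, expr_b, gene_vecs):
    """Treat each expression array as a discrete probability measure and
    return the optimal transport cost between them, with ground cost equal
    to the Euclidean distance between gene embeddings."""
    a = expr_a / expr_a.sum()                    # normalize to probability measures
    b = expr_b / expr_b.sum()
    cost = ot.dist(gene_vecs, gene_vecs, metric="euclidean")
    return ot.emd2(a, b, cost)                   # optimal transport cost

rng = np.random.default_rng(0)
n_genes = 50
gene_vecs = rng.normal(size=(n_genes, 16))       # stand-in for gene2vec embeddings
cell1, cell2 = rng.random(n_genes), rng.random(n_genes)
print(gene_movers_distance(cell1, cell2, gene_vecs))
```

The resulting distance can then serve as the metric in the $k$-Nearest Neighbor experiments described above.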
|
Nearby, low-metallicity dwarf starburst galaxies hosting active galactic
nuclei (AGNs) offer the best local analogs to study the early evolution of
galaxies and their supermassive black holes (BHs). Here we present a detailed
multi-wavelength investigation of star formation and BH activity in the
low-metallicity dwarf-dwarf galaxy merger Mrk 709. Using Hubble Space Telescope
H$\alpha$ and continuum imaging combined with Keck spectroscopy, we determine
that the two dwarf galaxies are likely in the early stages of a merger (i.e.,
their first pass) and discover a spectacular $\sim 10$ kpc-long string of young
massive star clusters ($t \lesssim 10$ Myr; $M_\star \gtrsim 10^5~M_\odot$)
between the galaxies triggered by the interaction. We find that the southern
galaxy, Mrk 709 S, is undergoing a clumpy mode of star formation resembling
that seen in high-redshift galaxies, with multiple young clusters/clumps having
stellar masses between $10^7$ and $10^8~M_\odot$. Furthermore, we present
additional evidence for a low-luminosity AGN in Mrk 709 S (first identified by
Reines et al. 2014 (arXiv:1405.0278) using radio and X-ray observations),
including the detection of the coronal [Fe X] optical emission line. The work
presented here provides a unique glimpse into processes key to hierarchical
galaxy formation and BH growth in the early Universe.
|
We present a field theoretical description of quarkyonic matter consisting of
quark, nucleon and ghost fields coupling to mesonic degrees of freedom. The
ghosts are present to cancel over-counting of nucleon states that are Pauli
blocked by the quark Fermi sea. Such a theory becomes an effective field theory
of nucleons at low baryon density, and as such will reproduce nucleonic matter
phenomenology. This theory can accommodate chiral symmetry restoration and the
dynamical generation of a shell of nucleons at the Fermi surface. It is valid
for finite temperature and density. In such a theory, quark-nucleon duality is
accomplished by inclusion of ghost fields so that the nucleons' extra degrees of
freedom, that are beyond those of quarks, are compensated by the ghost fields.
|
The manuscript contains elegant extensions of the fundamental variational
principles of Br\o ndsted and Ekeland. On the other hand, we obtain general and
precise versions of the Takahashi and Caristi fixed point theorems. The results
are based on the notion of distance for uniform spaces.
|
Topology effects have been extensively studied and confirmed in strongly
correlated condensed matter physics. In the large color number limit of QCD,
baryons can be regarded as topological objects -- skyrmions -- and the baryonic
matter can be regarded as a skyrmion matter. We review in this paper the
generalized effective field theory for dense compact-star matter constructed
with the robust inputs obtained from the skyrmion approach to dense nuclear
matter, relying on possible ``emergent'' scale and local flavor symmetries at
high density. All nuclear matter properties from the saturation density $n_0$
up to several times $n_0$ can be fairly well described. A uniquely novel -- and
unorthodox -- feature of this theory is the precocious appearance of the
pseudo-conformal sound velocity $v^2_{s}/c^2 \approx 1/3$, with the
non-vanishing trace of the energy momentum tensor of the system. The topology
change encoded in the density scaling of low energy constants is interpreted as
the quark-hadron continuity in the sense of Cheshire Cat Principle (CCP) at
density $\gtrsim 2n_0$ in accessing massive compact stars. We confront the
approach with the data from GW170817 and GW190425.
|
The COVID-19 pandemic is a global crisis that has been testing every society
and exposing the critical role of local politics in crisis response. In the
United States, there has been a strong partisan divide between the Democratic
and Republican party's narratives about the pandemic which resulted in
polarization of individual behaviors and divergent policy adoption across
regions. As shown in this case, as well as in most major social issues,
strongly polarized narrative frameworks facilitate such narratives. To
understand polarization and other social chasms, it is critical to dissect
these diverging narratives. Here, taking the Democratic and Republican
political social media posts about the pandemic as a case study, we demonstrate
that a combination of computational methods can provide useful insights into
the different contexts, framing, and characters and relationships that
construct their narrative frameworks which individual posts source from.
Leveraging a dataset of tweets from elite politicians in the U.S., we found
that the Democrats' narrative tends to be more concerned with the pandemic as
well as financial and social support, while the Republicans discuss other
political entities, such as China, more often. We then perform an automatic framing
analysis to characterize the ways in which they frame their narratives, where
we found that the Democrats emphasize the government's role in responding to
the pandemic, and the Republicans emphasize the roles of individuals and
support for small businesses. Finally, we present a semantic role analysis that
uncovers the important characters and relationships in their narratives as well
as how they facilitate a membership categorization process. Our findings
concretely expose the gaps in the "elusive consensus" between the two parties.
Our methodologies may be applied to computationally study narratives in various
domains.
|
Closed-loop control systems employ continuous sensing and actuation to
maintain controlled variables within preset bounds and achieve the desired
system output. Intentional disturbances in the system, such as in the case of
cyberattacks, can compromise reachability of control goals, and in several
cases jeopardize safety. The increasing connectivity and exposure of networked
control to external networks has enabled attackers to compromise these systems
by exploiting security vulnerabilities. Attacks against safety-critical control
loops can not only drive the system over a trajectory different from the
desired, but also cause fatal consequences to humans. In this paper we present
a physics-based Intrusion Detection System (IDS) aimed at increasing the
security in control systems. In addition to conventional process state
estimation for intrusion detection, since the controller cannot be trusted, we
introduce a controller state estimator. Additionally, we make our detector
context-aware by utilizing sensor measurements from other control loops, which
allows us to distinguish and characterize disturbances from attacks. We introduce
adaptive thresholding and adaptive filtering as means to achieve
context-awareness. Together, these methodologies allow detection and
localization of attacks in closed-loop controls. Finally, we demonstrate
feasibility of the approach by mounting a series of attacks against a networked
Direct Current (DC) motor closed-loop speed control deployed on an ECU testbed,
as well as on a simulated automated lane keeping system. Among other
application domains, this set of approaches is key to support security in
automotive systems, and ultimately increase road and passenger safety.
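A toy sketch of the adaptive-thresholding component (the residual model, window length, and sensitivity factor are illustrative; the paper's detector also fuses controller-state estimates and context from other control loops):

```python
import numpy as np

def adaptive_threshold_alarms(residuals, window=50, k=4.0):
    """Flag samples whose residual (measured minus estimated state) exceeds
    a threshold adapted to the recent residual mean and spread."""
    alarms = np.zeros(len(residuals), dtype=bool)
    for t in range(1, len(residuals)):
        recent = residuals[max(0, t - window):t]
        threshold = np.abs(recent).mean() + k * (recent.std() + 1e-9)
        alarms[t] = abs(residuals[t]) > threshold
    return alarms

rng = np.random.default_rng(1)
res = rng.normal(0.0, 0.1, 300)
res[200:210] += 1.5                       # injected attack-like deviation
print(np.nonzero(adaptive_threshold_alarms(res))[0])
```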
|
Current dialogue summarization systems usually encode the text with a number
of general semantic features (e.g., keywords and topics) to gain more powerful
dialogue modeling capabilities. However, these features are obtained via
open-domain toolkits that are dialog-agnostic or heavily reliant on human
annotations. In this paper, we show how DialoGPT, a pre-trained model for
conversational response generation, can be developed as an unsupervised
dialogue annotator, which takes advantage of dialogue background knowledge
encoded in DialoGPT. We apply DialoGPT to label three types of features on two
dialogue summarization datasets, SAMSum and AMI, and employ pre-trained and
non-pre-trained models as our summarizers. Experimental results show that our
proposed method can obtain remarkable improvements on both datasets and
achieves new state-of-the-art performance on the SAMSum dataset.
|
It is common knowledge that leverage can increase the potential returns of an
investment, at the expense of increased risk. For a passive investor in the
stock market, leverage can be achieved using margin debt or leveraged-ETFs. We
perform bootstrapped Monte-Carlo simulations of leveraged (and unleveraged)
mixed portfolios of stocks and bonds, based on past stock market data, and show
that leverage can amplify the potential returns, without significantly
increasing the risk for long-term investors.
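A minimal sketch of such a bootstrapped simulation (the return history, leverage, and borrowing rate below are illustrative stand-ins for the paper's historical data and portfolio choices):

```python
import numpy as np

def bootstrap_leverage(monthly_returns, leverage=2.0, borrow_rate=0.01 / 12,
                       horizon=240, n_sims=10000, seed=0):
    """Bootstrapped Monte-Carlo: resample historical monthly returns with
    replacement and compound a leveraged portfolio, paying interest on the
    borrowed fraction (rebalanced monthly, as with a leveraged ETF)."""
    rng = np.random.default_rng(seed)
    samples = rng.choice(monthly_returns, size=(n_sims, horizon), replace=True)
    lev_returns = leverage * samples - (leverage - 1) * borrow_rate
    return np.prod(1 + lev_returns, axis=1)       # terminal wealth multiples

# Stand-in return history; the paper uses actual past stock market data.
hist = np.random.default_rng(42).normal(0.007, 0.045, 12 * 50)
terminal = bootstrap_leverage(hist)
print(f"median terminal wealth: {np.median(terminal):.2f}x, "
      f"5th percentile: {np.percentile(terminal, 5):.2f}x")
```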
|
In this paper, we inaugurate the field of quantum fair machine learning. We
undertake a comparative analysis of differences and similarities between
classical and quantum fair machine learning algorithms, specifying how the
unique features of quantum computation alter measures, metrics and remediation
strategies when quantum algorithms are subject to fairness constraints. We
present the first results in quantum fair machine learning by demonstrating the
use of Grover's search algorithm to satisfy statistical parity constraints
imposed on quantum algorithms. We provide lower bounds on the number of iterations needed to
achieve such statistical parity within $\epsilon$-tolerance. We extend
canonical Lipschitz-conditioned individual fairness criteria to the quantum
setting using quantum metrics. We examine the consequences for typical measures
of fairness in a machine learning context when quantum information processing and
quantum data are involved. Finally, we propose open questions and research
programmes for this new field of interest to researchers in computer science,
ethics and quantum computation.
|
Constraint-based recommenders support users in the identification of items
(products) fitting their wishes and needs. Example domains are financial
services and electronic equipment. In this paper we show how divide-and-conquer
based (direct) diagnosis algorithms (no conflict detection is needed) can be
exploited in constraint-based recommendation scenarios. In this context, we
provide an overview of the MediaWiki-based recommendation environment WeeVis.
|
The level of star formation in elliptical galaxies is poorly constrained, due
to difficulties in quantifying the contamination of flux-based estimates of
star formation from unrelated phenomena, such as AGN and old stellar
populations. We here utilise core-collapse supernovae (CCSNe) as unambiguous
tracers of recent star formation in ellipticals within a cosmic volume. We
firstly isolate a sample of 421 z < 0.2, r < 21.8 mag CCSNe from the SDSS-II
Supernova Survey. We then introduce a Bayesian method of identifying
ellipticals via their colours and morphologies in a manner unbiased by redshift
and yet consistent with manual classification from Galaxy Zoo 1. We find ~ 25 %
of z < 0.2, r < 20 mag galaxies in the Stripe 82 region are ellipticals (~ 28000
galaxies). In total, 36 CCSNe are found to reside in ellipticals. We
demonstrate that such early-types contribute a non-negligible fraction of star
formation to the present-day cosmic budget, at 11.2 $\pm$ 3.1 (stat)
$^{+3.0}_{-4.2}$ (sys) %. Coupling this result with the galaxy stellar mass
function of ellipticals, the mean specific star formation rate (SSFR;
$\overline{S}$) of these systems is derived. The best-fit slope is given by log
($\overline{S}(M)$/yr) = - (0.80 $\pm$ 0.59) log ($M/10^{10.5}\rm{M}_{\odot}$)
- 10.83 $\pm$ 0.18. The mean SSFR for all log ($M/\rm{M}_{\odot}$) > 10.0
ellipticals is found to be $\overline{S} = 9.2 \pm 2.4$ (stat) $^{+2.7}_{-2.3}$
(sys) $\times 10^{-12}$ yr$^{-1}$, which is consistent with recent estimates
via SED-fitting, and is 11.8 $\pm$ 3.7 (stat) $^{+3.5}_{-2.9}$ (sys) % of the
mean SSFR level on the main sequence as also derived from CCSNe. We find the
median optical spectrum of elliptical CCSN hosts is statistically consistent
with that of a control sample of ellipticals that do not host CCSNe, implying
that these SN-derived results are well-representative of the total low-z
elliptical population.
|
We present a novel Material Point Method (MPM) discretization of surface
tension forces that arise from spatially varying surface energies. These
variations typically arise from surface energy dependence on temperature and/or
concentration. Furthermore, since the surface energy is an interfacial property
depending on the types of materials on either side of an interface, spatial
variation is required for modeling the contact angle at the triple junction
between a liquid, solid and surrounding air. Our discretization is based on the
surface energy itself, rather than on the associated traction condition most
commonly used for discretization with particle methods. Our energy-based
approach automatically captures surface gradients without the explicit need to
resolve them as in traction condition based approaches. We include an implicit
discretization of thermomechanical material coupling with a novel
particle-based enforcement of Robin boundary conditions associated with
convective heating. Lastly, we design a particle resampling approach needed to
achieve perfect conservation of linear and angular momentum with
Affine Particle-In-Cell (APIC) [Jiang et al. 2015]. We show that our approach
enables implicit time stepping for complex behaviors like the Marangoni effect
and hydrophobicity/hydrophilicity. We demonstrate the robustness and utility of
our method by simulating materials that exhibit highly diverse degrees of
surface tension and thermomechanical effects, such as water, wine and wax.
|
We consider the lower-triangular matrix of generating polynomials that
enumerate $k$-component forests of rooted trees on the vertex set $[n]$
according to the number of improper edges (generalizations of the Ramanujan
polynomials). We show that this matrix is coefficientwise totally positive and
that the sequence of its row-generating polynomials is coefficientwise
Hankel-totally positive. More generally, we define the generic rooted-forest
polynomials by introducing also a weight $m! \, \phi_m$ for each vertex with
$m$ proper children. We show that if the weight sequence $\phi$ is
Toeplitz-totally positive, then the two foregoing total-positivity results
continue to hold. Our proofs use production matrices and exponential Riordan
arrays.
|
The state of the art in video super-resolution (SR) consists of techniques based
on deep learning, but these perform poorly on real-world videos (see Figure 1). The
reason is that training image-pairs are commonly created by downscaling a
high-resolution image to produce a low-resolution counterpart. Deep models are
therefore trained to undo downscaling and do not generalize to super-resolving
real-world images. Several recent publications present techniques for improving
the generalization of learning-based SR, but are all ill-suited for real-time
application.
We present a novel approach to synthesizing training data by simulating two
digital-camera image-capture processes at different scales. Our method produces
image-pairs in which both images have properties of natural images. Training an
SR model using this data leads to far better generalization to real-world
images and videos.
In addition, deep video-SR models are characterized by a high
operations-per-pixel count, which prohibits their application in real-time. We
present an efficient CNN architecture, which enables real-time application of
video SR on low-power edge-devices. We split the SR task into two sub-tasks: a
control-flow which estimates global properties of the input video and adapts
the weights and biases of a processing-CNN that performs the actual processing.
Since the processing-CNN is tailored to the statistics of the input, its capacity
is kept low, while retaining effectiveness. Also, since video-statistics evolve
slowly, the control-flow operates at a much lower rate than the video
frame-rate. This reduces the overall computational load by as much as two
orders of magnitude. This framework of decoupling the adaptivity of the
algorithm from the pixel processing, can be applied in a large family of
real-time video enhancement applications, e.g., video denoising, local
tone-mapping, stabilization, etc.
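A toy sketch of the control-flow/processing split (the layer sizes, modulation scheme, and statistics vector are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class AdaptiveSRNet(nn.Module):
    """Sketch: a low-rate controller maps global video statistics to
    per-channel scales and biases that modulate a small per-frame
    processing-CNN performing 2x super-resolution."""
    def __init__(self, channels=16, stats_dim=8):
        super().__init__()
        self.controller = nn.Sequential(          # runs at a low rate
            nn.Linear(stats_dim, 32), nn.ReLU(), nn.Linear(32, 2 * channels))
        self.conv1 = nn.Conv2d(3, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, 3 * 4, 3, padding=1)
        self.upsample = nn.PixelShuffle(2)        # rearranges to 2x resolution

    def forward(self, frame, video_stats):
        scale, bias = self.controller(video_stats).chunk(2, dim=-1)
        h = torch.relu(self.conv1(frame))
        h = h * scale[..., None, None] + bias[..., None, None]  # modulation
        return self.upsample(self.conv2(h))

net = AdaptiveSRNet()
frame = torch.rand(1, 3, 64, 64)
stats = torch.rand(1, 8)                          # e.g., noise level, blur, ...
print(net(frame, stats).shape)                    # -> (1, 3, 128, 128)
```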
|
This article presents a bidirectional type system for the Calculus of
Inductive Constructions (CIC). It introduces a new judgement intermediate
between the usual inference and checking, dubbed constrained inference, to
handle the presence of computation in types. The key property of the system is
its completeness with respect to the usual undirected one, which has been
formally proven in Coq as a part of the MetaCoq project. Although it plays an
important role in an ongoing completeness proof for a realistic typing
algorithm, the interest of bidirectionality is wider, as it gives insights and
structure when trying to prove properties on CIC or design variations and
extensions.
|
Federated learning allows distributed medical institutions to collaboratively
learn a shared prediction model with privacy protection. At clinical
deployment, however, models trained by federated learning can still suffer a
performance drop when applied to completely unseen hospitals outside the
federation. In this paper, we point out and solve a novel problem setting of
federated domain generalization (FedDG), which aims to learn a federated model
from multiple distributed source domains such that it can directly generalize
to unseen target domains. We present a novel approach, named Episodic
Learning in Continuous Frequency Space (ELCFS), for this problem by enabling
each client to exploit multi-source data distributions under the challenging
constraint of data decentralization. Our approach transmits the distribution
information across clients in a privacy-protecting way through an effective
continuous frequency space interpolation mechanism. With the transferred
multi-source distributions, we further carefully design a boundary-oriented
episodic learning paradigm to expose the local learning to domain distribution
shifts and particularly meet the challenges of model generalization in medical
image segmentation scenario. The effectiveness of our method is demonstrated
with superior performance over state-of-the-art methods and in-depth ablation
experiments on two medical image segmentation tasks. The code is available at
"https://github.com/liuquande/FedDG-ELCFS".
|
Molecular adhesion promoters are a central component of modern coating
systems for the corrosion protection of structural materials. They are
interface active and form ultrathin corrosion-inhibiting and adhesion-promoting
layers. Here we utilize thiol-based self-assembled monolayers (SAMs) as a model
system for demonstrating a comprehensive combinatorial approach to understand
molecular level corrosion protection mechanisms under anodic polarization.
Specifically, we compare hydrophilic 11-Mercapto-1-undecanol and hydrophobic
1-Undecanethiol SAMs and their gold-dissolution inhibiting properties. We can
show that the intermolecular forces (hydrophobic vs hydrophilic effects)
control how SAM layers perform under oxidative conditions. Specifically, using
\textit{in situ} electrochemical AFM and a scanning-flow cell coupled to an
ICP-MS, a complementary view of both corrosion resistance and changes
in surface morphology/adhesion of the SAM is possible. Protection from
oxidative dissolution is higher with hydrophobic SAMs, which detach under
micelle formation, while the hydrophilic SAM exhibits lower protective effects
on gold dissolution rates, although it stays intact as highly mobile layer
under anodic polarization. The developed multi-technique approach will prove
useful for studying the interfacial activity and corrosion suppression
mechanism of inhibiting molecules on other metals and alloys.
|
We report on a novel material, namely two-dimensional (2D)
V$_{1-x}$Pt$_x$Se$_2$ alloy, exhibiting simultaneously ferromagnetic order and
Rashba spin-orbit coupling. While ferromagnetism is absent in 1T-VSe$_2$ due to
the competition with the charge density wave phase, we demonstrate
theoretically and experimentally that the substitution of vanadium by platinum
in VSe$_2$ (10-50 %) to form a homogeneous 2D alloy restores ferromagnetic
order with Curie temperatures of 6 K for 5 monolayers and 25 K for one
monolayer of V$_{0.65}$Pt$_{0.35}$Se$_2$. Moreover, the presence of platinum
atoms gives rise to Rashba spin-orbit coupling in (V,Pt)Se$_2$ providing an
original platform to study the interplay between ferromagnetism and spin-orbit
coupling in the 2D limit.
|
The present work clarifies a failure of the effective-field theory in
predicting a false spontaneous long-range order and phase transition of Ising
nanoparticles, nanoislands, nanotubes and nanowires with either zero- or
one-dimensional magnetic dimensionality. It is conjectured that the standard
formulation of the effective-field theory due to Honmura and Kaneyoshi
generally predicts for the Ising spin systems a spurious spontaneous long-range
order with nonzero critical temperature regardless of their magnetic
dimensionality whenever at least one Ising spin has coordination number greater
than two. The failure of the effective-field theory is exemplified by a few
paradigmatic exactly solved examples of zero- and one-dimensional Ising
nanosystems: star, cube, decorated hexagon, star of David, branched chain,
sawtooth chain, two-leg and hexagonal ladders. The presented exact solutions
illustrate the suitability of a few rigorous analytical methods for the exact treatment
of the Ising nanosystems: exact enumeration, graph-theoretical approach,
transfer-matrix method and decoration-iteration transformation. The paper also
provides a substantial survey of the scientific literature, in which the
effective-field theory led to a false prediction of the spontaneous long-range
ordering and phase transition.
|
Let $L = \Delta + V$ be a Schr{\"o}dinger operator with a non-negative
potential $V$ on a complete Riemannian manifold $M$. We prove that the conical
square functional associated with $L$ is bounded on $L^p$ under different
assumptions. This functional is defined by $$ \mathcal{G}_L (f) (x) = \left(
\int_0^\infty \int_{B(x,t^{1/2})} |\nabla e^{-tL} f(y)|^2 + V |e^{-tL} f(y)|^2
\frac{\mathrm{d}t \, \mathrm{d}y}{Vol(y,t^{1/2})} \right)^{1/2}. $$ For $p \in
[2,+\infty)$ we show that it is sufficient to assume that the manifold has the
volume doubling property, whereas for $p \in (1,2)$ we need extra assumptions of
$L^p$-$L^2$ off-diagonal estimates for $\{ \sqrt{t} \nabla e^{-tL}, t\geq 0 \}$
and $\{ \sqrt{t} \sqrt{V} e^{-tL} , t \geq 0\}$. Given a bounded holomorphic
function $F$ on some angular sector, we introduce the generalized conical
vertical square functional $$ \mathcal{G}_L^F (f) (x) = \left( \int_0^\infty
\int_{B(x,t^{1/2})} |\nabla F(tL) f(y)|^2 + V |F(tL) f(y)|^2 \frac{\mathrm{d}t
\, \mathrm{d}y}{Vol(y,t^{1/2})} \right)^{1/2} $$ and prove its boundedness on $L^p$
if $F$ has sufficient decay at zero and infinity. We also consider conical
square functions associated with the Poisson semigroup, lower bounds, and make
a link with the Riesz transform.
|
The use of a policy and a heuristic function for guiding search can be quite
effective in adversarial problems, as demonstrated by AlphaGo and its
successors, which are based on the PUCT search algorithm. While PUCT can also
be used to solve single-agent deterministic problems, it lacks guarantees on
its search effort and it can be computationally inefficient in practice.
Combining the A* algorithm with a learned heuristic function tends to work
better in these domains, but A* and its variants do not use a policy. Moreover,
the purpose of using A* is to find solutions of minimum cost, while we seek
instead to minimize the search loss (e.g., the number of search steps). LevinTS
is guided by a policy and provides guarantees on the number of search steps
that relate to the quality of the policy, but it does not make use of a
heuristic function. In this work we introduce Policy-guided Heuristic Search
(PHS), a novel search algorithm that uses both a heuristic function and a
policy and has theoretical guarantees on the search loss that relate to both
the quality of the heuristic and of the policy. We show empirically on the
sliding-tile puzzle, Sokoban, and a puzzle from the commercial game `The
Witness' that PHS enables the rapid learning of both a policy and a heuristic
function and compares favorably with A*, Weighted A*, Greedy Best-First Search,
LevinTS, and PUCT in terms of number of problems solved and search time in all
three domains tested.
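For illustration, a hedged sketch of a best-first search guided by both a policy and a heuristic; the priority (g + h)/pi used here is one plausible combination and not necessarily the exact PHS evaluation function:

```python
import heapq
import itertools

def policy_heuristic_search(start, successors, is_goal, policy, heuristic):
    """Best-first search whose priority combines path cost g, heuristic h,
    and the path probability pi under the policy: expand the node with the
    smallest (g + h) / pi. successors(s) yields (action, child, cost)."""
    tie = itertools.count()                        # tiebreaker for the heap
    frontier = [(heuristic(start), next(tie), 0.0, 1.0, start, [start])]
    closed = set()
    while frontier:
        _, _, g, pi, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        if state in closed:
            continue
        closed.add(state)
        for action, child, cost in successors(state):
            child_pi = pi * policy(state, action)  # path probability so far
            child_g = g + cost
            prio = (child_g + heuristic(child)) / max(child_pi, 1e-12)
            heapq.heappush(frontier, (prio, next(tie), child_g, child_pi,
                                      child, path + [child]))
    return None

# Toy 1-D example: walk from 0 to 5 with a uniform policy over {-1, +1}.
succ = lambda s: [("+1", s + 1, 1), ("-1", s - 1, 1)]
print(policy_heuristic_search(0, succ, lambda s: s == 5,
                              lambda s, a: 0.5, lambda s: abs(5 - s)))
```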
|
The two-dimensional thermoelastic problem of an adiabatic cavity in an
infinite isotropic homogeneous medium subjected to uniform heat flux is
studied, where the shape of the cavity is characterized by the Laurent
polynomial. By virtue of a novel tactic, the obtained K-M potentials can be
explicitly worked out to satisfy the boundary conditions precisely, and the
possible translation of the cavity is also available. The new and explicit
analytical solutions are compared with those reported in the literature, and
some serious problems are found and corrected. Finally, some discussions on the
thermal stress concentration around the tips of three typical cavities are
provided.
|
Recent investigations have been carried out on critical analyses of beginning
physics teachers confronted with questionable explanations. These studies raise
the question of the choices made by teachers for their teaching once they have
become aware of one (or more) flaw(s) in the explanations analysed. This
presentation will focus on the possible conflicts, for beginning teachers,
between various selection criteria declared for their explanations, including:
appropriateness (from the points of view of internal coherence, logical
completeness and compliance with accepted physical laws) and simplicity. It
will introduce the definition of "mathematical efficiency" as a criterion for
assessing an explanation, on the basis of a study whose results show that it is
a priority for some EDs even at the cost of coherence and of simplicity. The
article concludes with the implications of the issues addressed for research
and teacher training.
|
A covariant, scalar-tensor gravity is constructed such that the static,
spherically symmetric Rezzolla-Zhidenko metric is an exact solution to the
theory. The equations describing gravitational perturbations of this spacetime,
which represents a generic black hole possessing an arbitrary number of hairs,
can then be derived. This allows for a self-consistent study of the associated
quasinormal modes. It is shown that mode spectra are tied to not only the
non-Einstein parameters in the metric but also to those that appear at the
level of the action, and that different branches of the exact theory can, in
some cases, predict significantly different oscillation frequencies and damping
times. For choices which make the theory appear more like general relativity in
some precise sense, we find that a nontrivial Rezzolla-Zhidenko parameter space
is permissible under current constraints on fundamental ringdown modes observed
by Advanced LIGO.
|
We perform a study of $W$-boson production in polarized proton-proton
collisions through next-to-next-to-leading order (NNLO) in perturbative QCD.
This calculation is required to extend the extraction of polarized parton
distribution functions to NNLO accuracy. We present differential distributions
at $\sqrt{s}=510$ GeV, relevant for comparison to measurements from the
Relativistic Heavy Ion Collider (RHIC). The NNLO QCD corrections significantly
reduce the scale dependence of the cross section. We compare the longitudinal
single-spin asymmetries as a function of lepton pseudorapidity to RHIC data.
The asymmetries exhibit excellent stability under perturbative QCD corrections.
|
Nisan and Szegedy (CC 1994) showed that any Boolean function
$f:\{0,1\}^n\rightarrow \{0,1\}$ that depends on all its input variables, when
represented as a real-valued multivariate polynomial $P(x_1,\ldots,x_n)$, has
degree at least $\log n - O(\log \log n)$. This was improved to a tight $(\log
n - O(1))$ bound by Chiarelli, Hatami and Saks (Combinatorica 2020). Similar
statements are also known for other Boolean function complexity measures such
as Sensitivity (Simon (FCT 1983)), Quantum query complexity, and Approximate
degree (Ambainis and de Wolf (CC 2014)).
In this paper, we address this question for \emph{Probabilistic degree}. The
function $f$ has probabilistic degree at most $d$ if there is a random
real-valued polynomial of degree at most $d$ that agrees with $f$ at each input
with high probability. Our understanding of this complexity measure is
significantly weaker than those above: for instance, we do not even know the
probabilistic degree of the OR function, the best-known bounds put it between
$(\log n)^{1/2-o(1)}$ and $O(\log n)$ (Beigel, Reingold, Spielman (STOC 1991);
Tarui (TCS 1993); Harsha, Srinivasan (RSA 2019)).
Here we can give a near-optimal understanding of the probabilistic degree of
$n$-variate functions $f$, \emph{modulo} our lack of understanding of the
probabilistic degree of OR. We show that if the probabilistic degree of OR is
$(\log n)^c$, then the minimum possible probabilistic degree of such an $f$ is
at least $(\log n)^{c/(c+1)-o(1)}$, and we show this is tight up to $(\log
n)^{o(1)}$ factors.
|
A preference based multi-objective evolutionary algorithm is proposed for
generating solutions in an automatically detected knee point region. It is
named Automatic Preference based DI-MOEA (AP-DI-MOEA), where DI-MOEA stands for
Diversity-Indicator based Multi-Objective Evolutionary Algorithm. AP-DI-MOEA
has two main characteristics: firstly, it generates the preference region
automatically during the optimization; secondly, it concentrates the solution
set in this preference region. Moreover, the real-world vehicle fleet
maintenance scheduling optimization (VFMSO) problem is formulated, and a
customized multi-objective evolutionary algorithm (MOEA) is proposed to
optimize maintenance schedules of vehicle fleets based on the predicted failure
distribution of the components of cars. Furthermore, the customized MOEA for
VFMSO is combined with AP-DI-MOEA to find maintenance schedules in the
automatically generated preference region. Experimental results on
multi-objective benchmark problems and our three-objective real-world
application problems show that the newly proposed algorithm can generate the
preference region accurately and that it can obtain better solutions in the
preference region. Notably, in many cases, under the same budget, the Pareto
optimal solutions obtained by AP-DI-MOEA dominate solutions obtained by MOEAs
that pursue the entire Pareto front.
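A small sketch of one common way to locate a knee point on a two-objective front, as the point farthest from the line through the two extreme points (AP-DI-MOEA's construction of the preference region around the knee may differ):

```python
import numpy as np

def knee_point(front):
    """front: (n, 2) array of nondominated objective vectors. Returns the
    point with maximum perpendicular distance to the line joining the two
    extreme points of the front."""
    front = front[np.argsort(front[:, 0])]
    p1, p2 = front[0], front[-1]                   # extreme points
    d = (p2 - p1) / np.linalg.norm(p2 - p1)
    rel = front - p1
    dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])  # distance to the line
    return front[np.argmax(dist)]

f1 = np.linspace(0.05, 1.0, 50)
front = np.column_stack([f1, 0.09 / f1])           # convex front with a knee
print(knee_point(front))
```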
|
Sparse regression has recently been applied to enable transfer learning from
very limited data. We study an extension of this approach to unsupervised
learning -- in particular, learning word embeddings from unstructured text
corpora using low-rank matrix factorization. Intuitively, when transferring
word embeddings to a new domain, we expect that the embeddings change for only
a small number of words -- e.g., the ones with novel meanings in that domain.
We propose a novel group-sparse penalty that exploits this sparsity to perform
transfer learning when there is very little text data available in the target
domain -- e.g., a single article of text. We prove generalization bounds for
our algorithm. Furthermore, we empirically evaluate its effectiveness, both in
terms of prediction accuracy in downstream tasks as well as the
interpretability of the results.
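A minimal sketch of this idea for matrix-factorization embeddings: fit the target co-occurrence statistics while a row-wise group-lasso penalty keeps most word vectors pinned to their source-domain values (the objective, stand-in data, and proximal solver here are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def transfer_embeddings(X, U0, V, lam=50.0, lr=1e-3, steps=500):
    """Minimize ||X - U V^T||_F^2 + lam * sum_w ||U_w - U0_w||_2 over U by
    proximal gradient descent (V held fixed); the group penalty makes only
    a few rows of U deviate from the source embeddings U0."""
    U = U0.copy()
    for _ in range(steps):
        U = U - lr * (U @ V.T - X) @ V          # gradient step on the fit term
        D = U - U0                              # proximal step: shrink row-wise
        norms = np.linalg.norm(D, axis=1, keepdims=True)
        shrink = np.maximum(1 - lr * lam / np.maximum(norms, 1e-12), 0)
        U = U0 + shrink * D
    return U

rng = np.random.default_rng(0)
n_words, dim = 200, 20
U0, V = rng.normal(size=(n_words, dim)), rng.normal(size=(n_words, dim))
X = U0 @ V.T
X[:5] += rng.normal(scale=5.0, size=(5, n_words))  # a few words shift meaning
changed = np.linalg.norm(transfer_embeddings(X, U0, V) - U0, axis=1)
print("most-changed word indices:", np.argsort(-changed)[:5])
```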
|
A novel technique for divided-pulse amplification is presented in a
proof-of-principle experiment. A pulse burst, cut out of the pulse train of a
mode-locked oscillator, is amplified and temporally combined into a single
pulse. High combination efficiency and excellent pulse contrast are
demonstrated. The system is mostly fiber-coupled, enabling a high
interferometric stability. This approach provides access to the amplitude and
phase of the individual pulses in the burst to be amplified, potentially
allowing the compensation of gain saturation and nonlinear phase mismatches
within the burst. Therefore, this technique enables the scaling of the peak
power and pulse energy of pulsed laser systems beyond currently prevailing
limitations.
|
Optical cat state plays an essential role in quantum computation and quantum
metrology. Here, we experimentally quantify quantum coherence of an optical cat
state by means of the relative entropy and the $l_1$ norm of coherence in the Fock
basis, based on an optical cat state prepared at the rubidium D1 line. By transmitting
the optical cat state through a lossy channel, we also demonstrate the
robustness of quantum coherence of optical cat state in the presence of loss,
which is different from the decoherence properties of fidelity and Wigner
function negativity of the optical cat state. Our results confirm that quantum
coherence of optical cat states is robust against loss and pave the way for
applications of optical cat states.
|
Software designers and developers are increasingly relying on application
frameworks as first-class design concepts. They instantiate the services that
frameworks provide to implement various architectural tactics and patterns. One
of the challenges in using frameworks for such tasks is the difficulty of
learning and correctly using frameworks' APIs. This paper introduces a
learning-based approach called ArCode to help novice programmers correctly use
frameworks' APIs to implement architectural tactics and patterns. ArCode has
several novel components: a graph-based approach for learning specification of
a framework from a limited number of training software, a program analysis
algorithm to eliminate erroneous training data, and a recommender module to
help programmers use APIs correctly and identify API misuses in their programs.
We evaluated our technique across two popular frameworks: the JAAS security
framework, used for the authentication and authorization tactic, and the Java RMI
framework, used to enable remote method invocation between client and server and
other object-oriented patterns. Our evaluation results show (i) the feasibility
of using ArCode to learn the specification of a framework; (ii) ArCode
generates accurate recommendations for finding the next API call to implement
an architectural tactic/pattern based on the context of the programmer's code;
(iii) it accurately detects API misuses in the code that implements a
tactic/pattern and provides fix recommendations. Comparison of ArCode with two
prior techniques (MAPO and GrouMiner) on API recommendation and misuse
detection shows that ArCode outperforms these approaches.
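For a rough intuition of next-API recommendation, here is a toy bigram frequency model over observed call sequences. This is far simpler than ArCode's graph-based specification learning, and the JAAS-like call names are purely illustrative.

```python
from collections import Counter, defaultdict

class NextApiRecommender:
    """Toy next-API-call recommender trained on call sequences;
    a plain bigram frequency model, shown only to fix ideas."""

    def __init__(self):
        self.next_counts = defaultdict(Counter)

    def fit(self, call_sequences):
        for seq in call_sequences:
            for a, b in zip(seq, seq[1:]):
                self.next_counts[a][b] += 1

    def recommend(self, last_call, k=3):
        return [api for api, _ in self.next_counts[last_call].most_common(k)]

# Hypothetical JAAS-like call sequences (names are illustrative only).
train = [
    ["LoginContext.<init>", "LoginContext.login", "Subject.getPrincipals"],
    ["LoginContext.<init>", "LoginContext.login", "LoginContext.logout"],
]
rec = NextApiRecommender()
rec.fit(train)
print(rec.recommend("LoginContext.login"))
```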
|
A bipartite network is a graph structure where nodes are from two distinct
domains and only inter-domain interactions exist as edges. A large number of
network embedding methods exist to learn vectorial node representations from
general graphs with both homogeneous and heterogeneous node and edge types,
including some that can specifically model the distinct properties of bipartite
networks. However, these methods are inadequate to model multiplex bipartite
networks (e.g., in e-commerce) that have multiple types of interactions (e.g.,
click, inquiry, and buy) and node attributes. Most real-world multiplex
bipartite networks are also sparse and have imbalanced node distributions that
are challenging to model. In this paper, we develop an unsupervised Dual
HyperGraph Convolutional Network (DualHGCN) model that scalably transforms the
multiplex bipartite network into two sets of homogeneous hypergraphs and uses
spectral hypergraph convolutional operators, along with intra- and
inter-message passing strategies to promote information exchange within and
across domains, to learn effective node embedding. We benchmark DualHGCN using
four real-world datasets on link prediction and node classification tasks. Our
extensive experiments demonstrate that DualHGCN significantly outperforms
state-of-the-art methods, and is robust to varying sparsity levels and
imbalanced node distributions.
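A spectral hypergraph convolution of the kind mentioned above can be sketched with the commonly used normalized-incidence propagation rule. The sizes and weights below are illustrative, and this is a generic layer, not the authors' exact DualHGCN operator.

```python
import numpy as np

def hypergraph_conv(X, H, Theta, edge_w=None):
    """One spectral-style hypergraph convolution layer:
    X: (n, d_in) node features, H: (n, m) incidence matrix,
    Theta: (d_in, d_out) weights. Propagation rule:
    Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta, followed by ReLU."""
    n, m = H.shape
    w = edge_w if edge_w is not None else np.ones(m)
    Dv = H @ w                     # weighted node degrees
    De = H.sum(axis=0)             # edge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(Dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(De, 1e-12))
    A = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)

rng = np.random.default_rng(1)
H = (rng.random((8, 4)) < 0.4).astype(float)   # toy incidence matrix
X = rng.normal(size=(8, 5))
Theta = rng.normal(size=(5, 3))
print(hypergraph_conv(X, H, Theta).shape)      # -> (8, 3)
```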
|
As AI-based medical devices are becoming more common in imaging fields like
radiology and histology, interpretability of the underlying predictive models
is crucial to expand their use in clinical practice. Existing heatmap-based
interpretability methods such as GradCAM only highlight the location of
predictive features but do not explain how they contribute to the prediction.
In this paper, we propose a new interpretability method that can be used to
understand the predictions of any black-box model on images, by showing how the
input image would be modified in order to produce different predictions. A
StyleGAN is trained on medical images to provide a mapping between latent
vectors and images. Our method identifies the optimal direction in the latent
space to create a change in the model prediction. By shifting the latent
representation of an input image along this direction, we can produce a series
of new synthetic images with changed predictions. We validate our approach on
histology and radiology images, and demonstrate its ability to provide
meaningful explanations that are more informative than GradCAM heatmaps. Our
method reveals the patterns learned by the model, which allows clinicians to
build trust in the model's predictions, discover new biomarkers and eventually
reveal potential biases.
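The latent-direction search can be sketched as gradient ascent on the classifier output with respect to a direction vector. The generator and classifier below are stand-in stubs, and gradients are assumed to flow through both (for a strictly black-box classifier, a gradient-free search would be needed); none of this is the paper's exact training setup.

```python
import torch

# Stand-ins for a generator G (a StyleGAN in the paper) and classifier f.
latent_dim = 64
G = torch.nn.Sequential(torch.nn.Linear(latent_dim, 256), torch.nn.Tanh(),
                        torch.nn.Linear(256, 32 * 32))   # toy "image"
f = torch.nn.Sequential(torch.nn.Linear(32 * 32, 1), torch.nn.Sigmoid())

z0 = torch.randn(1, latent_dim)                     # latent code of input
u = torch.randn(1, latent_dim, requires_grad=True)  # direction to optimize
opt = torch.optim.Adam([u], lr=1e-2)

for _ in range(200):                      # find a prediction-changing axis
    d = u / u.norm()                      # keep only the direction
    loss = -f(G(z0 + 3.0 * d)).mean()     # push the prediction upward
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                     # morph along the found direction
    d = u / u.norm()
    for t in (-3.0, -1.0, 0.0, 1.0, 3.0):
        print(t, float(f(G(z0 + t * d))))
```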
|
In this survey we present applications of the ideas of complement and
neighborhood in the theory of embeddings of manifolds into Euclidean space (in
codimension at least three). We describe how the combination of these ideas
gives a reduction of embeddability and isotopy problems to algebraic problems.
We present a clarified exposition of the Browder-Levine theorem on
realization of normal systems. Most of the survey is accessible to
non-specialists in the theory of embeddings.
|
We study random quantum circuits and their rate of producing bipartite
entanglement, specifically with respect to the choice of 2-qubit gates and the
order (protocol) in which these are applied. The problem is mapped to a
Markovian process, and we prove that there are large spectral equivalence
classes: different configurations have the same spectrum. Optimal gates and the
protocol that generate entanglement with the fastest theoretically possible
rate are identified. Relaxation towards the asymptotic thermal entanglement
proceeds via a series of phase transitions in the local relaxation rate, which
is a consequence of non-Hermiticity. In particular, non-Hermiticity can cause
the rate to be either faster, or, even more interestingly, slower than
predicted by the matrix eigenvalue gap. This is caused by an exponential in
system size explosion of expansion coefficient sizes resulting in a 'phantom'
eigenvalue, and is due to non-orthogonality of non-Hermitian eigenvectors. We
numerically demonstrate that the phenomenon occurs also in random circuits with
non-optimal generic gates, random U(4) gates, and also without spatial or
temporal randomness, suggesting that it could be of wide importance also in
other non-Hermitian settings, including correlations.
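The role of non-orthogonal eigenvectors is easy to see in a two-dimensional caricature, entirely separate from the circuit setting: when eigenvectors are nearly parallel, the expansion coefficients are huge, so the transient decay rate can differ wildly from the eigenvalue spectrum. The matrix below is an illustrative toy, not a circuit transfer matrix.

```python
import numpy as np

# A non-normal "toy transfer matrix": eigenvalues 0.9 and 0.5, but with
# nearly parallel eigenvectors, so eigen-expansion coefficients explode.
M = np.array([[0.9, 1e6],
              [0.0, 0.5]])

v = np.ones(2) / np.sqrt(2)
prev = v.copy()
for t in range(1, 61):
    cur = M @ prev
    if t in (1, 5, 20, 60):
        rate = np.linalg.norm(cur) / np.linalg.norm(prev)
        print(f"t={t:2d}  local decay/growth rate = {rate:.4f}")
    prev = cur
# Early times are dominated by the huge off-diagonal coupling rather
# than by the spectrum; the asymptotic rate 0.9 sets in only later.
```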
|
Observations made in dusty plasma experiments suggest that an ensemble of
electrically charged solid particles, confined in an elongated trap, develops
structural inhomogeneities. With narrowing the trap the particles tend to form
layers oriented parallel with the trap walls. In this work we present
theoretical and numerical results on the structure of three-dimensional
many-particle systems with screened Coulomb (Yukawa) inter-particle interaction
in the strongly coupled liquid phase, confined in one-dimensional harmonic
trap, forming quasi-2D configurations. Particle density profiles are calculated
by means of the hypernetted chain approximation (HNC), showing clear signs of
layer formation. The mechanism behind the formation of layer structure is
discussed and a method to predict the number of layers is presented. Molecular
dynamics (MD) simulations provide validation of the theoretical results and
detailed microscopic insights.
|
Scientists who study how the brain solves problems have recently verified
that, because of stringent limits on working memory (where the brain solves
problems), students must apply previously well-memorized facts and algorithms
to reliably solve problems of any complexity. This is a
paradigm shift: A change in the fundamental understanding of how the brain
solves problems and how we can best guide students to learn to solve problems
in the physical sciences. One implication is that for students, knowledge of
concepts and big ideas is not sufficient to solve most problems assigned in
physics and chemistry courses for STEM majors. To develop an intuitive sense of
which fundamentals to recall and when, students must first make the
fundamental relationships of a topic recallable with automaticity, then apply
those fundamentals to problems in a variety of distinctive contexts. Based on
these findings, cognitive science has identified strategies that speed learning
and assist in retention of physics and chemistry. Experiments will be suggested
by which instructors can test science-informed methodologies.
|
We report the results of a multi-year spectroscopic and photometric
monitoring campaign of two luminous quasars, PG~0923+201 and PG~1001+291, both
located at the high-luminosity end of the broad-line region (BLR)
size-luminosity relation with optical luminosities above $10^{45}~{\rm
erg~s^{-1}}$. PG~0923+201 is monitored for the first time, and PG~1001+291 was
previously monitored but our campaign has a much longer temporal baseline. We
detect time lags of variations of the broad H$\beta$, H$\gamma$, Fe {\sc ii}
lines with respect to those of the 5100~{\AA} continuum. The velocity-resolved
delay map of H$\beta$ in PG~0923+201 indicates a complicated structure with a
mix of Keplerian disk-like motion and outflow, and the map of H$\beta$ in
PG~1001+291 shows a signature of Keplerian disk-like motion. Assuming a virial
factor of $f_{\rm BLR}=1$ and FWHM line widths, we measure the black hole mass
to be $118_{-16}^{+11}\times 10^7 M_{\odot}$ for PG~0923+201 and
$3.33_{-0.54}^{+0.62}\times 10^7 M_{\odot}$ for PG~1001+291. Their respective
accretion rates are estimated to be $0.21_{-0.07}^{+0.06} \times L_{\rm
Edd}\,c^{-2}$ and $679_{-227}^{+259}\times L_{\rm Edd}\,c^{-2}$, indicating
that PG~0923+201 is a sub-Eddington accretor and PG~1001+291 is a
super-Eddington accretor. While the H$\beta$ time lag of PG~0923+201 agrees
with the size-luminosity relation, the time lag of PG~1001+291 shows a
significant deviation, confirming that in high-luminosity AGN the BLR size
depends on both luminosity and Eddington ratio. Black hole mass estimates from
single AGN spectra will be over-estimated at high luminosities and redshifts if
this effect is not taken into account.
|
This paper deals with the formation of a bound dineutron in the outgoing
channel of the $^{159}$Tb(n,$^2$n)$^{158g}$Tb nuclear reaction, followed by
assumed transformations of the reaction products. Such nuclear processes were
studied in detail from the point of view of the $^{160}$Tb / $^{160}$Dy /
$^{160}$Ho radioactivity versus time dependence. Based on some signs of a
fusion process between heavier nuclei ($^{158}$Tb and/or $^{158}$Gd) and the
deuteron, which is a decay product of the bound dineutron, a mathematical
model comprising three systems of differential equations was developed to
describe the experimental data. This development requires a reasonable
estimate of the half-life of a bound dineutron, whose greatest value was found
to be 5,877 s. We mathematically modeled the experimentally observed delayed
buildup of $^{160}$Tb radioactivity, with a maximum at about 495 d after
completion of the neutron irradiation of the Tb sample, based on the
similarity with parent-daughter radioactive decay and nuclear accumulation
processes.
|
In the present paper, we give some characterizations of $*$-Ricci solitons on
Kenmotsu metrics. We prove that if a Kenmotsu manifold admits an almost
$*$-Ricci soliton whose potential vector field $V$ is a Jacobi field along the
Reeb vector field, then it is a steady $*$-Ricci soliton. Next, we show that a
Kenmotsu metric endowed with an almost $*$-Ricci soliton is an Einstein metric
if it is $\eta$-Einstein, or the potential vector field $V$ is collinear with
the Reeb vector field, or $V$ is an infinitesimal contact transformation.
|
We present a parallel algorithm for computing the permanent mod 2^k of a
matrix of univariate integer polynomials. It places the problem in ParityL, a
subset of NC^2. This extends the techniques of [Valiant], [Braverman,
Kulkarni, Roy] and [Bj\"orklund, Husfeldt], and yields a (randomized) parallel
algorithm for shortest 2-disjoint paths, improving upon the recent
(randomized) polynomial-time result.
We also recognize the disjoint paths problem as a special case of finding
disjoint cycles, and present (randomized) parallel algorithms for finding a
shortest cycle and shortest 2-disjoint cycles passing through any given fixed
number of vertices or edges.
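For intuition, Valiant's classical observation is that the permanent and determinant agree mod 2 (since -1 = 1 there). The sketch below checks this on random integer matrices and evaluates the permanent mod 2^k directly via Ryser's formula; it is a sequential brute force for cross-checking only, not the paper's parallel algorithm.

```python
import numpy as np
from itertools import combinations

def permanent_ryser(A):
    """Ryser's formula:
    per(A) = (-1)^n * sum_{S != {}} (-1)^{|S|} prod_i sum_{j in S} A[i][j].
    Sequential O(2^n n^2) brute force, for cross-checking only."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in S)
            total += (-1) ** r * prod
    return (-1) ** n * total

rng = np.random.default_rng(0)
k = 4
for _ in range(5):
    A = rng.integers(0, 10, size=(5, 5))
    per = permanent_ryser(A.tolist())
    det = round(np.linalg.det(A))
    assert per % 2 == det % 2      # Valiant: per = det (mod 2)
    print("per mod 2^k =", per % 2 ** k)
```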
|
In this paper, we present a toolchain to design, execute, and verify robot
behaviors. The toolchain follows the guidelines defined by the EU H2020 project
RobMoSys and encodes the robot deliberation as a Behavior Tree (BT), a directed
tree where the internal nodes model behavior composition and leaf nodes model
action or measurement operations. Such leaf nodes take the form of a statechart
(SC), which runs in separate threads, whose states perform basic arithmetic
operations and send commands to the robot. The toolchain provides the ability
to define a runtime monitor for a given system specification that warns the
user whenever a given specification is violated.
We validated the toolchain in a simulated experiment that we made
reproducible in an OS-virtualization environment.
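To make the BT terminology concrete, here is a minimal sketch of a behavior tree with one sequence composite and two leaf nodes. It is a generic toy, not the toolchain's actual statechart-based leaves or runtime monitor.

```python
SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

class Action:
    """Leaf node: wraps a callable returning a BT status."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Sequence:
    """Internal node: ticks children in order; stops at the first
    child that does not succeed and propagates its status."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

battery_ok = Action(lambda: SUCCESS)       # measurement-style leaf
move_to_goal = Action(lambda: RUNNING)     # action-style leaf
tree = Sequence(battery_ok, move_to_goal)
print(tree.tick())                         # -> RUNNING
```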
|
Community search aims at finding densely connected subgraphs for query
vertices in a graph. While this task has been studied widely in the literature,
most of the existing works only focus on finding homogeneous communities rather
than heterogeneous communities with different labels. In this paper, we
motivate a new problem of cross-group community search, namely Butterfly-Core
Community (BCC), over a labeled graph, where each vertex has a label indicating
its properties and an edge between two vertices indicates their cross
relationship. Specifically, for two query vertices with different labels, we
aim to find a densely connected cross community that contains two query
vertices and consists of butterfly networks, where each wing of the butterflies
is induced by a k-core search based on one query vertex and two wings are
connected by these butterflies. Indeed, the BCC structure enjoys structural
cohesiveness and a minimum-diameter requirement, and thus can effectively
capture heterogeneous and concise collaborative teams. Moreover, we theoretically prove
this problem is NP-hard and analyze its non-approximability. To efficiently
tackle the problem, we develop a heuristic algorithm, which first finds a BCC
containing the query vertices, then iteratively removes the farthest vertices
to the query vertices from the graph. The algorithm can achieve a
2-approximation to the optimal solution. To further improve the efficiency, we
design a butterfly-core index and develop a suite of efficient algorithms for
butterfly-core identification and maintenance as vertices are eliminated.
Extensive experiments on eight real-world networks and four novel case studies
validate the effectiveness and efficiency of our algorithms.
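The peeling strategy can be sketched generically with networkx, using a k-core for the dense part and BFS distances for "farthest from the queries". The butterfly constraint, labels, and the butterfly-core index are omitted, so this is a simplification of the idea, not the paper's full algorithm.

```python
import networkx as nx

def peel_community(G, q1, q2, k=2):
    """Greedy peeling: keep a k-core containing both query vertices,
    then repeatedly delete the vertex farthest from the query pair."""
    best = None
    H = nx.k_core(G, k).copy()
    while q1 in H and q2 in H and nx.has_path(H, q1, q2):
        best = H.copy()
        d1 = nx.single_source_shortest_path_length(H, q1)
        d2 = nx.single_source_shortest_path_length(H, q2)
        candidates = [v for v in H if v not in (q1, q2)]
        if not candidates:
            break
        far = max(candidates,
                  key=lambda v: max(d1.get(v, float("inf")),
                                    d2.get(v, float("inf"))))
        H.remove_node(far)
        H = nx.k_core(H, k).copy()   # restore the core property
    return best

G = nx.karate_club_graph()
C = peel_community(G, 0, 33, k=3)
print(C.number_of_nodes(), "vertices in the peeled community")
```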
|
We present analyses of Spitzer observations of 29P/Schwassmann-Wachmann 1
using 16 $\mu$m IRS "blue" peak-up (PU) and 24 $\mu$m and 70 $\mu$m MIPS images
obtained on UT 2003 November 23 and 24 that characterize the Centaur's
large-grain (10-100 $\mu$m) dust coma during a time of non-outbursting
"quiescent" activity. Estimates of $\epsilon f \rho$ for each band (16 $\mu$m
(2600 $\pm$ 43 cm), 24 $\mu$m (5800 $\pm$ 63 cm), and 70 $\mu$m (1800 $\pm$ 900
cm)) follow the trend between nucleus size vs. $\epsilon f \rho$ that was
observed for the WISE/NEOWISE comet ensemble. A coma model was used to derive a
dust production rate in the range of 50-100 kg/s. For the first time, a color
temperature map of SW1's coma was constructed using the 16 $\mu$m and 24 $\mu$m
imaging data. With peaks at $\sim$ 140K, this map implies that coma water ice
grains should be slowly sublimating and producing water gas in the coma. We
analyzed the persistent 24 $\mu$m "wing" (a curved southwestern coma) feature
at 352,000 km (90$''$) from the nucleus attributed by Stansberry et al. (2004)
to nucleus rotation and instead propose that it is largely created by solar
radiation pressure and gravity acting on micron sized grains. We performed coma
removal to the 16 $\mu$m PU image in order to refine the nucleus' emitted
thermal flux. A new application of the Near Earth Asteroid Thermal Model
(NEATM; Harris 1998) at five wavelengths (5.730 $\mu$m, 7.873 $\mu$m, 15.80
$\mu$m, 23.68 $\mu$m, and 71.42 $\mu$m) was then used to refine SW1's
effective radius measurement to $R = 32.3 \pm 3.1$ km and its infrared beaming
parameter to $\eta = 1.1 \pm 0.2$.
|
Polymers are widely-studied materials with diverse properties and
applications determined by different molecular structures. It is essential to
represent these structures clearly and explore the full space of achievable
chemical designs. However, existing approaches are unable to offer
comprehensive design models for polymers because of their inherent scale and
structural complexity. Here, we present a parametric, context-sensitive grammar
designed specifically for the representation and generation of polymers. As a
demonstrative example, we implement our grammar for polyurethanes. Using our
symbolic hypergraph representation and 14 simple production rules, our
PolyGrammar is able to represent and generate all valid polyurethane
structures. We also present an algorithm to translate any polyurethane
structure from the popular SMILES string format into our PolyGrammar
representation. We test the representative power of PolyGrammar by translating
a dataset of over 600 polyurethane samples collected from literature.
Furthermore, we show that PolyGrammar can be easily extended to other
copolymers and homopolymers, such as polyacrylates. By offering a complete,
explicit representation scheme and an explainable generative model with
validity guarantees, our PolyGrammar takes an important step toward a more
comprehensive and practical system for polymer discovery and exploration. As
the first bridge between formal languages and chemistry, PolyGrammar also
serves as a critical blueprint to inform the design of similar grammars for
other chemistries, including organic and inorganic molecules.
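To illustrate what production rules can generate, here is a toy string-rewriting grammar for an alternating A-B copolymer backbone. It is a deliberately simplified stand-in for PolyGrammar's parametric hypergraph rules, and the symbols are illustrative only.

```python
import random

# Toy productions for an alternating copolymer chain:
# S -> M S | M ;  M -> "A-B"  (think diisocyanate + diol monomer pair).
RULES = {
    "S": [["M", "S"], ["M"]],
    "M": [["A-B"]],
}

def generate(symbol, rng):
    """Expand nonterminals recursively; terminals are kept as-is."""
    if symbol not in RULES:
        return [symbol]
    out = []
    for s in rng.choice(RULES[symbol]):
        out.extend(generate(s, rng))
    return out

chain = generate("S", random.Random(0))
print("-".join(chain))   # e.g. A-B-A-B-...: every derivation is valid
```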
|
We investigate the presence of a black hole black string phase transition in
Einstein Gauss Bonnet (EGB) gravity in the large dimension limit. The merger
point is the static spacetime connecting the black string phase with the black
hole phase. We consider several ranges of the Gauss-Bonnet parameter. We find
that there is a range in which the Gauss-Bonnet corrections are subordinate to the
Einstein gravity terms in the large dimension limit, and yet the merger point
geometry does not approach a black hole away from the neck. We cannot rule out
a topology-changing phase transition, as argued by Kol. However, as the merger
point geometry does not approach the black hole geometry asymptotically it is
not obvious that the transition is directly to a black hole phase. We also
demonstrate that for another range of the Gauss-Bonnet parameter, the merger
point geometry approaches the black hole geometry asymptotically when a certain
parameter depending on the Gauss-Bonnet parameter $\alpha$ and on the
parameters in the Einstein-Gauss-Bonnet black hole metric is small enough.
|
The NLP community is currently investing far more research and resources into
the development of deep learning models than into training data. While we have made a
lot of progress, it is now clear that our models learn all kinds of spurious
patterns, social biases, and annotation artifacts. Algorithmic solutions have
so far had limited success. An alternative that is being actively discussed is
more careful design of datasets so as to deliver specific signals. This
position paper maps out the arguments for and against data curation, and argues
that fundamentally the point is moot: curation already is and will be
happening, and it is changing the world. The question is only how much thought
we want to invest into that process.
|
In this paper we resolve the Alon-Jaeger-Tarsi conjecture for sufficiently
large primes. Namely, we show that for any finite field $\mathbb{F}$ of size
$61<|\mathbb F|\ne 79$ and any nonsingular matrix $M$ over $\mathbb{F}$ there
exists a vector $x$ such that neither $x$ nor $Mx$ has a zero component.
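The property is easy to probe computationally. The theorem itself requires a field of size greater than 61, so the small prime below is only a quick sanity check of the statement, not a verification of the result.

```python
import itertools
import numpy as np

p, n = 5, 3           # a small prime field F_p and matrix size n
rng = np.random.default_rng(2)

def random_nonsingular():
    while True:
        M = rng.integers(0, p, size=(n, n))
        if round(np.linalg.det(M)) % p != 0:   # nonsingular over F_p
            return M

M = random_nonsingular()
for x in itertools.product(range(1, p), repeat=n):   # x: no zero entries
    y = (M @ np.array(x)) % p
    if np.all(y != 0):
        print("witness x =", x, "  Mx mod p =", y.tolist())
        break
else:
    print("no witness for this M over F_%d" % p)
```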
|
State-of-the-art reinforcement learning (RL) algorithms suffer from high
sample complexity, particularly in the sparse reward case. A popular strategy
for mitigating this problem is to learn control policies by imitating a set of
expert demonstrations. The drawback of such approaches is that an expert needs
to produce demonstrations, which may be costly in practice. To address this
shortcoming, we propose Probabilistic Planning for Demonstration Discovery
(P2D2), a technique for automatically discovering demonstrations without access
to an expert. We formulate discovering demonstrations as a search problem and
leverage widely-used planning algorithms such as Rapidly-exploring Random Tree
to find demonstration trajectories. These demonstrations are used to initialize
a policy, then refined by a generic RL algorithm. We provide theoretical
guarantees of P2D2 finding successful trajectories, as well as bounds for its
sampling complexity. We experimentally demonstrate the method outperforms
classic and intrinsic exploration RL techniques in a range of classic control
and robotics tasks, requiring only a fraction of exploration samples and
achieving better asymptotic performance.
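A minimal sketch of the planning step: a plain RRT for a 2D point robot with goal biasing, returning a start-to-goal path that could seed a policy. The environment, step size, and all parameters are illustrative, not the paper's exact setup, and the policy-initialization and RL refinement stages are omitted.

```python
import numpy as np

def rrt_demonstration(start, goal, is_free, n_iter=2000, step=0.05,
                      goal_tol=0.05, seed=0):
    """Plain RRT in the unit square; returns a start-to-goal path."""
    rng = np.random.default_rng(seed)
    goal = np.asarray(goal, float)
    nodes, parents = [np.asarray(start, float)], [-1]
    for _ in range(n_iter):
        target = goal if rng.random() < 0.1 else rng.random(2)  # goal bias
        i = min(range(len(nodes)),
                key=lambda j: np.linalg.norm(nodes[j] - target))
        d = target - nodes[i]
        new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-12)
        if not is_free(new):
            continue
        nodes.append(new); parents.append(i)
        if np.linalg.norm(new - goal) < goal_tol:
            path, j = [], len(nodes) - 1
            while j != -1:                       # walk back to the root
                path.append(nodes[j]); j = parents[j]
            return path[::-1]
    return None

# Free space: unit square minus a circular obstacle in the middle.
free = lambda q: np.linalg.norm(q - np.array([0.5, 0.5])) > 0.2
demo = rrt_demonstration([0.1, 0.1], [0.9, 0.9], free)
print("demonstration length:", None if demo is None else len(demo))
```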
|
Airfoil stall plays a central role in the design of safe and efficient
lifting surfaces. We typically distinguish between static and dynamic stall
based on the unsteady rate of change of an airfoil's angle of attack. Despite
the somewhat misleading denotation, the force and flow development of an
airfoil undergoing static stall are highly unsteady and the boundary with
dynamic stall is not clearly defined. We experimentally investigate the forces
acting on a two-dimensional airfoil that is subjected to two manoeuvres leading
to static stall: a slow continuous increase in angle of attack with a reduced
pitch rate of 1.3e-4 and a step-wise increase in angle of attack from
14.2{\deg} to 14.8{\deg} within 0.04 convective times. We systematically
quantify the stall reaction delay for many repetitions of these two manoeuvres.
The onset of flow stall is marked by the distinct drop in the lift coefficient.
The reaction delay for the slow continuous ramp-up manoeuvre is not influenced
by the blade kinematics and its occurrence histogram is normally distributed
around 32 convective times. The static reaction delay is compared with dynamic
stall delays for dynamic ramp-up motions with reduced pitch rates ranging from
9e-4 to 0.14 and for dynamic sinusoidal pitching motions of different airfoils
at higher Reynolds numbers up to 1e6. The stall delays for all conditions
follow the same power-law decrease from 32 convective times for the most
steady case down to an asymptotic value of 3 for reduced pitch rates above
0.04. Static stall is not phenomenologically different from dynamic stall and
is merely a typical case of stall for low pitch rates. Based on our results, we
suggest that conventional measurements of the static stall angle and the static
load curves should be conducted using a continuous and uniform ramp-up motion
at a reduced frequency around 1e-4.
|
By using the tight-binding model and non-equilibrium Green's function method
(NEGF), we study the band structures and transport properties of a silicene
nanoribbon with a line defect where a bulk energy gap is opened due to the
sublattice symmetry breaking. The flat subband bends downwards or upwards due
to the effect of the line defect. The spin-orbit coupling induces quantum spin
Hall states. Especially, the energy band depends on the distance between the
line defect and the edge of the nanoribbon. The effects of the on-site energies
on the band spectra of the two defect configurations are different. There
always exists one band gap for different on-site energies for the defect
configuration of case 1. However, a gapless state and a band gap can be
modulated by changing the on-site energy, the sublattice potential and
spin-orbit couplings for the defect configuration of case 2. Accordingly, the
variation trends of the conductance including zero conductance can be well
understood in terms of the combined effect of the sublattice potential, the
on-site energy and spin-orbit couplings on the band structures. Thus it is easy
and effective to modulate the transport property of the silicene nanoribbon
with the defect configuration of case 2 by utilizing the sublattice potential,
the on-site energy and spin-orbit couplings. This study is of great
significance for the fabrication and the modulation of the transport property
of silicene-based devices.
|
We prove the existence of ground state solutions to critical growth
$p$-Laplacian and fractional $p$-Laplacian problems that are nonresonant at
zero.
|
Let $S$ be a commutative semigroup, $K$ a quadratically closed commutative
field of characteristic different from $2$, $G$ a $2$-cancellative abelian
group and $H$ an abelian group uniquely divisible by $2$. The aim of this paper
is to determine the general solution $f:S^2\to K$ of the d'Alembert type
equation: $$ f(x+y,z+w)+f(x+\sigma(y),z+\tau(w)) =2f(x,z)f(y,w),\quad\quad
(x,y,z,w\in S) $$ the general solution $f:S^2\to G$ of the Jensen type
equation: $$ f(x+y,z+w)+f(x+\sigma(y),z+\tau(w)) =2f(x,z),\quad\quad
(x,y,z,w\in S) $$ the general solution $f:S^2\to H$ of the quadratic type
equation: $$ f(x+y,z+w)+f(x+\sigma(y),z+\tau(w))
=2f(x,z)+2f(y,w),\quad\quad (x,y,z,w\in S) $$ where $\sigma,\tau: S\to S$ are
two involutions.
|
In repeated measures factorial designs involving clustered units, parametric
methods such as linear mixed effects models are used to handle within subject
correlations. However, assumptions of these parametric models such as
continuity and normality are usually hard to come by in many cases. The
homoscedasticity assumption is rather hard to verify in practice. Furthermore,
these assumptions may not even be realistic when data are measured in a
non-metric scale as commonly happens, for example, in Quality of Life outcomes.
In this article, nonparametric effect-size measures for clustered data in
factorial designs with pre-post measurements will be introduced. The
effect-size measures provide intuitively-interpretable and informative
probabilistic comparisons of treatment and time effects. The dependence among
observations within a cluster can be arbitrary across treatment groups. The
effect-size estimators along with their asymptotic properties for computing
confidence intervals and performing hypothesis tests will be discussed.
ANOVA-type statistics with $\chi^2$ approximation that retain some of the
optimal asymptotic behaviors in small samples are investigated. Within each
treatment group, we allow some clusters to involve observations measured on
both pre and post intervention periods (referred to as complete clusters),
while others to contain observations from either pre or post intervention
period only (referred to as incomplete clusters). Our methods are shown to be
particularly effective in the presence of multiple forms of clustering. The
developed nonparametric methods are illustrated with data from a three-arm
Randomized Trial of Indoor Wood Smoke reduction. The study considered two
active treatments to improve asthma symptoms of kids living in homes that use
wood stoves for heating.
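The kind of nonparametric effect size meant here is the relative (probabilistic) effect p = P(X < Y) + 0.5 P(X = Y). A minimal sketch of its estimator for two possibly ordinal samples follows; the clustering and covariance machinery of the full method is omitted, and the data are made up.

```python
import numpy as np

def relative_effect(x, y):
    """Estimate p = P(X < Y) + 0.5 * P(X = Y): the probability that a
    random post-treatment value exceeds a random pre-treatment value.
    p = 0.5 means no tendency; works for ordinal data like QoL scores."""
    x, y = np.asarray(x), np.asarray(y)
    less = (x[:, None] < y[None, :]).mean()
    ties = (x[:, None] == y[None, :]).mean()
    return less + 0.5 * ties

pre = [2, 3, 3, 4, 2, 5]     # hypothetical ordinal Quality-of-Life scores
post = [3, 4, 4, 5, 3, 5]
print(f"relative effect p-hat = {relative_effect(pre, post):.3f}")
```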
|
Top-quark pair production is central to many facets of LHC physics. At
leading order, the top and anti-top are produced in a back-to-back topology;
however, this topology accounts for only a minority of $t \bar t$ events with
TeV-scale momentum transfer. The remaining events instead involve the splitting
of an initial or final-state gluon to $t \bar t$. We provide simple
quantitative arguments that explain why this is the case and examine the
interplay between different topologies and a range of variables that
characterise the event hardness. We then develop a method to classify the
topologies of individual events and use it to illustrate our findings in the
context of simulated events, using both top partons and suitably defined
fiducial tops. For events with large $t \bar t$ invariant mass, we comment on
additional features that have important experimental and theoretical
implications.
|
Cervical cancer is one of the most deadly and common diseases among women
worldwide. It is completely curable if diagnosed in an early stage, but the
tedious and costly detection procedure makes it unviable to conduct
population-wise screening. Thus, to augment the effort of the clinicians, in
this paper, we propose a fully automated framework that utilizes Deep Learning
and feature selection using evolutionary optimization for cytology image
classification. The proposed framework extracts Deep feature from several
Convolution Neural Network models and uses a two-step feature reduction
approach to ensure reduction in computation cost and faster convergence. The
features extracted from the CNN models form a large feature space whose
dimensionality is reduced using Principal Component Analysis while preserving
99% of the variance. A non-redundant, optimal feature subset is selected from
this feature space using an evolutionary optimization algorithm, the Grey Wolf
Optimizer, thus improving the classification performance. Finally, the selected
feature subset is used to train an SVM classifier for generating the final
predictions. The proposed framework is evaluated on three publicly available
benchmark datasets: Mendeley Liquid Based Cytology (4-class) dataset, Herlev
Pap Smear (7-class) dataset, and the SIPaKMeD Pap Smear (5-class) dataset
achieving classification accuracies of 99.47%, 98.32% and 97.87% respectively,
thus justifying the reliability of the approach. The code for the
proposed approach can be found at:
https://github.com/DVLP-CMATERJU/Two-Step-Feature-Enhancement
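A sketch of the two-step reduction plus SVM pipeline follows, with sklearn digits standing in for CNN features. A random binary-mask search stands in for the Grey Wolf Optimizer (same wrapper-fitness idea, none of the wolf dynamics), and for brevity the test split doubles as the selection set, which a real study would avoid.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in data; in the paper, the features come from pretrained CNNs.
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Step 1: PCA keeping 99% of the variance, as in the two-step reduction.
pca = PCA(n_components=0.99).fit(Xtr)
Ztr, Zte = pca.transform(Xtr), pca.transform(Xte)

# Step 2: wrapper feature selection; random masks stand in for the GWO.
rng = np.random.default_rng(0)
best_mask, best_acc = None, -1.0
for _ in range(30):
    mask = rng.random(Ztr.shape[1]) < 0.6
    if not mask.any():
        continue
    acc = SVC().fit(Ztr[:, mask], ytr).score(Zte[:, mask], yte)
    if acc > best_acc:
        best_mask, best_acc = mask, acc

print(f"{best_mask.sum()} features kept, accuracy = {best_acc:.3f}")
```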
|
We propose a new type of structure for singly heavy baryons of $Qqq\bar{q}q$
in addition to the conventional one of $Qqq$. Based on chiral symmetry of the
light quarks, we show that the $Qqq\bar{q}q$ baryon offers a novel picture for
heavy quark spin-singlet and flavor-antisymmetric baryons. By making use of the
effective Lagrangian approach, we find $\Lambda_c(2765)$ and $\Xi_c(2967)$ are
mostly $Qqq\bar{q}q$ while $\Lambda_c(2286)$ and $\Xi_c(2470)$ are mostly
$Qqq$. The masses of negative-parity baryons are predicted. We also derive a
sum rule and the extended Goldberger-Treiman relation that the masses of the
baryons satisfy. Furthermore, a mass degeneracy of parity partners of the
baryons with the restoration of chiral symmetry is discussed. These are the
unique features that the conventional picture of radial excitation in the quark
model does not accommodate. Our findings provide useful information not only
for future experiments but also for future lattice simulations on diquarks.
|
For more than three decades, nearly free electron elemental metals have been
a topic of debate because the computed bandwidths are significantly wider in
the local density approximation to density-functional theory (DFT) than
indicated by angle-resolved photoemission experiments. Here, we systematically
investigate this using first-principles calculations for alkali and
alkaline-earth metals using DFT and various beyond-DFT methods such as
meta-GGA, G$_0$W$_0$, B3LYP, and DFT+eDMFT. We find that the static non-local
exchange and correlation, as partly included in the B3LYP hybrid functional,
significantly increase the bandwidths even compared to LDA, while the
G$_0$W$_0$ bands are only slightly narrower than in LDA. The agreement with
ARPES is best when the local approximation to the self-energy is used in the
DFT+eDMFT method. We infer that even moderately correlated systems with
partially occupied s-orbitals, which were assumed to approximate the uniform
electron gas, are very well described in terms of short-range dynamical
correlations that are only local to an atom.
|
Multiple Sclerosis (MS) and microvascular leukoencephalopathy are two
distinct neurological conditions, the first caused by focal autoimmune
inflammation in the central nervous system, the second caused by chronic white
matter damage from atherosclerotic microvascular disease. Both conditions lead
to signal anomalies on Fluid Attenuated Inversion Recovery (FLAIR) magnetic
resonance (MR) images, which can be distinguished by an expert
neuroradiologist, but which can look very similar to the untrained eye as well
as in the early stage of both diseases. In this paper, we attempt to train a
3-dimensional deep neural network to learn the specific features of both
diseases in an unsupervised manner. To this end, in a first step we train a
generative neural network to create artificial MR images of both conditions
with approximate explicit density, using a mixed dataset of multiple sclerosis,
leukoencephalopathy and healthy patients, containing in total 5404 volumes
from 3096 patients. In a second step, we distinguish the features of the
different diseases in the latent space of this network and use them to classify new
data.
|
The holographic state of matter exists in the quantum gravitational regime,
with black holes as the prime example. In this essay, we provide a microscopic
derivation of holographic thermodynamics and further discuss its
implications for quantum gravity. In particular, we establish the link between
holographic entropy and dimensional reduction. It seems that the fundamental
physics behind black holes and the very early universe is $1+1$ dimensional.
|
Gravitational waves emitted by black hole binary inspiral and mergers enable
unprecedented strong-field tests of gravity, requiring accurate theoretical
modelling of the expected signals in extensions of General Relativity. In this
paper we model the gravitational wave emission of inspiraling binaries in
scalar Gauss-Bonnet gravity theories. Going beyond the weak-coupling
approximation, we derive the gravitational waveform to first post-Newtonian
order beyond the quadrupole approximation and calculate new contributions from
nonlinear curvature terms. We quantify the effect of these terms and provide
ready-to-implement gravitational wave and scalar waveforms as well as the
Fourier domain phase for quasi-circular binaries. We also perform a parameter
space study, which indicates that the values of black hole scalar charges play
a crucial role in the detectability of deviation from General Relativity. We
also compare the scalar waveforms to numerical relativity simulations to assess
the impact of the relativistic corrections to the scalar radiation. Our results
provide important foundations for future precision tests of gravity.
|
We study reductive subgroups $H$ of a reductive linear algebraic group $G$ --
possibly non-connected -- such that $H$ contains a regular unipotent element of
$G$. We show that under suitable hypotheses, such subgroups are $G$-irreducible
in the sense of Serre. This generalizes results of Malle, Testerman and
Zalesski. We obtain analogous results for Lie algebras and for finite groups of
Lie type. Our proofs are short, conceptual and uniform.
|
State-of-the-art cosmological simulations on classical computers are limited
by time, energy, and memory usage. Quantum computers can perform some
calculations exponentially faster than classical computers, using exponentially
less energy and memory, and may enable extremely large simulations that
accurately capture the whole dynamic range of structure in the Universe within
statistically representative cosmic volumes. However, not all computational
tasks exhibit a `quantum advantage'. Quantum circuits act linearly on quantum
states, so nonlinearities (e.g. self-gravity in cosmological simulations) pose
a significant challenge. Here we outline one potential approach to overcome
this challenge and solve the (nonlinear) Schrodinger-Poisson equations for the
evolution of self-gravitating dark matter, based on a hybrid quantum-classical
variational algorithm framework (Lubasch 2020). We demonstrate the method with
a proof-of-concept mock quantum simulation, envisioning a future where quantum
computers will one day lead simulations of dark matter.
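For orientation, the classical counterpart of such a mock simulation can be written as a split-step Fourier solver of the 1D Schrodinger-Poisson system. The units (hbar/m = 1), grid, coupling, and initial condition below are illustrative; this is a classical reference solver, not the hybrid quantum-classical algorithm itself.

```python
import numpy as np

# 1D periodic grid.
N, L = 256, 100.0
x = np.linspace(0, L, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Initial wavefunction: two small overdensities (units: hbar/m = 1).
psi = (1 + 0.1 * np.exp(-(x - 40) ** 2)
         + 0.1 * np.exp(-(x - 60) ** 2)).astype(complex)
psi /= np.sqrt(np.mean(np.abs(psi) ** 2))
G4pi = 1.0                     # toy coupling, 4*pi*G

def poisson_potential(rho):
    """Solve d2V/dx2 = G4pi * (rho - mean) spectrally on the ring."""
    rhok = np.fft.fft(rho - rho.mean())
    Vk = np.zeros_like(rhok)
    Vk[1:] = -G4pi * rhok[1:] / k[1:] ** 2
    return np.real(np.fft.ifft(Vk))

dt = 0.1
for _ in range(500):           # kick-drift-kick split-step evolution
    V = poisson_potential(np.abs(psi) ** 2)
    psi *= np.exp(-0.5j * dt * V)                                 # half kick
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))  # drift
    V = poisson_potential(np.abs(psi) ** 2)
    psi *= np.exp(-0.5j * dt * V)                                 # half kick

print("max density after evolution:", np.abs(psi).max() ** 2)
```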
|
We reformulate known exotic theories (including theories of fractons) on a
Euclidean spacetime lattice. We write them using the Villain approach and then
we modify them to a convenient range of parameters. The new lattice models are
closer to the continuum limit than the original lattice versions. In
particular, they exhibit many of the recently found properties of the continuum
theories including emergent global symmetries and surprising dualities. Also,
these new models provide a clear and rigorous formulation of the continuum
models and their singularities. In appendices, we use this approach to review
well-studied lattice models and their continuum limits. These include the
XY-model, the $\mathbb{Z}_N$ clock-model, and various gauge theories in diverse
dimensions. This presentation clarifies the relation between the
condensed-matter and the high-energy views of these systems. It emphasizes the
role of symmetries associated with the topology of field space, duality, and
various anomalies.
|
In the Ehrenfest wind tree model, a point particle moves on the plane and
collides with randomly placed fixed square obstacles under the usual law of
geometric optics. The particle represents the wind and the squares are the
trees. We examine the periodic version of the model. Previous authors analyze
the dynamical properties of the model using techniques from algebraic topology
or ergodic theory. In contrast to these works, we adopt a signal processing
viewpoint. We describe the long-term behavior of the trajectories by using a
3-state hidden Markov model.
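A minimal sketch of fitting a 3-state hidden Markov model to a 1D signal follows, using the hmmlearn package. The synthetic regimes stand in for features extracted from wind-tree trajectories; the states, features, and parameters are illustrative assumptions.

```python
import numpy as np
from hmmlearn import hmm

# Synthetic 1D signal with three regimes, standing in for features
# extracted from the long-term wind-tree trajectories.
rng = np.random.default_rng(0)
segments = [rng.normal(m, 0.3, size=200) for m in (-1.0, 0.0, 1.0)]
X = np.concatenate(segments).reshape(-1, 1)

model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(X)
states = model.predict(X)

print("learned state means:", np.sort(model.means_.ravel()))
print("decoded first/last states:", states[0], states[-1])
```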
|
In an interesting recent work, Kuzborskij and Szepesv\'ari derived a
confidence bound for functions of independent random variables, which is based
on an inequality that relates concentration to squared perturbations of the
chosen function. Kuzborskij and Szepesv\'ari also established the
PAC-Bayes-ification of their confidence bound. Two important aspects of their
work are that the random variables could be of unbounded range, and not
necessarily of an identical distribution. The purpose of this note is to
advertise/discuss these interesting results, with streamlined proofs. This
expository note is written for persons who, metaphorically speaking, enjoy the
"featured movie" but prefer to skip the preview sequence.
|
A tidal disruption event (TDE) involves the tidal shredding of a star in the
vicinity of a dormant supermassive black hole. The nearby ($\approx$230
mega-parsec) radio-quiet (radio luminosity of $4 \times 10^{38}$ erg s$^{-1}$)
AT2019dsg is the first TDE potentially associated with a neutrino event. The
origin of the non-thermal emission in AT2019dsg remains inconclusive;
possibilities include a relativistic jet or a sub-relativistic outflow.
Distinguishing between them can address neutrino production mechanisms. High
resolution very long baseline interferometry monitoring provides uniquely
constraining flux densities and proper motion of the ejecta. A non-relativistic
(outflow velocity of $\approx$0.1 $c$) decelerated expansion in a relatively
dense environment is found to produce the radio emission. Neutrino production
may be related to the acceleration of protons by the outflow. The present study
thus helps exclude jet-related origins for the non-thermal emission and
neutrino production, and constrains non-jetted scenarios.
|
Although black holes are an integral part of the standard model of
astrophysics and cosmology, their existence poses some serious fundamental
problems. In recent years, several horizonless compact object models were
proposed to address those issues. As the gravitational wave detectors started
to observe more and more merger events with a large signal-to-noise ratio,
gravitational wave spectroscopy could hold the key to uncover the existence of
these objects. This is because the late time ringdown signals of horizonless
compact objects differ from that of the black holes. In this paper, we study
the ringdown properties of charged compact objects and compare them with those
obtained in the black hole scenario. Since the internal structure and the
equation of state of these compact objects are largely unknown, we employ the
membrane paradigm to obtain appropriate boundary conditions for the
perturbations of these objects. This model can describe the ringdown properties
of a large variety of compact objects.
|
We study learning Censor Markov Random Fields (abbreviated CMRFs). These are
Markov Random Fields where some of the nodes are censored (not observed). We
present an algorithm for learning high-temperature CMRFs within o(n)
transportation distance. Crucially our algorithm makes no assumption about the
structure of the graph or the number or location of the observed nodes. We
obtain stronger results for high girth high-temperature CMRFs as well as
computational lower bounds indicating that our results cannot be qualitatively
improved.
|
A graph's spectral wavelet signature determines a filtration, and
consequently an associated set of extended persistence diagrams. We propose a
framework that optimises the choice of wavelet for a dataset of graphs, such
that their associated persistence diagrams capture features of the graphs that
are best suited to a given data science problem. Since the spectral wavelet
signature of a graph is derived from its Laplacian, our framework encodes
geometric properties of graphs in their associated persistence diagrams and can
be applied to graphs without a priori node attributes. We apply our framework
to graph classification problems and obtain performances competitive with other
persistence-based architectures. To provide the underlying theoretical
foundations, we extend the differentiability result for ordinary persistent
homology to extended persistent homology.
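As a concrete instance of a spectral wavelet signature, the heat-kernel choice g(lambda) = exp(-t*lambda) gives s(v) = sum_k g(lambda_k) * phi_k(v)^2 from the Laplacian eigenpairs, which then serves as a vertex filtration. The wavelet g and scale t are exactly the objects the framework above would optimize; this sketch stops at the filtration and omits the extended persistence computation.

```python
import numpy as np
import networkx as nx

def heat_wavelet_signature(G, t=1.0):
    """Spectral wavelet signature with heat kernel g(lam) = exp(-t*lam):
    s(v) = sum_k g(lambda_k) * phi_k(v)**2, from the graph Laplacian."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    lam, phi = np.linalg.eigh(L)          # eigenpairs of the Laplacian
    g = np.exp(-t * lam)
    return (phi ** 2) @ g                 # one signature value per node

G = nx.karate_club_graph()
s = heat_wavelet_signature(G, t=0.5)
# The signature induces a sublevel-set filtration on the vertices; its
# extended persistence diagram is what the framework optimizes over.
order = np.argsort(s)
print("first five vertices entering the filtration:", order[:5].tolist())
```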
|
We present SPUX - a modular framework for Bayesian inference enabling
uncertainty quantification and propagation in linear and nonlinear,
deterministic and stochastic models, and supporting Bayesian model selection.
SPUX can be coupled to any serial or parallel application written in any
programming language (e.g. Python, R, Julia, C/C++, Fortran, Java,
or a binary executable), scales effortlessly from serial runs on a personal
computer to parallel high performance computing clusters, and aims to provide a
platform particularly suited to support and foster reproducibility in
computational science. We illustrate SPUX capabilities for a simple yet
representative random walk model, describe how to couple different types of
user applications, and showcase several readily available examples from
environmental sciences. In addition to available state-of-the-art numerical
inference algorithms including EMCEE, PMCMC (PF) and SABC, the open source
nature of the SPUX framework and the explicit description of the hierarchical
parallel SPUX executors should also greatly simplify the implementation and
usage of other inference and optimization techniques.
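Since SPUX's own API is not reproduced here, the sketch below shows the kind of inference it orchestrates, using the EMCEE sampler mentioned above on a random-walk example of the sort described. The model, prior, and data are illustrative.

```python
import numpy as np
import emcee

# Toy inference: recover the step std-dev sigma of a Gaussian random walk.
rng = np.random.default_rng(0)
true_sigma = 0.7
steps = rng.normal(0.0, true_sigma, size=200)   # observed increments

def log_prob(theta):
    (log_sigma,) = theta
    sigma = np.exp(log_sigma)
    if not (1e-3 < sigma < 1e3):                # flat prior in log-space
        return -np.inf
    return np.sum(-0.5 * (steps / sigma) ** 2 - np.log(sigma))

nwalkers, ndim = 8, 1
p0 = np.log(0.5) + 0.1 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)

post = np.exp(sampler.get_chain(discard=500, flat=True))
print(f"posterior sigma: {post.mean():.3f} +/- {post.std():.3f}")
```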
|
Real scalar fields with attractive self-interaction may form self-bound
states, called oscillons. These dense objects are ubiquitous in leading
theories of dark matter and inflation; of particular interest are long-lived
oscillons which survive past $14$ Gyr, offering dramatic astrophysical
signatures into the present day. We introduce a new formalism for computing the
properties of oscillons with improved accuracy, which we apply to study the
internal structure of oscillons and to identify the physical mechanisms
responsible for oscillon longevity. In particular, we show how imposing
realistic boundary conditions naturally selects a near-minimally radiating
solution, and how oscillon longevity arises from its geometry. Further, we
introduce a natural vocabulary for the issue of oscillon stability, which we
use to predict new features in oscillon evolution. This framework allows for
new efficient algorithms, which we use to address questions of whether and to
what extent long-lived oscillons are fine-tuned. Finally, we construct a family
of potentials supporting ultra-long-lived oscillons, with lifetimes in excess
of $10^{17}$ years.
|
In gauge-Higgs unification (GHU), the 4D Higgs boson appears as a part of the
fifth-dimensional component of the 5D gauge field. Recently, an $SO(11)$ GUT
inspired $SO(5)\times U(1)\times SU(3)$ GHU model has been proposed. In the
GHU, Kaluza-Klein (KK) excited states of neutral vector bosons, photon, $Z$
boson and $Z_R$ boson, appear as neutral massive vector bosons $Z'$s. The $Z'$
bosons in the GHU couple to quarks and leptons with large parity violation,
which leads to distinctive polarization dependence in, e.g., cross sections and
forward-backward asymmetries in $e^-e^+\to\mu^-\mu^+,q\bar{q}$ processes. In
the talk, we discuss fermion pair production in $e^-e^+$ linear collider
experiments with polarized $e^-$ and $e^+$ beams in the GUT inspired GHU.
Deviations from the SM are shown in the early stage of planned international
linear collider (ILC) with 250 GeV experiments. The deviations can be tested
for the KK mass scale up to about 15 TeV. This talk is mainly based on
Phys.Rev.D102(2020)015029.
|
Wheeler's delayed-choice experiment delays the decision to observe either the
wave or particle behavior of a photon until after it has entered the
interferometer, and the quantum delayed-choice experiment provides the
possibility of observing the wave and particle behavior simultaneously by
introducing a quantum control device. Here we propose a modified quantum
delayed-choice experiment without quantum control or entanglement assistance,
in which a photon can be prepared in a wave-particle superposition state and
the morphing behavior of wave-to-particle transition can be observed easily. It
is demonstrated that the presented scheme can allow us to rule out classical
hidden-variable models in a device-independent manner by violating a dimension
witness. We also extend the scheme to the situation of two degrees of freedom,
first constructing a hybrid quantum delayed-choice experiment which enables
simultaneous observation of a photon's wave and particle behaviors in different
degrees of freedom, and then proposing a scheme to prepare the single-photon
wave-particle entanglement. This study is not only meaningful to explore the
wave and particle properties of photons, but also provides potential for the
research of the single-particle nonlocality from the perspective of the
wave-particle degree of freedom.
|
We report on finite-size exact-diagonalization calculations in a Hilbert
space defined by the continuum-model flat moir\'e bands of magic angle twisted
bilayer graphene (MATBG). For moir\'e band filling $3>|\nu|>2$, where
superconductivity is strongest, we obtain evidence that the ground state is a
spin ferromagnet. Near $|\nu|=3$, we find Chern insulator ground states that
have spontaneous spin, valley, and sublattice polarization, and demonstrate
that the anisotropy energy in this order-parameter space is strongly
band-filling-factor dependent. We emphasize that inclusion of the remote band
self-energy is necessary for a reliable description of MATBG flat band
correlations.
|
This paper introduces a novel Russian speech dataset called Golos, a large
corpus suitable for speech research. The dataset mainly consists of recorded
audio files manually annotated on a crowd-sourcing platform. The total
duration of the audio is about 1240 hours. We have made the corpus freely
available for download, along with an acoustic model trained with CTC loss on
this corpus. Additionally, transfer learning was applied to improve the
performance of the acoustic model. To evaluate the quality of the dataset with
a beam-search decoder, we built a 3-gram language model on the open Common
Crawl dataset. The resulting word error rate (WER) metrics are about 3.3% and
11.5%.
|
We investigate the evaporation process of a Kerr-de Sitter black hole with
the Unruh-Hawking-like vacuum state, which is a realistic vacuum state
modelling the evaporation process of a black hole originating from
gravitational collapse. We also compute the greybody factors for gravitons,
photons, and conformal-coupling massless scalar particles by using the analytic
solutions of the Teukolsky equation in the Kerr-de Sitter background. It turns
out that the cosmological constant quenches the amplification factor and it
approaches zero towards the critical point where the Nariai and extremal
limits merge together. We confirm that even near the critical point, the
superradiance of gravitons is more significant than that of photons and scalar
particles. Angular momentum is carried away by particles several times faster
than the mass energy decreases. This means that a Kerr-de Sitter black hole
rapidly spins down to a nearly Schwarzschild-de Sitter black hole before it
completely evaporates. We also compute the time evolution of the
Bekenstein-Hawking entropy. The total entropy of the Kerr-de Sitter black hole
and cosmological horizon increases with time, which is consistent with the
generalized second law of thermodynamics.
|