Coronary X-ray angiography is a crucial clinical procedure for the diagnosis
and treatment of coronary artery disease, which accounts for roughly 16% of
global deaths every year. However, the images acquired in these procedures have
low resolution and poor contrast, making lesion detection and assessment
challenging. Accurate coronary artery segmentation not only helps mitigate
these problems, but also allows the extraction of relevant anatomical features
for further analysis by quantitative methods. Although automated segmentation
of coronary arteries has been proposed before, previous approaches have used
non-optimal segmentation criteria, leading to less useful results. Most methods
either segment only the major vessel, discarding important information from the
remaining ones, or segment the whole coronary tree based mostly on contrast
information, producing a noisy output that includes vessels that are not
relevant for diagnosis. We adopt a better-suited clinical criterion and segment
vessels according to their clinical relevance. Additionally, we simultaneously
perform catheter segmentation, which may be useful for diagnosis due to the
scale factor provided by the catheter's known diameter, and is a task that has
not yet been performed with good results. To derive the optimal approach, we
conducted an extensive comparative study of encoder-decoder architectures
trained on a combination of focal loss and a variant of generalized dice loss.
Based on the EfficientNet and the UNet++ architectures, we propose a line of
efficient and high-performance segmentation models using a new decoder
architecture, the EfficientUNet++, whose best-performing version achieved
average dice scores of 0.8904 and 0.7526 for the artery and catheter classes,
respectively, and an average generalized dice score of 0.9234.
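As an illustration of the training objective described above, here is a minimal sketch assuming PyTorch, softmax logits, and one-hot targets; the equal 0.5/0.5 weighting of the two terms is an assumption for illustration, not the paper's setting.

```python
# Minimal sketch (not the authors' code): focal loss combined with a
# generalized-Dice-style loss for multi-class segmentation.
import torch

def focal_loss(logits, target_onehot, gamma=2.0):
    # logits, target_onehot: (B, C, H, W)
    probs = torch.softmax(logits, dim=1)
    ce = -(target_onehot * torch.log(probs.clamp_min(1e-7)))
    return ((1.0 - probs) ** gamma * ce).sum(dim=1).mean()

def generalized_dice_loss(logits, target_onehot, eps=1e-7):
    probs = torch.softmax(logits, dim=1)
    # Per-class weights inversely proportional to the squared class volume.
    w = 1.0 / (target_onehot.sum(dim=(0, 2, 3)) ** 2 + eps)
    intersect = (w * (probs * target_onehot).sum(dim=(0, 2, 3))).sum()
    union = (w * (probs + target_onehot).sum(dim=(0, 2, 3))).sum()
    return 1.0 - 2.0 * intersect / (union + eps)

def combined_loss(logits, target_onehot):
    # Illustrative 0.5/0.5 mix of the two terms.
    return 0.5 * focal_loss(logits, target_onehot) + \
           0.5 * generalized_dice_loss(logits, target_onehot)
```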
|
Certain biomaterials are capable of inducing the secretion of Vascular
Endothelial Growth Factor (VEGF), which plays a vital role in stimulating
angiogenesis, from cells exposed to their biochemical influence. With this
capacity in mind, three porous glasses were synthesized and
characterized. The objective of this study was to determine the concentration
of the glass particles that, being out of the cytotoxic range, could increase
VEGF secretion. The viability of cultivated bone marrow stromal cells (ST-2)
was assessed. The samples were examined with light microscopy (LM) after
histochemical staining with haematoxylin and eosin (HE). The biological activity
of glasses was evaluated in terms of the influence of the Cu2+ and Sr2+ ions on
the cells. The dissolution products of CuSr-1 and CuSr-2.5 produced the highest
secretion of VEGF from ST-2 cells after 48 h of incubation. The combination of
Cu2+ and Sr2+ lays the foundation for engineering a bioactive glass that can
lead to vascularized, functional bone tissue when used in bone regeneration
applications.
|
We use the data of modern digital sky surveys (PanSTARRS-1, SDSS) combined
with HI-line and far ultraviolet (GALEX) surveys to reclassify 165 early-type
galaxies from the Catalog of Isolated Galaxies (KIG). As a result, the number
of E- and S0-type galaxies was reduced to 91. Our search for companions of
early-type KIG galaxies revealed 90 companions around 45 host galaxies with
line-of-sight velocity differences $|dV| < 500$ km s$^{-1}$ and linear
projected separations $R_{p} < 750$ kpc. We found no appreciable differences in
either integrated luminosity or color of galaxies associated with the presence
or absence of close neighbors. We found a characteristic orbital
mass-to-luminosity ratio for 26 "KIG galaxy--companion" systems to be
$M_{\rm orb}/L_{K} = (74\pm26)\,M_{\odot}/L_{\odot}$, which is consistent with the
$M_{\rm orb}/L_{K}$ estimates for early-type isolated galaxies in the 2MIG
catalog ($63 M_{\odot}/L_{\odot}$), and also with the $M_{\rm orb}/L_{K}$
estimates for E- and S0-type galaxies in the Local Volume: $38\pm22$ (NGC
3115), $82\pm26$ (NGC 5128), $65\pm20$ (NGC 4594). The high halo-to-stellar
mass ratio for E- and S0-type galaxies compared to the average $(20\pm3)
M_{\odot}/L_{\odot}$ ratio for bulgeless spiral galaxies is indicative of a
significant difference between the dynamic evolution of early- and late-type
galaxies.
|
Forced oscillation (FO) is a significant concern threatening power system
stability. Its mechanisms are mostly studied via linear models. However, FO
amplitudes are increasing, as in recent Nordic and Western American events, which
can excite power system nonlinearity. Hence, this paper incorporates
nonlinearity in FO mechanism analysis. The multi-scale technique is employed in
solving the forced oscillation equation to handle the quadratic nonlinearity.
The amplitude-frequency characteristic curves and first-order approximate
expressions are derived. The frequency deviation and jumping phenomenon caused
by nonlinearity are discovered and further analyzed by comparing with linear
models. This paper provides preliminary research on nonlinear FOs in power
systems; further characteristics should be analyzed in the near future.
|
Within integrated tokamak plasma modelling, turbulent transport codes are
typically the computational bottleneck limiting their routine use outside of
post-discharge analysis. Neural network (NN) surrogates have been used to
accelerate these calculations while retaining the desired accuracy of the
physics-based models. This paper extends a previous NN model, known as
QLKNN-hyper-10D, by incorporating the impact of impurities, plasma rotation and
magnetic equilibrium effects. This is achieved by adding a light impurity
fractional density ($n_{imp,light} / n_e$) and its normalized gradient, the
normalized pressure gradient ($\alpha$), the toroidal Mach number ($M_{tor}$)
and the normalized toroidal flow velocity gradient. The input space was sampled
based on experimental data from the JET tokamak to avoid the curse of
dimensionality. The resulting networks, named QLKNN-jetexp-15D, show good
agreement with the original QuaLiKiz model, both by comparing individual
transport quantity predictions as well as comparing its impact within the
integrated model, JINTRAC. The profile-averaged RMS of the integrated modelling
simulations is <10% for each of the 5 scenarios tested. This is non-trivial
given the potential numerical instabilities present within the highly nonlinear
system of equations governing plasma transport, especially considering the
novel addition of momentum flux predictions to the model proposed here. An
evaluation of all 25 NN output quantities at one radial location takes
$\sim$0.1 ms, $10^4$ times faster than the original QuaLiKiz model. Within the
JINTRAC integrated modelling tests performed in this study, using
QLKNN-jetexp-15D resulted in a speed increase of only 60-100x, as other physics
modules outside of turbulent transport become the bottleneck.
|
Research in Natural Language Processing is making rapid advances, resulting
in the publication of a large number of research papers. Finding relevant
research papers and their contribution to the domain is a challenging problem.
In this paper, we address this challenge via the SemEval 2021 Task 11:
NLPContributionGraph, by developing a system for a research paper
contributions-focused knowledge graph over Natural Language Processing
literature. The task is divided into three sub-tasks: extracting contribution
sentences that show important contributions in the research article, extracting
phrases from the contribution sentences, and predicting the information units
in the research article together with triplet formation from the phrases. The
proposed system is agnostic to the subject domain and can be applied for
building a knowledge graph for any area. We found that transformer-based
language models can significantly improve existing techniques and therefore
utilized a SciBERT-based model. Our first sub-task uses a Bidirectional LSTM (BiLSTM)
stacked on top of SciBERT model layers, while the second sub-task uses
Conditional Random Field (CRF) on top of SciBERT with BiLSTM. The third
sub-task uses a combined SciBERT based neural approach with heuristics for
information unit prediction and triplet formation from the phrases. Our system
achieved F1 scores of 0.38, 0.63, and 0.76 in end-to-end pipeline testing, phrase
extraction testing, and triplet extraction testing, respectively.
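A minimal sketch of the first sub-task's architecture (a BiLSTM stacked on SciBERT followed by a sentence-level classifier), assuming the HuggingFace Transformers library and the public allenai/scibert_scivocab_uncased checkpoint; layer sizes are illustrative assumptions.

```python
# Minimal sketch, not the authors' code: SciBERT encoder -> BiLSTM -> classifier.
import torch
import torch.nn as nn
from transformers import AutoModel

class SciBertBiLstmClassifier(nn.Module):
    def __init__(self, hidden=256, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")
        self.bilstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        token_states = self.encoder(input_ids=input_ids,
                                    attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(token_states)
        # Use the BiLSTM state at the [CLS] position as the sentence summary.
        return self.classifier(lstm_out[:, 0, :])
```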
|
Every year, the RoboCup@Home competition challenges teams' and robots'
abilities. In 2020, the RoboCup@Home Education challenge was organized online,
altering the usual competition rules. In this paper, we present the latest
developments that led the RoboBreizh team to win the contest. These
developments include several interlinked modules that allow the Pepper
robot to understand, act and adapt itself to a local environment. Up-to-date
available technologies have been used for navigation and dialogue. The first
contribution combines object detection and pose estimation techniques
to detect the user's intention. The second contribution uses Learning by
Demonstration to easily learn new movements that improve the Pepper robot's
skills. This proposal won the best performance award of the 2020 RoboCup@Home
Education challenge.
|
Although significant progress in automatic learning of steganographic cost
has been achieved recently, existing methods designed for spatial images are
not well applicable to JPEG images which are more common media in daily life.
The difficulties of migration mostly lie in the unique and complicated JPEG
characteristics caused by 8x8 DCT mode structure. To address the issue, in this
paper we extend an existing automatic cost learning scheme to JPEG, where the
proposed scheme called JEC-RL (JPEG Embedding Cost with Reinforcement Learning)
is explicitly designed to suit the JPEG DCT structure. It works with the
embedding action sampling mechanism under reinforcement learning, where a
policy network learns the optimal embedding policies via maximizing the rewards
provided by an environment network. The policy network is constructed following
a domain-transition design paradigm, where three modules including pixel-level
texture complexity evaluation, DCT feature extraction, and mode-wise
rearrangement, are proposed. These modules operate in serial, gradually
extracting useful features from a decompressed JPEG image and converting them
into embedding policies for DCT elements, while considering JPEG
characteristics including inter-block and intra-block correlations
simultaneously. The environment network is designed in a gradient-oriented way
to provide stable reward values by using a wide architecture equipped with a
fixed preprocessing layer with 8x8 DCT basis filters. Extensive experiments and
ablation studies demonstrate that the proposed method can achieve good security
performance for JPEG images against both advanced feature-based and modern
CNN-based steganalyzers.
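For illustration, the fixed preprocessing layer mentioned above can be thought of as a filter bank built from the 64 two-dimensional 8x8 DCT-II basis functions; the sketch below constructs such a bank (an assumption-level illustration, not the authors' implementation).

```python
# Build the 64 orthonormal 8x8 DCT-II basis functions, usable as fixed
# convolution kernels in a preprocessing layer.
import numpy as np

def dct_basis_8x8():
    n = 8
    filters = np.zeros((n * n, n, n))
    x = np.arange(n)
    for u in range(n):
        for v in range(n):
            cu = np.sqrt(1.0 / n) if u == 0 else np.sqrt(2.0 / n)
            cv = np.sqrt(1.0 / n) if v == 0 else np.sqrt(2.0 / n)
            basis = np.outer(np.cos((2 * x + 1) * u * np.pi / (2 * n)),
                             np.cos((2 * x + 1) * v * np.pi / (2 * n)))
            filters[u * n + v] = cu * cv * basis
    return filters  # shape (64, 8, 8)

kernels = dct_basis_8x8()
```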
|
Back propagation based visualizations have been proposed to interpret deep
neural networks (DNNs), some of which produce interpretations with good visual
quality. However, there exist doubts about whether these intuitive
visualizations are related to the network decisions. Recent studies have
confirmed this suspicion by verifying that almost all these modified
back-propagation visualizations are not faithful to the model's decision-making
process. Besides, these visualizations produce vague "relative importance
scores", whose low values cannot be guaranteed to be independent of the final
prediction. Hence, it is highly desirable to develop a novel back-propagation
framework that guarantees theoretical faithfulness and produces a quantitative
attribution score with a clear meaning. To achieve this goal, we resort to
mutual information theory to generate the interpretations, studying how much
information about the output is encoded in each input neuron. The basic idea is
to learn a source signal by back-propagation such that the mutual information
between input and output is preserved, as much as possible, in the mutual
information between the input and the source signal. In addition, we propose a
Mutual Information Preserving Inverse Network, termed MIP-IN, in which the
parameters of each layer are recursively trained to learn how to invert. During
the inversion, a forward ReLU operation is adopted to adapt the general
interpretations to the specific input. We then empirically demonstrate that the
inverted source signal satisfies the completeness and minimality properties, which
are crucial for a faithful interpretation. Furthermore, the empirical study
validates the effectiveness of interpretations generated by MIP-IN.
|
Matrix powering is a fundamental computational primitive in linear algebra.
It has widespread applications in scientific computing and engineering, and
underlies the solution of time-homogeneous linear ordinary differential
equations, the simulation of discrete-time Markov chains, and the discovery of
the spectral properties of matrices via iterative methods. In this paper, we
investigate the possibility of speeding up matrix powering of sparse stable
Hermitian matrices on a quantum computer. We present two quantum algorithms
that can achieve speedup over classical matrix powering algorithms -- (i)
an adaptation of a quantum-walk-based fast-forwarding algorithm, and (ii) an algorithm
based on Hamiltonian simulation. Furthermore, by mapping the N-bit parity
determination problem to a matrix powering problem, we provide no-go theorems
that limit the quantum speedups achievable in powering non-Hermitian matrices.
|
In an adversarial environment, a hostile player performing a task may behave
like a non-hostile one in order not to reveal its identity to an opponent. To
model such a scenario, we define identity concealment games: zero-sum
stochastic reachability games with a zero-sum objective of identity
concealment. To measure the identity concealment of the player, we introduce
the notion of an average player. The average player's policy represents the
expected behavior of a non-hostile player. We show that there exists an
equilibrium policy pair for every identity concealment game and give the
optimality equations to synthesize an equilibrium policy pair. If the player's
opponent follows a non-equilibrium policy, the player can hide its identity
better. For this reason, we study how the hostile player may learn the
opponent's policy. Since learning via exploration policies would quickly reveal
the hostile player's identity to the opponent, we consider the problem of
learning a near-optimal policy for the hostile player using the game runs
collected under the average player's policy. Consequently, we propose an
algorithm that provably learns a near-optimal policy and give an upper bound on
the number of sample runs to be collected.
|
A celebrated theorem of Lind states that a positive real number is equal to
the spectral radius of some integral primitive matrix if and only if it is a
Perron algebraic integer. Given a Perron number $p$, we prove that there is an
integral irreducible matrix with spectral radius $p$, and with dimension
bounded above in terms of the algebraic degree, the ratio of the first two
largest Galois conjugates, and arithmetic information about the ring of
integers of its number field. This arithmetic information can be taken to be
either the discriminant or the minimal Hermite-like thickness. Equivalently,
given a Perron number $p$, there is an irreducible shift of finite type with
entropy $\log(p)$ defined as an edge shift on a graph whose number of vertices
is bounded above in terms of the aforementioned data.
|
Neuromorphic computing aims to mimic the architecture of the human brain to
carry out computational tasks that are challenging and much more
energy-consuming for standard hardware. Despite progress in several fields of physics
and engineering, the realization of artificial neural networks which combine
high operating speeds with fast and low-energy adaptability remains a
challenge. Here we demonstrate an opto-magnetic neural network capable of
learning and classification of digitized 3x3 characters exploiting local
storage in the magnetic material. Using picosecond laser pulses, we find that
micrometer-sized synapses absorb well below 100 picojoules per synapse per laser
pulse, with favorable scaling to smaller spatial dimensions. We thus succeeded
in combining the speed and low-dissipation of optical networks with the
low-energy adaptability and non-volatility of magnetism, providing a promising
approach to fast and energy-efficient neuromorphic computing.
|
Gauge invariance, a core principle in electrodynamics, has two separate
meanings. One concept treats the photon as the gauge particle for
electrodynamics. It is based on symmetries of the Lagrangian, and requires no
mention of electric or magnetic fields. The second concept depends directly on
the electric and magnetic fields, and how they can be represented by potential
functions that are not unique. A general proof that potentials are more
fundamental than fields serves to resolve discrepancies. Physical symmetries,
however, are altered by gauge transformations and strongly limit gauge freedom.
A new constraint on the form of allowable gauge transformations must be
introduced that applies to both gauge concepts.
|
Although matter contributions to the graviton self-energy
$-i[\mbox{}^{\mu\nu} \Sigma^{\rho\sigma}](x;x')$ must be separately conserved
on $x^{\mu}$ and ${x'}^{\mu}$, graviton contributions obey the weaker
constraint of the Ward identity, which involves a divergence on both
coordinates. On a general homogeneous and isotropic background this leads to
just four structure functions for matter contributions but nine structure
functions for graviton contributions. We propose a convenient parameterization
for these nine structure functions. We also apply the formalism to explicit one
loop computations of $-i[\mbox{}^{\mu\nu} \Sigma^{\rho\sigma}](x;x')$ on de
Sitter background, one for the contribution from a massless, minimally coupled
scalar and the other for the contribution from gravitons in the simplest gauge.
We also specialize the linearized, quantum-corrected Einstein equation to the
graviton mode function and to the gravitational response to a point mass.
|
The large-scale integration of converter-interfaced resources in electrical
power systems raises new stability threats which call for a new theoretical
framework for modelling and analysis. Here we present the theory of
power-communication isomorphism to solve this grand challenge. It is revealed
that an intrinsic communication mechanism governs the synchronisation of all
apparatus in power systems, based on which a unified representation for
heterogeneous apparatus and behaviours is established. We develop the
mathematics to model the dynamic interaction within a power-communication
isomorphic system, which yields a simple stability criterion for complex systems
that can be intuitively interpreted and thus conveniently applied in practice.
|
Bundles of C*-algebras can be used to represent limits of physical theories
whose algebraic structure depends on the value of a parameter. The primary
example is the $\hbar\to 0$ limit of the C*-algebras of physical quantities in
quantum theories, represented in the framework of strict deformation
quantization. In this paper, we understand such limiting procedures in terms of
the extension of a bundle of C*-algebras to some limiting value of a parameter.
We prove existence and uniqueness results for such extensions. Moreover, we
show that such extensions are functorial for the C*-product, dynamical
automorphisms, and the Lie bracket (in the $\hbar\to 0$ case) on the fiber
C*-algebras.
|
We provide a theoretical analysis of the nature of the orbital angular
momentum (OAM) modal fields in a multilayered fiber, such as the step-index
fiber and the ring-core fiber. In a detailed study of the vector field
solutions of the step-index fiber (in the exponential basis), we discover that
the polarization-induced field component is a modified scalar OAM field (as
opposed to a standard OAM scalar field) with a shifted intensity pattern in the
weakly guiding approximation (WGA); the familiar intensity donut pattern is
reduced or increased in radius depending upon whether it is a case of
spin-alignment or anti-alignment with the OAM. Such a shift in the intensity
pattern appears to be a general feature of the field of a multilayered fiber as
seen from an extension to the ring-core fiber. Additionally, we derive a
general expression for the polarization-correction to the scalar propagation
constant, which includes, for the first time, the contribution of the
polarization-induced field. All the analytic expressions are illustrated and
validated numerically with application to a step-index fiber, whose analytic
solutions are well-known.
|
In online advertising, auto-bidding has become an essential tool for
advertisers to optimize their preferred ad performance metrics by simply
expressing high-level campaign objectives and constraints. Previous works
designed auto-bidding tools from a single-agent view, without modeling the
mutual influence among agents. In this paper, we instead consider this
problem from a distributed multi-agent perspective, and propose a general
$\underline{M}$ulti-$\underline{A}$gent reinforcement learning framework for
$\underline{A}$uto-$\underline{B}$idding, namely MAAB, to learn the
auto-bidding strategies. First, we investigate the competition and cooperation
relation among auto-bidding agents, and propose a temperature-regularized
credit assignment to establish a mixed cooperative-competitive paradigm. By
carefully balancing competition and cooperation among agents, we can
reach an equilibrium state that guarantees not only individual advertiser's
utility but also the system performance (i.e., social welfare). Second, to
avoid potential collusion behaviors of bidding low prices under the
cooperation, we further propose bar agents that set a personalized bidding bar
for each agent, thereby alleviating the revenue degradation due to the
cooperation. Third, to deploy MAAB in the large-scale advertising system with
millions of advertisers, we propose a mean-field approach. By grouping
advertisers with the same objective as a mean auto-bidding agent, the
interactions among the large-scale advertisers are greatly simplified, making
it practical to train MAAB efficiently. Extensive experiments on the offline
industrial dataset and Alibaba advertising platform demonstrate that our
approach outperforms several baseline methods in terms of social welfare and
revenue.
|
This paper investigates an end-to-end neural diarization (EEND) method for an
unknown number of speakers. In contrast to the conventional pipeline approach
to speaker diarization, EEND methods are better in terms of speaker overlap
handling. However, EEND still has a disadvantage in that it cannot deal with a
flexible number of speakers. To remedy this problem, we introduce an
encoder-decoder-based attractor calculation module (EDA) into EEND. Once
frame-wise embeddings are obtained, EDA sequentially generates speaker-wise
attractors on the basis of a sequence-to-sequence method using an LSTM
encoder-decoder. The attractor generation continues until a stopping condition
is satisfied; thus, the number of attractors can be flexible. Diarization
results are then estimated as dot products of the attractors and embeddings.
The embeddings from speaker overlaps result in larger dot product values with
multiple attractors; thus, this method can deal with speaker overlaps. Because
the maximum number of output speakers is still limited by the training set, we
also propose an iterative inference method to remove this restriction. Further,
we propose a method that aligns the estimated diarization results with the
results of an external speech activity detector, which enables fair comparison
against pipeline approaches. Extensive evaluations on simulated and real
datasets show that EEND-EDA outperforms the conventional pipeline approach.
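A minimal sketch of the final scoring step described above, in which per-frame, per-speaker activities are obtained from dot products between frame-wise embeddings and the EDA attractors; array shapes and the 0.5 threshold are illustrative assumptions, not the paper's configuration.

```python
# Diarization posteriors as sigmoids of embedding-attractor dot products.
import numpy as np

def diarization_posteriors(embeddings, attractors):
    # embeddings: (T, D) frame-wise embeddings
    # attractors: (S, D) one attractor per estimated speaker
    logits = embeddings @ attractors.T            # (T, S)
    return 1.0 / (1.0 + np.exp(-logits))          # per-frame, per-speaker activity

# Example: 1000 frames, 256-dim embeddings, 3 attractors kept after the
# stopping criterion; frames whose posterior exceeds 0.5 for several
# speakers are treated as overlapped speech.
post = diarization_posteriors(np.random.randn(1000, 256), np.random.randn(3, 256))
overlap_frames = (post > 0.5).sum(axis=1) > 1
```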
|
We study the effect of disorder and doping on the metal-insulator transition
in a repulsive Hubbard model on a square lattice using the determinant quantum
Monte Carlo method. First, with the aim of making our results reliable, we
examine the sign problem for various parameters such as temperature, disorder,
on-site interactions, and lattice size. We show that in the presence of
randomness in the hopping elements, the metal-insulator transition occurs and
the critical disorder strength differs at different fillings. We also
demonstrate that doping is a driving force behind the metal-insulator
transition.
|
The symmetry energy and its density dependence are crucial inputs for many
nuclear physics and astrophysics applications, as they determine properties
ranging from the neutron-skin thickness of nuclei to the crust thickness and
the radius of neutron stars. Recently, PREX-II reported a value of $0.283 \pm
0.071$ fm for the neutron-skin thickness of $^{208}$Pb, implying a slope
parameter $L = 106 \pm 37$ MeV, larger than most ranges obtained from
microscopic calculations and other nuclear experiments. We use a nonparametric
equation of state representation based on Gaussian processes to constrain the
symmetry energy $S_0$, $L$, and $R_\mathrm{skin}^{^{208}\mathrm{Pb}}$ directly
from observations of neutron stars with minimal modeling assumptions. The
resulting astrophysical constraints from heavy pulsar masses, LIGO/Virgo, and
NICER clearly favor smaller values of the neutron skin and $L$, as well as
negative symmetry incompressibilities. Combining astrophysical data with
PREX-II and chiral effective field theory constraints yields $S_0 =
33.0^{+2.0}_{-1.8}$ MeV, $L=53^{+14}_{-15}$ MeV, and
$R_\mathrm{skin}^{^{208}\mathrm{Pb}}=0.17^{+0.04}_{-0.04}$ fm.
|
The shortage of highly qualified high school physics teachers is a national
problem. The Mitchell Institute Physics Enhancement Program (MIPEP) is a
two-week professional development program for in-service high school physics
teachers with a limited background in the subject area. MIPEP, which started in
2012, includes intense training in both subject matter and research-based
instructional strategies. Content and materials used in the program fulfill
state curriculum requirements. The MIPEP curriculum is taught by Texas A&M
University faculty from the Department of Physics & Astronomy along with two
master high school physics teachers. In this paper we present the design and
implementation of MIPEP. We report on assessments of the knowledge and confidence
of the 2014-2018 MIPEP cohorts. We also present the results of the 2020 program that
was delivered remotely due to the pandemic. Analysis of these assessments
showed that the majority of MIPEP participants increased their physics
knowledge and their confidence in that knowledge during both traditional and
virtual program deliveries.
|
We report the detection and analysis of a radio flare observed on 17 April
2014 from Sgr A* at $9$ GHz using the VLA in its A-array configuration. This is
the first reported simultaneous radio observation of Sgr A* across $16$
frequency windows between $8$ and $10$ GHz. We cross-correlate the lowest and
highest spectral windows centered at $8.0$ and $9.9$ GHz, respectively, and
find the $8.0$ GHz light curve lagging $18.37^{+2.17}_{-2.18}$ minutes behind
the $9.9$ GHz light curve. This is the first time lag found in Sgr A*'s light
curve across a narrow radio frequency bandwidth. We separate the quiescent and
flaring components of Sgr A* via flux offsets at each spectral window. The
emission is consistent with an adiabatically-expanding synchrotron plasma,
which we fit to the light curves to characterize the two components. The
flaring emission has an equipartition magnetic field strength of $2.2$ Gauss,
size of $14$ Schwarzschild radii, average speed of $12000$ km s$^{-1}$, and
electron energy spectrum index ($N(E)\propto E^{-p}$), $p = 0.18$. The peak
flare flux at $10$ GHz is approximately $25$% of the quiescent emission. This
flare is abnormal, as the inferred magnetic field strength and size are
typically about $10$ Gauss and a few Schwarzschild radii, respectively. The properties of this
flare are consistent with a transient warm spot in the accretion flow at a
distance of $10$-$100$ Schwarzschild radii from Sgr A*. Our analysis allows for
independent characterization of the variable and quiescent components, which is
significant for studying temporal variations in these components.
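For illustration, a time lag such as the one reported above can be estimated from the peak of the cross-correlation between two evenly sampled light curves; the sketch below is a simplified stand-in for the analysis and omits the uncertainty estimation (e.g. the quoted +/- ~2.2 min would require bootstrap-style resampling).

```python
# Estimate the lag between two evenly sampled light curves from the peak of
# their normalized cross-correlation.
import numpy as np

def lag_minutes(lc_low, lc_high, dt_minutes):
    a = (lc_low - lc_low.mean()) / lc_low.std()
    b = (lc_high - lc_high.mean()) / lc_high.std()
    cc = np.correlate(a, b, mode="full")
    lags = np.arange(-len(a) + 1, len(a)) * dt_minutes
    # With numpy's convention, a positive lag means the first series (lc_low)
    # lags behind the second one (lc_high).
    return lags[np.argmax(cc)]
```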
|
General partners (GPs) are sometimes paid on a deal-by-deal basis and other
times on a whole-portfolio basis. When is one method of payment better than the
other? I show that when assets (projects or firms) are highly correlated or
when GPs have low reputation, whole-portfolio contracting is superior to
deal-by-deal contracting. In this case, by bundling payouts together,
whole-portfolio contracting enhances incentives for GPs to exert effort.
Therefore, it is better suited to alleviate the moral hazard problem which is
stronger than the adverse selection problem in the case of high correlation of
assets or low reputation of GPs. In contrast, for low correlation of assets or
high reputation of GPs, information asymmetry concerns dominate and
deal-by-deal contracts become optimal, as they can efficiently weed out bad
projects one by one. These results shed light on recent empirical findings on
the relationship between investors and venture capitalists.
|
Consensus about the universality of the power law feature in complex networks
is experiencing profound challenges. To shine fresh light on this controversy,
we propose a generic theoretical framework in order to examine the power law
property. First, we study a class of birth-and-death networks that is
ubiquitous in the real world, and calculate its degree distributions. Our
results show that the tails of its degree distributions exhibit a distinct
power law feature, providing robust theoretical support for the ubiquity of the
power law feature. Second, we suggest that in the real world two important
factors, network size and node disappearance probability, govern whether the
power law feature can be observed in real networks. As network size
decreases, or as the probability of node disappearance increases, the power
law feature becomes increasingly difficult to observe. Finally, we suggest that
an effective way of detecting the power law property is to observe the
asymptotic (limiting) behaviour of the degree distribution within its effective
intervals.
|
In the search for small exoplanets orbiting cool stars whose spectral energy
distributions peak in the near infrared, the strong absorption of radiation in
this region due to water vapour in the atmosphere is a particularly adverse
effect for the ground-based observations of cool stars. To achieve the
photometric precision required to detect exoplanets in the near infrared, it is
necessary to mitigate the impact of variable precipitable water vapour (PWV) on
radial-velocity and photometric measurements. The aim is to enable global PWV
correction by monitoring the amount of precipitable water vapour at zenith and
along the line of sight of any visible target. We developed an open source
Python package that uses Geostationary Operational Environmental Satellites
(GOES) imagery data, which provides temperature and relative humidity at
different pressure levels to compute near real-time PWV above any ground-based
observatory covered by GOES every 5 minutes or 10 minutes depending on the
location. We computed PWV values on selected days above Cerro Paranal (Chile)
and San Pedro M\'artir (Mexico) to benchmark the procedure. We also simulated
different pointings at test targets as observed from the sites to compute the
PWV along the line of sight. To assess the accuracy of our method, we compared
our results with the on-site radiometer measurements obtained from Cerro
Paranal. Our results show that our publicly-available code proves to be a good
supporting tool for measuring the local PWV for any ground-based facility
within the GOES coverage, which will help in reducing correlated noise
contributions in near-infrared ground-based observations that do not benefit
from on-site PWV measurements.
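A minimal sketch of how zenith PWV can be integrated from temperature and relative-humidity profiles on pressure levels (as provided by GOES products); the Magnus-type saturation vapour pressure formula and the simple airmass scaling are assumptions for illustration, not the package's code.

```python
# Integrate precipitable water vapour from a pressure-level profile.
import numpy as np

def pwv_mm(pressure_hpa, temp_c, rel_hum):
    """pressure_hpa: pressure levels [hPa]; temp_c [deg C]; rel_hum in [0, 1]."""
    es = 6.112 * np.exp(17.62 * temp_c / (243.12 + temp_c))   # saturation vapour pressure [hPa]
    e = rel_hum * es                                          # vapour pressure [hPa]
    q = 0.622 * e / (pressure_hpa - 0.378 * e)                # specific humidity [kg/kg]
    g, rho_w = 9.81, 1000.0                                   # m/s^2, kg/m^3
    # PWV = (1 / (rho_w * g)) * integral of q dp (pressure converted hPa -> Pa)
    pwv_m = np.trapz(q, pressure_hpa * 100.0) / (rho_w * g)
    return abs(pwv_m) * 1000.0                                # metres -> millimetres

# Line-of-sight PWV at zenith angle z is approximately the zenith value
# scaled by the airmass, ~ 1 / cos(z) for moderate zenith angles.
```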
|
"Pay-per-last-$N$-shares" (PPLNS) is one of the most common payout strategies
used by mining pools in Proof-of-Work (PoW) cryptocurrencies. As with any
payment scheme, it is imperative to study issues of incentive compatibility of
miners within the pool. For PPLNS this question has only been partially
answered; we know that reasonably-sized miners within a PPLNS pool prefer
following the pool protocol over employing specific deviations. In this paper,
we present a novel modification to PPLNS where we randomise the protocol in a
natural way. We call our protocol "Randomised pay-per-last-$N$-shares"
(RPPLNS), and note that the randomised structure of the protocol greatly
simplifies the study of its incentive compatibility. We show that RPPLNS
maintains the strengths of PPLNS (i.e., fairness, variance reduction, and
resistance to pool hopping), while also being robust against a richer class of
strategic mining than what has been shown for PPLNS.
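For reference, a minimal sketch of the baseline PPLNS payout rule discussed above (not the randomised RPPLNS variant): when a block is found, the reward is split evenly over the last N shares; unit-difficulty shares and the class names are illustrative assumptions.

```python
# Baseline PPLNS payout: split the block reward over the last N shares.
from collections import deque

class PPLNSPool:
    def __init__(self, window_size):
        self.window = deque(maxlen=window_size)   # last N shares (miner ids)
        self.balances = {}

    def submit_share(self, miner_id):
        self.window.append(miner_id)

    def on_block_found(self, block_reward):
        per_share = block_reward / len(self.window)
        for miner_id in self.window:
            self.balances[miner_id] = self.balances.get(miner_id, 0.0) + per_share

pool = PPLNSPool(window_size=5)
for m in ["a", "a", "b", "c", "b"]:
    pool.submit_share(m)
pool.on_block_found(block_reward=6.25)
# balances: a -> 2.5, b -> 2.5, c -> 1.25
```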
|
Accurate knowledge of the thermodynamic properties of zero-temperature,
high-density quark matter plays an integral role in attempts to constrain the
behavior of the dense QCD matter found inside neutron-star cores, irrespective
of the phase realized inside the stars. In this Letter, we consider the
weak-coupling expansion of the dense QCD equation of state and compute the
next-to-next-to-next-to-leading-order contribution arising from the non-Abelian
interactions among long-wavelength, dynamically screened gluonic fields.
Accounting for these interactions requires an all-loop resummation, which can
be performed using hard-thermal-loop (HTL) kinematic approximations.
Concretely, we perform a full two-loop computation using the HTL effective
theory, valid for the long-wavelength, or soft, modes. We find that the soft
sector is well-behaved within cold quark matter, contrary to the case
encountered at high temperatures, and find that the new contribution decreases
the renormalization-scale dependence of the equation of state at high density.
|
We present the first study of nearby M dwarfs with the extended ROentgen Survey
with an Imaging Telescope Array (eROSITA) on board the Russian
Spektrum-Roentgen-Gamma mission (SRG). To this end we extracted the Gaia DR2
data for the ~9000 nearby M dwarfs in the superblink proper motion catalog and
calculated their stellar parameters from empirical relations with optical-IR
colors. We cross-matched this catalog with the eROSITA Final Equatorial Depth
Survey (eFEDS) and the first eROSITA all-sky survey (eRASS1). Our sample
consists of 704 stars (SpT = K5-M7). This unprecedented database of X-ray
emitting M dwarfs allowed us to quantitatively constrain the mass dependence of
the X-ray luminosity, and to determine the change in the activity level with
respect to pre-main-sequence stars. We also combined these data with the
Transiting Exoplanet Survey Satellite (TESS) observations that are available
for 501 of 704 X-ray detected M dwarfs and determined the rotation period for
180 of them. With the joint eROSITA-TESS sample, and combining it with our
historical X-ray and rotation data for M dwarfs, we examined the mass
dependence in the saturated regime of the rotation-activity relation. A first
comparison of eROSITA hardness ratios and spectra shows that 65% of our X-ray
detected M dwarfs have coronal temperatures of $\sim 0.5$ keV. We investigated
their long-term X-ray variability by comparing the eRASS1 and ROSAT all-sky
survey (RASS) measurements. Evidence for X-ray flares is found in various parts
of our analysis: directly from inspection of the eFEDS light curves, in the
relation between RASS and eRASS1 X-ray luminosities, and in stars displaying
X-ray emission hotter than the bulk of the sample according to the hardness
ratios. Finally, we point out the need for X-ray spectroscopy of more M dwarfs
to study the coronal temperature-luminosity relation, not well constrained by
our eFEDS results.
|
The generating function of a Hamiltonian $H$ is defined as $F(t)=\langle
e^{-itH}\rangle$, where $t$ is the time and where the expectation value is
taken on a given initial quantum state. This function gives access to the
different moments of the Hamiltonian $\langle H^{K}\rangle$ at various orders
$K$. The real and imaginary parts of $F(t)$ can be respectively evaluated on
quantum computers using one extra ancillary qubit with a set of measurement for
each value of the time $t$. The low cost in terms of qubits renders it very
attractive in the near term period where the number of qubits is limited.
Assuming that the generating function can be precisely computed using quantum
devices, we show how the information content of this function can be used a
posteriori on classical computers to solve quantum many-body problems. Several
methods of classical post-processing are illustrated with the aim to predict
approximate ground or excited state energies and/or approximate long-time
evolutions. This post-processing can be achieved using methods based on the
Krylov space and/or on the $t$-expansion approach that is closely related to
the imaginary time evolution. Hybrid quantum-classical calculations are
illustrated in many-body interacting systems using the pairing and
Fermi-Hubbard models.
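For clarity, the way the different moments enter follows from the Taylor expansion of the generating function (a standard identity, implicit in the description above): $F(t) = \langle e^{-itH}\rangle = \sum_{K=0}^{\infty} \frac{(-it)^{K}}{K!}\,\langle H^{K}\rangle$, so that $\langle H^{K}\rangle = i^{K}\,\left.\frac{d^{K}F}{dt^{K}}\right|_{t=0}$.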
|
Reliable predictions of the behaviour of chemical systems are essential
across many industries, from nanoscale engineering and the validation of advanced
materials to nanotoxicity assessment in health and medicine. For the future we
therefore envision a paradigm shift for the design of chemical simulations
across all length scales from a prescriptive to a predictive and quantitative
science. This paper presents an integrative perspective about the
state-of-the-art of modelling in computational and theoretical chemistry with
examples from data- and equation-based models. Extensions to include reliable
risk assessment and quality control are discussed. To specify and broaden the
concept of chemical accuracy in the design cycle of reliable and robust
molecular simulations the fields of computational chemistry, physics,
mathematics, visualisation science, and engineering are bridged. Methods from
electronic structure calculations serve as examples to explain how
uncertainties arise: through assumed mechanisms in the form of equations, model
parameters, algorithms, and numerical implementations. We provide a full
classification of uncertainties throughout the chemical modelling cycle and
discuss how the associated risks can be mitigated. Further, we apply our
statements to molecular dynamics and partial differential equations based
approaches. An overview of methods from numerical mathematics and statistics
provides strategies to analyse risks and potential errors in the design of new
materials and compounds. We also touch on methods for validation and
verification. In the conclusion we address cross-disciplinary open challenges.
In future the quantitative analysis of where simulations and their prognosis
fail will open doors towards predictive materials engineering and chemical
modelling.
|
This paper studies a federated learning (FL) system, where \textit{multiple}
FL services co-exist in a wireless network and share common wireless resources.
It fills a void in the existing literature on wireless resource allocation for
multiple simultaneous FL services. Our method designs a two-level resource
allocation framework comprising \emph{intra-service} resource allocation and
\emph{inter-service} resource allocation. The intra-service resource allocation
problem aims to minimize the length of FL rounds by optimizing the bandwidth
allocation among the clients of each FL service. Based on this, an
inter-service resource allocation problem is further considered, which
distributes bandwidth resources among multiple simultaneous FL services. We
consider both cooperative and selfish providers of the FL services. For
cooperative FL service providers, we design a distributed bandwidth allocation
algorithm to optimize the overall performance of multiple FL services,
while catering to fairness among FL services and the privacy of clients.
For selfish FL service providers, a new auction scheme is designed with the FL
service owners as the bidders and the network provider as the auctioneer. The
designed auction scheme strikes a balance between the overall FL performance
and fairness. Our simulation results show that the proposed algorithms
outperform other benchmarks under various network conditions.
|
Among various quantum key distribution (QKD) protocols, the round-robin
differential-phase-shift (RRDPS) protocol has a unique feature that its
security is guaranteed without monitoring any statistics. Moreover, this
protocol has a remarkable property of being robust against source imperfections
assuming that the emitted pulses are independent. Unfortunately, some
experiments confirmed the violation of the independence due to pulse
correlations, and therefore the lack of a security proof without taking into
account this effect is an obstacle for the security. In this paper, we prove
that the RRDPS protocol is secure against any source imperfections by
establishing a proof with the pulse correlations. Our proof is simple in the
sense that we make only three experimentally simple assumptions for the source.
Our numerical simulation based on the proof shows that the long-range pulse
correlation does not cause a significant impact on the key rate, which reveals
another striking feature of the RRDPS protocol. Our security proof is thus
effective and applicable to a wide range of practical sources and paves the way
to realize truly secure QKD in high-speed systems.
|
We consider a pursuit-evasion problem with a heterogeneous team of multiple
pursuers and multiple evaders. Although both the pursuers (robots) and the
evaders are aware of each other's control and assignment strategies, they do
not have exact information about the other type of agents' location or action.
Using only noisy on-board sensors, the pursuers (or evaders) make probabilistic
estimates of the positions of the evaders (or pursuers). Each type of agent uses
Markov localization to update the probability distribution of the other type. A
search-based control strategy is developed for the pursuers that intrinsically
takes the probability distribution of the evaders into account. Pursuers are
assigned using an assignment algorithm that takes redundancy (i.e., an excess
of pursuers over the number of evaders) into account, such that
the total or maximum estimated time to capture the evaders is minimized. In
this respect we assume the pursuers to have a clear advantage over the evaders.
However, the objective of this work is to use assignment strategies that
minimize the capture time. This assignment strategy is based on a modified
Hungarian algorithm as well as a novel algorithm for determining assignment of
redundant pursuers. The evaders, in order to effectively avoid the pursuers,
predict the assignment based on their probabilistic knowledge of the pursuers
and use a control strategy to actively move away from those pursuers. Our
experimental evaluation shows that the redundant assignment algorithm performs
better than an alternative nearest-neighbor based assignment algorithm.
|
The drastic increase of data quantity often brings the severe decrease of
data quality, such as incorrect label annotations, which poses a great
challenge for robustly training Deep Neural Networks (DNNs). Existing learning
methods with label noise either employ ad-hoc heuristics or are restricted to
specific noise assumptions. However, more general situations, such as
instance-dependent label noise, have not been fully explored, as scarce studies
focus on their label corruption process. By categorizing instances into
confusing and unconfusing instances, this paper proposes a simple yet universal
probabilistic model, which explicitly relates noisy labels to their instances.
The resultant model can be realized by DNNs, where the training procedure is
accomplished by employing an alternating optimization algorithm. Experiments on
datasets with both synthetic and real-world label noise verify that the
proposed method yields significant improvements on robustness over
state-of-the-art counterparts.
|
We propose a new hybrid quantum algorithm based on the classical Ant Colony
Optimization algorithm to produce approximate solutions for NP-hard problems,
in particular optimization problems. First, we discuss some previously proposed
Quantum Ant Colony Optimization algorithms, and based on them, we develop an
improved algorithm that can be truly implemented on near-term quantum
computers. Our iterative algorithm codifies only the information about the
pheromones and the exploration parameter in the quantum state, while
subrogating the calculation of the numerical result to a classical computer. A
new guided exploration strategy is used in order to take advantage of the
quantum computation power and generate new possible solutions as a
superposition of states. This approach is especially useful for solving constrained
optimization problems, where we can implement efficiently the exploration of
new paths without having to check the correspondence of a path to a solution
before the measurement of the state. As an example of an NP-hard problem, we
choose to solve the Quadratic Assignment Problem. The benchmarks made by
simulating the noiseless quantum circuit and the experiments made on IBM
quantum computers show the validity of the algorithm.
|
The Information Bottleneck (IB) provides an information theoretic principle
for representation learning, by retaining all information relevant for
predicting the label while minimizing the redundancy. Though the IB principle has been
applied to a wide range of applications, its optimization remains a challenging
problem which heavily relies on the accurate estimation of mutual information.
In this paper, we present a new strategy, Variational Self-Distillation (VSD),
which provides a scalable, flexible and analytic solution to essentially
fitting the mutual information but without explicitly estimating it. Under
rigorous theoretical guarantees, VSD enables the IB to grasp the intrinsic
correlation between representation and label for supervised training.
Furthermore, by extending VSD to multi-view learning, we introduce two other
strategies, Variational Cross-Distillation (VCD) and Variational
Mutual-Learning (VML), which significantly improve the robustness of
representation to view-changes by eliminating view-specific and task-irrelevant
information. To verify our theoretically grounded strategies, we apply our
approaches to cross-modal person Re-ID, and conduct extensive experiments,
where superior performance against state-of-the-art methods is demonstrated.
Our intriguing findings highlight the need to rethink the way to estimate mutual
information.
|
In this report, we investigate (element-based) inconsistency measures for
multisets of business rule bases. Currently, related works allow one to assess
individual rule bases; however, as companies might encounter thousands of such
instances daily, studying not only individual rule bases separately, but rather
also their interrelations becomes necessary, especially in regard to
determining suitable re-modelling strategies. We therefore present an approach
to induce multiset-measures from arbitrary (traditional) inconsistency
measures, propose new rationality postulates for a multiset use-case, and
investigate the complexity of various aspects regarding multi-rule base
inconsistency measurement.
|
In this paper, we try to improve exploration in black-box methods,
particularly evolution strategies (ES), when applied to Reinforcement Learning
(RL) problems where intermediate waypoints/subgoals are available. Since
evolution strategies are highly parallelizable, instead of extracting just a
scalar cumulative reward, we use the state-action pairs from the trajectories
obtained during rollouts/evaluations, to learn the dynamics of the agent. The
learnt dynamics are then used in the optimization procedure to speed up
training. Lastly, we show how our proposed approach is universally applicable
by presenting results from experiments conducted on Carla driving and UR5
robotic arm simulators.
|
UDDSKETCH is a recent algorithm for accurate tracking of quantiles in data
streams, derived from the DDSKETCH algorithm. UDDSKETCH provides accuracy
guarantees covering the full range of quantiles independently of the input
distribution and greatly improves the accuracy with regard to DDSKETCH. In this
paper we show how to compress and fuse data streams (or datasets) by using
UDDSKETCH data summaries that are fused into a new summary related to the union
of the streams (or datasets) processed by the input summaries whilst preserving
both the error and size guarantees provided by UDDSKETCH. This property of
sketches, known as mergeability, enables parallel and distributed processing.
We prove that UDDSKETCH is fully mergeable and introduce a parallel version of
UDDSKETCH suitable for message-passing based architectures. We formally prove
its correctness and compare it to a parallel version of DDSKETCH, showing
through extensive experimental results that our parallel algorithm almost
always outperforms the parallel DDSKETCH algorithm with regard to the overall
accuracy in determining the quantiles.
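A minimal sketch of why such bucket-based sketches are mergeable: two summaries built with the same parameters can be fused by simply summing per-bucket counts; the bucket indexing and quantile lookup below are assumption-level illustrations, not the paper's implementation.

```python
# Toy log-bucket sketch: insert, merge by summing counts, and query a quantile.
from collections import Counter
import math

def bucket_index(x, gamma):
    return math.ceil(math.log(x, gamma))

def insert(sketch, x, gamma):
    sketch[bucket_index(x, gamma)] += 1

def merge(s1, s2):
    merged = Counter(s1)
    merged.update(s2)           # per-bucket counts simply add up
    return merged

def quantile(sketch, q, gamma):
    total = sum(sketch.values())
    rank, running = q * total, 0
    for i in sorted(sketch):
        running += sketch[i]
        if running >= rank:
            return gamma ** i   # representative value of bucket i
    return None

gamma = 1.01
s1, s2 = Counter(), Counter()
for v in [1.0, 2.0, 3.0, 10.0]:
    insert(s1, v, gamma)
for v in [4.0, 5.0, 6.0]:
    insert(s2, v, gamma)
print(quantile(merge(s1, s2), 0.5, gamma))   # approximate median of the union
```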
|
Analysing whether neural language models encode linguistic information has
become popular in NLP. One method of doing so, which is frequently cited to
support the claim that models like BERT encode syntax, is called probing;
probes are small supervised models trained to extract linguistic information
from another model's output. If a probe is able to predict a particular
structure, it is argued that the model whose output it is trained on must have
implicitly learnt to encode it. However, drawing a generalisation about a
model's linguistic knowledge about a specific phenomenon based on what a probe
is able to learn may be problematic: in this work, we show that semantic cues
in training data mean that syntactic probes do not properly isolate syntax. We
generate a new corpus of semantically nonsensical but syntactically well-formed
Jabberwocky sentences, which we use to evaluate two probes trained on normal
data. We train the probes on several popular language models (BERT, GPT, and
RoBERTa), and find that in all settings they perform worse when evaluated on
these data, for one probe by an average of 15.4 UUAS points absolute. Although
in most cases they still outperform the baselines, their lead is reduced
substantially, e.g. by 53% in the case of BERT for one probe. This begs the
question: what empirical scores constitute knowing syntax?
|
The growing political polarization of the American electorate over the last
several decades has been widely studied and documented. During the
administration of President Donald Trump, charges of "fake news" made social
and news media not only the means but, to an unprecedented extent, the topic of
political communication. Using data from before the November 3rd, 2020 US
Presidential election, recent work has demonstrated the viability of using
YouTube's social media ecosystem to obtain insights into the extent of US
political polarization as well as the relationship between this polarization
and the nature of the content and commentary provided by different US news
networks. With that work as background, this paper looks at the sharp
transformation of the relationship between news consumers and heretofore
"fringe" news media channels in the 64 days between the US presidential
election and the violence that took place at the US Capitol on January 6th. This
paper makes two distinct types of contributions. The first is to introduce a
novel methodology to analyze large social media data to study the dynamics of
social political news networks and their viewers. The second is to provide
insights into what actually happened regarding US political social media
channels and their viewerships during this volatile 64-day period.
|
Age-related macular degeneration (AMD) may cause severe loss of vision or
blindness particularly in elderly people. Exudative AMD is characterized by
angiogenesis of blood vessels growing from underneath the macula, crossing the
blood-retina barrier (which comprises Bruch's membrane, BM, and the retinal
pigment epithelium, RPE), leaking blood and fluid into the retina and
destroying photoreceptors. Here, we simulate a computational model of
angiogenesis from the choroid blood vessels via a cellular Potts model, as well
as BM, RPE cells, drusen deposits and photoreceptors. Our results indicate that
improving AMD may require fixing the impaired lateral adhesion between RPE
cells and with the BM, as well as diminishing Vascular Endothelial Growth Factor
(VEGF) and Jagged proteins that affect the Notch signaling pathway. Our
numerical simulations suggest that anti-VEGF and anti-Jagged therapies could
temporarily halt exudative AMD while addressing impaired cellular adhesion
could be more effective on a longer time span.
|
The study is based on a principle of laser physics: coherent laser light whose
wavelength is shorter than a feature under inspection (such as a sub-cellular
component) can interact with that specific feature (or textural features) and
generate laser speckle patterns that characterize those features. With this
method we have managed to detect differences at sub-cellular scales, such as
genetic modification and cellular shape deformation, with 87% accuracy. In this
study, a red laser is used whose wavelength (~0.65 microns) is shorter than a
plant cell (~60 microns) and is therefore suitable to interact with sub-cellular
features. The work is intended as an initial stage toward application to the
observation of human cellular changes, which could be used to develop more
accurate methods such as better drug-delivery assessment and early diagnosis of
systemic diseases.
|
Monitoring the network performance experienced by the end-user is crucial for
managers of wireless networks as it can enable them to remotely modify the
network parameters to improve the end-user experience. Unfortunately, for
performance monitoring, managers are typically limited to the logs of the
Access Points (APs) that they manage. This information does not directly
capture factors that can hinder station (STA) side transmissions. Consequently,
state-of-the-art methods to measure such metrics primarily involve active
measurements. Unfortunately, such active measurements increase traffic load and
if used regularly and for all the STAs can potentially disrupt user traffic,
thereby worsening performance for other users in the network and draining the
battery of mobile devices.
This thesis enables passive AP-side network analytics. In the first part of
the thesis, I present virtual speed test, a measurement based framework that
enables an AP to estimate speed test results for any of its associated clients
solely based on AP-side observables. Next, I present Uplink Latency Microscope
(uScope), an AP-side framework for estimation of WLAN uplink latency for any of
the associated STAs and decomposition into its constituent components. Similar
to virtual speed test, uScope makes estimations solely based on passive AP-side
observations. We implement both frameworks on a commodity hardware platform and
conduct extensive field trials on a university campus and in a residential
apartment complex. In over 1 million tests, the two proposed frameworks
demonstrate an estimation accuracy with errors under 10%.
|
We propose a theoretical scheme to enhance the phase sensitivity by
introducing a Kerr nonlinear phase shift into the traditional SU(1,1)
interferometer with a coherent state input and homodyne detection. We
investigate the realistic effects of photon losses on phase sensitivity and
quantum Fisher information. The results show that compared with the linear
phase shift in SU(1,1) interferometer, the Kerr nonlinear case can not only
enhance the phase sensitivity and quantum Fisher information, but also
significantly suppress the photon losses. We also observe that at the same
accessible parameters, internal losses have a greater influence on the phase
sensitivity than the external ones. Interestingly, our scheme shows a clear
advantage of low-cost input resources in obtaining higher phase sensitivity and
larger quantum Fisher information, due to the introduction of the nonlinear
phase element.
|
Deep generative models of 3D shapes have received a great deal of research
interest. Yet, almost all of them generate discrete shape representations, such
as voxels, point clouds, and polygon meshes. We present the first 3D generative
model for a drastically different shape representation --- describing a shape
as a sequence of computer-aided design (CAD) operations. Unlike meshes and
point clouds, CAD models encode the user creation process of 3D shapes, widely
used in numerous industrial and engineering design tasks. However, the
sequential and irregular structure of CAD operations poses significant
challenges for existing 3D generative models. Drawing an analogy between CAD
operations and natural language, we propose a CAD generative network based on
the Transformer. We demonstrate the performance of our model for both shape
autoencoding and random shape generation. To train our network, we create a new
CAD dataset consisting of 178,238 models and their CAD construction sequences.
We have made this dataset publicly available to promote future research on this
topic.
|
Scientific and engineering problems often require the use of artificial
intelligence to aid understanding and the search for promising designs. While
Gaussian processes (GP) stand out as easy-to-use and interpretable learners,
they have difficulties in accommodating big datasets, categorical inputs, and
multiple responses, which has become a common challenge for a growing number of
data-driven design applications. In this paper, we propose a GP model that
utilizes latent variables and functions obtained through variational inference
to address the aforementioned challenges simultaneously. The method is built
upon the latent variable Gaussian process (LVGP) model where categorical
factors are mapped into a continuous latent space to enable GP modeling of
mixed-variable datasets. By extending variational inference to LVGP models, the
large training dataset is replaced by a small set of inducing points to address
the scalability issue. Output response vectors are represented by a linear
combination of independent latent functions, forming a flexible kernel
structure to handle multiple responses that might have distinct behaviors.
Comparative studies demonstrate that the proposed method scales well for large
datasets with over 10^4 data points, while outperforming state-of-the-art
machine learning methods without requiring much hyperparameter tuning. In
addition, an interpretable latent space is obtained to draw insights into the
effect of categorical factors, such as those associated with building blocks of
architectures and element choices in metamaterial and materials design. Our
approach is demonstrated for machine learning of ternary oxide materials and
topology optimization of a multiscale compliant mechanism with aperiodic
microstructures and multiple materials.
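
To convey why a small set of inducing points tames the cost of exact GP
regression, here is a minimal Python sketch of a classical subset-of-regressors
(Nystrom-style) approximation; it is an illustration only, not the variational
LVGP of the paper, and the kernel, data and inducing-point locations are
assumptions.

import numpy as np

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel matrix between row-vector inputs A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def sor_predict(X, y, Z, Xs, noise=1e-2, ls=1.0):
    # Subset-of-regressors predictive mean with m inducing points Z:
    # cost is O(n m^2) instead of the O(n^3) of exact GP regression.
    Kmn = rbf(Z, X, ls)                   # m x n
    Kmm = rbf(Z, Z, ls)                   # m x m
    A = Kmn @ Kmn.T + noise * Kmm         # m x m system
    w = np.linalg.solve(A, Kmn @ y)
    return rbf(Xs, Z, ls) @ w             # predictive mean at test inputs Xs

# Illustrative 1D regression with 2000 points and only 20 inducing points.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=2000)
Z = np.linspace(-3, 3, 20)[:, None]
Xs = np.linspace(-3, 3, 5)[:, None]
print(sor_predict(X, y, Z, Xs))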
|
The research presents an overhead view of 10 important objects and follows
the general formatting requirements of the most popular machine learning task:
digit recognition with MNIST. This dataset offers a public benchmark extracted
from over a million human-labelled and curated examples. The work outlines the
key multi-class object identification task while matching with prior work in
handwriting, cancer detection, and retail datasets. A prototype deep learning
approach with transfer learning and convolutional neural networks (MobileNetV2)
correctly identifies the ten overhead classes with an average accuracy of
96.7%. This model exceeds the peak human performance of 93.9%. For upgrading
satellite imagery and object recognition, this new dataset benefits diverse
endeavors such as disaster relief, land use management, and other traditional
remote sensing tasks. The work extends satellite benchmarks with new
capabilities to identify efficient and compact algorithms that might work
on-board small satellites, a practical task for future multi-sensor
constellations. The dataset is available on Kaggle and Github.
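
For readers who want a feel for the kind of prototype described, a hedged Keras
sketch of MobileNetV2 transfer learning is given below; the input size,
preprocessing and hyperparameters are our assumptions (the MNIST-formatted
28x28 grayscale images would first need to be resized and stacked to three
channels), not the authors' configuration.

import tensorflow as tf

# Pretrained MobileNetV2 backbone (frozen) plus a small head for 10 overhead classes.
# Assumes images have already been resized to 96x96 RGB (an assumption of this sketch).
base = tf.keras.applications.MobileNetV2(input_shape=(96, 96, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds are hypothetical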
|
Conformal Predictors (CP) are wrappers around ML models, providing error
guarantees under weak assumptions on the data distribution. They are suitable
for a wide range of problems, from classification and regression to anomaly
detection. Unfortunately, their very high computational complexity limits their
applicability to large datasets. In this work, we show that it is possible to
speed up a CP classifier considerably, by studying it in conjunction with the
underlying ML method, and by exploiting incremental and decremental learning. For
methods such as k-NN, KDE, and kernel LS-SVM, our approach reduces the running
time by one order of magnitude, whilst producing exact solutions. With similar
ideas, we also achieve a linear speed up for the harder case of bootstrapping.
Finally, we extend these techniques to improve upon an optimization of k-NN CP
for regression. We evaluate our findings empirically, and discuss when methods
are suitable for CP optimization.
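
As an illustration of where the cost comes from, below is a naive O(n^2) Python
sketch of a k-NN conformal classifier: for every candidate label the
nonconformity scores of the whole augmented training set are recomputed, which
is exactly the work that incremental and decremental updates avoid. The score
and names are illustrative, not the paper's implementation.

import numpy as np

def knn_nonconformity(X, y, i, k=3):
    # Nonconformity of example i: mean distance to its k nearest
    # same-label neighbours (larger = less conforming).
    same = np.where(y == y[i])[0]
    same = same[same != i]
    if same.size == 0:
        return np.inf
    d = np.linalg.norm(X[same] - X[i], axis=1)
    return np.sort(d)[:k].mean()

def conformal_p_values(X_train, y_train, x_new, labels, k=3):
    # For each candidate label, tentatively add (x_new, label), recompute all
    # nonconformity scores and return the p-value of the new example.
    p = {}
    for lab in labels:
        X = np.vstack([X_train, x_new])
        y = np.append(y_train, lab)
        scores = np.array([knn_nonconformity(X, y, i, k) for i in range(len(y))])
        p[lab] = np.mean(scores >= scores[-1])
    return p

A label is kept in the prediction set whenever its p-value exceeds the chosen
significance level.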
|
The quantum approximate optimization algorithm (QAOA) is a variational method
for noisy, intermediate-scale quantum computers to solve combinatorial
optimization problems. Quantifying performance bounds with respect to specific
problem instances provides insight into when QAOA may be viable for solving
real-world applications. Here, we solve every instance of MaxCut on
non-isomorphic unweighted graphs with nine or fewer vertices by numerically
simulating the pure-state dynamics of QAOA. Testing up to three layers of QAOA
depth, we find that distributions of the approximation ratio narrow with
increasing depth while the probability of recovering the maximum cut generally
broadens. We find QAOA exceeds the Goemans-Williamson approximation ratio bound
for most graphs. We also identify consistent patterns within the ensemble of
optimized variational circuit parameters that offer highly efficient heuristics
for solving MaxCut with QAOA. The resulting data set is presented as a
benchmark for establishing empirical bounds on QAOA performance that may be
used to test on-going experimental realizations.
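
To make the approximation ratio concrete, a small illustrative Python sketch
follows; it brute-forces the maximum cut of a toy graph and scores an arbitrary
distribution over bitstrings (for instance, QAOA measurement outcomes). It is
not the paper's pure-state simulator, and the example graph and distribution
are assumptions.

import itertools

def cut_value(edges, bits):
    # Number of edges crossing the cut defined by a 0/1 assignment.
    return sum(1 for u, v in edges if bits[u] != bits[v])

def max_cut(n, edges):
    # Brute-force maximum cut (feasible only for small n, as in the 9-vertex study).
    return max(cut_value(edges, b) for b in itertools.product([0, 1], repeat=n))

def approximation_ratio(n, edges, probs):
    # Expected cut value under a distribution over bitstrings, divided by the max cut.
    expected = sum(p * cut_value(edges, b) for b, p in probs.items())
    return expected / max_cut(n, edges)

# Toy usage: a 5-cycle under uniform sampling; the expected cut is |E|/2 = 2.5,
# the maximum cut is 4, so the ratio is 0.625.
n, edges = 5, [(i, (i + 1) % 5) for i in range(5)]
uniform = {b: 1 / 2**n for b in itertools.product([0, 1], repeat=n)}
print(approximation_ratio(n, edges, uniform))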
|
The proof of origin of logs is becoming increasingly important. In the
context of Industry 4.0 and to combat illegal logging there is an increasing
motivation to track each individual log. Our previous works in this field
focused on log tracking using digital log end images based on methods inspired
by fingerprint and iris-recognition. This work presents a convolutional neural
network (CNN) based approach which comprises a CNN-based segmentation of the
log end combined with a final CNN-based recognition of the segmented log end
using the triplet loss function for CNN training. Results show that the
proposed two-stage CNN-based approach outperforms traditional approaches.
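
For context, the triplet loss used for the recognition stage has the standard
form below; this is a generic PyTorch sketch with our own margin choice, not
the authors' training code (PyTorch also ships it as torch.nn.TripletMarginLoss).

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # anchor/positive: embeddings of images of the same log end;
    # negative: embedding of a different log end. The hinge pushes
    # negatives at least `margin` further away than positives.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()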
|
Sentiment prediction remains a challenging and unresolved task in various
research fields, including psychology, neuroscience, and computer science. This
stems from its high degree of subjectivity and limited input sources that can
effectively capture the actual sentiment. This can be even more challenging
with only text-based input. Meanwhile, the rise of deep learning and an
unprecedented large volume of data have paved the way for artificial
intelligence to perform impressively accurate predictions or even human-level
reasoning. Drawing inspiration from this, we propose a coverage-based sentiment
and subsentence extraction system that estimates a span of input text and
recursively feeds this information back to the networks. The predicted
subsentence consists of auxiliary information expressing a sentiment. This is
an important building block for enabling vivid and epic sentiment delivery
(within the scope of this paper) and for other natural language processing
tasks such as text summarisation and Q&A. Our approach outperforms the
state-of-the-art approaches by a large margin in subsentence prediction (i.e.,
Average Jaccard scores from 0.72 to 0.89). For the evaluation, we designed
rigorous experiments consisting of 24 ablation studies. Finally, we return our
lessons learned to the community by sharing software packages and a public
dataset that can reproduce the results presented in this paper.
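
For reference, the Average Jaccard metric quoted above is typically computed
token-wise between the predicted and the gold subsentence; a minimal Python
sketch under that assumption:

def jaccard(pred: str, gold: str) -> float:
    # Token-level Jaccard similarity between two text spans.
    a, b = set(pred.lower().split()), set(gold.lower().split())
    if not a and not b:
        return 1.0  # both empty: treat as a perfect match
    return len(a & b) / len(a | b)

# e.g. jaccard("really epic delivery", "epic delivery") == 2/3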
|
Nano-graphene/polymer composites can function as pressure-induced
electro-switches at concentrations around their conductivity percolation
threshold. Close to the critical point, the pressure dependence of the electron
tunneling through the polymer barrier separating nano-graphenes results from
the competition between the shortening of the tunneling length and the increase
of the polymer's polarizability. Such switching behavior was recently observed
in polyvinyl alcohol (PVA) loaded with nano-graphene platelets (NGPs). In this
work, PVA is blended with alpha-poly(vinylidene fluoride) (PVdF) and NGPs.
Coaxial mechanical stress and electric field render the nano-composite
piezoelectric. We investigate the influence of heterogeneity, thermal
properties, phase transitions and kinetic processes occurring in the polymer
matrix on the macroscopic electrical conductivity and interfacial polarization
in cast specimens. Furthermore, the effect of the electro-activity of PVdF
grains on the electric and thermal properties is comparatively studied.
Broadband Dielectric Spectroscopy (BDS) is employed to resolve and inspect
electron transport and trapping with respect to thermal transitions and kinetic
processes traced via Differential Scanning Calorimetry. The harmonic electric
field applied during a BDS sweep induces volume modifications of the
electro-active PVdF grains, while the electro-activity of the PVdF grains can
disturb the internal electric field experienced by free (or bound) electric
charges. The dc conductivity and dielectric relaxation were found to exhibit
weak dependencies.
|
Social recommendation is effective in improving the recommendation
performance by leveraging social relations from online social networking
platforms. Social relations among users provide friends' information for
modeling users' interest in candidate items and help items expose to potential
consumers (i.e., item attraction). However, two issues have not been
well studied: Firstly, for the user interests, existing methods typically
aggregate friends' information contextualized on the candidate item only, and
this shallow context-aware aggregation makes them suffer from the limited
friends' information. Secondly, for the item attraction, if the item's past
consumers are the friends of or have a similar consumption habit to the
targeted user, the item may be more attractive to the targeted user, but most
existing methods neglect the relation enhanced context-aware item attraction.
To address the above issues, we propose DICER (Dual Side Deep Context-aware
Modulation for Social Recommendation). Specifically, we first propose a novel
graph neural network to model the social relation and collaborative relation,
and on top of high-order relations, a dual side deep context-aware modulation
is introduced to capture the friends' information and item attraction.
Empirical results on two real-world datasets show the effectiveness of the
proposed model and further experiments are conducted to help understand how the
dual context-aware modulation works.
|
Energy management is a critical aspect of risk assessment for Uncrewed Aerial
Vehicle (UAV) flights, as a depleted battery during a flight brings almost
guaranteed vehicle damage and a high risk of human injuries or property damage.
Predicting the amount of energy a flight will consume is challenging as
routing, weather, obstacles, and other factors affect the overall consumption.
We develop a deep energy model for a UAV that uses Temporal Convolutional
Networks to capture the time varying features while incorporating static
contextual information. Our energy model is trained on a real world dataset and
does not require segregating flights into regimes. We illustrate an improvement
in power predictions by $29\%$ on test flights when compared to a
state-of-the-art analytical method. Using the energy model, we can predict the
energy usage for a given trajectory and evaluate the risk of running out of
battery during flight. We propose using Conditional Value-at-Risk (CVaR) as a
metric for quantifying this risk. We show that CVaR captures the risk
associated with worst-case energy consumption on a nominal path by transforming
the output distribution of Monte Carlo forward simulations into a risk space.
Computing the CVaR on the risk-space distribution provides a metric that can
evaluate the overall risk of a flight before take-off. Our energy model and
risk evaluation method can improve flight safety and evaluate the coverage area
from a proposed takeoff location.
The video and codebase are available at https://youtu.be/PHXGigqilOA and
https://git.io/cvar-risk .
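
As a minimal illustration of the risk metric, the sketch below computes CVaR
from Monte Carlo energy samples; the confidence level, units and sample
distribution are assumptions, and the paper's transformation of the output
distribution into a risk space is not reproduced here.

import numpy as np

def cvar(samples, alpha=0.95):
    # Conditional Value-at-Risk: mean of the worst (1 - alpha) fraction of samples.
    samples = np.sort(np.asarray(samples))
    var = np.quantile(samples, alpha)      # Value-at-Risk threshold
    return samples[samples >= var].mean()

# Hypothetical flight energies (Wh) from Monte Carlo forward simulations of a nominal path.
rng = np.random.default_rng(0)
energy_wh = rng.normal(80.0, 6.0, size=10_000)
risk = cvar(energy_wh, alpha=0.95)          # expected energy in the worst 5% of runs
# Compare `risk` against the usable battery capacity before take-off.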
|
Localizing and counting large ungulates -- hoofed mammals like cows and elk
-- in very high-resolution satellite imagery is an important task for
supporting ecological studies. Prior work has shown that this is feasible with
deep learning based methods and sub-meter multi-spectral satellite imagery. We
extend this line of work by proposing a baseline method, CowNet, that
simultaneously estimates the number of animals in an image (counts), as well as
predicts their location at a pixel level (localizes). We also propose a
methodology for evaluating such models on counting and localization tasks
across large scenes that takes the uncertainty of noisy labels and the
information needed by stakeholders in ecological monitoring tasks into account.
Finally, we benchmark our baseline method with state of the art vision methods
for counting objects in scenes. We specifically test the temporal
generalization of the resulting models over a large landscape in Point Reyes
Seashore, CA. We find that the LC-FCN model performs the best and achieves an
average precision between 0.56 and 0.61 and an average recall between 0.78 and
0.92 over three held out test scenes.
|
Various processes in academic organizations include the decision points where
selecting people through their assessment and ranking is performed, and the
impact of wrong or right choices can be very high. How do we simultaneously
ensure that these selection decisions are well balanced, fair, and unbiased by
satisfying the key stakeholders' wishes? How much and what kinds of evidence
must be used to make them? How can both the evidence and the procedures be made
transparent and unambiguous for everyone? In this paper, we suggest a set of
so-called deep evidence-based analytics, which is applied on top of the
collective awareness platform (a portal for managing higher education processes).
The deep evidence, in addition to the facts about the individual academic
achievements of personnel, includes the evidence about individual rewards.
However, what is more important is that such evidence also includes explicit
individual value systems (formalized personal preferences in the
self-assessment of both achievements and the rewards). We provide formalized
procedures that can be used to drive the academic assessment and selection
processes within universities based on advanced (deep) evidence and with
different balances of decision power between administrations and personnel. We
also show how this analytics enables computational evidence for some abstract
properties of an academic organization related to its organizational culture,
such as organizational democracy, justice, and work passion. We present the
analytics together with examples of its actual use within Ukrainian higher
education at the Trust portal.
|
Analyticity and crossing properties of the four-point function are investigated
in conformal field theories in the frameworks of Wightman axioms. A Hermitian
scalar conformal field, satisfying the Wightman axioms, is considered. The
crucial role of microcausality in deriving analyticity domains is discussed and
domains of analyticity are presented. A pair of permuted Wightman functions are
envisaged. The crossing property is derived by appealing to the technique of
analytic completion for the pair of permuted Wightman functions. The operator
product expansion of a pair of scalar fields is studied and analyticity
property of the matrix elements of composite fields, appearing in the operator
product expansion, is investigated. An integral representation is presented for
the commutator of composite fields where microcausality is a key ingredient.
Three fundamental theorems of axiomatic local field theory, namely the PCT
theorem, the theorem proving the equivalence between the PCT theorem and weak
local commutativity, and the edge-of-the-wedge theorem, are invoked to derive a
conformal bootstrap equation rigorously.
|
Bohm and Bell's approaches to the foundations of quantum mechanics share
notable features with the contemporary Primitive Ontology perspective and
Esfeld and Deckert's minimalist ontology. For instance, all these programs
consider ontological clarity a necessary condition to be met by every
theoretical framework, promote scientific realism also in the quantum domain
and strengthen the explanatory power of quantum theory. However, these
approaches remarkably diverge from one another, since they employ different
metaphysical principles leading to conflicting Weltanschauungen. The principal
aim of this essay is to spell out the relations as well as the main differences
existing among such programs, which unfortunately often remain unnoticed in the
literature. Indeed, it is not uncommon to see Bell's views conflated with the
PO programme, and the latter with Esfeld and Deckert's proposal. It will be our
task to clear up this confusion.
|
The recent growth of web video sharing platforms has increased the demand for
systems that can efficiently browse, retrieve and summarize video content.
Query-aware multi-video summarization is a promising technique that caters to
this demand. In this work, we introduce a novel Query-Aware Hierarchical
Pointer Network for Multi-Video Summarization, termed DeepQAMVS, that jointly
optimizes multiple criteria: (1) conciseness, (2) representativeness of
important query-relevant events and (3) chronological soundness. We design a
hierarchical attention model that factorizes over three distributions, each
collecting evidence from a different modality, followed by a pointer network
that selects frames to include in the summary. DeepQAMVS is trained with
reinforcement learning, incorporating rewards that capture representativeness,
diversity, query-adaptability and temporal coherence. We achieve
state-of-the-art results on the MVS1K dataset, with inference time scaling
linearly with the number of input video frames.
|
Written language contains stylistic cues that can be exploited to
automatically infer a variety of potentially sensitive author information.
Adversarial stylometry intends to attack such models by rewriting an author's
text. Our research proposes several components to facilitate deployment of
these adversarial attacks in the wild, where neither data nor target models are
accessible. We introduce a transformer-based extension of a lexical replacement
attack, and show it achieves high transferability when trained on a weakly
labeled corpus -- decreasing target model performance below chance. While not
completely inconspicuous, our more successful attacks also prove notably less
detectable by humans. Our framework therefore provides a promising direction
for future privacy-preserving adversarial attacks.
|
This paper details the Leibniz generalization of Lie-theoretic results from
Peggy Batten's 1993 dissertation. We first show that the multiplier of a
Leibniz algebra is characterized by its second cohomology group with
coefficients in the field. We then establish criteria for when the center of a
cover maps onto the center of the algebra. Along the way, we obtain a
collection of exact sequences and a brief theory of unicentral Leibniz
algebras.
|
Generalized correlation analysis (GCA) is concerned with uncovering linear
relationships across multiple datasets. It generalizes canonical correlation
analysis that is designed for two datasets. We study sparse GCA when there are
potentially multiple generalized correlation tuples in data and the loading
matrix has a small number of nonzero rows. It includes sparse CCA and sparse
PCA of correlation matrices as special cases. We first formulate sparse GCA as
generalized eigenvalue problems at both population and sample levels via a
careful choice of normalization constraints. Based on a Lagrangian form of the
sample optimization problem, we propose a thresholded gradient descent
algorithm for estimating GCA loading vectors and matrices in high dimensions.
We derive tight estimation error bounds for estimators generated by the
algorithm with proper initialization. We also demonstrate the prowess of the
algorithm on a number of synthetic datasets.
|
We present a detailed study of the Bose-Hubbard model in a $p$-band
triangular lattice by focusing on the evolution of orbital order across the
superfluid-Mott insulator transition. Two distinct phases are found in the
superfluid regime. One of these phases adiabatically connects to the weakly
interacting limit. This phase is characterized by the intertwining of axial
$p_\pm=p_x \pm ip_y$ and in-plane $p_\theta=\cos\theta p_x+\sin\theta p_y$
orbital orders, which break the time-reversal symmetry and lattice symmetries
simultaneously. In addition, the calculated Bogoliubov excitation spectrum gaps
the original Dirac points in the single-particle spectrum but exhibits emergent
Dirac points. The other superfluid phase in close proximity to the Mott
insulator with unit boson filling shows a detwined in-plane ferro-orbital
order. Finally, an orbital exchange model is constructed for the Mott insulator
phase. Its classical ground state has an emergent SO$(2)$ rotational symmetry
in the in-plane orbital space and therefore enjoys an infinite degeneracy,
which is ultimately lifted by the orbital fluctuation via the order by disorder
mechanism. Our systematic analysis suggests that the in-plane ferro-orbital
order in the Mott insulator phase agrees with and likely evolves from the
latter superfluid phase.
|
Recent studies have shown that the majority of published computational models
in systems biology and physiology are not repeatable or reproducible. There are
a variety of reasons for this. One of the most likely is that, given how busy
modern researchers are and the fact that no credit is given to authors for
publishing repeatable work, this outcome is all but inevitable. The situation
can only be rectified when government agencies, universities and other research
institutions change their policies and journals begin to insist that published
work is in fact at least repeatable, if not reproducible. In this
chapter guidelines are described that can be used by researchers to help make
sure their work is repeatable. A scoring system is suggested that authors can
use to determine how well they are doing.
|
We study the effect of torsional deformations on the electronic properties of
single-walled transition metal dichalcogenide (TMD) nanotubes. In particular,
considering forty-five select armchair and zigzag TMD nanotubes, we perform
symmetry-adapted Kohn-Sham density functional theory calculations to determine
the variation in bandgap and effective mass of charge carriers with twist. We
find that metallic nanotubes remain so even after deformation, whereas
semiconducting nanotubes experience a decrease in bandgap with twist --
originally direct bandgaps become indirect -- resulting in semiconductor to
metal transitions. In addition, the effective mass of holes and electrons
continuously decrease and increase with twist, respectively, resulting in
n-type to p-type semiconductor transitions. We find that this behavior is
likely due to rehybridization of orbitals in the metal and chalcogen atoms,
rather than charge transfer between them. Overall, torsional deformations
represent a powerful avenue to engineer the electronic properties of
semiconducting TMD nanotubes, with applications to devices like sensors and
semiconductor switches.
|
Our all-electron (DFBG) calculations show differences between relativistic
and non-relativistic calculations for the structure of the isomers of Og(CO)6.
|
We discover the formation of a temporal boundary soliton (TBS) in the close
proximity of a temporal boundary, moving in a nonlinear optical medium, upon
high-intensity pulse collision with the boundary. We show that the TBS
excitation causes giant intensity fluctuations in reflection (transmission)
from (through) the temporal boundary even for very modest input pulse intensity
fluctuations. We advance a statistical theory of the phenomenon and show that
the TBS emerges as an extremely rare event in a nonintegrable nonlinear system,
heralded by colossal intensity fluctuations with unprecedented magnitudes of
the normalized intensity autocorrelation function of the reflected/transmitted
pulse ensemble.
|
We introduce Automorphism-based graph neural networks (Autobahn), a new
family of graph neural networks. In an Autobahn, we decompose the graph into a
collection of subgraphs and apply local convolutions that are equivariant to
each subgraph's automorphism group. Specific choices of local neighborhoods and
subgraphs recover existing architectures such as message passing neural
networks. Our formalism also encompasses novel architectures: as an example, we
introduce a graph neural network that decomposes the graph into paths and
cycles. The resulting convolutions reflect the natural way that parts of the
graph can transform, preserving the intuitive meaning of convolution without
sacrificing global permutation equivariance. We validate our approach by
applying Autobahn to molecular graphs, where it achieves state-of-the-art
results.
|
This paper discusses prime numbers that are (resp. are not) congruent
numbers. In particular, the only case not fully covered by earlier results,
namely primes of the form $p=8k+1$, receives attention.
|
We describe methods for simulating general second-quantized Hamiltonians
using the compact encoding, in which qubit states encode only the occupied
modes in physical occupation number basis states. These methods apply to
second-quantized Hamiltonians composed of a constant number of interactions,
i.e., linear combinations of ladder operator monomials of fixed form. Compact
encoding leads to qubit requirements that are optimal up to logarithmic
factors. We show how to use sparse Hamiltonian simulation methods for
second-quantized Hamiltonians in compact encoding, give explicit
implementations for the required oracles, and analyze the methods. We also
describe several example applications including the free boson and fermion
theories, the $\phi^4$-theory, and the massive Yukawa model, all in both
equal-time and light-front quantization. Our methods provide a general-purpose
tool for simulating second-quantized Hamiltonians, with optimal or near-optimal
scaling with error and model parameters.
|
The neutron drop is investigated for the first time in an axially symmetric harmonic
oscillator (ASHO) field, whose potential strengths of different directions can
be controlled artificially. The shape of the neutron drop will change from
spherical to oblate or prolate according to the anisotropy of the external
field. With the potential strength increasing in the axial direction, the
neutron prefers to occupy the orbital perpendicular to the symmetry axis.
Conversely, the neutron tends to stay in the orbital parallel to the symmetry
axis when the potential strength increases in the radial direction. Moreover,
when the potential strength in one direction vanishes, the neutron drop can no
longer bind together. These investigations are not only helpful for simulating
the properties of neutrons in finite nuclei, but also provide theoretical
predictions for future artificial manipulations of nuclei, analogous to
ultracold atom systems, toward a deeper understanding of quantum many-body systems.
|
Derived from a biophysical model for the motion of a crawling cell, the
system \[(*)~\begin{cases}u_t=\Delta u-\nabla\cdot(u\nabla v)\\0=\Delta
v-kv+u\end{cases}\] is investigated in a finite domain
$\Omega\subset\mathbb{R}^n$, $n\geq2$, with $k\geq0$. While a comprehensive
literature is available for cases with $(*)$ describing chemotaxis systems and
hence being accompanied by homogeneous Neumann-type boundary conditions, the
presently considered modeling context, besides yet requiring the flux
$\partial_\nu u-u\partial_\nu v$ to vanish on $\partial\Omega$, inherently
involves homogeneous Dirichlet conditions for the attractant $v$, which in the
current setting corresponds to the cell's cytoskeleton being free of pressure
at the boundary.
This modification in the boundary setting is shown to go along with a
substantial change with respect to the potential to support the emergence of
singular structures: It is, inter alia, revealed that in contexts of radial
solutions in balls there exist two critical mass levels, distinct from each
other whenever $k>0$ or $n\geq3$, that separate ranges within which (i) all
solutions are global in time and remain bounded, (ii) both global bounded and
exploding solutions exist, or (iii) all nontrivial solutions blow up in finite
time. While critical mass phenomena distinguishing between regimes of type (i)
and (ii) belong to the well-understood characteristics of $(*)$ when posed
under classical no-flux boundary conditions in planar domains, the discovery of
a distinct secondary critical mass level related to the occurrence of (iii)
seems to have no nearby precedent.
In the planar case with the domain being a disk, the analytical results are
supplemented with some numerical illustrations, and it is discussed how the
findings can be interpreted biophysically for the situation of a cell on a flat
substrate.
|
Our knowledge of white dwarf planetary systems predominantly arises from the
region within a few Solar radii of the white dwarfs, where minor planets break
up, form rings and discs, and accrete onto the star. The entry location, angle
and speed into this Roche sphere have rarely been explored but crucially
determines the initial geometry of the debris, accretion rates onto the
photosphere, and ultimately the composition of the minor planet. Here we evolve
a total of over 10^5 asteroids with single-planet N-body simulations across the
giant branch and white dwarf stellar evolution phases to quantify the geometry
of asteroid injection into the white dwarf Roche sphere as a function of
planetary mass and eccentricity. We find that lower planetary masses increase
the extent of anisotropic injection and decrease the probability of head-on
(normal to the Roche sphere) encounters. Our results suggest that one can use
dynamical activity within the Roche sphere to make inferences about the hidden
architectures of these planetary systems.
|
We report the observation of discrete vortex bound states with the energy
levels deviating from the widely believed ratio of 1:3:5 in the vortices of an
iron-based superconductor KCa2Fe4As4F2 through scanning tunneling microscopy
(STM). Meanwhile Friedel oscillations of vortex bound states are also observed
for the first time in related vortices. By doing self-consistent calculations
of the Bogoliubov-de Gennes equations, we find that in the extreme quantum
limit, the superconducting order parameter exhibits a Friedel-like oscillation,
which modifies the energy levels of the vortex bound states and explains why
they deviate from the ratio of 1:3:5. The observed Friedel oscillations of the
bound states can also be roughly interpreted by the theoretical calculations;
however, some features at high energies could not be explained. We attribute
this discrepancy to the high energy bound states with the influence of nearby
impurities. Our combined STM measurement and the self-consistent calculations
illustrate a generalized feature of the vortex bound states in type-II
superconductors.
|
The removal of organic micropollutants (OMPs) has been investigated in
constructed wetlands (CWs) operated as bioelectrochemical systems (BES). The
operation of CWs as BES (CW-BES), either in the form of microbial fuel cells
(MFC) or microbial electrolysis cells (MEC), has only been investigated in
recent years. The presented experiment used CW meso-scale systems applying a
realistic horizontal flow regime and continuous feeding of real urban
wastewater spiked with four OMPs (pharmaceuticals), namely carbamazepine (CBZ),
diclofenac (DCF), ibuprofen (IBU) and naproxen (NPX). The study evaluated the
removal efficiency of conventional CW systems (CW-control) as well as CW
systems operated as closed-circuit MFCs (CW-MFCs) and MECs (CW-MECs). Although
a few positive trends were identified for the CW-BES compared to the CW-control
(higher average CBZ, DCF and NPX removal by 10-17% in CW-MEC and 5% in CW-MFC),
these differences were not statistically significant. Mesoscale
experiments with real wastewater could thus not confirm earlier positive
effects of CW-BES found under strictly controlled laboratory conditions with
synthetic wastewaters.
|
We have studied the effect of nitriding on the humidity sensing properties of
hydrogenated amorphous carbon (a-C:H) films. The films were prepared in two
stages, combining the techniques of physical vapor deposition by evaporation
(PAPVD) and pulsed plasma nitriding. By deconvolution of the Raman
spectrum we identified two peaks corresponding to the D and G modes
characteristic of a-C:H. After the N$_2$-H$_2$ plasma treating, the peaks
narrowed and shifted to the right, which we associated with the incorporation
of N into the structure. We compared the sensitivity to the relative humidity
(RH) of the films before and after the N$_2$-H$_2$ plasma treatment. The
nitriding improved the humidity sensitivity measured as the low frequency
resistance.
By impedance spectroscopy we studied the frequency dependence of the AC
conductivity $\sigma$ at different RH conditions. Before nitriding,
$\sigma(\omega)\sim A \omega^s$ seemed to follow the universal behaviour seen
in other amorphous systems. The humidity changed the overall scale $A$. After
nitriding, the exponent $s$ increased and became RH dependent. We associate
this behaviour with the change in the interaction mechanism between the water
molecules and the substrate once the samples were nitrided.
|
Stealing attack against controlled information, along with the increasing
number of information leakage incidents, has become an emerging cyber security
threat in recent years. Due to the booming development and deployment of
advanced analytics solutions, novel stealing attacks utilize machine learning
(ML) algorithms to achieve high success rates and cause severe damage.
Detecting and defending against such attacks is challenging and urgent, so
governments, organizations, and individuals should attach great importance to
ML-based stealing attacks. This survey presents the recent advances in this
new type of attack and corresponding countermeasures. The ML-based stealing
attack is reviewed in perspectives of three categories of targeted controlled
information, including controlled user activities, controlled ML model-related
information, and controlled authentication information. Recent publications are
summarized to generalize an overarching attack methodology and to derive the
limitations and future directions of ML-based stealing attacks. Furthermore,
countermeasures are proposed towards developing effective protections from
three aspects -- detection, disruption, and isolation.
|
Small-scale Mixed-Integer Quadratic Programming (MIQP) problems often arise
in embedded control and estimation applications. Driven by the need for
algorithmic simplicity to target computing platforms with limited memory and
computing resources, this paper proposes a few approaches to solving MIQPs,
either to optimality or suboptimally. We specialize an existing Accelerated
Dual Gradient Projection (GPAD) algorithm to effectively solve the Quadratic
Programming (QP) relaxations that arise during Branch and Bound (B&B) and
propose a generic framework to warm-start the binary variables which reduces
the number of QP relaxations. Moreover, in order to find an integer feasible
combination of the binary variables upfront, two heuristic approaches are
presented: ($i$) without using B&B, and ($ii$) using B&B with a significantly
reduced number of QP relaxations. Both heuristic approaches return an integer
feasible solution that may be suboptimal but involve a much reduced computation
effort. Such a feasible solution can be either implemented directly or used to
set an initial upper bound on the optimal cost in B&B. Through different hybrid
control and estimation examples involving binary decision variables, we show
that the performance of the proposed methods, although very simple to code, is
comparable to that of state-of-the-art MIQP solvers.
|
We propose a new model for networks of time series that influence each other.
Graph structures among time series are found in diverse domains, such as web
traffic influenced by hyperlinks, product sales influenced by recommendation,
or urban transport volume influenced by road networks and weather. There has
been recent progress in graph modeling and in time series forecasting,
respectively, but an expressive and scalable approach for a network of series
does not yet exist. We introduce Radflow, a novel model that embodies three key
ideas: a recurrent neural network to obtain node embeddings that depend on
time, the aggregation of the flow of influence from neighboring nodes with
multi-head attention, and the multi-layer decomposition of time series. Radflow
naturally takes into account dynamic networks where nodes and edges change over
time, and it can be used for prediction and data imputation tasks. On
real-world datasets ranging from a few hundred to a few hundred thousand nodes,
we observe that Radflow variants are the best performing model across a wide
range of settings. The recurrent component in Radflow also outperforms N-BEATS,
the state-of-the-art time series model. We show that Radflow can learn
different trends and seasonal patterns, that it is robust to missing nodes and
edges, and that correlated temporal patterns among network neighbors reflect
influence strength. We curate WikiTraffic, the largest dynamic network of time
series with 366K nodes and 22M time-dependent links spanning five years. This
dataset provides an open benchmark for developing models in this area, with
applications that include optimizing resources for the web. More broadly,
Radflow has the potential to improve forecasts in correlated time series
networks such as the stock market, and impute missing measurements in
geographically dispersed networks of natural phenomena.
|
We need intelligent robots for mobile construction, the process of navigating
in an environment and modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to exactly
achieve the design without GPS, due to the difficulty caused by the
bi-directional coupling of accurate robot localization and navigation together
with strategic environment manipulation. However, many existing robot vision
and learning tasks such as visual navigation and robot manipulation address
only one of these two coupled aspects. To stimulate the pursuit of a generic
and adaptive solution, we reasonably simplify mobile construction as a
partially observable Markov decision process (POMDP) in 1/2/3D grid worlds and
benchmark the performance of a handcrafted policy with basic localization and
planning, and state-of-the-art deep reinforcement learning (RL) methods. Our
extensive experiments show that the coupling makes this problem very
challenging for those methods, and emphasize the need for novel task-specific
solutions.
|
Analogous to the case of the binomial random graph $G(d+1,p)$, it is known
that the behaviour of a random subgraph of a $d$-dimensional hypercube, where
we include each edge independently with probability $p$, which we denote by
$Q^d_p$, undergoes a phase transition around the critical value of
$p=\frac{1}{d}$. More precisely, standard arguments show that significantly
below this value of $p$, with probability tending to one as $d \to \infty$ (whp
for short) all components of this graph have order $O(d)$, whereas Ajtai,
Koml\'{o}s and Szemer\'{e}di showed that significantly above this value, in the
\emph{supercritical regime}, whp there is a unique `giant' component of order
$\Theta\left(2^d\right)$. In $G(d+1,p)$ much more is known about the complex
structure of the random graph which emerges in this supercritical regime. For
example, it is known that in this regime whp $G(d+1,p)$ contains paths and
cycles of length $\Omega(d)$, as well as complete minors of order
$\Omega\left(\sqrt{d}\right)$. In this paper we obtain analogous results in
$Q^d_p$. In particular, we show that for supercritical $p$, i.e., when
$p=\frac{1+\epsilon}{d}$ for a positive constant $\epsilon$, whp $Q^d_p$
contains a cycle of length $\Omega\left(\frac{2^d}{d^3(\log d)^3} \right)$ and
a complete minor of order $\Omega\left(\frac{2^{\frac{d}{2}}}{d^3(\log d)^3
}\right)$. In order to prove these results, we show that whp the largest
component of $Q^d_p$ has good edge-expansion properties, a result of
independent interest. We also consider the genus of $Q^d_p$ and show that, in
this regime of $p$, whp the genus is $\Omega\left(2^d\right)$.
|
We develop the concept of quasi-phasematching (QPM) by implementing it in the
recently proposed Josephson traveling-wave parametric amplifier (JTWPA) with
three-wave mixing (3WM). The amplifier is based on a ladder transmission line
consisting of flux-biased radio-frequency SQUIDs whose nonlinearity is of
$\chi^{(2)}$-type. QPM is achieved in the 3WM process,
$\omega_p=\omega_s+\omega_i$ (where $\omega_p$, $\omega_s$, and $\omega_i$ are
the pump, signal, and idler frequencies, respectively) due to designing the
JTWPA to include periodically inverted groups of these SQUIDs that reverse the
sign of the nonlinearity. Modeling shows that the JTWPA bandwidth is relatively
large (ca. $0.4\omega_p$) and flat, while unwanted modes, including
$\omega_{2p}=2\omega_p$, $\omega_+=\omega_p +\omega_s$, $\omega_- = 2\omega_p -
\omega_s$, etc., are strongly suppressed with the help of engineered
dispersion.
|
A sequence of point configurations on a compact complex manifold is
asymptotically Fekete if it is close to maximizing a sequence of Vandermonde
determinants. These Vandermonde determinants are defined by tensor powers of a
Hermitian ample line bundle and the point configurations in the sequence
possess good sampling properties with respect to sections of the line bundle.
In this paper, given a collection of toric Hermitian ample line bundles, we
give a necessary and sufficient condition for the existence of a sequence of point
configurations which is asymptotically Fekete (and hence possesses good sampling
properties) with respect to each one of the line bundles. When they exist, we
also present a way of constructing such sequences. As a byproduct we get a new
equidistribution property for maximizers of products of Vandermonde
determinants.
|
Recently, the study of graph neural network (GNN) has attracted much
attention and achieved promising performance in molecular property prediction.
Most GNNs for molecular property prediction are proposed based on the idea of
learning the representations for the nodes by aggregating the information of
their neighbor nodes (e.g. atoms). Then, the representations can be passed to
subsequent layers to deal with individual downstream tasks. Therefore, the
architectures of GNNs can be considered as being composed of two core parts:
graph-related layers and task-specific layers. Facing real-world molecular
problems, the hyperparameter optimization for those layers are vital.
Hyperparameter optimization (HPO) becomes expensive in this situation because
evaluating candidate solutions requires massive computational resources to
train and validate models. Furthermore, a larger search space often makes the
HPO problems more challenging. In this research, we focus on the impact of
selecting two types of GNN hyperparameters, those belonging to graph-related
layers and those of task-specific layers, on the performance of GNN for
molecular property prediction. In our experiments, we employed a
state-of-the-art evolutionary algorithm (i.e., CMA-ES) for HPO. The results
reveal that optimizing the two types of hyperparameters separately can improve
the performance of GNNs, but optimizing both types of hyperparameters
simultaneously leads to the greatest improvements. Meanwhile, our study also
further confirms the importance of HPO for GNNs in molecular property
prediction problems.
|
In a complete metric space equipped with a doubling measure supporting a
$p$-Poincar\'e inequality, we prove sharp growth and integrability results for
$p$-harmonic Green functions and their minimal $p$-weak upper gradients. We
show that these properties are determined by the growth of the underlying
measure near the singularity. Corresponding results are obtained also for more
general $p$-harmonic functions with poles, as well as for singular solutions of
elliptic differential equations in divergence form on weighted $\mathbf{R}^n$
and on manifolds.
The proofs are based on a new general capacity estimate for annuli, which
implies precise pointwise estimates for $p$-harmonic Green functions. The
capacity estimate is valid under considerably milder assumptions than above. We
also use it, under these milder assumptions, to characterize singletons of zero
capacity and the $p$-parabolicity of the space. This generalizes and improves
earlier results that have been important especially in the context of
Riemannian manifolds.
|
A number of recent approaches have been proposed for pruning neural network
parameters at initialization with the goal of reducing the size and
computational burden of models while minimally affecting their training
dynamics and generalization performance. While each of these approaches has
some well-founded motivation, a rigorous analysis of the effect of
these pruning methods on network training dynamics and their formal
relationship to each other has thus far received little attention. Leveraging
recent theoretical approximations provided by the Neural Tangent Kernel, we
unify a number of popular approaches for pruning at initialization under a
single path-centric framework. We introduce the Path Kernel as the
data-independent factor in a decomposition of the Neural Tangent Kernel and
show the global structure of the Path Kernel can be computed efficiently. This
Path Kernel decomposition separates the architectural effects from the
data-dependent effects within the Neural Tangent Kernel, providing a means to
predict the convergence dynamics of a network from its architecture alone. We
analyze the use of this structure in approximating training and generalization
performance of networks in the absence of data across a number of
initialization pruning approaches. Observing the relationship between input
data and paths and the relationship between the Path Kernel and its natural
norm, we additionally propose two augmentations of the SynFlow algorithm for
pruning at initialization.
|
Common modal decomposition techniques for flowfield analysis, data-driven
modeling and flow control, such as proper orthogonal decomposition (POD) and
dynamic mode decomposition (DMD) are usually performed in an Eulerian (fixed)
frame of reference with snapshots from measurements or evolution equations. The
Eulerian description poses some difficulties, however, when the domain or the
mesh deforms with time as, for example, in fluid-structure interactions. For
such cases, we first formulate a Lagrangian modal analysis (LMA) ansatz by a
posteriori transforming the Eulerian flow fields into Lagrangian flow maps
through an orientation and measure-preserving domain diffeomorphism. The
development is then verified for Lagrangian variants of POD and DMD using
direct numerical simulations (DNS) of two canonical flow configurations at Mach
0.5, the lid-driven cavity and flow past a cylinder, representing internal and
external flows, respectively, at pre- and post-bifurcation Reynolds numbers.
The LMA is demonstrated for several situations encompassing unsteady flow
without and with boundary and mesh deformation as well as non-uniform base
flows that are steady in Eulerian but not in Lagrangian frames. We show that
LMA application to steady nonuniform base flow yields insights into flow
stability and post-bifurcation dynamics. LMA naturally leads to Lagrangian
coherent flow structures and connections with finite-time Lyapunov exponents
(FTLE). We examine the mathematical link between FTLE and LMA by considering a
double-gyre flow pattern. Dynamically important flow features in the Lagrangian
sense are recovered by performing LMA with forward and backward (adjoint) time
procedures.
|
Network models provide an efficient way to represent many real life problems
mathematically. In the last few decades, the field of network optimization has
witnessed an upsurge of interest among researchers and practitioners. The
network models considered in this thesis are broadly classified into four
types: the transportation problem, the shortest path problem, the minimum
spanning tree problem and the maximum flow problem. Quite often, we come across
situations in which the decision parameters of network optimization problems are not precise and
characterized by various forms of uncertainties arising from the factors, like
insufficient or incomplete data, lack of evidence, inappropriate judgements and
randomness. Considering the deterministic environment, there exist several
studies on network optimization problems. However, few investigations of
single- and multi-objective network optimization problems under diverse
uncertain frameworks are found in the literature. This thesis proposes seven
different network models under different uncertain paradigms. Here, the
uncertain programming techniques used to formulate the uncertain network models
are (i) expected value model, (ii) chance constrained model and (iii) dependent
chance constrained model. Subsequently, the corresponding crisp equivalents of
the uncertain network models are solved using different solution methodologies.
The solution methodologies used in this thesis can be broadly categorized as
classical methods and evolutionary algorithms. The classical methods, used in
this thesis, are Dijkstra and Kruskal algorithms, modified rough Dijkstra
algorithm, global criterion method, epsilon constraint method and fuzzy
programming method. Among the evolutionary algorithms, we have proposed a
varying population genetic algorithm with indeterminate crossover and
considered two multi-objective evolutionary algorithms.
|
The wavelet analysis technique is a powerful tool and is widely used in broad
disciplines of engineering, technology, and sciences. In this work, we present
a novel scheme of constructing continuous wavelet functions, in which the
wavelet functions are obtained by taking the first derivative of smoothing
functions with respect to the scale parameter. Due to this wavelet constructing
scheme, the inverse transforms are only one-dimensional integrations with
respect to the scale parameter, and hence the continuous wavelet transforms
constructed in this way are easier to use than those of the usual scheme. We then
apply the Gaussian-derived wavelet constructed by our scheme to computations of
the density power spectrum for dark matter, the velocity power spectrum and the
kinetic energy spectrum for baryonic fluid. These computations exhibit the
convenience and strength of the continuous wavelet transforms. The transforms
are very easy to perform, and we believe that the simplicity of our wavelet
scheme will make continuous wavelet transforms very useful in practice.
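
To make the construction concrete, one way to write it down (our own notation
and normalization, with a Gaussian smoothing kernel; the paper's conventions
may differ) is
\[
S(x,s)=\int f(x')\,g_s(x-x')\,dx', \qquad
w(x,s)\equiv\frac{\partial S(x,s)}{\partial s},
\]
where $g_s$ is a normalized Gaussian window of width $s$. Since
$S(x,s)\to f(x)$ as $s\to 0$ and $S(x,s)\to\bar f$ (the mean of the field) as
$s\to\infty$, the inverse transform reduces to the advertised one-dimensional
integration over scale,
\[
f(x)-\bar f=-\int_0^\infty w(x,s)\,ds .
\]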
|
We prove the irreducibility of integer polynomials $f(X)$ whose roots lie
inside an Apollonius circle associated to two points on the real axis with
integer abscissae $a$ and $b$, with the ratio of the distances to these points
depending on the canonical decomposition of $f(a)$ and $f(b)$. In particular,
we obtain irreducibility criteria for the case where $f(a)$ and $f(b)$ have few
prime factors, and $f$ is either an Enestr\"om-Kakeya polynomial, or has a
large leading coefficient. Analogous results are also provided for multivariate
polynomials over arbitrary fields, in a non-Archimedean setting.
|
We construct the complete set of boundary states of two-dimensional fermionic
CFTs using that of the bosonic counterpart. We see that there are two groups of
boundary conditions, which contribute to the open-string partition function by
characters with integer coefficients, or with $\sqrt{2}$ times integer
coefficients. We argue that, using the argument of [JHEP 09 (2020) 018], this
$\sqrt{2}$ indicates a single unpaired Majorana zero mode, and that these two
groups of boundary conditions are mutually incompatible. We end the paper by
mentioning a possible interpretation of the result in terms of the entanglement
entropy.
|
We develop a dynamical density functional theory based model for the drying
of colloidal films on planar surfaces. We consider mixtures of two different
sizes of hard-sphere colloids. Depending on the solvent evaporation rate and
the initial concentrations of the two species, we observe varying degrees of
stratification in the final dried films. Our model predicts the various
structures described in the literature previously from experiments and computer
simulations, in particular the small-on-top stratified films. Our model also
includes the influence of adsorption of particles to the interfaces.
|
A perennial objection against Bayes factor point-null hypothesis tests is
that the point-null hypothesis is known to be false from the outset. Following
Morey and Rouder (2011) we examine the consequences of approximating the sharp
point-null hypothesis by a hazy `peri-null' hypothesis instantiated as a narrow
prior distribution centered on the point of interest. The peri-null Bayes
factor then equals the point-null Bayes factor multiplied by a correction term
which is itself a Bayes factor. For moderate sample sizes, the correction term
is relatively inconsequential; however, for large sample sizes the correction
term becomes influential and causes the peri-null Bayes factor to be
inconsistent and approach a limit that depends on the ratio of prior ordinates
evaluated at the maximum likelihood estimate. We characterize the asymptotic
behavior of the peri-null Bayes factor and discuss how to construct peri-null
Bayes factor hypothesis tests that are also consistent.
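
A small numerical sketch (normal data with known variance; the prior widths
below are our own choices, not the paper's) makes the factorization explicit:
the peri-null Bayes factor equals the point-null Bayes factor multiplied by a
correction term that is itself a Bayes factor.

import numpy as np
from scipy import stats

def bf_vs_alt(xbar, n, sigma, null_sd, alt_sd=1.0):
    # Bayes factor of a null-type prior theta ~ N(0, null_sd^2) against the
    # alternative theta ~ N(0, alt_sd^2), for xbar ~ N(theta, sigma^2 / n).
    m_null = stats.norm.pdf(xbar, 0.0, np.sqrt(null_sd**2 + sigma**2 / n))
    m_alt = stats.norm.pdf(xbar, 0.0, np.sqrt(alt_sd**2 + sigma**2 / n))
    return m_null / m_alt

xbar, n, sigma = 0.15, 200, 1.0                      # illustrative data summary
bf_point = bf_vs_alt(xbar, n, sigma, null_sd=0.0)    # sharp point null
bf_peri = bf_vs_alt(xbar, n, sigma, null_sd=0.05)    # hazy peri-null prior
correction = bf_peri / bf_point                      # itself a Bayes factor
print(bf_point, bf_peri, correction)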
|
NonlinearSchrodinger.jl is a Julia package with a simple interface for
studying solutions of nonlinear Schr\"odinger equations (NLSEs). In
approximately ten lines of code, one can perform a simulation of the cubic NLSE
using one of 32 algorithms, including symplectic and Runge-Kutta-Nystr\"om
integrators up to eighth order. Furthermore, it is possible to compute
analytical solutions via a numerical implementation of the Darboux
transformation for extended NLSEs up to fifth order, with an equally simple
interface. In what follows, we review the fundamentals of solving this class of
equations numerically and analytically, discuss the implementation, and provide
several examples.
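
For readers unfamiliar with the numerics, the sketch below is a bare-bones
split-step Fourier integrator for the cubic NLSE
$i\psi_t + \tfrac{1}{2}\psi_{xx} + |\psi|^2\psi = 0$, written in Python; it is
a generic textbook scheme, not the package's Julia interface or one of its
symplectic or Runge-Kutta-Nystr\"om integrators.

import numpy as np

def split_step_cubic_nlse(psi0, x, dt, n_steps):
    # Strang splitting for i psi_t + 0.5 psi_xx + |psi|^2 psi = 0.
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])   # spectral wavenumbers
    half_disp = np.exp(-0.5j * k**2 * (dt / 2))             # half-step of dispersion
    psi = psi0.astype(complex)
    for _ in range(n_steps):
        psi = np.fft.ifft(half_disp * np.fft.fft(psi))      # dispersion, dt/2
        psi *= np.exp(1j * np.abs(psi)**2 * dt)             # full nonlinear step
        psi = np.fft.ifft(half_disp * np.fft.fft(psi))      # dispersion, dt/2
    return psi

# The fundamental soliton sech(x) should propagate with an essentially constant profile.
x = np.linspace(-20, 20, 2**10, endpoint=False)
psi = split_step_cubic_nlse(1 / np.cosh(x), x, dt=1e-3, n_steps=5000)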
|
In a 2004 paper by V. M. Buchstaber and D. V. Leykin, published in
"Functional Analysis and Its Applications," for each $g > 0$, a system of $2g$
multidimensional heat equations in a nonholonomic frame was constructed. The
sigma function of the universal hyperelliptic curve of genus $g$ is a solution
of this system. In the work arXiv:2007.08966 explicit expressions for the
Schr\"odinger operators that define the equations of the system considered were
obtained in the hyperelliptic case.
In this work we use these results to show that if the initial condition of
the system considered is polynomial, then the solution of the system is
uniquely determined up to a constant factor. This has important applications in
the well-known problem of series expansion for the hyperelliptic sigma
function. We give an explicit description of the connection of such solutions
to well-known Burchnall-Chaundy polynomials and Adler-Moser polynomials. We
find a system of linear second-order differential equations that determines the
corresponding Adler-Moser polynomial.
|
Photon-mediated interactions between atomic systems are the cornerstone of
quantum information transfer. They can arise via coupling to a common
electromagnetic mode or by quantum interference. This can manifest in
cooperative light-matter coupling, yielding collective rate enhancements such
as those at the heart of superradiance, or remote entanglement via
measurement-induced path erasure. Here, we report coherent control of
cooperative emission arising from two distant but indistinguishable solid-state
emitters due to path erasure. The primary signature of cooperative emission,
the emergence of "bunching" at zero-delay in an intensity correlation
experiment, is used to characterise the indistinguishability of the emitters,
their dephasing, and the degree of correlation in the joint system which can be
coherently controlled. In a stark departure from a pair of uncorrelated
emitters, we observe photon statistics resembling that of a weak coherent state
in Hong-Ou-Mandel type interference measurements. Our experiments establish new
techniques to control and characterize cooperative behaviour between matter
qubits using the full quantum optics toolbox, a key stepping stone on the route
to realising large-scale quantum photonic networks.
|
Galaxy clusters are considered to be gigantic reservoirs of cosmic rays
(CRs). Some of the clusters are found with extended radio emission, which
provides evidence for the existence of magnetic fields and CR electrons in the
intra-cluster medium (ICM). The mechanism of radio halo (RH) emission is still
under debate, and it has been believed that turbulent reacceleration plays an
important role. In this paper, we study the reacceleration of CR protons and
electrons in detail by numerically solving the Fokker-Planck equation, and show
how radio and gamma-ray observations can be used to constrain CR distributions
and resulting high-energy emission for the Coma cluster. We take into account
the radial diffusion of CRs and follow the time evolution of their
one-dimensional distribution, by which we investigate the radial profile of the
CR injection that is consistent with the observed RH surface brightness. We
find that the required injection profile is non-trivial, depending on whether
CR electrons have the primary or secondary origin. Although the secondary CR
electron scenario predicts larger gamma-ray and neutrino fluxes, it is in
tension with the observed RH spectrum. In either scenario, we find that galaxy
clusters can make a sizable contribution to the all-sky neutrino intensity if
the CR energy spectrum is nearly flat.
|