Phenomics is concerned with the detailed description of all aspects of organisms, from their physical foundations at the genetic, molecular and cellular level to behavioural and psychological traits. Neuropsychiatric phenomics, endorsed by the NIMH, provides such a broad perspective for understanding mental disorders. The learning sciences clearly need a similar approach, one that integrates efforts to understand cognitive processes from the perspective of brain development in its temporal, spatial, psychological and social aspects. The brain is a substrate shaped by genetic, epigenetic, cellular and environmental factors, including education, individual experiences and personal history, culture, and social milieu. The learning sciences should thus be based on the foundation of neurocognitive phenomics. A brief review of selected aspects of such an approach is presented, outlining new research directions. Central, peripheral and motor processes in the brain are linked to the inventory of learning styles.
|
This article describes novel approaches to quickly estimate planar surfaces
from RGBD sensor data. The approach manipulates the standard algebraic fitting
equations into a form that allows many of the needed regression variables to be
computed directly from the camera calibration information. As such, much of the
computational burden required by a standard algebraic surface fit can be
pre-computed. This provides significant time and resource savings, especially when many surface fits are being performed, which is often the case when RGBD point-cloud data is analyzed for normal estimation, curvature estimation, polygonization, or 3D segmentation applications. Using an integral image
implementation, the proposed approaches show a significant increase in
performance compared to the standard algebraic fitting approaches.
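Not the authors' exact formulation (in particular, we accumulate the moments directly rather than deriving them from camera calibration), the following minimal Python sketch shows an algebraic plane fit whose normal equations are built from window sums of the kind an integral image provides in O(1) per window; all names are ours.

```python
import numpy as np

def fit_plane_from_moments(points):
    """Algebraic plane fit z = a*x + b*y + c from a window of 3D points.

    The normal-equation matrix is built from accumulated first- and
    second-order moments; in an integral-image implementation these sums
    are precomputed once per image and read back per window in O(1).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    n = len(points)
    # Moments that an integral image could provide per rectangular window.
    Sx, Sy, Sz = x.sum(), y.sum(), z.sum()
    Sxx, Syy, Sxy = (x * x).sum(), (y * y).sum(), (x * y).sum()
    Sxz, Syz = (x * z).sum(), (y * z).sum()
    A = np.array([[Sxx, Sxy, Sx],
                  [Sxy, Syy, Sy],
                  [Sx,  Sy,  n ]])
    rhs = np.array([Sxz, Syz, Sz])
    a, b, c = np.linalg.solve(A, rhs)
    return a, b, c  # plane z = a*x + b*y + c
```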
|
Compressing Deep Neural Network (DNN) models to alleviate the storage and
computation requirements is essential for practical applications, especially
for resource limited devices. Although capable of reducing a reasonable amount
of model parameters, previous unstructured or structured weight pruning methods
can hardly truly accelerate inference, either due to the poor hardware
compatibility of the unstructured sparsity or due to the low sparse rate of the
structurally pruned network. Aiming at reducing both storage and computation,
as well as preserving the original task performance, we propose a generalized
weight unification framework at a hardware compatible micro-structured level to
achieve a high degree of compression and acceleration. Weight coefficients of a selected micro-structured block are unified to reduce the storage and computation of the block without changing the neuron connections; this reduces to a micro-structured pruning special case when all unified coefficients are set to zero, in which case the neuron connections (hence storage and computation) are completely removed. In addition, we develop an effective training framework
based on the alternating direction method of multipliers (ADMM), which converts
our complex constrained optimization into separately solvable subproblems.
Through iteratively optimizing the subproblems, the desired micro-structure can
be ensured with high compression ratio and low performance degradation. We
extensively evaluated our method using a variety of benchmark models and
datasets for different applications. Experimental results demonstrate
state-of-the-art performance.
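To make the unification step concrete, here is a minimal numpy sketch, an illustration under our own assumptions rather than the paper's training code: coefficients inside each fixed-size block share a single magnitude, and setting that magnitude to zero recovers the micro-structured pruning special case.

```python
import numpy as np

def unify_blocks(weights, block=4, prune=False):
    """Unify weight coefficients inside fixed-size micro-structured blocks.

    Each length-`block` slice of the flattened weight tensor shares one
    magnitude (the mean absolute value of the block), re-signed element-wise.
    Setting the unified value to zero recovers micro-structured pruning.
    """
    w = weights.reshape(-1, block)
    if prune:
        unified = np.zeros_like(w)
    else:
        unified = np.sign(w) * np.abs(w).mean(axis=1, keepdims=True)
    return unified.reshape(weights.shape)

w = np.random.randn(8, 8).astype(np.float32)
w_unified = unify_blocks(w, block=4)              # shared magnitude per block
w_pruned = unify_blocks(w, block=4, prune=True)   # pruning special case
```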
|
The asymptotic phase is a fundamental quantity for the analysis of
deterministic limit-cycle oscillators, and generalized definitions of the
asymptotic phase for stochastic oscillators have also been proposed. In this
article, we show that the asymptotic phase, as well as the amplitude, can be defined
for classical and semiclassical stochastic oscillators in a natural and unified
manner by using the eigenfunctions of the Koopman operator of the system. We
show that the proposed definition gives appropriate values of the phase and
amplitude for strongly stochastic limit-cycle oscillators, excitable systems
undergoing noise-induced oscillations, and also for quantum limit-cycle
oscillators in the semiclassical regime.
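For orientation, a hedged sketch of the standard Koopman-eigenfunction recipe that the unified definition builds on (the stochastic and semiclassical extension is the article's contribution): if $\phi_1$ is an eigenfunction of the generator of the Koopman (backward) operator, $\mathcal{L}^\dagger \phi_1 = \Lambda_1 \phi_1$, where $\Lambda_1 = \mu_1 + i\omega_1$ is the slowest-decaying eigenvalue with nonzero frequency, then the asymptotic phase and an associated amplitude of a state $x$ may be read off as

$$\Theta(x) = \arg \phi_1(x), \qquad R(x) = |\phi_1(x)|,$$

so that along trajectories $\Theta$ advances on average at the constant frequency $\omega_1$ while $R$ decays on average at the rate $|\mu_1|$.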
|
The relationship between galaxy characteristics and the reionization of the
universe remains elusive, mainly due to the observational difficulty in
accessing the Lyman continuum (LyC) at these redshifts. It is thus important to
identify low-redshift LyC-leaking galaxies that can be used as laboratories to
investigate the physical processes that allow LyC photons to escape. The
weakness of the [S II] nebular emission lines relative to typical star-forming
galaxies has been proposed as a LyC predictor. In this paper, we show that the
[S II]-deficiency is an effective method to select LyC-leaking candidates using
data from the Low-redshift LyC Survey, which has detected flux below the Lyman
edge in 35 out of 66 star-forming galaxies with the Cosmic Origins Spectrograph
onboard the Hubble Space Telescope. We show that LyC leakers tend to be more [S
II]-deficient and that the fraction of their detections increases as [S
II]-deficiency becomes more prominent. Correlational studies suggest that [S
II]-deficiency complements other LyC diagnostics (such as strong Lyman-$\alpha$
emission and high [O III]/[O II]). Our results verify an additional technique
by which reionization-era galaxies could be studied.
|
We propose a novel analytical model for anisotropic multi-layer cylindrical
structures containing graphene layers. The general structure is formed by an
aperiodic repetition of a three-layer sub-structure in which a graphene layer with an isotropic surface conductivity is sandwiched between two adjacent magnetic materials, and an external magnetic bias is applied in the axial direction. A general matrix representation is obtained within the proposed analytical model to derive the dispersion relation, which is then used to find the effective index of the structure and its other propagation parameters. Two exemplary structures are introduced and studied to show the
richness of the proposed general structure regarding the related specific
plasmonic wave phenomena and effects. A series of simulations have been
conducted to demonstrate the noticeable wave-guiding properties of the
structure in the 10-40 THz band. A very good agreement between the analytical
and simulation results is observed. The proposed structure can be utilized to
design novel plasmonic devices such as absorbers, modulators, plasmonic sensors
and tunable antennas in the THz frequencies.
|
In this paper, meant as a companion to arXiv:2006.04458, we consider a class
of non-integrable $2D$ Ising models in cylindrical domains, and we discuss two
key aspects of the multiscale construction of their scaling limit. In
particular, we provide a detailed derivation of the Grassmann representation of
the model, including a self-contained presentation of the exact solution of the
nearest neighbor model in the cylinder. Moreover, we prove precise asymptotic
estimates of the fermionic Green's function in the cylinder, required for the
multiscale analysis of the model. We also review the multiscale construction of
the effective potentials in the infinite volume limit, in a form suitable for
the generalization to finite cylinders. Compared to previous works, we
introduce a few important simplifications in the localization procedure and in
the iterative bounds on the kernels of the effective potentials, which are
crucial for the adaptation of the construction to domains with boundaries.
|
Many channel decoders rely on parallel decoding attempts to achieve good
performance with acceptable latency. However, most of the time fewer attempts
than the foreseen maximum are sufficient for successful decoding.
Input-distribution-aware (IDA) decoding makes it possible to determine the parallelism of polar code list decoders by observing the distribution of channel information. In this work, IDA decoding is shown to be effective with other codes and decoding algorithms as well. Two techniques, M-IDA and MD-IDA, are proposed: they exploit the sampling of the input distribution inherent in particular decoding algorithms to perform low-cost IDA decoding. Simulation results on the decoding of BCH codes via the Chase and ORBGRAND algorithms show that they perform at least as well as the original IDA decoding, reducing run-time complexity down to 17% and 67%, respectively, with minimal error-correction degradation.
|
It was shown that the particle distribution detected by a uniformly
accelerated observer in the inertial vacuum (Unruh effect) deviates from the
pure Planckian spectrum when considering the superposition of fields with
different masses. Here we elaborate on the statistical origin of this
phenomenon. In a suitable regime, we provide an effective description of the
emergent distribution in terms of the nonextensive q-generalized statistics
based on Tsallis entropy. This picture allows us to establish a nontrivial
relation between the $q$-entropic index and the characteristic mixing parameters $\sin\theta$ and $\Delta m$. In particular, we infer that $q < 1$, indicating the
superadditive feature of Tsallis entropy in this framework. We discuss our
result in connection with the entangled condensate structure acquired by the
quantum vacuum for mixed fields.
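For reference, the textbook definition underlying the $q$-generalized statistics (not a result of this paper): the Tsallis entropy of a distribution $\{p_i\}$ is

$$S_q = \frac{1}{q-1}\left(1 - \sum_i p_i^{\,q}\right),$$

which recovers the Boltzmann-Gibbs entropy $-\sum_i p_i \ln p_i$ as $q \to 1$; for independent subsystems it satisfies $S_q(A+B) = S_q(A) + S_q(B) + (1-q)\,S_q(A)\,S_q(B)$, so $q < 1$ indeed corresponds to superadditivity.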
|
In this paper we present an early prototype of the Digger Finger that is
designed to easily penetrate granular media and is equipped with the GelSight
sensor. Identifying objects buried in granular media using tactile sensors is a
challenging task. First, particle jamming in granular media prevents downward
movement. Second, the granular media particles tend to get stuck between the
sensing surface and the object of interest, distorting the actual shape of the
object. To tackle these challenges we present a Digger Finger prototype. It is
capable of fluidizing granular media during penetration using mechanical
vibrations. It is equipped with high resolution vision based tactile sensing to
identify objects buried inside granular media. We describe the experimental
procedures we use to evaluate these fluidizing and buried shape recognition
capabilities. A robot with such fingers can perform explosive ordnance disposal
and Improvised Explosive Device (IED) detection tasks at a much finer
resolution compared to techniques like Ground Penetration Radars (GPRs).
Sensors like the Digger Finger will allow robotic manipulation research to move
beyond only manipulating rigid objects.
|
Domain adaptation aims to leverage a label-rich domain (the source domain) to
help model learning in a label-scarce domain (the target domain). Most domain
adaptation methods require the co-existence of source and target domain samples
to reduce the distribution mismatch; however, access to the source domain samples may not always be feasible in real-world applications due to problems such as storage, transmission, and privacy issues. In this
work, we deal with the source data-free unsupervised domain adaptation problem,
and propose a novel approach referred to as Virtual Domain Modeling (VDM-DA).
The virtual domain acts as a bridge between the source and target domains. On
one hand, we generate virtual domain samples based on an approximated Gaussian
Mixture Model (GMM) in the feature space with the pre-trained source model,
such that the virtual domain maintains a similar distribution with the source
domain without access to the original source data. On the other hand, we
also design an effective distribution alignment method to reduce the
distribution divergence between the virtual domain and the target domain by
gradually improving the compactness of the target domain distribution through
model learning. In this way, we successfully achieve the goal of distribution
alignment between the source and target domains by training deep networks
without access to the source domain data. We conduct extensive experiments
on benchmark datasets for both 2D image-based and 3D point cloud-based
cross-domain object recognition tasks, where the proposed VDM-DA method achieves state-of-the-art performance on all datasets.
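A minimal sketch of the virtual-domain sampling step (our illustration under simplifying assumptions: in VDM-DA the GMM is approximated from the pre-trained source model in feature space rather than fitted to data, and all names here are hypothetical):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def sample_virtual_domain(features, n_components=10, n_samples=1000):
    """Fit a GMM in feature space and draw virtual-domain samples from it.

    In VDM-DA the GMM parameters come from the pre-trained source model;
    here we simply fit the mixture to available feature vectors to
    illustrate the sampling mechanics.
    """
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(features)
    virtual_samples, _ = gmm.sample(n_samples)
    return virtual_samples
```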
|
We consider the learning and prediction of nonlinear time series generated by
a latent symplectic map. A special case is (not necessarily separable)
Hamiltonian systems, whose solution flows give such symplectic maps. For this
special case, both generic approaches based on learning the vector field of the
latent ODE and specialized approaches based on learning the Hamiltonian that
generates the vector field exist. Our method, however, is different as it does
not rely on the vector field nor assume its existence; instead, it directly
learns the symplectic evolution map in discrete time. Moreover, we do so by
representing the symplectic map via a generating function, which we approximate
by a neural network (hence the name GFNN). This way, our approximation of the
evolution map is always \emph{exactly} symplectic. This additional geometric
structure allows the local prediction error at each step to accumulate in a
controlled fashion, and we will prove, under reasonable assumptions, that the
global prediction error grows at most \emph{linearly} with long prediction
time, which significantly improves an otherwise exponential growth. In
addition, as a map-based and thus purely data-driven method, GFNN avoids two
additional sources of inaccuracies common in vector-field based approaches,
namely the error in approximating the vector field by finite difference of the
data, and the error in numerical integration of the vector field for making
predictions. Numerical experiments further demonstrate our claims.
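For context, the classical construction that makes the learned map exactly symplectic (the standard type-1 generating function; representing $S$ by a neural network is the paper's contribution): a scalar function $S(q, Q)$ with $\det\left(\partial^2 S / \partial q\,\partial Q\right) \neq 0$ implicitly defines a map $(q, p) \mapsto (Q, P)$ through

$$p = \frac{\partial S}{\partial q}(q, Q), \qquad P = -\frac{\partial S}{\partial Q}(q, Q),$$

and every map obtained this way is symplectic, regardless of how accurately $S$ itself is approximated.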
|
Early wildfire detection is of paramount importance to avoid as much damage
as possible to the environment, properties, and lives. Deep Learning (DL)
models that can leverage both visible and infrared information have the
potential to display state-of-the-art performance, with lower false-positive
rates than existing techniques. However, most DL-based image fusion methods
have not been evaluated in the domain of fire imagery. Additionally, to the
best of our knowledge, no publicly available dataset contains visible-infrared
fused fire images. Given the growing interest in DL-based image fusion techniques and their reduced complexity, we select three state-of-the-art DL-based image fusion techniques and evaluate them for the specific task of fire image fusion. We compare the performance of these methods on selected metrics. Finally, we also present an extension to one of these methods, which we call FIRe-GAN, that improves the generation of artificial infrared images and fused ones on selected metrics.
|
We consider linear systems $Ax = b$ where $A \in \mathbb{R}^{m \times n}$
consists of normalized rows, $\|a_i\|_{\ell^2} = 1$, and where up to $\beta m$
entries of $b$ have been corrupted (possibly by arbitrarily large numbers).
Haddock, Needell, Rebrova and Swartworth propose a quantile-based Random
Kaczmarz method and show that for certain random matrices $A$ it converges with
high likelihood to the true solution. We prove a deterministic version by
constructing, for any matrix $A$, a number $\beta_A$ such that there is
convergence for all perturbations with $\beta < \beta_A$. Assuming a random
matrix heuristic, this proves convergence for tall Gaussian matrices with up to
$\sim 0.5\%$ corruption (a number that can likely be improved).
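A minimal Python sketch of the quantile-based Random Kaczmarz iteration under study (our reading of the Haddock-Needell-Rebrova-Swartworth method; parameter names and the choice to recompute the residual quantile at every step are ours):

```python
import numpy as np

def quantile_random_kaczmarz(A, b, q=0.5, iters=5000, rng=None):
    """Random Kaczmarz that only accepts updates with small residuals.

    A row i is drawn uniformly; the projection onto its hyperplane is
    applied only if the residual |<a_i, x> - b_i| lies below the q-quantile
    of all current residuals, which screens out rows whose entries of b
    were corrupted. Rows are assumed normalized: ||a_i|| = 1.
    """
    rng = rng or np.random.default_rng()
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        residuals = np.abs(A @ x - b)
        threshold = np.quantile(residuals, q)
        i = rng.integers(m)
        r = A[i] @ x - b[i]
        if abs(r) <= threshold:
            x = x - r * A[i]  # project onto the i-th hyperplane
        # rows with corrupted b exceed the threshold and are skipped
    return x
```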
|
In the framework of photonics with all-dielectric nanoantennas,
sub-micrometric spheres can be exploited for a plethora of applications
including vanishing back-scattering, enhanced directivity of a light emitter,
beam steering, and large Purcell factors. Here, the potential of a
high-throughput fabrication method based on aerosol-spray is shown to form
quasi-perfect sub-micrometric spheres of polycrystalline TiO$_2$. Spectroscopic
investigation of light scattering from individual particles reveals sharp
resonances in agreement with Mie theory, neat structural colors, and a high
directivity. Owing to the high permittivity and lossless material in use, this
method opens the way toward the implementation of isotropic meta-materials and
forward-directional sources with magnetic responses at visible and near-UV
frequencies, not accessible with conventional Si- and Ge-based Mie resonators.
|
We exhibit the analog of the entropy map for multivariate Gaussian
distributions on local fields. As in the real case, the image of this map lies
in the supermodular cone and it determines the distribution of the valuation
vector. In general, this map can be defined for non-archimedean valued fields
whose valuation group is an additive subgroup of the real line, and it remains
supermodular. We also explicitly compute the image of this map in dimension 3.
|
We investigate the degradation of quantum entanglement in the
Schwarzschild-de Sitter black hole spacetime, by studying the mutual
information and the logarithmic negativity for maximally entangled, bipartite
initial states for massless minimal scalar fields. This spacetime is endowed
with a black hole as well as a cosmological event horizon, giving rise to
particle creation at two different temperatures. We consider two independent
descriptions of thermodynamics and particle creation in this background. The
first involves thermal equilibrium of an observer with the individual Hawking
temperature of either of the horizons. We show that, as for asymptotically flat/anti-de Sitter black holes, the entanglement or correlation degrades with increasing Hawking temperature. The second treats the two horizons together to define a total entropy and an effective equilibrium temperature. We present a field theoretic derivation of this effective temperature and argue that, unlike the usual cases, the particle creation here does not occur in causally disconnected spacetime wedges but in a single region. Using these states, we then show that in this scenario the entanglement never degrades but instead increases with increasing black hole temperature; this holds no matter how hot the black hole becomes or how small the cosmological constant is. We argue
that this phenomenon can have no analogue in the asymptotically flat/anti-de
Sitter black hole spacetimes.
|
Dynamic graphs arise in a plethora of practical scenarios such as social
networks, communication networks, and financial transaction networks. Given a
dynamic graph, it is fundamental and essential to learn a graph representation
that is expected not only to preserve structural proximity but also jointly
capture the time-evolving patterns. Recently, graph convolutional network (GCN)
has been widely explored and used in non-Euclidean application domains. The
main success of GCN, especially in handling dependencies and passing messages
within nodes, lies in its approximation to Laplacian smoothing. However, this smoothing technique not only encourages must-link node pairs to get closer but also pushes cannot-link pairs to shrink together, which can cause a serious feature-shrinking or oversmoothing problem, especially when graph convolutions are stacked over multiple layers or steps. For learning time-evolving patterns, a natural solution is to preserve the historical state and combine it with the current interactions to obtain the most recent representation. In prevalent methods, which stack graph convolutions either explicitly or implicitly, this feature-shrinking or oversmoothing problem can then make nodes too similar to distinguish from one another. To solve this problem in dynamic graph embedding, we first analyze the shrinking properties of the node embedding space, and then design a simple yet versatile method that exploits an L2 feature normalization constraint to rescale all node embeddings onto the unit hypersphere, so that nodes do not shrink together while similar nodes can still get closer. Extensive
experiments on four real-world dynamic graph datasets compared with competitive
baseline models demonstrate the effectiveness of the proposed method.
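The normalization constraint itself is compact; the sketch below (ours, with hypothetical shapes) shows the rescaling step applied to a batch of node embeddings after a graph-convolution layer.

```python
import numpy as np

def l2_normalize_embeddings(H, eps=1e-12):
    """Rescale each node embedding onto the unit hypersphere.

    H has shape (num_nodes, dim). After normalization all embeddings share
    the same norm, so they cannot collectively shrink toward a point, yet
    angular proximity between similar nodes is preserved.
    """
    norms = np.linalg.norm(H, axis=1, keepdims=True)
    return H / np.maximum(norms, eps)
```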
|
Epitaxial orthorhombic Hf0.5Zr0.5O2 (HZO) films on La0.67Sr0.33MnO3 (LSMO)
electrodes show robust ferroelectricity, with high polarization, endurance and
retention. However, no similar results have been achieved using other
perovskite electrodes so far. Here, LSMO and other perovskite electrodes are
compared. A small amount of orthorhombic phase and low polarization is found in
HZO films grown on La-doped BaSnO3 and Nb-doped SrTiO3, while null amounts of
orthorhombic phase and polarization are detected in films on LaNiO3 and SrRuO3.
The critical effect of the electrode on the stabilized phases is not a consequence of differences in the electrode lattice parameter. The interface is
critical, and engineering the HZO bottom interface on just a few monolayers of
LSMO permits the stabilization of the orthorhombic phase. Furthermore, while
the specific divalent ion (Sr or Ca) in the manganite is not relevant, reducing
the La content causes a severe reduction of the amount of orthorhombic phase
and the ferroelectric polarization in the HZO film.
|
We show that for every positive integer $k$, there exist $k$ consecutive
primes having the property that if any digit of any one of the primes,
including any of the infinitely many leading zero digits, is changed, then that
prime becomes composite.
|
Classical and quantum correlation functions are derived for a system of
non-interacting particles moving on a circle. It is shown that the decaying
behaviour of the classical expression for the correlation function can be
recovered from the strictly periodic quantum mechanical expression by taking the limit in which Planck's constant goes to zero, after an appropriate
transformation.
|
We propose a scheme to create an electronic Floquet vortex state by
irradiating a two-dimensional semiconductor with laser light carrying non-zero orbital angular momentum. We analytically and numerically study the properties of the Floquet vortex states using methods analogous to those previously applied to the analysis of superconducting vortex states. We show
that such Floquet vortex states are similar to the superconducting vortex
states, and they exhibit a wide range of tunability. To illustrate the
potential utility of such tunability, we show how such states could be used for
quantum state engineering.
|
The redshift distribution of galactic-scale lensing systems provides a
laboratory to probe the velocity dispersion function (VDF) of early-type
galaxies (ETGs) and measure the evolution of early-type galaxies at redshift z
~ 1. Through the statistical analysis of the currently largest sample of
early-type galaxy gravitational lenses, we conclude that the VDF inferred
solely from strong lensing systems is fully consistent with the measurements of
SDSS DR5 data in the local universe. In particular, our results strongly
indicate a decline in the number density of lenses by a factor of two and a 20%
increase in the characteristic velocity dispersion for the early-type galaxy
population at z ~ 1. Such VDF evolution is in perfect agreement with the
$\Lambda$CDM paradigm (i.e., the hierarchical build-up of mass structures over
cosmic time) and different from "stellar mass-downsizing" evolutions obtained
by many galaxy surveys. Meanwhile, we also quantitatively discuss the evolution
of the VDF shape in a more complex evolution model, which reveals its strong
correlation with that of the number density and velocity dispersion of
early-type galaxies. Finally, we evaluate whether future missions such as LSST can
be sensitive enough to place the most stringent constraints on the redshift
evolution of early-type galaxies, based on the redshift distribution of
available gravitational lenses.
|
A promising approach to the practical application of the Quantum Approximate
Optimization Algorithm (QAOA) is finding QAOA parameters classically in
simulation and sampling the solutions from QAOA with optimized parameters on a
quantum computer. Doing so requires repeated evaluations of QAOA energy in
simulation. We propose a novel approach for accelerating the evaluation of QAOA
energy by leveraging the symmetry of the problem. We show a connection between
classical symmetries of the objective function and the symmetries of the terms
of the cost Hamiltonian with respect to the QAOA energy. We show how, by considering only the terms that are not connected by symmetry, we can significantly reduce the cost of evaluating the QAOA energy. Our approach is general, applies to any known subgroup of symmetries, and is not limited to graph problems. Our results are directly applicable to the nonlocal QAOA generalization RQAOA. We outline how available fast graph automorphism solvers
can be leveraged for computing the symmetries of the problem in practice. We
implement the proposed approach on the MaxCut problem using a state-of-the-art
tensor network simulator and a graph automorphism solver on a benchmark of 48
graphs with up to 10,000 nodes. Our approach provides an improvement for $p=1$
on $71.7\%$ of the graphs considered, with a median speedup of $4.06$, on a
benchmark where $62.5\%$ of the graphs are known to be hard for automorphism
solvers.
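To illustrate the term-grouping idea on MaxCut (a simplified Python sketch under our own assumptions: `perms` enumerates all elements of a known symmetry subgroup, including the identity, as vertex-permutation dicts; the paper itself pairs a tensor-network simulator with a graph automorphism solver):

```python
from collections import defaultdict

def energy_from_orbits(edges, perms, term_energy):
    """Evaluate a MaxCut QAOA energy from one representative term per orbit.

    Terms (edges) related by a symmetry of the objective share the same
    expectation value <Z_u Z_v>, so `term_energy` is called once per orbit
    and its result is weighted by the orbit size.
    """
    orbits = defaultdict(int)
    for (u, v) in edges:
        # canonical representative of the edge's orbit under the subgroup
        rep = min(tuple(sorted((p[u], p[v]))) for p in perms)
        orbits[rep] += 1
    return sum(count * term_energy(rep) for rep, count in orbits.items())
```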
|
Methods inspired from machine learning have recently attracted great interest
in the computational study of quantum many-particle systems. So far, however,
it has proven challenging to deal with microscopic models in which the total
number of particles is not conserved. To address this issue, we propose a new
variant of neural network states, which we term neural coherent states. Taking
the Fr\"ohlich impurity model as a case study, we show that neural coherent
states can learn the ground state of non-additive systems very well. In
particular, we observe substantial improvement over the standard coherent state
estimates in the most challenging intermediate coupling regime. Our approach is
generic and does not assume specific details of the system, suggesting wide
applications.
|
This paper presents a fundamental analysis connecting phase noise and
long-term frequency accuracy of oscillators and explores the possibilities and
limitations in crystal-less frequency calibration for wireless edge nodes from
a noise-impact perspective. N-period-average jitter (NPAJ) is introduced as a
link between the spectral characterization of phase noise and long-term
frequency accuracy. It is found that flicker noise, or other colored noise profiles coming from the reference in a frequency synthesizer, is the dominant noise source affecting long-term frequency accuracy. An average processing unit
embedded in an ADPLL is proposed based on the N-period-average jitter concept
to enhance frequency accuracy in a Calibrate and Open-loop scenario commonly
used in low power radios. With this low-cost block, the frequency calibration
accuracy can be directly associated with the reference noise performance. Thus,
the feasibility of XO-less design with certain communication standards can be
easily evaluated with the proposed theory.
|
Using a differential equation approach, asymptotic expansions are rigorously
obtained for Lommel, Weber, Anger-Weber and Struve functions, as well as
Neumann polynomials, each of which is a solution of an inhomogeneous Bessel
equation. The approximations involve Airy and Scorer functions, and are
uniformly valid for large real order $\nu$ and unbounded complex argument $z$.
An interesting complication is the identification of the Lommel functions with
the new asymptotic solutions, and in order to do so it is necessary to consider
certain sectors of the complex plane, as well as introduce new forms of Lommel
and Struve functions.
|
Context: Expert judgement is a common method for software effort estimations
in practice today. Estimators are often shown extra obsolete requirements
together with the real ones to be implemented. Only one previous study has been conducted on whether such practices bias the estimations. Objective: We conducted
six experiments with both students and practitioners to study, and quantify,
the effects of obsolete requirements on software estimation. Method: By conducting a family of six experiments using both students and practitioners as research subjects (N = 461), and by using a Bayesian Data Analysis approach, we investigated different aspects of this effect. We also argue for, and show an example of, how a Bayesian approach lets us be more confident in our results and enables further studies with small sample sizes. Results: We found
that the presence of obsolete requirements triggered an overestimation in
effort across all experiments. The effect, however, was smaller in a field
setting compared to using students as subjects. Still, the over-estimations
triggered by the obsolete requirements were systematically around twice the
percentage of the included obsolete ones, but with a large 95% credible
interval. Conclusions: The results have implications for both research and
practice in that the found systematic error should be accounted for in both
studies on software estimation and, maybe more importantly, in estimation
practices to avoid over-estimation due to this systematic error. We partly explain this error as stemming from the cognitive bias of anchoring-and-adjustment, i.e. the obsolete requirements anchored the estimates to a larger piece of software. However, further studies are needed in order to accurately predict
this effect.
|
In this review I will discuss the comparison between model results and
observational data for the Milky Way, the predictive power of such models as
well as their limits. Such a comparison, known as Galactic archaeology, allows
us to impose constraints on stellar nucleosynthesis and timescales of formation
of the various Galactic components (halo, bulge, thick disk and thin disk).
|
We focus on studying the opacity of iron, chromium, and nickel plasmas at
conditions relevant to experiments carried out at Sandia National Laboratories
[J. E. Bailey et al., Nature 517, 56 (2015)]. We calculate the photo-absorption
cross-sections and subsequent opacity for plasmas using linear response
time-dependent density functional theory (TD-DFT). Our results indicate that
the physics of channel mixing accounted for in linear response TD-DFT leads to
an increase in the opacity in the bound-free quasi-continuum, where the Sandia
experiments indicate that models under-predict iron opacity. However, the
increase seen in our calculations is only in the range of 5-10%. Further, we do
not see any change in this trend for chromium and nickel. This behavior
indicates that channel mixing effects do not explain the trends in opacity
observed in the Sandia experiments.
|
In this note, we prove some results related to small perturbations of a frame
for a Hilbert space $\mathcal{H}$ in order to have a woven pair for
$\mathcal{H}$. Our results complete those known in the literature. In addition
we study a necessary condition for a woven pair, that resembles a
characterization for Riesz frames.
|
Solving the Multi-Agent Path Finding (MAPF) problem optimally is known to be
NP-Hard for both make-span and total arrival time minimization. While many
algorithms have been developed to solve MAPF problems, there is no dominating
optimal MAPF algorithm that works well in all types of problems and no standard
guidelines for when to use which algorithm. In this work, we develop the deep
convolutional network MAPFAST (Multi-Agent Path Finding Algorithm SelecTor),
which takes a MAPF problem instance and attempts to select the fastest
algorithm to use from a portfolio of algorithms. We improve the performance of
our model by including single-agent shortest paths in the instance embedding
given to our model and by utilizing supplemental loss functions in addition to
a classification loss. We evaluate our model on a large and diverse dataset of
MAPF instances, showing that it outperforms all individual algorithms in its
portfolio as well as the state-of-the-art optimal MAPF algorithm selector. We
also provide an analysis of algorithm behavior in our dataset to gain a deeper
understanding of optimal MAPF algorithms' strengths and weaknesses to help
other researchers leverage different heuristics in algorithm designs.
|
We studied the accretion disc structure in the doubly imaged lensed quasar
SDSS J1339+1310 using $r$-band light curves and UV-visible to near-IR (NIR)
spectra from the first 11 observational seasons after its discovery. The
2009$-$2019 light curves displayed pronounced microlensing variations on
different timescales, and this microlensing signal permitted us to constrain
the half-light radius of the 1930 \r{A} continuum-emitting region. Assuming an
accretion disc with an axis inclined at 60 deg to the line of sight, we
obtained log$_{10}$($r_{1/2}$/cm) = 15.4$^{+0.3}_{-0.4}$. We also estimated the
central black hole mass from spectroscopic data. The widths of the C IV, Mg II, and H$\beta$ emission lines, and the continuum luminosities at 1350, 3000, and
5100 \r{A}, led to log$_{10}$($M_{BH}$/M$_{\odot}$) = 8.6 $\pm$ 0.4. Thus, hot
gas responsible for the 1930 \r{A} continuum emission is likely orbiting a 4.0
$\times$ 10$^8$ M$_{\odot}$ black hole at an $r_{1/2}$ of only a few tens of
Schwarzschild radii.
|
Let pi = pi_1 pi_2 ... pi_n be a permutation in the symmetric group S_n
written in one-line notation. The pinnacle set of pi, denoted Pin pi, is the
set of all pi_i such that pi_{i-1} < pi_i > pi_{i+1}. This is an analogue of
the well-studied peak set of pi where one considers values rather than
positions. The pinnacle set was introduced by Davis, Nelson, Petersen, and
Tenner who showed that it has many interesting properties. In particular, they
proved that the number of subsets of [n] = {1, 2, ..., n} which can be the
pinnacle set of some permutation is a binomial coefficient. Their proof used a bijection with lattice paths and was somewhat involved. We give a simpler demonstration of this result which does not need lattice paths.
Moreover, we show that our map and theirs are different descriptions of the
same function. Davis et al. also studied the number of pinnacle sets with
maximum m and cardinality d which they denoted by p(m,d). We show that these
integers are ballot numbers and give two proofs of this fact: one using finite
differences and one bijective. Diaz-Lopez, Harris, Huang, Insko, and Nilsen
found a summation formula for calculating the number of permutations in S_n
having a given pinnacle set. We derive a new expression for this number which
is faster to calculate in many cases. We also show how this method can be
adapted to find the number of orderings of a pinnacle set which can be realized
by some pi in S_n.
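For concreteness, a small Python sketch of the definition (ours); for example, for pi = 2 1 3 5 4 the pinnacle set is {5}:

```python
def pinnacle_set(pi):
    """Pinnacle set of a permutation given in one-line notation.

    A value pi[i] is a pinnacle if pi[i-1] < pi[i] > pi[i+1]; unlike the
    peak set, we record the value rather than the position.
    """
    return {pi[i] for i in range(1, len(pi) - 1)
            if pi[i - 1] < pi[i] > pi[i + 1]}

assert pinnacle_set([2, 1, 3, 5, 4]) == {5}
```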
|
This article discusses the effects of the spiral-arm corotation on the
stellar dynamics in the Solar Neighborhood (SN). All our results presented here
rely on: 1) observational evidence that the Sun lies near the corotation
circle, where stars rotate with the same angular velocity as the spiral-arm
pattern; the corotation circle establishes domains of the corotation resonance
(CR) in the Galactic disk; 2) dynamical constraints that put the spiral-arm
potential as the dominant perturbation in the SN, compared with the effects of the central bar in the SN; 3) the long-lived nature of the spiral structure, promoting a state of dynamical relaxation and phase-mixing of the stellar orbits
in response to the spiral perturbation. With an analytical model for the
Galactic potential, composed of an axisymmetric background deduced from the
observed rotation curve, and perturbed by a four-armed spiral pattern,
numerical simulations of stellar orbits are performed to delineate the domains
of regular and chaotic motions shaped by the resonances. Such studies show that
stars can be trapped inside the stable zones of the spiral CR, and this orbital
trapping mechanism could explain the dynamical origin of the Local arm of the
Milky Way (MW). The spiral CR and the near high-order epicyclic resonances
influence the velocity distribution in the SN, creating the observable
structures such as moving groups and their radially extended counterpart known
as diagonal ridges. The Sun and most of the SN stars evolve inside a stable
zone of the spiral CR, never crossing the main spiral-arm structure, but
oscillating in the region between the Sagittarius-Carina and Perseus arms. This
orbital behavior of the Sun brings insights to our understanding of questions
concerning the solar system evolution, the Earth environment changes, and the
preservation of life on Earth.
|
Context. The tropospheric wind pattern in Jupiter consists of alternating
prograde and retrograde zonal jets with typical velocities of up to 100 m/s
around the equator. At much higher altitudes, in the ionosphere, strong auroral
jets have been discovered with velocities of 1-2 km/s. There is no such direct
measurement in the stratosphere of the planet. Aims. In this paper, we bridge
the altitude gap between these measurements by directly measuring the wind
speeds in Jupiter's stratosphere. Methods. We use the Atacama Large
Millimeter/submillimeter Array's very high spectral and angular resolution
imaging of the stratosphere of Jupiter to retrieve the wind speeds as a
function of latitude by fitting the Doppler shifts induced by the winds on the
spectral lines. Results. We detect for the first time equatorial zonal jets
that reside at 1 mbar, i.e. above the altitudes where Jupiter's
Quasi-Quadrennial Oscillation occurs. Most noticeably, we find 300-400 m/s
non-zonal winds at 0.1 mbar over the polar regions underneath the main auroral
ovals. They are in counter-rotation and lie several hundreds of kilometers
below the ionospheric auroral winds. We suspect them to be the lower tail of
the ionospheric auroral winds. Conclusions. We detect directly and for the
first time strong winds in Jupiter's stratosphere. They are zonal at low-to-mid
latitudes and non-zonal at polar latitudes. The wind system found at polar
latitudes may help increase the efficiency of chemical complexification by
confining the photochemical products in a region of large energetic electron
precipitation.
|
Developers of computer vision algorithms outsource some of the labor involved
in annotating training data through business process outsourcing companies and
crowdsourcing platforms. Many data annotators are situated in the Global South
and are considered independent contractors. This paper focuses on the
experiences of Argentinian and Venezuelan annotation workers. Through
qualitative methods, we explore the discourses encoded in the task instructions
that these workers follow to annotate computer vision datasets. Our preliminary
findings indicate that annotation instructions reflect worldviews imposed on
workers and, through their labor, on datasets. Moreover, we observe that
for-profit goals drive task instructions and that managers and algorithms make
sure annotations are done according to requesters' commands. This configuration
presents a form of commodified labor that perpetuates power asymmetries while
reinforcing social inequalities and is compelled to reproduce them into
datasets and, subsequently, in computer vision systems.
|
The increasing concerns about data privacy and security drive an emerging
field of studying privacy-preserving machine learning from isolated data
sources, i.e., federated learning. A class of federated learning, vertical
federated learning, where different parties hold different features for common
users, has a great potential of driving a great variety of business cooperation
among enterprises in many fields. In machine learning, decision tree ensembles
such as gradient boosting decision trees (GBDT) and random forest are widely
applied powerful models with high interpretability and modeling efficiency.
However, state-of-the-art vertical federated learning frameworks adopt anonymous features to avoid possible data breaches, which compromises the interpretability of the model. To address this issue in the inference process, in this paper we first analyze why the meanings of features need to be disclosed to the Guest Party in vertical federated learning. We then find that the prediction result of a tree can be expressed as the intersection of
results of sub-models of the tree held by all parties. With this key
observation, we protect data privacy and allow the disclosure of feature meanings by concealing decision paths, and we adopt a communication-efficient secure computation method for inference outputs. The advantages of Fed-EINI will be
demonstrated through both theoretical analysis and extensive numerical results.
We improve the interpretability of the model by disclosing the meaning of
features while ensuring efficiency and accuracy.
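A toy Python sketch of this key observation (our illustration, not the Fed-EINI protocol): each party, seeing only the split nodes on its own features, reports the set of leaves its features still permit, and the tree's prediction is the unique leaf in the intersection.

```python
def predict_by_intersection(candidate_leaves_per_party):
    """Tree inference as an intersection of per-party candidate leaf sets.

    Each party evaluates only the split nodes on its own features and
    reports the set of leaves still reachable; the true leaf is the unique
    element of the intersection, so no party reveals its decision path.
    """
    leaves = set.intersection(*candidate_leaves_per_party)
    assert len(leaves) == 1, "a well-formed tree yields exactly one leaf"
    return leaves.pop()

# Party A's features rule out leaves {3, 4}; party B's rule out leaf {1}.
print(predict_by_intersection([{1, 2}, {2, 3, 4}]))  # -> 2
```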
|
The detection and characterization of young planetary systems offers a direct
path to study the processes that shape planet evolution. We report on the
discovery of a sub-Neptune-size planet orbiting the young star HD 110082
(TOI-1098). Transit events we initially detected during TESS Cycle 1 are
validated with time-series photometry from Spitzer. High-contrast imaging and
high-resolution, optical spectra are also obtained to characterize the stellar
host and confirm the planetary nature of the transits. The host star is a late
F dwarf (M=1.2 Msun) with a low-mass, M dwarf binary companion (M=0.26 Msun)
separated by nearly one arcminute (~6200 AU). Based on its rapid rotation and
lithium absorption, HD 110082 is young, but is not a member of any known group
of young stars (despite proximity to the Octans association). To measure the
age of the system, we search for coeval, phase-space neighbors and compile a
sample of candidate siblings to compare with the empirical sequences of young
clusters and to apply quantitative age-dating techniques. In doing so, we find
that HD 110082 resides in a new young stellar association we designate
MELANGE-1, with an age of 250(+50/-70) Myr. Jointly modeling the TESS and
Spitzer light curves, we measure a planetary orbital period of 10.1827 days and
radius of Rp = 3.2(+/-0.1) Earth radii. HD 110082 b's radius falls in the
largest 12% of field-age systems with similar host star mass and orbital
period. This finding supports previous studies indicating that young planets
have larger radii than their field-age counterparts.
|
Infinitesimal symmetries of a classical mechanical system are usually
described by a Lie algebra acting on the phase space, preserving the Poisson
brackets. We propose that a quantum analogue is the action of a Lie bi-algebra
on the associative $*$-algebra of observables. The latter can be thought of as
functions on some underlying non-commutative manifold. We illustrate this for
the non-commutative torus $\mathbb{T}^2_\theta$. The canonical trace defines a
Manin triple from which a Lie bi-algebra can be constructed. In the special
case of rational $\theta=\frac{M}{N}$ this Lie bi-algebra is
$\underline{GL}(N)=\underline{U}(N)\oplus \underline{B}(N)$, corresponding to
unitary and upper triangular matrices. The Lie bi-algebra has a remnant in the
classical limit $N\to\infty$: the elements of $\underline{U}(N)$ tend to real
functions while $\underline{B}(N)$ tends to a space of complex analytic
functions.
|
We study the problem of determining the best intervention in a Causal
Bayesian Network (CBN) specified only by its causal graph. We model this as a
stochastic multi-armed bandit (MAB) problem with side-information, where the
interventions correspond to the arms of the bandit instance. First, we propose
a simple regret minimization algorithm that takes as input a semi-Markovian
causal graph with atomic interventions and possibly unobservable variables, and
achieves $\tilde{O}(\sqrt{M/T})$ expected simple regret, where $M$ is dependent
on the input CBN and could be very small compared to the number of arms. We
also show that this is almost optimal for CBNs described by causal graphs
having an $n$-ary tree structure. Our simple regret minimization results, both
upper and lower bounds, subsume previous results in the literature, which
assumed additional structural restrictions on the input causal graph. In
particular, our results indicate that the simple regret guarantee of our
proposed algorithm can only be improved by considering more nuanced structural
restrictions on the causal graph. Next, we propose a cumulative regret
minimization algorithm that takes as input a general causal graph with all
observable nodes and atomic interventions and performs better than the optimal
MAB algorithm that does not take causal side-information into account. We also
experimentally compare both our algorithms with the best known algorithms in
the literature. To the best of our knowledge, this work gives the first simple
and cumulative regret minimization algorithms for CBNs with general causal
graphs under atomic interventions and having unobserved confounders.
|
In its standard formulation, quantum backflow is a classically impossible
phenomenon in which a free quantum particle in a positive-momentum state
exhibits a negative probability current. Recently, Miller et al. [Quantum 5,
379 (2021)] have put forward a new, "experiment-friendly" formulation of
quantum backflow that aims at extending the notion of quantum backflow to
situations in which the particle's state may have both positive and negative
momenta. Here, we investigate how the experiment-friendly formulation of
quantum backflow compares to the standard one when applied to a free particle
in a positive-momentum state. We show that the two formulations are not always
compatible. We further identify a parametric regime in which the two
formulations appear to be in qualitative agreement with one another.
|
This paper studies the generation and transmission expansion co-optimization
problem with a high wind power penetration rate in large-scale power grids. In
this paper, generation and transmission expansion co-optimization is modeled as
a mixed-integer programming (MIP) problem. A scenario creation method is
proposed to capture the variation and correlation of both load and wind power
across regions for large-scale power grids. Obtained scenarios that represent
load and wind uncertainties can be easily introduced into the MIP problem and
then solved to obtain the co-optimized generation and transmission expansion
plan. Simulation results show that the proposed planning model and the scenario
creation method can improve the expansion result significantly through modeling
more detailed information of wind and load variation among regions in the US EI
system. The improved expansion plan that combines generation and transmission
will aid system planners and policy makers to maximize the social welfare in
large-scale power grids.
|
We provide a systematic method to deduce the global form of flavor symmetry
groups in 4d N=2 theories obtained by compactifying 6d N=(2,0) superconformal
field theories (SCFTs) on a Riemann surface carrying regular punctures and
possibly outer-automorphism twist lines. A priori, this method only determines
the group associated to the manifest part of the flavor symmetry algebra, but
often this information is enough to determine the group associated to the full
enhanced flavor symmetry algebra. Such cases include some interesting and
well-studied 4d N=2 SCFTs like the Minahan-Nemeschansky theories. The symmetry
groups obtained via this method match with the symmetry groups obtained using a
Lagrangian description if such a description arises in some duality frame.
Moreover, we check that the proposed symmetry groups are consistent with the
superconformal indices available in the literature. As another application, our
method finds distinct global forms of flavor symmetry group for pairs of
interacting 4d N=2 SCFTs (recently pointed out in the literature) whose Coulomb
branch dimensions, flavor algebras and levels coincide (along with other
invariants), but nonetheless are distinct SCFTs.
|
Two years ago, we alerted the scientific community to the large number of
bad papers in the literature on {\it zero difference balanced functions}, where
direct proofs of seemingly new results are presented in an unnecessarily
lengthy and convoluted way. Indeed, these results had been proved long before
and very easily in terms of difference families.
In spite of our report, papers of the same kind continue to proliferate.
Regrettably, a further attempt to put the topic in order seems unavoidable.
While some authors now follow our recommendation of using the terminology of
{\it partitioned difference families}, their methods are still the same and
their results are often trivial or even wrong. In this note, we show how a very
recent paper of this type can be easily dealt with.
|
A neighborhood restricted Mixed Gibbs Sampling (MGS) based approach is
proposed for low-complexity high-order modulation large-scale Multiple-Input
Multiple-Output (LS-MIMO) detection. The proposed LS-MIMO detector applies a
neighborhood limitation (NL) on the noisy solution from the MGS at a distance d
- thus, named d-simplified MGS (d-sMGS) - in order to mitigate its impact,
which can be harmful when a high order modulation is considered. Numerical
simulation results considering 64-QAM demonstrated that the proposed detection
method can substantially improve the MGS algorithm convergence, whereas no
extra computational complexity per iteration is required. The proposed
d-sMGS-based detector suitable for high-order modulation LS-MIMO further
exhibits an improved performance vs. complexity tradeoff when the system loading is high, i.e., when K >= 0.75N. Also, with an increasing number of dimensions, i.e., an increasing number of antennas and/or modulation order, the smaller restriction of 2-sMGS was shown to be a more interesting choice than 1-sMGS.
|
Darwinian evolution tends to produce energy-efficient outcomes. On the other
hand, energy limits computation, be it neural and probabilistic or digital and
logical. Taking a particular energy-efficient viewpoint, we define neural
computation and make use of an energy-constrained, computational function. This
function can be optimized over a variable that is proportional to the number of
synapses per neuron. This function also implies a specific distinction between
ATP-consuming processes, especially computation \textit{per se} vs the
communication processes including action potentials and transmitter release.
Thus to apply this mathematical function requires an energy audit with a
partitioning of energy consumption that differs from earlier work. The audit
points out that, rather than the oft-quoted 20 watts of glucose available to
the brain \cite{sokoloff1960metabolism,sawada2013synapse}, the fraction
partitioned to cortical computation is only 0.1 watts of ATP. On the other hand
at 3.5 watts, long-distance communication costs are 35-fold greater. Other
novel quantifications include (i) a finding that the biological vs ideal values
of neural computational efficiency differ by a factor of $10^8$ and (ii) two
predictions of $N$, the number of synaptic transmissions needed to fire a
neuron (2500 vs 2000).
|
Every physical system is characterized by its action. The standard measure of
integration is the square root of a minus the determinant of the metric. It is
chosen on the basis of a single requirement that it must be a density under
diffeomorphic transformations. Therefore, it may not be a unique choice. In
this thesis, we develop the two-measure and the Galileon measure string and
superstring actions, apply one of them to the string model of hadrons and
present the modified measure extension to higher dimensional extended objects.
|
Traditional image classification techniques often produce unsatisfactory
results when applied to high spatial resolution data because classes in high
resolution images are not spectrally homogeneous. Texture offers an alternative
source of information for classifying these images. This paper evaluates a
recently developed, computationally simple texture metric called Weber Local
Descriptor (WLD) for use in classifying high resolution QuickBird panchromatic
data. We compared WLD with state-of-the-art texture descriptors (TDs), including
Local Binary Pattern (LBP) and its rotation-invariant version LBPRIU. We also
investigated whether incorporating VAR, a TD that captures brightness
variation, would improve the accuracy of LBPRIU and WLD. We found that WLD
generally produces more accurate classification results than the other TDs we
examined, and is also more robust to varying parameters. We have implemented an
optimised algorithm for calculating WLD which makes the technique practical in
terms of computation time. Overall, our results indicate that WLD is a
promising approach for classifying high resolution remote sensing data.
|
Image completion has made tremendous progress with convolutional neural
networks (CNNs), because of their powerful texture modeling capacity. However,
due to some inherent properties (e.g., local inductive prior, spatial-invariant
kernels), CNNs do not perform well in understanding global structures or
naturally support pluralistic completion. Recently, transformers demonstrate
their power in modeling the long-term relationship and generating diverse
results, but their computation complexity is quadratic to input length, thus
hampering the application in processing high-resolution images. This paper
brings the best of both worlds to pluralistic image completion: appearance
prior reconstruction with transformer and texture replenishment with CNN. The
former transformer recovers pluralistic coherent structures together with some
coarse textures, while the latter CNN enhances the local texture details of
coarse priors guided by the high-resolution masked images. The proposed method
vastly outperforms state-of-the-art methods in terms of three aspects: 1) large
performance boost on image fidelity even compared to deterministic completion
methods; 2) better diversity and higher fidelity for pluralistic completion; 3)
exceptional generalization ability on large masks and generic dataset, like
ImageNet.
|
Recently, anchor-based methods have achieved great progress in face
detection. Once the anchor design and anchor matching strategy are determined, plenty of positive anchors will be sampled. However, faces with extreme aspect ratios always fail to be sampled by the standard anchor matching strategy. In fact, the max IoUs between anchors and extreme-aspect-ratio faces are still lower than the fixed sampling threshold. In this paper, we first explore, in theory, the factors that affect the max IoU of each face. Then, anchor matching
simulation is performed to evaluate the sampling range of face aspect ratio.
Besides, we propose a Wide Aspect Ratio Matching (WARM) strategy to collect
more representative positive anchors from ground-truth faces across a wide
range of aspect ratio. Finally, we present a novel feature enhancement module,
named Receptive Field Diversity (RFD) module, to provide diverse receptive
field corresponding to different aspect ratios. Extensive experiments show that
our method can help detectors better capture extreme aspect ratio faces and
achieve promising detection performance on challenging face detection
benchmarks, including WIDER FACE and FDDB datasets.
|
Spike-based neuromorphic hardware holds the promise to provide more energy
efficient implementations of Deep Neural Networks (DNNs) than standard hardware
such as GPUs. But this requires understanding how DNNs can be emulated in an event-based sparse firing regime, since otherwise the energy advantage is lost. In particular, DNNs that solve sequence processing tasks typically employ
Long Short-Term Memory (LSTM) units that are hard to emulate with few spikes.
We show that a facet of many biological neurons, slow after-hyperpolarizing
(AHP) currents after each spike, provides an efficient solution. AHP-currents
can easily be implemented in neuromorphic hardware that supports
multi-compartment neuron models, such as Intel's Loihi chip. Filter
approximation theory explains why AHP-neurons can emulate the function of LSTM
units. This yields a highly energy-efficient approach to time series
classification. Furthermore it provides the basis for implementing with very
sparse firing an important class of large DNNs that extract relations between
words and sentences in a text in order to answer questions about the text.
|
Let ${\cal G}$ be a minor-closed graph class. We say that a graph $G$ is a
$k$-apex of ${\cal G}$ if $G$ contains a set $S$ of at most $k$ vertices such
that $G\setminus S$ belongs to ${\cal G}.$ We denote by ${\cal A}_k ({\cal G})$
the set of all graphs that are $k$-apices of ${\cal G}.$ We prove that every
graph in the obstruction set of ${\cal A}_k ({\cal G}),$ i.e., the
minor-minimal set of graphs not belonging to ${\cal A}_k ({\cal G}),$ has size
at most $2^{2^{2^{2^{{\sf poly}(k)}}}},$ where ${\sf poly}$ is a polynomial
function whose degree depends on the size of the minor-obstructions of ${\cal
G}.$ This bound drops to $2^{2^{{\sf poly}(k)}}$ when ${\cal G}$ excludes some
apex graph as a minor.
|
This paper presents an online evolving neural network-based inverse dynamics
learning controller for an autonomous vehicle's longitudinal and lateral
control under model uncertainties and disturbances. The inverse dynamics of the
vehicle are approximated using a feedback error learning mechanism that
utilizes a dynamic Radial Basis Function neural network, referred to as the
Extended Minimal Resource Allocating Network (EMRAN). EMRAN uses an extended
Kalman filter approach for learning, and a growing/pruning condition helps keep
the number of hidden neurons to a minimum. The online learning algorithm helps
in handling uncertainties, dynamic variations, and unknown disturbances on the
road. The proposed control architecture employs two coupled conventional
controllers aided by the EMRAN inverse dynamics controller: a conventional PID
controller for longitudinal cruise control and a Stanley controller for lateral
path-tracking. The performance of both the longitudinal and lateral controllers
is compared with existing control methods, and the simulation results clearly
indicate that the proposed control scheme handles disturbances and parametric
uncertainties better and also provides better tracking performance in
autonomous vehicles.
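
A minimal sketch of the feedback-error-learning idea (illustrative only;
EMRAN's actual growing/pruning criteria and extended Kalman filter update are
not reproduced here): an RBF network learns the inverse dynamics online, using
the conventional controller's output as its error signal:

```python
import numpy as np

class OnlineRBF:
    """Tiny radial-basis-function network trained by gradient descent.

    A stand-in for EMRAN: real EMRAN grows/prunes hidden neurons and uses
    an extended Kalman filter instead of plain gradient steps.
    """
    def __init__(self, centers, width=1.0, lr=0.05):
        self.c = centers                      # (n_neurons, n_inputs)
        self.w = np.zeros(len(centers))       # output weights
        self.width, self.lr = width, lr

    def phi(self, x):
        return np.exp(-np.sum((self.c - x) ** 2, axis=1) / (2 * self.width ** 2))

    def __call__(self, x):
        return self.w @ self.phi(x)

    def update(self, x, feedback_error):
        # Feedback error learning: the conventional controller's output
        # serves as the training signal for the inverse-dynamics network.
        self.w += self.lr * feedback_error * self.phi(x)

rbf = OnlineRBF(centers=np.random.default_rng(1).uniform(-1, 1, (20, 2)))
state = np.array([0.3, -0.1])                 # e.g. [speed error, acceleration]
u_pid = 0.8                                   # output of the PID loop this step
u_total = u_pid + rbf(state)                  # the network gradually takes over
rbf.update(state, u_pid)
```

As the network's prediction improves, the feedback controller's share of the
control signal shrinks, which is the hallmark of feedback error learning.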
|
Quantum resources, such as entanglement, steering, and Bell nonlocality, are
evaluated for three coupled qubits in the steady-state configuration. We employ
the phenomenological master equation and the microscopic master equation to
probe such quantum resources, which provide very different results depending on
the system configuration. In particular, steering and Bell nonlocality are null
within the phenomenological model, while they reach considerable values within
the microscopic model. These results show that the phenomenological approach is
not able to capture all quantum resources of the system. We also provide an
analytical expression for the steady-state and quantum resources of the system
composed of three coupled qubits in the zero temperature limit. Such results
demonstrate that quantum resources between two qubits are strongly affected by
the third qubit in a nontrivial way.
|
As an instance-level recognition problem, re-identification (re-ID) requires
models to capture diverse features. However, with continuous training, re-ID
models pay more and more attention to salient areas. As a result, a model may
focus on only a few small regions with salient representations and ignore other
important information. This phenomenon leads to inferior performance,
especially when models are evaluated on data with small inter-identity
variation. In this paper, we propose a novel network, Erasing-Salient Net
(ES-Net), to learn comprehensive features by erasing the salient areas in an
image. ES-Net introduces a novel method to locate the salient areas by the
confidence of objects and erases them efficiently in a training batch.
Meanwhile, to mitigate the over-erasing problem, we use a trainable pooling
layer, P-pooling, that generalizes global max and global average pooling.
Experiments are conducted on two specific re-identification tasks (i.e., person
re-ID and vehicle re-ID). Our ES-Net outperforms state-of-the-art methods on
three person re-ID benchmarks and two vehicle re-ID benchmarks. Specifically,
mAP / Rank-1 rates: 88.6% / 95.7% on Market1501, 78.8% / 89.2% on
DukeMTMC-reID, 57.3% / 80.9% on MSMT17, and 81.9% / 97.0% on Veri-776,
respectively. Rank-1 / Rank-5 rates: 83.6% / 96.9% on VehicleID (Small), 79.9%
/ 93.5% on VehicleID (Medium), and 76.9% / 90.7% on VehicleID (Large),
respectively. Moreover, the visualized salient areas show human-interpretable
visual explanations for the ranking results.
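
The abstract describes P-pooling as interpolating between global max and
global average pooling; a common way to realize this (a sketch under that
assumption, in the style of generalized-mean pooling, not necessarily the
authors' exact formulation) is a learnable power-mean:

```python
import torch
import torch.nn as nn

class PPooling(nn.Module):
    """Generalized-mean pooling: (mean x^p)^(1/p) over spatial dims.

    p = 1 recovers global average pooling; p -> infinity approaches
    global max pooling. p is learned jointly with the network.
    """
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x):                       # x: (N, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)
        return x.mean(dim=(-2, -1)).pow(1.0 / self.p)

feat = torch.rand(2, 512, 16, 8)                # a re-ID backbone feature map
print(PPooling()(feat).shape)                   # torch.Size([2, 512])
```

A soft pooling of this kind is why over-erasing is less damaging: unlike a
hard max, every spatial location contributes to the pooled descriptor.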
|
In this work, we give a class of examples of hyperbolic potentials (including
the null one) for continuous non-uniformly expanding maps. This implies the
existence and uniqueness of an equilibrium state (in particular, of a maximal
entropy measure). Among the maps considered is the important class known as
Viana maps.
|
Projective two-weight linear codes are closely related to finite projective
spaces and strongly regular graphs. In this paper, a family of $q$-ary
projective two-weight linear codes is presented, where $q$ is a power of 2. The
parameters of both the codes and their duals are excellent. As applications,
the codes are used to derive strongly regular graphs with new parameters and
secret sharing schemes with interesting access structures.
|
We demonstrate unprecedented accuracy for rapid gravitational-wave parameter
estimation with deep learning. Using neural networks as surrogates for Bayesian
posterior distributions, we analyze eight gravitational-wave events from the
first LIGO-Virgo Gravitational-Wave Transient Catalog and find very close
quantitative agreement with standard inference codes, but with inference times
reduced from O(day) to a minute per event. Our networks are trained using
simulated data, including an estimate of the detector-noise characteristics
near the event. This encodes the signal and noise models within millions of
neural-network parameters, and enables inference for any observed data
consistent with the training distribution, accounting for noise nonstationarity
from event to event. Our algorithm -- called "DINGO" -- sets a new standard in
fast-and-accurate inference of physical parameters of detected
gravitational-wave events, which should enable real-time data analysis without
sacrificing accuracy.
|
In the article [PT] a general procedure to study solutions of the equation
$x^4-dy^2=z^p$ was presented for negative values of $d$. The purpose of the
present article is to extend our previous results to positive values of $d$. In
doing so, we give a description of the extension ${\mathbb
Q}(\sqrt{d},\sqrt{\epsilon})/{\mathbb Q}(\sqrt{d})$ (where $\epsilon$ is a
fundamental unit) needed to prove the existence of a Hecke character over
${\mathbb Q}(\sqrt{d})$ with fixed local conditions. We also extend some "large
image" results regarding images of Galois representations coming from ${\mathbb
Q}$-curves (due to Ellenberg in \cite{MR2075481}) from imaginary to real
quadratic fields.
|
In today's digital society, the Tor network has become an indispensable tool
for individuals to protect their privacy on the Internet. Operated by
volunteers, relay servers constitute the core component of Tor and are used to
geographically escape surveillance. It is therefore essential to have a large,
yet diverse set of relays. In this work, we analyze the contribution of
educational institutions to the Tor network and report on our experience of
operating exit relays at a university. Taking Germany as an example (but
arguing that the global situation is similar), we carry out a quantitative
study and find that universities contribute negligible amounts of relays and
bandwidth. Since many universities all over the world have excellent conditions
that render them perfect places to host Tor (exit) relays, we encourage other
interested people and institutions to join. To this end, we discuss and resolve
common concerns and provide lessons learned.
|
Given a pair of graphs $\textbf{A}$ and $\textbf{B}$, the problems of
deciding whether there exists either a homomorphism or an isomorphism from
$\textbf{A}$ to $\textbf{B}$ have received a lot of attention. While graph
homomorphism is known to be NP-complete, the complexity of the graph
isomorphism problem is not fully understood. A well-known combinatorial
heuristic for graph isomorphism is the Weisfeiler-Leman test together with its
higher order variants. On the other hand, both problems can be reformulated as
integer programs and various LP methods can be applied to obtain high-quality
relaxations that can still be solved efficiently. We study so-called fractional
relaxations of these programs in the more general context where $\textbf{A}$
and $\textbf{B}$ are not graphs but arbitrary relational structures. We give a
combinatorial characterization of the Sherali-Adams hierarchy applied to the
homomorphism problem in terms of fractional isomorphism. Collaterally, we also
extend a number of known results from graph theory to give a characterization
of the notion of fractional isomorphism for relational structures in terms of
the Weisfeiler-Leman test, equitable partitions, and counting homomorphisms
from trees. As a result, we obtain a description of the families of CSPs that
are closed under Weisfeiler-Leman invariance in terms of their polymorphisms as
well as decidability by the first level of the Sherali-Adams hierarchy.
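
For readers unfamiliar with the combinatorial heuristic mentioned above, here
is a minimal sketch of the 1-dimensional Weisfeiler-Leman test, i.e. color
refinement, for plain graphs (illustrative; the paper works with higher-order
variants and general relational structures):

```python
from itertools import count

def weisfeiler_leman_colors(adj, rounds=None):
    """1-WL color refinement. adj: dict mapping vertex -> set of neighbors.

    Returns the stable coloring; two graphs with different multisets of
    stable colors are certainly non-isomorphic (the converse can fail).
    """
    colors = {v: 0 for v in adj}
    for _ in range(rounds or len(adj)):
        # New color = canonical id of (own color, multiset of neighbor colors).
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        relabel, fresh, new_colors = {}, count(), {}
        for v, sig in signatures.items():
            if sig not in relabel:
                relabel[sig] = next(fresh)
            new_colors[v] = relabel[sig]
        if new_colors == colors:        # refinement has stabilized
            break
        colors = new_colors
    return colors

path = {0: {1}, 1: {0, 2}, 2: {1}}      # a 3-vertex path
print(weisfeiler_leman_colors(path))     # endpoints share a color, middle differs
```

The stable partition computed here is exactly the equitable partition that the
abstract relates to fractional isomorphism.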
|
When considered as orthogonal bases in distinct vector spaces, the unit
vectors of polarization directions and the Laguerre-Gaussian modes of
polarization amplitude are inseparable, constituting a so-called classically
entangled light beam. We apply this classical entanglement to demonstrate
theoretically the execution of Shor's factoring algorithm on a classical light
beam. The demonstration comprises light-path designs for the key algorithmic
steps of modular exponentiation and Fourier transform on the target integer 15.
The computed multiplicative order that eventually leads to the integer factors
is identified through a four-hole diffraction interference from sources
obtained from the entangled beam profile. We show that the fringe patterns
resulting from the interference are uniquely mapped to the sought-after order,
thereby emulating the factoring process originally rooted in the quantum
regime.
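
As a reminder of the number theory being emulated (a standard worked example
for N = 15, not specific to the optical setup), the multiplicative order r of
a base a modulo 15 yields the factors via gcd(a^(r/2) ± 1, 15):

```python
from math import gcd

def multiplicative_order(a, n):
    """Smallest r >= 1 with a^r = 1 (mod n); assumes gcd(a, n) = 1."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

n, a = 15, 7
r = multiplicative_order(a, n)          # order of 7 mod 15 is 4
assert r % 2 == 0
p = gcd(a ** (r // 2) - 1, n)           # gcd(48, 15) = 3
q = gcd(a ** (r // 2) + 1, n)           # gcd(50, 15) = 5
print(r, p, q)                           # 4 3 5
```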
|
We report the discovery of TOI-1444b, a 1.4-$R_\oplus$ super-Earth on a
0.47-day orbit around a Sun-like star discovered by {\it TESS}. Precise radial
velocities from Keck/HIRES confirmed the planet and constrained the mass to be
$3.87 \pm 0.71 M_\oplus$. The RV dataset also indicates a possible
non-transiting, 16-day planet ($11.8\pm2.9M_\oplus$). We report a tentative
detection of phase curve variation and secondary eclipse of TOI-1444b in the
{\it TESS} bandpass. TOI-1444b joins the growing sample of 17
ultra-short-period planets with well-measured masses and sizes, most of which
are compatible with an Earth-like composition. We take this opportunity to
examine the expanding sample of ultra-short-period planets ($<2R_\oplus$) and
contrast them with the newly discovered sub-day ultra-hot Neptunes
($>3R_\oplus$, $>2000F_\oplus$; TOI-849 b, LTT9779 b and K2-100). We find that
1) USPs have predominantly Earth-like compositions, with inferred iron core
mass fractions of 0.32$\pm$0.04, and have masses below the threshold of runaway
accretion ($\sim 10M_\oplus$), while ultra-hot Neptunes are above the threshold
and have H/He or other volatile envelopes; 2) USPs are almost always found in
multi-planet systems, consistent with a secular interaction formation scenario,
while ultra-hot Neptunes ($P_{\rm orb} \lesssim$1 day) tend to be ``lonely'',
similar to longer-period hot Neptunes ($P_{\rm orb}\sim$1-10 days) and hot
Jupiters; 3) USPs occur around solar-metallicity stars, while hot Neptunes
prefer higher-metallicity hosts; and 4) in all these respects, the ultra-hot
Neptunes show more resemblance to hot Jupiters than to the smaller USP planets,
although ultra-hot Neptunes are rarer than both USPs and hot Jupiters by 1-2
orders of magnitude.
|
A wave function exposed to measurements undergoes pure state dynamics, with
deterministic unitary and probabilistic measurement induced state updates,
defining a quantum trajectory. For many-particle systems, the competition of
these different elements of dynamics can give rise to a scenario similar to
quantum phase transitions. To access it despite the randomness of single
quantum trajectories, we construct an $n$-replica Keldysh field theory for the
ensemble average of the $n$-th moment of the trajectory projector. A key
finding is that this field theory decouples into one set of degrees of freedom
that heats up indefinitely, while $n-1$ others can be cast into the form of
pure state evolutions generated by an effective non-Hermitian Hamiltonian. This
decoupling is exact for free theories, and useful for interacting ones. In
particular, we study locally measured Dirac fermions in $(1+1)$ dimensions,
which can be bosonized to a monitored interacting Luttinger liquid at long
wavelengths. For this model, the non-Hermitian Hamiltonian corresponds to a
quantum Sine-Gordon model with complex coefficients. A renormalization group
analysis reveals a gapless critical phase with logarithmic entanglement entropy
growth, and a gapped area law phase, separated by a
Berezinskii-Kosterlitz-Thouless transition. The physical picture emerging here
is a pinning of the trajectory wave function into eigenstates of the
measurement operators upon increasing the monitoring rate.
|
The rise of social media has led to an increasing number of comments on online
forums. However, there still exist invalid comments that are not informative
for users; moreover, such comments can be quite toxic and harmful to people. In
this paper, we create a dataset for constructive and toxic speech detection,
named UIT-ViCTSD (Vietnamese Constructive and Toxic Speech Detection dataset),
with 10,000 human-annotated comments. For these tasks, we propose a system for
constructive and toxic speech detection built on PhoBERT, a state-of-the-art
transfer learning model for Vietnamese NLP. With this system, we obtain
F1-scores of 78.59% and 59.40% for classifying constructive and toxic comments,
respectively. Besides, we implement various baselines, from traditional machine
learning to deep neural network-based models, to evaluate the dataset. With
these results, we can address several tasks on online discussions and develop a
framework for automatically identifying the constructiveness and toxicity of
Vietnamese social media comments.
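
A minimal fine-tuning sketch with the Hugging Face transformers library (an
assumption about tooling, since the paper does not specify its training code;
"vinai/phobert-base" is the publicly released PhoBERT checkpoint):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# PhoBERT expects word-segmented Vietnamese input in practice
# (e.g. pre-processed with VnCoreNLP).
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "vinai/phobert-base", num_labels=2)        # e.g. constructive vs. not

batch = tokenizer(["Bài viết rất hữu ích ."], return_tensors="pt",
                  padding=True, truncation=True)
labels = torch.tensor([1])

out = model(**batch, labels=labels)             # returns loss and logits
out.loss.backward()                             # an optimizer step would follow
```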
|
We extend the scattering result for the radial defocusing-focusing
mass-energy double critical nonlinear Schr\"odinger equation in $d\leq 4$ given
by Cheng et al. to the case $d\geq 5$. The main ingredient is a suitable long
time perturbation theory that is applicable for $d\geq 5$. The paper therefore
gives a full characterization of the scattering threshold for the radial
defocusing-focusing mass-energy double critical nonlinear Schr\"odinger
equation in all dimensions $d\geq 3$.
|
Accurate description of finite-temperature vibrational dynamics is
indispensable in the computation of two-dimensional electronic spectra. Such
simulations are often based on the density matrix evolution, statistical
averaging of initial vibrational states, or approximate classical or
semiclassical limits. While many practical approaches exist, they are often of
limited accuracy and difficult to interpret. Here, we use the concept of
thermo-field dynamics to derive an exact finite-temperature expression that
lends itself to an intuitive wavepacket-based interpretation. Furthermore, an
efficient method for computing finite-temperature two-dimensional spectra is
obtained by combining the exact thermo-field dynamics approach with the thawed
Gaussian approximation for the wavepacket dynamics, which is exact for any
displaced, distorted, and Duschinsky-rotated harmonic potential but also
accounts partially for anharmonicity effects in general potentials. Using this
new method, we directly relate a symmetry breaking of the two-dimensional
signal to the deviation from the conventional Brownian oscillator picture.
|
We provide sufficient conditions for the existence of periodic solutions of
the Lorentz force equation, which models the motion of a charged particle
under the action of an electromagnetic field. The basic assumptions cover
relevant models with singularities, such as Coulomb-like electric potentials
or the magnetic dipole.
|
In this work we compare the capacity and achievable rate of uncoded
faster-than-Nyquist (FTN) signalling in the frequency domain, also referred to
as spectrally efficient FDM (SEFDM). We propose a deep residual convolutional
neural network detector for SEFDM signals in additive white Gaussian noise
channels that allows approaching the Mazo limit in systems with up to 60
subcarriers. Notably, the deep detectors achieve a loss of less than 0.4-0.7 dB
for uncoded QPSK SEFDM systems of 12 to 60 subcarriers at 15% spectral
compression.
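
A sketch of what such a detector could look like (the dimensions, depth, and
the treatment of complex samples as two real channels are illustrative
assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(ch, ch, kernel_size=3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))    # residual connection

class SEFDMDetector(nn.Module):
    """Maps received samples to per-subcarrier QPSK bit logits."""
    def __init__(self, ch=64, blocks=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, ch, kernel_size=3, padding=1),    # 2 = I/Q channels
            *[ResBlock(ch) for _ in range(blocks)],
            nn.Conv1d(ch, 2, kernel_size=1))               # 2 bits per subcarrier
    def forward(self, y):                                   # y: (N, 2, n_sub)
        return self.net(y)                                  # logits: (N, 2, n_sub)

y = torch.randn(4, 2, 60)                                   # batch of received blocks
print(SEFDMDetector()(y).shape)                             # torch.Size([4, 2, 60])
```

The residual blocks let the network undo the deliberate inter-carrier
interference that spectral compression introduces, layer by layer.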
|
We propose an autoencoder-based geometric shaping that learns a constellation
robust to SNR and laser linewidth estimation errors. This constellation
maintains shaping gain in mutual information (up to 0.3 bits/symbol) with
respect to QAM over various SNR and laser linewidth values.
|
Since its development, Stokesian Dynamics has been a leading approach for the
dynamic simulation of suspensions of particles at arbitrary concentrations with
full hydrodynamic interactions. Although originally developed for the
simulation of passive particle suspensions, the Stokesian Dynamics framework is
equally well suited to the analysis and dynamic simulation of suspensions of
active particles, as we elucidate here. We show how the reciprocal theorem can
be used to formulate the exact dynamics for a suspension of arbitrary active
particles and then show how the Stokesian Dynamics method provides a rigorous
way to approximate and compute the dynamics of dense active suspensions where
many-body hydrodynamic interactions are important.
|
As the photovoltaic sector approaches 1 TW in cumulative installed capacity,
we provide an overview of the current challenges to achieve further
technological improvements. On the raw materials side, we see no fundamental
limitation to expansion in capacity of the current market technologies, even
though basic estimates predict that the PV sector will become the largest
consumer of Ag in the world after 2030. On the other hand, recent market data
on PV costs indicates that the largest cost fraction is now infrastructure and
area-related, and nearly independent of the core cell technology. Therefore,
additional value adding is likely to proceed via an increase in energy yield
metrics such as the power density and/or efficiency of the PV module. However,
current market technologies are near their fundamental detailed balance
efficiency limits. The transition to multijunction PV in tandem configurations
is regarded as the most promising path to surpass this limitation and increase
the power per unit area of PV modules. So far, each specific multijunction
concept faces particular obstacles that have prevented their upscaling, but the
field is rapidly improving. In this review work, we provide a global comparison
between the different types of multijunction concepts, including III-Vs,
Si-based tandems and the emergence of perovskite/Si devices. Coupled with
analyses of new notable developments in the field, we discuss the challenges
common to different multijunction cell architectures, and the specific
challenges of each type of device, both on a cell level and on a module
integration level. From the analysis, we conclude that several tandem concepts
are nearing the disruption level where a breakthrough into mainstream PV is
possible.
|
Human trajectory forecasting in crowds, at its core, is a sequence prediction
problem with specific challenges of capturing inter-sequence dependencies
(social interactions) and consequently predicting socially-compliant multimodal
distributions. In recent years, neural network-based methods have been shown to
outperform hand-crafted methods on distance-based metrics. However, these
data-driven methods still suffer from one crucial limitation: lack of
interpretability. To overcome this limitation, we leverage the power of
discrete choice models to learn interpretable rule-based intents, and
subsequently utilise the expressibility of neural networks to model the
scene-specific residuals. Extensive experimentation on the interaction-centric
benchmark TrajNet++ demonstrates the effectiveness of our proposed architecture
to explain its predictions without compromising the accuracy.
|
A quantum many-body system with a conserved electric charge can have a DC
resistivity that is either exactly zero (implying it supports dissipationless
current) or nonzero. Exactly zero resistivity is related to conservation laws
that prevent the current from degrading. In this paper, we carefully examine
the situations in which such a circumstance can occur. We find that exactly
zero resistivity requires either continuous translation symmetry, or an
internal symmetry that has a certain kind of "mixed anomaly" with the electric
charge. (The symmetry could be a generalized global symmetry associated with
the emergence of unbreakable loop or higher dimensional excitations.) However,
even if one of these is satisfied, we show that there is still a mechanism to
get nonzero resistivity, through critical fluctuations that drive the
susceptibility of the conserved quantity to infinity; we call this mechanism
"critical drag". Critical drag is thus a mechanism for resistivity that, unlike
conventional mechanisms, is unrelated to broken symmetries. We furthermore
argue that an emergent symmetry that has the appropriate mixed anomaly with
electric charge is in fact an inevitable consequence of compressibility in
systems with lattice translation symmetry. Critical drag therefore seems to be
the only way (other than through irrelevant perturbations breaking the emergent
symmetry, that disappear at the renormalization group fixed point) to get
nonzero resistivity in such systems. Finally, we present a very simple and
concrete model -- the "Quantum Lifshitz Model" -- that illustrates the critical
drag mechanism as well as the other considerations of the paper.
|
Voltage manipulation of skyrmions is a promising path towards low-energy
spintronic devices. Here, voltage effects on skyrmions in a GdOx/Gd/Co/Pt
heterostructure are observed experimentally. The results show that the skyrmion
density can be both enhanced and depleted by the application of an electric
field, along with the ability, at certain magnetic fields, to completely switch
the skyrmion state on and off. Further, a zero-magnetic-field skyrmion state
can be stabilized under a negative bias voltage using a defined voltage and
magnetic field sequence. The voltage effects measured here occur on a
few-second timescale, suggesting an origin in voltage-controlled magnetic
anisotropy rather than ionic effects. By investigating the skyrmion nucleation
rate as a function of temperature, we extract the energy barrier to skyrmion
nucleation in our sample. Further, micromagnetic simulations are used to
explore the effect of changing the anisotropy and Dzyaloshinskii-Moriya
interaction on skyrmion density. Our work demonstrates the control of skyrmions
by voltages, showing functionalities desirable for commercial devices.
|
We investigate the nonequilibrium dynamics of the spinless Haldane model with
nearest-neighbor interactions on the honeycomb lattice by employing an unbiased
numerical method. In this system, a first-order transition from the Chern
insulator (CI) at weak coupling to the charge-density-wave (CDW) phase at
strong coupling can be characterized by a level crossing of the lowest energy
levels. Here we show that adiabatically following the eigenstates across this
level crossing, their Chern numbers are preserved, leading to the
identification of a topologically-nontrivial low-energy excited state in the
CDW regime. By promoting a resonant energy excitation via an ultrafast
circularly polarized pump pulse, we find that the system acquires a
non-vanishing Hall response as a result of the large overlap enhancement
between the time-dependent wave-function and the topologically non-trivial
excited state. This is suggestive of a photoinduced topological phase
transition via unitary dynamics, despite a proper definition of the Chern
number remaining elusive for an out-of-equilibrium interacting system. We
contrast these results with more common quench protocols, where such features
are largely absent in the dynamics even if the post-quench Hamiltonian displays
a topologically nontrivial ground state.
|
Over the past few years, there has been heated debate and serious public
concern regarding online content moderation, censorship, and the principle of
free speech on the Web. To ease these concerns, social media platforms like Twitter
and Facebook refined their content moderation systems to support soft
moderation interventions. Soft moderation interventions refer to warning labels
attached to potentially questionable or harmful content to inform other users
about the content and its nature while the content remains accessible, hence
alleviating concerns related to censorship and free speech. In this work, we
perform one of the first empirical studies on soft moderation interventions on
Twitter. Using a mixed-methods approach, we study the users who share tweets
with warning labels on Twitter and their political leaning, the engagement that
these tweets receive, and how users interact with tweets that have warning
labels. Among other things, we find that 72% of the tweets with warning labels
are shared by Republicans, while only 11% are shared by Democrats. By analyzing
content engagement, we find that tweets with warning labels had more engagement
compared to tweets without warning labels. Also, we qualitatively analyze how
users interact with content that has warning labels, finding that the most
popular interactions are related to further debunking false claims, mocking the
author or content of the disputed tweet, and further reinforcing or resharing
false claims. Finally, we describe concrete examples of inconsistencies, such
as warning labels that are incorrectly added or warning labels that are not
added on tweets despite sharing questionable and potentially harmful
information.
|
We present a combined neutron diffraction (ND) and high-field muon spin
rotation ($\mu$SR) study of the magnetic and superconducting phases of the
high-temperature superconductor La$_{1.94}$Sr$_{0.06}$CuO$_{4+y}$ ($T_{c} =
38$~K). We observe a linear dependence of the ND signal from the modulated
antiferromagnetic order (m-AFM) on the applied field. The magnetic volume
fraction measured with $\mu$SR increases linearly from 0\% to $\sim$40\% with
applied magnetic field up to 8~T. This allows us to conclude, in contrast to
earlier field-dependent neutron diffraction studies, that the long-range m-AFM
regions are induced by an applied field, and that their ordered magnetic moment
remains constant.
|
We study the running vacuum model, in which the vacuum energy density depends
on the square of the Hubble parameter, in comparison with the $\Lambda$CDM
model. In this work, the Bayesian inference method is employed to test our
model against the standard $\Lambda$CDM model and appraise its relative
significance, using the combined data sets Pantheon+CMB+BAO and
Pantheon+CMB+BAO+Hubble data. The model parameters and the corresponding errors
are estimated from the marginal likelihood function of the model parameters.
Marginalizing over all model parameters with suitable priors, we obtain the
Bayes factor as the ratio of the Bayesian evidence of our model to that of the
$\Lambda$CDM model. The analysis, based on the Jeffreys scale of Bayesian
inference, shows that the evidence for our model against the $\Lambda$CDM model
is weak for both data combinations. Even though the running vacuum model gives
a good account of the evolution of the universe, it is not superior to the
$\Lambda$CDM model.
|
Private blockchain networks are used by enterprises to manage decentralized
processes without trusted mediators and without exposing their assets publicly
on an open network like Ethereum. Yet external parties that cannot join such
networks may have a compelling need to be informed about certain data items on
their shared ledgers along with certifications of data authenticity; e.g., a
mortgage bank may need to know about the sale of a mortgaged property from a
network managing property deeds. These parties are willing to compensate the
networks in exchange for privately sharing information with proof of
authenticity and authorization for external use. We have devised a novel and
cryptographically secure protocol to effect a fair exchange between rational
network members and information recipients using a public blockchain and atomic
swap techniques. Using our protocol, any member of a private blockchain can
atomically reveal private blockchain data with proofs in exchange for a
monetary reward to an external party if and only if the external party is a
valid recipient. The protocol preserves confidentiality of data for the
recipient, and in addition, allows it to mount a challenge if the data turns
out to be inauthentic. We also formally analyze the security and privacy of
this protocol, which can be used in a wide array of practical scenarios.
|
Developers create software branches for tentative feature addition and bug
fixing, and periodically merge branches to release software with new features
or repairing patches. When the program edits from different branches textually
overlap (i.e., textual conflicts), or the co-application of those edits lead to
compilation or runtime errors (i.e., compiling or dynamic conflicts), it is
challenging and time-consuming for developers to eliminate merge conflicts.
Prior studies examined the popularity of merge conflicts and how conflicts
relate to code smells or the software development process; tools have been
built to find and solve conflicts.
However, some fundamental research questions are still not comprehensively
explored, including (1) how conflicts were introduced, (2) how developers
manually resolved conflicts, and (3) what conflicts cannot be handled by
current tools.
For this paper, we took a hybrid approach that combines automatic detection
with manual inspection to reveal 204 merge conflicts and their resolutions in
15 open-source repositories. Our data analysis reveals three phenomena.
First, compiling and
dynamic conflicts are harder to detect, although current tools mainly focus on
textual conflicts. Second, in the same merging context, developers usually
resolved similar textual conflicts with similar strategies. Third, developers
manually fixed most of the inspected compiling and dynamic conflicts by
editing the merged version in the same way as they had edited one of the
branches.
Our research reveals the challenges and opportunities for automatic detection
and resolution of merge conflicts; it also sheds light on related areas like
systematic program editing and change recommendation.
|
We consider averaging a number of candidate models to produce a prediction of
lower risk in the context of partially linear functional additive models. These
models incorporate the parametric effect of scalar variables and the additive
effect of a functional variable to describe the relationship between a response
variable and regressors. We develop a model averaging scheme that assigns the
weights by minimizing a cross-validation criterion. Under the framework of
model misspecification, the resulting estimator is proved to be asymptotically
optimal in terms of achieving the lowest possible squared error loss for prediction. Also,
simulation studies and real data analysis demonstrate the good performance of
our proposed method.
|
The imprints of large-scale structures on the Cosmic Microwave Background can
be studied via the CMB lensing and Integrated Sachs-Wolfe (ISW) signals. In
particular, the stacked ISW signal around supervoids has been claimed in
several works to be anomalously high. In this study, we find cluster and void
superstructures using four tomographic redshift bins with $0<z<0.8$ from the
DESI Legacy Survey, and measure the stacked CMB lensing and ISW signals around
them. To compare our measurements with $\Lambda$CDM model predictions, we
construct a mock catalogue with matched galaxy number density and bias, and
apply the same photo-$z$ uncertainty as the data. The consistency between the
mock and data is verified via the stacked galaxy density profiles around the
superstructures and their quantity. The corresponding lensing convergence and
ISW maps are then constructed and compared. The stacked lensing signal agrees
with data well except at the highest redshift bin in density peaks, where the
mock prediction is significantly higher, by approximately a factor of 1.3. The
stacked ISW signal is generally consistent with the mock prediction. We do not
obtain a significant signal from voids, $A_{\rm ISW}=-0.10\pm0.69$, and the
signal from clusters, $A_{\rm ISW}=1.52\pm0.72$, is at best weakly detected.
However, these results are strongly inconsistent with previous claims of ISW
signals at many times the level of the $\Lambda$CDM prediction. We discuss the
comparison of our results with past work in this area, and investigate possible
explanations for this discrepancy.
|
The dynamics of cellular chemical reactions are variable due to stochastic
noise from intrinsic and extrinsic sources. The intrinsic noise is the
intracellular fluctuations of molecular copy numbers caused by the
probabilistic encounter of molecules and is modeled by the chemical master
equation. The extrinsic noise, on the other hand, represents the intercellular
variation of the kinetic parameters due to the variation of global factors
affecting gene expression. The objective of this paper is to propose a
theoretical framework to analyze the combined effect of the intrinsic and the
extrinsic noise modeled by the chemical master equation with uncertain
parameters. More specifically, we formulate a semidefinite program to compute
the intervals of the stationary solution of uncertain moment equations whose
parameters are given only partially in the form of the statistics of their
distributions. The semidefinite program is derived without approximating the
governing equation in contrast with many existing approaches. Thus, we can
obtain guaranteed intervals of the worst possible values of the moments for all
parameter distributions satisfying the given statistics, which are
prohibitively hard to estimate from sample-path simulations since sampling from
all possible uncertain distributions is difficult. We demonstrate the proposed
optimization approach using two examples of stochastic chemical reactions and
show that the solution of the optimization problem gives practically useful
upper and lower bounds of the statistics of the stationary copy number
distributions.
|
The transconductance and effective Land\'{e} $g^*$ factors for a quantum
point contact defined in silicene by the electric field of a split gate are
investigated. The strong spin-orbit coupling in buckled silicene reduces the
$g^*$ factor for an in-plane magnetic field from the nominal value of 2 to
around 1.2 for the first and 0.45 for the third conduction subband. However,
for a perpendicular magnetic field we observe an enhancement of the $g^*$
factor for the first subband, to 5.8 in nanoribbons with zigzag edges and to
2.5 with armchair edges. The main contribution to the Zeeman splitting comes
from the intrinsic spin-orbit coupling defined by the Kane-Mele form of
interaction.
|
We discuss in this survey several network modeling methods and their
applicability to precision medicine. We review several network centrality
methods (degree centrality, closeness centrality, eccentricity centrality,
betweenness centrality, and eigenvector-based prestige) and two systems
controllability methods (minimum dominating sets and network structural
controllability). We demonstrate their applicability to precision medicine on
three multiple myeloma patient disease networks. Each network consists of
protein-protein interactions built around a specific patient's mutated genes,
around the targets of the drugs used in the standard of care in multiple
myeloma, and around multiple myeloma-specific essential genes. For each network
we demonstrate how the network methods we discuss can be used to identify
personalized, targeted drug combinations uniquely suited to that patient.
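
As an illustration of the kind of analysis described (a toy sketch with a
made-up interaction graph, not the patients' networks; "BRAF" etc. stand in
for hypothetical patient-specific genes), the centrality and dominating-set
computations are a few lines with networkx:

```python
import networkx as nx

# Toy protein-protein interaction network around hypothetical mutated genes.
g = nx.Graph([("BRAF", "MAP2K1"), ("MAP2K1", "MAPK1"), ("MAPK1", "MYC"),
              ("BRAF", "MYC"), ("TP53", "MYC"), ("TP53", "MDM2")])

print(nx.degree_centrality(g))
print(nx.closeness_centrality(g))
print(nx.betweenness_centrality(g))
print(nx.eigenvector_centrality(g))                 # eigenvector-based prestige
print({v: nx.eccentricity(g, v) for v in g})        # basis of eccentricity centrality
print(nx.dominating_set(g))                         # greedy dominating set
```

Genes that rank highly across several of these measures, or that appear in a
small dominating set, are natural candidates for targeted drug combinations.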
|
This comment on the Phys. Rev. A paper "Nonlinear quantum effects in
electromagnetic radiation of a vortex electron" by Karlovets and
Pupasov-Maximov [Phys. Rev. A 103, 12214 (2021)] addresses their criticism of
the combined experimental and theoretical study "Observing the quantum wave
nature of free electrons through spontaneous emission" by Remez et al.,
published in Phys. Rev. Lett. [Phys. Rev. Lett. 123, 060401 (2019)]. We show,
by means of simple optical arguments as well as numerical simulations, that the
criticism raised by Karlovets and Pupasov-Maximov regarding the experimental
regime reported by Remez et al. is false. Further, we discuss a necessary
clarification of the theoretical derivations presented by Karlovets and
Pupasov-Maximov, as they hold only for a certain experimental situation in
which the final state of the emitting electron is observed in coincidence with
the emitted photon, which is not the common scenario in cathodoluminescence.
Upon lifting the concerns regarding the experimental regime reported by Remez
et al., and explicitly clarifying the electron post-selection, we believe that
the paper by Karlovets and Pupasov-Maximov may constitute a valuable
contribution to the problem of spontaneous emission by shaped electron
wavefunctions, as it presents new expressions for emission rates beyond the
ubiquitous paraxial approximation.
|
In this paper I shall give the complete solution of the equations governing
the bilateral birth and death process on the path set $\mathbb{R}_q=\{q^n,\quad
n\in\mathbb{Z}\}$, in which the birth and death rates are
$\lambda_n=q^{2\nu-2n}$ and $\mu_n=q^{-2n}$, where $0<q<1$ and $\nu>-1$. The
mathematical methods employed here are based on $q$-Bessel Fourier analysis.
|
The execution of quantum circuits on real systems has largely been limited to
those which are simply time-ordered sequences of unitary operations followed by
a projective measurement. As hardware platforms for quantum computing continue
to mature in size and capability, it is imperative to enable quantum circuits
beyond their conventional construction. Here we break into the realm of dynamic
quantum circuits on a superconducting-based quantum system. Dynamic quantum
circuits involve not only the evolution of the quantum state throughout the
computation, but also periodic measurements of a subset of qubits mid-circuit
and concurrent processing of the resulting classical information, within
timescales shorter than the execution times of the circuits. Using noisy
quantum hardware, we explore one of the most fundamental quantum algorithms,
quantum phase estimation, in its adaptive version, which exploits dynamic
circuits, and compare the results to a non-adaptive implementation of the same
algorithm. We demonstrate that real-time quantum computing with dynamic
circuits can offer a substantial and tangible advantage when noise and latency
are sufficiently low in the system, opening the door to a new realm of
available algorithms on real quantum systems.
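
To make "dynamic circuit" concrete, here is a two-bit round of iterative
(adaptive) phase estimation sketched in Qiskit (assuming a recent version with
`if_test` support; the phase value and gate choices are illustrative, not the
paper's exact experiment):

```python
import math
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

phase = 2 * math.pi * 0.25          # "unknown" phase encoded by a p-gate
q = QuantumRegister(2)              # q[0]: ancilla, q[1]: eigenstate register
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)

qc.x(q[1])                          # prepare |1>, an eigenstate of the p-gate
# Round 1: estimate the least significant phase bit with controlled-U^2.
qc.h(q[0])
qc.cp(2 * phase, q[0], q[1])
qc.h(q[0])
qc.measure(q[0], c[0])              # mid-circuit measurement
# Round 2: reuse the ancilla; apply a feed-forward phase correction
# conditioned on the previous outcome, then extract the next bit.
qc.reset(q[0])
qc.h(q[0])
qc.cp(phase, q[0], q[1])
with qc.if_test((c[0], 1)):         # classical processing inside the circuit
    qc.p(-math.pi / 2, q[0])
qc.h(q[0])
qc.measure(q[0], c[1])
```

For phase 0.25 = 0.01 in binary, round 1 deterministically yields 1 and round
2 yields 0, reading out the phase bits from least to most significant.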
|
Using classical electrodynamics, this work analyzes the dynamics of a closed
microwave cavity as a function of its center of energy. Starting from the
principle of momentum conservation, expressions for the maximum electromagnetic
momentum stored in a free microwave cavity are obtained. Next, it is shown
that, for coherent fields and special shape conditions, this momentum component
may not completely average out to zero when the fields change in the transient
regime. Non-zero conditions are illustrated for the asymmetric conical frustum,
whose exact modes cannot be calculated analytically. One concludes that the
electromagnetic momentum can be imparted to the mechanical body so as to
displace it in relation to the original center of energy. However, the average
duration of the effect is much shorter than the time regimes probed by the
experimental tests performed to date, suggesting it has not yet been observed
in copper-made resonators.
|
Constitutive models are widely used for modeling complex systems in science
and engineering, where first-principle-based, well-resolved simulations are
often prohibitively expensive. For example, in fluid dynamics, constitutive
models are required to describe nonlocal, unresolved physics such as turbulence
and laminar-turbulent transition. However, traditional constitutive models
based on partial differential equations (PDEs) often lack robustness and are
too rigid to accommodate diverse calibration datasets. We propose a
frame-independent, nonlocal constitutive model based on a vector-cloud neural
network that can be learned with data. The model predicts the closure variable
at a point based on the flow information in its neighborhood. Such nonlocal
information is represented by a group of points, each having a feature vector
attached to it, and thus the input is referred to as vector cloud. The cloud is
mapped to the closure variable through a frame-independent neural network,
invariant both to coordinate translation and rotation and to the ordering of
points in the cloud. As such, the network can deal with any number of
arbitrarily arranged grid points and thus is suitable for unstructured meshes
in fluid simulations. The merits of the proposed network are demonstrated for
scalar transport PDEs on a family of parameterized periodic hill geometries.
The vector-cloud neural network is a promising tool not only as a nonlocal
constitutive model but also as a general surrogate model for PDEs on
irregular domains.
|
We report here results on the analysis of correlated flux variations between
the optical and GeV $\gamma$-ray bands in three bright BL Lac objects, namely
AO\, 0235+164, OJ 287 and PKS 2155$-$304. This analysis is based on about 10
years of data from the {\it Fermi} Gamma-ray Space Telescope, covering the
period between 08 August 2008 and 08 August 2018, along with optical data
covering the same period. For all the sources, during the flares analysed in
this work, the optical and $\gamma$-ray flux variations are found to be closely
correlated. From broad-band spectral energy distribution modelling of different
epochs in these sources using the one-zone leptonic emission model, we found
that the optical-UV emission is dominated by synchrotron emission from the jet.
The $\gamma$-ray emission in the low-synchrotron-peaked sources AO\, 0235+164
and OJ 287 is found to be well fit with an external Compton (EC) component,
while the $\gamma$-ray emission in the high-synchrotron-peaked source PKS
2155$-$304 is well fit with a synchrotron self-Compton component. Further, we
note that the $\gamma$-ray emission during the high flux state of AO 0235+164
(epochs A and B) requires seed photons from both the dusty torus and the broad
line region, while the $\gamma$-ray emission in OJ 287 and during epochs C and
D of AO\,0235+164 can be modelled by EC scattering of infrared photons from the
torus.
|
In this work we describe the High-Dimensional Matrix Mechanism (HDMM), a
differentially private algorithm for answering a workload of predicate counting
queries. HDMM represents query workloads using a compact implicit matrix
representation and exploits this representation to efficiently optimize over (a
subset of) the space of differentially private algorithms for one that is
unbiased and answers the input query workload with low expected error. HDMM can
be deployed for both $\epsilon$-differential privacy (with Laplace noise) and
$(\epsilon, \delta)$-differential privacy (with Gaussian noise), although the
core techniques are slightly different for each. We demonstrate empirically
that HDMM can efficiently answer queries with lower expected error than
state-of-the-art techniques, and in some cases, it nearly matches existing
lower bounds for the particular class of mechanisms we consider.
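
To illustrate the underlying matrix-mechanism idea in miniature (a toy numpy
sketch of answering a workload through a noisy strategy; HDMM's implicit
representations and its strategy optimizer are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 100, size=8).astype(float)   # a tiny histogram (the database)

# Workload W: all prefix-sum queries. Strategy A: the identity (per-bin counts).
W = np.tril(np.ones((8, 8)))
A = np.eye(8)

epsilon = 1.0
sensitivity = np.abs(A).sum(axis=0).max()        # max L1 column norm of A
noisy = A @ x + rng.laplace(scale=sensitivity / epsilon, size=8)

# Reconstruct workload answers from the noisy strategy answers
# (least squares; unbiased since A has full column rank).
x_hat = np.linalg.lstsq(A, noisy, rcond=None)[0]
answers = W @ x_hat

print(np.abs(answers - W @ x).mean())            # error of this fixed strategy
```

HDMM's contribution is to search, efficiently and at scale, for a strategy
matrix A with much lower expected error than such a fixed choice.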
|
This note gives a detailed proof of the following statement. Let $d\in
\mathbb{N}$ and $m,n \ge d + 1$, with $m + n \ge \binom{d+2}{2} + 1$. Then the
complete bipartite graph $K_{m,n}$ is generically globally rigid in dimension
$d$.
|
The emergence of new technologies and innovative communication tools permits
us to transcend societal challenges. While particle accelerators are essential
instruments to improve our quality of life through science and technology, an
adequate ecosystem is essential to activate and maximize this potential.
Research Infrastructure (RI) and industries supported by enlightened
organizations and education, can generate a sustainable environment to serve
this purpose. In this paper, we will discuss state-of-the-art infrastructures
taking the lead to reach this impact, thus contributing to economic and social
transformation.
|
Bayesian decision theory provides an elegant framework for acting optimally
under uncertainty when tractable posterior distributions are available. Modern
Bayesian models, however, typically involve intractable posteriors that are
approximated with, potentially crude, surrogates. This difficulty has
engendered loss-calibrated techniques that aim to learn posterior
approximations that favor high-utility decisions. In this paper, focusing on
Bayesian neural networks, we develop methods for correcting approximate
posterior predictive distributions encouraging them to prefer high-utility
decisions. In contrast to previous work, our approach is agnostic to the choice
of the approximate inference algorithm, allows for efficient test time decision
making through amortization, and empirically produces higher quality decisions.
We demonstrate the effectiveness of our approach through controlled experiments
spanning a diversity of tasks and datasets.
|
Isolated post-capillary pulmonary hypertension (Ipc-PH) occurs due to left
heart failure, which contributes to 1 out of every 9 deaths in the United
States. In some patients, through unknown mechanisms, Ipc-PH transitions to
combined pre-/post-capillary PH (Cpc-PH), diagnosed by an increase in pulmonary
vascular resistance and associated with a dramatic increase in mortality. We
hypothesize that altered mechanical forces and subsequent vasoactive signaling
in the pulmonary capillary bed drive the transition from Ipc-PH to Cpc-PH.
However, even in a healthy pulmonary circulation, the mechanical forces in the
smallest vessels (the arterioles, venules, and capillary bed) have not been
quantitatively defined. This study is the first to examine this question via a
computational fluid dynamics model of the human pulmonary arteries, veins,
arterioles, and venules. Using this model we predict temporal and spatial
dynamics of cyclic stretch and wall shear stress. In the large vessels,
numerical simulations show that increases in shear stress coincide with larger
flow and pressure. In the microvasculature, we found that as vessel radius
decreases, shear stress increases and flow decreases. In arterioles, this
corresponds with lower pressures; however, the venules and smaller veins have
higher pressure than larger veins. Our model provides predictions for pressure,
flow, shear stress, and cyclic stretch, offering a way to analyze and
investigate hypotheses related to disease progression in the pulmonary
circulation.
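
The radius trend mentioned above is already visible in steady Poiseuille
flow, where wall shear stress is $\tau = 4\mu Q / (\pi r^3)$; a quick
numerical check (illustrative values for blood viscosity and flow, not the
paper's simulation outputs, and holding flow fixed for simplicity):

```python
import math

def wall_shear_stress(mu, q, r):
    """Poiseuille wall shear stress tau = 4*mu*Q/(pi*r^3), SI units (Pa)."""
    return 4.0 * mu * q / (math.pi * r ** 3)

mu = 3.5e-3                      # blood viscosity, Pa*s (typical value)
q = 1e-11                        # volumetric flow, m^3/s (illustrative)
for r_um in (50, 25, 10):        # vessel radius in micrometers
    r = r_um * 1e-6
    print(f"r = {r_um:3d} um -> tau = {wall_shear_stress(mu, q, r):8.2f} Pa")
# Halving the radius at fixed flow raises wall shear stress eightfold; in the
# real vasculature flow also drops with radius, tempering this growth.
```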
|
We report spatially resolved measurements of static and fluctuating electric
fields over conductive (Au) and non-conductive (SiO2) surfaces. Using an
ultrasensitive `nanoladder' cantilever probe to scan over these surfaces at
distances of a few tens of nanometers, we record changes in the probe resonance
frequency and damping that we associate with static and fluctuating fields,
respectively. We find that the two quantities are spatially correlated and of
similar magnitude for the two materials. We quantitatively describe the
observed effects on the basis of trapped surface charges and dielectric
fluctuations in an adsorbate layer. Our results provide direct, spatial
evidence for surface dissipation in adsorbates that affects nanomechanical
sensors, trapped ions, superconducting resonators, and color centers in
diamond.
|
This paper studies the estimation of large-scale optimal transport maps
(OTM), which is a well-known challenging problem owing to the curse of
dimensionality. Existing literature approximates the large-scale OTM by a
series of one-dimensional OTM problems through iterative random projection.
Such methods, however, suffer from slow or no convergence in practice due to
the nature of randomly selected projection directions. Instead, we propose an
estimation method for large-scale OTMs by combining the ideas of projection
pursuit regression and sufficient dimension reduction. The proposed method,
named projection pursuit Monge map (PPMM), adaptively selects the most
``informative'' projection direction in each iteration. We theoretically show
that the proposed dimension reduction method can consistently estimate the
most ``informative'' projection direction in each iteration. Furthermore, the
PPMM algorithm converges weakly to the target large-scale OTM in a reasonable
number of steps. Empirically, PPMM is computationally easy and converges fast.
We assess its finite sample performance through applications to Wasserstein
distance estimation and generative models.
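
The basic building block, a one-dimensional OT map along a chosen projection
direction, can be written in a few lines (a toy sketch of one such iteration;
the direction here is drawn at random, as in the baselines PPMM improves on,
rather than by the paper's sufficient-dimension-reduction estimator):

```python
import numpy as np

def one_projection_step(x, y, direction):
    """Move sample x toward y by the 1-D optimal transport map along `direction`.

    The 1-D OT map between empirical measures of equal size is monotone
    matching: sort both projections and transport quantile to quantile.
    """
    d = direction / np.linalg.norm(direction)
    px, py = x @ d, y @ d
    ix, iy = np.argsort(px), np.argsort(py)
    shift = np.empty(len(x))
    shift[ix] = py[iy] - px[ix]          # match i-th order statistics
    return x + np.outer(shift, d)        # update only along the direction

rng = np.random.default_rng(0)
x = rng.normal(0, 1, (500, 2))           # source sample
y = rng.normal(3, 1, (500, 2))           # target sample
for _ in range(20):
    direction = rng.normal(size=2)       # random direction (PPMM picks smarter)
    x = one_projection_step(x, y, direction)
print(x.mean(axis=0))                     # approaches y's mean, ~[3, 3]
```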
|