Embedding static graphs in low-dimensional vector spaces plays a key role in
network analytics and inference, supporting applications like node
classification, link prediction, and graph visualization. However, many
real-world networks exhibit dynamic behavior, including topological evolution,
feature evolution, and diffusion. Therefore, several methods for embedding
dynamic graphs have been proposed to learn network representations over time,
facing novel challenges, such as time-domain modeling, temporal features to be
captured, and the temporal granularity to be embedded. In this survey, we provide an overview of dynamic graph embedding, discussing its fundamentals and recent advances. We introduce the formal definition of dynamic graph
embedding, focusing on the problem setting and introducing a novel taxonomy for
dynamic graph embedding input and output. We further explore different dynamic
behaviors that may be encompassed by embeddings, classifying by topological
evolution, feature evolution, and processes on networks. Afterward, we describe
existing techniques and propose a taxonomy for dynamic graph embedding
techniques based on algorithmic approaches, from matrix and tensor
factorization to deep learning, random walks, and temporal point processes. We
also elucidate the main applications, including dynamic link prediction, anomaly
detection, and diffusion prediction, and we further state some promising
research directions in the area.
|
Two types of non-holonomic constraints (imposing a prescription on the velocity), applied at an end of a (visco)elastic rod that is straight in its undeformed configuration, are analyzed. The equations governing the nonlinear dynamics are
obtained and then linearized near the trivial equilibrium configuration. The
two constraints are shown to lead to the same equations governing the
linearized dynamics of the Beck (or Pfluger) column in one case and of the Reut
column in the other. Therefore, although the structural systems are fully
conservative (when viscosity is set to zero), they exhibit flutter and
divergence instability. In addition, Ziegler's destabilization paradox is found when dissipation sources are introduced. It follows that these features are not merely a consequence of 'unrealistic non-conservative loads' (as often stated in the literature); rather, the models proposed by Beck, Reut,
and Ziegler can exactly describe the linearized dynamics of structures subject
to non-holonomic constraints, which are now made fully accessible to
experiments.
|
We consider an improper reinforcement learning setting where a learner is
given $M$ base controllers for an unknown Markov decision process, and wishes
to combine them optimally to produce a potentially new controller that can
outperform each of the base ones. This can be useful in tuning across
controllers, learnt possibly in mismatched or simulated environments, to obtain
a good controller for a given target environment with relatively few trials.
\par We propose a gradient-based approach that operates over a class of
improper mixtures of the controllers. We derive convergence rate guarantees for
the approach assuming access to a gradient oracle. The value function of the
mixture and its gradient may not be available in closed-form; however, we show
that we can employ rollouts and simultaneous perturbation stochastic
approximation (SPSA) for explicit gradient descent optimization. Numerical
results on (i) the standard control theoretic benchmark of stabilizing an
inverted pendulum and (ii) a constrained queueing task show that our improper
policy optimization algorithm can stabilize the system even when the base
policies at its disposal are unstable.
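The gradient oracle is approximated with rollouts and SPSA; the minimal numpy sketch below illustrates one SPSA ascent step over softmax-parameterized mixture weights. The softmax parameterization and the `rollout_return` callable are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spsa_mixture_step(theta, rollout_return, lr=0.05, delta=0.1, rng=None):
    """One SPSA ascent step on softmax mixture weights over M base controllers.

    theta          -- unconstrained parameters (length M); mixture = softmax(theta)
    rollout_return -- callable mapping mixture weights to an estimated return
                      (e.g., averaged over a few simulated episodes)
    """
    rng = rng or np.random.default_rng()
    perturb = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    # Two rollouts give a two-sided gradient estimate for all coordinates at once.
    r_plus = rollout_return(softmax(theta + delta * perturb))
    r_minus = rollout_return(softmax(theta - delta * perturb))
    grad_est = (r_plus - r_minus) / (2.0 * delta) * perturb
    return theta + lr * grad_est  # ascend the estimated value
```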
|
In this article, we introduce a novel variant of the Tsetlin machine (TM)
that randomly drops clauses, the key learning elements of a TM. In effect, TM
with drop clause ignores a random selection of the clauses in each epoch,
selected according to a predefined probability. In this way, additional
stochasticity is introduced in the learning phase of TM. Along with producing
more distinct and well-structured patterns that improve the performance, we
also show that dropping clauses increases learning robustness. To explore the
effects clause dropping has on accuracy, training time, and interpretability,
we conduct extensive experiments on various benchmark datasets in natural
language processing (NLP) (IMDb and SST2) as well as computer vision (MNIST and
CIFAR10). In brief, we observe a +2% to +4% increase in accuracy and 2x to
4x faster learning. We further employ the Convolutional TM to document
interpretable results on the CIFAR10 dataset. To the best of our knowledge,
this is the first time an interpretable machine learning algorithm has been
used to produce pixel-level human-interpretable results on CIFAR10. Also,
unlike previous interpretable methods that focus on attention visualisation or
gradient interpretability, we show that the TM is a more general interpretable
method. That is, by producing rule-based propositional logic expressions that
are \emph{human}-interpretable, the TM can explain how it classifies a
particular instance at the pixel level for computer vision and at the word
level for NLP.
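A minimal sketch of the clause-dropping idea on a generic array of clause outputs is given below; the vote aggregation and variable names are illustrative assumptions and not tied to any particular Tsetlin machine implementation.

```python
import numpy as np

def drop_clause_vote(clause_outputs, clause_polarity, p_drop, rng, training=True):
    """Aggregate clause votes while randomly ignoring clauses, as in 'drop clause'.

    clause_outputs  -- 0/1 array, one entry per clause for the current example
    clause_polarity -- +1/-1 array, whether a clause votes for or against the class
    p_drop          -- probability of ignoring a clause during this epoch
    """
    if training:
        keep = rng.random(clause_outputs.shape) >= p_drop  # resampled each epoch
    else:
        keep = np.ones_like(clause_outputs, dtype=bool)    # all clauses at inference
    return int(np.sum(clause_polarity[keep] * clause_outputs[keep]))
```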
|
An important problem in quantum information is to construct multiqubit
unextendible product bases (UPBs). By using unextendible orthogonal matrices, we construct a 7-qubit UPB of size 11. This solves an open problem in
[Quantum Information Processing 19:185 (2020)]. Next, we graph-theoretically
show that the UPB is locally indistinguishable in the bipartite systems of two
qubits and five qubits, respectively. It turns out that the UPB corresponds to
a complete graph with 11 vertices constructed by three sorts of nonisomorphic
graphs. Taking the graphs as product vectors, we show that they are in three
different orbits up to local unitary equivalence. Moreover, we also present the number of sorts of nonisomorphic graphs in the complete graphs of some known UPBs, as well as their orbits.
|
We study the effect of a small fermion mass in the formulation of the
on-shell effective field theory (OSEFT). This is our starting point to derive
small mass corrections to the chiral kinetic theory. In the massless case, only
four Wigner functions are needed to describe positive and negative energy
fermions of left and right chirality, corresponding to the vectorial components
of a fermionic two-point Green's function. As soon as mass corrections are
introduced, tensorial components are also needed, while the scalar components
strictly vanish in the OSEFT. The tensorial components are conveniently
parametrized in the so-called spin coherence function, which describes quantum
coherent mixtures of left-right and right-left chiral fermions, of either
positive or negative energy. We show that, up to second order in the energy
expansion, vectorial and tensorial components are decoupled, and obey the same
dispersion law and transport equation, depending on their respective chirality.
We study the mass modifications of the reparametrization invariance of the
OSEFT, and check that vector and tensorial components are related by the
associated symmetry transformations. We study how the macroscopic properties of
the system are described in terms of the whole set of Wigner functions, and
check that our framework allows us to account for the mass modifications to the
chiral anomaly equation.
|
The Molecular Ridge in the LMC extends several kiloparsecs south from 30
Doradus, and it contains ~30% of the molecular gas in the entire galaxy.
However, the southern end of the Molecular Ridge is quiescent - it contains
almost no massive star formation, which is a dramatic decrease from the very
active massive star-forming regions 30 Doradus, N159, and N160. We present new
ALMA and APEX observations of the Molecular Ridge at a resolution as high as
~16'' (~3.9 pc) with molecular lines 12CO(1-0), 13CO(1-0), 12CO(2-1),
13CO(2-1), and CS(2-1). We analyze these emission lines with our new multi-line
non-LTE fitting tool to produce maps of T_kin, n_H2, and N_CO across the region
based on models from RADEX. Using simulated data for a range of parameter space
for each of these variables, we evaluate how well our fitting method can
recover these physical parameters for the given set of molecular lines. We then
compare the results of this fitting with LTE and X_CO methods of obtaining mass estimates, and examine how line ratios correspond to physical conditions. We find that
this fitting tool allows us to more directly probe the physical conditions of
the gas and estimate values of T_kin, n_H2, and N_CO that are less subject to
the effects of optical depth and line-of-sight projection than previous
methods. The fitted n_H2 values show a strong correlation with the presence of
YSOs, and with the total and average mass of the associated YSOs. Typical star
formation diagnostics, such as mean density, dense gas fraction, and virial
parameter do not show a strong correlation with YSO properties.
|
In this paper, we propose an offline-online strategy based on the Localized
Orthogonal Decomposition (LOD) method for elliptic multiscale problems with
randomly perturbed diffusion coefficient. We consider a periodic deterministic
coefficient with local defects that occur with probability $p$. The offline
phase pre-computes entries to global LOD stiffness matrices on a single
reference element (exploiting the periodicity) for a selection of defect
configurations. In the online phase, given a sample of the perturbed diffusion coefficient, the corresponding LOD stiffness matrix is then computed by taking linear combinations of the pre-computed entries. Our computable error estimates show
that this yields a good coarse-scale approximation of the solution for small
$p$, which is illustrated by extensive numerical experiments. This makes the
proposed technique attractive already for moderate sample sizes in a Monte
Carlo simulation.
|
A novel approach for solving the general absolute value equation $Ax+B|x| =
c$ where $A,B\in \mathbb{R}^{m\times n}$ and $c\in \mathbb{R}^m$ is presented.
We reformulate the equation as a feasibility problem which we solve via the
method of alternating projections (MAP). The set of fixed points of the alternating projections map is characterized under nondegeneracy conditions on $A$ and $B$. Furthermore, we prove linear convergence of the algorithm. Unlike
most of the existing approaches in the literature, the algorithm presented here
is capable of handling problems with $m\neq n$, both theoretically and
numerically.
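One standard way to recast $Ax+B|x|=c$ as a feasibility problem is the splitting $x=u-v$ with $u,v\ge 0$ and $u_i v_i=0$, so that $|x|=u+v$; the sketch below alternates projections between the resulting affine set and the complementarity set. It only illustrates the general MAP idea and is not claimed to be the paper's reformulation.

```python
import numpy as np

def solve_ave_map(A, B, c, iters=500, tol=1e-10):
    """Illustrative alternating projections for A x + B |x| = c via x = u - v.

    Affine set:          {(u, v) : (A + B) u + (B - A) v = c}
    Complementarity set: {(u, v) >= 0 : u_i v_i = 0 for all i}
    """
    m, n = A.shape
    M = np.hstack([A + B, B - A])          # (A+B)u + (B-A)v = c
    M_pinv = np.linalg.pinv(M)
    z = np.zeros(2 * n)
    x = np.zeros(n)
    for _ in range(iters):
        # Project onto the affine set.
        z = z - M_pinv @ (M @ z - c)
        # Project onto the complementarity set (componentwise, keep the larger part).
        u, v = np.maximum(z[:n], 0.0), np.maximum(z[n:], 0.0)
        u_new = np.where(u >= v, u, 0.0)
        v_new = np.where(u >= v, 0.0, v)
        z = np.concatenate([u_new, v_new])
        x = z[:n] - z[n:]
        if np.linalg.norm(A @ x + B @ np.abs(x) - c) < tol:
            break
    return x
```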
|
We prove a quantitative version of the classical Tits' alternative for
discrete groups acting on packed Gromov-hyperbolic spaces supporting a convex
geodesic bicombing. Some geometric consequences, such as uniform estimates on the systole, diastole, algebraic entropy, and critical exponent of the groups, are presented. Finally, we study the behaviour of these group actions under limits, providing new examples of compact classes of metric spaces.
|
Recent work has explored how complementary strengths of humans and artificial
intelligence (AI) systems might be productively combined. However, successful
forms of human-AI partnership have rarely been demonstrated in real-world
settings. We present the iterative design and evaluation of Lumilo, smart
glasses that help teachers help their students in AI-supported classrooms by
presenting real-time analytics about students' learning, metacognition, and
behavior. Results from a field study conducted in K-12 classrooms indicate that
students learn more when teachers and AI tutors work together during class. We
discuss implications of this research for the design of human-AI partnerships.
We argue for more participatory approaches to research and design in this area,
in which practitioners and other stakeholders are deeply, meaningfully involved
throughout the process. Furthermore, we advocate for theory-building and for
principled approaches to the study of human-AI decision-making in real-world
contexts.
|
We investigate the problem of nonparametric estimation of the trend for stochastic differential equations with delay and driven by a fractional Brownian motion, using a kernel-type estimation method analogous to that for estimating a probability density function.
|
High fidelity segmentation of both macro and microvascular structure of the
retina plays a pivotal role in determining degenerative retinal diseases, yet
it is a difficult problem. Due to successive resolution loss in the encoding
phase combined with the inability to recover this lost information in the
decoding phase, autoencoding based segmentation approaches are limited in their
ability to extract retinal microvascular structure. To alleviate this, we propose RV-GAN, a new multi-scale generative architecture for accurate retinal vessel segmentation. The proposed architecture uses two generators and two
multi-scale autoencoding discriminators for better microvessel localization and
segmentation. In order to avoid the loss of fidelity suffered by traditional
GAN-based segmentation systems, we introduce a novel weighted feature matching
loss. This new loss incorporates and prioritizes features from the
discriminator's decoder over the encoder. Doing so, combined with the fact that the discriminator's decoder attempts to distinguish real from fake images at the pixel level, better preserves macro- and microvascular structure. By combining
reconstruction and weighted feature matching loss, the proposed architecture
achieves an area under the curve (AUC) of 0.9887, 0.9914, and 0.9887 in
pixel-wise segmentation of retinal vasculature from three publicly available
datasets, namely DRIVE, CHASE-DB1, and STARE, respectively. Additionally,
RV-GAN outperforms other architectures in two additional relevant metrics, mean
intersection-over-union (Mean-IOU) and structural similarity measure (SSIM).
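A minimal numpy sketch of a weighted feature-matching loss that weights the discriminator's decoder features more heavily than its encoder features is shown below; the layer weights and the dictionary layout of the features are assumptions made for illustration, not the paper's exact loss.

```python
import numpy as np

def weighted_feature_matching_loss(real_feats, fake_feats, encoder_w=0.5, decoder_w=1.0):
    """L1 feature matching over discriminator features, weighting decoder > encoder.

    real_feats / fake_feats -- dicts with keys "encoder" and "decoder", each a list
    of per-layer feature arrays extracted from the discriminator for real and
    generated images respectively.
    """
    loss = 0.0
    for name, weight in (("encoder", encoder_w), ("decoder", decoder_w)):
        for fr, ff in zip(real_feats[name], fake_feats[name]):
            loss += weight * np.mean(np.abs(fr - ff))
    return loss
```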
|
We present a phase-space study of two stellar groups located at the core of
the Orion complex: Brice\~no-1 and Orion Belt Population-near (OBP-near). We
identify the groups with the unsupervised clustering algorithm, Shared Nearest
Neighbor (SNN), which previously identified twelve new stellar substructures in
the Orion complex. For each of the two groups, we derive the 3D space motions
of individual stars using Gaia EDR3 proper motions supplemented by radial
velocities from Gaia DR2, APOGEE-2, and GALAH DR3. We present evidence for
radial expansion of the two groups from a common center. Unlike previous work,
our study suggests that evidence of stellar group expansion is confined only to
OBP-near and Brice\~no-1 whereas the rest of the groups in the complex show
more complicated motions. Interestingly, the stars in the two groups lie at the
center of a dust shell, as revealed via an extant 3D dust map. The exact
mechanism that produces such coherent motions remains unclear, while the
observed radial expansion and dust shell suggest that massive stellar feedback
could have influenced the star formation history of these groups.
|
Renormalization-Group (RG) improvement has been frequently applied to capture
the effect of quantum corrections on cosmological and black-hole spacetimes.
This work utilizes an algebraically complete set of curvature invariants to
establish that: On the one hand, RG improvement at the level of the metric is
coordinate-dependent. On the other hand, a newly proposed RG improvement at the
level of curvature invariants is coordinate-independent. Spherically-symmetric
and axially-symmetric black-hole spacetimes serve as physically relevant
examples.
|
In this paper, we propose a unified pre-training approach called UniSpeech to
learn speech representations with both unlabeled and labeled data, in which
supervised phonetic CTC learning and phonetically-aware contrastive
self-supervised learning are conducted in a multi-task learning manner. The
resultant representations can capture information more correlated with phonetic
structures and improve the generalization across languages and domains. We
evaluate the effectiveness of UniSpeech for cross-lingual representation
learning on the public CommonVoice corpus. The results show that UniSpeech
outperforms self-supervised pretraining and supervised transfer learning for
speech recognition by a maximum of 13.4% and 17.8% relative phone error rate
reductions respectively (averaged over all testing languages). The
transferability of UniSpeech is also demonstrated on a domain-shift speech
recognition task, i.e., a relative word error rate reduction of 6% against the
previous approach.
|
CdTe is a key thin-film photovoltaic technology. Non-radiative electron-hole
recombination reduces the solar conversion efficiency from an ideal value of
32% to a current champion performance of 22%. The cadmium vacancy (V_Cd) is a
prominent acceptor species in p-type CdTe; however, debate continues regarding
its structural and electronic behavior. Using ab initio defect techniques, we
calculate a negative-U double-acceptor level for V_Cd, while reproducing the
V_Cd^-1 hole-polaron, reconciling theoretical predictions with experimental
observations. We find the cadmium vacancy facilitates rapid charge-carrier
recombination, reducing maximum power-conversion efficiency by over 5% for
untreated CdTe -- a consequence of tellurium dimerization, metastable
structural arrangements, and anharmonic potential energy surfaces for carrier
capture.
|
Quasi-geostrophic (QG) theory describes the dynamics of synoptic scale flows
in the troposphere that are balanced with respect to both acoustic and
internal gravity waves. Within this framework, effects of (turbulent) friction
near the ground are usually represented by Ekman Layer theory. The troposphere
covers roughly the lowest ten kilometers of the atmosphere while Ekman layer
heights are typically just a few hundred meters. However, this two-layer
asymptotic theory does not explicitly account for substantial changes of the
potential temperature stratification due to diabatic heating associated with
cloud formation or with radiative and turbulent heat fluxes, which, in the
middle latitudes, can be particularly important in about the lowest three
kilometers. To address this deficiency, this paper extends the classical
QG-Ekman layer model by introducing an intermediate, dynamically and
thermodynamically active layer, called the "diabatic layer" (DL) from here on.
The flow in this layer is also in acoustic, hydrostatic, and geostrophic
balance but, in contrast to QG flow, variations of potential temperature are
not restricted to small deviations from a stable and time-independent
background stratification. Instead, within the diabatic layer, diabatic
processes are allowed to affect the leading-order stratification. As a
consequence, the diabatic layer modifies the pressure field at the top of the
Ekman layer, and with it the intensity of Ekman pumping seen by the
quasi-geostrophic bulk flow. The result is the proposed extended
quasi-geostrophic three-layer QG-DL-Ekman model for mid-latitude (dry and
moist) dynamics.
|
Quantization has become a popular technique to compress neural networks and
reduce compute cost, but most prior work focuses on studying quantization
without changing the network size. Many real-world applications of neural
networks have compute cost and memory budgets, which can be traded off with
model quality by changing the number of parameters. In this work, we use ResNet
as a case study to systematically investigate the effects of quantization on
inference compute cost-quality tradeoff curves. Our results suggest that for
each bfloat16 ResNet model, there are quantized models with lower cost and
higher accuracy; in other words, the bfloat16 compute cost-quality tradeoff
curve is Pareto-dominated by the 4-bit and 8-bit curves, with models primarily
quantized to 4-bit yielding the best Pareto curve. Furthermore, we achieve
state-of-the-art results on ImageNet for 4-bit ResNet-50 with
quantization-aware training, obtaining a top-1 eval accuracy of 77.09%. We
demonstrate the regularizing effect of quantization by measuring the
generalization gap. The quantization method we used is optimized for
practicality: It requires little tuning and is designed with hardware
capabilities in mind. Our work motivates further research into optimal numeric
formats for quantization, as well as the development of machine learning
accelerators supporting these formats. As part of this work, we contribute a
quantization library written in JAX, which is open-sourced at
https://github.com/google-research/google-research/tree/master/aqt.
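The experiments rely on the open-sourced AQT library linked above; the snippet below is not that library, only a generic symmetric fake-quantization sketch in numpy showing what mapping a weight tensor to signed b-bit integers and back looks like.

```python
import numpy as np

def fake_quantize(w, bits=4):
    """Symmetric per-tensor fake quantization: round to signed b-bit integers,
    then dequantize, so the rounding error is visible in floating point."""
    qmax = 2 ** (bits - 1) - 1                        # e.g. 7 for 4-bit, 127 for 8-bit
    scale = max(np.max(np.abs(w)) / qmax, 1e-12)
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale, scale

w = np.random.randn(64, 64).astype(np.float32)
w_q4, _ = fake_quantize(w, bits=4)
print("mean abs 4-bit rounding error:", np.mean(np.abs(w - w_q4)))
```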
|
Learning quickly and continually is still an ambitious task for neural
networks. Indeed, many real-world applications do not reflect the learning
setting where neural networks shine, as data are usually few, mostly unlabelled
and come as a stream. To narrow this gap, we introduce FUSION - Few-shot
UnSupervIsed cONtinual learning - a novel strategy which aims to deal with
neural networks that "learn in the wild", simulating a real distribution and
flow of unbalanced tasks. We equip FUSION with MEML - Meta-Example
Meta-Learning - a new module that simultaneously alleviates catastrophic
forgetting and favours generalisation and future learning of new tasks. To encourage feature reuse during the meta-optimisation, our model exploits a
single inner loop per task, taking advantage of an aggregated representation
achieved through the use of a self-attention mechanism. To further enhance the
generalisation capability of MEML, we extend it by adopting a technique that
creates various augmented tasks and optimises over the hardest. Experimental
results on few-shot learning benchmarks show that our model exceeds the other baselines in both the FUSION and the fully supervised settings. We also explore how it behaves in standard continual learning, consistently outperforming state-of-the-art approaches.
|
The origin of the boson peak -- an excess of the density of states over Debye's model in glassy solids -- is still under intense debate, with some theories and experiments suggesting that the boson peak is related to the van-Hove singularity. Here we show that the boson peak and the van-Hove singularity are well separated entities, by measuring the vibrational density of states of a two-dimensional granular system, where packings are tuned gradually from a crystal, to polycrystals, and to an amorphous material. We observe a coexistence of well
separated boson peak and van-Hove singularities in polycrystals, in which the
van-Hove singularities gradually shift to higher frequency values while
broadening their shapes and eventually disappear completely when the structural
disorder $\eta$ becomes sufficiently high. By analyzing firstly the strongly
disordered system ($\eta=1$) and the disordered granular crystals ($\eta=0$),
and then systems of intermediate disorder with $\eta$ in between, we find that
the boson peak is associated with spatially uncorrelated random fluctuations of
shear modulus $\delta G/\langle G \rangle$ whereas the smearing of van-Hove
singularities is associated with spatially correlated fluctuations of shear
modulus $\delta G/\langle G \rangle$.
|
We present the discovery of NGTS-19b, a high mass transiting brown dwarf
discovered by the Next Generation Transit Survey (NGTS). We investigate the
system using follow up photometry from the South African Astronomical
Observatory, as well as sector 11 TESS data, in combination with radial
velocity measurements from the CORALIE spectrograph to precisely characterise
the system. We find that NGTS-19b is a brown dwarf companion to a K-star, with
a mass of $69.5 ^{+5.7}_{-5.4}$ M$_{Jup}$ and radius of $1.034
^{+0.055}_{-0.053}$ R$_{Jup}$. The system has a reasonably long period of 17.84 days and a high eccentricity of $0.3767 ^{+0.0061}_{-0.0061}$. The mass and radius of the brown dwarf imply an age of $0.46 ^{+0.26}_{-0.15}$ Gyr; however, this is inconsistent with the age determined from the host star SED,
suggesting that the brown dwarf may be inflated. This is unusual given that its
large mass and relatively low levels of irradiation would make it much harder
to inflate. NGTS-19b adds to the small, but growing number of brown dwarfs
transiting main sequence stars, and is a valuable addition as we begin to
populate the so-called brown dwarf desert.
|
A $3$-prismatoid $P$ is the convex hull of two convex polygons $A$ and $B$ which lie in parallel planes $H_A, H_B\subset\mathbb{R}^3$. Let $A'$ be the orthogonal projection of $A$ onto $H_B$. A prismatoid is called nested if $A'$ is properly contained in $B$, or vice versa. We show that nested prismatoids can
be edge-unfolded.
|
We introduce a simple entropy-based formalism to characterize the role of
mixing in pressure-balanced multiphase clouds, and demonstrate example
applications using Enzo-E (magneto)hydrodynamic simulations. Under this
formalism, the high-dimensional description of the system's state at a given
time is simplified to the joint distribution of mass over pressure ($P$) and
entropy ($K=P/\rho^\gamma$). As a result, this approach provides a way for
(empirically and analytically) quantifying the impact of different initial
conditions and sets of physics on the system evolution. We find that mixing
predominantly alters the distribution along the $K$ direction and illustrate
how the formalism can be used to model mixing and cooling for fluid elements
originating in the cloud. We further confirm and generalize a previously
suggested criterion for cloud growth in the presence of radiative cooling, and
demonstrate that the shape of the cooling curve, particularly at the low
temperature end, can play an important role in controlling condensation.
Moreover, we discuss the capacity of our approach to generalize such a
criterion to apply to additional sets of physics, and to build intuition for
the impact of subtle higher order effects not directly addressed by the
criterion.
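A sketch of reducing a simulation snapshot to the joint mass distribution over pressure $P$ and entropy $K=P/\rho^\gamma$, the basic object of this formalism, is shown below; the array names are placeholders for cell-wise quantities from a simulation output, not the Enzo-E data interface.

```python
import numpy as np

def mass_weighted_pk_histogram(density, pressure, mass, gamma=5.0 / 3.0, bins=64):
    """Joint distribution of mass over (log10 P, log10 K) with K = P / rho**gamma."""
    entropy_k = pressure / density**gamma
    hist, p_edges, k_edges = np.histogram2d(
        np.log10(pressure), np.log10(entropy_k), bins=bins, weights=mass
    )
    return hist, p_edges, k_edges
```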
|
Recently, a first-order differentiator based on time-varying gains was
introduced in the literature, in its non-recursive form, for a class of
differentiable signals $y(t)$, satisfying $|\ddot{y}(t)|\leq L(t-t_0)$, for a
known function $L(t-t_0)$, such that $\frac{1}{L(t-t_0)}\left|\frac{d
{L}(t-t_0)}{dt}\right|\leq M$ with a known constant $M$. It has been shown that such a differentiator is globally finite-time convergent. In this paper, we
redesign such an algorithm, using time base generators (a class of time-varying
gains), to obtain a differentiator algorithm for the same class of signals,
with guaranteed convergence before a desired time, i.e., with fixed-time
convergence with an a priori user-defined upper bound for the settling time.
Thus, our approach can be applied for scenarios under time-constraints.
We present numerical examples exposing the contribution with respect to
state-of-the-art algorithms.
|
The application of digital technologies in agriculture can improve
traditional practices to adapt to climate change, reduce Greenhouse Gases (GHG)
emissions, and promote a sustainable intensification for food security. Some
authors argued that we are experiencing a Digital Agricultural Revolution (DAR)
that will boost sustainable farming. This study aims to find evidence of the
ongoing DAR process and clarify its roots, what it means, and where it is
heading. We investigated the scientific literature with bibliometric analysis
tools to produce an objective and reproducible literature review. We retrieved
4995 articles by querying the Web of Science database in the timespan
2012-2019, and we analyzed the obtained dataset to answer three specific
research questions: i) what is the spectrum of the DAR-related terminology?;
ii) what are the key articles and the most influential journals, institutions,
and countries?; iii) what are the main research streams and the emerging
topics? By grouping the authors' keywords reported on publications, we
identified five main research streams: Climate-Smart Agriculture (CSA),
Site-Specific Management (SSM), Remote Sensing (RS), Internet of Things (IoT),
and Artificial Intelligence (AI). To provide a broad overview of each of these
topics, we analyzed relevant review articles, and we present here the main
achievements and the ongoing challenges. Finally, we showed the trending topics
of the last three years (2017, 2018, 2019).
|
In this thesis, we provide new insights into the theory of cascade feedback
linearization of control systems. In particular, we present a new explicit
class of cascade feedback linearizable control systems, as well as a new
obstruction to the existence of a cascade feedback linearization for a given
invariant control system. These theorems are presented in Chapter 4, where
truncated versions of operators from the calculus of variations are introduced
and explored to prove these new results. This connection reveals new geometry
behind cascade feedback linearization and establishes a foundation for future
exciting work on the subject with important consequences for dynamic feedback
linearization.
|
In this work, we prove a novel one-shot multi-sender decoupling theorem generalising Dupuis' result. We start off with a multipartite quantum state, say
on A1 A2 R, where A1, A2 are treated as the two sender systems and R is the
reference system. We apply independent Haar random unitaries in tensor product
on A1 and A2 and then send the resulting systems through a quantum channel. We
want the channel output B to be almost in tensor product with the untouched reference
R. Our main result shows that this is indeed the case if suitable entropic
conditions are met. An immediate application of our main result is to obtain a
one-shot simultaneous decoder for sending quantum information over a k-sender
entanglement unassisted quantum multiple access channel (QMAC). The rate region
achieved by this decoder is the natural one-shot quantum analogue of the
pentagonal classical rate region. Assuming a simultaneous smoothing conjecture,
this one-shot rate region approaches the optimal rate region of Yard et al. in the
asymptotic iid limit. Our work is the first one to obtain a non-trivial
simultaneous decoder for the QMAC with limited entanglement assistance in both
one-shot and asymptotic iid settings; previous works used unlimited
entanglement assistance.
|
The capability of a reinforcement learning (RL) agent directly depends on the
diversity of learning scenarios the environment generates and how closely it
captures real-world situations. However, existing environments/simulators lack
the support to systematically model distributions over initial states and
transition dynamics. Furthermore, in complex domains such as soccer, the space
of possible scenarios is infinite, which makes it impossible for one research
group to provide a comprehensive set of scenarios to train, test, and benchmark
RL algorithms. To address this issue, for the first time, we adopt an existing
formal scenario specification language, SCENIC, to intuitively model and
generate interactive scenarios. We interfaced SCENIC to the Google Research Soccer environment to create a platform called SCENIC4RL. Using this platform, we
provide a dataset consisting of 36 scenario programs encoded in SCENIC and
demonstration data generated from a subset of them. We share our experimental
results to show the effectiveness of our dataset and the platform to train,
test, and benchmark RL algorithms. More importantly, we open-source our platform to enable the RL community to collectively contribute to constructing a comprehensive set of scenarios.
|
In this paper, we describe novel components for extracting clinically
relevant information from medical conversations which will be available as
Google APIs. We describe a transformer-based Recurrent Neural Network
Transducer (RNN-T) model tailored for long-form audio, which can produce rich
transcriptions including speaker segmentation, speaker role labeling,
punctuation and capitalization. On a representative test set, we compare
performance of RNN-T models with different encoders, units and streaming
constraints. Our transformer-based streaming model performs at about 20% WER on
the ASR task, 6% WDER on the diarization task, 43% SER on periods, 52% SER on
commas, 43% SER on question marks and 30% SER on capitalization. Our recognizer
is paired with a confidence model that utilizes both acoustic and lexical
features from the recognizer. The model performs at about 0.37 NCE. Finally, we describe an RNN-T based tagging model. The performance of the model depends on
the ontologies, with F-scores of 0.90 for medications, 0.76 for symptoms, 0.75
for conditions, 0.76 for diagnosis, and 0.61 for treatments. While there is
still room for improvement, our results suggest that these models are
sufficiently accurate for practical applications.
|
Visual sensors serve as a critical component of the Internet of Things (IoT).
There is an ever-increasing demand for broad applications and higher
resolutions of videos and cameras in smart homes and smart cities, such as in
security cameras. To utilize this large volume of video data generated from
networks of visual sensors for various machine vision applications, it needs to
be compressed and securely transmitted over the Internet. H.266/VVC, as the new
compression standard, brings the highest compression for visual data. To
provide security along with high compression, a selective encryption method for
hiding information of videos is presented for this new compression standard.
Selective encryption methods can lower the computation overhead of the
encryption while keeping the video bitstream format which is useful when the
video goes into untrusted blocks such as transcoding or watermarking. Syntax
elements that represent considerable information are selected for the
encryption, i.e., luma Intra Prediction Modes (IPMs), Motion Vector Difference (MVD), and residual signs. The results of the proposed method are then investigated in terms of visual security and bit rate change. Our experiments
show that the encrypted videos provide higher visual security compared to other
similar works in previous standards, and integration of the presented
encryption scheme into the VVC encoder has little impact on the bit rate
efficiency (results in 2% to 3% bit rate increase).
|
Deep learning based molecular graph generation and optimization has recently
been attracting attention due to its great potential for de novo drug design.
On the one hand, recent models are able to efficiently learn a given graph
distribution, and many approaches have proven very effective to produce a
molecule that maximizes a given score. On the other hand, it was shown by
previous studies that generated optimized molecules are often unrealistic, even
with the inclusion of mechanics to enforce similarity to a dataset of real drug
molecules. In this work we use a hybrid approach, where the dataset
distribution is learned using an autoregressive model while the score
optimization is done using the Metropolis algorithm, biased toward the learned
distribution. We show that the resulting method, which we call learned realism
sampling (LRS), produces empirically more realistic molecules and outperforms
all recent baselines in the task of molecule optimization with similarity
constraints.
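One way to read "Metropolis, biased toward the learned distribution" is a target proportional to the model likelihood times an exponentiated score; the generic sketch below implements a single such Metropolis step, with `log_p_model`, `score`, and `propose` as placeholder callables rather than the paper's components.

```python
import numpy as np

def metropolis_step(mol, log_p_model, score, propose, temperature=1.0, rng=None):
    """One Metropolis step targeting p(mol) proportional to p_model(mol) * exp(score(mol)/T).

    log_p_model -- log-likelihood under the learned autoregressive model
    score       -- property to optimize (e.g., a penalized logP or QED score)
    propose     -- symmetric proposal returning a modified candidate molecule
    """
    rng = rng or np.random.default_rng()
    candidate = propose(mol)
    log_accept = (log_p_model(candidate) - log_p_model(mol)
                  + (score(candidate) - score(mol)) / temperature)
    if np.log(rng.random()) < log_accept:
        return candidate
    return mol
```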
|
In single-reference coupled-cluster (CC) methods, one has to solve a set of
non-linear polynomial equations in order to determine the so-called amplitudes
which are then used to compute the energy and other properties. Although it is
of common practice to converge to the (lowest-energy) ground-state solution, it
is also possible, thanks to tailored algorithms, to access higher-energy roots
of these equations which may or may not correspond to genuine excited states.
Here, we explore the structure of the energy landscape of variational CC (VCC)
and we compare it with its (projected) traditional version (TCC) in the case
where the excitation operator is restricted to paired double excitations
(pCCD). By investigating two model systems (the symmetric stretching of the
linear \ce{H4} molecule and the continuous deformation of the square \ce{H4}
molecule into a rectangular arrangement) in the presence of weak and strong
correlations, the performance of VpCCD and TpCCD is gauged against their
configuration interaction (CI) equivalent, known as doubly-occupied CI (DOCI),
for reference Slater determinants made of ground- or excited-state Hartree-Fock
orbitals or state-specific orbitals optimized directly at the VpCCD level. The
influence of spatial symmetry breaking is also investigated.
|
Multi-agent value-based approaches have recently made great progress, especially value decomposition methods. However, there are still many limitations in
value function factorization. In VDN, the joint action-value function is the
sum of per-agent action-value function while the joint action-value function of
QMIX is the monotonic mixing of per-agent action-value function. To some
extent, QTRAN relaxes the limitation on the joint action-value functions that can be represented, but it has unsatisfactory performance in complex tasks. In this
paper, in order to extend the class of joint value functions that can be
represented, we propose a novel actor-critic method called NQMIX. NQMIX
introduces an off-policy policy gradient on QMIX and modifies its network
architecture, which can remove the monotonicity constraint of QMIX and
implement a non-monotonic value function factorization for the joint
action-value function. In addition, NQMIX takes the state-value as the learning
target, which overcomes the problem in QMIX that the learning target is
overestimated. Furthermore, NQMIX can be extended to continuous action space
settings by introducing deterministic policy gradient on itself. Finally, we
evaluate our actor-critic method on the SMAC domain, and show that it has stronger performance than COMA and QMIX on complex maps with heterogeneous agent types. In addition, our ablation results show that our modification of the mixer is effective.
|
We present a chronology of the formation and early evolution of the Oort
cloud by simulations. These simulations start with the Solar System being born
with planets and asteroids in a stellar cluster orbiting the Galactic center.
Upon ejection from its birth environment, we continue to follow the evolution
of the Solar System while it navigates the Galaxy as an isolated planetary
system. We conclude that the range in semi-major axis between 100au and several
10$^3$\,au still bears the signatures of the Sun being born in a
1000MSun/pc$^3$ star cluster, and that most of the outer Oort cloud formed
after the Solar System was ejected. The ejection of the Solar System, we argue,
happened between 20Myr and 50Myr after its birth. Trailing and leading trails
of asteroids and comets along the Sun's orbit in the Galactic potential are the
by-product of the formation of the Oort cloud. These arms are composed of
material that became unbound from the Solar System when the Oort cloud formed.
Today, the bulk of the material in the Oort cloud ($\sim 70$\%) originates from
the region in the circumstellar disk that was located between $\sim 15$\,au and
$\sim 35$\,au, near the current location of the ice giants and the Centaur
family of asteroids. According to our simulations, this population is
eradicated if the ice-giant planets are born in orbital resonance. Planet
migration or chaotic orbital reorganization occurring while the Solar System is
still a cluster member is, according to our model, inconsistent with the
presence of the Oort cloud. About half the inner Oort cloud, between 100 and
$10^4$\,au, and a quarter of the material in the outer Oort cloud, $\gtrsim
10^4$\,au, could be non-native to the Solar System but was captured from
free-floating debris in the cluster or from the circumstellar disk of other
stars in the birth cluster.
|
We study outer Lipschitz geometry of real semialgebraic or, more general,
definable in a polynomially bounded o-minimal structure over the reals, surface
germs. In particular, any definable H\"older triangle is either Lipschitz
normally embedded or contains some "abnormal" arcs. We show that abnormal arcs
constitute finitely many "abnormal zones" in the space of all arcs, and
investigate geometric and combinatorial properties of abnormal surface germs.
We establish a strong relation between geometry and combinatorics of abnormal
H\"older triangles.
|
We propose a lattice spin model on a cubic lattice that shares many of the
properties of the 3D toric code and the X-cube fracton model. The model, made
of Z_3 degrees of freedom at the links, has the vertex, the cube, and the
plaquette terms. Being a stabilizer code the ground states are exactly solved.
With only the vertex and the cube terms present, we show that the ground state degeneracy (GSD) is 3^(L^3+3L-1), where L is the linear dimension of the cubic
lattice. In addition to fractons, there are free vertex excitations we call the
freeons. With the addition of the plaquette terms, GSD is vastly reduced to
3^3, with fracton, fluxon, and freeon excitations, among which only the freeons
are deconfined. The model is called the AB model if only the vertex (A_v) and
the cube (B_c) terms are present, and the ABC model if in addition the
plaquette terms (C_p) are included. The AC model consisting of vertex and
plaquette terms is the Z_3 3D toric code. The extensive GSD of the AB model
derives from the existence of both local and non-local logical operators that
connect different ground states. The latter operators are identical to the
logical operators of the Z_3 X-cube model. Fracton excitations are immobile and
accompanied by the creation of fluxons - plaquettes having nonzero flux. In the
ABC model, such fluxon creation costs energy and ends up confining the
fractons. Unlike past models of fractons, vertex excitations are free to move
in any direction and pick up a non-trivial statistical phase when passing
through a fluxon or a fracton cluster.
|
Results of an analysis of 60010 photometric observations from the AAVSO international database, spanning 120 years of monitoring, are presented. The periodogram analysis shows a best-fit period of 70.74 d, half of the periods typically published for smaller intervals. Contrary to expectation for
deep/shallow minima, the changes between them are not so regular. There may be
series of deep (or shallow) minima without alternations. There may be two
acting periods of 138.5 days and 70.74 days, so a beat modulation may be expected. The dependence of the phases of deep minima argues for two alternating periods with a characteristic mode lifetime of 30 years. These phenomenological results explain the variability better than the model of chaos.
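A sketch of the kind of periodogram analysis described, using astropy's LombScargle on a synthetic light curve, is given below; the toy data, frequency grid, and column handling are assumptions, not the AAVSO analysis pipeline.

```python
import numpy as np
from astropy.timeseries import LombScargle

# t: observation times in days, mag: magnitudes (placeholder toy light curve).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 120 * 365.25, 5000))
mag = 8.0 + 0.5 * np.sin(2 * np.pi * t / 70.74) + 0.1 * rng.normal(size=t.size)

frequency, power = LombScargle(t, mag).autopower(minimum_frequency=1 / 500.0,
                                                 maximum_frequency=1 / 20.0)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"best-fit period: {best_period:.2f} d")
```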
|
Amid the worldwide spread of SARS-CoV-2 (COVID-19) infection, it is of utmost importance to detect the disease at an early stage, especially in the hot spots of this epidemic. There are more than 110 million infected cases across the globe so far. Due to its promptness and effective results, computed tomography (CT) scan imaging is preferred to reverse-transcription polymerase chain reaction (RT-PCR). Early detection and isolation of the patient is the only possible way of controlling the spread of the disease. Automated analysis of CT scans can provide enormous support in this process. In this article, we propose a novel approach to detect SARS-CoV-2 using CT-scan images. Our method
is based on a very intuitive and natural idea of analyzing shapes, an attempt
to mimic a professional medic. We mainly trace SARS-CoV-2 features by
quantifying their topological properties. We primarily use a tool called
persistent homology, from Topological Data Analysis (TDA), to compute these
topological properties. We train and test our model on the "SARS-CoV-2 CT-scan
dataset" \citep{soares2020sars}, an open-source dataset, containing 2,481
CT-scans of normal and COVID-19 patients. Our model yielded an overall
benchmark F1 score of $99.42\% $, accuracy $99.416\%$, precision $99.41\%$, and
recall $99.42\%$. The TDA techniques have great potential that can be utilized
for efficient and prompt detection of COVID-19. The immense potential of TDA
may be exploited in clinics for rapid and safe detection of COVID-19 globally,
in particular in the low and middle-income countries where RT-PCR labs and/or
kits are in a serious crisis.
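A minimal sketch of the general TDA workflow on an image slice is shown below, using the gudhi library's cubical complexes to compute sublevel-set persistence and turn the intervals into simple summary features; it only illustrates the approach and is not the paper's pipeline, and the choice of gudhi and of the summary statistics is an assumption.

```python
import numpy as np
import gudhi  # assumed available (pip install gudhi)

def persistence_features(image):
    """Summarize 0- and 1-dimensional sublevel-set persistence of a grayscale slice."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=image)
    cc.persistence()  # filtration by pixel intensity
    feats = []
    for dim in (0, 1):
        intervals = np.asarray(cc.persistence_intervals_in_dimension(dim))
        if intervals.size == 0:
            feats += [0.0, 0.0, 0.0]
            continue
        lifetimes = intervals[:, 1] - intervals[:, 0]
        lifetimes = lifetimes[np.isfinite(lifetimes)]
        feats += [float(lifetimes.sum()),
                  float(lifetimes.max()) if lifetimes.size else 0.0,
                  float(lifetimes.size)]
    return np.array(feats)  # such summaries can then feed a standard classifier
```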
|
Observations of positional offsets between the location of X-ray and radio
features in many resolved, extragalactic jets indicate that the emitting
regions are not co-spatial, an important piece of evidence in the debate over
the origin of the X-ray emission on kpc scales. The existing literature is
nearly exclusively focused on jets with sufficiently deep Chandra observations
to yield accurate positions for X-ray features, but most of the known X-ray
jets are detected with tens of counts or fewer, making detailed morphological
comparisons difficult. Here we report the detection of X-ray-to-radio
positional offsets in 15 extragalactic jets from an analysis of 22 sources with
low-count Chandra observations, where we utilized the Low-count Image
Reconstruction Algorithm (LIRA). This algorithm has allowed us to account for
effects such as Poisson background fluctuations and nearby point sources which
have previously made the detection of offsets difficult in shallow
observations. Using this method, we find that in 55% of knots with detectable
offsets, the X-rays peak upstream of the radio, questioning the applicability
of one-zone models, including the IC/CMB model for explaining the X-ray
emission. We also report the non-detection of two previously claimed X-ray
jets. Many, but not all, of our sources follow a loose trend of increasing
offset between the X-ray and radio emission, as well as a decreasing X-ray to
radio flux ratio along the jet.
|
In this paper, we deal with an elliptic problem with the Dirichlet boundary
condition. We operate in Sobolev spaces and the main analytic tool we use is
the Lax-Milgram lemma. First, we present the variational approach of the
problem which allows us to apply different functional analysis techniques. Then
we study thoroughly the well-posedness of the problem. We conclude our work
with a solution of the problem using numerical analysis techniques and the free
software freefem++.
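The paper's computations use FreeFem++; as a language-agnostic illustration of the same type of Dirichlet boundary-value problem, the sketch below solves the 1D model problem -u'' = f, u(0) = u(1) = 0, with centered finite differences in numpy (this is only a toy analogue, not the paper's variational FreeFem++ code).

```python
import numpy as np

def solve_dirichlet_1d(f, n=200):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 by centered finite differences."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)              # interior grid points
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    u = np.linalg.solve(A, f(x))
    return x, u

# Example with f = pi^2 sin(pi x), whose exact solution is sin(pi x).
x, u = solve_dirichlet_1d(lambda x: np.pi**2 * np.sin(np.pi * x))
print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```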
|
The aim of this article is to study semigroups of composition operators on
the BMOA-type spaces $BMOA_p$, and on their "little oh" analogues $VMOA_p$. The
spaces $BMOA_p$ were introduced by R. Zhao as part of the large family of
F(p,q,s) spaces, and are the M\"{o}bius invariant subspaces of the Dirichlet
spaces $D^p_{p-1}$. We study the maximal subspace of strong continuity,
providing a sufficient condition on the infinitesimal generator of ${\phi}$,
under which $[{\phi}_t,BMOA_p]=VMOA_p$, and a related necessary condition in
the case where the Denjoy-Wolff point of the semigroup is in $\mathbb{D}$.
Further, we characterize those semigroups, for which $[{\phi}_t,
BMOA_p]=VMOA_p$, in terms of the resolvent operator of the infinitesimal
generator of $T_t$. In addition we provide a connection between the maximal
subspace of strong continuity and the Volterra-type operators $T_g$. We
characterize the symbols g for which $T_g$ acting from $BMOA$ to $BMOA_1$ is
bounded or compact, thus extending a related result to the case $p=1$. We also
prove that for $1<p<2$ compactness of $T_g$ on $BMOA_p$ is equivalent to weak
compactness.
|
Titanium nitride (TiN) is a paradigm of refractory transition metal nitrides
with great potential in vast applications. Generally, the plasmonic performance
of TiN can be tuned by oxidation, which was thought to be only temperature-,
oxygen partial pressure-, and time-dependent. Regarding the role of
crystallographic orientation in the oxidation and resultant optical properties
of TiN films, little is known thus far. Here we reveal that both the oxidation
resistance behavior and the plasmonic performance of epitaxial TiN films follow
the order of (001) < (110) < (111). The effects of crystallographic orientation
on the lattice constants, optical properties, and oxidation levels of epitaxial
TiN films have been systematically studied by combined high-resolution X-ray
diffraction, spectroscopic ellipsometry, X-ray absorption spectroscopy, and
X-ray photoemission spectroscopy. To further understand the role of
crystallographic orientation in the initial oxidation process of TiN films,
density-functional-theory calculations are carried out, indicating that the energy cost of oxidation follows the order (001) < (110) < (111), consistent with the experiments.
The superior endurance of the (111) orientation against mild oxidation can
largely alleviate the previously stringent technical requirements for the
growth of TiN films with high plasmonic performance. The crystallographic
orientation can also offer an effective controlling parameter to design
TiN-based plasmonic devices with desired peculiarity, e.g., superior chemical
stability against mild oxidation or large optical tunability upon oxidation.
|
Recently, the MiniBooNE experiment at Fermilab has updated the results with
increased data and reported an excess of $560.6 \pm 119.6$ electronlike events
($4.7\sigma$) in the neutrino operation mode. In this paper, we propose a
scenario to account for the excess where a Dirac-type sterile neutrino,
produced by a charged kaon decay through the neutrino mixing, decays into a
leptophilic axionlike particle ($\ell$ALP) and a muon neutrino. The
electron-positron pairs produced from the $\ell$ALP decays can be interpreted
as electronlike events provided that their opening angle is sufficiently small.
In our framework, we consider the $\ell$ALP with a mass $m^{}_a =
20\,\text{MeV}$ and an inverse decay constant $c^{}_e/f^{}_a =
10^{-2}\,\text{GeV}^{-1}$, allowed by the astrophysical and experimental
constraints. Then, after integrating the predicted angular or visible energy
spectra of the $\ell$ALP to obtain the total excess event number, we find that
our scenario with sterile neutrino masses within $150\,\text{MeV}\lesssim
m^{}_N \lesssim 380 \,\text{MeV}$ ($150\,\text{MeV}\lesssim m^{}_N \lesssim 180
\,\text{MeV}$) and neutrino mixing parameters between $10^{-10} \lesssim
|U_{\mu 4}|^2 \lesssim 10^{-8}$ ($3\times 10^{-7} \lesssim |U_{\mu 4}|^2
\lesssim 8 \times10^{-7}$) can explain the MiniBooNE data.
|
Agent-based artificial stock market simulation is an important means of studying financial markets. Based on the assumption that the investors are composed of a main fund and small trend and contrarian investors characterized by four parameters, we simulate and study a kind of financial phenomenon with the characteristics of pyramid schemes. Our simulation results and theoretical
analysis reveal the relationships between the rate of return of the main fund
and the proportion of the trend investors in all small investors, the small
investors' parameters of taking profit and stopping loss, the order size of the
main fund and the strategies adopted by the main fund. Our work is helpful for explaining financial phenomena with the characteristics of pyramid schemes in financial markets, designing trading rules for regulators, and developing trading strategies for investors.
|
We consider estimating the effect of a treatment on the progress of subjects
tested both before and after treatment assignment. A vast literature compares
the competing approaches of modeling the post-test score conditionally on the
pre-test score versus modeling the difference, namely the gain score. Our
contribution resides in analyzing the merits and drawbacks of the two
approaches in a multilevel setting. This is relevant in many fields, for
example education with students nested into schools. The multilevel structure
raises peculiar issues related to the contextual effects and the distinction
between individual-level and cluster-level treatment. We derive approximate
analytical results and compare the two approaches by a simulation study. For an
individual-level treatment our findings are in line with the literature,
whereas for a cluster-level treatment we point out the key role of the cluster
mean of the pre-test score, which favors the conditioning approach in settings
with large clusters.
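A small numpy simulation contrasting the two estimators under discussion (regressing the post-test score on treatment while conditioning on the pre-test, versus regressing the gain score on treatment) is sketched below for a randomized individual-level treatment; the data-generating values are arbitrary and the single-level setup omits the multilevel structure studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_effect = 5000, 2.0
pre = rng.normal(50, 10, n)
treat = rng.integers(0, 2, n)                      # individual-level treatment
post = 5 + 0.8 * pre + true_effect * treat + rng.normal(0, 5, n)

def ols(y, X):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_cond = ols(post, [treat, pre])[1]             # conditioning approach
beta_gain = ols(post - pre, [treat])[1]            # gain-score approach
print(f"conditioning: {beta_cond:.2f}, gain score: {beta_gain:.2f}")
```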
|
We report a systematic measurement of cumulants, $C_{n}$, for net-proton,
proton and antiproton multiplicity distributions, and correlation functions,
$\kappa_n$, for proton and antiproton multiplicity distributions up to the
fourth order in Au+Au collisions at $\sqrt{s_{\mathrm {NN}}}$ = 7.7, 11.5,
14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The $C_{n}$ and $\kappa_n$ are
presented as a function of collision energy, centrality and kinematic
acceptance in rapidity, $y$, and transverse momentum, $p_{T}$. The data were
taken during the first phase of the Beam Energy Scan (BES) program (2010 --
2017) at the BNL Relativistic Heavy Ion Collider (RHIC) facility. The
measurements are carried out at midrapidity ($|y| <$ 0.5) and transverse
momentum 0.4 $<$ $p_{\rm T}$ $<$ 2.0 GeV/$c$, using the STAR detector at RHIC.
We observe a non-monotonic energy dependence ($\sqrt{s_{\mathrm {NN}}}$ = 7.7
-- 62.4 GeV) of the net-proton $C_{4}$/$C_{2}$ with a significance of
3.1$\sigma$ for the 0-5\% central Au+Au collisions. This is consistent with the
expectations of critical fluctuations in a QCD-inspired model. Thermal and
transport model calculations show a monotonic variation with $\sqrt{s_{\mathrm
{NN}}}$. For the multiparticle correlation functions, we observe significant
negative values for a two-particle correlation function, $\kappa_2$, of protons
and antiprotons, which are mainly due to the effects of baryon number
conservation. Furthermore, it is found that the four-particle correlation
function, $\kappa_4$, of protons plays a role in determining the energy
dependence of proton $C_4/C_1$ below 19.6 GeV, which cannot be understood by
the effect of baryon number conservation.
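For reference, the first four cumulants can be computed from central moments of the event-by-event multiplicity; the sketch below applies the standard relations $C_1=\langle N\rangle$, $C_2=\mu_2$, $C_3=\mu_3$, $C_4=\mu_4-3\mu_2^2$ to a toy sample (the Skellam-like sample is a placeholder, not STAR data, and efficiency and centrality corrections are omitted).

```python
import numpy as np

def cumulants(n_events):
    """First four cumulants of a multiplicity distribution from its central moments."""
    n = np.asarray(n_events, dtype=float)
    d = n - n.mean()
    mu2, mu3, mu4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    c1 = n.mean()
    c2 = mu2
    c3 = mu3
    c4 = mu4 - 3.0 * mu2**2
    return c1, c2, c3, c4

# Toy event-by-event net-proton sample.
net_proton = np.random.poisson(5, 100000) - np.random.poisson(3, 100000)
c1, c2, c3, c4 = cumulants(net_proton)
print("C4/C2 =", c4 / c2)
```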
|
We address the challenge of policy evaluation in real-world applications of
reinforcement learning systems where the available historical data is limited
due to ethical, practical, or security considerations. This constrained
distribution of data samples often leads to biased policy evaluation estimates.
To remedy this, we propose that instead of policy evaluation, one should
perform policy comparison, i.e. to rank the policies of interest in terms of
their value based on available historical data. In addition, we present the
Limited Data Estimator (LDE) as a simple method for evaluating and comparing
policies from a small number of interactions with the environment. According to
our theoretical analysis, the LDE is shown to be statistically reliable on
policy comparison tasks under mild assumptions on the distribution of the
historical data. Additionally, our numerical experiments compare the LDE to
other policy evaluation methods on the task of policy ranking and demonstrate
its advantage in various settings.
|
Significant clustering around the rarest luminous quasars is a feature
predicted by dark matter theory combined with number density matching
arguments. However, this expectation is not reflected by observations of
quasars residing in a diverse range of environments. Here, we assess the
tension in the diverse clustering of visible $i$-band dropout galaxies around
luminous $z\sim6$ quasars. Our approach uses a simple empirical method to
derive the median luminosity to halo mass relation, $L_{c}(M_{h})$ for both
quasars and galaxies under the assumption of log-normal luminosity scatter,
$\Sigma_{Q}$ and $\Sigma_{G}$. We show that higher $\Sigma_{Q}$ reduces the
average halo mass hosting a quasar of a given luminosity, thus introducing at
least a partial reversion to the mean in the number count distribution of
nearby Lyman-Break galaxies. We generate a large sample of mock Hubble Space
Telescope fields-of-view centred across rare $z\sim6$ quasars by resampling
pencil beams traced through the dark matter component of the BlueTides
cosmological simulation. We find that diverse quasar environments are expected
for $\Sigma_{Q}>0.4$, consistent with numerous observations and theoretical
studies. However, we note that the average number of galaxies around the
central quasar is primarily driven by galaxy evolutionary processes in
neighbouring halos, as embodied by our parameter $\Sigma_{G}$, instead of a
difference in the large scale structure around the central quasar host,
embodied by $\Sigma_{Q}$. We conclude that models with $\Sigma_{G}>0.3$ are
consistent with current observational constraints on high-z quasars, and that
such a value is comparable to the scatter estimated from hydrodynamical
simulations of galaxy formation.
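As a toy illustration of the statement that larger luminosity scatter $\Sigma_{Q}$ lowers the typical host halo mass of a luminosity-selected quasar (an Eddington-type bias), the sketch below draws halo masses from a steeply falling toy mass function and assigns luminosities with log-normal scatter; all numbers are illustrative assumptions, not the paper's calibration.

```python
# Toy Eddington-bias demo (illustrative numbers only): with a steeply falling
# halo mass function, larger log-normal luminosity scatter Sigma_Q lets more
# abundant low-mass haloes scatter above a fixed luminosity cut, lowering the
# median host halo mass of the selected "quasars".
import numpy as np

rng = np.random.default_rng(1)
log_mh = 11.0 + rng.pareto(2.5, 2_000_000)       # toy halo masses, log10(M_h)
log_lc = 44.0 + 1.0 * (log_mh - 12.0)            # toy median L_c(M_h) relation

for sigma_q in (0.1, 0.3, 0.6):                  # dex of log-normal scatter
    log_l = log_lc + rng.normal(0.0, sigma_q, log_mh.size)
    sel = log_l > 46.0                           # luminosity-selected sample
    print(f"Sigma_Q={sigma_q}: median log10(M_h) of selected = "
          f"{np.median(log_mh[sel]):.2f}  (N={sel.sum()})")
```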
|
Here we develop a method for investigating global strong solutions of
partially dissipative hyperbolic systems in the critical regularity setting.
Compared to the recent works by Kawashima and Xu, we use hybrid Besov spaces
with different regularity exponents in the low and high frequencies. This allows us
to consider more general data and to track the exact dependence of the solution on
the dissipation parameter. Our approach enables us to go beyond the $L^2$
framework in the treatment of the low frequencies of the solution, which is
totally new, to the best of our knowledge. Focus is on the one-dimensional
setting (the multi-dimensional case will be considered in a forthcoming paper)
and, for expository purposes, the first part of the paper is devoted to a toy
model that may be seen as a simplification of the compressible Euler system
with damping. More elaborate systems (including the compressible Euler system
with general increasing pressure law) are considered at the end of the paper.
|
The input space of a neural network with ReLU-like activations is partitioned
into multiple linear regions, each corresponding to a specific activation
pattern of the included ReLU-like activations. We demonstrate that this
partition exhibits the following encoding properties across a variety of deep
learning models: (1) {\it determinism}: almost every linear region contains at
most one training example. We can therefore represent almost every training
example by a unique activation pattern, which is parameterized by a {\it neural
code}; and (2) {\it categorization}: according to the neural code, simple
algorithms, such as $K$-Means, $K$-NN, and logistic regression, can achieve
fairly good performance on both training and test data. These encoding
properties surprisingly suggest that {\it normal neural networks well-trained
for classification behave as hash encoders without any extra effort.} In
addition, the encoding properties exhibit variability in different scenarios.
Further experiments demonstrate that {\it model size}, {\it training time},
{\it training sample size}, {\it regularization}, and {\it label noise}
contribute to shaping the encoding properties, while the impacts of the first
three are dominant. We then define an {\it activation hash phase chart} to
represent the space spanned by model size, training time, training sample
size, and the encoding properties, which is divided into three canonical
regions: {\it under-expressive regime}, {\it critically-expressive regime}, and
{\it sufficiently-expressive regime}. The source code package is available at
\url{https://github.com/LeavesLei/activation-code}.
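A minimal sketch of the "neural code" idea (assuming a toy two-layer ReLU network with random, untrained weights rather than the models studied in the paper): record the binary ReLU activation pattern of each input and fit a simple classifier, here $K$-NN from scikit-learn, on those codes.

```python
# Minimal sketch of the "neural code" idea: the binary ReLU activation pattern
# of a (here: random, untrained) two-layer network is used as a hash-like code,
# and a simple K-NN classifier is fit on these codes.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(X.shape[1], 256)), rng.normal(size=(256, 128))

def neural_code(X):
    h1 = np.maximum(X @ W1, 0.0)          # first ReLU layer
    h2 = np.maximum(h1 @ W2, 0.0)         # second ReLU layer
    return np.hstack([(h1 > 0), (h2 > 0)]).astype(np.float32)  # activation pattern

Xtr, Xte, ytr, yte = train_test_split(neural_code(X), y, random_state=0)
print("K-NN accuracy on neural codes:",
      KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr).score(Xte, yte))
```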
|
Light binary convolutional neural networks (LB-CNN) are particularly useful
when implemented in low-energy computing platforms as required in many
industrial applications. Herein, a framework for optimizing compact LB-CNN is
introduced and its effectiveness is evaluated. The framework is freely
available and may run on free-access cloud platforms, thus requiring no major
investments. The optimized model is saved in the standardized .h5 format and
can be used as input to specialized tools for further deployment into specific
technologies, thus enabling the rapid development of various intelligent image
sensors. The main ingredient in accelerating the optimization of our model,
particularly the selection of binary convolution kernels, is the Chainer/Cupy
machine learning library offering significant speed-ups for training the output
layer as an extreme-learning machine. Additional training of the output layer
using Keras/Tensorflow is included, as it allows an increase in accuracy.
Results for widely used datasets, including MNIST, GTSRB, ORL, and VGG, show a
very good compromise between accuracy and complexity. In particular, for face
recognition problems, a carefully optimized LB-CNN model provides up to 100%
accuracy. Such TinyML solutions are well suited for industrial applications
requiring image recognition with low energy consumption.
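The extreme-learning-machine step mentioned above amounts to a closed-form ridge solve for the output-layer weights on top of fixed (binary) convolutional features; the sketch below shows only that step, with placeholder feature and label arrays, and is not the authors' framework.

```python
# Extreme-learning-machine style output layer: with the convolutional features H
# kept fixed, the output weights W are obtained by a closed-form ridge solve,
# which is what makes this training step fast. Arrays here are placeholders.
import numpy as np

rng = np.random.default_rng(0)
H = (rng.normal(size=(5000, 1024)) > 0).astype(np.float64)   # fixed binary features
Y = np.eye(10)[rng.integers(0, 10, size=5000)]               # one-hot labels
lam = 1e-2                                                    # ridge parameter

W = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ Y)
pred = (H @ W).argmax(axis=1)
print("train accuracy of the closed-form output layer:",
      (pred == Y.argmax(axis=1)).mean())
```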
|
Given a Lie group $G$, we elaborate the dynamics on $T^*T^*G$ and $T^*TG$,
which is given by a Hamiltonian, as well as the dynamics on the Tulczyjew
symplectic space $TT^*G$, which may be defined by a Lagrangian or a Hamiltonian
function. As the trivializations we adapted respect the group structures of the
iterated bundles, we exploit all possible subgroup reductions (Poisson,
symplectic, or both) of the higher-order dynamics.
|
X-ray polarimetry promises an unprecedented look at the structure of
magnetic fields and at the processes underlying the acceleration of particles
up to ultrarelativistic energies in relativistic jets. Crucial pieces of
information are expected from observations of blazars (that are characterized
by the presence of a jet pointing close to the Earth), in particular of the
subclass defined by a synchrotron emission extending to the X-ray band
(so-called high synchrotron peak blazars, HSP). In this review, I give an
account of some of the models and numerical simulations developed to predict
the polarimetric properties of HSP at high energy, contrasting the predictions
of scenarios assuming particle acceleration at shock fronts with those that are
based on magnetic reconnection, and I discuss the prospects for the
observations of the upcoming Imaging X-ray Polarimetry Explorer (IXPE)
satellite.
|
In robotic bin-picking applications, the perception of texture-less, highly
reflective parts is a valuable but challenging task. The high glossiness can
introduce fake edges in RGB images and inaccurate depth measurements, especially
in heavily cluttered bin scenarios. In this paper, we present the ROBI
(Reflective Objects in BIns) dataset, a public dataset for 6D object pose
estimation and multi-view depth fusion in robotic bin-picking scenarios. The
ROBI dataset includes a total of 63 bin-picking scenes captured with two active
stereo cameras: a high-cost Ensenso sensor and a low-cost RealSense sensor. For
each scene, the monochrome/RGB images and depth maps are captured from sampled
view spheres around the scene, and are annotated with accurate 6D poses of
visible objects and an associated visibility score. For evaluating the
performance of depth fusion, we captured the ground-truth depth maps with the
high-cost Ensenso camera, with objects coated in anti-reflective scanning spray.
To show the utility of the dataset, we evaluated the representative algorithms
of 6D object pose estimation and multi-view depth fusion on the full dataset.
Evaluation results demonstrate the difficulty of handling highly reflective
objects, especially in challenging cases with degraded depth data quality, severe
occlusions, and cluttered scenes. The ROBI dataset is available online at
https://www.trailab.utias.utoronto.ca/robi.
|
We give an explicit formula for the zeroth $\mathbb{A}^1$-homology sheaf of a
smooth proper variety. We also provide a simple proof of a theorem of
Kahn-Sujatha which describes hom sets in the birational localization of the
category of smooth varieties.
|
The diffusion of innovations theory has been studied for years. Previous
research efforts mainly focus on key elements, adopter categories, and the
process of innovation diffusion. However, most of them only consider single
innovations. With the development of modern technology, recurrent innovations
gradually come into vogue. In order to reveal the characteristics of recurrent
innovations, we present the first large-scale analysis of the adoption of
recurrent innovations in the context of mobile app updates. Our analysis
reveals the adoption behavior and new adopter categories of recurrent
innovations, as well as the features that have an impact on the adoption
process.
|
Important advances have recently been achieved in developing procedures
yielding uniformly valid inference for a low dimensional causal parameter when
high-dimensional nuisance models must be estimated. In this paper, we review
the literature on uniformly valid causal inference and discuss the costs and
benefits of using uniformly valid inference procedures. Naive estimation
strategies based on regularisation, machine learning, or a preliminary model
selection stage for the nuisance models have finite sample distributions which
are badly approximated by their asymptotic distributions. To solve this serious
problem, estimators which converge uniformly in distribution over a class of
data generating mechanisms have been proposed in the literature. In order to
obtain uniformly valid results in high-dimensional situations, sparsity
conditions for the nuisance models typically need to be imposed, although a double
robustness property holds, whereby if one of the nuisance models is more sparse,
the other nuisance model is allowed to be less sparse. While uniformly valid
inference is a highly desirable property, uniformly valid procedures pay a high
price in terms of inflated variability. Our discussion of this dilemma is
illustrated by the study of a double-selection outcome regression estimator,
which we show is uniformly asymptotically unbiased, but is less variable than
uniformly valid estimators in the numerical experiments conducted.
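As a point of reference, the sketch below implements a generic post-double-selection estimator (Lasso selection for both the outcome and the exposure models, followed by OLS on the union of selected covariates) on simulated data; it is a textbook-style sketch, not necessarily the exact double-selection outcome regression estimator studied in the paper, and the data-generating process is made up.

```python
# Generic post-double-selection sketch on simulated data: select covariates that
# predict the outcome or the exposure with a Lasso, then fit OLS of the outcome
# on the exposure plus the union of selected covariates.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 500, 200
X = rng.normal(size=(n, p))
d = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)            # exposure
y = 1.0 * d + X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=n)  # true effect = 1

sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)
controls = X[:, np.union1d(sel_y, sel_d)]

Z = np.column_stack([np.ones(n), d, controls])
beta = np.linalg.lstsq(Z, y, rcond=None)[0]
print("estimated causal effect of d:", beta[1])
```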
|
The planar Hall effect (PHE), wherein a rotating magnetic field in the plane
of a sample induces oscillating transverse voltage, has recently garnered
attention in a wide range of topological metals and insulators. The observed
twofold oscillations in $\rho_{yx}$ as the magnetic field completes one
rotation are the result of chiral, orbital and/or spin effects. The
antiperovskites $A_3B$O ($A$ = Ca, Sr, Ba; $B$ = Sn, Pb) are topological
crystalline insulators whose low-energy excitations are described by a
generalized Dirac equation for fermions with total angular momentum $J = 3/2$.
We report unusual sixfold oscillations in the PHE of Sr$_3$SnO, which persisted
nearly up to room temperature. Multiple harmonics (twofold, fourfold and
sixfold), which exhibited distinct field and temperature dependencies, were
detected in $\rho_{xx}$ and $\rho_{yx}$. These observations are more diverse
than those in other Dirac and Weyl semimetals and point to a richer interplay
of microscopic processes underlying the PHE in the antiperovskites.
|
In this paper, we derive a stability result for $L_1$ and $L_{\infty}$
perturbations of diffusions under weak regularity conditions on the
coefficients. In particular, the drift terms we consider can be unbounded with
at most linear growth, and we do not require uniform convergence of perturbed
diffusions. Instead, we require a weaker convergence condition in a special
metric introduced in this paper, related to the H\"older norm of the diffusion
matrix differences. Our approach is based on a special version of the
McKean-Singer parametrix expansion.
|
Given a generically finite local extension of valuation rings $V \subset W$,
the question of whether $W$ is the localization of a finitely generated
$V$-algebra is significant for approaches to the problem of local
uniformization of valuations using ramification theory. Hagen Knaf proposed a
characterization of when $W$ is essentially of finite type over $V$ in terms of
classical invariants of the extension of associated valuations. Knaf's
conjecture has been verified in important special cases by Cutkosky and
Novacoski using local uniformization of Abhyankar valuations and resolution of
singularities of excellent surfaces in arbitrary characteristic, and by
Cutkosky for valuation rings of function fields of characteristic $0$ using
embedded resolution of singularities. In this paper we prove Knaf's conjecture
in full generality.
|
The relationship between words in a sentence often tells us more about the
underlying semantic content of a document than its actual words, individually.
In this work, we propose two novel algorithms, called Flexible Lexical Chain II
and Fixed Lexical Chain II. These algorithms combine the semantic relations
derived from lexical chains, prior knowledge from lexical databases, and the
robustness of the distributional hypothesis in word embeddings as building
blocks forming a single system. In short, our approach has three main
contributions: (i) a set of techniques that fully integrate word embeddings and
lexical chains; (ii) a more robust semantic representation that considers the
latent relation between words in a document; and (iii) lightweight word
embeddings models that can be extended to any natural language task. We intend
to assess the knowledge of pre-trained models to evaluate their robustness in
the document classification task. The proposed techniques are tested against
seven word embeddings algorithms using five different machine learning
classifiers over six scenarios in the document classification task. Our results
show that the integration of lexical chains and word embedding representations
sustains state-of-the-art results, even against more complex systems.
|
In this paper, we consider a reconfigurable intelligent surface
(RIS)-assisted two-way relay network, in which two users exchange information
through the base station (BS) with the help of an RIS. By jointly designing the
phase shifts at the RIS and beamforming matrix at the BS, our objective is to
maximize the minimum signal-to-noise ratio (SNR) of the two users, under the
transmit power constraint at the BS. We first consider the single-antenna BS
case, and propose two algorithms to design the RIS phase shifts and the BS
power amplification parameter, namely the SNR-upper-bound-maximization (SUM)
method, and genetic-SNR-maximization (GSM) method. When there are multiple
antennas at the BS, the optimization problem can be approximately addressed by
successively solving two decoupled subproblems, one to optimize the RIS phase
shifts, the other to optimize the BS beamforming matrix. The first subproblem
can be solved by using the SUM or GSM method, while the second subproblem can be
solved by using the optimized beamforming or maximum-ratio beamforming method. The
proposed algorithms have been verified through numerical results with
computational complexity analysis.
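To give a concrete feel for passive-beamforming phase design, the sketch below applies a toy co-phasing heuristic for a single cascaded user-RIS-BS link (not the paper's SUM or GSM algorithms): aligning each element's phase with the phase of its cascaded channel coefficient maximizes the received signal amplitude for that link. The channel model and element count are illustrative assumptions.

```python
# Toy RIS phase-shift design for one cascaded link (not the paper's SUM/GSM):
# choosing theta_n = -angle(h_n * g_n) co-phases all reflected paths, which
# maximizes |sum_n h_n g_n exp(j theta_n)| and hence the received SNR.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                                            # RIS elements
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)   # user -> RIS
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)   # RIS -> BS

theta = -np.angle(h * g)                                          # co-phasing solution
gain_opt = np.abs(np.sum(h * g * np.exp(1j * theta))) ** 2
gain_rand = np.abs(np.sum(h * g * np.exp(1j * rng.uniform(0, 2 * np.pi, N)))) ** 2
print(f"co-phased |channel|^2: {gain_opt:.1f},  random phases: {gain_rand:.1f}")
```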
|
We give a double copy construction for the symmetries of the self-dual
sectors of Yang-Mills (YM) and gravity, in the light-cone formulation. We find
an infinite set of double copy constructible symmetries. We focus on two
families which correspond to the residual diffeomorphisms on the gravitational
side. For the first one, we find novel non-perturbative double copy rules in
the bulk. The second family has a more striking structure, as a
non-perturbative gravitational symmetry is obtained from a perturbatively
defined symmetry on the YM side.
At null infinity, we find the YM origin of the subset of extended
Bondi-Metzner-Sachs (BMS) symmetries that preserve the self-duality condition.
In particular, holomorphic large gauge YM symmetries are double copied to
holomorphic supertranslations. We also identify the single copy of
superrotations with certain non-gauge YM transformations that to our knowledge
have not been previously presented in the literature.
|
We use a Hamiltonian (transition matrix) description of height-restricted
Dyck paths in the plane, in which generating functions for the paths arise as
matrix elements of the propagator, to evaluate the length and area generating
function for paths with arbitrary starting and ending points, expressing it as
a rational combination of determinants. Exploiting a connection between random
walks and quantum exclusion statistics that we previously established, we
express this generating function in terms of grand partition functions for
exclusion particles in a finite harmonic spectrum and present an alternative,
simpler form for its logarithm that makes its polynomial structure explicit.
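To make the transfer-matrix viewpoint concrete, the following sketch (a generic illustration, not the paper's area-weighted computation) obtains the length generating function for walks on heights $0,\dots,h$ as a matrix element of the resolvent $(I - xT)^{-1}$ and expands it, so the coefficients of $x^{2n}$ count height-restricted Dyck paths.

```python
# Length generating function for height-restricted Dyck paths via the transfer
# matrix T (up/down steps between adjacent heights 0..h): paths from height a
# to height b are counted by the matrix element [(I - x*T)^{-1}]_{a,b}.
import sympy as sp

x = sp.symbols('x')
h = 3                                             # maximum allowed height
T = sp.Matrix(h + 1, h + 1, lambda i, j: 1 if abs(i - j) == 1 else 0)
G = sp.simplify(((sp.eye(h + 1) - x * T).inv())[0, 0])   # start and end at height 0
print("G(x) =", G)
print("series:", sp.series(G, x, 0, 10))          # x^(2n) coefficients count the paths
```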
|
We prove that for a domain $\Omega \subset \mathbb{R}^n$, being
$(\epsilon,\delta)$ in the sense of Jones is equivalent to being an extension
domain for bmo$(\Omega)$, the nonhomogeneous version of the space of functions
of bounded mean oscillation on $\Omega$. In the process, we demonstrate that
these conditions are equivalent to local versions of two other conditions
characterizing uniform domains, one involving the presence of length cigars
between nearby points and the other a local version of the quasi-hyperbolic
uniform condition. Our results show that the definition of bmo$(\Omega)$ is
closely connected to the geometry of the domain.
|
There is accelerating interest in developing memory devices using
antiferromagnetic (AFM) materials, motivated by the possibility of
electrically controlling AFM order via spin-orbit torques and of reading it out via
magnetoresistive effects. Recent studies have shown, however, that high current
densities create non-magnetic contributions to resistive switching signals in
AFM/heavy metal (AFM/HM) bilayers, complicating their interpretation. Here we
introduce an experimental protocol to unambiguously distinguish current-induced
magnetic and nonmagnetic switching signals in AFM/HM structures, and
demonstrate it in IrMn$_3$/Pt devices. A six-terminal double-cross device is
constructed, with an IrMn$_3$ pillar placed on one cross. The differential
voltage is measured between the two crosses with and without IrMn$_3$ after
each switching attempt. For a wide range of current densities, reversible
switching is observed only when write currents pass through the cross with the
IrMn$_3$ pillar, eliminating any possibility of non-magnetic switching
artifacts. Micromagnetic simulations support our findings, indicating a complex
domain-mediated switching process.
|
Graph Convolutional Networks (GCNs) are increasingly adopted in large-scale
graph-based recommender systems. Training a GCN requires the minibatch generator
to traverse the graph and sample the sparsely located neighboring nodes to obtain
their features. Since real-world graphs often exceed the capacity of GPU
memory, current GCN training systems keep the feature table in host memory and
rely on the CPU to collect sparse features before sending them to the GPUs.
This approach, however, puts tremendous pressure on host memory bandwidth and
the CPU. This is because the CPU needs to (1) read sparse features from memory,
(2) write features into memory as a dense format, and (3) transfer the features
from memory to the GPUs. In this work, we propose a novel GPU-oriented data
communication approach for GCN training, where GPU threads directly access
sparse features in host memory through zero-copy accesses without much CPU
help. By removing the CPU gathering stage, our method significantly reduces the
consumption of the host resources and data access latency. We further present
two important techniques to achieve high host memory access efficiency by the
GPU: (1) automatic data access address alignment to maximize PCIe packet
efficiency, and (2) asynchronous zero-copy access and kernel execution to fully
overlap data transfer with training. We incorporate our method into PyTorch and
evaluate its effectiveness using several graphs with sizes up to 111 million
nodes and 1.6 billion edges. In a multi-GPU training setup, our method is
65-92% faster than the conventional data transfer method, and can even match
the performance of all-in-GPU-memory training for some graphs that fit in GPU
memory.
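The zero-copy kernel access described above relies on GPU threads dereferencing host pointers, which is not exposed through PyTorch's public Python API; the hedged sketch below therefore only illustrates the related overlap idea, using pinned host memory and a side CUDA stream so that feature transfer for the next minibatch can proceed while the current one trains. The feature-table size, batch size, and helper name `prefetch` are illustrative assumptions.

```python
# Sketch of overlapping feature transfer with training using pinned host memory
# and a separate CUDA stream (requires a CUDA device). This shows the overlap
# idea only; the paper's zero-copy GPU access to host memory is not available
# via the public PyTorch Python API.
import torch

features = torch.randn(100_000, 128).pin_memory()      # host-resident feature table
copy_stream = torch.cuda.Stream()

def prefetch(node_ids):
    """Gather a minibatch's features and ship them to the GPU on a side stream."""
    batch = features[node_ids].pin_memory()             # CPU gather (the step the paper removes)
    with torch.cuda.stream(copy_stream):
        return batch.to('cuda', non_blocking=True)      # async H2D copy from pinned memory

ids = torch.randint(0, features.size(0), (1024,))
next_batch = prefetch(ids)
# ... run the training step for the current minibatch here ...
torch.cuda.current_stream().wait_stream(copy_stream)    # make next_batch safe to consume
```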
|
In this paper, we study the phenomenon of quantum interference in the
presence of external gravitational fields described by alternative theories of
gravity. We analyze both non-relativistic and relativistic effects induced by
the underlying curved background on a superposed quantum system. In the
non-relativistic regime, it is possible to come across a gravitational
counterpart of the Bohm-Aharonov effect, which results in a phase shift
proportional to the derivative of the modified Newtonian potential. On the
other hand, beyond the Newtonian approximation, the relativistic nature of
gravity plays a crucial r\^ole. Indeed, the existence of a gravitational time
dilation between the two arms of the interferometer causes a loss of coherence
that is in principle observable in quantum interference patterns. We work in
the context of generalized quadratic theories of gravity to compare their
physical predictions with the analogous outcomes in general relativity. In so
doing, we show that the decoherence rate strongly depends on the gravitational
model under investigation, which means that this approach turns out to be a
promising test bench to probe and discriminate among all the extensions of
Einstein's theory in future experiments.
|
The analysis of the double-diffusion model and
$\mathbf{H}(\mathrm{div})$-conforming method introduced in [B\"urger, M\'endez,
Ruiz-Baier, SINUM (2019), 57:1318--1343] is extended to the time-dependent
case. In addition, the efficiency and reliability analysis of residual-based
{\it a posteriori} error estimators for the steady, semi-discrete, and fully
discrete problems is established. The resulting methods are applied to simulate
the sedimentation of small particles in salinity-driven flows. The method
consists of Brezzi-Douglas-Marini approximations for velocity and compatible
piecewise discontinuous pressures, whereas Lagrangian elements are used for
concentration and salinity distribution. Numerical tests confirm the properties
of the proposed family of schemes and of the adaptive strategy guided by the
{\it a posteriori} error indicators.
|
As robots are becoming more and more ubiquitous in human environments, it
will be necessary for robotic systems to better understand and predict human
actions. However, this is not an easy task, at times not even for us humans,
but based on a relatively structured set of possible actions, appropriate cues,
and the right model, this problem can be computationally tackled. In this
paper, we propose to use an ensemble of long short-term memory (LSTM) networks
for human action prediction. To train and evaluate models, we used the MoGaze
dataset - currently the most comprehensive dataset capturing poses of human
joints and the human gaze. We have thoroughly analyzed the MoGaze dataset and
selected a reduced set of cues for this task. Our model can predict (i) which
of the labeled objects the human is going to grasp, and (ii) which of the macro
locations the human is going to visit (such as table or shelf). We have
exhaustively evaluated the proposed method and compared it to individual cue
baselines. The results suggest that our LSTM model slightly outperforms the
gaze baseline in single object picking accuracy, but achieves better accuracy
in macro object prediction. Furthermore, we have also analyzed the prediction
accuracy when the gaze is not used, and in this case, the LSTM model
considerably outperformed the best single-cue baseline.
|
Pretraining on large labeled datasets is a prerequisite to achieve good
performance in many computer vision tasks like 2D object recognition, video
classification etc. However, pretraining is not widely used for 3D recognition
tasks where state-of-the-art methods train models from scratch. A primary
reason is the lack of large annotated datasets because 3D data is both
difficult to acquire and time consuming to label. We present a simple
self-supervised pretraining method that can work with any 3D data - single or
multiview, indoor or outdoor, acquired by varied sensors, without 3D
registration. We pretrain standard point cloud and voxel based model
architectures, and show that joint pretraining further improves performance. We
evaluate our models on 9 benchmarks for object detection, semantic
segmentation, and object classification, where they achieve state-of-the-art
results and can outperform supervised pretraining. We set a new
state-of-the-art for object detection on ScanNet (69.0% mAP) and SUNRGBD (63.5%
mAP). Our pretrained models are label efficient and improve performance for
classes with few examples.
|
Open quantum systems can be systematically controlled by making changes to
their environment. A well-known example is the spontaneous radiative decay of
an electronically excited emitter, such as an atom or a molecule, which is
significantly influenced by the feedback from the emitter's environment, for
example, by the presence of reflecting surfaces. A prerequisite for a
deliberate control of an open quantum system is to reveal the physical
mechanisms that determine the state of the system. Here, we investigate the
Bose-Einstein condensation of a photonic Bose gas in an environment with
controlled dissipation and feedback realised by a potential landscape that
effectively acts as a Mach-Zehnder interferometer. Our measurements offer a
highly systematic picture of Bose-Einstein condensation under non-equilibrium
conditions. We show that the condensation process is an interplay between
minimising the energy of the condensate, minimising particle losses and
maximising constructive feedback from the environment. In this way our
experiments reveal physical mechanisms involved in the formation of a
Bose-Einstein condensate, which typically remain hidden when the system is
close to thermal equilibrium. Beyond a deeper understanding of Bose-Einstein
condensation, our results open new pathways in quantum simulation with optical
Bose-Einstein condensates.
|
Non-abelian anyons are highly desired for topological quantum computation
purposes, with Majorana fermions providing a promising route, particularly zero
modes with non-trivial mutual statistics. Yet realizing Majorana zero modes in
matter is a challenge, with various proposals in chiral superconductors,
nanowires, and spin liquids, but no clear experimental examples. Heavy fermion
materials have long been known to host Majorana fermions at two-channel Kondo
impurity sites, however, these impurities cannot be moved adiabatically and
generically occur in metals, where the absence of a gap removes the topological
protection. Here, we consider an ordered lattice of these two-channel Kondo
impurities, which at quarter-filling form a Kondo insulator. We show that
topological defects in this state will host Majorana zero modes, or possibly
more complicated parafermions. These states are protected by the insulating gap
and may be adiabatically braided, providing the novel possibility of realizing
topological quantum computation in heavy fermion materials.
|
Immersion and interaction are important features of a driving
simulator. To improve these characteristics, this paper proposes a low-cost and
marker-less driver head-tracking framework based on a head pose estimation
model, which allows the view of the simulator to automatically align with the
driver's head pose. The proposed method only uses an RGB camera, without any
other hardware or markers. To handle the error of the head pose estimation
model, this paper proposes an adaptive Kalman filter. By analyzing the error
distribution of the estimation model and the user experience, the proposed Kalman
filter includes an adaptive observation noise coefficient and a loop-closure
module, which adaptively moderate the smoothness of the curve and keep the
curve stable near the initial position. The experiments show that the proposed
method is feasible, and it can be used with different head pose estimation
models.
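The general flavour of such a filter can be sketched with a one-dimensional Kalman filter whose observation-noise variance is inflated when the innovation is large; the update rule, thresholds, and gains below are illustrative assumptions, not the paper's exact coefficients or loop-closure logic.

```python
# 1-D Kalman filter with an adaptive observation-noise coefficient: when the
# innovation is large (likely an estimation-model error), the measurement is
# trusted less, which smooths the output. Thresholds and gains are illustrative.
import numpy as np

def adaptive_kf(measurements, q=1e-3, r0=0.05, inflate=10.0, thresh=0.3):
    x, p = measurements[0], 1.0
    out = []
    for z in measurements:
        p += q                                   # predict (static state model)
        innovation = z - x
        r = r0 * (inflate if abs(innovation) > thresh else 1.0)  # adaptive R
        k = p / (p + r)                          # Kalman gain
        x += k * innovation                      # update
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
yaw = 0.2 * np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 0.03, 200)
yaw[50] += 1.0                                   # a spurious pose-estimator outlier
print("filtered value at outlier:", adaptive_kf(yaw)[50].round(3), "raw:", yaw[50].round(3))
```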
|
In this article we solve the ancient problem of perfect tuning in all keys
and present a system where all harmonies are conserved at once. It will become
clear, when we expose our solution, why this solution could not be found in the
way in which musicians and scientists have approached the problem in the
past. We indeed follow a different approach. We first construct a
mathematical representation of the complete harmony by means of a vector space,
where the different tones are represented in a completely harmonic way for all keys
at once. One of the essential differences with earlier systems is that tones
will no longer be ordered within an octave, and we find the octave-like
ordering back as a projection of our system. But it is exactly by this
projection procedure that the possibility to create a harmonic system for all
keys at once is lost. So we see why the old way of ordering tones within an
octave could not lead to a solution of the problem. We indicate in which way a
real musical instrument could be built that realizes our harmonic scheme.
Because tones are no longer ordered within an octave such a musical instrument
will be rather unconventional. It is however a physically realizable musical
instrument, at least for the Pythagorean harmony. We also indicate how perfect
harmonies of every dimension could be realized by computers.
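The obstruction motivating the "ancient problem" can be stated in one line: stacking pure fifths never closes onto octaves. The short check below computes the Pythagorean comma, the mismatch between twelve fifths and seven octaves (a standard fact quoted for context, not taken from this paper).

```python
# The Pythagorean comma: twelve pure fifths (3/2) overshoot seven octaves (2),
# so no finite octave-based scale can keep all fifths and octaves pure at once.
from fractions import Fraction

comma = Fraction(3, 2) ** 12 / Fraction(2, 1) ** 7
print("12 fifths / 7 octaves =", comma, "=", float(comma))   # 531441/524288 ~ 1.0136
```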
|
We use idealized N-body simulations of equilibrium stellar disks embedded
within coarse-grained dark matter haloes to study the effects of spurious
collisional heating on disk structure and kinematics. Collisional heating
artificially increases the vertical and radial velocity dispersions of disk
stars, as well as the thickness and size of disks; the effects are felt at all
galacto-centric radii. The integrated effects of collisional heating are
determined by the mass of dark matter halo particles (or equivalently, by the
number of particles at fixed halo mass), their local density and characteristic
velocity dispersion, but are largely insensitive to the stellar particle mass.
The effects can therefore be reduced by increasing the mass resolution of dark
matter in cosmological simulations, with limited benefits from increasing the
baryonic (or stellar) mass resolution. We provide a simple empirical model that
accurately captures the effects of spurious collisional heating on the
structure and kinematics of simulated disks, and use it to assess the
importance of disk heating for simulations of galaxy formation. We find that
the majority of state-of-the-art zoom simulations, and a few of the
highest-resolution, smallest-volume cosmological runs, are in principle able to
resolve thin stellar disks in Milky Way-mass haloes, but most large-volume
cosmological simulations cannot. For example, dark matter haloes resolved with
fewer than $\approx 10^6$ particles will collisionally heat stars near the
stellar half-mass radius such that their vertical velocity dispersion increases
by $\gtrsim 10$ per cent of the halo's virial velocity in approximately one
Hubble time.
|
Scattering polarization tends to dominate the linear polarization signals of
the Ca II 8542 A line in weakly magnetized areas ($B \lesssim 100$ G),
especially when the observing geometry is close to the limb. In this paper we
evaluate the degree of applicability of existing non-LTE spectral line
inversion codes (which assume that the spectral line polarization is due to the
Zeeman effect only) at inferring the magnetic field vector and, particularly,
its transverse component. To this end, we use the inversion code STiC to
extract the strength and orientation of the magnetic field from synthetic
spectropolarimetric data generated with the Hanle-RT code. The latter accounts
for the generation of polarization through scattering processes as well as the
joint actions of the Hanle and the Zeeman effects. We find that, when the
transverse component of the field is stronger than $\sim$80 G, the inversion
code is able to retrieve accurate estimates of the transverse field strength as
well as its azimuth in the plane of the sky. Below this threshold, the
scattering polarization signatures become the major contributors to the linear
polarization signals and often mislead the inversion code into severely over-
or under-estimating the field strength. Since the line-of-sight component of
the field is derived from the circular polarization signal, which is not
affected by atomic alignment, the corresponding inferences are always good.
|
We present lattice results for the non-perturbative Collins-Soper (CS)
kernel, which describes the energy-dependence of transverse momentum-dependent
parton distributions (TMDs). The CS kernel is extracted from the ratios of
first Mellin moments of quasi-TMDs evaluated at different nucleon momenta. The
analysis is done with dynamical $N_f=2+1$ clover fermions for the CLS ensemble
H101 ($a=0.0854\,\mathrm{fm}$, $m_{\pi}=m_K=422\,\mathrm{MeV}$). The computed
CS kernel is in good agreement with experimental extractions and previous
lattice studies.
|
One crucial objective of multi-task learning is to align distributions across
tasks so that the information between them can be transferred and shared.
However, existing approaches have focused only on matching the marginal feature
distribution while ignoring the semantic information, which may hinder the
learning performance. To address this issue, we propose to leverage the label
information in multi-task learning by exploring the semantic conditional
relations among tasks. We first theoretically analyze the generalization bound
of multi-task learning based on the notion of Jensen-Shannon divergence, which
provides new insights into the value of label information in multi-task
learning. Our analysis also leads to a concrete algorithm that jointly matches
the semantic distribution and controls label distribution divergence. To
confirm the effectiveness of the proposed method, we first compare the
algorithm with several baselines on some benchmarks and then test the
algorithms under label space shift conditions. Empirical results demonstrate
that the proposed method could outperform most baselines and achieve
state-of-the-art performance, particularly showing the benefits under the label
shift conditions.
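To indicate how a Jensen-Shannon term can enter a multi-task objective, the snippet below computes a symmetric JS divergence between the predicted class distributions of two tasks in PyTorch; treating it as an added alignment penalty is our illustrative reading, not the paper's exact algorithm.

```python
# Jensen-Shannon divergence between two tasks' predicted class distributions,
# usable as an alignment penalty added to the task losses. Illustrative only.
import torch
import torch.nn.functional as F

def js_divergence(logits_a, logits_b):
    p = F.softmax(logits_a, dim=-1)
    q = F.softmax(logits_b, dim=-1)
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * (a.clamp_min(1e-12).log() - b.clamp_min(1e-12).log())).sum(-1)
    return (0.5 * kl(p, m) + 0.5 * kl(q, m)).mean()

logits_a = torch.randn(32, 10)
logits_b = torch.randn(32, 10)
loss = js_divergence(logits_a, logits_b)   # add lambda * loss to the joint objective
print(float(loss))
```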
|
We study the giant component problem slightly above the critical regime for
percolation on Poissonian random graphs in the scale-free regime, where the
vertex weights and degrees have a diverging second moment. Critical percolation
on scale-free random graphs has been observed to have incredibly subtle
features that are markedly different compared to those in random graphs with
converging second moment. In particular, the critical window for percolation
depends sensitively on whether we consider single- or multi-edge versions of
the Poissonian random graph.
In this paper, and together with our companion paper with Bhamidi, we build a
bridge between these two cases. Our results characterize the part of the barely
supercritical regime where the sizes of the giant components are approximately the
same for the single- and multi-edge settings. The methods for establishing
concentration of the giant component for the single- and multi-edge versions are quite
different. While the analysis in the multi-edge case is based on scaling limits
of exploration processes, the single-edge setting requires identification of a
core structure inside certain high-degree vertices that forms the giant
component.
|
We present two generalized hybrid kinetic-Hall magnetohydrodynamics (MHD)
models describing the interaction of a two-fluid bulk plasma, which consists of
thermal ions and electrons, with energetic, suprathermal ion populations
described by Vlasov dynamics. The dynamics of the thermal components are
governed by standard fluid equations in the Hall MHD limit with the electron
momentum equation providing an Ohm's law with Hall and electron pressure terms
involving a gyrotropic electron pressure tensor. The coupling of the bulk,
low-energy plasma with the energetic particle dynamics is accomplished through
the current density (current coupling scheme; CCS) and the ion pressure tensor
appearing in the momentum equation (pressure coupling scheme; PCS) in the first
and the second model, respectively. The CCS is a generalization of two
well-known models, because in the limit of vanishing energetic and thermal ion
densities we recover the standard Hall MHD and the hybrid
kinetic-ions/fluid-electron model, respectively. This provides us with the
capability to study in a continuous manner the global impact of the energetic
particles in a regime extending from vanishing to dominant energetic particle
densities. The noncanonical Hamiltonian structures of the CCS and PCS, which
can be exploited to study equilibrium and stability properties through the
energy-Casimir variational principle, are identified. As a first application
here, we derive a generalized Hall MHD Grad--Shafranov--Bernoulli system for
translationally symmetric equilibria with anisotropic electron pressure and
kinetic effects owing to the presence of energetic particles using the PCS.
|
Given a set $B\subset \mathbb{N}$, we investigate the existence of a set
$A\subset \mathbb{N}$ such that the sumset $A+B = \{a + b\,:\, a\in A, b\in
B\}$ has a prescribed asymptotic density. A set $B = \{b_1, b_2, \ldots\}$ is
said to be highly sparse if $B$ is either finite or infinite with
$\lim_{j\rightarrow\infty} b_{j+1}/b_j = \infty$. In this note, we prove that
if $B$ is highly sparse, such a set $A$ exists. This generalizes a recent
result by Faisant et al.
|
We present the novel Efficient Line Segment Detector and Descriptor (ELSD) to
simultaneously detect line segments and extract their descriptors in an image.
Unlike the traditional pipelines that conduct detection and description
separately, ELSD utilizes a shared feature extractor for both detection and
description, to provide the essential line features to the higher-level tasks
like SLAM and image matching in real time. First, we design the one-stage
compact model, and propose to use the mid-point, angle, and length as the
minimal representation of a line segment, which also guarantees
center-symmetry. A non-centerness suppression step is proposed to filter out the
fragmented line segments caused by line intersections. A fine offset
prediction is designed to refine the mid-point localization. Second, the line
descriptor branch is integrated with the detector branch, and the two branches
are jointly trained in an end-to-end manner. In the experiments, the proposed
ELSD achieves the state-of-the-art performance on the Wireframe dataset and
YorkUrban dataset, in both accuracy and efficiency. The line description
ability of ELSD also outperforms the previous works on the line matching task.
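The minimal line-segment parameterization mentioned above converts straightforwardly to endpoints; the helper below shows that mapping (our own illustration of the representation, not code from ELSD).

```python
# (mid-point, angle, length) <-> endpoints for a 2-D line segment: the
# representation is center-symmetric because swapping the endpoints only flips
# the angle by pi while the mid-point and length are unchanged.
import numpy as np

def to_endpoints(cx, cy, angle, length):
    dx, dy = 0.5 * length * np.cos(angle), 0.5 * length * np.sin(angle)
    return (cx - dx, cy - dy), (cx + dx, cy + dy)

def to_midrep(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    return (0.5 * (x1 + x2), 0.5 * (y1 + y2),
            np.arctan2(y2 - y1, x2 - x1), np.hypot(x2 - x1, y2 - y1))

p1, p2 = to_endpoints(10.0, 5.0, np.pi / 6, 8.0)
print(p1, p2, to_midrep(p1, p2))
```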
|
Quantum graphs are defined by a Laplacian acting on the edges of a
metric graph, with boundary conditions at each vertex such that the resulting
operator, L, is self-adjoint. We use Neumann boundary conditions. The spectrum
of L does not determine the graph uniquely, that is, there exist non-isomorphic
graphs with the same spectra. There are few known examples of pairs of
non-isomorphic but isospectral quantum graphs. We have found all pairs of
isospectral but non-isomorphic equilateral connected quantum graphs with at
most seven vertices. We find three isospectral triplets including one involving
a loop. We also present a combinatorial method to generate arbitrarily large
sets of isospectral graphs and give an example of an isospectral set of four.
This has been done using computer algebra. We discuss the possibility
that our program is incorrect, present our tests, and open-source it for
inspection at github.com/meapistol/Spectra-of-graphs.
|
Spin ensembles coupled to optical cavities provide a powerful platform for
engineering synthetic quantum matter. Recently, we demonstrated that cavity
mediated infinite range interactions can induce fast scrambling in a Heisenberg
$XXZ$ spin chain (Phys. Rev. Research {\bf 2}, 043399 (2020)). In this work, we
analyze the kaleidoscope of quantum phases that emerge in this system from the
interplay of these interactions. Employing both analytical spin-wave theory as
well as numerical DMRG calculations, we find that there is a large parameter
regime where the continuous $U(1)$ symmetry of this model is spontaneously
broken and the ground state of the system exhibits $XY$ order. This kind of
symmetry breaking and the consequent long range order is forbidden for short
range interacting systems by the Mermin-Wagner theorem. Intriguingly, we find
that the $XY$ order can be induced by even an infinitesimally weak infinite
range interaction. Furthermore, we demonstrate that in the $U(1)$ symmetry
broken phase, the half chain entanglement entropy violates the area law
logarithmically. Finally, we discuss a proposal to verify our predictions in
state-of-the-art quantum emulators.
|
The detection and quantification of quantum coherence play significant roles
in quantum information processing. We present an efficient way of tomographic
witnessing for both theoretical and experimental detection of coherence. We
prove that a coherence witness is optimal if and only if all of its diagonal
elements are zero. Naturally, we obtain a bona fide homographic measure of
coherence given by the sum of the absolute values of the real and the imaginary
parts of the non-diagonal entries of a density matrix, together with its
interesting relations with other coherence measures such as the $l_1$-norm
coherence and the robustness of coherence.
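For concreteness, the snippet below evaluates the measure described above (sum of the absolute values of the real and imaginary parts of the off-diagonal entries) alongside the familiar $l_1$-norm of coherence for a small density matrix; the example state is arbitrary.

```python
# Coherence measures for a density matrix rho: the l1-norm of coherence sums
# |rho_ij| over i != j, while the measure discussed above sums |Re rho_ij| +
# |Im rho_ij| over i != j. The example state below is arbitrary.
import numpy as np

psi = np.array([1.0, 1.0j, 0.5]) / np.linalg.norm([1.0, 1.0j, 0.5])
rho = np.outer(psi, psi.conj())
off = ~np.eye(rho.shape[0], dtype=bool)

c_l1 = np.abs(rho[off]).sum()
c_re_im = (np.abs(rho[off].real) + np.abs(rho[off].imag)).sum()
print(f"l1-norm coherence: {c_l1:.4f},  |Re|+|Im| measure: {c_re_im:.4f}")
```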
|
The growing size and complexity of software in embedded systems poses new
challenges to the safety assessment of embedded control systems. In industrial
practice, the control software is mostly treated as a black box during the
system's safety analysis. The appropriate representation of the failure
propagation of the software is a pressing need in order to increase the
accuracy of safety analyses. However, it also increases the effort for creating
and maintaining the safety analysis models (such as fault trees) significantly.
In this work, we present a method to automatically generate Component Fault
Trees from Continuous Function Charts. This method aims at generating the
failure propagation model of the detailed software specification. Hence,
control software can be included into safety analyses without additional manual
effort required to construct the safety analysis models of the software.
Moreover, safety analyses created during early system specification phases can
be verified by comparing them with the automatically generated ones in the
detailed specification phase.
|
We initiate the study of computational complexity of graph coverings, aka
locally bijective graph homomorphisms, for {\em graphs with semi-edges}. The
notion of graph covering is a discretization of coverings between surfaces or
topological spaces, a notion well known and deeply studied in classical
topology. Graph covers have found applications in discrete mathematics for
constructing highly symmetric graphs, and in computer science in the theory of
local computations. In 1991, Abello et al. asked for a classification of the
computational complexity of deciding if an input graph covers a fixed target
graph, in the ordinary setting (of graphs with only edges). Although many
general results are known, the full classification is still open. In spite of
that, we propose to study the more general case of covering graphs composed of
normal edges (including multiedges and loops) and so-called semi-edges.
Semi-edges are becoming increasingly popular in modern topological graph
theory, as well as in mathematical physics. They also naturally occur in the
local computation setting, since they are lifted to matchings in the covering
graph. We show that the presence of semi-edges makes the covering problem
considerably harder; e.g., it is no longer sufficient to specify the vertex
mapping induced by the covering, but one necessarily has to deal with the edge
mapping as well. We show some solvable cases, and completely characterize the
complexity of the already very nontrivial problem of covering one- and
two-vertex (multi)graphs with semi-edges. Our NP-hardness results are proven
for simple input graphs, and in the case of regular two-vertex target graphs,
even for bipartite ones. This provides a strengthening of previously known
results for covering graphs without semi-edges, and may contribute to better
understanding of this notion and its complexity.
|
In this paper we introduce the stack of polarized twisted conics and we use
it to give a new point of view on $\overline{\mathcal{M}}_2$. In particular, we
present a new and independent approach to the computation of the integral Chow
ring of $\overline{\mathcal{M}}_2$, previously determined by Eric Larson.
|
The proton electric and magnetic form factors, $G_E$ and $G_M$, are
intrinsically connected to the spatial distribution of charge and magnetization
in the proton. For decades, Rosenbluth separation measurements of the angular
dependence of elastic e$^-$-p scattering were used to extract $G_E$ and $G_M$.
More recently, polarized electron scattering measurements, aiming to improve
the precision of $G_E$ extractions, showed significant disagreement with
Rosenbluth measurements at large momentum transfers ($Q^2$). This discrepancy
is generally attributed to neglected two-photon exchange (TPE) corrections.
At larger $Q^2$ values, a new `Super-Rosenbluth' technique was used to
improve the precision of the Rosenbluth extraction, allowing for a better
quantification of the discrepancy, while comparisons of e$^+$-p and e$^-$-p
scattering indicated the presence of TPE corrections, but at $Q^2$ values below
where a clear discrepancy is observed. In this work, we demonstrate the
significant benefits to combining the Super-Rosenbluth technique with positron
beam measurements. This approach provides a greater kinematic reach and is
insensitive to some of the key systematic uncertainties in previous positron
measurements.
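For reference, the Rosenbluth separation relies on the reduced elastic cross section being linear in the virtual-photon polarization parameter (standard one-photon-exchange formulas, quoted here for context rather than taken from this work):
\[
\sigma_R \;=\; \tau\, G_M^2(Q^2) \;+\; \varepsilon\, G_E^2(Q^2),
\qquad
\tau = \frac{Q^2}{4M_p^2},
\qquad
\varepsilon = \left[1 + 2(1+\tau)\tan^2\!\frac{\theta_e}{2}\right]^{-1},
\]
so that a linear fit of $\sigma_R$ versus $\varepsilon$ at fixed $Q^2$ yields $\tau G_M^2$ from the intercept and $G_E^2$ from the slope.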
|
A matroid is uniform if and only if it has no minor isomorphic to
$U_{1,1}\oplus U_{0,1}$ and is paving if and only if it has no minor isomorphic
to $U_{2,2}\oplus U_{0,1}$. This paper considers, more generally, when a
matroid $M$ has no $U_{k,k}\oplus U_{0,\ell}$-minor for a fixed pair of
positive integers $(k,\ell)$. Calling such a matroid $(k,\ell)$-uniform, it is
shown that this is equivalent to the condition that every rank-$(r(M)-k)$ flat
of $M$ has nullity less than $\ell$. Generalising a result of Rajpal, we prove
that for any pair $(k,\ell)$ of positive integers and prime power $q$, only
finitely many simple cosimple $GF(q)$-representable matroids are $(k,\ell)$-uniform.
Consequently, if Rota's Conjecture holds, then for every prime power $q$, there
exists a pair $(k_q,\ell_q)$ of positive integers such that every excluded
minor of $GF(q)$-representability is $(k_q,\ell_q)$-uniform. We also determine
all binary $(2,2)$-uniform matroids and show the maximally $3$-connected
members to be $Z_5\backslash t, AG(4,2), AG(4,2)^*$ and a particular self-dual
matroid $P_{10}$. Combined with results of Acketa and Rajpal, this completes
the list of binary $(k,\ell)$-uniform matroids for which $k+\ell\leq 4$.
|
Tracking non-rigidly deforming scenes using range sensors has numerous
applications including computer vision, AR/VR, and robotics. However, due to
occlusions and physical limitations of range sensors, existing methods only
handle the visible surface, thus causing discontinuities and incompleteness in
the motion field. To this end, we introduce 4DComplete, a novel data-driven
approach that estimates the non-rigid motion for the unobserved geometry.
4DComplete takes as input a partial shape and motion observation, extracts 4D
time-space embedding, and jointly infers the missing geometry and motion field
using a sparse fully-convolutional network. For network training, we
constructed a large-scale synthetic dataset called DeformingThings4D, which
consists of 1972 animation sequences spanning 31 different animals or humanoid
categories with dense 4D annotation. Experiments show that 4DComplete 1)
reconstructs high-resolution volumetric shape and motion field from a partial
observation, 2) learns an entangled 4D feature representation that benefits
both shape and motion estimation, 3) yields more accurate and natural
deformation than classic non-rigid priors such as As-Rigid-As-Possible (ARAP)
deformation, and 4) generalizes well to unseen objects in real-world sequences.
|
The automatic analysis of fine art paintings presents a number of novel
technical challenges to artificial intelligence, computer vision, machine
learning, and knowledge representation quite distinct from those arising in the
analysis of traditional photographs. The most important difference is that many
realist paintings depict stories or episodes in order to convey a lesson,
moral, or meaning. One early step in automatic interpretation and extraction of
meaning in artworks is the identifications of figures (actors). In Christian
art, specifically, one must identify the actors in order to identify the
Biblical episode or story depicted, an important step in understanding the
artwork. We designed an automatic system based on deep convolutional neural
networks and a simple knowledge database to identify saints throughout six
centuries of Christian art, based in large part upon saints' symbols or
attributes. Our work represents initial steps in the broad task of automatic
semantic interpretation of messages and meaning in fine art.
|
The scaling of the turbulent spectra provides a key measurement that allows
one to discriminate between different theoretical predictions of turbulence. In the
solar wind, this has driven a large number of studies dedicated to this issue
using in-situ data from various orbiting spacecraft. While a semblance of
consensus exists regarding the scaling in the MHD and dispersive ranges, the
precise scaling in the transition range and the actual physical mechanisms that
control it remain open questions. Using the high-resolution data in the inner
heliosphere from Parker Solar Probe (PSP) mission, we find that the sub-ion
scales (i.e., at the frequency f ~ [2, 9] Hz) follow a power-law spectrum f^a
with a spectral index a varying between -3 and -5.7. Our results also show that
there is a trend toward an anti-correlation between the spectral slopes and
the power amplitudes at the MHD scales, in agreement with previous studies: the
higher the power amplitude, the steeper the spectrum at sub-ion scales. A
similar trend toward an anti-correlation between steep spectra and increasing
normalized cross helicity is found, in agreement with previous theoretical
predictions about the imbalanced solar wind. We discuss the ubiquitous nature
of the ion transition range in solar wind turbulence in the inner heliosphere.
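A spectral index such as the $a$ quoted above is typically obtained from a power-law fit over a fixed frequency band; the sketch below does this with a least-squares fit in log-log space on synthetic data (the frequencies, band, true slope, and noise level are illustrative, not PSP values).

```python
# Estimate a spectral index 'a' by fitting log P(f) = a*log f + const over a
# chosen frequency band. The synthetic spectrum below uses an illustrative
# slope of -3.5; a real analysis would use measured PSD values.
import numpy as np

rng = np.random.default_rng(0)
f = np.logspace(-2, 1.5, 400)                            # Hz (illustrative)
psd = f ** (-3.5) * np.exp(rng.normal(0, 0.2, f.size))   # toy power spectrum

band = (f > 2.0) & (f < 9.0)                             # sub-ion-scale band
a, intercept = np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)
print(f"fitted spectral index a = {a:.2f}")
```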
|
The theory of spectral filtering is a remarkable tool to understand the
statistical properties of learning with kernels. For least squares, it allows
one to derive various regularization schemes that yield faster convergence rates of
the excess risk than with Tikhonov regularization. This is typically achieved
by leveraging classical assumptions called source and capacity conditions,
which characterize the difficulty of the learning task. In order to understand
estimators derived from other loss functions, Marteau-Ferey et al. have
extended the theory of Tikhonov regularization to generalized self-concordant
loss functions (GSC), which contain, e.g., the logistic loss. In this paper, we
go a step further and show that fast and optimal rates can be achieved for GSC
by using the iterated Tikhonov regularization scheme, which is intrinsically
related to the proximal point method in optimization, and overcomes the
limitation of the classical Tikhonov regularization.
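In the quadratic (least-squares) case, iterated Tikhonov has an explicit recursion, each step being a Tikhonov problem centred at the previous iterate, which is exactly a proximal-point step; the sketch below illustrates this for ridge regression (the regularization strength is deliberately large for visibility, and the GSC/logistic case treated in the paper would require an inner solver instead of a closed form).

```python
# Iterated Tikhonov for least squares: w_{k+1} = argmin ||X w - y||^2 + lam * ||w - w_k||^2,
# i.e. a proximal-point iteration whose fixed point is the ordinary least-squares
# solution, approached with less bias than a single Tikhonov (ridge) step.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
w_true = rng.normal(size=20)
y = X @ w_true + 0.1 * rng.normal(size=200)

lam, w = 500.0, np.zeros(20)               # large lam chosen to make the effect visible
A = X.T @ X + lam * np.eye(20)
for k in range(5):
    w = np.linalg.solve(A, X.T @ y + lam * w)   # Tikhonov step centred at current w
    print(f"iter {k}: ||w - w_true|| = {np.linalg.norm(w - w_true):.4f}")
```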
|
It is well known that there are asymmetric dependence structures between
financial returns. In this paper we use a new nonparametric measure of local
dependence, the local Gaussian correlation, to improve portfolio allocation. We
extend the classical mean-variance framework, and show that the portfolio
optimization is straightforward using our new approach, only relying on a
tuning parameter (the bandwidth). The new method is shown to outperform the
equally weighted (1/N) portfolio and the classical Markowitz portfolio for
monthly asset returns data.
|
We report on the fabrication of fractal dendrites by laser induced melting of
aluminum alloys. We target boron carbide (B4C), one of the most
effective radiation-absorbing materials, which is characterised by a low
coefficient of thermal expansion. Due to the high fragility of B4C crystals, we
were able to introduce its nanoparticles into a stabilization aluminum matrix
of AA385.0. The high intensity laser field action led to the formation of
composite dendrite structures under the effect of local surface melting. The
modelling of the dendrite cluster growth confirms its fractal nature and sheds
light on the pattern behavior of the resulting quasicrystal structure.
|
Let $H$ be a simple undirected graph. The family of all matchings of $H$
forms a simplicial complex called the matching complex of $H$. Here, we give a
classification of all graphs with a Gorenstein matching complex. We also study
when the matching complex of $H$ is Cohen-Macaulay and, in certain classes of
graphs, we fully characterize those graphs which have a Cohen-Macaulay matching
complex. In particular, we characterize when the matching complex of a graph
with girth at least 5 or a complete graph is Cohen-Macaulay.
|
Ultra-relativistic heavy-ion collisions are expected to produce the strongest
electromagnetic fields in the known Universe. These highly Lorentz-contracted
fields can manifest themselves as linearly polarized quasi-real photons that
can interact via the Breit-Wheeler process to produce lepton anti-lepton pairs.
The energy and momentum distribution of the produced dileptons carry
information about the strength and spatial distribution of the colliding
fields. Recently it has been demonstrated that photons from these fields can
interact even in heavy-ion collisions with hadronic overlap, providing a purely
electromagnetic probe of the produced medium. In this review we discuss the
recent theoretical progress and experimental advances for mapping the
ultra-strong electromagnetic fields produced in heavy-ion collisions via
measurement of the Breit-Wheeler process.
|