Elasticities in depth, width, kernel size and resolution have been explored
in compressing deep neural networks (DNNs). Recognizing that the kernels in a
convolutional neural network (CNN) are 4-way tensors, we further exploit a new
elasticity dimension along the input-output channels. Specifically, a novel
nuclear-norm rank minimization factorization (NRMF) approach is proposed to
dynamically and globally search for the reduced tensor ranks during training.
Correlation between tensor ranks across multiple layers is revealed, and a
graceful tradeoff between model size and accuracy is obtained. Experiments then
show the superiority of NRMF over the previous non-elastic variational Bayesian
matrix factorization (VBMF) scheme.
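As a rough illustration of the kind of objective involved, the sketch below adds a nuclear-norm penalty on a matricization of each 4-way convolution kernel to a standard training loss. The unfolding choice, the weight `lam`, and the helper names are illustrative assumptions, not the paper's exact NRMF formulation.

```python
import torch

def kernel_nuclear_norm(kernel: torch.Tensor) -> torch.Tensor:
    # kernel: (C_out, C_in, kH, kW); matricize along the channel mode and
    # return the nuclear norm (sum of singular values), a convex surrogate
    # for the rank of the unfolding.
    mat = kernel.reshape(kernel.shape[0], -1)
    return torch.linalg.svdvals(mat).sum()

# Inside a training step (lam is a hypothetical trade-off weight):
# loss = task_loss + lam * sum(kernel_nuclear_norm(m.weight)
#                              for m in model.modules()
#                              if isinstance(m, torch.nn.Conv2d))
```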
|
Electronic topology in metallic kagome compounds is under intense scrutiny.
We present transport experiments in Na2/3CoO2 in which the Na order
differentiates a Co kagome sub-lattice in the triangular CoO2 layers. Hall and
magnetoresistance (MR) data under high fields give evidence for the coexistence
of light and heavy carriers. At low temperatures, the dominant light carrier
conductivity at zero field is suppressed by a B-linear MR suggesting Dirac-like
quasiparticles. Lifshitz transitions induced at large B and T unveil the lower
mobility carriers. They display a negative B^2 MR due to scattering from
magnetic moments likely pertaining to a flat band. We underline an analogy with
heavy-fermion physics.
|
The work attempts to unify the conceptual model of the user's virtual
computer environment, with the aim of combining the local environments of
operating systems and the global Internet environment into a single virtual
environment built on general principles. To solve this problem, it is proposed
to unify the conceptual basis of these environments. The existing conceptual
basis of operating systems, built on the "desktop" metaphor, contains redundant
concepts tied to computer architecture. Using the spatial conceptual basis
"object - place", with the concepts of "domain", "site", and "data object",
makes it possible to completely virtualize the user environment, separating it
from hardware concepts. The virtual concept of "domain" becomes a universal way
of structuring the user's space. Introducing this concept into the description
of operating system environments integrates, at the mental level, the
structures of the local and global spaces. The concept of a "personal domain"
can replace the concept of a "personal computer" in the user's mind. The
virtual concept of "site", as an environment for activities and data storage,
makes it possible to abandon concepts such as "application" (program) and
"memory device". In the user's mind, a site is a virtual environment that
includes both places for storing data objects and places for working with them.
Introducing the concept of "site" into the structure of operating system
environments, and the concept of "data site" into the structure of the global
network, integrates the structures of the global and local spaces in the user's
mind. Finally, introducing the concept of "portal", as a means of integrating
the information necessary for interaction, ensures the methodological
homogeneity of the user's work in a single virtual environment.
|
Existing gradient-based meta-learning approaches to few-shot learning assume
that all tasks share the same input feature space. However, in real-world
scenarios, the input structures of tasks often differ; that is, tasks may vary
in the number of input modalities or data types. Existing meta-learners cannot
handle such a heterogeneous task distribution (HTD), because there is not only
global meta-knowledge shared across tasks but also type-specific knowledge that
distinguishes each type of task. To deal with task heterogeneity and promote
fast within-task adaptation for each type of task, in this paper we propose
HetMAML, a task-heterogeneous model-agnostic meta-learning framework that can
capture both type-specific and globally shared knowledge and strike a balance
between knowledge customization and generalization. Specifically, we design a
multi-channel backbone module that encodes the input of each type of task into
equal-length sequences of modality-specific embeddings. We then propose a
task-aware iterative feature aggregation network that automatically takes into
account the context of task-specific input structures and adaptively projects
the heterogeneous input spaces onto the same lower-dimensional embedding space
of concepts. Our experiments on six task-heterogeneous datasets demonstrate
that HetMAML successfully leverages type-specific and globally shared
meta-parameters for heterogeneous tasks and achieves fast within-task
adaptation for each type of task.
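A minimal sketch of the multi-channel backbone idea, assuming simple linear encoders per modality; the dimensions and the aggregation into a sequence are illustrative assumptions, not HetMAML's actual architecture.

```python
import torch
import torch.nn as nn

class MultiChannelBackbone(nn.Module):
    """Encode each input modality with its own channel into a d-dimensional
    embedding, yielding an equal-length sequence of modality-specific
    embeddings regardless of the task's input structure."""
    def __init__(self, modality_dims, d=64):
        super().__init__()
        self.channels = nn.ModuleList(
            [nn.Sequential(nn.Linear(m, d), nn.ReLU()) for m in modality_dims])

    def forward(self, inputs):  # inputs: list of tensors, one per modality
        embeddings = [ch(x) for ch, x in zip(self.channels, inputs)]
        return torch.stack(embeddings, dim=1)  # (batch, n_modalities, d)

# A task with two modalities (e.g., a 32-dim and a 100-dim data type):
backbone = MultiChannelBackbone([32, 100])
seq = backbone([torch.randn(8, 32), torch.randn(8, 100)])  # shape (8, 2, 64)
```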
|
Video anomaly detection is a challenging task because of the diversity of
abnormal events. For this task, methods based on reconstruction and prediction
are widely used in recent works, built on the assumption that, having learned
only from normal data, anomalies cannot be reconstructed or predicted as well
as normal patterns, i.e., anomalies yield larger errors. In this paper, we
propose to discriminate anomalies from normal ones by the duality of
normality-granted optical flow, which is conducive to predicting normal frames
but adverse to abnormal ones. The normality-granted optical flow is predicted
from a single frame, to keep the motion knowledge focused on normal patterns.
Meanwhile, we extend the appearance-motion correspondence scheme from frame
reconstruction to prediction, which not only helps to learn the knowledge about
object appearances and correlated motion, but also reflects the fact that
motion is the transformation between appearances. We also introduce a margin
loss to enhance the learning of frame prediction. Experiments on standard
benchmark datasets demonstrate the impressive performance of our approach.
|
A recent study reported that plasma emission can be generated by energetic
electrons with a DGH distribution via the electron cyclotron maser instability
(ECMI) in plasmas characterized by a large ratio of plasma oscillation
frequency to electron gyro-frequency ($\omega_{pe}/\Omega_{ce}$). In this
study, on the basis of the ECMI-plasma emission mechanism, we examine the
double plasma resonance (DPR) effect and the corresponding plasma emission at
both harmonic (H) and fundamental (F) bands using PIC simulations with various
$\omega_{pe}/\Omega_{ce}$. This allows us to directly simulate, for the first
time, the zebra-pattern (ZP) features observed in solar radio bursts. We find
that (1) the simulations reproduce the DPR effect nicely for the upper hybrid
(UH) and Z modes, as seen from their variation of intensity and linear growth
rate with $\omega_{pe}/\Omega_{ce}$, (2) the intensity of the H emission is
stronger than that of the F emission by $\sim$ 2 orders of magnitude and varies
periodically with increasing $\omega_{pe}/\Omega_{ce}$, while the F emission is
too weak to be significant; we therefore suggest that it is the H emission
accounting for solar ZPs, (3) the peak-valley contrast of the total intensity
of H is $\sim 4$, and the peak lies around integer values of
$\omega_{pe}/\Omega_{ce}$ (= 10 and 11) for the present parameter setup. We
also evaluate the effect of energy of energetic electrons on the
characteristics of ECMI-excited waves and plasma radiation. The study provides
novel insight into the physical origin of ZPs of solar radio bursts.
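For reference, the DPR condition behind this periodicity is commonly written as the upper-hybrid frequency matching a harmonic of the electron gyro-frequency,

$$\omega_{UH} = \sqrt{\omega_{pe}^2 + \Omega_{ce}^2} \approx s\,\Omega_{ce}, \qquad s = 2, 3, \dots$$

so that for $\omega_{pe}/\Omega_{ce} \gg 1$ the gain maxima occur near integer values of $\omega_{pe}/\Omega_{ce}$, consistent with the peaks around 10 and 11 reported above.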
|
Nature-inspired algorithms are emerging as a suitable method for addressing
critical issues in Wireless Sensor Networks (WSNs), notably the limited sensor
lifetime. Achieving optimal network coverage is one of those
challenging issues that need to be examined critically before any network
setup. Optimal network coverage not only minimizes the consumption of the
limited energy of battery-driven sensors but also reduces the sensing of
redundant information. In this paper, we focus on nature-inspired optimization algorithms
concerning the optimal coverage in WSNs. In the first half of the paper, we
have briefly discussed the taxonomy of the optimization algorithms along with
the problem domains in WSNs. In the second half of the paper, we have compared
the performance of two nature-inspired algorithms for getting optimal coverage
in WSNs. The first one is a combined Improved Genetic Algorithm and Binary Ant
Colony Algorithm (IGA-BACA), and the second one is Lion Optimization (LO). The
simulation results confirm that LO gives better network coverage, and the
convergence rate of LO is faster than that of IGA-BACA. Further, we observed
that optimal coverage is achieved in fewer generations with LO than with
IGA-BACA. This review will help researchers explore applications in this field
and beyond.
Keywords: Optimal Coverage, Bio-inspired Algorithm, Lion Optimization, WSNs.
|
This paper proposes a novel scheme for mitigating strong interferences, which
is applicable to various wireless scenarios, including full-duplex wireless
communications and uncoordinated heterogeneous networks. As strong interferences
can saturate the receiver's analog-to-digital converters (ADCs), they need to be
mitigated both before and after the ADCs, i.e., via hybrid processing. The key
idea of the proposed scheme, namely the Hybrid Interference Mitigation using
Analog Prewhitening (HIMAP), is to insert an M-input M-output analog phase
shifter network (PSN) between the receive antennas and the ADCs to spatially
prewhiten the interferences, which requires no signal information but only an
estimate of the covariance matrix. After interference mitigation by the PSN
prewhitener, the preamble can be synchronized, the signal channel response can
be estimated, and thus a minimum mean squared error (MMSE) beamformer can be
applied in the digital domain to further mitigate the residual interferences.
The simulation results verify that the HIMAP scheme can suppress interferences
80 dB stronger than the signal by using off-the-shelf phase shifters (PSs) of
6-bit resolution.
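A toy numerical sketch of the two-stage idea, ignoring ADC saturation and the 6-bit phase quantization of a real PSN (the whitening matrix here is an idealized exact operator, not the HIMAP implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4                                          # receive antennas / PSN ports
h = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # signal channel
g = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # interference channel
R = 1e8 * np.outer(g, g.conj()) + np.eye(M)    # interference-plus-noise covariance (~80 dB)

# Stage 1 (analog): spatial prewhitening with W = L^{-1}, where R = L L^H,
# so the whitened interference-plus-noise covariance is W R W^H = I.
L = np.linalg.cholesky(R)
W = np.linalg.inv(L)

# Stage 2 (digital): beamformer on the whitened channel; with white residual
# noise the matched filter coincides with the MMSE solution up to a scalar.
h_w = W @ h
w = h_w / np.linalg.norm(h_w)

print("residual interference power:", abs(w.conj() @ (W @ g)) ** 2)
```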
|
In the standard (classic) approach, galaxy clustering measurements from
spectroscopic surveys are compressed into baryon acoustic oscillations and
redshift space distortions measurements, which in turn can be compared to
cosmological models. Recent works have shown that avoiding this intermediate
step and fitting the full power spectrum signal directly (full modelling) leads
to much tighter constraints on cosmological parameters. Here we show where this
extra information is coming from and extend the classic approach with one
additional effective parameter, such that it captures, effectively, the same
amount of information as the full modelling approach, but in a
model-independent way. We validate this new method (ShapeFit) on mock catalogs,
and compare its performance to the full modelling approach, finding both to
deliver equivalent results. The ShapeFit extension of the classic approach
raises standard analyses to the level of full-modelling ones in terms of
information content, with the advantages of (i) being more model-independent,
(ii) offering an understanding of the origin of the extra cosmological
information, and (iii) allowing robust control of the impact of observational
systematics.
|
INTRODUCTION: Wald's test, the likelihood ratio (LR) test and Rao's score test,
with their corresponding confidence intervals (CIs), are the three most common
estimators of the parameters of Generalized Linear Models. On finite samples,
these estimators are biased. The objective of this work is to analyze the
coverage errors of the CI estimators in small samples for the log-Poisson model
(i.e. estimation of the incidence rate ratio), with innovative evaluation
criteria taking into account the overestimation/underestimation unbalance of
coverage errors and the variable inclusion rate and follow-up in
epidemiological studies.
METHODS: Exact calculations equivalent to Monte Carlo simulations with an
infinite number of simulations have been used. Underestimation errors (due to
the upper bound of the CI) and overestimation coverage errors (due to the lower
bound of the CI) have been split. The level of confidence has been analyzed
from $0.95$ to $1-10^{-6}$, allowing the interpretation of P-values below
$10^{-6}$ for hypothesis tests.
RESULTS: The LR bias was small (actual coverage errors less than 1.5 times
the nominal errors) when the expected number of events in both groups was above
1, even when unbalanced (e.g. 10 events in one group vs 1 in the other). For
95% CI, Wald's and the Score estimators showed high bias even when the number
of events was large ($\geq 20$ in both groups) when groups were unbalanced. For
small P-values ($<10^{-6}$), the LR kept acceptable bias while Wald's and the
score P-values had severely inflated errors ($\times 100$).
CONCLUSION: The LR test and LR CI should be used.
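A toy version of such an exact calculation, assuming a Wald CI for the log incidence-rate ratio and (as a simplification) conditioning on non-zero counts in both groups; scipy supplies the exact Poisson probabilities:

```python
import numpy as np
from scipy.stats import norm, poisson

def wald_exact_coverage(mu1, mu2, level=0.95, nmax=500):
    """Exact coverage of the Wald CI for log(mu1/mu2) with independent Poisson
    counts, obtained by enumerating outcomes weighted by their exact
    probabilities (equivalent to infinitely many Monte Carlo draws)."""
    z = norm.ppf(0.5 + level / 2)
    x = np.arange(1, nmax)                    # Wald CI is undefined for zero counts
    p1, p2 = poisson.pmf(x, mu1), poisson.pmf(x, mu2)
    est = np.subtract.outer(np.log(x), np.log(x))        # log(x1 / x2)
    se = np.sqrt(np.add.outer(1.0 / x, 1.0 / x))
    covered = np.abs(est - np.log(mu1 / mu2)) <= z * se
    prob = np.outer(p1, p2)
    return (prob * covered).sum() / prob.sum()

print(wald_exact_coverage(10, 1))   # inspect actual vs the nominal 0.95
```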
|
Recovering the wavelength from disordered speckle patterns has become an
exciting prospect as a wavelength measurement method due to its high resolution
and simple design. In previous studies, panel cameras have been used to detect
the subtle differences between speckle patterns. However, the volume,
bandwidth, sensitivity, and cost (in non-visible bands) associated with panel
cameras have hindered their utility in broader applications, especially in
high-speed and low-cost measurements. In this work, we break the limitations imposed
by panel cameras by using a quadrant detector (QD) to capture the speckle
images. In the scheme of QD detection, speckle images are directly filtered by
convolution, where the kernel is equal to one quarter of a speckle pattern.
First, we proposed an up-sampling algorithm to pre-process the QD data. Then a
new convolution neural network (CNN) based algorithm, shallow residual network
(SRN), was proposed to train the up-sampled images. The experimental results
show that a resolution of 4 fm (~ 0.5 MHz) was achieved at 1550 nm with an
updating speed of ~ 1 kHz. More importantly, the SRN shows excellent
robustness. The wavelength can be precisely reconstructed from raw QD data
without any averaging, even in the presence of appreciable noise. The low-cost,
simple structure, high speed and robustness of this design promote the
speckle-based wavemeter to the industrial grade. In addition, without the
restriction of panel cameras, it is believed that this wavemeter opens new
routes in many other fields, such as distributed optical fiber sensors, optical
communications, and laser frequency stabilization.
|
The hidden ancestor graph is a new stochastic model for a vertex-labelled
multigraph $G$ in which the observable vertices are the leaves $L$ of a random
rooted tree $T$, whose edges and non-leaf nodes are hidden. The likelihood of
an edge in $G$ between two vertices in $L$ depends on the height of their
lowest common ancestor in $T$. The label of a vertex $v \in L$ depends on a
randomized label inheritance mechanism within $T$ such that vertices with the
same parent often have the same label. High label assortativity, high average
local clustering, heavy tailed vertex degree distribution, and sparsity, can
all coexist in this model. The agreement edges (end point labels agree) and the
conflict edges (end point labels differ) constitute complementary subgraphs,
useful for testing anomaly correction algorithms. Instances with a hundred
million edges can easily be built on a workstation in minutes.
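A toy generator in the spirit of this model; the complete $b$-ary tree, the mutation rate, and the edge-probability decay law are all illustrative choices, not the paper's exact specification.

```python
import itertools
import random

def hidden_ancestor_graph(depth=6, b=3, labels="AB", mutate=0.1, decay=0.5):
    random.seed(1)
    lab = {(): random.choice(labels)}          # label of the hidden root
    for d in range(depth):                     # inherit labels down the tree,
        for path in itertools.product(range(b), repeat=d):
            for c in range(b):                 # ...mutating with prob `mutate`
                lab[path + (c,)] = (lab[path] if random.random() > mutate
                                    else random.choice(labels))
    leaves = list(itertools.product(range(b), repeat=depth))
    edges = []
    for i, u in enumerate(leaves):
        for v in leaves[i + 1:]:
            k = next(j for j in range(depth) if u[j] != v[j])  # depth of the LCA
            if random.random() < decay ** (depth - k):  # lower LCA => likelier edge
                edges.append((u, v))
    return leaves, lab, edges

leaves, lab, edges = hidden_ancestor_graph()
agree = sum(lab[u] == lab[v] for u, v in edges)
print(f"{len(edges)} edges, {agree} agreement, {len(edges) - agree} conflict")
```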
|
We give a simple proof for the global convergence of gradient descent in
training deep ReLU networks with the standard square loss, and show some of its
improvements over the state-of-the-art. In particular, while prior works
require all the hidden layers to be wide with width at least $\Omega(N^8)$ ($N$
being the number of training samples), we require a single wide layer of
linear, quadratic or cubic width depending on the type of initialization.
Unlike many recent proofs based on the Neural Tangent Kernel (NTK), our proof
need not track the evolution of the entire NTK matrix, or more generally, any
quantities related to the changes of activation patterns during training.
Instead, we only need to track the evolution of the output at the last hidden
layer, which can be done much more easily thanks to the Lipschitz property of
ReLU. Some highlights of our setting: (i) all the layers are trained with
standard gradient descent, (ii) the network has standard parameterization as
opposed to the NTK one, and (iii) the network has a single wide layer as
opposed to having all wide hidden layers as in most of NTK-related results.
|
Vision models trained on multimodal datasets can benefit from the wide
availability of large image-caption datasets. A recent model (CLIP) was found
to generalize well in zero-shot and transfer learning settings. This could
imply that linguistic or "semantic grounding" confers additional generalization
abilities to the visual feature space. Here, we systematically evaluate various
multimodal architectures and vision-only models in terms of unsupervised
clustering, few-shot learning, transfer learning and adversarial robustness. In
each setting, multimodal training produced no additional generalization
capability compared to standard supervised visual training. We conclude that
work is still required for semantic grounding to help improve vision models.
|
Detecting cyber-anomalies and attacks is a rising concern in the domain of
cybersecurity. Artificial intelligence, particularly machine learning
techniques, can be used to tackle these
issues. However, the effectiveness of a learning-based security model may vary
depending on the security features and the data characteristics. In this paper,
we present "CyberLearning", a machine learning-based cybersecurity modeling
with correlated-feature selection, and a comprehensive empirical analysis on
the effectiveness of various machine learning based security models. In our
CyberLearning modeling, we take into account a binary classification model for
detecting anomalies, and a multi-class classification model for various types of
cyber-attacks. To build the security model, we first employ ten popular
machine learning classification techniques, such as naive Bayes, Logistic
regression, Stochastic gradient descent, K-nearest neighbors, Support vector
machine, Decision Tree, Random Forest, Adaptive Boosting, eXtreme Gradient
Boosting, as well as Linear discriminant analysis. We then present the
artificial neural network-based security model considering multiple hidden
layers. The effectiveness of these learning-based security models is examined
by conducting a range of experiments utilizing the two most popular security
datasets, UNSW-NB15 and NSL-KDD. Overall, this paper aims to serve as a
reference point for data-driven security modeling through our experimental
analysis and findings in the context of cybersecurity.
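The ten classifiers listed above map directly onto standard scikit-learn / xgboost estimators; the sketch below uses default hyperparameters and does not reproduce the paper's exact settings or preprocessing.

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from xgboost import XGBClassifier

models = {
    "NB": GaussianNB(), "LR": LogisticRegression(max_iter=1000),
    "SGD": SGDClassifier(), "KNN": KNeighborsClassifier(),
    "SVM": SVC(), "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(), "AB": AdaBoostClassifier(),
    "XGB": XGBClassifier(), "LDA": LinearDiscriminantAnalysis(),
}

# X_train, y_train would come from a preprocessed UNSW-NB15 or NSL-KDD split
# (binary labels for anomaly detection, multi-class labels for attack types):
# for name, m in models.items():
#     m.fit(X_train, y_train)
#     print(name, m.score(X_test, y_test))
```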
|
The concept of a p-frame, defined via b-linear functionals in the setting of
n-Banach spaces, is presented, and a few of its properties are discussed; among
them, that the Cartesian product of two p-frames is again a p-frame. Finally,
perturbation results and the stability of p-frames in n-Banach spaces with
respect to b-linear functionals are studied.
|
Cannabis legalization has been welcomed by many U.S. states but its role in
escalation from tobacco e-cigarette use to cannabis vaping is unclear.
Meanwhile, cannabis vaping has been associated with new lung diseases and
rising adolescent use. To understand the impact of cannabis legalization on
escalation, we design an observational study to estimate the causal effect of
recreational cannabis legalization on the development of pro-cannabis attitude
for e-cigarette users. We collect and analyze Twitter data which contains
opinions about cannabis and JUUL, a very popular e-cigarette brand. We use
weakly supervised learning for personal tweet filtering and classification for
stance detection. We discover that the recreational cannabis legalization
policy increases the development of pro-cannabis attitudes among users already
in favor of e-cigarettes.
|
In finite element calculations, the integral forms are usually evaluated
using nested loops over elements, and over quadrature points. Many such forms
(e.g. linear or multi-linear) can be expressed in a compact way, without the
explicit loops, using a single tensor contraction expression by employing the
Einstein summation convention. To automate this process and leverage existing
high performance codes, we first introduce a notation allowing trivial
differentiation of multi-linear finite element forms. Based on that we propose
and describe a new transpiler from Einstein summation based expressions,
augmented to allow defining multi-linear finite element weak forms, to regular
tensor contraction expressions. The resulting expressions are compatible with a
number of Python scientific computing packages that implement, optimize and in
some cases parallelize the general tensor contractions. We assess the
performance of those packages, as well as the influence of operand memory
layouts and tensor contraction path optimizations on the elapsed time and
memory requirements of the finite element form evaluations. We also compare the
efficiency of the transpiled weak form implementations to the C-based functions
available in the finite element package SfePy.
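As a small illustration of the target notation, element mass matrices over all cells can be evaluated with a single contraction, where a package such as NumPy (here with `optimize=True` to select a contraction path) replaces the nested element/quadrature loops; the shapes below are illustrative.

```python
import numpy as np

# M[c, i, j] = sum_q w[c, q] * phi[c, q, i] * phi[c, q, j]
# over elements c and quadrature points q, as one Einstein-summation expression.
n_cells, n_qp, n_basis = 1000, 4, 3
w = np.random.rand(n_cells, n_qp)             # quadrature weights * |det J|
phi = np.random.rand(n_cells, n_qp, n_basis)  # basis values at quadrature points

M = np.einsum('cq,cqi,cqj->cij', w, phi, phi, optimize=True)
print(M.shape)  # (1000, 3, 3): one mass matrix per element
```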
|
Efficient methods for loading given classical data into quantum circuits are
essential for various quantum algorithms. In this paper, we propose an
algorithm that can effectively load all the components of a given real-valued
data vector into the amplitudes of a quantum state, while the previous
proposal can only load the absolute values of those components. The key of our
algorithm is to variationally train a shallow parameterized quantum circuit,
using the results of two types of measurement: the standard computational-basis
measurement plus the measurement in the Hadamard-transformed basis, introduced
in order to handle the signs of the data components. The variational algorithm
changes the circuit parameters so as to minimize the sum of two costs
corresponding to those two measurement bases, both of which are given by the
efficiently-computable maximum mean discrepancy. We also consider the problem
of constructing the singular value decomposition entropy via the stock market
dataset to give a financial market indicator; a quantum algorithm (the
variational singular value decomposition algorithm) is known to produce a
solution faster than classical methods, yet it requires the sign-dependent
amplitude encoding. We demonstrate, with an in-depth numerical analysis, that
our algorithm realizes loading of a time series of real stock prices onto a
quantum state with small approximation error, thereby enabling the construction of an
indicator of the financial market based on the stock prices.
|
Helical edge states of two-dimensional topological insulators show a gap in
the density of states (DOS) and suppressed conductance in the presence of
ordered magnetic impurities. Here we consider the dynamical effects on the
DOS and transmission when the magnetic impurities are driven periodically.
Using the Floquet formalism and Green's functions, the system properties are
studied as a function of the driving frequency and the potential energy
contribution of the impurities. We see that increasing the potential part
closes the DOS gap for all driving regimes. The transmission gap is also
closed, showing a pronounced asymmetry as a function of energy. These features
indicate that the dynamical transport properties could yield valuable
information about the magnetic impurities.
|
Subglacial lakes are isolated, cold-temperature and high-pressure water
environments hidden under ice sheets, which might host extreme microorganisms.
Here, we use two-dimensional direct numerical simulations in order to
investigate the characteristic temperature fluctuations and velocities in
freshwater subglacial lakes as functions of the ice overburden pressure, $p_i$,
the water depth, $h$, and the geothermal flux, $F$. Geothermal heating is the
unique forcing mechanism as we consider a flat ice-water interface. Subglacial
lakes are fully convective when $p_i$ is larger than the critical pressure
$p_*\approx 2848$ dbar, but self-organize into a lower convective bulk and an
upper stably-stratified layer when $p_i < p_*$, because of the existence at low
pressure of a density maximum at temperature $T_d$ greater than the freezing
temperature $T_f$. For both high and low $p_i$, we demonstrate that the Nusselt
number $Nu$ and Reynolds number $Re$ satisfy classical scaling laws provided
that an effective Rayleigh number $Ra_{eff}$ is considered. We show that the
convective and stably-stratified layers at low pressure are dynamically
decoupled at leading order because plume penetration is weak and induces
limited entrainment of the stable fluid. From the empirical power law equation
for $Nu$ with $Ra_{eff}$, we derive two sets of closed-form expressions for the
variables of interest, including the unknown bottom temperature, in terms of
the problem parameters $p_i$, $h$ and $F$. The two predictions correspond to
two limiting regimes obtained when the effective thermal expansion coefficient
is either approximately constant or linearly proportional to the temperature
difference driving the convection.
|
We consider the canonical periodic review lost sales inventory system with
positive lead-times and stochastic i.i.d. demand under the average cost
criterion. We introduce a new policy that places orders such that the expected
inventory level at the time of arrival of an order is at a fixed level and call
it the Projected Inventory Level (PIL) policy. We prove that this policy has a
cost-rate superior to the equivalent system where excess demand is back-ordered
instead of lost and is therefore asymptotically optimal as the cost of losing a
sale approaches infinity under mild distributional assumptions. We further show
that this policy dominates the constant order policy for any finite lead-time
and is therefore asymptotically optimal as the lead-time approaches infinity
for the case of exponentially distributed demand per period. Numerical results
show that this policy also performs favorably relative to other policies.
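A toy simulation of a PIL-style rule, assuming a simple linear projection of the inventory level at the order's arrival that ignores the lost-sales truncation over the lead time (the paper's exact projection differs), may look as follows:

```python
import numpy as np

def simulate_pil(S, lead=3, mu=1.0, periods=100_000, seed=0):
    rng = np.random.default_rng(seed)
    on_hand, pipeline = S, [0.0] * lead        # pipeline[k] arrives after k+1 periods
    lost = 0.0
    for _ in range(periods):
        projected = on_hand + sum(pipeline) - lead * mu   # naive projection at arrival
        q = max(0.0, S - projected)            # order up to the projected level S
        pipeline.append(q)
        on_hand += pipeline.pop(0)             # order placed `lead` periods ago arrives
        d = rng.exponential(mu)
        lost += max(0.0, d - on_hand)          # excess demand is lost, not back-ordered
        on_hand = max(0.0, on_hand - d)
    return lost / periods                      # long-run average lost sales per period

print(simulate_pil(S=4.0))
```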
|
This paper describes a novel approach to emulate a universal quantum computer
with a wholly classical system, one that uses a signal of bounded duration and
amplitude to represent an arbitrary quantum state. The signal may be of any
modality (e.g. acoustic, electromagnetic, etc.) but this paper will focus on
electronic signals. Individual qubits are represented by in-phase and
quadrature sinusoidal signals, while unitary gate operations are performed
using simple analog electronic circuit devices. In this manner, the Hilbert
space structure of a multi-qubit quantum state, as well as a universal set of
gate operations, may be fully emulated classically. Results from a programmable
prototype system are presented and discussed.
|
Generative adversarial networks (GANs), e.g., StyleGAN2, play a vital role in
various image generation and synthesis tasks, yet their notoriously high
computational cost hinders their efficient deployment on edge devices. Directly
applying generic compression approaches yields poor results on GANs, which
motivates a number of recent GAN compression works. While prior works mainly
accelerate conditional GANs, e.g., pix2pix and CycleGAN, compressing
state-of-the-art unconditional GANs has rarely been explored and is more
challenging. In this paper, we propose novel approaches for unconditional GAN
compression. We first introduce effective channel pruning and knowledge
distillation schemes specialized for unconditional GANs. We then propose a
novel content-aware method to guide the processes of both pruning and
distillation. With content-awareness, we can effectively prune channels that
are unimportant to the contents of interest, e.g., human faces, and focus our
distillation on these regions, which significantly enhances the distillation
quality. On StyleGAN2 and SN-GAN, we achieve a substantial improvement over the
state-of-the-art compression method. Notably, we reduce the FLOPs of StyleGAN2
by 11x with visually negligible image quality loss compared to the full-size
model. More interestingly, when applied to various image manipulation tasks,
our compressed model forms a smoother and better disentangled latent manifold,
making it more effective for image editing.
|
Property-preserving hash functions allow for compressing long inputs $x_0$
and $x_1$ into short hashes $h(x_0)$ and $h(x_1)$ in a manner that allows for
computing a predicate $P(x_0, x_1)$ given only the two hash values without
having access to the original data. Such hash functions are said to be
adversarially robust if an adversary that gets to pick $x_0$ and $x_1$ after
the hash function has been sampled, cannot find inputs for which the predicate
evaluated on the hash values outputs the incorrect result.
In this work we construct robust property-preserving hash functions for the
Hamming distance predicate, which distinguishes inputs with a Hamming distance
of at least some threshold $t$ from those with distance less than $t$. The
security of the construction is based on standard lattice hardness assumptions.
Our construction has several advantages over the best known previous
construction by Fleischhacker and Simkin. Our construction relies on a single
well-studied hardness assumption from lattice cryptography whereas the previous
work relied on a newly introduced family of computational hardness assumptions.
In terms of computational effort, our construction only requires a small number
of modular additions per input bit, whereas previously several exponentiations
per bit as well as the interpolation and evaluation of high-degree polynomials
over large fields were required. An additional benefit of our construction is
that the description of the hash function can be compressed to $\lambda$ bits
assuming a random oracle. Previous work has descriptions of length
$\mathcal{O}(\ell \lambda)$ bits for input bit-length $\ell$, which has a
secret structure and thus cannot be compressed.
We prove a lower bound on the output size of any property-preserving hash
function for the Hamming distance predicate. The bound shows that the size of
our hash value is not far from optimal.
|
Quantum State Sharing (QSS) is a protocol by which a (secret) quantum state
may be securely split, shared between multiple potentially dishonest players,
and reconstructed. Crucially, the players are each assumed to be dishonest, and
so QSS requires that only a collaborating authorised subset of players can
access the original secret state; any dishonest unauthorised conspiracy cannot
reconstruct it. We analyse a QSS protocol involving three untrusted players and
demonstrate that quantum steering is the required resource which enables the
protocol to proceed securely. We analyse the level of steering required to
share any single-mode Gaussian secret, which enables the states to be shared
with the optimal use of resources.
|
Consider the universal gate set for quantum computing consisting of the gates
$X$, $CX$, $CCX$, $\omega^\dagger H$, and $S$. All of these gates have matrix
entries in the ring $\mathbb{Z}[1/2,i]$, the smallest subring of the complex
numbers containing $1/2$ and $i$. Amy, Glaudell, and Ross proved the converse,
i.e., any unitary matrix with entries in $\mathbb{Z}[1/2,i]$ can be realized by
a quantum circuit over the above gate set using at most one ancilla. In this
paper, we give a finite presentation by generators and relations of
$U_n(\mathbb{Z}[1/2,i])$, the group of unitary $n\times n$ matrices with
entries in $\mathbb{Z}[1/2,i]$.
|
In this work, we explore the recently proposed new Tsallis agegraphic dark
energy model in a flat FLRW Universe by taking the conformal time as IR cutoff
with interaction. The deceleration parameter of the interacting new Tsallis
agegraphic dark energy model exhibits the phase transition of the Universe from
a decelerated to an accelerated phase. The EoS parameter of the model shows a rich
behaviour as it can be quintessence-like or phantom-like depending on the
interaction ($b^2$) and parameter $B$. The evolutionary trajectories of the
statefinder parameters and $(\omega_D, \omega_D^{'})$ planes are plotted by
considering the initial condition $\Omega_{D}^{0} =0.73$, $H_{0}= 67$ according
to $\Lambda$CDM observational Planck 2018 data for different $b^2$ and $B$. The
model shows both quintessence and Chaplygin gas behaviour in the statefinder
$(r, s)$ and $(r, q)$ pair planes for different $b^2$ and $B$.
|
These lectures present some basic ideas and techniques in the spectral
analysis of lattice Schrodinger operators with disordered potentials. In
contrast to the classical Anderson tight binding model, the randomness is also
allowed to possess only finitely many degrees of freedom. This refers to
dynamically defined potentials, i.e., those given by evaluating a function
along an orbit of some ergodic transformation (or of several commuting such
transformations on higher-dimensional lattices). Classical localization
theorems by Frohlich--Spencer for large disorder are presented, both for
random potentials in all dimensions and for quasi-periodic ones on the
line. After providing the needed background on subharmonic functions, we then
discuss the Bourgain-Goldstein theorem on localization for quasiperiodic
Schrodinger cocycles assuming positive Lyapunov exponents.
|
We study at the single-photon level the nonreciprocal excitation transfer
between emitters coupled with a common waveguide. Non-Markovian retarded
effects are taken into account due to the large separation distance between
different emitter-waveguide coupling ports. It is shown that the excitation
transfer between the emitters of a small-atom dimer can be markedly
nonreciprocal by introducing between them a coherent coupling channel with
nontrivial coupling phase. We prove that for dimer models the nonreciprocity
cannot coexist with the decoherence-free giant-atom structure although the
latter markedly lengthens the lifetime of the emitters. In view of this, we
further propose a giant-atom trimer which supports both nonreciprocal transfer
(directional circulation) of the excitation and greatly lengthened lifetime.
Such a trimer model also exhibits incommensurate emitter-waveguide entanglement
for different initial states, in which case, however, the excitation transfer
is reciprocal. We believe that the proposals in this paper have potential
applications in large-scale quantum networks and quantum information
processing.
|
We prove the non-linear asymptotic stability of the Schwarzschild family as
solutions to the Einstein vacuum equations in the exterior of the black hole
region: general vacuum initial data, with no symmetry assumed, sufficiently
close to Schwarzschild data evolve to a vacuum spacetime which (i) possesses a
complete future null infinity $\mathcal{I}^+$ (whose past $J^-(\mathcal{I}^+)$
is moreover bounded by a regular future complete event horizon
$\mathcal{H}^+$), (ii) remains close to Schwarzschild in its exterior, and
(iii) asymptotes back to a member of the Schwarzschild family as an appropriate
notion of time goes to infinity, provided that the data are themselves
constrained to lie on a teleologically constructed codimension-$3$
"submanifold" of moduli space. This is the full nonlinear asymptotic stability
of Schwarzschild since solutions not arising from data lying on this
submanifold should by dimensional considerations approach a Kerr spacetime with
rotation parameter $a\neq 0$, i.e. such solutions cannot satisfy (iii). The
proof employs teleologically normalised double null gauges, is expressed
entirely in physical space and makes essential use of the analysis in our
previous study of the linear stability of the Kerr family around Schwarzschild
[DHR], as well as techniques developed over the years to control the
non-linearities of the Einstein equations. The present work, however, is
entirely self-contained. In view of the recent works [DHR19, TdCSR20], our
approach can be applied to the full non-linear asymptotic stability of the
subextremal Kerr family.
|
Automatic extraction of forum posts and metadata is a crucial but challenging
task since forums do not expose their content in a standardized structure.
Content extraction methods, therefore, often need customizations such as
adaptations to page templates and improvements of their extraction code before
they can be deployed to new forums. Most of the current solutions are also
built for the more general case of content extraction from web pages and lack
key features important for understanding forum content such as the
identification of author metadata and information on the thread structure.
This paper, therefore, presents a method that determines the XPath of forum
posts, eliminating incorrect mergers and splits of the extracted posts that
were common in systems from the previous generation. Based on the individual
posts, further metadata such as authors, forum URL and structure are extracted.
We also introduce Harvest, a new open source toolkit that implements the
presented methods and create a gold standard extracted from 52 different Web
forums for evaluating our approach. A comprehensive evaluation reveals that
Harvest clearly outperforms competing systems.
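Conceptually, once the post XPath is known, extraction reduces to a few lines; the URL, class names, and fields below are hypothetical, and Harvest determines the XPath automatically rather than requiring it as input.

```python
import requests
from lxml import html

page = requests.get("https://forum.example.org/thread/42").text
tree = html.fromstring(page)

post_xpath = '//div[contains(@class, "post")]'     # determined per forum
for post in tree.xpath(post_xpath):
    author = post.xpath('.//*[contains(@class, "author")]/text()')
    body = post.text_content().strip()
    print(author, body[:80])
```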
|
We provide a number of new conjectures and questions concerning the syzygies
of $\mathbb{P}^1\times \mathbb{P}^1$. The conjectures are based on computing
the graded Betti tables and related data for a large number of different
embeddings of $\mathbb{P}^1\times \mathbb{P}^1$. These computations utilize
linear algebra over finite fields and high-performance computing.
|
This study examines the influence of learning in a female teacher homeroom
class in elementary school on pupils' voting behavior later in life, using
independently collected individual-level data. Further, we evaluate its effect
on preference for women's participation in the workplace in adulthood. Our
study found that having a female teacher in the first year of school makes
individuals more likely to vote for female candidates and to prefer policies
supporting female labor participation in adulthood. However, the effect is only
observed among male pupils, not female ones. These findings offer new evidence for the
female socialization hypothesis.
|
The main results are, firstly, a generalization of the Conley-Zehnder index
from ODEs to the delay equation at hand and, secondly, the equality of the
Morse index and the clockwise normalized Conley-Zehnder index.
|
We revisit the shadows cast by Kerr-like wormholes. The boundary
of the shadow is determined by unstable circular photon orbits. We find that,
in certain parameter regions, the orbit is located at the throat of the
Kerr-like wormhole, which was not considered in the literature. In these cases,
the existence of the throat alters the shape of the shadow significantly, and
makes it possible for us to differentiate it from that of a Kerr black hole.
|
Low-electron-dose observation is indispensable for observing various samples
using a transmission electron microscope; consequently, image processing has
been used to improve transmission electron microscopy (TEM) images. To apply
such image processing to in situ observations, we here apply a convolutional
neural network to TEM imaging. Using a dataset that includes short-exposure
images and long-exposure images, we develop a pipeline for processing
short-exposure images based on end-to-end training. The quality of images
acquired with a total dose of approximately 5 e- per pixel becomes comparable
to that of images acquired with a total dose of approximately 1000 e- per
pixel. Because the conversion time is approximately 8 ms, in situ observation
at 125 fps is possible. This imaging technique enables in situ observation of
electron-beam-sensitive specimens.
|
A strong connection between cluster algebras and representation theory was
established by the cluster category. Cluster characters, like the original
Caldero-Chapoton (CC) map, are maps from certain triangulated categories to
cluster algebras and they have generated much interest. Holm and J{\o}rgensen
constructed a modified CC map from a sufficiently nice triangulated category to
a commutative ring, which is a generalised frieze under some conditions. In
their construction, a quotient $K_{0}^{sp}(\mathcal{T})/M$ of a Grothendieck
group of a cluster tilting subcategory $\mathcal{T}$ is used. In this article,
we show that this quotient is the Grothendieck group of a certain
extriangulated category, thereby exposing its significance and the
relevance of extriangulated structures. We use this to define another modified
CC map that recovers the one of Holm--J{\o}rgensen.
We prove our results in a higher homological context. Suppose $\mathcal{S}$
is a $(d+2)$-angulated category with subcategories
$\mathcal{X}\subseteq\mathcal{T}\subseteq\mathcal{S}$, where $\mathcal{X}$ is
functorially finite and $\mathcal{T}$ is $2d$-cluster tilting, satisfying some
mild conditions. We show there is an isomorphism between the Grothendieck group
$K_{0}(\mathcal{S},\mathbb{E}_{\mathcal{X}},\mathfrak{s}_{\mathcal{X}})$ of the
category $\mathcal{S}$, equipped with the $d$-exangulated structure induced by
$\mathcal{X}$, and the quotient $K_{0}^{sp}(\mathcal{T})/N$, where $N$ is the
higher analogue of $M$ above. When $\mathcal{X}=\mathcal{T}$ the isomorphism is
induced by the higher index with respect to $\mathcal{T}$ introduced recently
by J{\o}rgensen. Thus, in the general case, we can understand the map taking an
object in $\mathcal{S}$ to its $K_{0}$-class in
$K_{0}(\mathcal{S},\mathbb{E}_{\mathcal{X}},\mathfrak{s}_{\mathcal{X}})$ as a
higher index with respect to the rigid subcategory $\mathcal{X}$.
|
In this paper, a three-dimensional light detection and ranging (LiDAR)
simultaneous localization and mapping (SLAM) method is proposed that is capable
of tracking and mapping at 500--1000 Hz processing rates. The proposed method
significantly reduces the number of points used for point cloud registration
using a novel ICP metric to speed up the registration process while maintaining
accuracy. Point cloud registration with ICP is less accurate when the number of
points is reduced because ICP basically minimizes the distance between points.
To avoid this problem, symmetric KL-divergence is introduced to the ICP cost
that reflects the difference between two probabilistic distributions. The cost
includes not only the distance between points but also differences between
distribution shapes. The experimental results on the KITTI dataset indicate
that the proposed method has high computational efficiency, strongly
outperforms other methods, and has similar accuracy to the state-of-the-art
SLAM method.
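For reference, when two point distributions are modelled as Gaussians $\mathcal{N}(\mu_1, \Sigma_1)$ and $\mathcal{N}(\mu_2, \Sigma_2)$ in $d$ dimensions, the symmetric KL-divergence has the closed form

$$D_{sym} = \frac{1}{2}\left[\mathrm{tr}(\Sigma_2^{-1}\Sigma_1) + \mathrm{tr}(\Sigma_1^{-1}\Sigma_2) - 2d + (\mu_2-\mu_1)^{\top}(\Sigma_1^{-1}+\Sigma_2^{-1})(\mu_2-\mu_1)\right],$$

which penalizes both the displacement between the means and the mismatch between the distribution shapes; whether this exact Gaussian parameterization matches the paper's cost is an assumption here.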
|
A new metric for quantifying pairwise vertex connectivity in graphs is
defined and an implementation presented. While general in nature, it features a
combination of properties well-suited to social networks, including
applicability to directed or undirected graphs and weighted edges, and
computation based on the impact of all paths between the vertices. Moreover, the $O(V+E)$
method is applicable to large graphs. Comparisons with other techniques are
included.
|
Topological orders are a prominent paradigm for describing quantum many-body
systems without symmetry-breaking orders. We present a topological quantum
field theoretical (TQFT) study on topological orders in five-dimensional
spacetime ($5$D) in which \textit{topological excitations} include not only
point-like \textit{particles}, but also two types of spatially extended
objects: closed string-like \textit{loops} and two-dimensional closed
\textit{membranes}. In particular, membranes have rarely been explored in the
literature of topological orders. By introducing higher-form gauge fields, we
construct exotic TQFT actions that include mixture of two distinct types of
$BF$ topological terms and many twisted topological terms. The gauge
transformations are properly defined and utilized to compute level quantization
and classification of TQFTs. Among all TQFTs, some are not in Dijkgraaf-Witten
cohomological classification. To characterize topological orders, we concretely
construct all braiding processes among topological excitations, which leads to
very exotic links formed by closed spacetime trajectories of particles, loops,
and membranes. For each braiding process, we construct gauge-invariant Wilson
operators and calculate the associated braiding statistical phases. As a
result, we obtain expressions of link invariants all of which have manifest
geometric interpretation. Following Wen's definition, the boundary theory of a
topological order exhibits gravitational anomaly. We expect that the
characterization and classification of 5D topological orders in this paper
encode information of 4D gravitational anomaly. Further consideration, e.g.,
putting TQFTs on 5D manifolds with boundaries, is left to future work.
|
This article is the first of two in which we develop a geometric framework
for analysing silent and anisotropic big bang singularities. The results of the
present article concern the asymptotic behaviour of solutions to linear systems
of wave equations on the corresponding backgrounds. The main features are the
following: The assumptions do not involve any symmetry requirements and are
weak enough to be consistent with most big bang singularities for which the
asymptotic geometry is understood. The asymptotic rate of growth/decay of
solutions to linear systems of wave equations along causal curves going into
the singularity is determined by model systems of ODEs (depending on the
causal curve). Moreover, the model systems are essentially obtained by dropping
spatial derivatives and localising the coefficients along the causal curve.
This is in accordance with the BKL proposal. Note, however, that here we prove
this statement; we do not assume it. If the coefficients of the unknown and its
expansion normalised normal derivatives converge sufficiently quickly along a
causal curve, we obtain leading order asymptotics (along the causal curve) and
prove that the localised energy estimate (along the causal curve) is optimal.
In this setting, it is also possible to specify the leading order asymptotics
of solutions along the causal curve. On the other hand, the localised energy
estimate typically entails a substantial loss of derivatives.
In the companion article, we deduce geometric conclusions by combining the
framework with Einstein's equations. In particular, the combination reproduces
the Kasner map and yields partial bootstrap arguments.
|
Synthetic data generation has become essential in recent years for feeding
data-driven algorithms, which have surpassed the performance of traditional
techniques in almost every computer vision problem. Gathering and labelling the amount of
data needed for these data-hungry models in the real world may become
unfeasible and error-prone, while synthetic data give us the possibility of
generating huge amounts of data with pixel-perfect annotations. However, most
synthetic datasets lack from enough realism in their rendered images. In that
context UnrealROX generation tool was presented in 2019, allowing to generate
highly realistic data, at high resolutions and framerates, with an efficient
pipeline based on Unreal Engine, a cutting-edge videogame engine. UnrealROX
enabled robotic vision researchers to generate realistic and visually plausible
data with full ground truth for a wide variety of problems such as class and
instance semantic segmentation, object detection, depth estimation, visual
grasping, and navigation. Nevertheless, its workflow was closely tied to
generating image sequences from a robotic on-board camera, making it hard to
generate data for other purposes. In this work, we present UnrealROX+, an
improved version of UnrealROX whose decoupled and easy-to-use data acquisition
system allows users to quickly design and generate data in a much more flexible
and customizable way.
Moreover, it is packaged as an Unreal plug-in, which makes it more comfortable
to use with already existing Unreal projects, and it also includes new features
such as generating albedo or a Python API for interacting with the virtual
environment from Deep Learning frameworks.
|
In the context of tomographic cosmic shear surveys, there exists a nulling
transformation of weak lensing observations (also called BNT transform) that
allows us to simplify the correlation structure of tomographic cosmic shear
observations, as well as to build observables that depend only on a localised
range of redshifts, and thus independent of low-redshift/small-scale
modes. This procedure makes possible accurate, from-first-principles
predictions of the convergence and aperture mass one-point distributions (PDF).
We here explore other consequences of this transformation on the (reduced)
numerical complexity of the estimation of the joint PDF between nulled bins and
demonstrate how to use these results to make theoretical predictions.
|
We investigate the spin Seebeck effect and spin pumping in a junction between
a ferromagnetic insulator and a magnetic impurity deposited on a normal metal.
By the numerical renormalization group calculation, we show that spin current
is enhanced by the Kondo effect. This spin current is suppressed when the
temperature or the magnetic field becomes comparable to the Kondo
temperature. Our results indicate that spin transport can be a direct probe of
spin excitation in strongly correlated systems.
|
In this review, we summarize recent progress on the possible phases of
quantum chromodynamics (QCD) in the presence of a strong magnetic field, mainly
from the views of the chiral effective Nambu--Jona-Lasinio model. Four kinds of
phase transitions are explored in detail: chiral symmetry breaking and
restoration, neutral pseudoscalar superfluidity, charged pion superfluidity and
charged rho superconductivity. In particular, we revisit the unsolved problems
of inverse magnetic catalysis effect and competition between the chiral density
wave and solitonic modulation phases. It is shown that useful results can be
obtained by adopting self-consistent schemes.
|
Quantum algorithms offer significant speedups over their classical
counterparts for a variety of problems. The strongest arguments for this
advantage are borne by algorithms for quantum search, quantum phase estimation,
and Hamiltonian simulation, which appear as subroutines for large families of
composite quantum algorithms. A number of these quantum algorithms were
recently tied together by a novel technique known as the quantum singular value
transformation (QSVT), which enables one to perform a polynomial transformation
of the singular values of a linear operator embedded in a unitary matrix. In
the seminal GSLW'19 paper on QSVT [Gily\'en, Su, Low, and Wiebe, ACM STOC
2019], many algorithms are encompassed, including amplitude amplification,
methods for the quantum linear systems problem, and quantum simulation. Here,
we provide a pedagogical tutorial through these developments, first
illustrating how quantum signal processing may be generalized to the quantum
eigenvalue transform, from which QSVT naturally emerges. Paralleling GSLW'19,
we then employ QSVT to construct intuitive quantum algorithms for search, phase
estimation, and Hamiltonian simulation, and also showcase algorithms for the
eigenvalue threshold problem and matrix inversion. This overview illustrates
how QSVT is a single framework comprising the three major quantum algorithms,
thus suggesting a grand unification of quantum algorithms.
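A minimal numerical illustration of the quantum signal processing primitive the tutorial builds on (conventions follow the common "Wx" form; with all phases zero the product reduces to a Chebyshev polynomial of the signal):

```python
import numpy as np

def qsp_unitary(x, phases):
    """Quantum signal processing: alternate the signal rotation
    W(x) = exp(i*arccos(x)*X) with Z-rotations set by `phases`. QSVT lifts
    this scalar construction to the singular values of a block-encoded
    operator."""
    W = np.array([[x, 1j * np.sqrt(1 - x**2)],
                  [1j * np.sqrt(1 - x**2), x]])
    U = np.diag(np.exp(1j * np.array([phases[0], -phases[0]])))
    for phi in phases[1:]:
        U = U @ W @ np.diag(np.exp(1j * np.array([phi, -phi])))
    return U

x, n = 0.3, 5
U = qsp_unitary(x, [0.0] * (n + 1))
# With trivial phases, the (0,0) entry is the Chebyshev polynomial T_n(x):
print(U[0, 0].real, np.cos(n * np.arccos(x)))
```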
|
With music becoming an essential part of daily life, there is an urgent need
to develop recommendation systems that assist people in finding better songs
with less effort. As the interactions between users and songs naturally construct
a complex network, community detection approaches can be applied to reveal
users' potential interests in songs by grouping relevant users & songs to the
same community. However, as the types of interaction could be heterogeneous, it
challenges conventional community detection methods designed originally for
homogeneous networks. Although there are existing works on heterogeneous
community detection, they are mostly task-driven approaches and not feasible
for specific music recommendation. In this paper, we propose a genetic-based
approach to learn an edge-type usefulness distribution (ETUD) for all
edge-types in heterogeneous music networks. ETUD can be regarded as a linear
function to project all edges to the same latent space and make them
comparable. Therefore a heterogeneous network can be converted to a homogeneous
one where those conventional methods are eligible to use. We validate the
proposed model on a heterogeneous music network constructed from an online
music streaming service. Results show that ETUD helps conventional methods
detect communities that significantly improve music recommendation accuracy
while simultaneously reducing users' search cost.
|
It is standard to assume that the Wigner distribution of a mixed quantum
state consisting of square-integrable functions is a quasi-probability
distribution, that is, that its integral is one and that the marginal properties
are satisfied. However, this is in general not true. We introduce a class of
quantum states for which this property is satisfied, these states are dubbed
"Feichtinger states" because they are defined in terms of a class of functional
spaces (modulation spaces) introduced in the 1980s by H. Feichtinger. The
properties of these states are studied, which gives us the opportunity to prove
an extension to the general case of a result of Jaynes on the non-uniqueness of
the statistical ensemble generating a density operator. As a bonus we obtain a
result for convex sums of Wigner transforms.
|
Most recent approaches for online action detection tend to apply Recurrent
Neural Network (RNN) to capture long-range temporal structure. However, RNN
suffers from non-parallelism and gradient vanishing, hence it is hard to
optimize. In this paper, we propose a new encoder-decoder framework based on
Transformers, named OadTR, to tackle these problems. The encoder attached with
a task token aims to capture the relationships and global interactions between
historical observations. The decoder extracts auxiliary information by
aggregating anticipated future clip representations. Therefore, OadTR can
recognize current actions by encoding historical information and predicting
future context simultaneously. We extensively evaluate the proposed OadTR on
three challenging datasets: HDD, TVSeries, and THUMOS14. The experimental
results show that OadTR achieves higher training and inference speeds than
current RNN based approaches, and significantly outperforms the
state-of-the-art methods in terms of both mAP and mcAP. Code is available at
https://github.com/wangxiang1230/OadTR.
|
In this review, we describe the application of one of the most popular deep
learning-based language models, BERT. The paper describes the mechanism of
operation of this model, the main areas of its application to the tasks of text
analytics, comparisons with similar models in each task, as well as a
description of some proprietary models. In preparing this review, the findings
of several dozen original scientific articles published over the past few
years, which attracted the most attention in the scientific community, were
systematized. This survey will be useful to all students and researchers who
want to get acquainted with the latest advances in the field of natural
language text analysis.
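As a minimal usage example, contextual embeddings for downstream text-analytics tasks can be obtained with the Hugging Face `transformers` library (fine-tuning would add a task-specific head on top of these representations):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

batch = tokenizer(["BERT encodes text into contextual vectors."],
                  return_tensors="pt")
with torch.no_grad():
    out = model(**batch)
print(out.last_hidden_state.shape)   # (1, seq_len, 768)
```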
|
We revisit the possibilities of accommodating the experimental indications of
the lepton flavor universality violation in $b$-hadron decays in the minimal
scenarios in which the Standard Model is extended by the presence of a single
$\mathcal{O}(1\,\mathrm{TeV})$ leptoquark state. To do so we consider the most
recent low-energy flavor physics constraints, including
$R_{K^{(\ast)}}^\mathrm{exp}$ and $R_{D^{(\ast)}}^\mathrm{exp}$, and combine
them with the bounds on the leptoquark masses and their couplings to quarks and
leptons as inferred from the direct searches at the LHC and the studies of the
large $p_T$ tails of the $pp\to \ell\ell$ differential cross section. We find
that none of the scalar leptoquarks of $m_\mathrm{LQ} \simeq 1\div 2$ TeV can
accommodate the $B$-anomalies alone. Only the vector leptoquark, known as
$U_1$, can provide a viable solution which, in the minimal setup, provides an
interesting prediction, i.e. a lower bound to the lepton flavor violating $b\to
s\mu^\pm\tau^\mp$ decay modes, such as $\mathcal{B}(B\to K\mu\tau) \gtrsim
0.7\times 10^{-7}$.
|
This work aims to tackle the challenging heterogeneous graph encoding problem
in the text-to-SQL task. Previous methods are typically node-centric and merely
utilize different weight matrices to parameterize edge types, which 1) ignore
the rich semantics embedded in the topological structure of edges, and 2) fail
to distinguish local and non-local relations for each node. To this end, we
propose a Line Graph Enhanced Text-to-SQL (LGESQL) model to mine the underlying
relational features without constructing meta-paths. By virtue of the line
graph, messages propagate more efficiently through not only connections between
nodes, but also the topology of directed edges. Furthermore, both local and
non-local relations are integrated distinctively during the graph iteration. We
also design an auxiliary task called graph pruning to improve the
discriminative capability of the encoder. Our framework achieves
state-of-the-art results (62.8% with Glove, 72.0% with Electra) on the
cross-domain text-to-SQL benchmark Spider at the time of writing.
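To make the line-graph construction concrete, here is a toy sketch (using networkx; the node names and relation labels are hypothetical, and this is not the LGESQL implementation) showing how each directed edge of a schema-question graph becomes a node whose adjacency follows shared endpoints:

```python
# Toy illustration (not the LGESQL implementation): the line graph turns
# each relation/edge of a schema-question graph into a node, so message
# passing can run over the edge topology directly.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("question", "table:singer", {"rel": "exact-match"}),
    ("table:singer", "col:singer.name", {"rel": "has-column"}),
    ("col:singer.name", "question", {"rel": "partial-match"}),
])

lg = nx.line_graph(g)    # nodes of lg are the directed edges of g
print(list(lg.nodes()))  # e.g. ('question', 'table:singer') becomes a node
print(list(lg.edges()))  # adjacency follows shared endpoints of edges in g
```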
|
In this note, we give an elementary proof of a result of Schenzel stating
that there are functorial isomorphisms between local cohomology groups and
\v{C}ech cohomology groups, using weakly proregular sequences. In [Sch03],
Schenzel used notions from derived category theory in his proof; we do not use
them in this paper.
|
We consider the 1d CFT defined by the half-BPS Wilson line in planar
$\mathcal{N}=4$ super Yang-Mills. Using analytic bootstrap methods we derive
the four-point function of the super-displacement operator at fourth order in a
strong coupling expansion. Via AdS/CFT, this corresponds to the first
three-loop correlator in AdS ever computed. To do so we address the operator
mixing problem by considering a family of auxiliary correlators. We further
extract the anomalous dimension of the lightest non-protected operator and find
agreement with the integrability-based numerical result of Grabner, Gromov and
Julius.
|
Open source development, to a great extent, is a type of social movement in
which shared ideologies play critical roles. For participants of open source
development, ideology determines how they make sense of things, shapes their
thoughts, actions, and interactions, enables rich social dynamics in their
projects and communities, and thereby realizes profound impacts at both
individual and organizational levels. While software engineering researchers
have been increasingly recognizing ideology's importance in open source
development, the notion of "ideology" has shown significant ambiguity and
vagueness, and has resulted in theoretical and empirical confusion. In this
article, we first examine the historical development of ideology's
conceptualization, and its theories in multiple disciplines. Then, we review
the extant software engineering literature related to ideology. We further
argue for the imperative of developing an empirical theory of ideology in open
source development, and propose a research agenda for developing such a theory.
How such a theory could be applied is also discussed.
|
We study the possibilities of factorizations of products of nuclear operators
of different types through the Schatten-von Neumann operators in Hilbert
spaces, and give some applications to eigenvalue problems.
|
Understanding multivariate extreme events plays a crucial role in managing the
risks of complex systems since extremes are governed by their own mechanisms.
Conditional on a given variable exceeding a high threshold (e.g. traffic
intensity), knowing which high-impact quantities (e.g. air pollutant levels)
are the most likely to be extreme in the future is key. This article
investigates the contribution of marginal extreme events to future extreme
events of related quantities. We propose an Extreme Event Propagation framework
to maximise counterfactual causation probabilities between a known cause and
future high-impact quantities. Extreme value theory provides a tool for
modelling upper tails whilst vine copulas are a flexible device for capturing a
large variety of joint extremal behaviours. We optimise for the probabilities
of causation and apply our framework to a London road traffic and air
pollutants dataset. We replicate documented atmospheric mechanisms beyond
linear relationships. This provides a new tool for quantifying the propagation
of extremes in a large variety of applications.
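As a sketch of the marginal tail-modelling step only (the vine-copula joint model and the causation-probability optimisation are omitted; the stand-in data and threshold choice are assumptions), one could fit a generalized Pareto distribution to threshold exceedances with scipy:

```python
# Sketch of the marginal tail-modelling step: fit a generalized Pareto
# distribution to exceedances over a high threshold, as peaks-over-threshold
# extreme value theory prescribes. Data here are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
traffic = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

u = np.quantile(traffic, 0.95)          # high threshold
exceed = traffic[traffic > u] - u       # threshold exceedances
shape, loc, scale = stats.genpareto.fit(exceed, floc=0.0)

# P(X > x | X > u) for some level x above the threshold
x = u + 2.0
p_cond = stats.genpareto.sf(x - u, shape, loc=0.0, scale=scale)
print(f"threshold={u:.2f}, xi={shape:.3f}, P(X>{x:.2f}|X>u)={p_cond:.4f}")
```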
|
This paper establishes an extended representation theorem for unit-root VARs.
A specific algebraic technique is devised to recover stationarity from the
solution of the model in the form of a cointegrating transformation. Closed
forms of the results of interest are derived for integrated processes up to the
4th order. An extension to higher-order processes turns out to be within reach
via an induction argument.
|
Acoustic Echo Cancellation (AEC), whose aim is to suppress the echo originating
from acoustic coupling between loudspeakers and microphones, plays a key role
in voice interaction. A linear adaptive filter (AF) is commonly used to handle
this problem. However, severe effects in real scenarios, such as nonlinear
distortions, background noises, and microphone clipping, lead to considerable
residual echo and poor performance in practice. In this paper, we propose an
end-to-end network structure for echo cancellation, which operates directly on
the time-domain audio waveform. The waveform is transformed into a deep
representation by temporal convolutions and modelled by a Long Short-Term
Memory (LSTM) network to capture temporal structure. Since time
delay and severe reverberation may exist at the near-end with respect to the
far-end, a local attention is employed for alignment. The network is trained
using multitask learning by employing an auxiliary classification network for
double-talk detection. Experiments show the superiority of our proposed method
in terms of the echo return loss enhancement (ERLE) for single-talk periods and
the perceptual evaluation of speech quality (PESQ) score for double-talk
periods in background noise and nonlinear distortion scenarios.
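A minimal sketch of such an architecture might look as follows (an assumption-laden illustration, not the paper's exact network: the local attention for far-end alignment is omitted and all layer sizes are invented):

```python
# Minimal sketch: encode raw near-end/far-end waveforms with temporal
# convolutions, model with an LSTM, apply a learned suppression mask, and
# attach an auxiliary double-talk classification head for multitask training.
import torch
import torch.nn as nn

class AECSketch(nn.Module):
    def __init__(self, feat=128):
        super().__init__()
        self.enc = nn.Conv1d(2, feat, kernel_size=16, stride=8)   # near+far channels
        self.lstm = nn.LSTM(feat, feat, num_layers=2, batch_first=True)
        self.mask = nn.Conv1d(feat, feat, kernel_size=1)          # echo-suppression mask
        self.dec = nn.ConvTranspose1d(feat, 1, kernel_size=16, stride=8)
        self.dtd_head = nn.Linear(feat, 2)                        # double-talk detector

    def forward(self, near, far):                 # (B, L) waveforms
        x = torch.stack([near, far], dim=1)       # (B, 2, L)
        h = self.enc(x)                           # (B, F, T)
        y, _ = self.lstm(h.transpose(1, 2))       # temporal modelling
        m = torch.sigmoid(self.mask(y.transpose(1, 2)))
        est = self.dec(m * h).squeeze(1)          # estimated near-end speech
        dtd = self.dtd_head(y.mean(dim=1))        # auxiliary double-talk logits
        return est, dtd

est, dtd = AECSketch()(torch.randn(2, 16000), torch.randn(2, 16000))
```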
|
We study the Lie point symmetries and the similarity transformations for the
partial differential equations of the nonlinear one-dimensional
magnetohydrodynamic system with the Hall term, known as the HMHD system. We
find that this 1+1 system of partial differential equations is invariant under
the action of a seven-dimensional Lie algebra. Furthermore, the
one-dimensional optimal system is derived while the Lie invariants are applied
for the derivation of similarity transformations. We present different kinds of
oscillating solutions.
|
Mobile robots have disrupted the material handling industry which is
witnessing radical changes. The requirement for enhanced automation across
various industry segments often entails mobile robotic systems operating in
logistics facilities with little/no infrastructure. In such environments,
out-of-the-box low-cost robotic solutions are desirable. Wireless connectivity
plays a crucial role in the successful operation of such mobile robotic systems. A
wireless mesh network of mobile robots is an attractive solution; however, a
number of system-level challenges create unique and stringent service
requirements. The focus of this paper is the role of Bluetooth mesh technology,
which is the latest addition to the Internet-of-Things (IoT) connectivity
landscape, in addressing the challenges of infrastructure-less connectivity for
mobile robotic systems. It articulates the key system-level design challenges
from communication, control, cooperation, coverage, security, and
navigation/localization perspectives, and explores different capabilities of
Bluetooth mesh technology for such challenges. It also provides performance
insights through real-world experimental evaluation of Bluetooth mesh while
investigating its differentiating features against competing solutions.
|
Numerical models of weather and climate critically depend on long-term
stability of integrators for systems of hyperbolic conservation laws. While
such stability is often obtained from (physical or numerical) dissipation
terms, physical fidelity of such simulations also depends on properly
preserving conserved quantities, such as energy, of the system. To address this
apparent paradox, we develop a variational integrator for the shallow water
equations that conserves energy, but dissipates potential enstrophy. Our
approach follows the continuous selective decay framework [F. Gay-Balmaz and D.
Holm. Selective decay by Casimir dissipation in inviscid fluids. Nonlinearity,
26(2):495, 2013], which enables dissipating an otherwise conserved quantity
while conserving the total energy. We use this in combination with the
variational discretization method [D. Pavlov, P. Mullen, Y. Tong, E. Kanso, J.
Marsden and M. Desbrun. Structure-preserving discretization of incompressible
fluids. Physica D: Nonlinear Phenomena, 240(6):443-458, 2011] to obtain a
discrete selective decay framework. This is applied to the shallow water
equations, both in the plane and on the sphere, to dissipate the potential
enstrophy. The resulting scheme significantly improves the quality of the
approximate solutions, enabling long-term integrations to be carried out.
|
Conceptual abstraction and analogy-making are key abilities underlying
humans' capacity to learn, reason, and robustly adapt their knowledge to new
domains. Despite a long history of research on constructing AI systems with
these abilities, no current AI system comes close to forming humanlike
abstractions or analogies. This paper reviews the advantages
and limitations of several approaches toward this goal, including symbolic
methods, deep learning, and probabilistic program induction. The paper
concludes with several proposals for designing challenge tasks and evaluation
measures in order to make quantifiable and generalizable progress in this area.
|
We consider the problem of offline reinforcement learning (RL) -- a
well-motivated setting of RL that aims at policy optimization using only
historical data. Despite its wide applicability, theoretical understandings of
offline RL, such as its optimal sample complexity, remain largely open even in
basic settings such as \emph{tabular} Markov Decision Processes (MDPs).
In this paper, we propose Off-Policy Double Variance Reduction (OPDVR), a new
variance reduction based algorithm for offline RL. Our main result shows that
OPDVR provably identifies an $\epsilon$-optimal policy with
$\widetilde{O}(H^2/d_m\epsilon^2)$ episodes of offline data in the
finite-horizon stationary transition setting, where $H$ is the horizon length
and $d_m$ is the minimal marginal state-action distribution induced by the
behavior policy. This improves over the best known upper bound by a factor of
$H$. Moreover, we establish an information-theoretic lower bound of
$\Omega(H^2/d_m\epsilon^2)$ which certifies that OPDVR is optimal up to
logarithmic factors. Lastly, we show that OPDVR also achieves rate-optimal
sample complexity under alternative settings such as the finite-horizon MDPs
with non-stationary transitions and the infinite horizon MDPs with discounted
rewards.
|
The loss and gain of volatile elements during planet formation are key to
setting planets' subsequent climate, geodynamics, and habitability. Two broad
regimes of volatile element transport in and out of planetary building blocks
have been identified: that occurring when the nebula is still present, and that
occurring after it has dissipated. Evidence for volatile element loss in
planetary bodies after the dissipation of the solar nebula is found in the high
Mn to Na abundance ratio of Mars, the Moon, and many of the solar system's
minor bodies. This volatile loss is expected to occur when the bodies are
heated by planetary collisions and short-lived radionuclides, and enter a
global magma ocean stage early in their history. The bulk composition of
exo-planetary bodies can be determined by observing white dwarfs which have
accreted planetary material. The abundances of Na, Mn, and Mg have been
measured for the accreting material in four polluted white dwarf systems.
Whilst the Mn/Na abundances of three white dwarf systems are consistent with
the fractionations expected during nebula condensation, the high Mn/Na
abundance ratio of GD362 means that it is not (>3 sigma). We find that heating
of the planetary system orbiting GD362 during the star's giant branch evolution
is insufficient to produce such a high Mn/Na. We, therefore, propose that
volatile loss occurred in a manner analogous to that of the solar system
bodies, either due to impacts shortly after their formation or from heating by
short-lived radionuclides. We present potential evidence for a magma ocean
stage on the exo-planetary body which currently pollutes the atmosphere of
GD362.
|
Patch lattices, introduced by G. Cz\'edli and E.T. Schmidt in 2013, are the
building stones for slim (and so necessarily finite and planar) semimodular
lattices with respect to gluing. Slim semimodular lattices were introduced by
G. Gr\"atzer and E. Knapp in 2007, and they have been intensively studied since
then. Outside lattice theory, these lattices played the main role in adding a
uniqueness part to the classical Jordan--H\"older theorem for groups by G.
Cz\'edli and E.T. Schmidt in 2011, and they also led to results in
combinatorial geometry. In this paper, we prove that slim patch lattices are
exactly the absolute retracts with more than two elements for the category of
slim semimodular lattices with length-preserving lattice embeddings as
morphisms. Also, slim patch lattices are the same as the maximal objects $L$ in
this category such that $|L|>2$. Furthermore, slim patch lattices are
characterized as the algebraically closed lattices $L$ in this category such
that $|L|>2$. Finally, we prove that if we consider $\{0,1\}$-preserving
lattice homomorphisms rather than length-preserving ones, then the absolute
retracts for the class of slim semimodular lattices are the at most 4-element
boolean lattices.
|
We study first order phase transitions in Randall-Sundrum models in the early
universe dual to confinement in large-$N$ gauge theories. The transition rate
to the confined phase is suppressed by a factor $\exp(-N^2)$, and may not
complete for $N \gg 1$, instead leading to an eternally inflating phase. To
avoid this fate, the resulting constraint on $N$ makes the RS effective field
theory only marginally under control. We present a mechanism where the IR brane
remains stabilized at very high temperature, so that the theory stays in the
confined phase at all times after inflation and reheating. We call this
mechanism avoided deconfinement. The mechanism involves adding new scalar
fields on the IR brane which provide a stabilizing contribution to the radion
potential at finite temperature, in a spirit similar to Weinberg's symmetry
non-restoration mechanism. Avoided deconfinement allows for a viable cosmology
for theories with parametrically large $N$. Early universe cosmological
phenomena such as WIMP freeze-out, axion abundance, baryogenesis, phase
transitions, and gravitational wave signatures are qualitatively modified.
|
This paper presents TS2Vec, a universal framework for learning
representations of time series at an arbitrary semantic level. Unlike existing
methods, TS2Vec performs contrastive learning in a hierarchical way over
augmented context views, which enables a robust contextual representation for
each timestamp. Furthermore, to obtain the representation of an arbitrary
sub-sequence in the time series, we can apply a simple aggregation over the
representations of corresponding timestamps. We conduct extensive experiments
on time series classification tasks to evaluate the quality of time series
representations. As a result, TS2Vec achieves significant improvements over
existing state-of-the-art unsupervised time series representations on 125 UCR
datasets and 29 UEA datasets. The learned timestamp-level representations also
achieve superior results in time series forecasting and anomaly detection
tasks. A linear regression trained on top of the learned representations
outperforms previous state-of-the-art time series forecasting methods.
Furthermore, we present a simple way
to apply the learned representations for unsupervised anomaly detection, which
establishes SOTA results in the literature. The source code is publicly
available at https://github.com/yuezhihan/ts2vec.
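The sub-sequence aggregation described above is simple to express; the sketch below assumes max pooling as the aggregation (an illustrative choice) over per-timestamp representations:

```python
# Sketch of the aggregation idea: given per-timestamp representations, the
# representation of any sub-sequence is a pooling over its timestamps.
# Max pooling is assumed here for illustration.
import numpy as np

def subsequence_repr(timestamp_reprs: np.ndarray, start: int, end: int) -> np.ndarray:
    """timestamp_reprs: (T, D) array; returns a (D,) vector for [start, end)."""
    return timestamp_reprs[start:end].max(axis=0)

reprs = np.random.randn(1000, 320)                 # e.g. learned representations
window = subsequence_repr(reprs, 200, 264)         # any sub-sequence, any granularity
instance = subsequence_repr(reprs, 0, len(reprs))  # instance-level representation
```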
|
In this paper we consider the anisotropic symmetric space $X(\bar\varphi)$ of
$2\pi$-periodic functions of $m$ variables, in particular the generalized
Lorentz space $L_{\bar{\psi},\bar{\tau}}^{*}(\mathbb{T}^{m})$, and the
Nikol'skii--Besov class $S_{X(\bar{\varphi}),\bar{\theta}}^{\bar r}B$. The
article proves an embedding theorem for the Nikol'skii--Besov class in the
generalized Lorentz space and establishes an upper bound for the best
approximations by trigonometric polynomials with harmonic numbers from the
hyperbolic cross of functions from the class
$S_{X(\bar{\varphi}),\bar{\theta}}^{\bar r}B$.
|
Early detection and quantification of tumour growth would help clinicians to
prescribe more accurate treatments and provide better surgical planning.
However, the multifactorial and heterogeneous nature of lung tumour progression
hampers identification of growth patterns. In this study, we present a novel
method based on a deep hierarchical generative and probabilistic framework
that, according to radiological guidelines, predicts tumour growth, quantifies
its size and provides a semantic appearance of the future nodule. Unlike
previous deterministic solutions, the generative characteristic of our approach
also allows us to estimate the uncertainty in the predictions, especially
important for complex and doubtful cases. Results of evaluating this method on
an independent test set reported a tumour growth balanced accuracy of 74%, a
tumour growth size MAE of 1.77 mm and a tumour segmentation Dice score of 78%.
These surpassed the performances of equivalent deterministic and alternative
generative solutions (i.e. probabilistic U-Net, Bayesian test dropout and
Pix2Pix GAN) confirming the suitability of our approach.
|
Most studies for postselected weak measurement in optomechanical system focus
on using a single photon as a measured system. However, we find that using weak
coherent light instead of a single photon can also amplify the mirror's
position displacement of one photon. In the weak value amplification (WVA)
regime, the weak value of one photon can lie outside the eigenvalue spectrum
and is proportional to the difference in the mirror's position displacement
between successful and failed postselection; the successful postselection
probability depends on the mean photon number of the coherent light and can be
improved by adjusting it accordingly. Outside the WVA
regime, the amplification limit can reach the level of the vacuum fluctuations,
and when the mean photon number and the optomechanical coupling parameter are
selected appropriately, its successful postselection probability becomes
higher, which is beneficial for observing the maximum amplification value under
current experimental conditions. This result relaxes the constraint that
detection outside the WVA regime is difficult, and opens up a new regime for
the study of single-photon nonlinearity in optomechanical systems.
|
We present a novel approach to the formation control of aerial robot
swarms that demonstrates flocking behavior. The proposed method stems from
the Unmanned Aerial Vehicle (UAV) dynamics; thus, it prevents any unattainable
control inputs from being produced and subsequently leads to feasible
trajectories. By modeling the inter-agent relationships using a pairwise energy
function, we show that interacting robot swarms constitute a Markov Random
Field. Our algorithm builds on the Mean-Field Approximation and incorporates
the collective behavioral rules: cohesion, separation, and velocity alignment.
We follow a distributed control scheme and show that our method can control a
swarm of UAVs to a formation and velocity consensus with real-time collision
avoidance. We validate the proposed method with physical and high-fidelity
simulation experiments.
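As a toy sketch of the three behavioural rules (omitting the pairwise-energy Markov Random Field and mean-field machinery the paper builds on; all gains and radii are arbitrary assumptions):

```python
# Toy sketch of cohesion, separation, and velocity alignment as velocity
# updates. Not the paper's algorithm; gains, radius, and timestep are arbitrary.
import numpy as np

def flocking_step(pos, vel, r=2.0, k_coh=0.05, k_sep=0.2, k_ali=0.1, dt=0.1):
    """pos, vel: (N, 3) arrays of UAV positions and velocities."""
    acc = np.zeros_like(vel)
    for i in range(len(pos)):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        nbr = (dist > 0) & (dist < r)            # neighbours within radius r
        if not nbr.any():
            continue
        acc[i] += k_coh * d[nbr].mean(axis=0)                        # cohesion
        acc[i] -= k_sep * (d[nbr] / dist[nbr, None]**2).sum(axis=0)  # separation
        acc[i] += k_ali * (vel[nbr].mean(axis=0) - vel[i])           # alignment
    vel = vel + dt * acc
    return pos + dt * vel, vel

pos, vel = np.random.rand(10, 3) * 5, np.zeros((10, 3))
for _ in range(100):
    pos, vel = flocking_step(pos, vel)
```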
|
Determining which image regions to concentrate on is critical for
Human-Object Interaction (HOI) detection. Conventional HOI detectors focus on
either detected human-object pairs or pre-defined interaction locations,
which limits the learning of effective features. In this paper, we reformulate
HOI detection as an adaptive set prediction problem. With this novel
formulation, we propose an Adaptive Set-based one-stage framework (AS-Net) with
parallel instance and interaction branches. To attain this, we map a trainable
interaction query set to an interaction prediction set with a transformer. Each
query adaptively aggregates the interaction-relevant features from global
contexts through multi-head co-attention. Besides, the training process is
supervised adaptively by matching each ground truth with the interaction
prediction. Furthermore, we design an effective instance-aware attention module
to introduce instructive features from the instance branch into the interaction
branch. Our method outperforms previous state-of-the-art methods without any
extra human pose and language features on three challenging HOI detection
datasets. Especially, we achieve over $31\%$ relative improvement on a
large-scale HICO-DET dataset. Code is available at
https://github.com/yoyomimi/AS-Net.
|
Thermal evolution models suggest that the luminosities of both Uranus and
Neptune are inconsistent with the classical assumption of an adiabatic
interior. Such models commonly predict Uranus to be brighter and, recently,
Neptune to be fainter than observed. In this work, we investigate the influence
of a thermally conductive boundary layer on the evolution of Uranus- and
Neptune-like planets. This thermal boundary layer (TBL) is assumed to be
located deep in the planet, and be caused by a steep compositional gradient
between a H-He-dominated outer envelope and an ice-rich inner envelope. We
investigate the effect of TBL thickness, thermal conductivity, and the time of
TBL formation on the planet's cooling behaviour. The calculations were
performed with our recently developed tool based on the Henyey method for
stellar evolution. We make use of state-of-the-art equations of state for
hydrogen, helium, and water, as well as of thermal conductivity data for water
calculated via ab initio methods. We find that even a thin conductive layer of
a few kilometres has a significant influence on the planetary cooling. In our
models, Uranus' measured luminosity can only be reproduced if the planet has
been near equilibrium with the solar incident flux for an extended time. For
Neptune, we find a range of solutions with a near constant effective
temperature at layer thicknesses of 15 km or larger, similar to Uranus. In
addition, we find solutions for thin TBLs of a few km and strongly enhanced
thermal conductivity. A $\sim$ 1$~$Gyr later onset of the TBL reduces the
present $\Delta T$ by an order of magnitude to only several 100 K. Our models
suggest that a TBL can significantly influence the present planetary luminosity
in both directions, making it appear either brighter or fainter than the
adiabatic case.
|
In this paper, we prove positive mass theorems for ALF and ALG
manifolds with model spaces $\mathbb R^{n-1}\times \mathbb S^1$ and $\mathbb
R^{n-2}\times \mathbb T^2$ respectively in dimensions no greater than $7$
(Theorem \ref{ALFPMT0}). In contrast to the compatibility condition for spin
structure in \cite[Theorem 2]{minerbe2008a}, we show that a certain
incompressibility condition for $\mathbb S^1$ and $\mathbb T^2$ is enough to
guarantee the nonnegativity of the mass. As in the asymptotically flat case,
we reduce the desired positive mass theorems to those concerning the
non-existence of positive scalar curvature metrics on closed manifolds coming
from generalized surgery on the $n$-torus. Finally, we investigate certain
fill-in problems and obtain an optimal bound for the total mean curvature of
admissible fill-ins for the flat product $2$-torus $\mathbb S^1(l_1)\times
\mathbb S^1(l_2)$.
|
Transient stability assessment (TSA) is a cornerstone for resilient
operations of today's interconnected power grids. This paper is a confluence of
quantum computing, data science and machine learning to potentially address the
power system TSA challenge. We devise a quantum TSA (qTSA) method to enable
scalable and efficient data-driven transient stability prediction for bulk
power systems, which is the first attempt to tackle the TSA issue with quantum
computing. Our contributions are three-fold: 1) A low-depth, high
expressibility quantum neural network for accurate and noise-resilient TSA; 2)
A quantum natural gradient descent algorithm for efficient qTSA training; 3) A
systematic analysis of qTSA's performance under various quantum factors. qTSA
underpins a foundation of quantum-enabled and data-driven power grid stability
analytics. It renders the intractable TSA straightforward and effortless in the
Hilbert space, and therefore provides stability information for power system
operations. Extensive experiments on quantum simulators and real quantum
computers verify the accuracy, noise-resilience, scalability and universality
of qTSA.
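For illustration only, a low-depth, hardware-efficient parameterized circuit of the general kind used in quantum neural networks can be built in a few lines with Qiskit (the layer count and entangling pattern here are assumptions, not the paper's ansatz):

```python
# Illustration only (not the paper's ansatz): a low-depth parameterized
# circuit with single-qubit rotations and a linear entangling layer.
from qiskit.circuit import QuantumCircuit, ParameterVector

def qnn_ansatz(n_qubits=4, n_layers=2):
    theta = ParameterVector("theta", 2 * n_qubits * n_layers)
    qc = QuantumCircuit(n_qubits)
    idx = 0
    for _ in range(n_layers):
        for q in range(n_qubits):
            qc.ry(theta[idx], q); idx += 1   # trainable single-qubit rotations
            qc.rz(theta[idx], q); idx += 1
        for q in range(n_qubits - 1):        # linear entangling layer
            qc.cx(q, q + 1)
    return qc

print(qnn_ansatz().draw())
```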
|
Fake news on social media has become a hot research topic as it negatively
impacts public discourse around real news. Specifically, the ongoing
COVID-19 pandemic has seen a rise of inaccurate and misleading information due
to the surrounding controversies and unknown details at the beginning of the
pandemic. The FakeNews task at MediaEval 2020 tackles this problem by creating
a challenge to automatically detect tweets containing misinformation based on
text and structure from Twitter follower network. In this paper, we present a
simple approach that uses BERT embeddings and a shallow neural network for
classifying tweets using only text, and discuss our findings and limitations of
the approach in text-based misinformation detection.
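A minimal version of this pipeline could look as follows (model choice and classifier sizes are assumptions; the training loop is omitted), using Hugging Face transformers to embed tweets and a shallow feed-forward network to classify them:

```python
# Minimal sketch: embed each tweet with a pretrained BERT, then train a
# shallow feed-forward classifier on the [CLS] embedding.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**batch)
    return out.last_hidden_state[:, 0]          # (B, 768) [CLS] embeddings

clf = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 2))
logits = clf(embed(["5G towers cause covid", "Vaccines are in trials"]))
```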
|
We measured $^{35}$Cl abundances in 52 M giants with metallicities between
-0.5 $<$ [Fe/H] $<$ 0.12. Abundances and atmospheric parameters were derived
using infrared spectra from CSHELL on the IRTF and from optical echelle
spectra. We measured Cl abundances by fitting a H$^{35}$Cl molecular feature at
3.6985 $\mu$m with synthetic spectra. We also measured the abundances of O, Ca,
Ti, and Fe using atomic absorption lines. We find that the [Cl/Fe] ratio for
our stars agrees with chemical evolution models of Cl and the [Cl/Ca] ratio is
broadly consistent with the solar ratio over our metallicity range. Both
indicate that Cl is primarily made in core-collapse supernovae with some
contributions from Type Ia SN. We suggest other potential nucleosynthesis
processes, such as the $\nu$-process, are not significant producers of Cl.
Finally, we also find our Cl abundances are consistent with H II region and
planetary nebula abundances at a given oxygen abundance, although there is
scatter in
the data.
|
H\"ormander's propagation of singularities theorem does not fully describe
the propagation of singularities in subelliptic wave equations, due to the
existence of doubly characteristic points. In the present paper, building upon
a visionary conference paper by R. Melrose \cite{Mel86}, we prove that
singularities of subelliptic wave equations only propagate along
null-bicharacteristics and abnormal extremal lifts of singular curves, which
are well-known curves in optimal control theory. We first revisit in depth the
ideas sketched by R. Melrose in \cite{Mel86}, notably providing a full proof of
its main statement. Making more explicit computations, we then explain how
sub-Riemannian geometry and abnormal extremals come into play. This result
shows that abnormal extremals have an important role in the classical-quantum
correspondence between sub-Riemannian geometry and subelliptic operators. As a
consequence, for $x\neq y$ and denoting by $K_G$ the wave kernel, we obtain
that the singular support of the distribution $t\mapsto K_G(t,x,y)$ is included
in the set of lengths of the normal geodesics joining $x$ and $y$, at least up
to the time equal to the minimal length of a singular curve joining $x$ and
$y$.
|
The world needs around 150 Pg of negative carbon emissions to mitigate
climate change. Global soils may provide a stable, sizeable reservoir to help
achieve this goal by sequestering atmospheric carbon dioxide as soil organic
carbon (SOC). In turn, SOC can support healthy soils and provide a multitude of
ecosystem benefits. To support SOC sequestration, researchers and policy makers
must be able to precisely measure the amount of SOC in a given plot of land.
SOC measurement is typically accomplished by taking soil cores selected at
random from the plot under study, mixing (compositing) some of them together,
and analyzing (assaying) the composited samples in a laboratory. Compositing
reduces assay costs, which can be substantial. Taking samples is also costly.
Given uncertainties and costs in both sampling and assay along with a desired
estimation precision, there is an optimal composite size that will minimize the
budget required to achieve that precision. Conversely, given a fixed budget,
there is a composite size that minimizes uncertainty. In this paper, we
describe and formalize sampling and assay for SOC and derive the optima for
three commonly used assay methods: dry combustion in an elemental analyzer,
loss-on-ignition, and mid-infrared spectroscopy. We demonstrate the utility of
this approach using data from a soil survey conducted in California. We give
recommendations for practice and provide software to implement our framework.
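To make the optimisation concrete, here is a sketch under a simple assumed model (not necessarily the paper's exact derivation): with $n$ composites of $k$ cores each, Var(plot mean) $\approx s_{\rm core}^2/(nk) + s_{\rm assay}^2/n$ and cost $= n(k\,c_{\rm core} + c_{\rm assay})$; minimising cost at fixed precision then yields a square-root rule for $k$:

```python
# Sketch under an assumed variance/cost model: n composites of k cores each.
# Minimising cost at a target variance gives k* = sqrt(s_core^2 c_assay /
# (s_assay^2 c_core)), the classic square-root rule.
import math

def optimal_composite_size(s_core, s_assay, c_core, c_assay, target_var):
    k = max(1, round(math.sqrt((s_core**2 * c_assay) / (s_assay**2 * c_core))))
    n = math.ceil((s_core**2 / k + s_assay**2) / target_var)
    cost = n * (k * c_core + c_assay)
    return k, n, cost

# e.g. core SD 8 g/kg, assay SD 2 g/kg, $5/core, $20/assay, variance target 1
k, n, cost = optimal_composite_size(8.0, 2.0, 5.0, 20.0, 1.0)
print(f"cores per composite k={k}, composites n={n}, budget ~ ${cost}")
```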
|
The search for close (a <= 5 au) giant planet (GP) companions with radial
velocity (RV) around young stars and the estimate of their occurrence rates is
important to constrain the migration timescales. Furthermore, this search will
allow the giant planet occurrence rates to be computed at all separations via
the combination with direct imaging techniques. The RV search around young
stars is a challenge as they are generally faster rotators than older stars of
similar spectral types and they exhibit signatures of spots or pulsation in
their RV time series. Specific analyses are necessary to characterize, and
possibly correct for, this activity. Our aim is to search for planets around
young nearby stars and to estimate the GP occurrence rates for periods up to
1000 days. We used the SOPHIE spectrograph to observe 63 young (<400 Myr) A-M
stars. We used our SAFIR software to compute the RVs and other spectroscopic
observables. We then combined this survey with the HARPS YNS survey to compute
the companion occurrence rates on a total of 120 young A-M stars. We report one
new trend compatible with a planetary companion on HD109647. We also report
HD105693 and HD112097 as binaries, and we confirm the binarity of HD2454,
HD13531, HD17250A, HD28945, HD39587, HD131156, HD 142229, HD186704A, and HD
195943. We constrained for the first time the orbital parameters of HD195943B.
We refute the HD13507 single brown dwarf (BD) companion solution and propose a
double BD companion solution. Based on our sample of 120 young stars, we obtain
a GP occurrence rate of 1_{-0.3}^{+2.2}% for periods lower than 1000 days, and
we obtain an upper limit on the BD occurrence rate of 0.9_{-0.9}^{+2}% in the same
period range. We report a possible lack of close (1<P<1000 days) GPs around
young FK stars compared to their older counterparts, with a confidence level of
90%.
|
The current status and future prospects of searches for axion-like particles
(ALPs) at colliders, mostly focused on the CERN LHC, are summarized.
Constraints on ALPs with masses above a few GeV that couple to photons, as well
as to Z or Higgs bosons, have been set at the LHC through searches for new
$a\to\gamma\gamma$ resonances in di-, tri-, and four-photon final states.
Inclusive and exclusive diphotons in proton-proton and lead-lead collisions,
pp, PbPb $\to a \to \gamma\gamma (+X)$, as well as exotic Z and Higgs boson
decays, pp $\to \mathrm{Z},\mathrm{H}\to a\gamma \to 3\gamma$ and pp $\to
\mathrm{H}\to aa \to 4\gamma$, have been analyzed. Exclusive searches in PbPb
collisions provide the best exclusion limits for ALP masses $m_a\approx 5-$100
GeV, whereas the other channels are the most competitive ones over $m_a\approx
100$ GeV$-$2.6 TeV. Integrated ALP production cross sections up to $\sim$100 nb
are excluded at 95% confidence level, corresponding to constraints on
axion-photon couplings down to $g_{a\gamma}\approx$ 0.05 TeV$^{-1}$, over broad
mass ranges. Factors of 10$-$100 improvements in these limits are expected at
the LHC approaching $g_{a\gamma}\approx 10^{-3}$ TeV$^{-1}$ over $m_a\approx 1$
GeV$-$5 TeV in the next decade.
|
Using only linear optical elements, the creation of dual-rail photonic
entangled states is inherently probabilistic. Known entanglement generation
schemes have low success probabilities, requiring large-scale multiplexing to
achieve near-deterministic operation of quantum information processing
protocols. In this paper, we introduce multiple techniques and methods to
generate photonic entangled states with high probability, which have the
potential to reduce the footprint of Linear Optical Quantum Computing (LOQC)
architectures drastically. Most notably, we show how to improve Bell state
preparation from four single photons up to a success probability of p=2/3, how
to boost Type-I fusion to 75% with a dual-rail Bell state ancilla, and how to
improve Type-II fusion beyond the limits of Bell state discrimination.
|
Computational meshes arising from shape optimization routines commonly suffer
from decrease of mesh quality or even destruction of the mesh. In this work, we
provide an approach to regularize general shape optimization problems to
increase both shape and volume mesh quality. For this, we employ pre-shape
calculus (cf. arXiv:2012.09124). Existence of regularized solutions is
guaranteed. Further, consistency of modified pre-shape gradient systems is
established. We present pre-shape gradient system modifications, which permit
simultaneous shape optimization with mesh quality improvement. Optimal shapes
to the original problem are left invariant under regularization. The
computational burden of our approach is limited, since additional solution of
possibly larger (non-)linear systems for regularized shape gradients is not
necessary. We implement and compare pre-shape gradient regularization
approaches for a hard-to-solve 2D problem. As our approach does not depend on
the choice of metrics representing shape gradients, we employ and compare
several different metrics.
|
The Tibet AS$\gamma$ experiment has measured the $\gamma$-ray flux of supernova
remnant G106.3+2.7 up to 100 TeV, suggesting it is potentially a
"PeVatron". A challenge arises because the hadronic scenario requires a hard
proton spectrum (with spectral index $\approx 1.8$), while usual observations
and numerical simulations prefer a soft proton spectrum (with spectral index
$\geq 2$). In this paper, we explore an alternative scenario to explain the
$\gamma$-ray spectrum of G106.3+2.7 within the current understanding of
acceleration and escape processes. We consider that the cosmic ray particles
are scattered by the turbulence driven via the Bell instability. The resulting
hadronic $\gamma$-ray spectrum is novel, dominates the contribution to the
emission above 10\,TeV, and can explain the bizarre broadband spectrum of
G106.3+2.7 in combination with leptonic emission from the remnant.
|
Online learning with expert advice is widely used in various machine learning
tasks. It considers the problem where a learner chooses one from a set of
experts to take advice and make a decision. In many learning problems, experts
may be related, hence the learner can observe the losses associated with a
subset of experts that are related to the chosen one. In this context, the
relationship among experts can be captured by a feedback graph, which can be
used to assist the learner's decision making. However, in practice, the nominal
feedback graph often entails uncertainties, which renders it impossible to
reveal the actual relationship among experts. To cope with this challenge, the
present work studies various cases of potential uncertainties, and develops
novel online learning algorithms to deal with uncertainties while making use of
the uncertain feedback graph. The proposed algorithms are proved to enjoy
sublinear regret under mild conditions. Experiments on real datasets are
presented to demonstrate the effectiveness of the novel algorithms.
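As a sketch of how a feedback graph enters an expert algorithm (a generic exponential-weights variant with importance-weighted side observations, not the paper's uncertainty-aware algorithms; the toy graph is an assumption):

```python
# Sketch: exponential weights with side observations from a feedback graph.
# Choosing expert i reveals the losses of its neighbours; each observed loss
# is importance-weighted by the probability that the expert is observed.
import numpy as np

def exp_weights_graph(losses, neighbours, eta=0.1, seed=0):
    """losses: (T, K) matrix; neighbours[i]: set of experts observed when i is chosen."""
    rng = np.random.default_rng(seed)
    T, K = losses.shape
    w = np.zeros(K)                      # log-weights
    total = 0.0
    for t in range(T):
        p = np.exp(w - w.max()); p /= p.sum()
        i = rng.choice(K, p=p)
        total += losses[t, i]
        for j in neighbours[i]:          # side observations via the graph
            q_j = sum(p[a] for a in range(K) if j in neighbours[a])
            w[j] -= eta * losses[t, j] / q_j   # importance-weighted update
    return total

K = 4
nbrs = {i: {i, (i + 1) % K} for i in range(K)}   # toy feedback graph
print(exp_weights_graph(np.random.rand(1000, K), nbrs))
```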
|
The recursive expansion of tree level multitrace Einstein-Yang-Mills (EYM)
amplitudes induces a refined graphic expansion, by which any tree-level EYM
amplitude can be expressed as a summation over all possible refined graphs.
Each graph contributes a unique coefficient as well as a proper combination of
color-ordered Yang-Mills (YM) amplitudes. This expansion allows one to evaluate
EYM amplitudes through YM amplitudes, which have much simpler structures
in four dimensions than the former. In this paper, we classify the refined
graphs for the expansion of EYM amplitudes into $\text{N}^{\,k}$MHV sectors.
Amplitudes in four dimensions, which involve $k+2$ negative-helicity particles,
at most get non-vanishing contribution from graphs in $\text{N}^{\,k'(k'\leq
k)}$MHV sectors. With the help of this classification, we evaluate the
non-vanishing amplitudes with two negative-helicity particles in four
dimensions. We establish a correspondence between the refined graphs for
single-trace amplitudes with $(g^-_i,g^-_j)$ or $(h^-_i,g^-_j)$ configuration
and the spanning forests of the known Hodges determinant form. Inspired by this
correspondence, we further propose a symmetric formula of double-trace
amplitudes with $(g^-_i,g^-_j)$ configuration. By analyzing the cancellation
between refined graphs in four dimensions, we prove that any other tree
amplitude with two negative-helicity particles has to vanish.
|
Control barrier functions (CBFs) have recently become a powerful method for
rendering desired safe sets forward invariant in single- and multi-agent
systems. In the multi-agent case, prior literature has considered scenarios
where all agents cooperate to ensure that the corresponding set remains
invariant. However, these works do not consider scenarios where a subset of the
agents are behaving adversarially with the intent to violate safety bounds. In
addition, prior results on multi-agent CBFs typically assume that control
inputs are continuous and do not consider sampled-data dynamics. This paper
presents a framework for normally-behaving agents in a multi-agent system with
heterogeneous control-affine, sampled-data dynamics to render a safe set
forward invariant in the presence of adversarial agents. The proposed approach
considers several aspects of practical control systems including input
constraints, clock asynchrony and disturbances, and distributed calculation of
control inputs. Our approach also considers functions describing safe sets
having high relative degree with respect to system dynamics. The efficacy of
these results is demonstrated through simulations.
|
In this paper, we investigate over-the-air model aggregation in a federated
edge learning (FEEL) system. We introduce a Markovian probability model to
characterize the intrinsic temporal structure of the model aggregation series.
With this temporal probability model, we formulate the model aggregation
problem as inferring the desired aggregated update given all the past
observations from a Bayesian perspective. We develop a message passing based
algorithm, termed temporal-structure-assisted gradient aggregation (TSA-GA), to
fulfil this estimation task with low complexity and near-optimal performance.
We further establish the state evolution (SE) analysis to characterize the
behaviour of the proposed TSA-GA algorithm, and derive an explicit bound of the
expected loss reduction of the FEEL system under certain standard regularity
conditions. In addition, we develop an expectation maximization (EM) strategy
to learn the unknown parameters in the Markovian model. We show that the
proposed TSA-GA algorithm significantly outperforms the state-of-the-art, and is
able to achieve comparable learning performance as the error-free benchmark in
terms of both convergence rate and final test accuracy.
|
In 2019, Reyes & Wright used the NASA Astrophysics Data System (ADS) to
initiate a comprehensive bibliography for SETI accessible to the public. Since
then, updates to the library have been incomplete, partly due to the difficulty
in managing the large number of false positive publications generated by
searching ADS using simple search terms. In preparation for a recent update,
the scope of the library was revised and reexamined. The scope now includes
social sciences and commensal SETI. Results were curated based on five SETI
keyword searches: "SETI", "technosignature", "Fermi Paradox", "Drake Equation",
and "extraterrestrial intelligence". These keywords returned 553 publications
that merited inclusion in the bibliography that were not previously present. A
curated library of false positive results is now concurrently maintained to
facilitate their exclusion from future searches. A search query and workflow
were developed to capture nearly all SETI-related papers indexed by ADS while
minimizing false positives. These tools will enable efficient, consistent
updates of the SETI library by future curators, and could be adopted for other
bibliography projects as well.
|
The article is devoted to the development of numerical methods for solving
saddle point problems and variational inequalities with simplified requirements
for the smoothness conditions of functionals. Recently, some notable methods
were proposed for optimization problems with strongly monotone operators. Our
focus here is on newly proposed techniques for solving strongly convex-concave
saddle point problems. One of the goals of the article is to improve the
obtained estimates of the complexity of introduced algorithms by using
accelerated methods for solving auxiliary problems. The second focus of the
article is introducing an analogue of the boundedness condition for the
operator in the case of arbitrary (not necessarily Euclidean) prox structure.
We propose an analogue of the mirror descent method for solving variational
inequalities with such operators, which is optimal in the considered class of
problems.
|
We consider a neutrinophilic $U(1)$ extension of the standard model (SM)
which couples only to SM isosinglet neutral fermions, charged under the new
group. The neutral fermions couple to the SM matter fields through Yukawa
interactions. The neutrinos in the model get their masses from a standard
inverse-seesaw mechanism while an added scalar sector is responsible for the
breaking of the gauged $U(1)$ leading to a light neutral gauge boson ($Z'$),
which has minimal interaction with the SM sector. We study the phenomenology of
having such a light $Z'$ in the context of neutrinophilic interactions as well
as the role of allowing kinetic mixing between the new $U(1)$ group with the SM
hypercharge group. We show that current experimental searches allow for a very
light $Z'$ if it does not couple to SM fields directly and highlight the search
strategies at the LHC. We observe that multilepton final states in the form of
$(4\ell + \slashed{E}_T)$ and $(3\ell + 2j + \slashed{E}_T)$ could be crucial
in discovering such a neutrinophilic gauge boson lying in a mass range of
$200$--$500$ GeV.
|
We show that the naive mean-field approximation correctly predicts the
leading term of the logarithmic lower tail probabilities for the number of
copies of a given subgraph in $G(n,p)$ and of arithmetic progressions of a
given length in random subsets of the integers in the entire range of densities
where the mean-field approximation is viable.
Our main technical result provides sufficient conditions on the maximum
degrees of a uniform hypergraph $\mathcal{H}$ that guarantee that the
logarithmic lower tail probabilities for the number of edges induced by a
binomial random subset of the vertices of $\mathcal{H}$ can be
well-approximated by considering only product distributions. This may be
interpreted as a weak, probabilistic version of the hypergraph container lemma
that is applicable to all sparser-than-average (and not only independent) sets.
|
Many studies are devoted to the design of radiomic models for a prediction
task. When no effective model is found, it is often difficult to know whether
this is because the radiomic features do not contain information relevant to
the task or because the data are insufficient. We propose a downsampling
method to answer that
question when considering a classification task into two groups. Using two
large patient cohorts, several experimental configurations involving different
numbers of patients were created. Univariate or multivariate radiomic models
were designed from each configuration. Their performance as reflected by the
Youden index (YI) and Area Under the receiver operating characteristic Curve
(AUC) was compared to the stable performance obtained with the highest number
of patients. A downsampling method is described to predict the YI and AUC
achievable with a large number of patients. Using the multivariate models
involving machine learning, YI and AUC increased with the number of patients
while they decreased for univariate models. The downsampling method estimated
the YI and AUC obtained with the largest number of patients better than the YI
and AUC obtained using only the number of available patients, and it identifies
the lack of information relevant to the classification task when no such
information exists.
|
Data-driven science is constantly growing in importance within Astrophysics,
due to the huge amount of multi-wavelength data collected every day,
characterized by complex and high-volume information requiring efficient and,
as far as possible, automated exploration tools.
Furthermore, to accomplish the main and legacy science objectives of future or
incoming large and deep survey projects, such as JWST, LSST and Euclid, a
crucial role is played by an accurate estimation of photometric redshifts,
whose knowledge would permit the detection and analysis of extended and
peculiar sources by disentangling low-z from high-z sources and would
contribute to solve the modern cosmological discrepancies. The recent
photometric redshift data challenges, organized within several survey projects,
like LSST and Euclid, pushed the exploitation of multi-wavelength and
multi-dimensional data observed or ad hoc simulated to improve and optimize the
photometric redshifts prediction and statistical characterization based on both
SED template fitting and machine learning methodologies. But they also provided
a new impetus in the investigation on hybrid and deep learning techniques,
aimed at conjugating the positive peculiarities of different methodologies,
thus optimizing the estimation accuracy and maximizing the photometric range
coverage, particularly important in the high-z regime, where the spectroscopic
ground truth is poorly available. In this context we summarize what has been
learned and proposed in more than a decade of research.
|
Using a neural network potential (ANI-1ccx) generated from quantum data on a
large data set of molecules and pairs of molecules, isothermal, constant volume
simulations demonstrate that the model can be as accurate as ab initio
molecular dynamics for simulations of pure liquid water and the aqueous
solvation of a methane molecule. No theoretical or experimental data for the
liquid phase are used to train the model, suggesting that the ANI-1ccx approach
is an effective method to link high level ab initio methods to potentials for
large scale simulations.
|
The IEEE 802.11p standard defines wireless technology protocols that enable
vehicular communication and efficient traffic management. A major challenge in
the development of this technology is ensuring communication reliability in
highly dynamic vehicular environments, where the wireless communication
channels are doubly selective, thus making channel estimation and tracking a
relevant problem to investigate. In this paper, a novel deep learning
(DL)-based weighted interpolation estimator is proposed to accurately estimate
vehicular channels especially in high mobility scenarios. The proposed
estimator is based on modifying the pilot allocation of the IEEE 802.11p
standard so that more transmission data rates are achieved. Extensive numerical
experiments demonstrate that the developed estimator significantly outperforms
the recently proposed DL-based frame-by-frame estimators in different vehicular
scenarios, while substantially reducing the overall computational complexity.
|
This work deals with a new generalization of $r$-Stirling numbers using
$l$-tuples of permutations and partitions, called $(l,r)$-Stirling numbers of
both kinds. We study various properties of these numbers using combinatorial
interpretations and symmetric functions. We also give a limit representation
of the multiple zeta function using $(l,r)$-Stirling numbers of the first kind.
|
Low-mass ($M_{\rm{500}}<5\times10^{14}{\rm{M_\odot}}$) galaxy clusters have
been largely unexplored in radio observations, due to the inadequate
sensitivity of existing telescopes. However, the upgraded GMRT (uGMRT) and the
Low Frequency ARray (LoFAR), with unprecedented sensitivity at low frequencies,
have paved the way to closely study less massive clusters than before. We have
started the first large-scale programme to systematically search for diffuse
radio emission from low-mass galaxy clusters, chosen from the Planck
Sunyaev-Zel'dovich cluster catalogue. We report here the detection of diffuse
radio emission from four of the 12 objects in our sample, shortlisted from the
inspection of the LOFAR Two-metre Sky Survey (LoTSS-I), followed up by uGMRT
Band 3 deep observations. The clusters PSZ2~G089 (Abell~1904) and PSZ2~G111
(Abell~1697) are detected with relic-like emission, while PSZ2~G106 is found to
have an intermediate radio halo and PSZ2~G080 (Abell~2018) seems to be a
halo-relic system. PSZ2~G089 and PSZ2~G080 are among the lowest-mass clusters
discovered with a radio relic and a halo-relic system, respectively. A high
($\sim30\%$) detection rate, with powerful radio emission ($P_{1.4\ {\rm
GHz}}\sim10^{23}~{\rm{W~Hz^{-1}}}$) found in most of these objects, opens up
prospects of studying radio emission in galaxy clusters over a wider mass
range, to much lower-mass systems.
|