A key challenge in Imitation Learning (IL) is that optimal state-action
demonstrations are difficult for the teacher to provide. For example, in
robotics, providing kinesthetic demonstrations on a robotic manipulator
requires the teacher to control multiple degrees of freedom at once. The
difficulty of requiring optimal state-action demonstrations limits the space of
problems where the teacher can provide quality feedback. As an alternative to
state-action demonstrations, the teacher can provide corrective feedback such
as their preferences or rewards. Prior work has created algorithms designed to
learn from specific types of noisy feedback, but across teachers and tasks
different forms of feedback may be required. Instead, we propose that learning
from a diversity of scenarios requires learning from a variety of feedback. To
do so, we rely on the following insight: the teacher's cost function is latent,
and a stream of feedback can be modeled as a stream of loss functions. Any
online learning algorithm can then be used to minimize the sum of these
losses. With this insight we can learn from a
diversity of feedback that is weakly correlated with the teacher's true cost
function. We unify prior work into a general corrective-feedback meta-algorithm
and show that, regardless of the feedback type, we obtain the same regret
bounds. We
demonstrate our approach by learning to perform a household navigation task on
a robotic racecar platform. Our results show that our approach can learn
quickly from a variety of noisy feedback.
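
As a toy illustration of this insight, the sketch below (assumed quadratic losses and noisy targets; not the paper's implementation) treats each piece of feedback as a loss function and runs online gradient descent on the stream:

```python
import numpy as np

# Minimal sketch: feedback arrives as a stream of loss functions whose
# minimizers are noisy, weakly informative observations of a latent teacher
# cost parameter; any online learner (here, online gradient descent with a
# 1/sqrt(t) step size) can minimize the running sum of these losses.
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0])   # latent teacher cost (assumed for the toy)
theta = np.zeros(2)                  # learner's current estimate

for t in range(1, 501):
    target = theta_true + rng.normal(scale=1.0, size=2)  # noisy feedback
    grad = theta - target            # gradient of 0.5 * ||theta - target||^2
    theta -= grad / np.sqrt(t)       # step size giving O(sqrt(T)) regret

print(theta)  # approaches theta_true as feedback accumulates
```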
|
We report on a joint experimental and theoretical study of photoelectron
circular dichroism (PECD) in methyloxirane. By detecting O 1s-photoelectrons in
coincidence with fragment ions, we deduce the molecule's orientation and
photoelectron emission direction in the laboratory frame. Thereby, we retrieve
a fourfold differential PECD clearly beyond 50%. This strong chiral asymmetry
is reproduced by ab initio electronic structure calculations. Providing such a
pronounced contrast makes PECD of fixed-in-space chiral molecules an even more
sensitive tool for chiral recognition in the gas phase.
|
Dark energy is the most abundant constituent of the present universe and is
responsible for the universe's accelerated expansion. It is therefore
plausible that dark energy may interact within compact astrophysical objects.
The author of Ref. [Phys. Rev. D 83, 127501 (2011)] constructs an exact star
solution consisting of ordinary matter and a phantom field from a constant
density star (CDS) known as the Schwarzschild interior solution. This star is
termed a dark energy star (DES). The author claims that the phantom field
represents dark energy within the star. So far, however, the role of the
phantom field as dark energy in the DES has not been systematically studied.
Related to this issue, we analyze the energy conditions of the DES. We expect
the DES to violate the strong energy condition (SEC) under particular
conditions. We discover that the SEC is fully violated only when the
compactness reaches the Buchdahl limit. Furthermore, we also investigate the
causal conditions and the stability against convective motion and
gravitational cracking. We find that those conditions are violated as well.
These results indicate that the DES is not physically stable. However, we may
consider the DES as an ultra-compact object for which we can calculate the
gravitational wave echo time and echo frequency and compare them to those of
the CDS. We find that the contribution of the phantom field delays the
gravitational wave echoes. The effective potential of the perturbed DES is
also studied. Like that of the CDS, it possesses a potential well, but a
deeper one. We also investigate the possibility that the DES could form a
gravastar when $ C=1 $. It is found that the gravastar produced from the DES
possesses no singularity, with a de Sitter (dS)-like phase as the interior.
These results could open more opportunities for the observational study of
dark energy in the near future, particularly through compact astrophysical
objects.
|
We review the trade-offs between speed, fluctuations, and thermodynamic cost
involved with biological processes in nonequilibrium states, and discuss how
optimal these processes are in light of the universal bound set by the
thermodynamic uncertainty relation (TUR). The values of the uncertainty product
$\mathcal{Q}$ of TUR, which can be used as a measure of the precision of
enzymatic processes realized for a given thermodynamic cost, are suboptimal
when the substrate concentration $[S]$ is at the Michaelis constant
($K_\text{M}$), and some of the key biological processes are found to work
around this condition. We illustrate the utility of $\mathcal{Q}$ in assessing
how close the molecular motors and biomass producing machineries are to the TUR
bound, and for the cases of biomass production (or biological copying
processes) we discuss how their optimality quantified in terms of $\mathcal{Q}$
is balanced with the error rate in the information transfer process. We also
touch upon the trade-offs in other error-minimizing processes in biology, such
as gene regulation and chaperone-assisted protein folding. A spectrum of
$\mathcal{Q}$ recapitulating the biological processes surveyed here provides
glimpses into how biological systems have evolved to optimize and balance
conflicting functional requirements.
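
For reference, the bound underlying $\mathcal{Q}$ can be stated as follows (a standard form of the TUR from the literature; the exact conventions of the paper are assumed here). For a current-like output $X$ accumulated over the observation time, with total entropy production $\Delta S_{\text{tot}}$,
$$
\mathcal{Q} \equiv \Delta S_{\text{tot}}\,\frac{\mathrm{Var}(X)}{\langle X\rangle^{2}} \;\geq\; 2k_{B},
$$
so that $\mathcal{Q}=2k_{B}$ saturates the bound, and larger values indicate less precision gained per unit thermodynamic cost.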
|
We introduce a framework for Bayesian experimental design (BED) with implicit
models, where the data-generating distribution is intractable but sampling from
it is still possible. In order to find optimal experimental designs for such
models, our approach maximises mutual information lower bounds that are
parametrised by neural networks. By training a neural network on sampled data,
we simultaneously update network parameters and designs using stochastic
gradient ascent. The framework enables experimental design with a variety of
prominent lower bounds and can be applied to a wide range of scientific tasks,
such as parameter estimation, model discrimination and improving future
predictions. Using a set of intractable toy models, we provide a comprehensive
empirical comparison of prominent lower bounds applied to the aforementioned
tasks. We further validate our framework on a challenging system of stochastic
differential equations from epidemiology.
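
A minimal sketch of this scheme on an assumed toy simulator (linear-Gaussian, chosen for clarity; not one of the models used in the paper), using the NWJ/MINE-f mutual information lower bound with a neural critic:

```python
import torch

# Sketch: maximise a neural MI lower bound jointly over the critic parameters
# and the design d. Model: theta ~ N(0, 1); y = d * theta + noise. Sampling is
# reparameterisable, so gradients flow back into the design.
torch.manual_seed(0)
critic = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
)
d = torch.tensor(0.1, requires_grad=True)          # experimental design
opt = torch.optim.Adam(list(critic.parameters()) + [d], lr=1e-2)

for step in range(2000):
    theta = torch.randn(256, 1)                     # prior samples
    y = d * theta + 0.1 * torch.randn(256, 1)       # implicit simulator
    joint = critic(torch.cat([y, theta], dim=1))
    # Shuffling theta gives samples from the product of marginals.
    marg = critic(torch.cat([y, theta[torch.randperm(256)]], dim=1))
    mi_lower = joint.mean() - torch.exp(marg - 1.0).mean()  # NWJ bound
    loss = -mi_lower                                 # ascend the bound
    opt.zero_grad()
    loss.backward()
    opt.step()

# In a real problem the design would be constrained to a feasible set.
print(float(d))  # larger |d| yields more information about theta in this toy
```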
|
Deploying convolutional neural networks (CNNs) for embedded applications
presents many challenges in balancing resource-efficiency and task-related
accuracy. These two aspects have been well-researched in the field of CNN
compression. In real-world applications, a third important aspect comes into
play, namely the robustness of the CNN. In this paper, we thoroughly study the
robustness of uncompressed, distilled, pruned and binarized neural networks
against white-box and black-box adversarial attacks (FGSM, PGD, C&W, DeepFool,
LocalSearch and GenAttack). These new insights facilitate defensive training
schemes or reactive filtering methods, where the attack is detected and the
input is discarded and/or cleaned. Experimental results are shown for distilled
CNNs, agent-based state-of-the-art pruned models, and binarized neural networks
(BNNs) such as XNOR-Net and ABC-Net, trained on CIFAR-10 and ImageNet datasets.
We present evaluation methods to simplify the comparison between CNNs under
different attack schemes using loss/accuracy levels, stress-strain graphs,
box-plots and class activation mapping (CAM). Our analysis reveals susceptible
behavior of uncompressed and pruned CNNs against all kinds of attacks. The
distilled models exhibit their strength against all white-box attacks, with
the exception of C&W. Furthermore, binarized neural networks exhibit resilient
behavior compared to their baselines and other compressed variants.
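
A minimal sketch of FGSM, one of the white-box attacks evaluated above (the model and data below are placeholders, not the paper's setup):

```python
import torch

def fgsm(model, x, y, eps):
    """Return an adversarial example x' with ||x' - x||_inf <= eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage with any classifier taking CIFAR-10-shaped inputs (toy linear model here):
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_adv = fgsm(model, x, y, eps=8 / 255)
```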
|
The fusion probability for the production of superheavy nuclei in cold fusion
reactions was investigated and compared with recent experimental results for
$^{48}$Ca, $^{50}$Ti, and $^{54}$Cr incident on a $^{208}$Pb target.
Calculations were performed within the fusion-by-diffusion model (FbD) using
new nuclear data tables by Jachimowicz et al. It is shown that the experimental
data could be well explained within the framework of the FbD model. The
saturation of the fusion probability at bombarding energies above the
interaction barrier is reproduced. It emerges naturally from the physical
effect of the suppression of contributions of higher partial waves in fusion
reactions and is related to the critical angular momentum. The role of the
difference in values of the rotational energies in the fusion saddle point and
contact (sticking) configuration of the projectile-target system is discussed.
|
In this paper we show that if $\theta$ is a $T$-design of an association
scheme $(\Omega, \mathcal{R})$, and the Krein parameters $q_{i,j}^h$ vanish for
some $h \in T$ and all $i, j \in T$, then $\theta$ consists of precisely half
of the vertices of $(\Omega, \mathcal{R})$ or it is a $T'$-design, where
$|T'|>|T|$. We then apply this result to various problems in finite geometry.
In particular, we show for the first time that nontrivial $m$-ovoids of
generalised octagons of order $(s, s^2)$ are hemisystems, and hence no
$m$-ovoid of a Ree-Tits octagon can exist. We give short proofs of similar
results for (i) partial geometries with certain order conditions; (ii) thick
generalised quadrangles of order $(s,s^2)$; (iii) the dual polar spaces
$\rm{DQ}(2d, q)$, $\rm{DW}(2d-1,q)$ and $\rm{DH}(2d-1,q^2)$, for $d \ge 3$;
(iv) the Penttila-Williford scheme. In the process of (iv), we also consider a
natural generalisation of the Penttila-Williford scheme in $\rm{Q}^-(2n-1, q)$,
$n\geqslant 3$.
|
An inequality is derived for the average $t$-energy of pinned distance
measures, where $0 < t < 1$. This refines Mattila's theorem on distance sets to
pinned distance sets, and gives an analogue of Liu's theorem for pinned
distance sets of dimension smaller than 1.
|
We prove that with high probability over the choice of a random graph $G$
from the Erd\H{o}s-R\'enyi distribution $G(n,1/2)$, a natural
$n^{O(\varepsilon^2 \log n)}$-time, degree $O(\varepsilon^2 \log n)$
sum-of-squares semidefinite program cannot refute the existence of a valid
$k$-coloring of $G$ for $k = n^{1/2 +\varepsilon}$. Our result implies that the
refutation guarantee of the basic semidefinite program (a close variant of the
Lov\'asz theta function) cannot be appreciably improved by a natural $o(\log
n)$-degree sum-of-squares strengthening, and this is tight up to an $n^{o(1)}$
slack in $k$. To the best of our knowledge, this is the first lower bound for
coloring $G(n,1/2)$ for even a single round strengthening of the basic SDP in
any SDP hierarchy.
Our proof relies on a new variant of instance-preserving non-pointwise
complete reduction within SoS from coloring a graph to finding large
independent sets in it. Our proof is (perhaps surprisingly) short, simple and
does not require complicated spectral norm bounds on random matrices with
dependent entries that have been otherwise necessary in the proofs of many
similar results [BHK+16, HKP+17, KB19, GJJ+20, MRX20].
Our result formally holds for a constraint system where vertices are allowed
to belong to multiple color classes; we leave the extension to the formally
stronger formulation of coloring, where vertices must belong to unique color
classes, as an outstanding open problem.
|
Quadrotors can achieve aggressive flight by tracking complex maneuvers and
rapidly changing directions. Planning aggressive flight with trajectory
optimization can be extremely fast, even in high dimensions, and can account
for the dynamics of the quadrotor; however, it provides only a locally optimal
solution. On the other hand, planning with discrete graph search can handle
non-convex spaces to guarantee optimality but suffers from exponential
complexity with the dimension of search. We introduce a framework for
aggressive quadrotor trajectory generation with global reasoning capabilities
that combines the best of trajectory optimization and discrete graph search.
Specifically, we develop a novel algorithmic framework that interleaves these
two methods to complement each other and generate trajectories with provable
guarantees on completeness up to discretization. We demonstrate and
quantitatively analyze the performance of our algorithm in challenging
simulation environments with narrow gaps that create severe attitude
constraints and push the dynamic capabilities of the quadrotor. Experiments
show the benefits of the proposed algorithmic framework over standalone
trajectory optimization and graph search-based planning techniques for
aggressive quadrotor flight.
|
In many real-world problems, complex dependencies are present both among
samples and among features. The Kronecker sum or the Cartesian product of two
graphs, each modeling dependencies across features and across samples, has been
used as an inverse covariance matrix for a matrix-variate Gaussian
distribution, as an alternative to a Kronecker-product inverse covariance
matrix, due to its more intuitive sparse structure. However, the existing
methods for sparse Kronecker-sum inverse covariance estimation are limited in
that they do not scale to more than a few hundred features and samples and that
the unidentifiable parameters pose challenges in estimation. In this paper, we
introduce EiGLasso, a highly scalable method for sparse Kronecker-sum inverse
covariance estimation, based on Newton's method combined with
eigendecomposition of the two graphs for exploiting the structure of Kronecker
sum. EiGLasso further reduces computation time by approximating the Hessian
based on the eigendecomposition of the sample and feature graphs. EiGLasso
achieves quadratic convergence with the exact Hessian and linear convergence
with the approximate Hessian. We describe a simple new approach to estimating
the unidentifiable parameters that generalizes the existing methods. On
simulated and real-world data, we demonstrate that EiGLasso achieves two to
three orders-of-magnitude speed-up compared to the existing methods.
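
A minimal sketch (toy sizes, not the EiGLasso implementation) of why eigendecomposition makes the Kronecker-sum structure cheap to exploit: the spectrum of the Kronecker sum is the set of pairwise sums of the two small spectra, so quantities such as the log-determinant avoid any eigenproblem on the large pq x pq matrix.

```python
import numpy as np

# Kronecker sum: Omega = kron(Theta, I_q) + kron(I_p, Psi), with eigenvalues
# lam_i(Theta) + lam_j(Psi). Hence log det Omega is a double sum over the two
# small spectra.
rng = np.random.default_rng(0)
p, q = 4, 3
A = rng.normal(size=(p, p)); Theta = A @ A.T + p * np.eye(p)  # feature graph
B = rng.normal(size=(q, q)); Psi = B @ B.T + q * np.eye(q)    # sample graph

Omega = np.kron(Theta, np.eye(q)) + np.kron(np.eye(p), Psi)

lam_t = np.linalg.eigvalsh(Theta)
lam_p = np.linalg.eigvalsh(Psi)
logdet_fast = np.log(lam_t[:, None] + lam_p[None, :]).sum()

sign, logdet_full = np.linalg.slogdet(Omega)
print(np.allclose(logdet_fast, logdet_full))  # True
```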
|
We report the signatures of dynamic spin fluctuations in the layered
honeycomb Li$_3$Cu$_2$SbO$_6$ compound, with a 3$d$ S = 1/2 $d^9$ Cu$^{2+}$
configuration, through muon spin rotation and relaxation ($\mu$SR) and neutron
scattering studies. Our zero-field (ZF) and longitudinal-field (LF)-$\mu$SR
results demonstrate the slowing down of the Cu$^{2+}$ spin fluctuations below
4.0 K. The saturation of the ZF relaxation rate at low temperature, together
with its weak dependence on the longitudinal field between 0 and 3.2 kG,
indicates the presence of dynamic spin fluctuations persisting even at 80 mK
without static order. Our neutron scattering study reveals gapped magnetic
excitations with three modes at 7.7, 13.5 and 33 meV. Our DFT calculations
reveal that the next-nearest-neighbor (NNN) AFM exchange ($J_{AFM}$ = 31 meV)
is stronger than the nearest-neighbor (NN) FM exchange ($J_{FM}$ = -21 meV),
indicating the
importance of the orbital degrees of freedom. Our results suggest that the
physics of Li$_3$Cu$_2$SbO$_6$ can be explained by an alternating AFM chain
rather than the honeycomb lattice.
|
Conventional planar video streaming is the most popular application in mobile
systems, and the rapid growth of 360-degree video content and virtual reality
(VR) devices is accelerating the adoption of VR video streaming. Unfortunately,
video streaming consumes significant system energy due to the high power
consumption of the system components (e.g., DRAM, display interfaces, and
display panel) involved in this process.
We propose BurstLink, a novel system-level technique that improves the energy
efficiency of planar and VR video streaming. BurstLink is based on two key
ideas. First, BurstLink directly transfers a decoded video frame from the host
system to the display panel, bypassing the host DRAM. To this end, we extend
the display panel with a double remote frame buffer (DRFB), instead of the
DRAM's double frame buffer, so that the system can directly update the DRFB
with a new frame while updating the panel's pixels with the current frame
stored in the DRFB. Second, BurstLink transfers a complete decoded frame to the
display panel in a single burst, using the maximum bandwidth of modern display
interfaces. Unlike conventional systems where the frame transfer rate is
limited by the pixel-update throughput of the display panel, BurstLink can
always take full advantage of the high bandwidth of modern display interfaces
by decoupling the frame transfer from the pixel update as enabled by the DRFB.
This direct and burst frame transfer of BurstLink significantly reduces energy
consumption in video display by reducing access to the host DRAM and increasing
the system's residency at idle power states.
We evaluate BurstLink using an analytical power model that we rigorously
validate on a real modern mobile system. Our evaluation shows that BurstLink
reduces system energy consumption for 4K planar and VR video streaming by 41%
and 33%, respectively.
|
Game-theoretic attribution techniques based on Shapley values are used
extensively to interpret black-box machine learning models, but their exact
calculation is generally NP-hard, requiring approximation methods for
non-trivial models. As the computation of Shapley values can be expressed as a
summation over a set of permutations, a common approach is to sample a subset
of these permutations for approximation. Unfortunately, standard Monte Carlo
sampling methods can exhibit slow convergence, and more sophisticated
quasi-Monte Carlo methods are not well defined on the space of permutations. To
address this, we investigate new approaches based on two classes of
approximation methods and compare them empirically. First, we demonstrate
quadrature techniques in an RKHS containing functions of permutations, using the
Mallows kernel to obtain explicit convergence rates of $O(1/n)$, improving on
$O(1/\sqrt{n})$ for plain Monte Carlo. The RKHS perspective also leads to
quasi-Monte Carlo type error bounds, with a tractable discrepancy measure defined on
permutations. Second, we exploit connections between the hypersphere
$\mathbb{S}^{d-2}$ and permutations to create practical algorithms for
generating permutation samples with good properties. Experiments show the above
techniques provide significant improvements for Shapley value estimates over
existing methods, converging to a smaller RMSE in the same number of model
evaluations.
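
A minimal sketch of the permutation formulation that these sampling methods approximate (an illustrative toy cooperative game stands in for model evaluations; this is not the paper's estimator):

```python
import numpy as np

# Shapley value of player i: the expected marginal contribution of i over a
# uniformly random permutation pi, phi_i = E_pi[v(pred_i(pi) U {i}) - v(pred_i(pi))].
def shapley_monte_carlo(v, n, n_samples, rng):
    """Plain Monte Carlo estimate of Shapley values of game v on players 0..n-1."""
    phi = np.zeros(n)
    for _ in range(n_samples):
        pi = rng.permutation(n)
        coalition = []
        prev = v(frozenset())
        for i in pi:
            coalition.append(i)
            cur = v(frozenset(coalition))
            phi[i] += cur - prev      # marginal contribution of i
            prev = cur
    return phi / n_samples

weights = np.array([1.0, 2.0, 3.0])
v = lambda S: float(sum(weights[list(S)]) ** 2)  # toy coalition value

rng = np.random.default_rng(0)
print(shapley_monte_carlo(v, 3, 5000, rng))  # converges at the O(1/sqrt(n)) MC rate
```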
|
In this paper, an extension of the random field Ginzburg-Landau model on the
hypercubic lattice is considered by adding $p$-spin ($p\geqslant 2$)
interactions coupled to general disorders. This new model is called the random
field mixed-spin Ginzburg-Landau model. We prove that, in the infinite volume
limit of this model, the variance of the spin overlap vanishes.
|
Logarithmic potentials, like many other potentials, satisfy the maximum
principle. A dyadic version of the logarithmic potential is easily introduced;
it lives on the dyadic tree and also satisfies the maximum principle. Its
analog on the bi-tree, however, does not have this property. We prove here
that "on average" something like a maximum principle still holds on the
bi-tree. We then use this surrogate maximum principle to prove embedding
theorems of Carleson type on the bi-disc.
|
Molecular structures of RNA molecules reconstructed from X-ray
crystallography frequently contain errors. Motivated by this problem, we examine
clustering on a torus since RNA shapes can be described by dihedral angles. A
previously developed clustering method for torus data involves two tuning
parameters and we assess clustering results for different parameter values in
relation to the problem of so-called RNA clashes. This clustering problem is
part of the dynamically evolving field of statistics on manifolds. Statistical
problems on the torus highlight general challenges for statistics on manifolds.
Therefore, the torus PCA and clustering methods we propose make an important
contribution to directional statistics and statistics on manifolds in general.
|
To unravel the structures of C12H12O7 isomers, identified as light-absorbing
photooxidation products of syringol in atmospheric chamber experiments, we
apply a graph-based molecule generator and machine learning workflow. To
accomplish this in a bias-free manner, molecular graphs of the entire chemical
subspace of C12H12O7 were generated, assuming that the isomers contain two
C6-rings; this led to 260 million molecular graphs and 120 million stable
structures. Using quantum chemistry excitation energies and oscillator
strengths as training data, we predicted these quantities using kernel ridge
regression and simulated UV/Vis absorption spectra. Then we determined the
probability of the molecules to cause the experimental spectrum within the
errors of the different methods. Molecules whose spectra were likely to match
the experimental spectrum were clustered according to structural features,
resulting in clusters of > 500,000 molecules. While we identified several
features that correlate with a high probability to cause the experimental
spectrum, no clear composition of necessary features can be given. Thus, the
absorption spectrum is not sufficient to uniquely identify one specific isomer
structure. If more structural features were known from experimental data, the
number of structures could be reduced to a few tens of thousands of candidates.
We offer a procedure to detect when sufficient fragmentation data has been
included to reduce the number of possible molecules. Valid candidates are
obtained most efficiently if structural data are applied already at the
bias-free molecule generation stage. The systematic enumeration,
however, is necessary to avoid mis-identification of molecules, as it
guarantees that there are no other molecules that would also fit the spectrum
in question.
|
The semileptonic decays of $\Lambda_{b}\to\Lambda_{c}^{(*)}\ell\nu_\ell$ and
$\Lambda_{c}\to\Lambda^{(*)}\ell\nu_\ell$ are studied in the light-front quark
model in this work. Instead of the quark-diquark approximation, we use the
three-body wave functions obtained by baryon spectroscopy. The ground states
$(1/2^{+})$, the $\lambda$-mode orbital excited states $(1/2^{-})$, and the
first radial excited states $(1/2^{+})$ of $\Lambda_{(c)}^{(*)}$ are
considered. We discuss the form factors, partial widths, branching fractions,
leptonic forward-backward asymmetries, hadron polarizations, lepton
polarizations, and lepton flavor universalities. Our results are useful as
inputs for heavy baryon decays and for understanding baryon structures, and
are helpful for experimental measurements.
|
We study the electronic properties of the heterobilayer of vanadium and iron
oxychlorides, VOCl and FeOCl, two layered air stable van der Waals insulating
oxides with different types of antiferromagnetic order in bulk: VOCl monolayers
are ferromagnetic (FM) whereas the FeOCl monolayers are antiferromagnetic (AF).
We use density functional theory (DFT) calculations with a Hubbard correction,
which is found to be necessary to correctly describe the insulating nature of
these compounds. We compute the magnetic anisotropy and propose a spin model
Hamiltonian. Our calculations show that the interlayer coupling is weak and
ferromagnetic, so that the magnetic order of the monolayers is preserved in the
heterobilayers providing thereby a van der Waals heterostructure that combines
two monolayers with different magnetic order. Interlayer exchange should lead
both to exchange bias and to the emergence of hybrid collective modes that
combine FM and AF magnons. The energy bands of the heterobilayer show a type-II
band alignment and feature spin splitting of the states of the AF layer due to
the breaking of the inversion symmetry.
|
We design multi-horizon forecasting models for limit order book (LOB) data by
using deep learning techniques. Unlike standard structures where a single
prediction is made, we adopt encoder-decoder models with sequence-to-sequence
and Attention mechanisms to generate a forecasting path. Our methods achieve
comparable performance to state-of-the-art algorithms at short prediction horizons.
Importantly, they outperform when generating predictions over long horizons by
leveraging the multi-horizon setup. Given that encoder-decoder models rely on
recurrent neural layers, they generally suffer from slow training processes. To
remedy this, we experiment with utilising novel hardware, so-called Intelligent
Processing Units (IPUs) produced by Graphcore. IPUs are specifically designed
for machine intelligence workloads with the aim of speeding up the computation
process. We show that in our setup this leads to significantly faster training
times when compared to training models with GPUs.
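
A minimal sketch (assumed toy architecture, not the paper's exact model) of an encoder-decoder that emits a multi-horizon forecasting path rather than a single prediction:

```python
import torch

# An encoder summarises the LOB history; a decoder rolls forward one step per
# future horizon, feeding each prediction back in autoregressively.
class Seq2SeqForecaster(torch.nn.Module):
    def __init__(self, n_features, hidden, horizon):
        super().__init__()
        self.encoder = torch.nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = torch.nn.GRUCell(1, hidden)
        self.head = torch.nn.Linear(hidden, 1)
        self.horizon = horizon

    def forward(self, x):                      # x: (batch, time, n_features)
        _, h = self.encoder(x)                 # h: (1, batch, hidden)
        h, y = h[0], torch.zeros(x.size(0), 1)
        path = []
        for _ in range(self.horizon):          # autoregressive decoding
            h = self.decoder(y, h)
            y = self.head(h)
            path.append(y)
        return torch.cat(path, dim=1)          # (batch, horizon)

model = Seq2SeqForecaster(n_features=40, hidden=64, horizon=10)
print(model(torch.randn(8, 100, 40)).shape)    # torch.Size([8, 10])
```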
|
According to mechanistic theories of working memory (WM), information is
retained as persistent spiking activity of cortical neural networks. Yet, how
this activity is related to changes in the oscillatory profile observed during
WM tasks remains an open issue. We explore joint effects of input gamma-band
oscillations and noise on the dynamics of several firing rate models of WM. The
considered models have a metastable active regime, i.e., they demonstrate
long-lasting transient post-stimulus firing rate elevation. We start from a
single excitatory-inhibitory circuit and demonstrate that either gamma-band or
noise input could stabilize the active regime, thus supporting WM retention. We
then consider a system of two circuits with excitatory intercoupling. We find
that fast coupling allows for better stabilization by common noise compared to
independent noise and stronger amplification of this effect by in-phase gamma
inputs compared to anti-phase inputs. Finally, we consider a multi-circuit
system comprised of two clusters, each containing a group of circuits receiving
a common noise input and a group of circuits receiving independent noise. Each
cluster is associated with its own local gamma generator, so all its circuits
receive gamma-band input in the same phase. We find that gamma-band input
differentially stabilizes the activity of the "common-noise" groups compared to
the "independent-noise" groups. If the inter-cluster connections are fast, this
effect is more pronounced when the gamma-band input is delivered to the
clusters in the same phase rather than in the anti-phase. Assuming that the
common noise comes from a large-scale distributed WM representation, our
results demonstrate that local gamma oscillations can stabilize the activity of
the corresponding parts of this representation, with stronger effect for fast
long-range connections and synchronized gamma oscillations.
|
We present and describe the GPFDA package for R. The package provides
flexible functionalities for dealing with Gaussian process regression (GPR)
models for functional data. Multivariate functional data, functional data with
multidimensional inputs, and nonseparable and/or nonstationary covariance
structures can be modeled. In addition, the package fits functional regression
models where the mean function depends on scalar and/or functional covariates
and the covariance structure is modeled by a GPR model. In this paper, we
present the versatility of GPFDA with respect to mean function and covariance
function specifications and illustrate the implementation of estimation and
prediction of some models through reproducible numerical examples.
|
Risk modeling with EHR data is challenging due to a lack of direct
observations on the disease outcome, and the high dimensionality of the
candidate predictors. In this paper, we develop a surrogate-assisted
semi-supervised learning (SAS) approach to risk modeling with high dimensional
predictors, leveraging a large unlabeled dataset on candidate predictors and
surrogates of the outcome, as well as a small labeled dataset with annotated
outcomes.
The SAS procedure borrows information from surrogates along with candidate
predictors to impute the unobserved outcomes via a sparse working imputation
model with moment conditions to achieve robustness against mis-specification in
the imputation model and a one-step bias correction to enable interval
estimation for the predicted risk. We demonstrate that the SAS procedure
provides valid inference for the predicted risk derived from a high dimensional
working model, even when the underlying risk prediction model is dense and the
risk model is mis-specified. We present an extensive simulation study to
demonstrate the superiority of our SAS approach compared to existing supervised
methods. We apply the method to derive genetic risk prediction of type-2
diabetes mellitus using an EHR biobank cohort.
|
Using the Global Magneto-Ionic Medium Survey (GMIMS) Low-Band South (LBS)
southern sky polarization survey, covering 300 to 480 MHz at 81 arcmin
resolution, we reveal the brightest region in the Southern polarized sky at
these frequencies. The region, G150-50, covers nearly 20 deg$^2$, near
(l,b)~(150 deg,-50 deg). Using GMIMS-LBS and complementary data at higher
frequencies (~0.6--30 GHz), we apply Faraday tomography and Stokes QU-fitting
techniques. We find that the magnetic field associated with G150-50 is both
coherent and primarily in the plane of the sky, and we find indications that
the region is associated with Radio Loop II. The Faraday depth spectra across
G150-50 are
broad and contain a large-scale spatial gradient. We model the magnetic field
in the region as an expanding shell, and we can reproduce both the observed
Faraday rotation and the synchrotron emission in the GMIMS-LBS band. Using
QU-fitting, we find that the Faraday spectra are produced by several Faraday
dispersive sources along the line-of-sight. Alternatively, polarization horizon
effects that we cannot model are adding complexity to the high-frequency
polarized spectra. The magnetic field structure of Loop II dominates a large
fraction of the sky, and studies of the large-scale polarized sky will need to
account for this object. Studies of G150-50 with high angular resolution could
mitigate polarization horizon effects, and clarify the nature of G150-50.
|
It has been shown that deep learning models are vulnerable to adversarial
attacks. We seek to further understand the consequence of such attacks on the
intermediate activations of neural networks. We present an evaluation metric,
POP-N, which scores the effectiveness of projecting data to N dimensions in
the context of visualizing representations of adversarially perturbed inputs.
We conduct experiments on CIFAR-10 to compare the POP-2 score of several
dimensionality reduction algorithms across various adversarial attacks.
Finally, we utilize the 2D data corresponding to high POP-2 scores to generate
example visualizations.
|
In this paper, we investigate two types of $U(1)$-gauge field theories on
$G_2$-manifolds. One is the $U(1)$-Yang-Mills theory, which admits classical
instanton solutions; we show that $G_2$-manifolds emerge from the
anti-self-dual $U(1)$ instantons, in analogy with Yang's result for
Calabi-Yau manifolds. The other is the higher-order $U(1)$-Chern-Simons
theory, a generalization of K\"{a}hler-Chern-Simons theory; by a suitable
choice of gauge and a regularization technique, we calculate the partition
function in the semiclassical approximation.
|
In this paper we systematically consider the baryon ($B$) and lepton ($L$)
number violating dinucleon to dilepton decays ($pp\to \ell^+\ell^{\prime+},
pn\to \ell^+\bar\nu^\prime, nn\to \bar\nu\bar\nu^\prime$) with $\Delta B=\Delta
L=-2$ in the framework of effective field theory. We start by constructing a
basis of dimension-12 (dim-12) operators mediating such processes in the low
energy effective field theory (LEFT) below the electroweak scale. Then we
consider their standard model effective field theory (SMEFT) completions
upwards and their chiral realizations in baryon chiral perturbation theory
(B$\chi$PT) downwards. We work to the first nontrivial orders in each effective
field theory, collect along the way the matching conditions, and express the
decay rates in terms of the Wilson coefficients associated with the dim-12
operators in SMEFT and the low energy constants pertinent to B$\chi$PT. We find
the current experimental limits push the associated new physics scale above
$1-3$ TeV, which is still accessible to future collider searches.
Through weak isospin symmetry, we find the current experimental limits on the
partial lifetimes of transitions $pp\to \ell^+\ell^{\prime+}, pn\to
\ell^+\bar\nu^\prime$ imply stronger limits on $nn\to\bar\nu\bar\nu^\prime$
than their existing lower bounds, which are improved by $2-3$ orders of
magnitude. Furthermore, assuming charged mode transitions are also dominantly
generated by similar dim-12 SMEFT interactions, the experimental limits on
$pp\to e^+e^+,e^+\mu^+,\mu^+\mu^+$ lead to stronger limits on $pn\to
\ell^+_\alpha\bar\nu_\beta$ with $\alpha,\beta=e,\mu$ than their existing
bounds. Conversely, the same assumptions help us to set a lower bound on the
lifetime of the experimentally unsearched mode $pp\to e^+\tau^+$ from that of
$pn\to e^+\bar\nu_\tau$, i.e., $\Gamma^{-1}_{pp\to e^+\tau^+}\gtrsim 2\times
10^{34}~\rm yrs$.
|
Graph convolutional network (GCN) based approaches have achieved significant
progress for solving complex, graph-structured problems. GCNs incorporate the
graph structure information and the node (or edge) features through message
passing and compute 'deep' node representations. Despite significant progress
in the field, designing GCN architectures for heterogeneous graphs still
remains an open challenge. Due to the schema of a heterogeneous graph, useful
information may reside multiple hops away. A key question is how to perform
message passing to incorporate information of neighbors multiple hops away
while avoiding the well-known over-smoothing problem in GCNs. To address this
question, we propose our GCN framework 'Deep Heterogeneous Graph Convolutional
Network (DHGCN)', which takes advantage of the schema of a heterogeneous graph
and uses a hierarchical approach to effectively utilize information many hops
away. It first computes representations of the target nodes based on their
'schema-derived ego-network' (SEN). It then links the nodes of the same type
with various pre-defined metapaths and performs message passing along these
links to compute final node representations. Our design choices naturally
capture the way a heterogeneous graph is generated from the schema. The
experimental results on real and synthetic datasets corroborate the design
choice and illustrate the performance gains relative to competing alternatives.
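
As background for the message-passing and over-smoothing discussion, here is a minimal sketch of plain homogeneous GCN propagation (not DHGCN itself), showing how stacked propagation steps mix multi-hop information and eventually over-smooth:

```python
import numpy as np

def gcn_propagate(A, X, n_layers):
    """Repeatedly apply symmetric-normalised propagation X <- D^-1/2 Ahat D^-1/2 X."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d = A_hat.sum(axis=1)
    P = A_hat / np.sqrt(np.outer(d, d))             # D^-1/2 Ahat D^-1/2
    for _ in range(n_layers):
        X = P @ X                                   # one hop of message passing
    return X

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)           # a path graph on 4 nodes
X = np.random.default_rng(0).normal(size=(4, 2))
print(gcn_propagate(A, X, 2))    # each row mixes 2-hop neighbourhood information
print(gcn_propagate(A, X, 50))   # rows nearly identical: over-smoothing
```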
|
We extend Araki's well-known results on the equivalence of the KMS condition
and the variational principle for equilibrium states of quantum lattice systems
with short-range interactions, to a large class of models possibly containing
mean-field interactions (representing an extreme form of long-range
interactions). This result is reminiscent of van Hemmen's work on equilibrium
states for mean-field models. The extension was made possible by our recent
outcomes on states minimizing the free energy density of mean-field models on
the lattice, as well as on the infinite volume dynamics for such models.
|
Verification of AI is a challenge that has engineering, algorithmic and
programming language components. For example, AI planners are deployed to model
actions of autonomous agents. They comprise a number of searching algorithms
that, given a set of specified properties, find a sequence of actions that
satisfy these properties. Although AI planners are mature tools from the
algorithmic and engineering points of view, they have limitations as
programming languages. Decidable and efficient automated search entails
restrictions on the syntax of the language, prohibiting use of higher-order
properties or recursion. This paper proposes a methodology for embedding plans
produced by AI planners into the dependently-typed language Agda, which enables
users to reason about and verify more general and abstract properties of plans,
and also provides a more holistic programming language infrastructure for
modelling plan execution.
|
The prevalence of employing attention mechanisms has brought along concerns
about the interpretability of attention distributions. Although attention
provides insights into how a model is operating, utilizing it as the explanation
of model predictions is still highly dubious. The community is still seeking
more interpretable strategies for better identifying local active regions that
contribute the most to the final decision. To improve the interpretability of
existing attention models, we propose a novel Bilinear Representative
Non-Parametric Attention (BR-NPA) strategy that captures the task-relevant
human-interpretable information. The target model is first distilled to have
higher-resolution intermediate feature maps. From these, representative
features are then grouped based on local pairwise feature similarity to
produce finer-grained, more precise attention maps highlighting task-relevant
parts of the input. The obtained attention maps are ranked according to the
`active level' of the compound feature, which indicates the importance of the
highlighted regions. The proposed model can be easily adapted to a wide
variety of modern deep models where classification is involved. It is also
more accurate and faster than usual neural attention modules, with a smaller
memory footprint. Extensive experiments showcase more
comprehensive visual explanations compared to the state-of-the-art
visualization model across multiple tasks, including few-shot classification,
person re-identification, and fine-grained image classification. The proposed
visualization model sheds light on how neural networks `pay their
attention' differently across tasks.
|
The radio-wavelength detection of extensive air showers (EAS) initiated by
cosmic-ray interactions in the Earth's atmosphere is a promising technique for
investigating the origin of these particles and the physics of their
interactions. The Low Frequency Array (LOFAR) and the Owens Valley Long
Wavelength Array (OVRO-LWA) have both demonstrated that the dense cores of low
frequency radio telescope arrays yield detailed information on the radiation
ground pattern, which can be used to reconstruct key EAS properties and infer
the primary cosmic-ray composition. Here, we demonstrate a new observation mode
of the Murchison Widefield Array (MWA), tailored to the observation of the
sub-microsecond coherent bursts of radiation produced by EAS. We first show how
an aggregate 30.72 MHz bandwidth (3072 x 10 kHz frequency channels) recorded at
0.1 ms resolution with the MWA's voltage capture system (VCS) can be
synthesised back to the full bandwidth Nyquist resolution of 16.3 ns. This
process, which involves `inverting' two sets of polyphase filterbanks, retains
90.5% of the signal-to-noise of a cosmic ray signal. We then demonstrate the
timing and positional accuracy of this mode by resolving the location of a
calibrator pulse to within 5 m. Finally, preliminary observations show that the
rate of nanosecond radio-frequency interference (RFI) events is 0.1 Hz, much
lower than that found at the sites of other radio telescopes that study cosmic
rays. We conclude that the identification of cosmic rays at the MWA, and hence
with the low-frequency component of the Square Kilometre Array, is feasible
with minimal loss of efficiency due to RFI.
|
Ionic charges were related to the bulk modulus through linear regression. When
plotted, these two parameters yield a straight regression line, with points
falling at different positions due to the variation in bulk modulus values.
The calculated values of the bulk moduli, which reflect elastic
characteristics, are in close agreement with other available values, differing
from literature values by only 3% on average. Moreover, the regression
resulted in good values of the correlation coefficient (R = 0.77) and
probability (P = 0.001). All of this demonstrates the accuracy and reliability
of the current work. The technique adopted in this work will be helpful to
materials scientists for finding new materials with desired elastic
characteristics among structurally similar materials, and the calculated data
will serve as a reference for future investigations of the studied compounds.
|
The paradigm of variational quantum classifiers (VQCs) encodes
\textit{classical information} as quantum states, followed by quantum
processing and then measurements to generate classical predictions. VQCs are
promising candidates for efficient utilization of a near-term quantum device:
classifiers involving $M$-dimensional datasets can be implemented with only
$\lceil \log_2 M \rceil$ qubits by using an amplitude encoding. A general
framework for designing and training VQCs, however, has not been proposed, and
their power and analytical relationships with classical classifiers are not
well understood. An encouraging specific
embodiment of VQCs, quantum circuit learning (QCL), utilizes an ansatz: it
expresses the quantum evolution operator as a circuit with a predetermined
topology and parametrized gates; training involves learning the gate parameters
through optimization. In this letter, we first address the open questions about
VQCs and then show that they, including QCL, fit inside the well-known kernel
method. Based on such correspondence, we devise a design framework of efficient
ansatz-independent VQCs, which we call the unitary kernel method (UKM): it
directly optimizes the unitary evolution operator in a VQC. Thus, we show that
the performance of QCL is bounded from above by the UKM. Next, we propose a
variational circuit realization (VCR) for designing efficient quantum circuits
for a given unitary operator. By combining the UKM with the VCR, we establish
an efficient framework for constructing high-performing circuits. We finally
benchmark the relatively superior performance of the UKM and the VCR via
extensive numerical simulations on multiple datasets.
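
As a minimal illustration of the amplitude-encoding claim above (a classical simulation sketch, not the paper's circuits): an $M$-dimensional data point is stored in the amplitudes of $\lceil \log_2 M \rceil$ qubits.

```python
import numpy as np

x = np.random.default_rng(0).normal(size=784)        # e.g. a 28x28 image
n_qubits = int(np.ceil(np.log2(x.size)))             # 10 qubits for M = 784
state = np.zeros(2 ** n_qubits)
state[: x.size] = x / np.linalg.norm(x)              # pad and normalise
print(n_qubits, np.isclose(np.linalg.norm(state), 1.0))  # 10 True
```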
|
LS 5039 is a high-mass gamma-ray binary hosting a compact object of unknown
type. NuSTAR observed LS 5039 during its entire 3.9 day binary period. We
performed a periodic signal search up to 1000 Hz which did not produce credible
period candidates. We do see the 9.05 s period candidate, originally reported
by Yoneda et al. 2020 using the same data, in the Fourier power spectrum, but
we find that the statistical significance of this feature is too low to claim
it as a real detection. We also did not find significant bursts or
quasi-periodic variability. The modulation with the orbital period is clearly
seen and remains unchanged over a decade long timescale when compared to the
earlier Suzaku light curve. The joint analysis of the NuSTAR and Suzaku XIS
data shows that the 0.7-70 keV spectrum can be satisfactorily described by a
single absorbed power-law model with no evidence of a cutoff at higher energies.
The slope of the spectrum anti-correlates with the flux during the binary
orbit. Therefore, if LS 5039 hosts a young neutron star, its X-ray pulsations
appear to be outshined by the intrabinary shock emission. The lack of spectral
lines and/or an exponential cutoff at higher energies suggests that the
putative neutron star is not actively accreting. Although a black hole scenario
still remains a possibility, the lack of variability or Fe K$\alpha$ lines,
which typically accompany accretion, makes it less likely.
|
We extend the recently developed rough path theory for Volterra equations
from (Harang and Tindel, 2019) to the case of more rough noise and/or more
singular Volterra kernels. It was already observed in (Harang and Tindel, 2019)
that the Volterra rough path introduced there did not satisfy any geometric
relation, similar to that observed in classical rough path theory. Thus, an
extension of the theory to more irregular driving signals requires a deeper
understanding of the specific algebraic structure arising in the Volterra rough
path. Inspired by the elements of "non-geometric rough paths" developed in
(Gubinelli, 2010) and (Hairer and Kelly, 2015) we provide a simple description
of the Volterra rough path and the controlled Volterra process in terms of
rooted trees, and with this description we are able to solve rough Volterra
equations driven by more irregular signals.
|
Single phonon excitations are sensitive probes of light dark matter in the
keV-GeV mass window. For anisotropic target materials, the signal depends on
the direction of the incoming dark matter wind and exhibits a daily modulation.
We discuss in detail the various sources of anisotropy, and carry out a
comparative study of 26 crystal targets, focused on sub-MeV dark matter
benchmarks. We compute the modulation reach for the most promising targets,
corresponding to the cross section where the daily modulation can be observed
for a given exposure, which allows us to combine the strength of DM-phonon
couplings and the amplitude of daily modulation. We highlight Al$_2$O$_3$
(sapphire), CaWO$_4$ and h-BN (hexagonal boron nitride) as the best polar
materials for recovering a daily modulation signal, which feature
$\mathcal{O}(1 - 100)\%$ variations of detection rates throughout the day,
depending on the dark matter mass and interaction. The directional nature of
single phonon excitations offers a useful handle to mitigate backgrounds, which
is crucial for fully realizing the discovery potential of near future
experiments.
|
The superspace ring $\Omega_n$ is a rank $n$ polynomial ring tensor a rank
$n$ exterior algebra. Using an extension of the Vandermonde determinant to
$\Omega_n$, the authors previously defined a family of doubly graded quotients
$\mathbb{W}_{n,k}$ of $\Omega_n$ which carry an action of the symmetric group
$\mathfrak{S}_n$ and satisfy a bigraded version of Poincar\'e Duality. In this
paper, we examine the duality modules $\mathbb{W}_{n,k}$ in greater detail. We
describe a monomial basis of $\mathbb{W}_{n,k}$ and give combinatorial formulas
for its bigraded Hilbert and Frobenius series. These formulas involve new
combinatorial objects called {\em ordered superpartitions}. These are ordered
set partitions $(B_1 \mid \cdots \mid B_k)$ of $\{1,\dots,n\}$ in which the
non-minimal elements of any block $B_i$ may be barred or unbarred.
|
This paper studies the problem of finding an anomalous arm in a multi-armed
bandit when (a) each arm is a finite-state Markov process, and (b) the arms are
restless. Here, anomaly means that the transition probability matrix (TPM) of
one of the arms (the odd arm) is different from the common TPM of each of the
non-odd arms. The TPMs are unknown to a decision entity that wishes to find the
index of the odd arm as quickly as possible, subject to an upper bound on the
error probability. We derive a problem instance-specific asymptotic lower bound
on the expected time required to find the odd arm index, where the asymptotics
is as the error probability vanishes. Further, we devise a policy based on the
principle of certainty equivalence, and demonstrate that under a continuous
selection assumption and a certain regularity assumption on the TPMs, the
policy achieves the lower bound arbitrarily closely. Thus, while the lower
bound is shown for all problem instances, the upper bound is shown only for
those problem instances satisfying the continuous selection and the regularity
assumptions. Our achievability analysis is based on resolving the
identifiability problem in the context of a certain lifted countable-state
controlled Markov process.
|
We introduce a novel approach to unsupervised and semi-supervised domain
adaptation for semantic segmentation. Unlike many earlier methods that rely on
adversarial learning for feature alignment, we leverage contrastive learning to
bridge the domain gap by aligning the features of structurally similar label
patches across domains. As a result, the networks are easier to train and
deliver better performance. Our approach consistently outperforms
state-of-the-art unsupervised and semi-supervised methods on two challenging
domain adaptive segmentation tasks, particularly with a small number of target
domain annotations. It can also be naturally extended to weakly-supervised
domain adaptation, where only a minor drop in accuracy can save up to 75% of
annotation cost.
|
The Simultaneous Localization and Mapping (SLAM) problem addresses the
possibility of a robot to localize itself in an unknown environment and
simultaneously build a consistent map of this environment. Recently, cameras
have been successfully used to get the environment's features to perform SLAM,
which is referred to as visual SLAM (VSLAM). However, classical VSLAM
algorithms can be easily induced to fail when either the motion of the robot or
the environment is too challenging. Although new approaches based on Deep
Neural Networks (DNNs) have achieved promising results in VSLAM, they still are
unable to outperform traditional methods. To leverage the robustness of deep
learning to enhance traditional VSLAM systems, we propose to combine the
potential of deep learning-based feature descriptors with the traditional
geometry-based VSLAM, building a new VSLAM system called LIFT-SLAM. Experiments
conducted on KITTI and Euroc datasets show that deep learning can be used to
improve the performance of traditional VSLAM systems, as the proposed approach
was able to achieve results comparable to the state-of-the-art while being
robust to sensor noise. We enhance the proposed VSLAM pipeline with an
adaptive approach that avoids dataset-specific parameter tuning, and we
evaluate how transfer learning affects the quality of the extracted features.
|
The idea that, after their evaporation, Planck-mass black holes might tunnel
into metastable white holes has recently been intensively studied. Those relics
have been considered as a dark matter candidate. We show that the model is
severely constrained and underline some possible detection paths. We also
investigate, in a more general setting, the way the initial black hole mass
spectrum would be distorted by both the bouncing effect and the Hawking
evaporation.
|
Multiview embedding is a way to model strange attractors that takes advantage
of the way measurements are often made in real chaotic systems, using
multidimensional measurements to make up for a lack of long timeseries.
Predictive multiview embedding adapts this approach to the problem of
predicting new values, and provides a natural framework for combining multiple
sources of information such as natural measurements and computer model runs for
potentially improved prediction. Here, using 18-month-ahead prediction of
monthly averages, we show how predictive multiview embedding can be combined
with simple statistical approaches to explore predictability of four climate
variables by a GCM, build prediction bounds, explore the local manifold
structure of the attractor, and show that even though the GCM does not predict
a particular variable well, a hybrid model combining information from the GCM
and empirical data predicts that variable significantly better than the purely
empirical model.
|
Influence maximization is the problem of finding a small subset of nodes in a
network that can maximize the diffusion of information. Recently, it has also
found application in HIV prevention, substance abuse prevention, micro-finance
adoption, etc., where the goal is to identify the set of peer leaders in a
real-world physical social network who can disseminate information to a large
group of people. Unlike online social networks, real-world networks are not
completely known, and collecting information about the network is costly as it
involves surveying multiple people. In this paper, we focus on this problem of
network discovery for influence maximization. The existing work in this
direction proposes a reinforcement learning framework. Because environment
interactions in real-world settings are costly, it is important for
reinforcement learning algorithms to require as few environment interactions
as possible, i.e., to be sample efficient. In this work, we propose CLAIM -
Curriculum LeArning Policy for Influence Maximization to improve the sample
efficiency of RL methods. We conduct experiments on real-world datasets and
show that our approach can outperform the current best approach.
|
Connected and Automated Vehicles (CAVs) obtain real-time information about the
surrounding environment by using local on-board sensors, V2X
(Vehicle-to-Everything) communications, pre-loaded vehicle-specific lookup
tables, and map databases. CAVs are capable of improving energy efficiency by
incorporating this information. In particular, Eco-Cruise and Eco-Lane
Selection on highways and/or motorways have immense potential to save energy,
because there are generally fewer traffic controllers and the vehicles keep
moving in general. In this paper, we present a cooperative and energy-efficient
lane-selection strategy named MultiCruise, where each CAV selects one among
multiple candidate lanes that allows the most energy-efficient travel.
MultiCruise incorporates an Eco-Cruise component to select the most
energy-efficient lane. The Eco-Cruise component calculates the driving
parameters and prospective energy consumption of the ego vehicle for each
candidate lane, and the Eco-Lane Selection component uses these values. As a
result, MultiCruise can account for multiple data sources, such as the road
curvature and the surrounding vehicles' velocities and accelerations. The
eco-autonomous driving strategy, MultiCruise, is designed, tested, and verified
by using a co-simulation test platform that includes autonomous driving
software and realistic road networks to study the performance under realistic
driving conditions. Our experimental evaluations show that our eco-autonomous
MultiCruise saves up to 8.5% fuel consumption.
|
We apply the method of linear perturbations to the case of
Spin(7)-structures, showing that the only nontrivial perturbations are those
determined by a rank one nilpotent matrix.
We consider linear perturbations of the Bryant-Salamon metric on the spin
bundle over $S^4$ that retain invariance under the action of Sp(2), showing
that the metrics obtained in this way are isometric.
|
Given a homogeneous ideal $I \subseteq k[x_0,\dots,x_n]$, the Containment
problem studies the relation between symbolic and regular powers of $I$, that
is, it asks for which pair $m, r \in \mathbb{N}$, $I^{(m)} \subseteq I^r$
holds. In recent years, several conjectures have been posed on this problem,
creating an active area of current interest and ongoing investigation. In
this paper, we investigate the Stable Harbourne Conjecture and the Stable
Harbourne--Huneke Conjecture and show that they hold for the defining
ideal of a Complement of a Steiner configuration of points in
$\mathbb{P}^{n}_{k}$. We can also show that the ideal of a Complement of a
Steiner Configuration of points has expected resurgence, that is, its
resurgence is strictly less than its big height, and it also satisfies
Chudnovsky and Demailly's Conjectures. Moreover, given a hypergraph $H$, we
also study the relation between its colourability and the failure of the
containment problem for the cover ideal associated to $H$. We apply these
results in the case that $H$ is a Steiner System.
|
Electric vehicles can offer a low carbon emission solution to reverse rising
emission trends. However, this requires that the energy used to meet the demand
is green. To meet this requirement, accurate forecasting of the charging demand
is vital. Short- and long-term charging demand forecasting will allow for better
optimisation of the power grid and future infrastructure expansions. In this
paper, we propose to use publicly available data to forecast the electric
vehicle charging demand. To model the complex spatial-temporal correlations
between charging stations, we argue that Temporal Graph Convolution Models are
the most suitable to capture the correlations. The proposed Temporal Graph
Convolutional Networks provide the most accurate forecasts for short and
long-term forecasting compared with other forecasting methods.
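
A minimal sketch (an assumed toy layer, not the paper's exact network) of the temporal graph convolution idea: a graph convolution mixes features of neighbouring charging stations at each time step, and a recurrent unit models the temporal dynamics.

```python
import torch

class TGCNLayer(torch.nn.Module):
    def __init__(self, n_hidden):
        super().__init__()
        self.lin = torch.nn.Linear(1, n_hidden)
        self.gru = torch.nn.GRU(n_hidden, n_hidden, batch_first=True)
        self.head = torch.nn.Linear(n_hidden, 1)

    def forward(self, A_norm, x):          # x: (nodes, time), A_norm: (nodes, nodes)
        h = self.lin(x.unsqueeze(-1))      # per-node features, (nodes, time, hidden)
        h = torch.einsum("ij,jth->ith", A_norm, h)  # spatial convolution per step
        out, _ = self.gru(h)               # temporal modelling per node
        return self.head(out[:, -1])       # one-step-ahead demand forecast

n_nodes = 5
A = torch.rand(n_nodes, n_nodes)
A_norm = A / A.sum(dim=1, keepdim=True)    # simple row-normalised adjacency
model = TGCNLayer(n_hidden=16)
print(model(A_norm, torch.rand(n_nodes, 24)).shape)  # torch.Size([5, 1])
```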
|
We study the effects of addition of Chern-Simons (CS) term in the minimal
Yang Mills (YM) matrix model composed of two $2 \times 2$ matrices with $SU(2)$
gauge and $SO(2)$ global symmetry. We obtain the Hamiltonian of this system in
appropriate coordinates and demonstrate that its dynamics is sensitive to the
values of both the CS coupling, $\kappa$, and the conserved conjugate momentum,
$p_\phi$, associated to the $SO(2)$ symmetry. We examine the behavior of the
emerging chaotic dynamics by computing the Lyapunov exponents and plotting the
Poincar\'{e} sections as these two parameters are varied and, in particular,
find that the largest Lyapunov exponents evaluated within a range of values of
$\kappa$ are above the one computed at $\kappa=0$ for $\kappa p_\phi < 0$. We
also give estimates of the critical exponents for the Lyapunov exponent as the
system transits from the chaotic to the non-chaotic phase as $p_\phi$ approaches
a critical value.
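The computation of a largest Lyapunov exponent can be sketched with the standard two-trajectory (Benettin-type) renormalization scheme; for brevity it is applied below to the Chirikov standard map as a toy chaotic system, not to the matrix-model Hamiltonian:

    import numpy as np

    def step(state, K=2.5):
        # Chirikov standard map as a stand-in chaotic system
        theta, p = state
        p = p + K * np.sin(theta)
        return np.array([(theta + p) % (2 * np.pi), p])

    def largest_lyapunov(x0, n_steps=20000, d0=1e-8):
        x = np.array(x0, dtype=float)
        y = x.copy(); y[0] += d0          # nearby companion trajectory
        total = 0.0
        for _ in range(n_steps):
            x, y = step(x), step(y)
            d = np.linalg.norm(y - x)
            total += np.log(d / d0)
            y = x + (y - x) * (d0 / d)    # renormalize the separation
        return total / n_steps

    print(largest_lyapunov([1.0, 0.5]))   # a positive value signals chaos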
|
Single cell RNA sequencing (scRNA-seq) data makes studying the development of
cells possible at unparalleled resolution. Given that many cellular
differentiation processes are hierarchical, their scRNA-seq data is expected to
be approximately tree-shaped in gene expression space. Inference and
representation of this tree-structure in two dimensions is highly desirable for
biological interpretation and exploratory analysis. Our two contributions are
an approach for identifying a meaningful tree structure from high-dimensional
scRNA-seq data, and a visualization method respecting the tree-structure. We
extract the tree structure by means of a density-based minimum spanning tree on
a vector quantization of the data and show that it captures biological
information well. We then introduce DTAE, a tree-biased autoencoder that
emphasizes the tree structure of the data in low dimensional space. We compare
to other dimension reduction methods and demonstrate the success of our method
experimentally. Our implementation relying on PyTorch and Higra is available at
github.com/hci-unihd/DTAE.
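The tree-extraction step can be sketched as follows: vector-quantize the expression data with k-means, then build a minimum spanning tree over the codebook (the density weighting of the edges and the DTAE network itself are omitted; all identifiers below are illustrative):

    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 50))       # stand-in for scRNA-seq embeddings

    codebook = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X)
    centers = codebook.cluster_centers_

    dist = squareform(pdist(centers))     # pairwise distances between codes
    mst = minimum_spanning_tree(dist)     # sparse (20, 20) tree
    edges = np.transpose(mst.nonzero())
    print(f"tree with {len(edges)} edges over {len(centers)} nodes")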
|
Imbalanced data classification remains a vital problem. The key is to
find methods that classify both the minority and the majority class correctly.
The paper presents the classifier ensemble for classifying binary,
non-stationary and imbalanced data streams where the Hellinger Distance is used
to prune the ensemble. The paper includes an experimental evaluation of the
method based on two experiments. The first checks the impact of
the base classifier type on the quality of the classification. In the second
experiment, the Hellinger Distance Weighted Ensemble (HDWE) method is compared
to selected state-of-the-art methods using a statistical test with two base
classifiers. The method was thoroughly tested on many imbalanced data
streams, and the results obtained demonstrate the HDWE method's usefulness.
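For reference, the Hellinger distance between two discrete distributions $P$ and $Q$ is $H(P,Q)=\frac{1}{\sqrt{2}}\,\lVert\sqrt{P}-\sqrt{Q}\rVert_2$; a minimal implementation of the core quantity behind the pruning criterion (how it is applied inside HDWE is our paraphrase):

    import numpy as np

    def hellinger(p: np.ndarray, q: np.ndarray) -> float:
        p = p / p.sum()   # normalize to probability distributions
        q = q / q.sum()
        return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0)

    print(hellinger(np.array([0.9, 0.1]), np.array([0.6, 0.4])))  # ~0.26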
|
The celebrated Takens' embedding theorem concerns embedding an attractor of a
dynamical system in a Euclidean space of appropriate dimension through a
generic delay-observation map. The embedding also establishes a topological
conjugacy. In this paper, we show how an arbitrary sequence can be mapped into
another space as an attractive solution of a nonautonomous dynamical system.
Such a mapping also entails a topological conjugacy and an embedding between
the sequence and the attractive solution spaces. This result is not a
generalization of Takens' embedding theorem but helps us understand what exactly
is required by discrete-time state space models widely used in applications to
embed an external stimulus onto its solution space. Our results settle another
basic problem concerning the perturbation of an autonomous dynamical system. We
describe what exactly happens to the dynamics when exogenous noise continuously
perturbs a local irreducible attracting set (such as a stable fixed point)
of a discrete-time autonomous dynamical system.
|
Invariant object recognition is one of the most fundamental cognitive tasks
performed by the brain. In the neural state space, different objects with
stimulus variabilities are represented as different manifolds. In this
geometrical perspective, object recognition becomes the problem of linearly
separating different object manifolds. In feedforward visual hierarchy, it has
been suggested that the object manifold representations are reformatted across
the layers, to become more linearly separable. Thus, a complete theory of
perception requires characterizing the ability of linear readout networks to
classify object manifolds from variable neural responses.
A theory of the perceptron of isolated points was pioneered by E. Gardner who
formulated it as a statistical mechanics problem and analyzed it using replica
theory. In this thesis, we generalize Gardner's analysis and establish a theory
of linear classification of manifolds synthesizing statistical and geometric
properties of high dimensional signals. [..] Next, we generalize our theory
further to linear classification of general perceptual manifolds, such as point
clouds. We identify that the capacity of a manifold is determined by its
effective radius, $R_M$, and its effective dimension, $D_M$. Finally, we show
extensions relevant for applications to real data, incorporating correlated
manifolds, heterogeneous manifold geometries, sparse labels and nonlinear
classifications. Then, we demonstrate how object-based manifolds transform in
standard deep networks.
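For reference, Gardner's classical result for $P$ isolated points in $N$ dimensions, which the analysis above generalizes, gives the critical capacity $\alpha_0 = P/N$ at margin $\kappa$ as
\[
\alpha_0(\kappa) = \left[ \int_{-\kappa}^{\infty} \frac{dt}{\sqrt{2\pi}}\, e^{-t^2/2}\, (t+\kappa)^2 \right]^{-1}, \qquad \alpha_0(0) = 2,
\]
and the manifold capacity is expected to reduce to this expression in the limit of vanishing manifold extent ($R_M \to 0$).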
This thesis lays the groundwork for a computational theory of neuronal
processing of objects, providing quantitative measures for linear separability
of object manifolds. We hope this theory will provide new insights into the
computational principles underlying processing of sensory representations in
biological and artificial neural networks.
|
We introduce a perturbation expansion for athermal systems that allows an
exact determination of displacement fields away from the crystalline state as a
response to disorder. We show that the displacement fields in energy minimized
configurations of particles interacting through central potentials with
microscopic disorder can be obtained as a series expansion in the strength of
the disorder. We introduce a hierarchy of force balance equations that allows
an order-by-order determination of the displacement fields, with the solutions
at lower orders providing sources for the higher order solutions. This allows
the simultaneous force balance equations to be solved within a hierarchical
perturbation expansion, to arbitrary accuracy. We present exact results for an
isotropic defect introduced into the crystalline ground state at linear order
and second order in our expansion. We show that the displacement fields
produced by the defect display interesting self-similar properties at every
order. We derive a $|\delta r| \sim 1/r$ and $|\delta f| \sim 1/r^2$ decay for
the displacement fields and excess forces at large distances $r$ away from the
defect. Finally, we derive non-linear corrections introduced by the interactions
between defects at second order in our expansion. We verify our exact results
with displacement fields obtained from energy minimized configurations of soft
disks.
|
We study the problem of simultaneous search for multiple targets over a
multidimensional unit cube and derive the fundamental resolution limit of
non-adaptive querying procedures using the 20 questions estimation framework.
The performance criterion that we consider is the achievable resolution, which
is defined as the maximal $L_\infty$ norm between the location vector and its
estimated version where the maximization is over the possible location vectors
of all targets. The fundamental resolution limit is then defined as the minimal
achievable resolution of any non-adaptive query procedure. We derive
non-asymptotic and second-order asymptotic bounds on the minimal achievable
resolution by relating the current problem to a data transmission problem over
a multiple access channel, using the information spectrum method by Han and
borrowing results from finite blocklength information theory for random access
channel coding. Our results extend the purely first-order asymptotic analyses
of Kaspi \emph{et al.} (ISIT 2015) for the one-dimensional case. Specifically,
we consider more general channels, derive the non-asymptotic and second-order
asymptotic results and establish a phase transition phenomenon.
|
Within vehicles, the Controller Area Network (CAN) allows efficient
communication between the electronic control units (ECUs) responsible for
controlling the various subsystems. The CAN protocol was not designed to
include much support for secure communication. The fact that so many critical
systems can be accessed through an insecure communication network presents a
major security concern. Adding security features to CAN is difficult due to the
limited resources available to the individual ECUs and the costs that would be
associated with adding the necessary hardware to support any additional
security operations without overly degrading the performance of standard
communication. Replacing the protocol is another option, but it is subject to
many of the same problems. The lack of security becomes even more concerning as
vehicles continue to adopt smart features. Smart vehicles have a multitude of
communication interfaces which an attacker could exploit to gain access to the
networks. In this work we propose a security framework that is based on
physically unclonable functions (PUFs) and lightweight cryptography (LWC). The
framework does not require any modification to the standard CAN protocol while
also minimizing the amount of additional message overhead required for its
operation. The improvements in our proposed framework result in a major
reduction in the number of CAN frames that must be sent during operation. For a
system with 20 ECUs, for example, our proposed framework requires only 6.5% of
the number of CAN frames required by the existing approach to successfully
authenticate every ECU.
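A toy sketch of the challenge-response idea follows; the PUF is emulated with a per-device secret and HMAC, standing in for a hardware-derived response, and the frame layout and enrollment flow are illustrative rather than the paper's actual protocol:

    import hmac, hashlib, os

    class SimulatedPUF:
        def __init__(self):
            self._secret = os.urandom(16)   # stands in for silicon variation
        def response(self, challenge: bytes) -> bytes:
            # Truncate to 8 bytes so the response fits one classic CAN data field
            return hmac.new(self._secret, challenge, hashlib.sha256).digest()[:8]

    ecu = SimulatedPUF()
    gateway_crp_db = {}                     # enrolled challenge-response pairs

    challenge = os.urandom(8)
    gateway_crp_db[challenge] = ecu.response(challenge)   # enrollment phase

    # Authentication phase: one challenge frame + one response frame
    assert ecu.response(challenge) == gateway_crp_db[challenge]
    print("ECU authenticated with 2 CAN frames")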
|
Traditional statistics forbids use of test data (a.k.a. holdout data) during
training. Dwork et al. 2015 pointed out that current practices in machine
learning, whereby researchers build upon each other's models, copying
hyperparameters and even computer code, amount to implicitly training on the
test set. Thus the error rate on test data may not reflect the true population
error. This observation initiated {\em adaptive data analysis}, which provides
evaluation mechanisms with guaranteed upper bounds on this difference. With
statistical query (i.e. test accuracy) feedback, the best upper bound is
fairly pessimistic: the deviation can hit a practically vacuous value if the
number of models tested is quadratic in the size of the test set.
In this work, we present a simple new estimate, {\em Rip van Winkle's Razor}.
It relies upon a new notion of \textquotedblleft information
content\textquotedblright\ of a model: the amount of information that would
have to be provided to an expert referee who is intimately familiar with the
field and relevant science/math, and who has just been woken up after
falling asleep at the moment of the creation of the test data (like
\textquotedblleft Rip van Winkle\textquotedblright\ of the famous fairy tale).
This notion of information content is used to provide an estimate of the above
deviation which is shown to be non-vacuous in many modern settings.
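The flavor of such an estimate can be illustrated with a standard Occam-style union bound (our illustration, not the paper's exact statement): if a model's information content is $k$ bits, then Hoeffding's inequality together with a union bound over the at most $2^k$ describable models gives, with probability at least $1-\delta$ over a test set of size $m$,
\[
\left| \widehat{\mathrm{err}}_{\mathrm{test}} - \mathrm{err}_{\mathrm{population}} \right| \le \sqrt{\frac{k \ln 2 + \ln(2/\delta)}{2m}},
\]
which remains non-vacuous whenever $k$ is small compared to $m$.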
|
We bring fresh insight into the ensemble properties of PbS colloidal quantum
dots with a critical review of the literature on semiconductors followed by
systematic comparisons between steady-state photocurrent and photoluminescence
measurements. Our experiments, performed with sufficiently low powers to
neglect nonlinear effects, indicate that the photoluminescence spectra have no
other noticeable contribution besides the radiative recombination of thermalized
photocarriers (i.e. photocarriers in thermodynamic quasi-equilibrium). A
phenomenological model based on the local Kirchhoff law is proposed that makes
it possible to identify the nature of the thermalized photocarriers and to
extract their temperatures from the measurements. Two regimes are observed: for
highly compact assemblies of PbS quantum dots stripped from organic ligands,
the thermalization concerns photocarriers distributed over a wide energy range.
With PbS quantum dots cross-linked with 1,2-ethanedithiol or longer organic
ligand chains, the thermalization concerns solely the fundamental exciton and
can quantitatively explain all the observations, including the precise Stokes
shift between the absorbance and luminescence maxima.
|
NASA's Stardust mission utilized a sample collector composed of aerogel and
aluminum foil to return cometary and interstellar particles to Earth. Analysis
of the aluminum foil begins with locating craters produced by hypervelocity
impacts of cometary and interstellar dust. Interstellar dust craters are
typically less than one micrometer in size and are sparsely distributed, making
them difficult to find. In this paper, we describe a convolutional neural
network based on the VGG16 architecture that achieves high specificity and
sensitivity in locating impact craters in the Stardust interstellar collector
foils. We evaluate its implications for current and future analyses of Stardust
samples.
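A minimal Keras sketch of the architecture family described above (a VGG16 backbone with a binary crater/background head; the input size, head widths, and training details are our guesses, not the paper's exact configuration):

    import tensorflow as tf

    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False                  # fine-tune only the new head first

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # crater vs. background
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.summary()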
|
We prove Veech's conjecture on the equivalence of Sarnak's conjecture on
M\"obius orthogonality with a Kolmogorov type property of Furstenberg systems
of the M\"obius function. This yields a combinatorial condition on the
M\"obius function itself which is equivalent to Sarnak's conjecture. As a
matter of fact, our arguments remain valid in a larger context: we characterize
all bounded arithmetic functions orthogonal to all topological systems whose
all ergodic measures yield systems from a fixed characteristic class (zero
entropy class is an example of such a characteristic class) with the
characterization persisting in the logarithmic setup. As a corollary, we obtain
that the logarithmic Sarnak's conjecture holds if and only if the logarithmic
M\"obius orthogonality is satisfied for all dynamical systems whose ergodic
measures yield nilsystems.
|
A new numerical approach is proposed for the simulation of coupled
three-dimensional and one-dimensional elliptic equations (3D-1D coupling)
arising from dimensionality reduction of 3D-3D problems with thin inclusions.
The method is based on a well-posed mathematical formulation and results in a
numerical scheme with high robustness and flexibility in handling geometrical
complexities. This is achieved by means of a three-field approach to split the
1D problems from the bulk 3D problem, and then resorting to the minimization of
a properly designed functional to impose matching conditions at the interfaces.
Thanks to the structure of the functional, the method allows the use of
independent meshes for the various subdomains.
|
For real-time semantic segmentation, how to increase the speed while
maintaining high resolution is a problem that has been widely discussed and addressed.
Backbone design and fusion design have always been two essential parts of
real-time semantic segmentation. We hope to design a light-weight network based
on previous design experience and reach the level of state-of-the-art real-time
semantic segmentation without any pre-training. To achieve this goal, an
encoder-decoder architecture is proposed that applies a
decoder network to a backbone model designed for real-time segmentation tasks,
with three different ways to fuse semantics and detailed information in
the aggregation phase. We have conducted extensive experiments on two semantic
segmentation benchmarks. Experiments on the Cityscapes and CamVid datasets show
that the proposed FRFNet strikes a balance between computation speed and
accuracy. It achieves 72% mean Intersection over Union (mIoU) on the
Cityscapes test dataset at a speed of 144 FPS on a single RTX 1080Ti card. The
Code is available at https://github.com/favoMJ/FRFNet.
|
Recently discovered ferromagnetism of the layered van der Waals material
VI$_3$ attracts much research attention. Despite substantial progress, no
consensus has been reached on the following important aspects: (i) a possible
deviation of the easy axis from the normal to the VI$_3$ layers, (ii) a
possible inequivalence of the V atoms, (iii) the value of the V magnetic
moments. The theoretical works differ in the conclusions on the conduction
nature of the system, the value, and the role of the V orbital moments. To the
best of our knowledge there are no theoretical works addressing issues (i) and
(ii), and only one work dealing with the reduced value of the V moment. By
combining the symmetry arguments with density functional theory (DFT) and
DFT+$U$ calculations we have shown that the antidimerization distortion of the
crystal structure reported in Phys. Rev. B {\bf 99}, 041402(R) (2019) must lead
to the deviation of the easy axis from the normal to the VI$_3$ layers in close
correlation with the experimental results. The antidimerization accompanied by
the breaking the inversion symmetry leads to the inequivalence of the V atoms.
Our DFT+$U$ calculations result in a large value of $0.8\,\mu_B$ for the orbital
moments of the V atoms, leading to a reduced total V moment in agreement with a
number of experimental results and with the physical picture suggested in Phys.
Rev. B {\bf 101}, 100402(R) (2020). We obtained large intraatomic
noncollinearity of the V spin and orbital moments, revealing strong competition
between effects caused
by the on-site electron correlation, spin-orbit coupling, and interatomic
hybridization since pure intraatomic effects lead to collinear spin and orbital
moments. Our calculations confirm the experimental results of strong
magnetoelastic coupling revealing itself in the strong dependence of the
magnetic properties on the distortion of the atomic structure.
|
We consider the Nemytskii operators $u\to |u|$ and $u\to u^{\pm}$ in a
bounded domain $\Omega$ with $C^2$ boundary. We give elementary proofs of the
boundedness in $H^s(\Omega)$ with $0\le s<3/2$.
|
The Central Sets Theorem was introduced by H. Furstenberg and then afterwards
several mathematicians have provided various versions and extensions of this
theorem. All of these theorems deal with central sets, which originate from the
algebra of the Stone-Cech compactification of an arbitrary semigroup $S$,
denoted $\beta S$. It can be proved that every closed subsemigroup of $\beta S$
is generated by a filter. We will show that, under some restrictions, one can
derive the Central Sets Theorem for any closed subsemigroup of $\beta S$. We
will derive this
theorem using the corresponding filter and its algebra. Later we will also deal
with how the notions of largeness along filters are preserved under some
well-behaved homomorphisms, and give some consequences.
|
In this paper we conjecture combinatorial Rogers-Ramanujan type colored
partition identities related to standard representations of the affine Lie
algebra of type $C^{(1)}_\ell$, $\ell\geq2$, and we conjecture similar colored
partition identities with no obvious connection to representation theory of
affine Lie algebras.
|
Understanding the limits of phononic heat dissipation from a two-dimensional
layered material (2DLM) to its hexagonal boron nitride (h-BN) substrate and how
it varies with the structure of the 2DLM is important for the design and
thermal management of h-BN-supported nanoelectronic devices. We formulate an
elasticity-based theory to model the phonon-mediated heat dissipation between a
2DLM and its h-BN substrate. By treating the h-BN substrate as a semi-infinite
stack of harmonically coupled thin plates, we obtain semi-analytical
expressions for the thermal boundary conductance (TBC) and interfacial phonon
transmission spectrum. We evaluate the temperature-dependent TBC of the
$N$-layer 2DLM (graphene or MoS$_{2}$) on different common substrates (h-BN vs.
a-SiO$_{2}$) at different values of $N$. The results suggest that h-BN is
substantially more effective for heat dissipation from MoS$_{2}$ than
a-SiO$_{2}$, especially at large $N$. To understand the limitations of our
stack model, we also compare its predictions in the $N=\infty$ limit to those
of the more exact Atomistic Green's Function model for the graphite-BN and
molybdenite-BN interfaces. Our stack model provides clear insights into the key
role of the flexural modes in the TBC and how the anisotropic elastic
properties of h-BN affect heat dissipation.
|
Shimizu introduced a region crossing change unknotting operation for knot
diagrams. As extensions, two integral region choice problems were proposed and
the existence of solutions of the problems was shown for all non-trivial knot
diagrams by Ahara and Suzuki, and Harada. We relate both integral region choice
problems with an Alexander numbering for regions of a link diagram, and give
alternative proofs of the existence of solutions for knot diagrams. We also
discuss the problems on link diagrams. For each of the problems on the diagram
of a two-component link, we give a necessary and sufficient condition that
there exists a solution.
|
By means of identical cubic elements, we generate a partition of a volume in
which a particle-based cosmological simulation is carried out. In each cubic
element, we determine the gas particles with a normalized density greater than
an arbitrarily chosen density threshold. By using a proximity parameter, we
calculate the neighboring cubic elements and generate a list of neighbors. By
imposing dynamic conditions on the gas particles, we identify gas clumps and
their neighbors, and we calculate and fit some properties of the groups so
identified, including the mass, size, and velocity dispersion, in terms of their
multiplicity (here defined simply as the number of member galaxies). Finally,
we report the value of the ratio of kinetic energy to gravitational energy of
such dense gas clumps, which will be useful as initial conditions in
simulations of gravitational collapse of gas clouds and clusters of gas clouds.
|
Boundary representation (B-rep) models are the standard way 3D shapes are
described in Computer-Aided Design (CAD) applications. They combine lightweight
parametric curves and surfaces with topological information which connects the
geometric entities to describe manifolds. In this paper we introduce BRepNet, a
neural network architecture designed to operate directly on B-rep data
structures, avoiding the need to approximate the model as meshes or point
clouds. BRepNet defines convolutional kernels with respect to oriented coedges
in the data structure. In the neighborhood of each coedge, a small collection
of faces, edges and coedges can be identified and patterns in the feature
vectors from these entities detected by specific learnable parameters. In
addition, to encourage further deep learning research with B-reps, we publish
the Fusion 360 Gallery segmentation dataset, a collection of over 35,000 B-rep
models annotated with information about the modeling operations which created
each face. We demonstrate that BRepNet can segment these models with higher
accuracy than methods working on meshes and point clouds.
|
Low-mass compact galaxies (ultracompact dwarfs [UCDs] and compact ellipticals
[cEs]) populate the stellar size-mass plane between globular clusters and
early-type galaxies. Since they are known to form either in-situ, with an
intrinsically low mass, or through the stripping of a more massive galaxy, the
presence of a supermassive or an intermediate-mass black hole (BH) could help
discriminate between these possible scenarios. With this aim, we have performed
a multiwavelength search of active BH activity, i.e. active galactic nuclei
(AGN), in a sample of 937 low-mass compact galaxies (580 UCDs and 357 cEs).
This constitutes the largest study of AGN activity in these types of galaxies.
Based on their X-ray luminosity, radio luminosity and morphology, and/or
optical emission line diagnostic diagrams, we find a total of 11 cEs that host
an AGN. We also study for the first time the location of both low-mass compact
galaxies (UCDs and cEs) and dwarf galaxies hosting AGN on the BH-galaxy scaling
relations, finding that low-mass compact galaxies tend to be overmassive in the
BH mass-stellar mass plane but not as much in the BH mass-stellar velocity
dispersion correlation. This, together with available BH mass measurements for
some of the low-mass compact galaxies, supports a stripping origin for the
majority of these objects that would contribute to the scatter seen at the
low-mass end of the BH-galaxy scaling relations. However, the differences are
too large to be explained solely by this scatter, and thus our results suggest
that a flattening at such low masses is also plausible, happening at a velocity
dispersion of ~20-40 km/s.
|
The first and second-order supersymmetry transformations can be used to
manipulate one or two energy levels of the initial spectrum when generating new
exactly solvable Hamiltonians from a given initial potential. In this paper, we
will construct the first and second-order supersymmetric partners of the
trigonometric Rosen-Morse potential. First, we identify a set of
solutions of the initial stationary Schr\"odinger equation which are
appropriate for implementing, in a simple way, non-singular transformations
without inducing new singularities in the resulting potential. Then, the way the
spectral manipulation works is illustrated through several specific examples.
|
We define a subclass of quasiregular curves, called signed quasiregular
curves, which contains holomorphic curves and quasiregular mappings. As our
main result, we prove a growth theorem of Bonk-Heinonen type for signed
quasiregular curves. To obtain our main result, we prove that signed
quasiregular curves satisfy a weak reverse H\"older inequality and that this
weak reverse H\"older inequality implies the main result. We also obtain higher
integrability for signed quasiregular curves. Further, we prove a cohomological
value distribution result for signed quasiregular curves by using our main
result and equidistribution.
|
Visual object localization is the key step in a series of object detection
tasks. In the literature, high localization accuracy is achieved with the
mainstream strongly supervised frameworks. However, such methods require
object-level annotations and are unable to detect objects of unknown
categories. Weakly supervised methods face similar difficulties. In this paper,
a self-paced learning framework is proposed to achieve accurate object
localization on the rank list returned by instance search. The proposed
framework mines the target instance gradually from the queries and their
corresponding top-ranked search results. Since a common instance is shared
between the query and the images in the rank list, the target visual instance
can be accurately localized even without knowing what the object category is.
In addition to performing localization on instance search, the issue of
few-shot object detection is also addressed under the same framework. Superior
performance over state-of-the-art methods is observed on both tasks.
|
We investigate theoretically and numerically the light-matter interaction in
a two-level system (TLS) as a model system for excitation in a solid-state band
structure. We identify five clearly distinct excitation regimes, categorized
with well-known adiabaticity parameters: (1) the perturbative multiphoton
absorption regime for small driving field strengths, and four light
field-driven regimes, where intraband motion connects different TLS: (2) the
impulsive Landau-Zener (LZ) regime, (3) the non-impulsive LZ regime, (4) the
adiabatic regime and (5) the adiabatic-impulsive regime for large electric
field strengths. This categorization is tremendously helpful to understand the
highly complex excitation dynamics in any TLS, in particular when the driving
field strength varies, and naturally connects Rabi physics with Landau-Zener
physics. In addition, we find an insightful analytical expression for the
photon orders connecting the perturbative multiphoton regime with the light
field-driven regimes. Moreover, in the adiabatic-impulsive regime, adiabatic
motion and impulsive LZ transitions are equally important, leading to an
inversion symmetry breaking of the TLS when applying few-cycle laser pulses.
This categorization allows a deep understanding of driven TLS in a large
variety of settings ranging from cold atoms and molecules to solids and qubits,
and will help to find optimal driving parameters for a given purpose.
|
We construct examples of centrally harmonic spaces by generalizing work of
Copson and Ruse. We show that these examples are generically not centrally
harmonic at other points. We use this construction to exhibit manifolds which
are not conformally flat but such that their density function agrees with
Euclidean space.
|
Understanding how sea quarks behave inside a nucleon is one of the most
important physics goals of the proposed Electron-Ion Collider in China (EicC),
which is designed to have a 3.5 GeV polarized electron beam (80% polarization)
colliding with a 20 GeV polarized proton beam (70% polarization) at an
instantaneous luminosity of $2 \times 10^{33} {\rm cm}^{-2} {\rm s}^{-1}$. A
specific topic
at EicC is to understand the polarization of individual quarks inside a
longitudinally polarized nucleon. The potential of various future EicC data,
including the inclusive and semi-inclusive deep inelastic scattering data from
both doubly polarized electron-proton and electron-$^3{\rm He}$ collisions, to
reduce the uncertainties of parton helicity distributions is explored at the
next-to-leading order in QCD, using the Error PDF Updating Method Package ({\sc
ePump}) which is based on the Hessian profiling method. We show that the
semi-inclusive data are well able to provide good separation between flavour
distributions, and to constrain their uncertainties in the $x>0.005$ region,
especially when electron-$^3{\rm He}$ collisions, acting as effective
electron-neutron collisions, are taken into account. To enable this study, we
have generated a Hessian representation of the DSSV14 set of PDF replicas,
named DSSV14H PDFs.
|
The $K$-hull of a compact set $A\subset\mathbb{R}^d$, where $K\subset
\mathbb{R}^d$ is a fixed compact convex body, is the intersection of all
translates of $K$ that contain $A$. A set is called $K$-strongly convex if it
coincides with its $K$-hull. We propose a general approach to the analysis of
facial structure of $K$-strongly convex sets, similar to the well-developed
theory for polytopes, by introducing the notion of $k$-dimensional faces, for
all $k=0,\dots,d-1$. We then apply our theory in the case when $A=\Xi_n$ is a
sample of $n$ points picked uniformly at random from $K$. We show that in this
case the set of $x\in\mathbb{R}^d$ such that $x+K$ contains the sample $\Xi_n$,
upon multiplying by $n$, converges in distribution to the zero cell of a
certain Poisson hyperplane tessellation. From this result we deduce
convergence in distribution of the corresponding $f$-vector of the $K$-hull of
$\Xi_n$ to a certain limiting random vector, without any normalisation, and
also the convergence of all moments of the $f$-vector.
|
The Auger Surface Detector consists of a large array of water Cherenkov
detector tanks, each with a volume of 12,000 liters, for the detection of high
energy cosmic rays. The accuracy in the measurement of the integrated signal
amplitude of the detector unit has been studied using experimental air shower
data. It can be described as a Poisson-like term with a normalization constant
that depends on the zenith angle of the primary cosmic ray. This dependence
reflects the increasing contribution to the signal of the muonic component of
the shower, both due to the increasing muon/electromagnetic ($e^{\pm}$ and $\gamma$)
ratio and muon track length with zenith angle.
|
Muon beams of low emittance provide the basis for the intense,
well-characterised neutrino beams of a neutrino factory and for multi-TeV
lepton-antilepton collisions at a muon collider. The international Muon
Ionization Cooling Experiment (MICE) has demonstrated the principle of
ionization cooling, the technique by which it is proposed to reduce the
phase-space volume occupied by the muon beam at such facilities. This paper
documents the performance of the detectors used in MICE to measure the
muon-beam parameters, and the physical properties of the liquid hydrogen energy
absorber during running.
|
Is it possible to use natural language to intervene in a model's behavior and
alter its prediction in a desired way? We investigate the effectiveness of
natural language interventions for reading-comprehension systems, studying this
in the context of social stereotypes. Specifically, we propose a new language
understanding task, Linguistic Ethical Interventions (LEI), where the goal is
to amend a question-answering (QA) model's unethical behavior by communicating
context-specific principles of ethics and equity to it. To this end, we build
upon recent methods for quantifying a system's social stereotypes, augmenting
them with different kinds of ethical interventions and the desired model
behavior under such interventions. Our zero-shot evaluation finds that even
today's powerful neural language models are extremely poor ethical-advice
takers, that is, they respond surprisingly little to ethical interventions even
though these interventions are stated as simple sentences. Few-shot learning
improves model behavior but remains far from the desired outcome, especially
when evaluated for various types of generalization. Our new task thus poses a
novel language understanding challenge for the community.
|
We study a class of general U$(1)^\prime$ models to explain the observed dark
matter relic abundance and light neutrino masses. The model contains three
right handed neutrinos and three gauge singlet Majorana fermions to generate
the light neutrino mass via the inverse seesaw mechanism. We assign one pair of
degenerate sterile neutrinos to be the dark matter candidate whose relic
density is generated by the freeze-in mechanism. We consider different regimes
of the masses of the dark matter particle and the ${Z^\prime}$ gauge boson. The
production of the dark matter can occur at different reheating temperatures in
various scenarios depending on the masses of the ${Z^\prime}$ boson and the
dark matter candidate. We also note that if the mass of the sterile neutrino
dark matter is $\gtrsim 1~{\rm MeV}$ and if the $Z^\prime$ is heavier than the
dark matter, the decay of the dark matter candidate into positrons can explain
the long-standing puzzle of the galactic $511~{\rm keV}$ line in the Milky Way
center observed by the INTEGRAL satellite. We constrain the model parameters
from the dark matter analysis, vacuum stability and the collider searches of
heavy ${Z^\prime}$ at the LHC. For the case with light $Z^\prime$, we also
compare how far the parameter space allowed from dark matter relic density can
be probed by the future lifetime frontier experiments SHiP and FASERs in the
special case of $U(1)_{B-L}$ model.
|
We prove that for any non-symmetric irreducible divisible convex set, the
proximal limit set is the full projective boundary.
|
For graphical user interface (UI) design, it is important to understand what
attracts visual attention. While previous work on saliency has focused on
desktop and web-based UIs, mobile app UIs differ from these in several
respects. We present findings from a controlled study with 30 participants and
193 mobile UIs. The results speak to a role of expectations in guiding where
users look. A strong bias toward the top-left corner of the display, text, and
images was evident, while bottom-up features such as color or size affected
saliency less. Classic, parameter-free saliency models showed a weak fit with
the data, and data-driven models improved significantly when trained
specifically on this dataset (e.g., NSS rose from 0.66 to 0.84). We also
release the first annotated dataset for investigating visual saliency in mobile
UIs.
|
The nonlocal Darboux transformation for the stationary axially symmetric
Schr\"odinger and Helmholtz equations is considered. Formulae for the nonlocal
Darboux transformation are obtained and its relation to the generalized Moutard
transformation is established. New examples of two-dimensional potentials and
exact solutions for the stationary axially symmetric Schr\"odinger and
Helmholtz equations are obtained as an application of the nonlocal Darboux
transformation.
|
A high-order quadrature algorithm is presented for computing integrals over
curved surfaces and volumes whose geometry is implicitly defined by the level
sets of (one or more) multivariate polynomials. The algorithm recasts the
implicitly defined geometry as the graph of an implicitly defined, multi-valued
height function, and applies a dimension reduction approach needing only
one-dimensional quadrature. In particular, we explore the use of Gauss-Legendre
and tanh-sinh methods and demonstrate that the quadrature algorithm inherits
their high-order convergence rates. Under the action of $h$-refinement with $q$
fixed, the quadrature schemes yield an order of accuracy of $2q$, where $q$ is
the one-dimensional node count; numerical experiments demonstrate up to 22nd
order. Under the action of $q$-refinement with the geometry fixed, the
convergence is approximately exponential, i.e., doubling $q$ approximately
doubles the number of accurate digits of the computed integral. Complex
geometry is automatically handled by the algorithm, including, e.g.,
multi-component domains, tunnels, and junctions arising from multiple
polynomial level sets, as well as self-intersections, cusps, and other kinds of
singularities. A variety of numerical experiments demonstrates the quadrature
algorithm on two- and three-dimensional problems, including: randomly generated
geometry involving multiple high-curvature pieces; challenging examples
involving high degree singularities such as cusps; adaptation to simplex
constraint cells in addition to hyperrectangular constraint cells; and boolean
operations to compute integrals on overlapping domains.
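The one-dimensional building block underlying the scheme is ordinary Gauss-Legendre quadrature, whose $q$-node rule is exact for polynomials up to degree $2q-1$; a quick check of the rapid convergence on a smooth integrand:

    import numpy as np

    def gauss_legendre(f, a, b, q):
        nodes, weights = np.polynomial.legendre.leggauss(q)   # rule on [-1, 1]
        x = 0.5 * (b - a) * nodes + 0.5 * (a + b)             # map to [a, b]
        return 0.5 * (b - a) * np.dot(weights, f(x))

    exact = np.sin(1.0) - np.sin(0.0)
    for q in (2, 4, 8):
        err = abs(gauss_legendre(np.cos, 0.0, 1.0, q) - exact)
        print(f"q = {q}: error = {err:.2e}")   # error drops rapidly with q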
|
Automated three-dimensional (3D) object reconstruction is the task of
building a geometric representation of a physical object by means of sensing
its surface. Even though new single view reconstruction techniques can predict
the surface, they lead to incomplete models, especially for uncommon objects
such as antiques or art sculptures. Therefore, to achieve the task's
goals, it is essential to automatically determine the locations where the
sensor will be placed so that the surface will be completely observed. This
problem is known as the next-best-view problem. In this paper, we propose a
data-driven approach to address the problem. The proposed approach trains a 3D
convolutional neural network (3D CNN) with previous reconstructions in order to
regress the position of the next-best-view. To the best of our
knowledge, this is one of the first works that directly infers the
next-best-view in a continuous space using a data-driven approach for the 3D
object reconstruction task. We have validated the proposed approach making use
of two groups of experiments. In the first group, several variants of the
proposed architecture are analyzed. Predicted next-best-views were observed to
be closely positioned to the ground truth. In the second group of experiments,
the proposed approach is requested to reconstruct several unseen objects,
namely, objects not considered by the 3D CNN during training nor validation.
Coverage percentages of up to 90% were observed. With respect to current
state-of-the-art methods, the proposed approach improves the performance of
previous next-best-view classification approaches and it is quite fast in
running time (3 frames per second), given that it does not compute the
expensive ray tracing required by previous information metrics.
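A skeleton of the regression setup described above (our sketch: a small 3D CNN mapping a voxelized partial reconstruction to a continuous sensor position; the paper's exact architecture, grid size, and output parametrization may differ):

    import torch
    import torch.nn as nn

    class NBVNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.regressor = nn.Linear(64, 3)  # continuous (x, y, z) sensor pose

        def forward(self, voxels: torch.Tensor) -> torch.Tensor:
            h = self.features(voxels).flatten(1)
            return self.regressor(h)

    grid = torch.rand(1, 1, 32, 32, 32)    # occupancy grid of a partial model
    print(NBVNet()(grid).shape)            # torch.Size([1, 3])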
|
Here we present detailed analysis of the distinct X-ray emission features
present within the Eastern radio lobe of the Pictor A galaxy, around the jet
termination region, utilising the data obtained from the Chandra X-ray
Observatory. Various emission features have been selected for the study based
on their enhanced X-ray surface brightness, including five sources that appear
point-like, as well as three extended regions, one characterised by a
filamentary morphology. For those, we perform a basic spectral analysis within
the 0.5-7 keV range. We also investigate various correlations between the X-ray
emission features and the non-thermal radio emission, utilising the
high-resolution radio maps from the Very Large Array at GHz frequencies. The
main novel findings from our analysis regard the newly recognized
bright X-ray filament located upstream of the jet termination region, extending
for at least thirty kiloparsecs (projected) and inclined with respect to the
jet axis. For this feature, we observe a clear anti-correlation between the
X-ray surface brightness and the polarized radio intensity, as well as a
decrease in the radio rotation measure with respect to the surroundings. We
speculate on the nature of the filament, in particular addressing a possibility
that it is related to the presence of a hot X-ray emitting thermal gas, only
partly mixed with the non-thermal radio/X-ray emitting electrons within the
lobe, combined with the reversals in the lobe's net magnetic field.
|
We prove bounds in the local $L^2$ range for exotic paraproducts motivated
by bilinear multipliers associated with convex sets. One result assumes an
exponential boundary curve. Another one assumes a higher order lacunarity
condition.
|
Pressure calibration for most diamond-anvil cell (DAC) experiments is mainly
based on the ruby scale, which is key to implement this powerful tool for
high-pressure study. However, the ruby scale can often hardly be used for
programmably-controlled DAC devices, especially the piezoelectric-driving
cells, where a continuous pressure calibration is required. In this work, we
present an effective pressure gauge for DACs made of manganin metal, based on
the four-probe resistivity measurements. Pressure dependence of its resistivity
is well established and shows excellent linear relations in the 0-30 GPa
pressure range with a slope of 23.4 (9) GPa for the first-cycle compression, in
contrast to that of multiple-cycle compression and decompression having a
nearly identical slope of 33.7 (4) GPa likely due to the strain effect. In
addition, the manganin scale established in this way can be used for continuously
monitoring the cell pressure of piezoelectric-driving DACs, and the reliability
of this method is also verified by the fixed-point method with a Bi pressure
standard. Realization of continuous pressure calibration for
programmably-controlled DACs would offer many opportunities for study of
dynamics, kinetics, and critical behaviors of pressure-induced phase
transitions.
|
We give a protocol for Asynchronous Distributed Key Generation (A-DKG) that
is optimally resilient (can withstand $f<\frac{n}{3}$ faulty parties), has a
constant expected number of rounds, has $\tilde{O}(n^3)$ expected communication
complexity, and assumes only the existence of a PKI. Prior to our work, the
best A-DKG protocols required $\Omega(n)$ expected number of rounds, and
$\Omega(n^4)$ expected communication.
Our A-DKG protocol relies on several building blocks that are of independent
interest. We define and design a Proposal Election (PE) protocol that allows
parties to retrospectively agree on a valid proposal after enough proposals
have been sent from different parties. With constant probability the elected
proposal was proposed by a non-faulty party. In building our PE protocol, we
design a Verifiable Gather protocol which allows parties to communicate which
proposals they have and have not seen in a verifiable manner. The final
building block to our A-DKG is a Validated Asynchronous Byzantine Agreement
(VABA) protocol. We use our PE protocol to construct a VABA protocol that does
not require leaders or an asynchronous DKG setup. Our VABA protocol can be used
more generally when it is not possible to use threshold signatures.
|
This paper develops an averaging technique based on the combination of the
eigenfunction expansion method and the collocation method to investigate the
multiple scattering effect of the SH wave propagation in a porous medium. The
semi-analytical averaging technique is conducted using the Monte Carlo method to
understand the macroscopic dispersion and attenuation phenomena of the stress
wave propagation in a porous solid caused by the multiple scattering effects.
The averaging technique is verified by finite element analysis. Finally, a
simple homogenized elastic model with damping is proposed to describe the
macroscopic dispersion and attenuation effects of SH waves in porous media.
|
We analyze the spectral stability of small-amplitude, periodic,
traveling-wave solutions of a Boussinesq-Whitham system. These solutions are
shown numerically to exhibit high-frequency instabilities when subject to
bounded perturbations on the real line. We use a formal perturbation method to
estimate the asymptotic behavior of these instabilities in the small-amplitude
regime. We compare these asymptotic results with direct numerical computations.
This is the second paper in a series of three that investigates high-frequency
instabilities of Stokes waves.
|
Crowdworker-constructed natural language inference (NLI) datasets have been
found to contain statistical artifacts associated with the annotation process
that allow hypothesis-only classifiers to achieve better-than-random
performance (Poliak et al., 2018; Gururangan et al., 2018; Tsuchiya, 2018).
We investigate whether MedNLI, a physician-annotated dataset with premises
extracted from clinical notes, contains such artifacts (Romanov and Shivade,
2018). We find that entailed hypotheses contain generic versions of specific
concepts in the premise, as well as modifiers related to responsiveness,
duration, and probability. Neutral hypotheses feature conditions and behaviors
that co-occur with, or cause, the condition(s) in the premise. Contradiction
hypotheses feature explicit negation of the premise and implicit negation via
assertion of good health. Adversarial filtering demonstrates that performance
degrades when evaluated on the difficult subset. We provide partition
information and recommendations for alternative dataset construction strategies
for knowledge-intensive domains.
|
Cyber-defense systems are being developed to automatically ingest Cyber
Threat Intelligence (CTI) that contains semi-structured data and/or text to
populate knowledge graphs. A potential risk is that fake CTI can be generated
and spread through Open-Source Intelligence (OSINT) communities or on the Web
to effect a data poisoning attack on these systems. Adversaries can use fake
CTI examples as training input to subvert cyber defense systems, forcing the
model to learn incorrect inputs to serve their malicious needs.
In this paper, we automatically generate fake CTI text descriptions using
transformers. We show that, given an initial prompt sentence, a public language
model like GPT-2 with fine-tuning can generate plausible CTI text with the
ability of corrupting cyber-defense systems. We utilize the generated fake CTI
text to perform a data poisoning attack on a Cybersecurity Knowledge Graph
(CKG) and a cybersecurity corpus. The poisoning attack introduced adverse
impacts such as returning incorrect reasoning outputs, representation
poisoning, and corruption of other dependent AI-based cyber defense systems. We
evaluate with traditional approaches and conduct a human evaluation study with
cybersecurity professionals and threat hunters. Based on the study,
professional threat hunters were equally likely to consider our generated fake
CTI as true.
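The generation setup can be sketched with the Hugging Face transformers API (shown with the public gpt2 checkpoint and an illustrative prompt; the fine-tuning on a CTI corpus described above is omitted):

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "APT29 has been observed exploiting"   # illustrative prompt only
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_length=60, do_sample=True,
                             top_k=50, top_p=0.95,
                             pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))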
|
Recently, evidence has emerged for a field-induced even- to odd-parity
superconducting phase transition in CeRh$_2$As$_2$ [S. Khim et al., Science 373
1012 (2021)]. Here we argue that the P4/nmm non-symmorphic crystal structure of
CeRh$_2$As$_2$ plays a key role in enabling this transition by ensuring large
spin-orbit interactions near the Brillouin zone boundaries, which naturally
leads to the required near-degeneracy of the even- and odd-parity channels. We
further comment on the relevance of our theory to FeSe, which crystallizes in
the same structure.
|
This article discusses self-organization in cold atoms via light-mediated
interactions induced by feedback from a single retro-reflecting mirror.
Diffractive dephasing between the pump beam and the spontaneous sidebands
selects the lattice period. Spontaneous breaking of the rotational and
translational symmetry occurs in the 2D plane transverse to the pump. We
elucidate how diffractive ripples couple sites on the self-induced atomic
lattice. The nonlinear phase shift of the atomic cloud imprinted onto the
optical beam is the parameter determining coupling strength. The interaction
can be tailored to operate either on external degrees of freedom leading to
atomic crystallization for thermal atoms and supersolids for a quantum
degenerate gas, or on internal degrees of freedom like populations of the
excited state or Zeeman sublevels. Using the light polarization degrees of
freedom on the Poincar{\'e} sphere (helicity and polarization direction),
specific irreducible tensor components of the atomic Zeeman states can be
coupled leading to spontaneous magnetic ordering of states of dipolar and
quadrupolar nature. The requirements for critical interaction strength are
compared for the different situations. Connections and extensions to
longitudinally pumped cavities, counterpropagating beam schemes and the CARL
instability are discussed.
|
We investigate transient nonlinear localization, namely the self-excitation
of energy bursts in an atomic lattice at finite temperature. As a basic model
we consider the diatomic Lennard-Jones chain. Numerical simulations suggest
that the effect originates from two different mechanisms. One is the thermal
excitation of genuine discrete breathers with frequency in the phonon gap. The
second is an effect of nonlinear coupling of fast, lighter particles with slow
vibrations of the heavier ones. The quadratic term of the force generates an
effective potential that can lead to transient growth of local energy on time
scales that can be relatively long for small mass ratios. This heuristic is
supported by a multiple-scale approximation based on the natural time-scale
separation. For illustration, we consider a simplified single-particle model
that allows for some insight into the localization dynamics.
|