Recent results have demonstrated that samplers constructed with flow-based
generative models are a promising new approach for configuration generation in
lattice field theory. In this paper, we present a set of methods to construct
flow models for targets with multiple separated modes (i.e. theories with
multiple vacua). We demonstrate the application of these methods to modeling
two-dimensional real scalar field theory in its symmetry-broken phase. In this
context we investigate the performance of different flow-based sampling
algorithms, including a composite sampling algorithm where flow-based proposals
are occasionally augmented by applying updates using traditional algorithms
like HMC.
|
We study the propagation of light under a strong electric field in
Born-Infeld electrodynamics. The nonlinear effect can be described by the
effective indices of refraction. Because the effective indices of refraction
depend on the background electric field, the path of light can be bent when the
background field is non-uniform. We compute the bending angle of light by a
Born-Infeld-type Coulomb charge in the weak lensing limit using the trajectory
equation based on geometric optics. We also compute the deflection angle of
light by the Einstein-Born-Infeld black hole using the geodesic equation and
confirm that the two computations of the electric charge's contribution to the
total bending angle agree.
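As a point of reference (a standard geometric-optics relation, not the paper's specific derivation), a ray crossing a medium with a slowly varying effective index of refraction $n$ is deflected by
$$\alpha \simeq \int \nabla_{\perp} \ln n \,\mathrm{d}\ell,$$
where $\nabla_{\perp}$ is the gradient transverse to the ray; in Born-Infeld theory $n$ depends on the background electric field, which is what bends the light near a charge.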
|
In this paper, we investigate certain graded-commutative rings which are
related to the reciprocal plane compactification of the coordinate ring of a
complement of a hyperplane arrangement. We give a presentation of these rings
by generators and defining relations. This presentation was used by Holler and
I. Kriz to calculate the $\mathbb{Z}$-graded coefficients of localizations of
ordinary $RO((\mathbb{Z}/p)^n)$-graded equivariant cohomology at a given set of
representation spheres, and also more recently by the author in a
generalization to the case of an arbitrary finite group. We also give an
interpretation of these rings in terms of superschemes, which can be used to
further illuminate their structure.
|
We present a framework for quantum process tomography of two-ion interactions
that leverages modulations of the trapping potential and composite pulses from
a global laser beam to achieve individual-ion addressing. Tomographic analysis
of identity and delay processes reveals dominant error contributions from laser
decoherence and slow qubit frequency drift during the tomography experiment. We
use this framework on two co-trapped $^{40}$Ca$^+$ ions to analyze both an
optimized and an overpowered M{\o}lmer-S{\o}rensen gate and to compare the
results of this analysis to a less informative Bell-state tomography
measurement and to predictions based on a simplified noise model. These results
show that the technique is effective for the characterization of two-ion
quantum processes and for the extraction of meaningful information about the
errors present in the system. The experimental convenience of this method will
allow for more widespread use of process tomography for characterizing
entangling gates in trapped-ion systems.
|
Strain localization is responsible for mesh dependence in numerical analyses
concerning a vast variety of fields such as solid mechanics, dynamics,
biomechanics and geomechanics. Therefore, numerical methods that regularize
strain localization are paramount in the analysis and design of engineering
products and systems. In this paper we revisit the elasto-viscoplastic,
strain-softening, strain-rate hardening model as a means to avoid strain
localization on a mathematical plane in the case of a Cauchy continuum. Going
beyond previous works (de Borst and Duretz (2020); Needleman (1988); Sluys and
de Borst (1992); Wang et al. (1997)), we assume that both the frequency
$\omega$ and the wave number $k$ belong to the complex plane. Therefore, a
different expression for the dispersion relation is derived. We prove then that
under these conditions strain localization on a mathematical plane is possible.
The above theoretical results are corroborated by extensive numerical analyses,
where the total strain and plastic strain rate profiles exhibit mesh dependent
behavior.
|
In this paper, we propose a new strategy for learning inertial robotic
navigation models. The proposed strategy enhances the generalisability of
end-to-end inertial modelling, and is aimed at wheeled robotic deployments.
Concretely, the paper describes the following. (1) Using precision robotics, we
empirically characterise the effect of changing the sensor position during
navigation on the distribution of raw inertial signals, as well as the
corresponding impact on learnt latent spaces. (2) We propose neural
architectures and algorithms to assimilate knowledge from an indexed set of
sensor positions in order to enhance the robustness and generalisability of
robotic inertial tracking in the field. Our scheme of choice uses continuous
domain adaptation (DA) and optimal transport (OT). (3) In our evaluation,
continuous OT DA outperforms a continuous adversarial DA baseline, while also
showing quantifiable learning benefits over simple data augmentation. We will
release our dataset to help foster future research.
|
In this paper, for $1<p<\infty$, we obtain the $L^p$-boundedness of the
Hilbert transform $H^{\gamma}$ along a variable plane curve $(t,u(x_1,
x_2)\gamma(t))$, where $u$ is a Lipschitz function with small Lipschitz norm,
and $\gamma$ is a general curve satisfying some suitable smoothness and
curvature conditions.
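Concretely, the operator in question has the form (notation reconstructed from the description above)
$$H^{\gamma}f(x_1,x_2) = \mathrm{p.v.}\int_{-\infty}^{\infty} f\bigl(x_1 - t,\; x_2 - u(x_1,x_2)\,\gamma(t)\bigr)\,\frac{\mathrm{d}t}{t},$$
so the plane curve $(t, u(x_1,x_2)\gamma(t))$ varies with the base point through the Lipschitz function $u$.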
|
We compute the non-planar contribution to the universal anomalous dimension
of twist-two operators in N=4 supersymmetric Yang-Mills theory at four loops
through Lorentz spin eighteen. Exploiting the results of this and our previous
calculations along with recent analytic results for the cusp anomalous
dimension and some expected analytic properties, we reconstruct a general
expression valid for arbitrary Lorentz spin. We study various properties of
this general result, such as its large-spin limit, its small-x limit, and
others. In particular, we present a prediction for the non-planar contribution
to the anomalous dimension of the single-magnon operator in the beta-deformed
version of the theory.
|
Deep neural networks have been well-known for their superb performance in
handling various machine learning and artificial intelligence tasks. However,
due to their over-parameterized black-box nature, it is often difficult to
understand the prediction results of deep models. In recent years, many
interpretation tools have been proposed to explain or reveal the ways that deep
models make decisions. In this paper, we review this line of research and try
to make a comprehensive survey. Specifically, we introduce and clarify two
basic concepts, interpretations and interpretability, which are often
confused with each other. First, to cover the research efforts in
interpretations, we elaborate on the design of several recent interpretation
algorithms from different perspectives by proposing a new taxonomy. Then, to understand
the results of interpretation, we also survey the performance metrics for
evaluating interpretation algorithms. Further, we summarize the existing work
in evaluating models' interpretability using "trustworthy" interpretation
algorithms. Finally, we review and discuss the connections between deep models'
interpretations and other factors, such as adversarial robustness and data
augmentations, and we introduce several open-source libraries for
interpretation algorithms and evaluation approaches.
|
We have designed honeycomb lattices for microwave photons with a frequency
imbalance between the two sites in the unit cell. This imbalance is the
equivalent of a mass term that breaks the lattice inversion symmetry. At the
interface between two lattices with opposite imbalance, we observe topological
valley edge states. By imaging the spatial dependence of the modes along the
interface, we obtain their dispersion relation that we compare to the
predictions of an ab initio tight-binding model describing our microwave
photonic lattices.
|
The goal of this paper is to increase the membership list of the Chamaeleon
star forming region and the $\epsilon$ Cha moving group, in particular for
low-mass stars and substellar objects. We extended the search region
significantly beyond the dark clouds. Our sample has been selected based on
proper motions and colours obtained from Gaia and 2MASS. We present and discuss
the optical spectroscopic follow-up of 18 low-mass stellar objects in Cha I and
$\epsilon$ Cha. We characterize the properties of objects by deriving their
physical parameters, both from spectroscopy and photometry. We add three more
low-mass members to the list of Cha I, and increase the census of known
$\epsilon$ Cha members by more than 40%, confirming spectroscopically 13 new
members and relying on X-ray emission as a youth indicator for two more. In most
cases the best-fitting spectral template is from objects in the TW Hya
association, indicating that $\epsilon$ Cha has a similar age. The first
estimate of the slope of the initial mass function in $\epsilon$ Cha down to
the sub-stellar regime is consistent with that of other young clusters. We
estimate our IMF to be complete down to $\approx 0.03$M$_{\odot}$. The IMF can
be represented by two power laws: for M $<$ 0.5 M$_{\odot}$ $\alpha = 0.42 \pm
0.11$ and for M $>$ 0.5 M$_{\odot}$ $\alpha = 1.44 \pm 0.12$. We find
similarities between $\epsilon$ Cha and the southernmost part of Lower
Centaurus Crux (LCC A0), both lying at similar distances and sharing the same
proper motions. This suggests that $\epsilon$ Cha and LCC A0 may have been born
during the same star formation event.
|
We measure the density wave angular pattern speed $\Omega_p$ to be near 12 to
17 km/s/kpc in two independent ways: from the angular separation between a
typical optical HII region and the spiral arm dust lane, using an HII evolution
time model to convert the offset into a relative speed, and from the separation
between a typical radio maser and the spiral arm dust lane, using a maser model.
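Schematically (an illustrative reconstruction, not the paper's exact expressions), if a tracer of age $t$ sits at an angular offset $\theta$ downstream of the dust lane at galactocentric radius $R$, then
$$\theta = \bigl(\Omega(R) - \Omega_p\bigr)\,t \quad\Longrightarrow\quad \Omega_p = \Omega(R) - \theta/t,$$
where $\Omega(R)$ is the material angular speed of the disc at that radius.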
|
An instance of the super-stable matching problem with incomplete lists and
ties is an undirected bipartite graph $G = (A \cup B, E)$, with an adjacency
list being a linearly ordered list of ties. Ties are subsets of vertices
equally good for a given vertex. An edge $(x,y) \in E \setminus M$ is a
blocking edge for a matching $M$ if neither of the vertices $x$ and $y$ would
become worse off by being matched to each other; thus the two vertices have no
disadvantage in matching up. A matching $M$ is super-stable if there is no
blocking edge with respect to $M$. It has previously been shown
that super-stable matchings form a distributive lattice and the number of
super-stable matchings can be exponential in the number of vertices. We give
two compact representations of size $O(m)$ that can be used to construct all
super-stable matchings, where $m$ denotes the number of edges in the graph. The
construction of the second representation takes $O(mn)$ time, where $n$ denotes
the number of vertices in the graph, and gives an explicit rotation poset
similar to the rotation poset in the classical stable marriage problem. We also
give a polyhedral characterisation of the set of all super-stable matchings and
prove that the super-stable matching polytope is integral, thus solving an open
problem stated in the book by Gusfield and Irving.
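For illustration, super-stability under this definition can be checked directly; a minimal Python sketch (the data layout and names are ours, not the paper's):

    # Check whether a matching is super-stable: an edge blocks if neither
    # endpoint would become worse off (ties mean "at least as good").
    def is_super_stable(matching, rank, edges):
        # matching: dict vertex -> partner, with both directions stored
        # rank[v][u]: position of u in v's preference list; tied vertices
        # share a rank, and a lower rank is better.
        def no_worse_off(v, candidate):
            current = matching.get(v)
            if current is None:
                return True          # unmatched vertices cannot lose
            return rank[v][candidate] <= rank[v][current]
        for x, y in edges:
            if matching.get(x) == y:
                continue             # edges inside M never block
            if no_worse_off(x, y) and no_worse_off(y, x):
                return False         # (x, y) is a blocking edge
        return True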
|
The relaxation of field-line tension during magnetic reconnection gives rise
to a universal Fermi acceleration process involving the curvature drift of
particles. However, the efficiency of this mechanism is limited by the trapping
of energetic particles within flux-ropes. Using 3D fully kinetic simulations,
we demonstrate that the flux-rope kink instability leads to strong field-line
chaos in weak-guide-field regimes where the Fermi mechanism is most efficient,
thus allowing particles to be transported out of flux-ropes and undergo further
acceleration. As a consequence, both ions and electrons develop clear power-law
energy spectra which contain a significant fraction of the released energy. The
low-energy bounds are determined by the injection physics, while the
high-energy cutoffs are limited only by the system size. These results have
strong relevance to observations of nonthermal particle acceleration in space
and astrophysics.
|
A Galactic Internet may already exist, if all stars are exploited as
gravitational lenses. In fact, the gravitational lens of the Sun is a
well-known astrophysical phenomenon predicted by Einstein's general theory of
relativity. It implies that, if we can send a probe along any radial direction
away from the Sun up to the minimal distance of 550 AU and beyond, the Sun's
mass will act as a huge magnifying lens, letting us "see" detailed radio maps
of whatever may lie on the other side of the Sun even at very large distances.
The 2009 book by this author, ref. [1], studies such future FOCAL space
missions to 550 AU and beyond. In this paper, however, we want to study another
possibility yet: how to create the future interstellar radio links between the
solar system and any future interstellar probe by utilizing the gravitational
lens of the Sun as a huge antenna. In particular, we study the Bit Error Rate
(BER) across interstellar distances with and without using the gravitational
lens effect of the Sun (ref. [2]). The conclusion is that only by exploiting
the Sun as a gravitational lens will we be able to communicate with our own
probes (or with nearby Aliens) across the distances of even the nearest stars
in the Galaxy, and that at a reasonable Bit Error Rate. We also
study the radio bridge between the Sun and any other Star, made up of the two
gravitational lenses of both the Sun and that Star. The alignment for
this radio bridge to work is very strict, but the power-saving is enormous, due
to the huge contributions of the two stars' lenses to the overall antenna gain
of the system. We study a few cases in detail. Finally, we find the information
channel capacity for each of those radio bridges, thus putting a physical
constraint on the amount of information transfer that will be possible even by
exploiting the stars as gravitational lenses.
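For intuition, a minimal link-budget sketch of the BER comparison (all parameters are hypothetical stand-ins, and the lens gain is an assumed round number rather than the value derived in refs. [1,2]):

    import math

    def bpsk_ber(eb_n0):
        # BER of coherent BPSK: 0.5 * erfc(sqrt(Eb/N0))
        return 0.5 * math.erfc(math.sqrt(eb_n0))

    # Hypothetical link parameters (illustrative only)
    P_t = 40.0               # transmit power, W
    G_t = G_r = 1e7          # ~70 dBi, large deep-space dishes (assumed)
    f = 8.4e9                # X band, Hz
    lam = 3e8 / f            # wavelength, m
    d = 4.2 * 9.46e15        # roughly the Alpha Centauri distance, m
    R_b = 10.0               # bit rate, bit/s
    kT = 1.38e-23 * 20.0     # noise spectral density of a 20 K system, W/Hz

    P_r = P_t * G_t * G_r * (lam / (4 * math.pi * d)) ** 2  # Friis equation
    eb_n0 = P_r / (R_b * kT)
    lens_gain = 1e7          # assumed extra factor from the solar lens (~70 dB)
    # With these made-up numbers, only the lens-boosted link reaches a usable BER.
    print('BER without lens:', bpsk_ber(eb_n0))
    print('BER with lens   :', bpsk_ber(eb_n0 * lens_gain))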
|
Gaia DR2 published positions, parallaxes and proper motions for an
unprecedented 1,331,909,727 sources, revolutionising the field of Galactic
dynamics. We complement this data with the Astrometry Spread Function (ASF),
the expected uncertainty in the measured positions, proper motions and parallax
for a non-accelerating point source. The ASF is a Gaussian function for which
we construct the 5D astrometric covariance matrix as a function of position on
the sky and apparent magnitude using the Gaia DR2 scanning law and demonstrate
excellent agreement with the observed data. This can be used to answer the
question `What astrometric covariance would Gaia have published if my star were
a non-accelerating point source?'.
The ASF will enable characterisation of binary systems, exoplanet orbits,
astrometric microlensing events and extended sources which add an excess
astrometric noise to the expected astrometry uncertainty. By using the ASF to
estimate the unit weight error (UWE) of Gaia DR2 sources, we demonstrate that
the ASF indeed provides a direct probe of the excess source noise.
We use the ASF to estimate the contribution to the selection function of the
Gaia astrometric sample from a cut on astrometric_sigma5d_max showing high
completeness for $G<20$, dropping to $<1\%$ in underscanned regions of the sky
for $G=21$.
We have added an ASF module to the Python package SCANNINGLAW
(https://github.com/gaiaverse/scanninglaw) through which users can access the
ASF.
|
We compare the capabilities of two approaches to approximating graph
isomorphism using linear algebraic methods: the \emph{invertible map tests}
(introduced by Dawar and Holm) and proof systems with algebraic rules, namely
\emph{polynomial calculus}, \emph{monomial calculus} and \emph{Nullstellensatz
calculus}. In the case of fields of characteristic zero, these variants are all
essentially equivalent to the Weisfeiler-Leman algorithms. In positive
characteristic we show that the invertible map method can simulate the monomial
calculus and identify a potential way to extend this to the polynomial calculus.
|
Relational semigroups with domain and range are a useful tool for modelling
nondeterministic programs. We prove that the representation class of
domain-range semigroups with demonic composition is not finitely axiomatisable.
We extend the result for ordered domain algebras and show that any relation
algebra reduct signature containing domain, range, converse, and composition,
but no negation, meet, nor join has the finite representation property. That is,
any finite representable structure of such a signature is representable over a
finite base. We survey the results in the area of the finite representation
property.
|
Single-species reaction-diffusion equations, such as the Fisher-KPP and
Porous-Fisher equations, support travelling wave solutions that are often
interpreted as simple mathematical models of biological invasion. Such
travelling wave solutions are thought to play a role in various applications
including development, wound healing and malignant invasion. One criticism of
these single-species equations is that they do not explicitly describe
interactions between the invading population and the surrounding environment.
In this work we study a reaction-diffusion equation that describes malignant
invasion which has been used to interpret experimental measurements describing
the invasion of malignant melanoma cells into surrounding human skin tissues.
This model explicitly describes how the population of cancer cells degrades the
surrounding tissues, thereby creating free space into which the cancer cells
migrate and proliferate to form an invasion wave of malignant tissue that is
coupled to a retreating wave of skin tissue. We analyse travelling wave
solutions of this model using a combination of numerical simulation, phase
plane analysis and perturbation techniques. Our analysis shows that the
travelling wave solutions involve a range of very interesting properties that
resemble certain well-established features of both the Fisher-KPP and
Porous-Fisher equations, as well as a range of novel properties that can be
thought of as extensions of these well-studied single-species equations. Of
particular interest is that travelling wave solutions of the invasion model are
very well approximated by trajectories in the Fisher-KPP phase plane that are
normally disregarded. This observation establishes a previously unnoticed link
between coupled multi-species reaction diffusion models of invasion and a
different class of models of invasion that involve moving boundary problems.
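For context, the Fisher-KPP phase plane referred to above arises from the travelling wave ansatz $u(x,t)=U(z)$, $z=x-ct$, which turns $u_t = u_{xx} + u(1-u)$ into $U'' + cU' + U(1-U) = 0$. A minimal sketch of integrating the resulting phase plane system (our own illustration, not the paper's code):

    import numpy as np
    from scipy.integrate import solve_ivp

    c = 2.0  # wave speed; c >= 2 gives the classical monotone Fisher-KPP front

    def phase_plane(z, y):
        # y = (U, V) with V = U'; from U'' + c U' + U(1 - U) = 0
        U, V = y
        return [V, -c * V - U * (1.0 - U)]

    # Start near the saddle (U, V) = (1, 0) and follow its unstable manifold
    eps = 1e-3
    sol = solve_ivp(phase_plane, [0.0, 50.0], [1.0 - eps, -eps],
                    dense_output=True, rtol=1e-8)
    U, V = sol.y
    print('front decays to U =', U[-1])  # approaches 0 along the wave profile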
|
Scientometric analysis of 146 and 59 research articles published in the Indian
Journal of Information Sources and Services (IJISS) and the Pakistan Journal of
Library and Information Science (PJLIS) has been carried out. Seven volumes of
IJISS containing 14 issues and seven volumes of PJLIS containing 8 issues from
2011 - 2017 have been taken into consideration for the present study. The
number of contributions, authorship pattern & author productivity, average
citations, average length of articles, average keywords and collaborative
papers have been analyzed. Out of the 146 IJISS contributions, only 39 are
single authored and the rest are multi-authored, with a degree of collaboration
of 0.73 and weak collaboration among the authors; of the 59 PJLIS
contributions, only 18 are single authored and the rest are multi-authored,
with a degree of collaboration of 0.69 and weak collaboration among the
authors. The study revealed that the author productivity is 0.53 (IJISS) and
0.50 (PJLIS) and is dominated by Indian and Pakistani authors.
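The quoted degrees of collaboration are consistent with Subramanyam's formula
$$DC = \frac{N_m}{N_m + N_s},$$
where $N_m$ and $N_s$ count multi- and single-authored papers: $DC_{\rm IJISS} = 107/146 \approx 0.73$ and $DC_{\rm PJLIS} = 41/59 \approx 0.69$.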
|
Load-generation balance and system inertia are essential for maintaining
frequency in power systems. Power grids are equipped with
Rate-of-Change-of-Frequency (ROCOF) and Load Shedding (LS) relays in order to
keep
load-generation balance. With the increasing penetration of renewables, the
inertia of the power grids is declining, which results in a faster drop in
system frequency in case of load-generation imbalance. In this context, we
analyze the feasibility of launching False Data Injection (FDI) in order to
create False Relay Operations (FRO), which we refer to as FRO attack, in the
power systems with high renewables. We model the frequency dynamics of the
power systems and corresponding FDI attacks, including the impact of
parameters, such as synchronous generator inertia and governor time constant
and droop, on the success of FRO attacks. We formalize the FRO attack as a
Constraint Satisfaction Problem (CSP) and solve it using Satisfiability Modulo
Theories (SMT). Our case studies show that power grids with renewables are more
susceptible to FRO attacks and the inertia of synchronous generators plays a
critical role in reducing the success of FRO attacks in the power grids.
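As a toy illustration of casting an FRO attack as a CSP (our own minimal sketch with an assumed single-machine swing-equation model and made-up thresholds, not the paper's formulation):

    from z3 import Real, Solver, Or, sat

    f0 = 50.0          # nominal frequency, Hz (assumed)
    H = 1.0            # low inertia constant, s -- high renewable penetration
    rocof_limit = 1.0  # relay trip threshold, Hz/s (assumed)
    max_fdi = 0.05     # attacker's injection budget, per-unit power (assumed)

    dP = Real('dP')    # false load-generation imbalance injected by the attacker
    s = Solver()
    s.add(dP >= -max_fdi, dP <= max_fdi)
    # Swing equation: df/dt = f0 * dP / (2 H); the ROCOF relay trips -- a
    # false relay operation -- when |df/dt| exceeds its threshold.
    rocof = f0 * dP / (2 * H)
    s.add(Or(rocof > rocof_limit, rocof < -rocof_limit))
    if s.check() == sat:
        print('feasible FRO attack, dP =', s.model()[dP])
    else:
        print('no feasible attack within the injection budget')
    # With H = 4.0 (stronger inertia) the same budget becomes unsat,
    # mirroring the protective role of inertia noted above.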
|
Motivated by the phenomenon of Coherent Perfect Absorption, we study the
shape of the deepest minima in the frequency-dependent single-channel
reflection of waves from a cavity with spatially uniform losses. We show that
it is largely determined by non-orthogonality factors $O_{nn}$ of the
eigenmodes associated with the non-selfadjoint effective Hamiltonian. For
cavities supporting chaotic ray dynamics we then use random matrix theory to
derive, fully non-perturbatively, the explicit probability density ${\cal
P}(O_{nn})$ of the non-orthogonality factors for systems with both broken and
preserved time reversal symmetry. The results imply that $O_{nn}$ are
heavy-tail distributed, with the universal tail ${\cal P}(O_{nn}\gg 1)\sim
O_{nn}^{-3}$.
|
Expanding an idea of Raoul Bott, we propose a construction of canonical bases
for unitary representations that comes from big torus actions on families of
Bott-Samelson manifolds. The construction depends only on the choices of a
maximal torus, a Borel subgroup, and a reduced expression for the longest
element of the Weyl group. It relies on a conjectural vanishing of higher
cohomology of sheaves of holomorphic sections of certain line bundles on the
total spaces of the families, hence the question mark in the title.
|
We present an end-to-end model using streaming physiological time series to
accurately predict near-term risk for hypoxemia, a rare, but life-threatening
condition known to cause serious patient harm during surgery. Our proposed
model makes inference on both hypoxemia outcomes and future input sequences,
enabled by a joint sequence autoencoder that simultaneously optimizes a
discriminative decoder for label prediction, and two auxiliary decoders trained
for data reconstruction and forecast, which seamlessly learns future-indicative
latent representations. All decoders share a memory-based encoder that helps
capture the global dynamics of patient data. In a large surgical cohort of
73,536 surgeries at a major academic medical center, our model outperforms all
baselines and gives a large performance gain over the state-of-the-art
hypoxemia prediction system. With a high sensitivity cutoff at 80%, it presents
99.36% precision in predicting hypoxemia and 86.81% precision in predicting the
much more severe and rare hypoxemic condition, persistent hypoxemia. With
an exceptionally low rate of false alarms, our proposed model is promising for
improving clinical decision making and easing the burden on the health system.
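A minimal PyTorch sketch of the joint sequence autoencoder pattern described above, with a shared encoder and three decoders whose losses are summed (dimensions, loss weights, and the plain GRU encoder are our assumptions, not the published architecture):

    import torch
    import torch.nn as nn

    class JointSeqAutoencoder(nn.Module):
        def __init__(self, n_feat=16, n_hid=64):
            super().__init__()
            self.encoder = nn.GRU(n_feat, n_hid, batch_first=True)  # shared
            self.label_head = nn.Linear(n_hid, 1)         # discriminative decoder
            self.recon_head = nn.Linear(n_hid, n_feat)    # reconstruction decoder
            self.forecast_head = nn.Linear(n_hid, n_feat) # forecast decoder

        def forward(self, x):
            h_seq, h_last = self.encoder(x)
            logit = self.label_head(h_last[-1])           # near-term risk logit
            recon = self.recon_head(h_seq)                # reconstruct inputs
            forecast = self.forecast_head(h_seq[:, :-1])  # predict next step
            return logit, recon, forecast

    model = JointSeqAutoencoder()
    x = torch.randn(8, 120, 16)              # batch of physiological series
    y = torch.randint(0, 2, (8, 1)).float()  # outcome labels
    logit, recon, forecast = model(x)
    loss = (nn.functional.binary_cross_entropy_with_logits(logit, y)
            + nn.functional.mse_loss(recon, x)
            + nn.functional.mse_loss(forecast, x[:, 1:]))  # forecast next inputs
    loss.backward()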
|
An effective approach in meta-learning is to utilize multiple "train tasks"
to learn a good initialization for model parameters that can help solve unseen
"test tasks" with very few samples by fine-tuning from this initialization.
Although successful in practice, theoretical understanding of such methods is
limited. This work studies an important aspect of these methods: splitting the
data from each task into train (support) and validation (query) sets during
meta-training. Inspired by recent work (Raghu et al., 2020), we view such
meta-learning methods through the lens of representation learning and argue
that the train-validation split encourages the learned representation to be
low-rank without compromising on expressivity, as opposed to the non-splitting
variant that encourages high-rank representations. Since sample efficiency
benefits from low-rankness, the splitting strategy will require very few
samples to solve unseen test tasks. We present theoretical results that
formalize this idea for linear representation learning on a subspace
meta-learning instance, and experimentally verify this practical benefit of
splitting in simulations and on standard meta-learning benchmarks.
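A schematic of the train-validation (support/query) split for the linear representation setting discussed above (shapes, the ridge term, and the plain SGD outer step are our simplifications):

    import torch

    d, k, n_tasks, n_s, n_q = 20, 3, 500, 10, 10
    B = torch.randn(d, k, requires_grad=True)  # shared representation
    opt = torch.optim.SGD([B], lr=1e-2)

    B_star = torch.randn(d, k)                 # ground-truth subspace
    for _ in range(n_tasks):
        w = torch.randn(k, 1)                  # task-specific head
        X_s, X_q = torch.randn(n_s, d), torch.randn(n_q, d)
        y_s, y_q = X_s @ B_star @ w, X_q @ B_star @ w

        # Inner step: fit the head on the support (train) split only,
        # via regularized least squares on top of the representation.
        feats_s = X_s @ B
        w_hat = torch.linalg.solve(feats_s.T @ feats_s + 1e-6 * torch.eye(k),
                                   feats_s.T @ y_s)

        # Outer step: evaluate on the query (validation) split and update
        # the representation -- this is the split whose effect is analysed.
        loss = ((X_q @ B @ w_hat - y_q) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()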
|
Characterizing multipartite quantum correlations beyond two parties is of
utmost importance for building cutting edge quantum technologies, although the
comprehensive picture is still missing. Here we investigate quantum
correlations (QCs) present in a multipartite system by exploring connections
between monogamy score (MS), localizable quantum correlations (LQC), and
genuine multipartite entanglement (GME) content of the state. We find that the
frequency distribution of GME for Dicke states with higher excitations
resembles that of random states. We show that there is a critical value of GME
beyond which all states become monogamous; we investigate this by considering
different powers of MS, which provide various layers of monogamy relations.
Interestingly, such a relation between LQC and MS as well as GME does not hold.
States having a very low GME (and a low monogamy score, whether positive or
negative) can localize a high amount of QCs between two parties. We also provide an upper
bound to the sum of bipartite QC measures including LQC for random states and
establish a gap between the actual upper bound and the algebraic maximum.
|
We present a novel weighted average model based on the mixture of experts
(MoE) concept to provide robustness in Federated learning (FL) against the
poisoned/corrupted/outdated local models. These threats along with the non-IID
nature of data sets can considerably diminish the accuracy of the FL model. Our
proposed MoE-FL setup relies on the trust between users and the server where
the users share a portion of their public data sets with the server. The server
applies a robust aggregation method by solving the optimization problem or the
Softmax method to highlight the outlier cases and to reduce their adverse
effect on the FL process. Our experiments illustrate that MoE-FL outperforms
the traditional aggregation approach under a high rate of poisoned data from
attackers.
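A minimal sketch of the softmax flavour of such server-side robust aggregation (the weighting scheme and temperature are our assumptions; the optimization-based variant is not shown):

    import numpy as np

    def aggregate(client_updates, server_losses, temperature=1.0):
        # client_updates: list of flat parameter vectors, one per client.
        # server_losses: loss of each client's model on the server's public
        # data; poisoned/corrupted/outdated models get large losses and hence
        # near-zero softmax weight.
        losses = np.asarray(server_losses)
        weights = np.exp(-losses / temperature)
        weights /= weights.sum()
        return sum(w * u for w, u in zip(weights, client_updates))

    # Example: the third "client" is poisoned and is almost ignored
    updates = [np.ones(4), np.ones(4) * 1.1, np.ones(4) * 50.0]
    print(aggregate(updates, server_losses=[0.2, 0.25, 9.0]))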
|
Primordial black holes (PBHs) as part of the Dark Matter (DM) would modify
the evolution of large-scale structures and the thermal history of the
universe. Future 21 cm forest observations, sensitive to small scales and the
thermal state of the Inter Galactic Medium (IGM), could probe the existence of
such PBHs. In this article, we show that the shot noise isocurvature mode on
small scales induced by the presence of PBHs can enhance the amount of low mass
halos, or minihalos, and thus, the number of 21 cm absorption lines. However,
if the mass of PBHs is as large as $M_{\rm PBH}\gtrsim 10 \, M_\odot$, with an
abundant enough fraction of PBHs as DM, $f_{\rm PBH}$, the IGM heating due to
accretion onto the PBHs counteracts the enhancement due to the isocurvature
mode, reducing the number of absorption lines instead. The concurrence of both
effects imprints distinctive signatures in the number of absorbers, allowing us
to bound the abundance of PBHs. We compute the prospects for constraining PBHs
with future 21 cm forest observations, finding achievable competitive upper
limits on the abundance as low as $f_{\rm PBH} \sim 10^{-3}$ at $M_{\rm PBH}=
100 \, M_\odot$, or even lower at larger masses, in regions of the parameter
space unexplored by current probes. The impact of astrophysical X-ray sources on
the IGM temperature is also studied, which could potentially weaken the bounds.
|
Self-adaptive systems continuously adapt to changes in their execution
environment. Capturing all possible changes to define suitable behaviour
beforehand is unfeasible, or even impossible in the case of unknown changes,
hence human intervention may be required. We argue that adapting to unknown
situations is the ultimate challenge for self-adaptive systems. Learning-based
approaches are used to learn the suitable behaviour to exhibit in the case of
unknown situations, to minimize or fully remove human intervention. While such
approaches can, to a certain extent, generalize existing adaptations to new
situations, there are a number of breakthroughs that need to be achieved before
systems can adapt to general unknown and unforeseen situations. We posit the
research directions that need to be explored to achieve unanticipated
adaptation from the perspective of learning-based self-adaptive systems. At
minimum, systems need to define internal representations of previously unseen
situations on-the-fly, extrapolate the relationship to the previously
encountered situations to evolve existing adaptations, and reason about the
feasibility of achieving their intrinsic goals in the new set of conditions. We
close by discussing whether, even when we can, we should indeed build systems that
define their own behaviour and adapt their goals, without involving a human
supervisor.
|
For a finite group $G$ and an inverse-closed generating set $C$ of $G$, let
$Aut(G;C)$ consist of those automorphisms of $G$ which leave $C$ invariant. We
define an $Aut(G;C)$-invariant normal subgroup $\Phi(G;C)$ of $G$ which has the
property that, for any $Aut(G;C)$-invariant normal set of generators for $G$,
if we remove from it all the elements of $\Phi(G;C)$, then the remaining set is
still an $Aut(G;C)$-invariant normal generating set for $G$. The subgroup
$\Phi(G;C)$ contains the Frattini subgroup $\Phi(G)$ but the inclusion may be
proper. The Cayley graph $Cay(G,C)$ is normal edge-transitive if $Aut(G;C)$
acts transitively on the pairs $\{c,c^{-1}\}$ from $C$. We show that, for a
normal edge-transitive Cayley graph $Cay(G,C)$, its quotient modulo $\Phi(G;C)$
is the unique largest normal quotient which is isomorphic to a subdirect
product of normal edge-transitive graphs of characteristically simple groups.
In particular, we may therefore view normal edge-transitive Cayley graphs of
characteristically simple groups as building blocks for normal edge-transitive
Cayley graphs whenever the subgroup $\Phi(G;C)$ is trivial. We explore several
questions which these results raise, some concerned with the set of all
inverse-closed generating sets for groups in a given family. In particular we
use this theory to classify all $4$-valent normal edge-transitive Cayley graphs
for dihedral groups; this involves a new construction of an infinite family of
examples, and disproves a conjecture of Talebi.
|
With the rapid development of E-commerce and the increase in the quantity of
items, users are presented with more items and hence their interests broaden. It
increasingly difficult to model user intentions with traditional methods, which
model the user's preference for an item by combining a single user vector and
an item vector. Recently, some methods have been proposed to generate multiple
user interest vectors, achieving better performance compared to traditional
methods. However, empirical studies demonstrate that vectors generated from
these multi-interest methods are sometimes homogeneous, which may lead to
sub-optimal performance. In this paper, we propose a novel method of Diversity
Regularized Interests Modeling (DRIM) for Recommender Systems. We apply a
capsule network in a multi-interest extractor to generate multiple user
interest vectors. Each interest of the user should have a certain degree of
distinction, thus we introduce three strategies as the diversity regularized
separator to separate multiple user interest vectors. Experimental results on
public and industrial data sets demonstrate the ability of the model to capture
different interests of a user and the superior performance of the proposed
approach.
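One simple instance of a diversity regularizer in this spirit (our own illustration; the paper's three separator strategies may differ) penalizes the mean pairwise cosine similarity among a user's interest vectors:

    import torch

    def diversity_penalty(interests):
        # interests: (K, d) matrix of one user's K interest vectors.
        # Returns the mean pairwise cosine similarity (lower = more diverse).
        z = torch.nn.functional.normalize(interests, dim=1)
        sim = z @ z.T                                # (K, K) cosine similarities
        K = sim.size(0)
        off_diag = sim[~torch.eye(K, dtype=torch.bool)]
        return off_diag.mean()

    interests = torch.randn(4, 32, requires_grad=True)  # K = 4 interests, d = 32
    penalty = diversity_penalty(interests)
    penalty.backward()  # in training this term is added to the main loss
    print(float(penalty))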
|
In the decade since 2010, successes in artificial intelligence have been at
the forefront of computer science and technology, and vector space models have
solidified a position at the forefront of artificial intelligence. At the same
time, quantum computers have become much more powerful, and announcements of
major advances are frequently in the news.
The mathematical techniques underlying both these areas have more in common
than is sometimes realized. Vector spaces took a position at the axiomatic
heart of quantum mechanics in the 1930s, and this adoption was a key motivation
for the derivation of logic and probability from the linear geometry of vector
spaces. Quantum interactions between particles are modelled using the tensor
product, which is also used to express objects and operations in artificial
neural networks.
This paper describes some of these common mathematical areas, including
examples of how they are used in artificial intelligence (AI), particularly in
automated reasoning and natural language processing (NLP). Techniques discussed
include vector spaces, scalar products, subspaces and implication, orthogonal
projection and negation, dual vectors, density matrices, positive operators,
and tensor products. Application areas include information retrieval,
categorization and implication, modelling word-senses and disambiguation,
inference in knowledge bases, and semantic composition.
Some of these approaches can potentially be implemented on quantum hardware.
Many of the practical steps in this implementation are in early stages, and
some are already realized. Explaining some of the common mathematical tools can
help researchers in both AI and quantum computing further exploit these
overlaps, recognizing and exploring new directions along the way.
|
Some implementations of variable neighborhood search based algorithms were
presented in \emph{C\'ecilia Daquin, Hamid Allaoui, Gilles Goncalves and
Tient\'e Hsu, Variable neighborhood search based algorithms for crossdock truck
assignment, RAIRO-Oper. Res., 55 (2021) 2291-2323}. This work is based on model
in \emph{Zhaowei Miao, Andrew Lim, Hong Ma, Truck dock assignment problem with
operational time constraint within crossdocks, European Journal of Operational
Research 192 (1), 2009, 105-115}, which has been proven to be incorrect. We
reiterate and elaborate on the deficiencies in the latter and show that the
authors in the former were already aware of the deficiencies in the latter and
the proposed minor amendment does not overcome any of such deficiencies.
|
Gamma-ray data from the Fermi-Large Area Telescope reveal an unexplained,
apparently diffuse, signal from the Galactic bulge. The origin of this
"Galactic Center Excess" (GCE) has been debated with proposed sources
prominently including self-annihilating dark matter and a hitherto undetected
population of millisecond pulsars (MSPs). We use a binary population synthesis
forward model to demonstrate that an MSP population arising from the accretion
induced collapse of O-Ne white dwarfs in Galactic bulge binaries can naturally
explain the GCE. Synchrotron emission from MSP-launched cosmic ray electrons
and positrons also seems to explain the mysterious "haze" of hard-spectrum,
non-thermal microwave emission from the inner Galaxy detected in WMAP and
Planck data.
|
In HTTP Adaptive Streaming, video content is conventionally encoded by
adapting its spatial resolution and quantization level to best match the
prevailing network state and display characteristics. It is well known that the
traditional solution of using a fixed bitrate ladder does not result in the
highest quality of experience for the user. Hence, in this paper, we consider a
content-driven approach for estimating the bitrate ladder, based on
spatio-temporal features extracted from the uncompressed content. The method
implements a content-driven interpolation. It uses the extracted features to
train a machine learning model to infer the curvature points of the Rate-VMAF
curves in order to guide a set of initial encodings. We employ the VMAF quality
metric as a means of perceptually conditioning the estimation. When compared to
exhaustive encoding that produces the reference ladder, 74.3% of the estimated
ladder's Rate-VMAF points are identical to those of the reference. The
proposed method offers a significant reduction of the number of encodes
required, 77.4%, at a small average Bj{\o}ntegaard Delta Rate cost, 1.12%.
|
Recently, in 2020, Altun et al. \cite{AL} introduced the notion of
$p$-proximal contractions and discussed best proximity point results for
this class of mappings. Then in the year 2021, Gabeleh and Markin \cite{GB}
showed that the best proximity point theorem proved by Altun et al. in
\cite{AL} follows from fixed point theory. In this short note, we show that if
the $p$-proximal contraction constant satisfies $k<\frac{1}{3}$, then the existence of
best proximity point for $p$-proximal contractions follows from the celebrated
Banach contraction principle.
|
Radiomic representations can quantify properties of regions of interest in
medical image data. Classically, they account for pre-defined statistics of
shape, texture, and other low-level image features. Alternatively, deep
learning-based representations are derived from supervised learning but require
expensive annotations from experts and often suffer from overfitting and data
imbalance issues. In this work, we address the challenge of learning
representations of 3D medical images for an effective quantification under data
imbalance. We propose a \emph{self-supervised} representation learning
framework to learn high-level features of 3D volumes as a complement to
existing radiomics features. Specifically, we demonstrate how to learn image
representations in a self-supervised fashion using a 3D Siamese network. More
importantly, we deal with data imbalance by exploiting two unsupervised
strategies: a) sample re-weighting, and b) balancing the composition of
training batches. When combining our learned self-supervised feature with
traditional radiomics, we show significant improvement in brain tumor
classification and lung cancer staging tasks covering MRI and CT imaging
modalities.
|
We study decoupling theory for functions on $\mathbb{R}$ with Fourier
transform supported in a neighborhood of short Dirichlet sequences $\{\log
n\}_{n=N+1}^{N+N^{1/2}}$, as well as sequences with similar convexity
properties. We utilize the wave packet structure of functions with frequency
support near an arithmetic progression.
|
This paper defines a methodology using in-depth data to identify the skills
needed by riders in the highest risk crash configurations to reduce casualty
rates. We present a case study using in-depth data of 803 powered-two-wheeler
crashes. Seven high-risk crash configurations based on the pre-crash
trajectories of the road-users involved were considered to investigate the
human errors as crash contributors. Primary crash contributing factor, evasive
manoeuvres performed, horizontal roadway alignment and speed-related factors
were identified, along with the most frequent configurations and those with the
greatest risk of severe injury. Straight Crossing Path/Lateral Direction was
identified as the most frequent crash configuration, and Turn Across
Path/Opposing Direction as the one with the greatest risk of serious injury.
Multi-vehicle crashes cannot be considered a homogeneous category of crashes to
which the
same human failure is attributed, as different interactions between
motorcyclists and other road users are associated with both different types of
human error and different rider reactions. Human errors in multiple-vehicle
crashes related to crossing-path configurations were different from errors
related to rear-end or head-on crashes. Multi-vehicle head-on crashes and
single-vehicle collisions frequently occur along curves. The involved collision
avoidance manoeuvres of the riders differed significantly among the highest
risk crash configurations. The most relevant skill deficits are identified and
linked to their most representative context. In most cases a combination of
different skills was required simultaneously to avoid the crash. The findings
underline the need to group accident cases, beyond the usual single-vehicle
versus multi-vehicle collision approach. Our methodology can also be applied to
support preventive actions based on rider training and, eventually, ADAS design.
|
We continue to study the optical properties of the solar gravitational lens
(SGL), motivated by prospective applications of the SGL for imaging purposes. We
investigate the solution of Maxwell's equations for the electromagnetic (EM)
field, obtained on the background of a static gravitational field of the Sun.
We now treat the Sun as an extended body with a gravitational field that can be
described using an infinite series of gravitational multipole moments. Studying
the propagation of monochromatic EM waves in this extended solar gravitational
field, we develop a wave-optical treatment of the SGL that allows us to study
the caustics formed in an image plane in the SGL's strong interference region.
We investigate the EM field in several important regions, namely i) the area in
the inner part of the caustic and close to the optical axis, ii) the region
outside the caustic, and iii) the region in the immediate vicinity of the
caustic, especially around its cusps and folds. We show that in the first two
regions the physical behavior of the EM field may be understood using the
method of stationary phase. However, in the immediate vicinity of the caustic
the method of stationary phase is inadequate and a wave-optical treatment is
necessary. Relying on the angular eikonal method, we develop a new approach to
describe the EM field accurately in all regions, including the immediate
vicinity of the caustics and especially near the cusps and folds. The method
allows us to investigate the EM field in this important region, which is
characterized by rapidly oscillating behavior. Our results are new and can be
used to describe gravitational lensing by realistic astrophysical objects, such
as stars, spiral and elliptical galaxies.
|
In two spatial dimensions, there are very few global existence results for
the Kuramoto-Sivashinsky equation. The majority of the few results in the
literature are strongly anisotropic, i.e. are results of thin-domain type. In
the spatially periodic case, the dynamics of the Kuramoto-Sivashinsky equation
are in part governed by the size of the domain, as this determines how many
linearly growing Fourier modes are present. The strongly anisotropic results
allow linearly growing Fourier modes in only one of the spatial directions. We
provide here the first proof of global solutions for the two-dimensional
Kuramoto-Sivashinsky equation with a linearly growing mode in both spatial
directions. We develop a new method to this end, categorizing wavenumbers as
low (linearly growing modes), intermediate (linearly decaying modes which serve
as energy sinks for the low modes), and high (strongly linearly decaying
modes). The low and intermediate modes are controlled by means of a Lyapunov
function, while the high modes are controlled with operator estimates in
function spaces based on the Wiener algebra.
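For reference, the scalar Kuramoto-Sivashinsky equation in two space dimensions can be written as
$$\partial_t u + \tfrac{1}{2}|\nabla u|^2 + \Delta u + \Delta^2 u = 0,$$
so a Fourier mode with wavenumber $k$ has linear growth rate $|k|^2 - |k|^4$: modes with $0 < |k| < 1$ grow, which is why the domain size sets how many linearly growing modes exist in each direction.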
|
Improving the predictive capability of molecular properties in ab initio
simulations is essential for advanced material discovery. Despite recent
progress making use of machine learning, utilizing deep neural networks to
improve quantum chemistry modelling remains severely limited by the scarcity
and heterogeneity of appropriate experimental data. Here we show how training a
neural network to replace the exchange-correlation functional within a
fully-differentiable three-dimensional Kohn-Sham density functional theory
(DFT) framework can greatly improve simulation accuracy. Using only eight
experimental data points on diatomic molecules, our trained
exchange-correlation networks enable improved prediction accuracy of
atomization energies across a collection of 104 molecules containing new bonds
and atoms that are not present in the training dataset.
|
Background and Objective: Computer-aided diagnosis (CAD) systems promote
diagnostic effectiveness and alleviate the pressure on radiologists. A CAD system
for lung cancer diagnosis includes nodule candidate detection and nodule
malignancy evaluation. Recently, deep learning-based pulmonary nodule detection
has reached satisfactory performance ready for clinical application. However,
deep learning-based nodule malignancy evaluation depends on heuristic inference
from low-dose computed tomography volume to malignant probability, which lacks
clinical cognition. Methods: In this paper, we propose a joint radiology
analysis and malignancy evaluation network (R2MNet) to evaluate the pulmonary
nodule malignancy via radiology characteristics analysis. Radiological features
are extracted as channel descriptors to highlight specific regions of the input
volume that are critical for nodule malignancy evaluation. In addition, for
model explanations, we propose channel-dependent activation mapping to
visualize the features and shed light on the decision process of deep neural
network. Results: Experimental results on the LIDC-IDRI dataset demonstrate
that the proposed method achieved an area under the curve (AUC) of 96.27% on
nodule radiology analysis and an AUC of 97.52% on nodule malignancy evaluation.
In addition,
explanations of CDAM features proved that the shape and density of nodule
regions were two critical factors that influence a nodule to be inferred as
malignant, which conforms with the diagnosis cognition of experienced
radiologists. Conclusion: Incorporating radiology analysis with nodule
malignancy evaluation, the network inference process conforms to the diagnostic
procedure of radiologists and increases the confidence of evaluation results.
Besides,
model interpretation with CDAM features sheds light on the regions on which
DNNs focus when they estimate nodule malignancy probabilities.
|
We demonstrate that crystal defects can act as a probe of intrinsic
non-Hermitian topology. In particular, in point-gapped systems with periodic
boundary conditions, a pair of dislocations may induce a non-Hermitian skin
effect, where an extensive number of Hamiltonian eigenstates localize at only
one of the two dislocations. An example of such a phase is given by
two-dimensional systems exhibiting weak non-Hermitian topology, which are
adiabatically related to a decoupled stack of Hatano-Nelson chains. Moreover,
we show that strong
two-dimensional point-gap topology may also result in a dislocation response,
even when there is no skin effect present with open boundary conditions. For
both cases, we directly relate their bulk topology to a stable dislocation
non-Hermitian skin effect. Finally, and in stark contrast to the Hermitian
case, we find that gapless non-Hermitian systems hosting bulk exceptional
points also give rise to a well-localized dislocation response.
|
After discussing the limitations inherent to all set-theoretic reflection
principles akin to those studied by A. L\'evy et al. in the 1960's, we
introduce new principles of reflection based on the general notion of
\emph{Structural Reflection} and argue that they are in strong agreement with
the conception of reflection implicit in Cantor's original idea of the
unknowability of the \emph{Absolute}, which was subsequently developed in the
works of Ackermann, L\'evy, G\"odel, Reinhardt, and others. We then present a
comprehensive survey of results showing that different forms of the new
principles of Structural Reflection are equivalent to well-known large
cardinals axioms covering all regions of the large-cardinal hierarchy, thereby
justifying the naturalness of the latter.
|
In this paper, we investigate the task of hallucinating an authentic
high-resolution (HR) human face from multiple low-resolution (LR) video
snapshots. We propose a pure transformer-based model, dubbed VidFace, to fully
exploit the full-range spatio-temporal information and facial structure cues
among multiple thumbnails. Specifically, VidFace handles multiple snapshots all
at once and harnesses the spatial and temporal information integrally to
explore face alignments across all the frames, thus avoiding accumulating
alignment errors. Moreover, we design a recurrent position embedding module to
equip our transformer with facial priors, which not only effectively
regularises the alignment mechanism but also supplants notorious pre-training.
Finally, we curate a new large-scale video face hallucination dataset from the
public Voxceleb2 benchmark, which challenges prior arts on tackling unaligned
and tiny face snapshots. To the best of our knowledge, this is the first attempt
to develop a unified transformer-based solver tailored for video-based face
hallucination. Extensive experiments on public video face benchmarks show that
the proposed method significantly outperforms the state of the arts.
|
To support faster and more efficient networks, mobile operators and service
providers are bringing 5G millimeter wave (mmWave) networks indoors. However,
due to their high directionality, mmWave links are extremely vulnerable to
blockage by walls and human mobility. To address these challenges, we exploit
advances in artificially engineered metamaterials, introducing a wall-mounted
smart metasurface, called mmWall, that enables a fast mmWave beam relay through
the wall and redirects the beam power to another direction when a human body
blocks a line-of-sight path. Moreover, our mmWall supports multiple users and
fast beam alignment by generating multi-armed beams. We sketch the design of a
real-time system by considering (1) how to design a programmable,
metamaterial-based surface that refracts the incoming signal to one or more
arbitrary directions, and (2) how to split an incoming mmWave beam into
multiple outgoing beams and arbitrarily control the beam energy between these
beams. Preliminary results show the mmWall metasurface steers the outgoing beam
over a full 360 degrees, with an 89.8% single-beam efficiency and 74.5%
double-beam efficiency.
|
Given a set P of n points in the plane, the unit-disk graph G_{r}(P) with
respect to a parameter r is an undirected graph whose vertex set is P such that
an edge connects two points p, q \in P if the Euclidean distance between p and
q is at most r (the weight of the edge is 1 in the unweighted case and is the
distance between p and q in the weighted case). Given a value \lambda>0 and two
points s and t of P, we consider the following reverse shortest path problem:
computing the smallest r such that the shortest path length between s and t in
G_r(P) is at most \lambda. In this paper, we present an algorithm of O(\lfloor
\lambda \rfloor \cdot n \log n) time and another algorithm of O(n^{5/4}
\log^{7/4} n) time for the unweighted case, as well as an O(n^{5/4} \log^{5/2}
n) time algorithm for the weighted case. We also consider the L_1 version of
the problem where the distance of two points is measured by the L_1 metric; we
solve the problem in O(n \log^3 n) time for both the unweighted and weighted
cases.
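For intuition, a straightforward (non-optimized) baseline for the unweighted reverse shortest path problem: binary-search the candidate radii, which can be restricted to the set of pairwise distances, and test each with a BFS. This is our own illustration, not one of the algorithms of the paper:

    import math
    from collections import deque

    def reverse_shortest_path(points, s, t, lam):
        # Smallest r such that the s-t hop distance in G_r(P) is <= lam.
        def hops(r):
            dist = {s: 0}
            q = deque([s])
            while q:
                u = q.popleft()
                for v in range(len(points)):
                    if v not in dist and math.dist(points[u], points[v]) <= r:
                        dist[v] = dist[u] + 1
                        q.append(v)
            return dist.get(t, math.inf)

        # Candidate radii: all pairwise distances (the answer is one of them)
        cand = sorted({math.dist(p, q) for i, p in enumerate(points)
                       for q in points[i + 1:]})
        lo, hi = 0, len(cand) - 1
        while lo < hi:                      # binary search over candidates
            mid = (lo + hi) // 2
            if hops(cand[mid]) <= lam:
                hi = mid
            else:
                lo = mid + 1
        return cand[lo]

    pts = [(0, 0), (1, 0), (2, 0), (4, 0)]
    print(reverse_shortest_path(pts, s=0, t=3, lam=3))  # -> 2.0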
|
In recent years, deep neural networks (DNNs) achieved state-of-the-art
performance on several computer vision tasks. However, the one typical drawback
of these DNNs is the requirement of massive labeled data. Even though few-shot
learning methods address this problem, they often use techniques such as
meta-learning and metric-learning on top of the existing methods. In this work,
we address this problem from a neuroscience perspective by proposing a
hypothesis named Ikshana, which is supported by several findings in
neuroscience. Our hypothesis approximates the refining process of conceptual
gist in the human brain while understanding a natural scene/image. While our
hypothesis holds no particular novelty in neuroscience, it provides a novel
perspective for designing DNNs for vision tasks. By following the Ikshana
hypothesis, we design a novel neural-inspired CNN architecture named
IkshanaNet. The empirical results demonstrate the effectiveness of our method
by outperforming several baselines on the entire and subsets of the Cityscapes
and the CamVid semantic segmentation benchmarks.
|
Deep learning (DL) has gained much attention and become increasingly popular
in modern data science. Computer scientists led the way in developing deep
learning techniques, so the ideas and perspectives can seem alien to
statisticians. Nonetheless, it is important that statisticians become involved
-- many of our students need this expertise for their careers. In this paper,
developed as part of a program on DL held at the Statistical and Applied
Mathematical Sciences Institute, we address this culture gap and provide tips
on how to teach deep learning to statistics graduate students. After some
background, we list ways in which DL and statistical perspectives differ,
provide a recommended syllabus that evolved from teaching two iterations of a
DL graduate course, offer examples of suggested homework assignments, give an
annotated list of teaching resources, and discuss DL in the context of two
research areas.
|
The problem of finding near-stationary points in convex optimization has not
been adequately studied yet, unlike other optimality measures such as
minimizing function value. Even in the deterministic case, the optimal method
(OGM-G, due to Kim and Fessler (2021)) has just been discovered recently. In
this work, we conduct a systematic study of the algorithmic techniques in
finding near-stationary points of convex finite-sums. Our main contributions
are several algorithmic discoveries: (1) we discover a memory-saving variant of
OGM-G based on the performance estimation problem approach (Drori and Teboulle,
2014); (2) we design a new accelerated SVRG variant that can simultaneously
achieve fast rates for both minimizing gradient norm and function value; (3) we
propose an adaptively regularized accelerated SVRG variant, which does not
require the knowledge of some unknown initial constants and achieves
near-optimal complexities. We put an emphasis on the simplicity and
practicality of the new schemes, which could facilitate future developments.
|
Digital pathology tasks have benefited greatly from modern deep learning
algorithms. However, their need for large quantities of annotated data has been
identified as a key challenge. This need for data can be countered by using
unsupervised learning in situations where data are abundant but access to
annotations is limited. Feature representations learned from unannotated data
using contrastive predictive coding (CPC) have been shown to enable classifiers
to obtain state-of-the-art performance from relatively small amounts of
annotated computer vision data. We present a modification to the CPC framework
for use with digital pathology patches. This is achieved by introducing an
alternative mask for building the latent context and using a multi-directional
PixelCNN autoregressor. To demonstrate our proposed method we learn feature
representations from the Patch Camelyon histology dataset. We show that our
proposed modification can yield improved deep classification of histology
patches.
|
Exotic high-rank multipolar order parameters have been found to be
unexpectedly active in more and more correlated materials in recent years. Such
multipoles are usually dubbed "Hidden Orders" since they are insensitive to
common experimental probes. Theoretically, it is also difficult to predict
multipolar orders via \textit{ab initio} calculations in real materials. Here,
we present an efficient method to predict possible multipoles in materials
based on linear response theory under random phase approximation. Using this
method, we successfully predict two pure meta-stable magnetic octupolar states
in monolayer $\alpha$-\ce{RuCl3}, which is confirmed by self-consistent
unrestricted Hartree-Fock calculations. We then demonstrate that these
octupolar states can be stabilized in monolayer $\alpha$-\ce{RuI3}, one of
which becomes the octupolar ground state. Furthermore, we also predict a
fingerprint of the orthogonal magnetization pattern produced by the octupole
moment, which can be easily detected in experiment. The method and the example
presented in this work serve as guidance for the search for multipolar order
parameters in other correlated materials.
|
With sequentially stacked self-attention, (optional) encoder-decoder
attention, and feed-forward layers, the Transformer has achieved great success in natural
language processing (NLP), and many variants have been proposed. Currently,
almost all these models assume that the layer order is fixed and kept the same
across data samples. We observe that different data samples actually favor
different orders of the layers. Based on this observation, in this work, we
break the assumption of the fixed layer order in the Transformer and introduce
instance-wise layer reordering into the model structure. Our Instance-wise
Ordered Transformer (IOT) can model different functions via reordered layers,
which enables each sample to select the most suitable order to improve model
performance under the constraint of almost the same number of parameters. To
achieve this, we introduce a light predictor with negligible parameter and
inference cost to decide the most capable and favorable layer order for any
input sequence. Experiments on 3 tasks (neural machine translation, abstractive
summarization, and code generation) and 9 datasets demonstrate consistent
improvements of our method. We further show that our method can also be applied
to other architectures beyond the Transformer. Our code is released on GitHub.
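Purely as an illustration, a light order predictor could look like the sketch below; the small candidate pool of orders, per-batch (rather than per-sample) routing, and all module names are our simplifications rather than the released implementation, and the hard argmax would need a relaxation (e.g., Gumbel-softmax) to be trained end to end.

    import itertools
    import torch
    import torch.nn as nn

    class OrderPredictor(nn.Module):
        """Scores candidate layer orders and applies the best one."""
        def __init__(self, d_model, n_layers, max_orders=24):
            super().__init__()
            perms = itertools.permutations(range(n_layers))
            self.orders = list(itertools.islice(perms, max_orders))
            self.scorer = nn.Linear(d_model, len(self.orders))  # "light": one linear map

        def forward(self, x, layers):
            # x: (B, T, d_model); layers: nn.ModuleList of sub-layers
            scores = self.scorer(x.mean(dim=(0, 1)))     # pooled features -> order scores
            order = self.orders[int(scores.argmax())]
            for i in order:
                x = layers[i](x)
            return x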
|
This article shows that achieving the capacity region of a 2-user weak Gaussian
Interference Channel (GIC) is equivalent to enlarging the core in a nested set
of polymatroids (each equivalent to the capacity region of a multiple-access
channel) through maximizing a minimum rate, then projecting along its
orthogonal span and continuing recursively. This formulation relies on defining
dummy private messages to capture the effect of interference in the GIC. It
follows that relying on independent Gaussian random codebooks is optimal, and
the corresponding solution achieves the boundary of the Han-Kobayashi (HK)
constraints.
|
In this paper, we represent the problem of selecting miners within a
blockchain-based system as a subset selection problem. We formulate the problem
of minimising blockchain energy consumption as an optimisation problem with two
conflicting objectives: energy consumption and trust. The proposed model is
compared across different algorithms to demonstrate its performance.
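A toy sketch of the underlying subset-selection problem with a weighted-sum scalarization of the two objectives; the miner data, weights, and exhaustive search are our own illustration (a real instance would use a proper multi-objective algorithm and far larger candidate sets).

    import itertools
    import random

    def select_miners(miners, k, w=0.5):
        """Pick k miners minimising w*energy - (1-w)*trust.
        miners: dict mapping miner id -> (energy, trust)."""
        def cost(subset):
            energy = sum(miners[m][0] for m in subset)
            trust = sum(miners[m][1] for m in subset)
            return w * energy - (1 - w) * trust
        return min(itertools.combinations(miners, k), key=cost)

    miners = {i: (random.random(), random.random()) for i in range(8)}
    print(select_miners(miners, k=3))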
|
Edge computing offers the distinct advantage of harnessing compute
capabilities on resources located at the edge of the network to run workloads
of relatively weak user devices. This is achieved by offloading computationally
intensive workloads, such as deep learning from user devices to the edge. Using
the edge reduces the overall communication latency of applications as workloads
can be processed closer to where data is generated on user devices rather than
sending them to geographically distant clouds. Specialised hardware
accelerators, such as Graphics Processing Units (GPUs) available in the
cloud-edge network can enhance the performance of computationally intensive
workloads that are offloaded from devices onto the edge. The underlying
approach required to facilitate this is virtualization of GPUs. This paper
therefore sets out to investigate the potential of GPU accelerator
virtualization to improve the performance of deep learning workloads in a
cloud-edge environment. The AVEC accelerator virtualization framework is
proposed that incurs minimum overheads and requires no source-code modification
of the workload. AVEC intercepts local calls to a GPU on a device and forwards
them to an edge resource seamlessly. The feasibility of AVEC is demonstrated on
a real-world application, namely OpenPose using the Caffe deep learning
library. It is observed that on a lab-based experimental test-bed AVEC delivers
up to 7.48x speedup despite communication overheads incurred due to data
transfers.
|
As herd size on dairy farms continues to increase, automatic health
monitoring of cows is gaining in interest. Lameness, a prevalent health
disorder in dairy cows, is commonly detected by analyzing the gait of cows. A
cow's gait can be tracked in videos using pose estimation models because models
learn to automatically localize anatomical landmarks in images and videos. Most
animal pose estimation models are static, that is, videos are processed frame
by frame and do not use any temporal information. In this work, a static
deep-learning model for animal pose estimation was extended to a temporal model
that includes information from past frames. We compared the performance of the
static and temporal pose estimation models. The data consisted of 1059 samples
of 4 consecutive frames extracted from videos (30 fps) of 30 different dairy
cows walking through an outdoor passageway. As farm environments are prone to
occlusions, we tested the robustness of the static and temporal models by
adding artificial occlusions to the videos. The experiments showed that, on
non-occluded data, both static and temporal approaches achieved a Percentage of
Correct Keypoints (PCK@0.2) of 99%. On occluded data, our temporal approach
outperformed the static one by up to 32.9%, suggesting that using temporal data
was beneficial for pose estimation in environments prone to occlusions, such as
dairy farms. The generalization capability of the temporal model was
evaluated by testing it on data containing unknown cows (cows not present in
the training set). The results showed that the average PCK@0.2 was 93.8% on
known cows and 87.6% on unknown cows, indicating that the model was capable of
generalizing well to new cows and could be easily fine-tuned to new
herds. Finally, we showed that with harder tasks, such as occlusions and
unknown cows, a deeper architecture was more beneficial.
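For reference, the PCK@0.2 metric quoted above can be computed as in this sketch; the per-animal reference length used for normalisation is our assumption, as the paper's exact reference may differ.

    import numpy as np

    def pck(pred, gt, ref_len, thresh=0.2):
        """Percentage of Correct Keypoints.
        pred, gt: (N, K, 2) predicted / ground-truth keypoint coordinates.
        ref_len: (N,) per-image reference length (e.g., torso length).
        A keypoint counts as correct if its error is below thresh * ref_len."""
        dist = np.linalg.norm(pred - gt, axis=-1)        # (N, K) pixel errors
        return float(np.mean(dist < thresh * ref_len[:, None]))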
|
Document-level machine translation conditions on surrounding sentences to
produce coherent translations. There has been much recent work in this area
with the introduction of custom model architectures and decoding algorithms.
This paper presents a systematic comparison of selected approaches from the
literature on two benchmarks for which document-level phenomena evaluation
suites exist. We find that a simple method based purely on back-translating
monolingual document-level data performs as well as much more elaborate
alternatives, both in terms of document-level metrics as well as human
evaluation.
|
The possibility that rotating black holes could be natural particle
accelerators has been the subject of intense debate. While it appears that for
extremal Kerr black holes arbitrarily high center of mass energies could be
achieved, several works pointed out that both theoretical as well as
astrophysical arguments would severely dampen the attainable energies. In this
work we study particle collisions near Kerr--Newman black holes, by reviewing
and extending previously proposed scenarios. Most importantly, we implement the
hoop conjecture for all cases and we discuss the astrophysical relevance of
these collisional Penrose processes. The outcome of this investigation is that
scenarios involving near-horizon target particles are in principle able to
attain sub-Planckian, but still ultra-high, center-of-mass energies of the
order of $10^{21}-10^{23}$ eV. Thus, these target particle collisional Penrose
processes could contribute to the observed spectrum of ultra high-energy cosmic
rays, even if the hoop conjecture is taken into account, and as such deserve
further scrutiny in realistic settings.
|
In this paper, we describe a method to tackle data sparsity and create
recommendations in domains with limited knowledge about user preferences. We
expand the variational autoencoder collaborative filtering from a single-domain
to a multi-domain setting. The intuition is that user-item interactions in a
source domain can augment the recommendation quality in a target domain. The
intuition can be taken to its extreme, where, in a cross-domain setup, the user
history in a source domain is enough to generate high-quality recommendations
in a target one. We thus create a Product-of-Experts (POE) architecture for
recommendations that jointly models user-item interactions across multiple
domains. The method is resilient to missing data for one or more of the
domains, which is a situation often found in real life. We present results on
two widely used datasets, Amazon and Yelp, which support the claim that
holistic user preference knowledge leads to better recommendations.
Surprisingly, we find that in some cases, a POE recommender that does not
access the target domain user representation can surpass a strong VAE
recommender baseline trained on the target domain.
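The Product-of-Experts combination of per-domain Gaussian posteriors has a simple closed form (precision-weighted averaging), sketched below under our own shape conventions; a missing domain is handled by simply leaving its expert out of the lists, which is what makes the model robust to missing data.

    import torch

    def product_of_experts(mus, logvars):
        """Combine Gaussian experts q_d(z|x_d) into a single Gaussian.
        mus, logvars: lists of (B, D) tensors, one entry per observed domain.
        The product of Gaussians is Gaussian with summed precisions."""
        precisions = [torch.exp(-lv) for lv in logvars]   # 1 / sigma_d^2
        total_precision = sum(precisions)
        mu = sum(p * m for p, m in zip(precisions, mus)) / total_precision
        logvar = -torch.log(total_precision)
        return mu, logvar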
|
The KATRIN experiment is designed for a direct and model-independent
determination of the effective electron anti-neutrino mass via a high-precision
measurement of the tritium $\beta$-decay endpoint region with a sensitivity on
$m_\nu$ of 0.2$\,$eV/c$^2$ (90% CL). For this purpose, the $\beta$-electrons
from a high-luminosity windowless gaseous tritium source traversing an
electrostatic retarding spectrometer are counted to obtain an integral spectrum
around the endpoint energy of 18.6$\,$keV. A dominant systematic effect of the
response of the experimental setup is the energy loss of $\beta$-electrons from
elastic and inelastic scattering off tritium molecules within the source. We
determined the energy-loss function in situ with a pulsed
angular-selective and monoenergetic photoelectron source at various
tritium-source densities. The data was recorded in integral and differential
modes; the latter was achieved by using a novel time-of-flight technique.
We developed a semi-empirical parametrization for the energy-loss function
for the scattering of 18.6-keV electrons from hydrogen isotopologs. This model
was fit to measurement data with a 95% T$_2$ gas mixture at 30$\,$K, as used in
the first KATRIN neutrino mass analyses, as well as a D$_2$ gas mixture of 96%
purity used in KATRIN commissioning runs. The achieved precision on the
energy-loss function has reduced the corresponding uncertainty in the KATRIN
neutrino-mass measurement to a subdominant level,
$\sigma(m_\nu^2)<10^{-2}\,\mathrm{eV}^2$ [arXiv:2101.05253].
|
We present the results from a new search for candidate galaxies at z ~ 8.5-11
discovered over the 850 arcmin^2 area probed by the Cosmic Assembly
Near-Infrared Deep Extragalactic Legacy Survey (CANDELS). We use a photometric
redshift selection including both Hubble and Spitzer Space Telescope photometry
to robustly identify galaxies in this epoch at F160W < 26.6. We use a detailed
vetting procedure, including screening for persistence, stellar contamination,
inclusion of ground-based imaging, and follow-up space-based imaging to build a
robust sample of 11 candidate galaxies, three presented here for the first
time. The inclusion of Spitzer/IRAC photometry in the selection process reduces
contamination, and yields more robust redshift estimates than Hubble alone. We
constrain the evolution of the rest-frame ultraviolet luminosity function via a
new method of calculating the observed number densities without choosing a
prior magnitude bin size. We find that the abundance at our brightest probed
luminosities (M_UV=-22.3) is consistent with predictions from simulations which
assume that galaxies in this epoch have gas depletion times at least as short
as those in nearby starburst galaxies. Due to large Poisson and cosmic variance
uncertainties we cannot conclusively rule out either a smooth evolution of the
luminosity function continued from z=4-8, or an accelerated decline at z > 8. We
calculate that the presence of seven galaxies in a single field (EGS) is an
outlier at the 2-sigma significance level, implying the discovery of a
significant overdensity. These scenarios will be imminently testable to high
confidence within the first year of observations of the James Webb Space
Telescope.
|
This paper investigates the stability properties of the spectrum of the
classical Steklov problem under domain perturbation. We find conditions which
guarantee the spectral stability and we show their optimality. We emphasize the
fact that our spectral stability results also involve convergence of
eigenfunctions in a suitable sense, in accordance with the definition of a connecting
system in \cite{Vainikko}. The convergence of eigenfunctions can be expressed
in terms of the $H^1$ strong convergence. The arguments used in our proofs are
based on an appropriate definition of compact convergence of the resolvent
operators associated with the Steklov problems on varying domains. In order to
show the optimality of our conditions we present alternative assumptions which
give rise to a degeneration of the spectrum or to a discontinuity of the
spectrum in the sense that the eigenvalues converge to the eigenvalues of a
limit problem which does not coincide with the Steklov problem on the limiting
domain.
|
In recent years, pi-conjugated polymers have been attracting considerable
interest in view of their light-dependent torsional reorganization around the
pi-conjugated backbone, which gives rise to peculiar light-emitting properties.
Motivated by the interest in designing conjugated polymers with tunable
photoswitchable pathways, we devised a computational framework to enhance the
sampling of the torsional conformational space and at the same time estimate
ground to excited-state free-energy differences. This scheme is based on a
combination of Hamiltonian Replica Exchange (REM), Parallel Bias metadynamics,
and free-energy perturbation theory. In our scheme, each REM replica samples an
intermediate unphysical state between the ground and the first two excited
states, which are characterized by TD-DFT simulations at the B3LYP/6-31G* level
of theory. We applied the method to a 5-mer of 9,9-dioctylfluorene and found
that upon irradiation this system can undergo a dihedral inversion from 155 to
-155 degrees crossing a barrier that decreases from 0.1 eV in the ground state
(S0) to 0.05 eV and 0.04 eV in the first (S1) and second (S2) excited states.
Furthermore, S1 and even more S2 were predicted to stabilize coplanar
dihedrals, with a local free-energy minimum located at ±44 degrees. The
presence of a free-energy barrier of 0.08 eV for the S1 and 0.12 eV for the S2
state can trap this conformation in a basin far from the global free-energy
minimum located at 155 degrees. The simulation results were compared with the
experimental emission spectrum, showing a quantitative agreement with the
predictions provided by our framework.
|
We explore variants of Erd\H os' unit distance problem concerning dot
products between successive pairs of points chosen from a large finite subset
of either $\mathbb F_q^d$ or $\mathbb Z_q^d,$ where $q$ is a power of an odd
prime. Specifically, given a large finite set of points $E$, and a sequence of
elements of the base field (or ring) $(\alpha_1,\ldots,\alpha_k)$, we give
conditions guaranteeing the expected number of $(k+1)$-tuples of distinct
points $(x_1,\dots, x_{k+1})\in E^{k+1}$ satisfying $x_j \cdot
x_{j+1}=\alpha_j$ for every $1\leq j \leq k$.
|
The generation of high-order harmonics in finite, hexagonal nanoribbons is
simulated. Ribbons with armchair and zig-zag edges are investigated by using a
tight-binding approach with only nearest neighbor hopping. By turning an
alternating on-site potential off or on, the system describes, for example,
graphene or hexagonal boron nitride, respectively. The incoming laser pulse is
linearly polarized along the ribbons. The emitted light has a polarization
component parallel to the polarization of the incoming field. The presence or
absence of a polarization component perpendicular to the polarization of the
incoming field can be explained by the symmetry of the ribbons. Characteristic
features in the harmonic spectra for the finite ribbons are analyzed with the
help of the band structure for the corresponding periodic systems.
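A minimal sketch of the nearest-neighbour tight-binding construction, assuming the ribbon geometry is encoded as a list of bonds (a 6-site ring stands in for the hexagonal ribbon here); the alternating on-site potential delta switches between the graphene-like (delta = 0) and boron-nitride-like (delta > 0) cases.

    import numpy as np

    def tight_binding_hamiltonian(n_sites, bonds, t=1.0, delta=0.0):
        """H with hopping -t on nearest-neighbour bonds and staggered
        on-site energies +/- delta on the two sublattices."""
        H = np.zeros((n_sites, n_sites))
        for i, j in bonds:
            H[i, j] = H[j, i] = -t
        H += np.diag([delta * (-1) ** i for i in range(n_sites)])
        return H

    bonds = [(i, (i + 1) % 6) for i in range(6)]          # toy 6-site ring
    energies = np.linalg.eigvalsh(tight_binding_hamiltonian(6, bonds, delta=0.3))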
|
Pions constitute nearly $70\%$ of final state particles in ultra high energy
collisions. They act as a probe to understand the statistical properties of
Quantum Chromodynamics (QCD) matter, i.e., the Quark-Gluon Plasma (QGP), created in
such relativistic heavy ion collisions (HIC). Apart from this, direct photons
are the most versatile tools to study relativistic HIC. They are produced, by
various mechanisms, during the entire space-time history of the strongly
interacting system. Direct photons provide a measure of jet quenching when
compared with quark or gluon jets. The $\pi^{0}$ decay into two photons
makes the identification of uncorrelated photons coming from other processes
cumbersome in the Electromagnetic Calorimeter. We investigate the use of deep
learning architectures for the reconstruction and identification of single- as
well as multi-particle showers produced in the calorimeter by particles created
in high-energy collisions. We utilize electromagnetic-shower data at the
calorimeter cell level to train the network and show improvements in identification and
characterization. These networks are fast and computationally inexpensive for
particle shower identification and reconstruction for current and future
experiments at particle colliders.
|
We propose a three-terminal structure to probe robust signatures of Majorana
zero modes. This structure consists of a quantum dot coupled to the normal
metal, s-wave superconducting and Majorana Y-junction leads. The zero-bias
differential conductance at zero temperature of the normal-metal lead peaks at
$2e^{2}/h$, which will be deflected after Majorana braiding. This quantized
conductance can entirely arise from the Majorana-induced crossed Andreev
reflection, protected by the energy gap of the superconducting lead. We find
that the effect of thermal broadening is significantly suppressed when the dot
is on resonance. In the case that the energy level of the quantum dot is much
larger than the superconducting gap, tunneling processes are dominated by
Majorana-induced crossed Andreev reflection. In particular, a novel kind of
crossed Andreev reflection equivalent to the splitting of charge quanta $3e$
occurs after Majorana braiding.
|
Fuzzing is becoming more and more popular in the field of vulnerability
detection. In the process of fuzzing, seed selection strategy plays an
important role in guiding the evolution direction of fuzzing. However,
state-of-the-art fuzzers focus only on individual uncertainty, neglecting the multi-factor
uncertainty caused by both randomization and evolution. In this paper, we
consider seed selection in fuzzing as a large-scale online planning problem
under uncertainty. We propose Alpha-Fuzz, a new intelligent seed-selection
strategy. In Alpha-Fuzz, we leverage the Monte Carlo tree search (MCTS)
algorithm to deal with the uncertainty arising from both the randomization and
the evolution of fuzzing.
Especially, we analyze the role of the evolutionary relationship between seeds
in the process of fuzzing, and propose a new tree policy and a new default
policy to make the MCTS algorithm better adapt to fuzzing. We compared
Alpha-Fuzz with four state-of-the-art fuzzers on 12 real-world applications and
the LAVA-M data set. The experimental results show that Alpha-Fuzz finds more
bugs on LAVA-M and outperforms the other tools in terms of code coverage and
number of bugs discovered in the real-world applications. In addition, we
tested the compatibility of Alpha-Fuzz; the results show that it can improve
the performance of existing tools such as MOPT and QSYM.
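For context, the textbook UCT tree policy that MCTS-based seed selection builds on is sketched below; the Node bookkeeping and exploration constant are our own minimal assumptions, not the paper's customized tree and default policies.

    import math
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        visits: int = 0
        value: float = 0.0            # cumulative reward (e.g., new coverage)
        children: list = field(default_factory=list)

    def uct_select(node, c=math.sqrt(2)):
        """Pick the child maximising exploitation + exploration."""
        def score(child):
            if child.visits == 0:
                return float("inf")   # try unvisited seeds first
            exploit = child.value / child.visits
            explore = c * math.sqrt(math.log(node.visits) / child.visits)
            return exploit + explore
        return max(node.children, key=score)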
|
The Baikal Gigaton Volume Detector (Baikal-GVD) is a km$^3$-scale neutrino
detector currently under construction in Lake Baikal, Russia. The detector
consists of several thousand optical sensors arranged on vertical strings, with
36 sensors per string. The strings are grouped into clusters of 8 strings each.
Each cluster can operate as a stand-alone neutrino detector. The detector
layout is optimized for the measurement of astrophysical neutrinos with
energies of $\sim$ 100 TeV and above. Events resulting from charged current
interactions of muon (anti-)neutrinos will have a track-like topology in
Baikal-GVD. A fast $\chi^2$-based reconstruction algorithm has been developed
to reconstruct such track-like events. The algorithm has been applied to data
collected in 2019 from the first five operational clusters of Baikal-GVD,
resulting in observations of both downgoing atmospheric muons and upgoing
atmospheric neutrinos. This serves as an important milestone towards
experimental validation of the Baikal-GVD design. The analysis is limited to
single-cluster data, favoring nearly-vertical tracks.
|
Using a three-dimensional active vertex model, we numerically study the
shapes of strained unsupported epithelial monolayers subject to active
junctional noise due to stochastic binding and unbinding of myosin. We find
that while uniaxial, biaxial, and isotropic in-plane compressive strains do
lead to the formation of longitudinal, herringbone-pattern, and labyrinthine
folds, respectively, the villus morphology characteristic of, e.g., the small
intestine appears only if junctional tension fluctuations are strong enough to
fluidize the tissue. Moreover, the fluidized epithelium features villi even in
the absence of compressive strain, provided that the apico-basal differential
tension is large enough. We analyze several details of the different epithelial
forms including the role of strain rate and the modulation of tissue thickness
across folds. Our results show that nontrivial morphologies can form even in
unsupported, non-patterned epithelia.
|
In this paper we study the magnetic charges of the free massless
Rarita-Schwinger field in four dimensional asymptotically flat space-time. This
is the first step towards extending the study of the dual BMS charges to
supergravity. The magnetic charges appear due to the addition of a boundary
term in the action. This term is similar to the theta term in Yang-Mills
theory. At null infinity an infinite-dimensional algebra is discovered, for both
the electric and the magnetic charges.
|
Graded modal type systems and coeffects are becoming a standard formalism to
deal with context-dependent computations where code usage plays a central role.
The theory of program equivalence for modal and coeffectful languages, however,
is considerably underdeveloped if compared to the denotational and operational
semantics of such languages. This raises the question of how much of the theory
of ordinary program equivalence can be given in a modal scenario. In this work,
we show that coinductive equivalences can be extended to a modal setting, and
we do so by generalising Abramsky's applicative bisimilarity to coeffectful
behaviours. To achieve this goal, we develop a general theory of ternary
program relations based on the novel notion of a comonadic lax extension, on
top of which we define a modal extension of Abramsky's applicative bisimilarity
(which we dub modal applicative bisimilarity). We prove that this relation is a
congruence, thereby obtaining a compositional technique for reasoning about
modal and coeffectful behaviours. But this is not the end of the story: we also
establish a correspondence between modal program relations and program
distances. This correspondence shows that modal applicative bisimilarity and (a
properly extended) applicative bisimilarity distance coincide, thereby
revealing that modal program equivalences and program distances are just two
sides of the same coin.
|
We study multivariate approximation in the average case setting with the
error measured in the weighted $L_2$ norm. We consider algorithms that use
standard information $\Lambda^{\rm std}$ consisting of function values or
general linear information $\Lambda^{\rm all}$ consisting of arbitrary
continuous linear functionals. We investigate the equivalences of various
notions of algebraic and exponential tractability for $\Lambda^{\rm std}$ and
$\Lambda^{\rm all}$ for the absolute error criterion, and show that the power
of $\Lambda^{\rm std}$ is the same as that of $\Lambda^{\rm all}$ for all
notions of algebraic and exponential tractability without any condition.
Specifically, we solve Open Problems 116-118 and almost solve Open Problem 115
as posed by E. Novak and H. Wo\'zniakowski in the book Tractability of
Multivariate Problems, Volume III: Standard Information for Operators, EMS
Tracts in Mathematics, Z\"urich, 2012.
|
A topological pump enables robust, quantized transport of particles when the
system parameters are varied in a cyclic process. In previous studies,
topological pumping was achieved in homogeneous systems, guaranteed by a
topological invariant of the bulk band structure when time is included as an
additional synthetic dimension. Recently, the bulk-boundary correspondence has
been generalized to the bulk-disclination correspondence, describing the
emergence of topologically bound states at crystallographic defects protected
by the bulk topology. Here we show that topological pumping can occur between
disclination states with different chiralities in an inhomogeneous
structure. Based on a generalized understanding of the charge pumping process,
we explain the topological disclination pump by tracing the motion of Wannier
centers in each unit cell. In addition, by constructing two disclination
structures and introducing a symmetry-breaking perturbation, we achieve
topological pumping between different dislocation cores. Our result opens a
route to studying topological pumping in inhomogeneous topological crystalline
systems and
provides a flexible platform for robust energy transport.
|
Person images captured by surveillance cameras are often occluded by various
obstacles, which leads to defective feature representations and harms person
re-identification (Re-ID) performance. To tackle this challenge, we propose to
reconstruct the feature representation of occluded parts by fully exploiting
the information of its neighborhood in a gallery image set. Specifically, we
first introduce a visible part-based feature by body mask for each person
image. Then we identify its neighboring samples using the visible features and
reconstruct the representation of the full body by an outlier-removable graph
neural network with all the neighboring samples as input. Extensive experiments
show that the proposed approach obtains significant improvements. In the
large-scale Occluded-DukeMTMC benchmark, our approach achieves 64.2% mAP and
67.6% rank-1 accuracy which outperforms the state-of-the-art approaches by
large margins, i.e., 20.4% and 12.5%, respectively, indicating the effectiveness
of our method on the occluded Re-ID problem.
|
This paper studies system-theoretic properties of the class of difference
inclusions of convex processes. We will develop a framework considering
eigenvalues and eigenvectors, weakly and strongly invariant cones, and a
decomposition of convex processes. This will allow us to characterize
reachability, stabilizability and (null-)controllability of nonstrict convex
processes in terms of spectral properties. These characterizations generalize
all previously known results regarding for instance linear processes and
specific classes of nonstrict convex processes.
|
In the international oil trade network (iOTN), trade shocks triggered by
extreme events may spread over the entire network along the trade links of the
central economies and even lead to the collapse of the whole system. In this
study, we focus on the concept of "too central to fail" and use traditional
centrality indicators as strategic indicators for simulating attacks on
economic nodes, and simulates various situations in which the structure and
function of the global oil trade network are lost when the economies suffer
extreme trade shocks. The simulation results show that the global oil trade
system has become more vulnerable in recent years. The regional aggregation of
oil trade is an essential source of iOTN's vulnerability. Maintaining global
oil trade stability and security requires a focus on economies with greater
influence within the network module of the iOTN. International organizations
such as OPEC and the OECD have established more trade links around the world, but their
influence on the iOTN is declining. We improve the framework of oil security
and trade risk assessment based on the topological indices of the iOTN, and provide a
reference for finding methods to maintain network robustness and trade
stability.
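A minimal sketch of such a centrality-guided attack simulation, assuming the iOTN is available as a networkx graph (treated as undirected here) and using the relative size of the largest surviving component as a crude proxy for remaining network function.

    import networkx as nx

    def attack_by_centrality(G, centrality=nx.betweenness_centrality, fraction=0.1):
        """Remove the top-`fraction` most central economies and report the
        relative size of the largest connected component left."""
        scores = centrality(G)
        k = max(1, int(fraction * G.number_of_nodes()))
        targets = sorted(scores, key=scores.get, reverse=True)[:k]
        H = nx.Graph(G)                      # undirected working copy
        H.remove_nodes_from(targets)
        if H.number_of_nodes() == 0:
            return 0.0
        giant = max(nx.connected_components(H), key=len)
        return len(giant) / G.number_of_nodes()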
|
Audio-visual speech enhancement is regarded as one of the promising
solutions for isolating and enhancing the speech of a desired speaker.
Conventional methods focus on predicting the clean speech spectrum via a naive
convolutional-neural-network-based encoder-decoder architecture; these methods
a) do not use the data fully and effectively, and b) cannot process features
selectively. The proposed model addresses these drawbacks by a) applying a
model that fuses audio and visual features layer by layer in the encoding phase,
and that feeds fused audio-visual features to each corresponding decoder layer,
and more importantly, b) introducing soft threshold attention into the model to
select the informative modality softly. This paper proposes an attentional
audio-visual multi-layer feature fusion model, in which a soft-threshold
attention unit is applied to the feature maps at every decoder layer. The
proposed model demonstrates superior performance against state-of-the-art
models.
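A sketch of a soft-threshold attention unit in the spirit described above, with the threshold predicted per channel from the feature map itself; this follows the common deep-residual-shrinkage construction, and the paper's exact unit may differ.

    import torch
    import torch.nn as nn

    class SoftThreshold(nn.Module):
        """Shrink small activations to zero so uninformative features
        from either modality can be suppressed softly."""
        def __init__(self, channels):
            super().__init__()
            self.gate = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

        def forward(self, x):                       # x: (B, C, T) feature maps
            stat = x.abs().mean(dim=-1)             # (B, C) per-channel statistic
            tau = (stat * self.gate(stat)).unsqueeze(-1)   # learned threshold
            return torch.sign(x) * torch.relu(x.abs() - tau)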
|
In this article we propose a shooting algorithm for partially-affine optimal
control problems, that is, systems in which the controls appear both linearly
and nonlinearly in the dynamics. Since the shooting system generally has more
equations than unknowns, the algorithm relies on the Gauss-Newton method. As a
consequence, the convergence is locally quadratic provided that the derivative
of the shooting function is injective and Lipschitz continuous at the optimal
solution. We provide a proof of the convergence for the proposed algorithm
using recently developed second order sufficient conditions for weak optimality
of partially-affine problems. We illustrate the applicability of the algorithm
by solving an optimal treatment-vaccination epidemiological problem.
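A minimal sketch of the Gauss-Newton iteration for an overdetermined shooting system F(x) = 0; the residual F, Jacobian J, and stopping rule are generic placeholders rather than the paper's shooting function.

    import numpy as np

    def gauss_newton(F, J, x0, tol=1e-10, max_iter=50):
        """Each step solves the linearised least-squares problem
        min_dx ||J(x) dx + F(x)||, which handles more equations than
        unknowns; convergence is locally quadratic when J is injective
        and Lipschitz at the solution."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            dx, *_ = np.linalg.lstsq(J(x), -F(x), rcond=None)
            x = x + dx
            if np.linalg.norm(dx) < tol:
                break
        return x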
|
We optimize a selection of eigenvalues of the Laplace operator with Dirichlet
or Neumann boundary conditions by adjusting the shape of the domain on which
the eigenvalue problem is considered. Here, a phase-field function is used to
represent the shapes over which we minimize. The idea behind this method is to
modify the Laplace operator by introducing phase-field dependent coefficients
in order to extend the eigenvalue problem on a fixed design domain containing
all admissible shapes. The resulting shape and topology optimization problem
can then be formulated as an optimal control problem with PDE constraints in
which the phase-field function acts as the control. For this optimal control
problem, we establish first-order necessary optimality conditions and we
rigorously derive its sharp interface limit. Eventually, we present and discuss
several numerical simulations for our optimization problem.
|
Numerous early warning systems based on rainfall measurements have been
designed over the last decades to forecast the onset of rainfall-induced
shallow landslides. However, their use over large areas poses challenges due to
uncertainties related with the interaction among various controlling factors.
We propose a hybrid stochastic-mechanical approach to quantify the role of the
hydro-mechanical factors influencing slope stability and rank their importance.
The proposed methodology relies on a physically-based model of landslide
triggering, and a stochastic approach treating selected model parameters as
correlated aleatory variables. The features of the methodology are illustrated
by referencing data for Campania, an Italian region characterized by
landslide-prone volcanic deposits. Synthetic intensity-duration (ID) thresholds
are computed through Monte Carlo simulations. Several key variables are treated
as aleatoric, constraining their statistical properties through available
measurements. The variabilities of topographic features (e.g., slope angle),
physical and hydrological properties (e.g., porosity, dry unit weight
${\gamma}_d$, and saturated hydraulic conductivity, $K_s$), and pre-rainstorm
suction are evaluated to inspect their role in the resulting scatter of ID
thresholds. We find that: i) $K_s$ is most significant for high-intensity
storms; ii) in steep slopes, changes in pressure head greatly reduce the
timescale of landslide triggering, making the system heavily reliant on initial
conditions; iii) for events occurring at long failure times (gentle slopes
and/or low intensity storms), the significance of the evolving stress level
(through ${\gamma}_d$) is highest. The proposed approach can be translated to
other regions, expanded to encompass new aleatory variables, and combined with
other hydro-mechanical triggering models.
|
The design and performance of the inner detector trigger for the high level
trigger of the ATLAS experiment at the Large Hadron Collider during the 2016-18
data taking period is discussed. In 2016, 2017, and 2018 the ATLAS detector
recorded 35.6 fb$^{-1}$, 46.9 fb$^{-1}$, and 60.6 fb$^{-1}$ respectively of
proton-proton collision data at a centre-of-mass energy of 13 TeV. In order to
deal with the very high interaction multiplicities per bunch crossing expected
with the 13 TeV collisions the inner detector trigger was redesigned during the
long shutdown of the Large Hadron Collider from 2013 until 2015. An overview of
these developments is provided and the performance of the tracking in the
trigger for the muon, electron, tau and $b$-jet signatures is discussed. The
high performance of the inner detector trigger with these extreme interaction
multiplicities demonstrates how the inner detector tracking continues to lie at
the heart of the trigger performance and is essential in enabling the ATLAS
physics programme.
|
Two-dimensional (2D) semiconductors are promising candidates for scaled
transistors because they are immune to mobility degradation at the monolayer
limit. However, sub-10 nm scaling of 2D semiconductors, such as MoS2, is
limited by the contact resistance. In this work, we show for the first time a
statistical study of Au contacts to chemical vapor deposited monolayer MoS2
using transmission line model (TLM) structures, before and after dielectric
encapsulation. We report contact resistance values as low as 330 ohm-um, which
is the lowest value reported to date. We further study the effect of Al2O3
encapsulation on variability in contact resistance and other device metrics.
Finally, we note some deviations in the TLM model for short-channel devices in
the back-gated configuration and discuss possible modifications to improve the
model accuracy.
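For reference, a TLM extraction reduces to a linear fit of width-normalised total resistance against channel length, R_total * W = R_sheet * L + 2 * R_c; the numbers below are made-up placeholders, not the paper's data.

    import numpy as np

    L_um = np.array([0.1, 0.2, 0.5, 1.0, 2.0])                 # channel lengths (um)
    RW_ohm_um = np.array([1.2e3, 1.5e3, 2.4e3, 4.0e3, 7.1e3])  # R_total * W (ohm-um)

    slope, intercept = np.polyfit(L_um, RW_ohm_um, 1)
    R_sheet = slope            # sheet resistance (ohm per square)
    R_contact = intercept / 2  # contact resistance per contact (ohm-um)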
|
In an unpublished work of Fasel-Rao-Swan the notion of the relative Witt
group $W_E(R,I)$ is defined. In this article we will give the details of this
construction. Then we study the injectivity of the relative Vaserstein symbol
$V_{R,I}: Um_3(R,I)/E_3(R,I)\rightarrow W_E(R,I)$. We establish injectivity
of this symbol if $R$ is an affine non-singular algebra of dimension $3$ over a
perfect $C_1$-field and $I$ is a local complete intersection ideal of $R$. It
is believed that for a $3$-dimensional affine algebra non-singularity is not
necessary for establishing injectivity of the Vaserstein symbol. At the end of
the article we will give an example of a singular $3$-dimensional algebra over
a perfect $C_1$-field for which the Vaserstein symbol is injective.
|
Using a variant of Caccioppoli's inequality involving small weights, i.e.
weights of the form $(1+|\nabla u|^2)^{-\alpha/2}$ for some $\alpha > 0$, we
establish several Liouville-type theorems under general non-standard growth
conditions.
|
As the COVID-19 virus spread over the world, governments restricted mobility
to slow transmission. Public health measures had different intensities across
European countries, but all had a significant impact on people's daily lives and
economic activities, causing a drop of CO2 emissions of about 10% for the whole
year 2020. Here, we analyze changes in natural gas use in the industry and
built environment sectors during the first half of year 2020 with daily gas
flows data from pipeline and storage facilities in Europe. We find that
reductions of industrial gas use reflect decreases in industrial production
across most countries. Surprisingly, natural gas use in buildings also
decreased despite most people being confined at home and cold spells in March
2020. Those reductions that we attribute to the impacts of COVID-19 remain of
comparable magnitude to previous variations induced by cold or warm climate
anomalies in the cold season. We conclude that climate variations played a
larger role than COVID-19 induced stay-home orders in natural gas consumption
across Europe.
|
We demonstrate the stability of the ferromagnetic order of a one-unit-cell-thick
optimally doped manganite (La0.7Ba0.3MnO3, LBMO) epitaxially grown between two
layers of SrRuO3 (SRO) by using x-ray magnetic circular dichroism. At low
temperature LBMO shows an inverted hysteresis loop due to the strong
antiferromagnetic coupling to SRO. Moreover, above the SRO TC the manganite still
exhibits magnetic remanence. Density Functional Theory calculations show that
coherent interfaces of LBMO with SRO hinder electronic confinement and the
strong magnetic coupling enables the increase of the LBMO TC. From the
structural point of view, interfacing with SRO enables LBMO to have octahedral
rotations similar to bulk. All these factors jointly contribute to stable
ferromagnetism up to 130 K for a one-unit-cell LBMO film.
|
We tackle the long-tailed visual recognition problem from the knowledge
distillation perspective by proposing a Distill the Virtual Examples (DiVE)
method. Specifically, by treating the predictions of a teacher model as virtual
examples, we prove that distilling from these virtual examples is equivalent to
label distribution learning under certain constraints. We show that when the
virtual example distribution becomes flatter than the original input
distribution, the under-represented tail classes will receive significant
improvements, which is crucial in long-tailed recognition. The proposed DiVE
method can explicitly tune the virtual example distribution to become flat.
Extensive experiments on three benchmark datasets, including the large-scale
iNaturalist ones, justify that the proposed DiVE method can significantly
outperform state-of-the-art methods. Furthermore, additional analyses and
experiments verify the virtual example interpretation, and demonstrate the
effectiveness of tailored designs in DiVE for long-tailed problems.
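To make the flattening idea concrete, the standard temperature-scaled distillation loss below shows one way to flatten the teacher's virtual-example distribution (raising T spreads probability mass toward tail classes); DiVE's exact tuning of the distribution may differ.

    import torch
    import torch.nn.functional as F

    def distill_loss(student_logits, teacher_logits, T=3.0):
        """KL divergence between the temperature-flattened teacher
        distribution (the 'virtual examples') and the student."""
        p_teacher = F.softmax(teacher_logits / T, dim=-1)
        log_p_student = F.log_softmax(student_logits / T, dim=-1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T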
|
We prove several statements about arithmetic hyperbolicity of certain blow-up
varieties. As a corollary we obtain multiple examples of simply connected
quasi-projective varieties that are pseudo-arithmetically hyperbolic. This
generalizes results of Corvaja and Zannier obtained in dimension 2 to arbitrary
dimension. The key input is an application of the Ru-Vojta strategy. We also
obtain analogous results for function fields and Nevanlinna theory, with the
goal of applying them in a future paper in the context of Campana's conjectures.
|
We show how one may classify all semisimple algebras containing the
$\mathfrak{su}(3)\oplus \mathfrak{su}(2) \oplus \mathfrak{u}(1)$ symmetry of
the Standard Model and acting on some given matter sector, enabling theories
beyond the Standard Model with unification (partial or total) of symmetries
(gauge or global) to be catalogued. With just a single generation of Standard
Model fermions plus a singlet neutrino, the only {gauge} symmetries correspond
to the well-known algebras $\mathfrak{su}(5),\mathfrak{so}(10),$ and
$\mathfrak{su}(4)\oplus \mathfrak{su}(2) \oplus \mathfrak{su}(2)$, but with two
or more generations a limited number of exotic symmetries mixing flavour,
colour, and electroweak degrees of freedom become possible. We provide a
complete catalogue in the case of 3 generations or fewer and outline how our
method generalizes to cases with additional matter.
|
With the rapid development of Internet of Things (IoT), massive machine-type
communication has become a promising application scenario, where a large number
of devices transmit sporadically to a base station (BS). Reconfigurable
intelligent surface (RIS) has been recently proposed as an innovative new
technology to achieve energy efficiency and coverage enhancement by
establishing favorable signal propagation environments, thereby improving data
transmission in massive connectivity. Nevertheless, the BS needs to detect
active devices and estimate channels to support data transmission in
RIS-assisted massive access systems, which yields unique challenges. This paper
considers an RIS-assisted uplink IoT network and aims to solve the
RIS-related activity detection and channel estimation problem, where the BS
detects the active devices and estimates the separated channels of the
RIS-to-device link and the RIS-to-BS link. Due to limited scattering between
the RIS and the BS, we model the RIS-to-BS channel as a sparse channel. As a
result, by simultaneously exploiting both the sparsity of sporadic transmission
in massive connectivity and the RIS-to-BS channels, we formulate the
RIS-related activity detection and channel estimation problem as a sparse
matrix factorization problem. Furthermore, we develop an approximate message
passing (AMP) based algorithm to solve the problem based on Bayesian inference
framework and reduce the computational complexity by approximating the
algorithm with the central limit theorem and Taylor series arguments. Finally,
extensive numerical experiments are conducted to verify the effectiveness and
improvements of the proposed algorithm.
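For orientation, the basic AMP iteration for linear sparse recovery is sketched below (soft thresholding plus the Onsager correction term); the paper's AMP for the bilinear matrix-factorization problem is a more involved variant of the same idea, and the threshold rule here is a common heuristic of our choosing.

    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def amp(A, y, iters=30, alpha=1.0):
        """Approximate message passing for y = A x with sparse x."""
        m, n = A.shape
        x, z = np.zeros(n), y.copy()
        for _ in range(iters):
            tau = alpha * np.linalg.norm(z) / np.sqrt(m)  # residual-based threshold
            x_new = soft(x + A.T @ z, tau)
            onsager = z * (np.count_nonzero(x_new) / m)   # Onsager correction
            z = y - A @ x_new + onsager
            x = x_new
        return x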
|
Time series forecasting is a relevant task that is performed in several
real-world scenarios such as product sales analysis and prediction of energy
demand. Given their accuracy performance, currently, Recurrent Neural Networks
(RNNs) are the models of choice for this task. Despite their success in time
series forecasting, less attention has been paid to making RNNs trustworthy.
For example, RNNs cannot naturally provide an uncertainty measure for their
predictions. This could be extremely useful in practice in several cases, e.g.,
to detect when a prediction might be completely wrong due to an unusual pattern
in the time series. Whittle Sum-Product Networks (WSPNs), prominent deep
tractable probabilistic circuits (PCs) for time series, can assist an RNN with
providing meaningful probabilities as uncertainty measure. With this aim, we
propose RECOWN, a novel architecture that employs RNNs and a discriminant
variant of WSPNs called Conditional WSPNs (CWSPNs). We also formulate a
Log-Likelihood Ratio Score as a better estimation of uncertainty that is tailored
to time series and Whittle likelihoods. In our experiments, we show that
RECOWNs are accurate and trustworthy time series predictors, able to "know when
they do not know".
|
We present a new model-based interpolation procedure for satisfiability
modulo theories (SMT). The procedure uses a new mode of interaction with the
SMT solver that we call solving modulo a model. This either extends a given
partial model into a full model for a set of assertions or returns an
explanation (a model interpolant) when no solution exists. This mode of
interaction fits well into the model-constructing satisfiability (MCSAT)
framework of SMT. We use it to develop an interpolation procedure for any
MCSAT-supported theory. In particular, this method leads to an effective
interpolation procedure for nonlinear real arithmetic. We evaluate the new
procedure by integrating it into a model checker and comparing it with
state-of-art model-checking tools for nonlinear arithmetic.
|
The Cox construction presents a toric variety as a quotient of affine space
by a torus. The category of coherent sheaves on the corresponding stack thus
has an evident description as invariants in a quotient of the category of
modules over a polynomial ring. Here we give the mirror to this description,
and in particular, a clean new proof of mirror symmetry for smooth toric
stacks.
|
This literature review focuses on the use of Natural Language Generation
(NLG) to automatically detect and generate persuasive texts. Extending previous
research on automatic identification of persuasion in text, we concentrate on
generative aspects through conceptualizing determinants of persuasion in five
business-focused categories: benevolence, linguistic appropriacy, logical
argumentation, trustworthiness, tools and datasets. These allow NLG to increase
an existing message's persuasiveness. Previous research illustrates key aspects
in each of the above-mentioned five categories. A research agenda to further
study persuasive NLG is developed. The review includes analysis of
seventy-seven articles, outlining the existing body of knowledge and showing
the steady progress in this research field.
|
If one proposes to use the theory of Eisenstein cohomology to prove
algebraicity results for the special values of automorphic L-functions as in my
work with Harder for Rankin-Selberg L-functions, or its generalizations as in
my work with Bhagwat for L-functions for orthogonal groups and independently
with Krishnamurthy on Asai L-functions, then in a key step, one needs to prove
that the normalised standard intertwining operator between induced
representations for p-adic groups has a certain arithmetic property. The
principal aim of this article is to address this particular local problem in
the generality of the Langlands-Shahidi machinery. The main result of this
article is invoked in some of the works mentioned above, and I expect that it
will be useful in future investigations on the arithmetic properties of
automorphic L-functions.
|
In 2007, Grytczuk conjectured that for any sequence $(\ell_i)_{i\ge1}$ of
alphabets of size $3$ there exists a square-free infinite word $w$ such that
for all $i$, the $i$-th letter of $w$ belongs to $\ell_i$. The result of Thue
of 1906 implies that there is an infinite square-free word if all the $\ell_i$
are identical. On the other hand, Grytczuk, Przyby{\l}o and Zhu showed in 2011
that it also holds if the $\ell_i$ are of size $4$ instead of $3$.
In this article, we first show that if the lists are of size $4$, the number
of square-free words is at least $2.45^n$ (the previous similar bound was
$2^n$). We then show our main result: we can construct such a square-free word
if the lists are subsets of size $3$ of the same alphabet of size $4$. Our
proof also implies that there are at least $1.25^n$ square-free words of length
$n$ for any such list assignment. This proof relies on the existence of a set
of coefficients verified with a computer. We suspect that the full conjecture
could be resolved by this method with a much more powerful computer (but we
might need to wait a few decades for such a computer to be available).
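A brute-force square-freeness check, useful for sanity-testing words produced by such constructions; this is our own illustration, not the computer verification of coefficients used in the proof.

    def is_square_free(word):
        """True iff no factor of `word` is a square xx with x non-empty."""
        n = len(word)
        for length in range(1, n // 2 + 1):
            for start in range(n - 2 * length + 1):
                if word[start:start + length] == word[start + length:start + 2 * length]:
                    return False
        return True

    assert is_square_free("abcacb")
    assert not is_square_free("abcbcd")   # contains the square "bcbc"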
|
In recent years, Quantum Computing has witnessed massive improvements both in
terms of resource availability and algorithm development. The ability to
harness quantum phenomena to solve computational problems is a long-standing
dream that has drawn the scientific community's interest since the late '80s.
It is in this context that we place our contribution. First, we introduce basic concepts
related to quantum computations, and then we explain the core functionalities
of technologies that implement the Gate Model and Adiabatic Quantum Computing
paradigms. Finally, we gather, compare and analyze the current state-of-the-art
concerning Quantum Perceptron and Quantum Neural Network implementations.
|