While synthetic bilingual corpora have demonstrated their effectiveness in
low-resource neural machine translation (NMT), adding more synthetic data often
deteriorates translation performance. In this work, we propose alternated
training with synthetic and authentic data for NMT. The basic idea is to
alternate synthetic and authentic corpora iteratively during training. Compared
with previous work, we introduce authentic data as guidance to prevent the
training of NMT models from being disturbed by noisy synthetic data.
Experiments on Chinese-English and German-English translation tasks show that
our approach improves the performance over several strong baselines. We
visualize the BLEU landscape to further investigate the role of authentic and
synthetic data during alternated training. From the visualization, we find that
authentic data helps to direct the NMT model parameters towards points with
higher BLEU scores and leads to consistent translation performance improvement.
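A minimal sketch of the alternation schedule described above, assuming a PyTorch-style encoder-decoder `model`, an `optimizer`, two data loaders yielding (source, target) batches, and a token-level `loss_fn`; all of these names are hypothetical placeholders, and the paper's actual schedule and hyperparameters are not reproduced here.

```python
def alternated_training(model, optimizer, synthetic_loader, authentic_loader,
                        loss_fn, num_rounds=10):
    """Alternate synthetic and authentic parallel corpora: in every round the model
    is first updated on the noisy synthetic data, then on authentic data, whose
    clean signal acts as guidance against noise-induced drift."""
    for _ in range(num_rounds):
        for loader in (synthetic_loader, authentic_loader):
            for src, tgt in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(src, tgt), tgt)   # standard NMT training loss
                loss.backward()
                optimizer.step()
    return model
```

The only design point illustrated is the ordering: each round ends on authentic data, so the clean corpus has the last word before the next synthetic pass.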
|
We give algorithms to compute decompositions of a given polynomial, or more
generally mixed tensor, as a sum of rank-one tensors, and to establish whether
such a decomposition is unique. In particular, we present methods to compute
the decomposition of a general plane quintic in seven powers, and of a general
space cubic in five powers; the two decompositions of a general plane sextic of
rank nine, and the five decompositions of a general plane septic. Furthermore,
we give Magma implementations of all our algorithms.
|
In order to treat the multiple time scales of ocean dynamics in an efficient
manner, the baroclinic-barotropic splitting technique has been widely used for
solving the primitive equations for ocean modeling. Based on the framework of
the strong stability-preserving Runge-Kutta approach, we propose two high-order
multirate explicit time-stepping schemes (SSPRK2-SE and SSPRK3-SE) for the
resulting split system in this paper. The proposed schemes allow for a large
time step to be used for the three-dimensional baroclinic (slow) mode and a
small time step for the two-dimensional barotropic (fast) mode, where each
of the two mode solves needs only to satisfy its respective CFL condition for
numerical stability. Specifically, at each time step, the baroclinic velocity
is first computed by advancing the baroclinic mode and fluid thickness of the
system with the large time step and the assistance of some
intermediate approximations of the barotropic mode obtained by substepping
with the small time step; then the barotropic velocity is corrected by using
the small time step to re-advance the barotropic mode under an improved
barotropic forcing produced by interpolation of the forcing terms from the
preceding baroclinic mode solves; lastly, the fluid thickness is updated by
coupling the baroclinic and barotropic velocities. Additionally, numerical
inconsistencies in the discretized sea surface height caused by the mode
splitting are alleviated via a reconciliation process with carefully calculated
flux deficits. Two benchmark tests from the "MPAS-Ocean" platform are carried
out to numerically demonstrate the performance and parallel scalability of the
proposed SSPRK-SE schemes.
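As a toy illustration of the split-step idea only (not the SSPRK2-SE/SSPRK3-SE schemes themselves), the sketch below freezes a slow tendency over a large step while substepping a fast tendency with a small step; `f_slow` and `f_fast` are assumed user-supplied right-hand sides.

```python
import numpy as np

def multirate_step(y, dt, f_slow, f_fast, substeps=20):
    """One toy multirate step: the slow tendency is evaluated once per large step,
    while the fast tendency is advanced with `substeps` small steps, each of which
    only has to satisfy the fast-mode stability condition. This is a crude
    forward-Euler stand-in, not the paper's baroclinic-barotropic schemes."""
    slow_tendency = f_slow(y)          # frozen over the large time step
    h = dt / substeps
    for _ in range(substeps):
        y = y + h * (slow_tendency + f_fast(y))
    return y

# Toy usage: a slow linear drift coupled to a fast relaxation term. The fast term
# would be unstable at the large step dt = 0.1, but is stable at dt / substeps.
y = np.array([1.0])
for _ in range(100):
    y = multirate_step(y, dt=0.1,
                       f_slow=lambda u: 0.05 * np.ones_like(u),
                       f_fast=lambda u: -40.0 * u)
print(y)
```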
|
The performance of neural network models is often limited by the availability
of big data sets. To address this problem, we survey and develop novel synthetic
data generation and augmentation techniques for enhancing low/zero-sample
learning in satellite imagery. In addition to extending synthetic data
generation approaches, we propose a hierarchical detection approach to improve
the utility of synthetic training samples. We consider existing techniques for
producing synthetic imagery--3D models and neural style transfer--as well as
introducing our own adversarially trained reskinning network, the
GAN-Reskinner, to blend 3D models. Additionally, we test the value of synthetic
data in a two-stage, hierarchical detection/classification model of our own
construction. To test the effectiveness of synthetic imagery, we employ it in
the training of detection models and our two-stage model, and evaluate the
resulting models on real satellite images. All modalities of synthetic data are
tested extensively on practical, geospatial analysis problems. Our experiments
show that synthetic data developed using our approach can often enhance
detection performance, particularly when combined with some real training
images. When the only source of data is synthetic, our GAN-Reskinner often
boosts performance over conventionally rendered 3D models and in all cases the
hierarchical model outperforms the baseline end-to-end detection architecture.
|
The introduction of transient learning degrees of freedom into a system can
lead to novel material design and training protocols that guide a system into a
desired metastable state. In this approach, some degrees of freedom, which were
not initially included in the system dynamics, are first introduced and
subsequently removed from the energy minimization process once the desired
state is reached. Using this conceptual framework, we create stable jammed
packings that exist in exceptionally deep energy minima marked by the absence
of low-frequency quasilocalized modes; this added stability persists in the
thermodynamic limit. The inclusion of particle radii as transient degrees of
freedom leads to deeper and much more stable minima than does the inclusion of
particle stiffnesses. This is because particle radii couple to the jamming
transition whereas stiffnesses do not. Thus different choices for the added
degrees of freedom can lead to very different training outcomes.
|
A Non-Binary Snow Index for Multi-Component Surfaces (NBSI-MS) is proposed to
map snow/ice cover. The NBSI-MS is based on the spectral characteristics of
different Land Cover Types (LCTs) such as snow, water, vegetation, bare land,
impervious, and shadow surfaces. This index increases the separability between
the NBSI-MS values of snow and those of other LCTs and accurately delineates
the snow/ice cover in non-binary maps. To test the robustness of the NBSI-MS,
regions of Greenland and the France-Italy border were examined, where snow
interacts with highly diversified geographical ecosystems. Data recorded by Landsat 5 TM,
Landsat 8 OLI, and Sentinel-2A MSI satellites have been used. The NBSI-MS
performance was also compared against the well-known NDSI, NDSII-1, S3, and SWI
methods and evaluated based on Ground Reference Test Pixels (GRTPs) over
non-binarized results. The results show that the NBSI-MS achieves overall
accuracy (OA) ranging from 0.99 to 1 with kappa coefficient values in the same
range as OA. The precision assessment confirms that the proposed NBSI-MS
outperforms the compared relevant indices in removing water and shadow
surfaces.
|
Xova is a software package that implements baseline-dependent time and
channel averaging on Measurement Set data. The uv-samples along a baseline
track are aggregated into a bin until a specified decorrelation tolerance is
exceeded. The degree of decorrelation in the bin correspondingly determines the
amount of channel and timeslot averaging that is suitable for samples in the
bin. This necessarily implies that the number of channels and timeslots varies
per bin, and the output data loses the rectilinear shape of the input data.
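The binning rule can be sketched as a greedy scan over timeslots; the `decorrelation` estimator below is an assumed placeholder and not Xova's actual formula.

```python
def bin_timeslots(times, decorrelation, tolerance=0.02):
    """Greedy baseline-dependent binning in the spirit of Xova: grow a bin of
    uv-samples along a baseline track until the estimated decorrelation exceeds
    the tolerance, then start a new bin. Each bin may therefore average a
    different number of timeslots (channels are handled analogously), so the
    output is ragged rather than rectilinear.

    `decorrelation(t_start, t_end)` is an assumed user-supplied estimate of the
    decorrelation accrued over a candidate bin. Returns half-open (start, end)
    index ranges into `times`.
    """
    bins, start = [], 0
    for end in range(1, len(times) + 1):
        if end - start > 1 and decorrelation(times[start], times[end - 1]) > tolerance:
            bins.append((start, end - 1))   # close the bin just before tolerance is exceeded
            start = end - 1
    if start < len(times):
        bins.append((start, len(times)))
    return bins

# Example: pretend decorrelation grows linearly with bin duration.
print(bin_timeslots(times=list(range(10)),
                    decorrelation=lambda t0, t1: 0.01 * (t1 - t0)))
```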
|
We consider the combination of uplink code-domain non-orthogonal multiple
access (NOMA) with massive multiple-input multiple-output (MIMO) and
reconfigurable intelligent surfaces (RISs). We assume a setup in which the base
station (BS) is capable of forming beams towards the RISs under line-of-sight
conditions, and where each RIS is covering a cluster of users. In order to
support multi-user transmissions within a cluster, code-domain NOMA via
spreading is utilized. We investigate the optimization of the RIS phase-shifts
such that a large number of users is supported. As it turns out, it is a
coupled optimization problem that depends on the detection order under
interference cancellation and the applied filtering at the BS. We propose to
decouple those variables by using sum-rate optimized phase-shifts as the
initial solution, allowing us to obtain a decoupled estimate of those
variables. Then, in order to determine the final phase-shifts, the problem is
relaxed into a semidefinite program that can be solved efficiently via convex
optimization algorithms. Simulation results show the effectiveness of our
approach in improving the detectability of the users.
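For reference, the final relaxation step follows the generic semidefinite-relaxation (SDR) pattern for unit-modulus phase-shift design; the sketch below uses an assumed Hermitian matrix `R` as a stand-in for the paper's objective and recovers phases by Gaussian randomization. It is not the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

def sdr_phase_shifts(R, num_randomizations=200, seed=0):
    """Relax max_{|v_n| = 1} v^H R v into an SDP over V = v v^H (drop rank(V) = 1),
    then recover unit-modulus phase-shifts by Gaussian randomization.
    R is an assumed Hermitian matrix built from the channels and the decoupled
    detection/filtering estimates."""
    n = R.shape[0]
    V = cp.Variable((n, n), hermitian=True)
    problem = cp.Problem(cp.Maximize(cp.real(cp.trace(R @ V))),
                         [V >> 0, cp.diag(V) == 1])
    problem.solve()

    rng = np.random.default_rng(seed)
    Vv = (V.value + V.value.conj().T) / 2                 # symmetrize numerically
    L = np.linalg.cholesky(Vv + 1e-8 * np.eye(n))         # factor for sampling ~ CN(0, V)
    best_v, best_val = None, -np.inf
    for _ in range(num_randomizations):
        w = L @ (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        v = np.exp(1j * np.angle(w))                      # project onto unit modulus
        val = np.real(v.conj() @ R @ v)
        if val > best_val:
            best_v, best_val = v, val
    return np.angle(best_v)
```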
|
The development of thermal energy storage materials is the most attractive
strategy to harvest solar energy and increase energy utilization
efficiency. Phase change materials (PCMs) have received much attention in this
research field for several decades. Herein, we report a new kind of PCM
micro-topological structure and design direction: ultra-flexible, form-stable
and smart PCMs based on polyrotaxane. The structure of the polyrotaxane was fully
confirmed by 1H nuclear magnetic resonance, attenuated total reflection Fourier
transform infrared spectroscopy, and X-ray diffraction. The tensile properties,
thermal stability in air, phase change energy storage and shape memory
properties of the films were then systematically analyzed. The results showed
that the mechanical performance, thermal stability in air and shape memory
properties of the polyrotaxanes were all enhanced significantly compared to
those of polyethylene oxide (PEO). The form stability at temperatures above the
melting point of PEO increased significantly with the α-CD addition. Together
with their high phase-transition enthalpy and excellent cycling performance,
the polyrotaxane films are therefore promising sustainable and advanced
form-stable phase change materials for thermal energy storage. Notably, their
ultra-high flexibility, remolding ability and excellent shape memory properties
provide a convenient route to intelligent heat-treatment packaging of complex
and flexible electronic devices. In addition, this work offers a novel insight
into polyrotaxane applications and a new design method for form-stable PCMs.
|
The scaleup of quantum computers operating in the microwave domain requires
advanced control electronics, and the use of integrated components that operate
at the temperature of the quantum devices is potentially beneficial. However,
such an approach requires ultra-low power dissipation and high signal quality
in order to ensure quantum-coherent operations. Here, we report an on-chip
device that is based on a Josephson junction coupled to a spiral resonator and
is capable of coherent continuous-wave microwave emission. We show that
characteristics of the device accurately follow a theory based on a
perturbative treatment of the capacitively shunted Josephson junction as a gain
element. The infidelity of typical quantum gate operations due to the phase
noise of this cryogenic 25-pW microwave source is less than 0.1% up to 10-ms
evolution times, which is below the infidelity caused by dephasing in
state-of-the-art superconducting qubits. Together with future cryogenic
amplitude and phase modulation techniques, our approach may lead to scalable
cryogenic control systems for quantum processors.
|
We study space-pass tradeoffs in graph streaming algorithms for parameter
estimation and property testing problems such as estimating the size of maximum
matchings and maximum cuts, weight of minimum spanning trees, or testing if a
graph is connected or cycle-free versus being far from these properties. We
develop a new lower bound technique that proves that for many problems of
interest, including all the above, obtaining a $(1+\epsilon)$-approximation
requires either $n^{\Omega(1)}$ space or $\Omega(1/\epsilon)$ passes, even on
highly restricted families of graphs such as bounded-degree planar graphs. For
several of these problems, this bound matches those of existing algorithms and
is thus (asymptotically) optimal.
Our results considerably strengthen prior lower bounds even for arbitrary
graphs: starting from the influential work of [Verbin, Yu; SODA 2011], there
has been a plethora of lower bounds for single-pass algorithms for these
problems; however, the only multi-pass lower bound, proven very recently in
[Assadi, Kol, Saxena, Yu; FOCS 2020], rules out sublinear-space algorithms with
exponentially smaller $o(\log{(1/\epsilon)})$ passes for these problems.
One key ingredient of our proofs is a simple streaming XOR Lemma, a generic
hardness amplification result, that we prove: informally speaking, if a
$p$-pass $s$-space streaming algorithm can only solve a decision problem with
advantage $\delta > 0$ over random guessing, then it cannot solve XOR of $\ell$
independent copies of the problem with advantage much better than
$\delta^{\ell}$. This result can be of independent interest and useful for
other streaming lower bounds as well.
|
We propose a novel algorithm for multi-player multi-armed bandits without
collision sensing information. Our algorithm circumvents two problems shared by
all state-of-the-art algorithms: it does not need as an input a lower bound on
the minimal expected reward of an arm, and its performance does not scale
inversely proportionally to the minimal expected reward. We prove a theoretical
regret upper bound to justify these claims. We complement our theoretical
results with numerical experiments, showing that the proposed algorithm
outperforms the state of the art in practice as well.
|
We propose three-dimensional plasmonic nonvolatile memory crossbar
arrays that can ensure dual-mode operation in the electrical and optical
domains. This can be realized through plasmonics, which serves as a bridge
between photonics and electronics, since the metal electrode is part of the
waveguide. The proposed arrangement is based on a low-loss long-range
dielectric-loaded surface plasmon polariton waveguide in which a metal stripe
is placed between a buffer layer and a ridge. To achieve dual-mode operation,
materials were identified that can provide both electrical and optical
modulation functionality.
|
Harnessing the potential wide-ranging quantum science applications of
molecules will require control of their interactions. Here, we use microwave
radiation to directly engineer and tune the interaction potentials between
ultracold calcium monofluoride (CaF) molecules. By merging two optical
tweezers, each containing a single molecule, we probe collisions in three
dimensions. The correct combination of microwave frequency and power creates an
effective repulsive shield, which suppresses the inelastic loss rate by a
factor of six, in agreement with theoretical calculations. The demonstrated
microwave shielding shows a general route to the creation of long-lived, dense
samples of ultracold molecules and evaporative cooling.
|
We provide an explicit characterization of the covariant isotropy group of
any Grothendieck topos, i.e. the group of (extended) inner automorphisms of any
sheaf over a small site. As a consequence, we obtain an explicit
characterization of the centre of a Grothendieck topos, i.e. the automorphism
group of its identity functor.
|
One of the fundamental tasks in graph data mining is to find a planted
community (dense subgraph), which has wide applications in biology, finance,
spam detection and so on. For real network data, the existence of a dense
subgraph is generally unknown. Statistical tests have been devised to test the
existence of a dense subgraph in a homogeneous random graph. However, many
networks exhibit extreme heterogeneity, that is, the degrees of nodes or
vertices do not concentrate around a typical value. The existing tests designed
for homogeneous random graphs are not straightforwardly applicable to the
heterogeneous case. Recently, a scan test was proposed for detecting a dense
subgraph in a heterogeneous (inhomogeneous) graph (\cite{BCHV19}). However, the
computational complexity of the scan test is generally not polynomial in the
graph size, which makes the test impractical for large or moderate networks. In
this paper, we propose a polynomial-time test that has the standard normal
distribution as its null limiting distribution. The power of the test is
investigated theoretically, and we evaluate the performance of the test by
simulations and a real data example.
|
Domain experts often need to extract structured information from large
corpora. We advocate for a search paradigm called ``extractive search'', in
which a search query is enriched with capture-slots, to allow for such rapid
extraction. Such an extractive search system can be built around syntactic
structures, resulting in high-precision, low-recall results. We show how the
recall can be improved using neural retrieval and alignment. The goals of this
paper are to concisely introduce the extractive-search paradigm; and to
demonstrate a prototype neural retrieval system for extractive search and its
benefits and potential. Our prototype is available at
\url{https://spike.neural-sim.apps.allenai.org/} and a video demonstration is
available at \url{https://vimeo.com/559586687}.
|
We study the Symmetric Rendezvous Search Problem for a multi-robot system.
There are $n>2$ robots arbitrarily located on a line. Their goal is to meet
somewhere on the line as quickly as possible. The robots do not know the
initial location of any of the other robots or their own positions on the line.
The symmetric version of the problem requires the robots to execute the same
search strategy to achieve rendezvous. Therefore, we solve the problem in an
online fashion with a randomized strategy. In this paper, we present a
symmetric rendezvous algorithm which achieves a constant competitive ratio for
the total distance traveled by the robots. We validate our theoretical results
through simulations.
|
External knowledge (a.k.a. side information) plays a critical role in
zero-shot learning (ZSL), which aims to make predictions for unseen classes
that have never appeared in the training data. Several kinds of external
knowledge, such as text and attributes, have been widely investigated, but on
their own they are limited by incomplete semantics. Some very recent studies
thus propose to use Knowledge Graphs (KGs) due to their high expressivity and
suitability for representing many kinds of knowledge. However, the ZSL
community is still short of standard benchmarks for studying and comparing
different external knowledge settings and different KG-based ZSL methods. In
this paper, we propose six
resources covering three tasks, i.e., zero-shot image classification (ZS-IMGC),
zero-shot relation extraction (ZS-RE), and zero-shot KG completion (ZS-KGC).
Each resource has a normal ZSL benchmark and a KG containing semantics ranging
from text to attribute, from relational knowledge to logical expressions. We
present these resources in detail, including their construction,
statistics, data formats and usage cases w.r.t. different ZSL methods. More
importantly, we have conducted a comprehensive benchmarking study with two
general and state-of-the-art methods, two setting-specific methods and one
interpretable method. We discuss and compare different ZSL paradigms w.r.t.
different external knowledge settings, and find that our resources have great
potential for developing more advanced ZSL methods and more solutions for
applying KGs to augment machine learning. All the resources are available
at https://github.com/China-UK-ZSL/Resources_for_KZSL.
|
We define a model of interactive communication where two agents with private
types can exchange information before a game is played. The model contains
Bayesian persuasion as a special case of a one-round communication protocol. We
define message complexity corresponding to the minimum number of interactive
rounds necessary to achieve the best possible outcome. Our main result is that
for bilateral trade, agents don't stop talking until they reach an efficient
outcome: Either agents achieve an efficient allocation in finitely many rounds
of communication; or the optimal communication protocol has an infinite number
of rounds. We exhibit an important class of bilateral trade settings where an efficient
allocation is achievable with a small number of rounds of communication.
|
The Earth's N2-dominated atmosphere is a very special feature. Firstly, N2 as
the main gas is unique among the terrestrial planets in the inner solar system
and hints at tectonic activity. Studying the origins of atmospheric
nitrogen and its stability provides insights into the uniqueness of the Earth's
habitat. Secondly, the coexistence of N2 and O2 within an atmosphere is
unequaled in the entire solar system. Such a combination is strongly linked to
the existence of aerobic lifeforms. The availability of nitrogen on the
surface, in the ocean, and within the atmosphere can enable or prevent the
habitability of a terrestrial planet, since nitrogen is vitally required by all
known lifeforms. In the present work, the different origins of atmospheric
nitrogen, the stability of nitrogen dominated atmospheres, and the development
of early Earth's atmospheric N2 are discussed. We show why N2-O2 atmospheres
constitute a biomarker not merely for life in general but for aerobic lifeforms,
whose emergence was the first major step towards more highly developed life on Earth.
|
Risk capital allocations (RCAs) are an important tool in quantitative risk
management, where they are utilized to, e.g., gauge the profitability of
distinct business units, determine the price of a new product, and conduct the
marginal economic capital analysis. Nevertheless, the notion of RCA has been
living in the shadow of another, closely related notion, of risk measure (RM)
in the sense that the latter notion often shapes the fashion in which the
former notion is implemented. In fact, as the majority of the RCAs known
nowadays are induced by RMs, the popularity of the two is apparently very much
correlated. As a result, it is the RCA induced by the Conditional Tail
Expectation (CTE) RM that has arguably prevailed in scholarly literature and
applications. Admittedly, the CTE RM is a sound mathematical object and an
important regulatory RM, but its appropriateness is controversial in, e.g.,
profitability analysis and pricing. In this paper, we address the question as
to whether or not the RCA induced by the CTE RM may concur with alternatives
that arise from the context of profit maximization. More specifically, we
provide an exhaustive description of all those probabilistic model settings, in
which the mathematical and regulatory CTE RM may also reflect the risk
perception of a profit-maximizing insurer.
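For concreteness, the standard objects referenced above, under the usual assumption of a continuous aggregate loss $S=\sum_i X_i$ so that the conditioning event is unambiguous, are the CTE risk measure and the RCA it induces:
\[
\mathrm{CTE}_q[S] \,=\, \mathbb{E}\left[S \mid S > \mathrm{VaR}_q(S)\right],
\qquad
\mathrm{RCA}_q(X_i;S) \,=\, \mathbb{E}\left[X_i \mid S > \mathrm{VaR}_q(S)\right],
\]
where $\mathrm{VaR}_q(S)$ denotes the $q$-quantile of $S$; summing the allocations over the business units recovers the aggregate capital, $\sum_i \mathrm{RCA}_q(X_i;S)=\mathrm{CTE}_q[S]$.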
|
According to Skolem's conjecture, if an exponential Diophantine equation is
not solvable, then it is not solvable modulo an appropriately chosen modulus.
Besides several concrete equations, the conjecture has only been proved for
rather special cases. In this paper we prove the conjecture for equations of
the form $x^n-by_1^{k_1}\dots y_\ell^{k_\ell}=\pm 1$, where
$b,x,y_1,\dots,y_\ell$ are fixed integers and $n,k_1,\dots,k_\ell$ are
non-negative integral unknowns. This result extends a recent theorem of Hajdu
and Tijdeman.
|
The COVID-19 pandemic has impacted lives and economies across the globe,
leading to many deaths. While vaccination is an important intervention, its
roll-out is slow and unequal across the globe. Therefore, extensive testing
still remains one of the key methods to monitor and contain the virus. Testing
on a large scale is expensive and arduous. Hence, we need alternate methods to
estimate the number of cases. Online surveys have been shown to be an effective
method for data collection amidst the pandemic. In this work, we develop
machine learning models to estimate the prevalence of COVID-19 using
self-reported symptoms. Our best model predicts the daily cases with a mean
absolute error (MAE) of 226.30 (normalized MAE of 27.09%) per state, which
demonstrates the possibility of predicting the actual number of confirmed cases
by utilizing self-reported symptoms. The models are developed at two levels of
data granularity - local models, which are trained at the state level, and a
single global model which is trained on the combined data aggregated across all
states. Our results indicate a lower error on the local models as opposed to
the global model. In addition, we also show that the most important symptoms
(features) vary considerably from state to state. This work demonstrates that
the models developed on crowd-sourced data, curated via online platforms, can
complement the existing epidemiological surveillance infrastructure in a
cost-effective manner. The code is publicly available at
https://github.com/parthpatwa/Can-Self-Reported-Symptoms-Predict-Daily-COVID-19-Cases.
|
The current manuscript highlights the preparation of NiFe2O4 nanoparticles by
a sol-gel auto-combustion route. The prime focus of this study is to
investigate the impact of gamma irradiation on the microstructural,
morphological, functional, optical and magnetic characteristics. The resulting
NiFe2O4 products have been characterized with several instrumental techniques,
namely FESEM, XRD, UV-visible spectroscopy, FTIR and PPMS, for a variety of
gamma-ray doses (0 kGy, 25 kGy and 100 kGy). FESEM micrographs illustrate the
aggregation of ferrite nanoparticles in the pristine NiFe2O4 product, with an
average particle size of 168 nm, and show that the surface morphology is
altered after exposure to gamma irradiation. XRD patterns have been analyzed by
the Rietveld method, and the results of the XRD investigation reveal the
desired phases (cubic spinel phases) of NiFe2O4, with other transitional phases
also observed. Several microstructural parameters such as bond length, bond
angle, hopping length, etc. have been determined from the Rietveld analysis.
This study reports that gamma irradiation has a strong influence on the optical
bandgap energy, which varies between 1.80 and 1.89 eV as evaluated via the K-M
function. FTIR measurements provide evidence of the Ni-O and Fe-O stretching
vibrations within the respective products, indicating the successful formation
of NiFe2O4. The saturation magnetization (MS) of the pristine Ni ferrite
product is found to be 28.08 emu g-1. A considerable increase in MS is observed
for the low gamma dose (25 kGy), while a decrease is observed after the high
dose of gamma irradiation (100 kGy).
|
This paper extends the concept of informative selection, population
distribution and sample distribution to a spatial process context. These
notions were first defined in a context where the output of the random process
of interest consists of independent and identically distributed realisations
for each individual of a population. It has been shown that informative
selection induces a stochastic dependence among the realisations on the
selected units. In the context of spatial processes, the "population" is a
continuous space and realisations for two different elements of the population
are not independent. We show how informative selection may induce a different
dependence among selected units and how the sample distribution differs from
the population distribution.
|
Fueled by the call for formative assessments, diagnostic classification
models (DCMs) have recently gained popularity in psychometrics. Despite their
potential for providing diagnostic information that aids in classroom
instruction and students' learning, empirical applications of DCMs to classroom
assessments have been highly limited. This is partly because how DCMs with
different estimation methods perform in small sample contexts is not yet
well-explored. Hence, this study aims to investigate the performance of
respondent classification and item parameter estimation with a comprehensive
simulation design that resembles classroom assessments using different
estimation methods. The key findings are the following: (1) although no marked
difference in respondent classification accuracy was observed among the
maximum likelihood (ML), Bayesian, and nonparametric methods, the Bayesian
method provided slightly more accurate respondent classification in
parsimonious DCMs than the ML method, while in complex DCMs the ML method
yielded slightly better results than the Bayesian method; (2) while item
parameter recovery was poor in both Bayesian and ML methods, the Bayesian
method exhibited unstable slip values owing to the multimodality of their
posteriors under complex DCMs, and the ML method produced irregular estimates
that appear to be well-estimated due to a boundary problem under parsimonious
DCMs.
|
We study the constant term functor for $\mathbb{F}_p$-sheaves on the affine
Grassmannian in characteristic $p$ with respect to a Levi subgroup. Our main
result is that the constant term functor induces a tensor functor between
categories of equivariant perverse $\mathbb{F}_p$-sheaves. We apply this fact
to get information about the Tannakian monoids of the corresponding categories
of perverse sheaves. As a byproduct we also obtain geometric proofs of several
results due to Herzig on the mod $p$ Satake transform and the structure of the
space of mod $p$ Satake parameters.
|
This paper studies the expressive power of artificial neural networks (NNs)
with rectified linear units. To study them as a model of real-valued
computation, we introduce the concept of Max-Affine Arithmetic Programs and
show equivalence between them and NNs concerning natural complexity measures.
We then use this result to show that two fundamental combinatorial optimization
problems can be solved with polynomial-size NNs, which is equivalent to the
existence of very special strongly polynomial time algorithms. First, we show
that for any undirected graph with $n$ nodes, there is an NN of size
$\mathcal{O}(n^3)$ that takes the edge weights as input and computes the value
of a minimum spanning tree of the graph. Second, we show that for any directed
graph with $n$ nodes and $m$ arcs, there is an NN of size $\mathcal{O}(m^2n^2)$
that takes the arc capacities as input and computes a maximum flow. These
results imply in particular that the solutions of the corresponding parametric
optimization problems where all edge weights or arc capacities are free
parameters can be encoded in polynomial space and evaluated in polynomial time,
and that such an encoding is provided by an NN.
|
We present SMURF, a method for unsupervised learning of optical flow that
improves state of the art on all benchmarks by $36\%$ to $40\%$ (over the prior
best method UFlow) and even outperforms several supervised approaches such as
PWC-Net and FlowNet2. Our method integrates architecture improvements from
supervised optical flow, i.e. the RAFT model, with new ideas for unsupervised
learning that include a sequence-aware self-supervision loss, a technique for
handling out-of-frame motion, and an approach for learning effectively from
multi-frame video data while still only requiring two frames for inference.
|
This work describes and demonstrates the operation of a virtual X-ray
algorithm operating on finite-element post-processing results which allows for
higher polynomial orders in geometry representation as well as density
distribution. A nested hierarchy of oriented bounding boxes is used for
preselecting candidate elements undergoing a ray-casting procedure. The exact
intersection points of the ray with the finite element are not computed;
instead, the ray is discretized by a sequence of points. The element-local
coordinates of each discretized point are determined using a local
Newton-iteration and the resulting densities are accumulated. This procedure
results in highly accurate virtual X-ray images of finite element models.
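A toy version of the accumulation step, assuming 4-node bilinear quadrilateral elements in 2D rather than the higher-order 3D elements of the actual implementation: each ray sample is mapped to element-local coordinates by a Newton iteration, and the interpolated density is accumulated along the ray. The bounding-box hierarchy is replaced by a brute-force element search.

```python
import numpy as np

def shape_q4(xi, eta):
    """Bilinear shape functions of a 4-node quadrilateral and their derivatives."""
    N = 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                         (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])
    dN = 0.25 * np.array([[-(1 - eta), -(1 - xi)],
                          [ (1 - eta), -(1 + xi)],
                          [ (1 + eta),  (1 + xi)],
                          [-(1 + eta),  (1 - xi)]])
    return N, dN

def local_coords(point, nodes, tol=1e-10, max_iter=20):
    """Newton iteration mapping a physical point to element-local (xi, eta)."""
    xi = np.zeros(2)
    for _ in range(max_iter):
        N, dN = shape_q4(*xi)
        residual = N @ nodes - point          # x(xi) - x_target
        if np.linalg.norm(residual) < tol:
            break
        J = nodes.T @ dN                      # 2x2 Jacobian dx/dxi
        xi -= np.linalg.solve(J, residual)
    return xi

def virtual_xray(origin, direction, elements, nodal_density, ds=0.01, length=2.0):
    """Accumulate interpolated density along a discretized ray (toy stand-in)."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    total = 0.0
    for s in np.arange(0.0, length, ds):
        p = np.asarray(origin, dtype=float) + s * direction
        for nodes, rho in zip(elements, nodal_density):
            xi = local_coords(p, nodes)
            if np.all(np.abs(xi) <= 1.0 + 1e-8):      # sample lies inside this element
                N, _ = shape_q4(*xi)
                total += (N @ rho) * ds               # density times path increment
                break
    return total

# Unit-square element with unit density: a horizontal ray through it accumulates ~1.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(virtual_xray([-0.5, 0.5], [1.0, 0.0], [square], [np.ones(4)]))
```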
|
As the Data Science field continues to mature, and we collect more data, the
demand to store and analyze them will continue to increase. This increase in
data availability and demand for analytics will put a strain on data centers
and compute clusters, with implications for both energy costs and emissions. As
the world battles a climate crisis, it is prudent for organizations with data
centers to have a framework for combating increasing energy costs and emissions
to meet demand for analytics work. In this paper, I present a generalized
framework for organizations to audit data centers energy efficiency to
understand the resources required to operate a given data center and effective
steps organizations can take to improve data center efficiency and lower the
environmental impact.
|
We study the bar instability in collisionless, rotating, anisotropic, stellar
systems, using N-body simulations and also the matrix technique for calculation
of modes with the perturbed collisionless Boltzmann equation. These methods are
applied to spherical systems with an initial Plummer density distribution, but
modified kinematically in two ways: the velocity distribution is tangentially
anisotropic, using results of Dejonghe, and the system is set in rotation by
reversing the velocities of a fraction of stars in various regions of phase
space, a la Lynden-Bell. The aim of the N-body simulations is first to survey
the parameter space, and, using those results, to identify regions of phase
space (by radius and orbital inclination) which have the most important
influence on the bar instability. The matrix method is then used to identify
the resonant interactions in the system which have the greatest effect on the
growth rate of a bar. Complementary series of N-body simulations examine these
processes in relation to the evolving frequency distribution and the pattern
speed. Finally, the results are synthesised with an existing theoretical
framework, and used to consider the old question of constructing a stability
criterion.
|
Polygonal billiards are an example of pseudo-chaotic dynamics, a combination
of integrable evolution and sudden jumps due to conical singular points that
arise from the corners of the polygons. Such pseudo-chaotic behaviour, often
characterised by an algebraic separation of nearby trajectories, is believed to
be linked to the wild dependence that particle transport has on the fine
details of the billiard table. Here we address this relation through a detailed
numerical study of the statistics of displacement in a family of polygonal
channel billiards with parallel walls. We show that transport is characterised
by strong anomalous diffusion, with a mean square displacement that scales in
time faster than linear, and with a probability density of the displacement
exhibiting exponential tails and ballistic fronts. In channels of finite length
the distribution of first-passage times is characterised by fat tails, with a
mean first-passage time that diverges when the aperture angle is rational.
These findings have non-trivial consequences for a variety of experiments.
|
A set of vertices $S\subseteq V(G)$ is a basis or resolving set of a graph
$G$ if for each $x,y\in V(G)$ there is a vertex $u\in S$ such that $d(x,u)\neq
d(y,u)$. A basis $S$ is a fault-tolerant basis if $S\setminus \{x\}$ is a basis
for every $x \in S$. The fault-tolerant metric dimension (FTMD) $\beta'(G)$ of
$G$ is the minimum cardinality of a fault-tolerant basis. It is shown that each
twin vertex of $G$ belongs to every fault-tolerant basis of $G$. As a
consequence, $\beta'(G) = n(G)$ iff each vertex of $G$ is a twin vertex, which
corrects a wrong characterization of graphs $G$ with $\beta'(G) = n(G)$ from
[Mathematics 7(1) (2019) 78]. This FTMD problem is reinvestigated for Butterfly
networks, Benes networks, and silicate networks. This extends partial results
from [IEEE Access 8 (2020) 145435--145445], and at the same time, disproves
related conjectures from the same paper.
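For small graphs, the fault-tolerant condition can be checked by brute force; the sketch below (using networkx, and exponential in the graph size, unlike any structural argument about the networks above) returns $\beta'(G)$ together with one fault-tolerant basis.

```python
import itertools
import networkx as nx

def resolves(G, S, dist):
    """S resolves G if every pair of vertices is distinguished by some landmark in S."""
    for x, y in itertools.combinations(G.nodes, 2):
        if all(dist[u][x] == dist[u][y] for u in S):
            return False
    return True

def fault_tolerant_metric_dimension(G):
    """Smallest |S| such that S minus any single vertex is still a resolving set."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    n = G.number_of_nodes()
    for k in range(2, n + 1):                      # a fault-tolerant basis needs >= 2 vertices
        for S in itertools.combinations(G.nodes, k):
            if all(resolves(G, [u for u in S if u != x], dist) for x in S):
                return k, set(S)
    return n, set(G.nodes)

# Example: the 6-cycle.
print(fault_tolerant_metric_dimension(nx.cycle_graph(6)))
```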
|
Within the framework of the flux formulation of Double Field Theory (DFT) we
employ a generalised Scherk-Schwarz ansatz and discuss the classification of
the twists that in the presence of the strong constraint give rise to constant
generalised fluxes interpreted as gaugings. We analyse the various
possibilities of turning on the fluxes $H_{ijk}, F_{ij}{}^k, Q_i{}^{jk}$ and
$R^{ijk}$, and the solutions for the twists allowed in each case. While we do
not impose the DFT (or equivalently supergravity) equations of motion, our
results provide solution-generating techniques in supergravity when applied to
a background that does solve the DFT equations. At the same time, our results
give rise also to canonical transformations of 2-dimensional $\sigma$-models, a
fact which is interesting especially because these are integrability-preserving
transformations on the worldsheet. Both the solution-generating techniques of
supergravity and the canonical transformations of 2-dimensional $\sigma$-models
arise as maps that leave the generalised fluxes of DFT and their flat
derivatives invariant. These maps include the known
abelian/non-abelian/Poisson-Lie T-duality transformations, Yang-Baxter
deformations, as well as novel generalisations of them.
|
A didactical survey of the foundations of Algorithmic Information Theory.
These notes are short on motivation, history and background but introduce some
of the main techniques and concepts of the field.
The "manuscript" has been evolving over the years. Please, look at "Version
history" below to see what has changed when.
|
In multicentric calculus one takes a polynomial $p$ with distinct roots as a
new variable and represents complex valued functions by $\mathbb C^d$-valued
functions, where $d$ is the degree of $p$. An application is e.g. the
possibility to represent a piecewise constant holomorphic function as a
convergent power series, simultaneously in all components of $|p(z)| \le \rho$.
In this paper we study the necessary modifications needed, if we take a
rational function $r=p/q$ as the new variable instead. This allows one to consider
functions defined in neighborhoods of any compact set as opposed to the
polynomial case where the domains $|p(z)| \le \rho$ are always polynomially
convex. Two applications are formulated. One gives a convergent power series
expression for Sylvester equations $AX-XB=C$ in the general case of $A,B$
being bounded operators in Banach spaces with distinct spectra. The other
application formulates a K-spectral result for bounded operators in Hilbert
spaces.
|
Image-only and pseudo-LiDAR representations are commonly used for monocular
3D object detection. However, methods based on them either fail to capture well
the spatial relationships among neighboring image pixels or struggle to handle
the noisy nature of the monocular pseudo-LiDAR point
cloud. To overcome these issues, in this paper we propose a novel
object-centric voxel representation tailored for monocular 3D object detection.
Specifically, voxels are built on each object proposal, and their sizes are
adaptively determined by the 3D spatial distribution of the points, allowing
the noisy point cloud to be organized effectively within a voxel grid. This
representation is shown to locate objects in 3D space
accurately. Furthermore, prior works estimate the orientation from
deep features extracted from an entire image or a noisy point cloud. By
contrast, we argue that the local RoI information from the object image patch
alone, with a proper resizing scheme, is a better input, as it provides complete
semantic cues while excluding irrelevant interference. In addition, we
decompose the confidence mechanism in monocular 3D object detection by
considering the relationship between 3D objects and the associated 2D boxes.
Evaluated on KITTI, our method outperforms state-of-the-art methods by a large
margin. The code will be made publicly available soon.
|
Optical atomic clocks have already surpassed the eighteenth decimal digit of
instability and uncertainty, demonstrating incredible control over external
perturbations of the clock transition frequency. At the same time, there is an
increasing demand for atomic and ionic transitions with minimal sensitivity to
external fields, practical operational wavelengths and robust readout
protocols. One of the goals is to simplify the clock's operation while
maintaining its relative uncertainty at the low 10^{-18} level. This is
especially important for transportable and envisioned space-based optical
clocks. We proved earlier that the 1.14 um inner-shell magnetic dipole
transition in neutral thulium possesses a very low blackbody radiation shift
compared to other neutral atoms. Here we demonstrate operation of a bi-colour
thulium optical clock with extraordinarily low sensitivity to the Zeeman shift,
owing to the simultaneous interrogation of two clock transitions and data
processing. Our experiment shows suppression of the quadratic Zeeman shift by
at least three orders of magnitude. The effect of the tensor lattice Stark
shift can also be reduced to below 10^{-18} in fractional frequency units. All
these features make the thulium optical clock almost free from hard-to-control
systematic shifts. Together with convenient cooling and trapping laser
wavelengths, this offers great prospects for the thulium lattice clock as a
high-performance transportable system.
|
In the past decades, great progress has been made in the field of optical and
particle-based measurement techniques for experimental analysis of fluid flows.
The Particle Image Velocimetry (PIV) technique is widely used to identify flow
parameters from time-consecutive snapshots of particles injected into the
fluid. The computation is performed as post-processing of the experimental data
via a proximity measure between particles in consecutive frames. However, the
post-processing step becomes problematic as the motility and density of the
particles increase, since the data arrive at extreme rates and volumes.
Moreover, existing algorithms for PIV either provide sparse estimates of the
flow or require a large computational time frame, preventing on-line use. The
goal of this manuscript is therefore to develop an accurate on-line algorithm
for estimation of the fine-grained velocity field from PIV data. As the data
constitutes a pair of images, we employ computer vision methods to solve the
problem. In this work, we introduce a convolutional neural network adapted to
the problem, namely Volumetric Correspondence Network (VCN) which was recently
proposed for the end-to-end optical flow estimation in computer vision. The
network is thoroughly trained and tested on a dataset containing both synthetic
and real flow data. Experimental results are analyzed and compared to that of
conventional methods as well as other recently introduced methods based on
neural networks. Our analysis indicates that the proposed approach provides
improved efficiency while keeping accuracy on par with other state-of-the-art
methods in the field. We also verify through a posteriori tests that our newly
constructed VCN schemes are reproducing well physically relevant statistics of
velocity and velocity gradients.
|
We study a predictive model for explaining the apparent deviation of the muon
anomalous magnetic moment from the Standard Model expectation. There are no new
scalars and hence no new hierarchy puzzles beyond those associated with the
Higgs; the only new particles at the TeV scale are vector-like singlet and
doublet leptons. Interestingly, this simple model provides a calculable example
violating the Wilsonian notion of naturalness: despite the absence of any
symmetries prohibiting its generation, the coefficient of the naively leading
dimension-six operator for $(g-2)$ vanishes at one-loop. While effective field
theorists interpret this either as a surprising UV cancellation of power
divergences, or as a delicate cancellation between matching UV and calculable
IR corrections to $(g-2)$ from parametrically separated scales, there is a
simple explanation in the full theory: the loop integrand is a total derivative
of a function vanishing in both the deep UV and IR. The leading contribution to
$(g-2)$ arises from dimension-eight operators, and thus the required masses of
new fermions are lower than naively expected, with a sizeable portion of
parameter space already covered by direct searches at the LHC. The viable
parameter space free of fine-tuning for the muon mass will be fully covered by
future direct LHC searches, and {\it all} of the parameter space can be probed
by precision measurements at planned future lepton colliders.
|
We introduce the problem of optimal congestion control in cache networks,
whereby \emph{both} rate allocations and content placements are optimized
\emph{jointly}. We formulate this as a maximization problem with non-convex
constraints, and propose solving this problem via (a) a Lagrangian barrier
algorithm and (b) a convex relaxation. We prove different optimality guarantees
for each of these two algorithms; our proofs exploit the fact that the
non-convex constraints of our problem involve DR-submodular functions.
|
In context of the Wolfram Physics Project, a certain class of abstract
rewrite systems known as "multiway systems" have played an important role in
discrete models of spacetime and quantum mechanics. However, as abstract
mathematical entities, these rewrite systems are interesting in their own
right. This paper undertakes the effort to establish computational properties
of multiway systems. Specifically, we investigate growth rates and growth
classes of string-based multiway systems. After introducing the concepts of
"growth functions", "growth rates" and "growth classes" to quantify a system's
state-space growth over "time" (successive steps of evolution) on different
levels of precision, we use them to show that multiway systems can, in a
specific sense, grow slower than all computable functions while never exceeding
the growth rate of exponential functions. In addition, we start developing a
classification scheme for multiway systems based on their growth class.
Furthermore, we find that multiway growth functions are not trivially regular
but instead "computationally diverse", meaning that they are capable of
computing or approximating various commonly encountered mathematical functions.
We discuss several implications of these properties as well as their physical
relevance. Apart from that, we present and exemplify methods for explicitly
constructing multiway systems to yield desired growth functions.
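A short sketch of how a growth function of a string-based multiway system can be computed by exhaustive evolution; the convention used here (number of distinct states at step $t$) is one of several possible, and the rules in the example are arbitrary.

```python
def multiway_step(states, rules):
    """Apply every rule at every matching position in every state: one step of a
    string-based multiway system (all branches are kept)."""
    new_states = set()
    for s in states:
        for lhs, rhs in rules:
            start = s.find(lhs)
            while start != -1:
                new_states.add(s[:start] + rhs + s[start + len(lhs):])
                start = s.find(lhs, start + 1)
    return new_states

def growth_function(initial, rules, steps):
    """g(t) = number of distinct states reachable at step t."""
    states, g = {initial}, []
    for _ in range(steps + 1):
        g.append(len(states))
        states = multiway_step(states, rules)
    return g

# Example: the rules {"A" -> "AB", "B" -> "A"} applied to the initial state "A".
print(growth_function("A", [("A", "AB"), ("B", "A")], steps=8))
```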
|
The metastable helium line at 1083 nm can be used to probe the extended upper
atmospheres of close-in exoplanets and thus provide insight into their
atmospheric mass loss, which is likely to be significant in sculpting their
population. We used an ultranarrowband filter centered on this line to observe
two transits of the low-density gas giant HAT-P-18b, using the 200" Hale
Telescope at Palomar Observatory, and report the detection of its extended
upper atmosphere. We constrain the excess absorption to be $0.46\pm0.12\%$ in
our 0.635 nm bandpass, exceeding the transit depth from the Transiting
Exoplanet Survey Satellite (TESS) by $3.9\sigma$. If we fit this signal with a
1D Parker wind model, we find that it corresponds to an atmospheric mass loss
rate between $8.3^{+2.8}_{-1.9} \times 10^{-5}$ $M_\mathrm{J}$/Gyr and
$2.63^{+0.46}_{-0.64} \times 10^{-3}$ $M_\mathrm{J}$/Gyr for thermosphere
temperatures ranging from 4000 K to 13000 K, respectively. With a J magnitude
of 10.8, this is the faintest system for which such a measurement has been made
to date, demonstrating the effectiveness of this approach for surveying mass
loss on a diverse sample of close-in gas giant planets.
|
In recent years, artificial neural networks (ANNs) have won numerous contests
in pattern recognition and machine learning. ANNs have been applied to problems
ranging from speech recognition to prediction of protein secondary structure,
classification of cancers, and gene prediction. Here, we intend to maximize the
chances of finding the Higgs boson decays to two $\tau$ leptons in the pseudo
dataset using a Machine Learning technique to classify the recorded events as
signal or background.
|
We combine infectious disease transmission and the non-pharmaceutical
intervention (NPI) response to disease incidence into one closed model
consisting of two coupled delay differential equations for the incidence rate
and the time-dependent reproduction number. The model contains three free
parameters, the initial reproduction number, the intervention strength, and the
response delay relative to the time of infection. The NPI response is modeled
by assuming that the rate of change of the reproduction number is proportional
to the negative deviation of the incidence rate from an intervention threshold.
This delay dynamical system exhibits damped oscillations in one part of the
parameter space, and growing oscillations in another, and these are separated
by a surface where the solution is a strictly periodic nonlinear oscillation.
For parameters relevant for the COVID-19 pandemic, the tipping transition from
damped to growing oscillations occurs for response delays of the order of one
week, and suggests that effective control and mitigation of successive epidemic
waves cannot be achieved unless NPIs are implemented in a precautionary manner,
rather than merely as a response to the present incidence rate.
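A toy numerical realization of the closed model, with assumed functional forms: exponential incidence growth at rate $\gamma(R-1)$ and a reproduction-number response proportional to the negative deviation of the delayed incidence from the threshold. Parameter names and values are illustrative only; the paper's exact equations are not reproduced.

```python
import numpy as np

def simulate(R0=2.5, k=5e-5, tau=7.0, I_star=1000.0, gamma=0.25,
             dt=0.05, t_max=400.0, I_init=10.0):
    """Euler integration of a toy delay model: incidence I(t) grows at rate
    gamma*(R-1)*I, and the reproduction number R(t) is pushed down (up) when the
    incidence seen with response delay tau lies above (below) the threshold I_star."""
    n = int(t_max / dt)
    lag = int(tau / dt)
    I = np.full(n, I_init)
    R = np.full(n, R0)
    for t in range(1, n):
        I_delayed = I[max(t - 1 - lag, 0)]                 # delayed incidence signal
        I[t] = I[t - 1] + dt * gamma * (R[t - 1] - 1.0) * I[t - 1]
        R[t] = R[t - 1] - dt * k * (I_delayed - I_star)    # NPI response
    return np.arange(n) * dt, I, R

# Longer response delays push the solution from damped towards growing oscillations
# about the intervention threshold.
t, I, R = simulate(tau=7.0)
```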
|
Goal Recognition is the task of discerning the correct intended goal that an
agent aims to achieve, given a set of possible goals, a domain model, and a
sequence of observations as a sample of the plan being executed in the
environment. Existing approaches assume that the possible goals are formalized
as a conjunction in deterministic settings. In this paper, we develop a novel
approach that is capable of recognizing temporally extended goals in Fully
Observable Non-Deterministic (FOND) planning domain models, focusing on goals
on finite traces expressed in Linear Temporal Logic (LTLf) and (Pure) Past
Linear Temporal Logic (PLTLf). We empirically evaluate our goal recognition
approach using different LTLf and PLTLf goals over six common FOND planning
domain models, and show that our approach accurately recognizes temporally
extended goals at several levels of observability.
|
Additive Manufacturing (AM) processes intended for large scale components
deposit large volumes of material to shorten process duration. This reduces the
resolution of the AM process, which is typically defined by the size of the
deposition nozzle. If the resolution limitation is not considered when
designing for Large-Scale Additive Manufacturing (LSAM), difficulties can arise
in the manufacturing process, which may require the adaptation of the
deposition parameters. This work incorporates the nozzle size constraint into
Topology Optimization (TO) in order to generate optimized designs suitable to
the process resolution. This article proposes and compares two methods, which
are based on existing TO techniques that enable control of minimum and maximum
member size, and of minimum cavity size. The first method requires the minimum
and maximum member size to be equal to the deposition nozzle size, thus design
features of uniform width are obtained in the optimized design. The second
method defines the size of the solid members sufficiently small for the
resulting structure to resemble a structural skeleton, which can be interpreted
as the deposition path. Through filtering and projection techniques, the thin
structures are thickened according to the chosen nozzle size. Thus, a topology
tailored to the size of the deposition nozzle is obtained along with a
deposition proposal. The methods are demonstrated and assessed using 2D and 3D
benchmark problems.
|
We reobserved in the $R_C$ and $i'_{Sloan}$ bands, during the years
2020-2021, seven Mira variables in Cassiopeia, for which historical
$i'_{Sloan}$ light curves were available from Asiago Observatory plates taken
in the years 1967-84. The aim was to check if any of them had undergone a
substantial change in the period or in the light curve shape. Very recent
public data from ZTF-DR5 were also used to expand our time base window. A
marked color change was detected for all the stars along their variability
cycle. The star V890 Cas showed a significant period decrease of 12\% from 483
to 428 days, one of the largest known to date. All the stars, save AV Cas,
showed a smaller variation amplitude in the recent CCD data, possibly due to a
photometric accuracy higher than that of the photographic plates.
|
Furstenberg--Zimmer structure theory refers to the extension of the dichotomy
between the compact and weakly mixing parts of a measure preserving dynamical
system and the algebraic and geometric descriptions of such parts to a
conditional setting, where such dichotomy is established relative to a factor
and conditional analogues of those algebraic and geometric descriptions are
sought. Although the unconditional dichotomy and the characterizations are
known for arbitrary systems, the relative situation is understood under certain
countability and separability hypotheses on the underlying groups and spaces.
The aim of this article is to remove these restrictions in the relative
situation and establish a Furstenberg--Zimmer structure theory in full
generality. To achieve this generalization we had to change our perspective
from systems defined on concrete probability spaces to systems defined on
abstract probability algebras, and we had to introduce novel tools to analyze
systems relative to a factor. However, the change of perspective and the new
tools lead to some simplifications in the arguments and the presentation which
we arrange in a self-contained manner. As an independent byproduct, we
establish a connection between the relative analysis of systems in ergodic
theory and the internal logic in certain Boolean topoi.
|
Very thin elastic sheets, even at zero temperature, exhibit nonlinear elastic
response by virtue of their dominant bending modes. Their behavior is even
richer at finite temperature. Here we use molecular dynamics (MD) to study the
vibrations of a thermally fluctuating two-dimensional elastic sheet with one
end clamped at its zero-temperature length. We uncover a tilt phase in which
the sheet fluctuates about a mean plane inclined with respect to the
horizontal, thus breaking reflection symmetry. We determine the phase behavior
as a function of the aspect ratio of the sheet and the temperature. We show
that tilt may be viewed as a type of transverse buckling instability induced by
clamping coupled to thermal fluctuations and develop an analytic model that
predicts the tilted and untilted regions of the phase diagram. Qualitative
agreement is found with the MD simulations. Unusual response driven by control
of purely geometric quantities like the aspect ratio, as opposed to external
fields, offers a very rich playground for two-dimensional mechanical
metamaterials.
|
We construct a new network by the superposition of hexahedra; the resulting
networks are scale-free, highly sparse, disassortative, and maximal planar
graphs. The network degree distribution, agglomeration (clustering) coefficient
and degree correlation are computed separately using an iterative method, and
these characteristics are found to be very rich. This method of network
characteristic analysis can be applied to real systems, so as to study the
complexity of real network systems within the framework of complex network
theory.
|
In this paper we explore conditions on variable symbols with respect to Haar
systems, defining Calder\'on-Zygmund type operators with respect to the dyadic
metrics associated to the Haar bases.We show that Petermichl's dyadic kernel
can be seen as a variable kernel singular integral and we extend it to dyadic
systems built on spaces of homogeneous type.
|
We consider the problem of statistical inference for a class of
partially-observed diffusion processes, with discretely-observed data and
finite-dimensional parameters. We construct unbiased estimators of the score
function, i.e. the gradient of the log-likelihood function with respect to
parameters, with no time-discretization bias. These estimators can be
straightforwardly employed within stochastic gradient methods to perform
maximum likelihood estimation or Bayesian inference. As our proposed
methodology only requires access to a time-discretization scheme such as the
Euler-Maruyama method, it is applicable to a wide class of diffusion processes
and observation models. Our approach is based on a representation of the score
as a smoothing expectation using Girsanov theorem, and a novel adaptation of
the randomization schemes developed in Mcleish [2011], Rhee and Glynn [2015],
Jacob et al. [2020a]. This allows one to remove the time-discretization bias
and burn-in bias when computing smoothing expectations using the conditional
particle filter of Andrieu et al. [2010]. Central to our approach is the
development of new couplings of multiple conditional particle filters. We prove
under assumptions that our estimators are unbiased and have finite variance.
The methodology is illustrated on several challenging applications from
population ecology and neuroscience.
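To make the debiasing idea concrete, here is a minimal Python sketch of a generic single-term randomization estimator in the spirit of McLeish and Rhee-Glynn; the level estimator `Y` is a placeholder (e.g. an Euler-Maruyama-based score estimate at discretization level l), and the paper's actual construction based on coupled conditional particle filters is not reproduced here.

```python
import numpy as np

def single_term_debiased(Y, rng=None, p=0.6):
    """Generic single-term debiasing estimator (a sketch, not the paper's
    coupled-particle-filter construction). Y(l) returns an estimate at
    discretization level l, with bias vanishing as l -> infinity."""
    rng = rng or np.random.default_rng()
    N = int(rng.geometric(p)) - 1            # random level, N in {0, 1, 2, ...}
    pmf = (1.0 - p) ** N * p                 # P(N = n) under the geometric law
    increment = Y(N) - (Y(N - 1) if N > 0 else 0.0)
    return increment / pmf                   # expectation equals lim_l E[Y(l)]
```

Averaging independent replicates of this quantity yields an unbiased estimate with finite variance, provided the level increments shrink fast enough relative to the geometric tail probabilities.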
|
The chronological age of healthy people can be predicted accurately from
neuroimaging data using deep neural networks, and the predicted brain age can
serve as a biomarker for detecting aging-related diseases. In this paper, a
novel 3D convolutional network, called two-stage-age-network (TSAN), is
proposed to estimate brain age from T1-weighted MRI data. Compared with
existing methods, TSAN has the following improvements. First, TSAN uses a
two-stage cascade network architecture, where the first-stage network estimates
a rough brain age, then the second-stage network estimates the brain age more
accurately from the discretized brain age by the first-stage network. Second,
to our knowledge, TSAN is the first work to apply novel ranking losses in brain
age estimation, together with the traditional mean square error (MSE) loss.
Third, densely connected paths are used to combine feature maps with different
scales. The experiments with $6586$ MRIs showed that TSAN could provide
accurate brain age estimation, yielding mean absolute error (MAE) of $2.428$
and Pearson's correlation coefficient (PCC) of $0.985$, between the estimated
and chronological ages. Furthermore, using the brain age gap between brain age
and chronological age as a biomarker, Alzheimer's disease (AD) and Mild
Cognitive Impairment (MCI) can be distinguished from healthy control (HC)
subjects by support vector machine (SVM). Classification AUC in AD/HC and
MCI/HC was $0.904$ and $0.823$, respectively. These results show that the brain
age gap is an effective biomarker associated with the risk of dementia, with
potential for early-stage dementia risk screening. The codes and trained models have been
released on GitHub: https://github.com/Milan-BUAA/TSAN-brain-age-estimation.
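As a rough illustration of combining MSE with a ranking objective (the exact ranking losses used by TSAN are not specified here, so this pairwise hinge form is only an assumption), one could write:

```python
import torch
import torch.nn.functional as F

def brain_age_loss(pred, age, rank_weight=1.0, margin=0.0):
    """MSE plus a pairwise ranking penalty over a mini-batch (illustrative
    sketch only). pred and age are 1-D tensors of predicted and
    chronological ages."""
    mse = F.mse_loss(pred, age)
    dp = pred.unsqueeze(0) - pred.unsqueeze(1)   # predicted pairwise differences
    da = age.unsqueeze(0) - age.unsqueeze(1)     # true pairwise differences
    # Penalize pairs whose predicted ordering disagrees with the true ordering.
    rank = F.relu(margin - torch.sign(da) * dp).mean()
    return mse + rank_weight * rank
```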
|
This paper investigates task-oriented communication for edge inference, where
a low-end edge device transmits the extracted feature vector of a local data
sample to a powerful edge server for processing. It is critical to encode the
data into an informative and compact representation for low-latency inference
given the limited bandwidth. We propose a learning-based communication scheme
that jointly optimizes feature extraction, source coding, and channel coding in
a task-oriented manner, i.e., targeting the downstream inference task rather
than data reconstruction. Specifically, we leverage an information bottleneck
(IB) framework to formalize a rate-distortion tradeoff between the
informativeness of the encoded feature and the inference performance. As the IB
optimization is computationally prohibitive for the high-dimensional data, we
adopt a variational approximation, namely the variational information
bottleneck (VIB), to build a tractable upper bound. To reduce the communication
overhead, we leverage a sparsity-inducing distribution as the variational prior
for the VIB framework to sparsify the encoded feature vector. Furthermore,
considering dynamic channel conditions in practical communication systems, we
propose a variable-length feature encoding scheme based on dynamic neural
networks to adaptively adjust the activated dimensions of the encoded feature
to different channel conditions. Extensive experiments evidence that the
proposed task-oriented communication system achieves a better rate-distortion
tradeoff than baseline methods and significantly reduces the feature
transmission latency in dynamic channel conditions.
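The rate-distortion trade-off behind the scheme can be sketched with a standard VIB objective; this minimal Python example assumes a Gaussian variational posterior and a standard Gaussian prior, whereas the paper uses a sparsity-inducing variational prior and variable-length encoding that are not shown here.

```python
import torch
import torch.nn.functional as F

def vib_objective(mu, logvar, decoder, labels, beta=1e-3):
    """Variational information bottleneck objective (a minimal sketch with a
    standard Gaussian prior, not the paper's sparsity-inducing prior).

    mu, logvar : encoder outputs parameterizing q(z|x) = N(mu, diag(exp(logvar)))
    decoder    : callable mapping a sampled feature z to class logits
    """
    # Reparameterized sample of the transmitted feature.
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    distortion = F.cross_entropy(decoder(z), labels)          # inference loss
    # Closed-form KL(q(z|x) || N(0, I)), i.e. the rate term.
    rate = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0, dim=1).mean()
    return distortion + beta * rate
```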
|
In this manuscript, we consider a scenario in which a spin-1/2 quanton goes
through a superposition of co-rotating and counter-rotating geodetic circular
paths, which play the role of the paths of a Mach-Zehnder interferometer in a
stationary and axisymmetric spacetime. Since the spin of the particle plays the
role of a quantum clock, as the quanton moves in a superposed path it gets
entangled with the momentum (or the path), and this will cause the
interferometric visibility (or the internal quantum coherence) to drop since,
in stationary axisymmetric spacetimes, there is a difference in the proper time
elapsed along the two trajectories. However, as we show here, the proper time
of each path will couple to the corresponding local Wigner rotation, and the
effect in the spin of the superposed particle will be a combination of both.
In addition, we discuss a general framework to study the local Wigner rotations of
spin-1/2 particles in general stationary axisymmetric spacetimes for circular
orbits.
|
The apparent clustering in longitude of perihelion $\varpi$ and ascending
node $\Omega$ of extreme trans-Neptunian objects (ETNOs) has been attributed to
the gravitational effects of an unseen 5-10 Earth-mass planet in the outer
solar system. To investigate how selection bias may contribute to this
clustering, we consider 14 ETNOs discovered by the Dark Energy Survey, the
Outer Solar System Origins Survey, and the survey of Sheppard and Trujillo.
Using each survey's published pointing history, depth, and TNO tracking
selections, we calculate the joint probability that these objects are
consistent with an underlying parent population with uniform distributions in
$\varpi$ and $\Omega$. We find that the mean scaled longitude of perihelion and
orbital poles of the detected ETNOs are consistent with a uniform population at
a level between $17\%$ and $94\%$, and thus conclude that this sample provides
no evidence for angular clustering.
|
This paper presents DLL, a fast direct map-based localization technique using
3D LIDAR for its application to aerial robots. DLL implements point-cloud-to-map
registration based on non-linear optimization of the distance between the points
and the map, thus requiring neither features nor point correspondences. Given
an initial pose, the method is able to track the pose of the robot by refining
the predicted pose from odometry. Through benchmarks using real datasets and
simulations, we show how the method performs much better than Monte-Carlo
localization methods and achieves comparable precision to other
optimization-based approaches but running one order of magnitude faster. The
method is also robust under odometric errors. The approach has been implemented
under the Robot Operating System (ROS), and it is publicly available.
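The registration idea, minimizing each point's distance to the map without correspondences, can be illustrated in 2D; `dist_to_map` and the pose parameterization below are illustrative placeholders, and DLL itself operates on full 3D poses.

```python
import numpy as np
from scipy.optimize import least_squares

def register_to_map(points, dist_to_map, x0):
    """Direct point-to-map registration (a 2D sketch of the idea, not the DLL
    implementation): find the pose minimizing the map-distance of each point.

    points      : (N, 2) LiDAR points in the sensor frame
    dist_to_map : callable mapping (N, 2) world points to their distances to the map
    x0          : initial pose guess [tx, ty, yaw], e.g. from odometry
    """
    def residuals(pose):
        tx, ty, yaw = pose
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])
        world = points @ R.T + np.array([tx, ty])
        return dist_to_map(world)          # one residual per point, no correspondences

    return least_squares(residuals, x0).x
```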
|
Taking the long-range Kitaev model as an example, we look for a correlation
length in a model with long-range interactions whose correlation functions away
from a critical point have power-law tails instead of the usual exponential
decay. It turns out that the quasiparticle spectrum depends on the distance from
the critical point in a way that allows one to identify the standard
correlation length exponent, $\nu$. The exponent implicitly defines a
correlation length $\xi$ that diverges when the critical point is approached.
We show that the correlation length manifests itself also in the correlation
function but not in its exponential tail because there is none. Instead $\xi$
is a distance that marks a crossover between two different algebraic decays
with different exponents. At distances shorter than $\xi$ the correlator decays
with the same power law as at the critical point while at distances longer than
$\xi$ it decays faster, with a steeper power law. For this correlator it is
possible to formulate the usual scaling hypothesis with $\xi$ playing the role
of the scaling distance. The correlation length also leaves its mark on the
subleading anomalous fermionic correlator but, interestingly, there is a regime
of long range interactions where its short distance critical power-law decay is
steeper than its long distance power law tail.
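The crossover described above is captured by a conventional scaling ansatz; the tuning parameter $g$, critical value $g_c$, and exponents $\eta_1 < \eta_2$ below are generic symbols rather than values taken from the paper.

```latex
\xi \sim |g - g_c|^{-\nu}, \qquad
C(r) \simeq r^{-\eta_1}\, F\!\left(\tfrac{r}{\xi}\right), \qquad
F(x) \to \mathrm{const} \ \ (x \ll 1), \qquad
F(x) \sim x^{\eta_1 - \eta_2} \ \ (x \gg 1),
```

so that the correlator decays as the critical power law $r^{-\eta_1}$ for $r \ll \xi$ and crosses over to the steeper $\xi^{\eta_2-\eta_1} r^{-\eta_2}$ for $r \gg \xi$.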
|
We present a numerical study on a disordered artificial spin-ice system which
interpolates between the long-range ordered square ice and the fully degenerate
shakti ice. Starting from the square-ice geometry, disorder is implemented by
adding vertical/horizontal magnetic islands to the center of some randomly
chosen square plaquettes of the array, at different densities. When no island
is added we have ordered square ice. When all square plaquettes have been
modified we obtain shakti ice, which is disordered yet in a topological phase
corresponding to the Rys F-model. In between, geometrical frustration due to
these additional center spins disrupts the long-range Ising order of
square-ice, giving rise to a spin-glass regime at low temperatures. The
artificial spin system proposed in our work provides an experimental platform
to study the interplay between quenched disorder and geometrical frustration.
|
Low power spintronic devices based on the propagation of pure magnonic spin
currents in antiferromagnetic insulator materials offer several distinct
advantages over ferromagnetic components including higher frequency magnons and
a stability against disturbing external magnetic fields. In this work, we make
use of the insulating antiferromagnetic phase of iron oxide, the mineral
hematite $\alpha$-Fe$_2$O$_3$, to investigate the long-distance transport of
thermally generated magnonic spin currents. We report on the excitation of
magnons generated by the spin Seebeck effect, transported both parallel and
perpendicular to the antiferromagnetic easy-axis under an applied magnetic
field. Making use of an atomistic hematite toy model, we calculate the
transport characteristics from the deviation of the antiferromagnetic ordering
from equilibrium under an applied field. We resolve the role of the magnetic
order parameters in the transport, and experimentally we find significant
thermal spin transport without the need for a net magnetization.
|
We study the finite convergence of iterative methods for solving convex
feasibility problems. Our key assumptions are that the interior of the solution
set is nonempty and that certain overrelaxation parameters converge to zero,
but with a rate slower than any geometric sequence. Unlike other works in this
area, which require divergent series of overrelaxations, our approach allows us
to consider some summable series. By employing quasi-Fej\'{e}rian analysis in
the latter case, we obtain additional asymptotic convergence guarantees, even
when the interior of the solution set is empty.
|
A non-equilibrium model for laser-induced plasmas is used to describe how
nano-second temporal mode-beating affects plasma kernel formation and growth in
quiescent air. The chemically reactive Navier-Stokes equations describe the
hydrodynamics, and non-equilibrium effects are modeled based on a
two-temperature model. Inverse Bremsstrahlung and multiphoton ionization are
self-consistently taken into account via a coupled solution of the equations
governing plasma dynamics and beam propagation and attenuation (i.e., Radiative
Transfer Equation). This strategy, despite the additional challenges it may
bring, minimizes empiricism and enables more accurate simulations, since it
does not require an artificial plasma seed to trigger breakdown. The
benefits of this methodology are demonstrated by the good agreement between the
predicted and the experimental plasma boundary evolution and absorbed energy.
The same goes for the periodic plasma kernel structures which, as suggested by
experiments and confirmed by the simulations discussed here, are linked to the
modulating frequency.
|
We introduce a neural relighting algorithm for captured indoor scenes that
allows interactive free-viewpoint navigation. Our method allows illumination to
be changed synthetically, while coherently rendering cast shadows and complex
glossy materials. We start with multiple images of the scene and a 3D mesh
obtained by multi-view stereo (MVS) reconstruction. We assume that lighting is
well-explained as the sum of a view-independent diffuse component and a
view-dependent glossy term concentrated around the mirror reflection direction.
We design a convolutional network around input feature maps that facilitate
learning of an implicit representation of scene materials and illumination,
enabling both relighting and free-viewpoint navigation. We generate these input
maps by exploiting the best elements of both image-based and physically-based
rendering. We sample the input views to estimate diffuse scene irradiance, and
compute the new illumination caused by user-specified light sources using path
tracing. To facilitate the network's understanding of materials and synthesize
plausible glossy reflections, we reproject the views and compute mirror images.
We train the network on a synthetic dataset where each scene is also
reconstructed with MVS. We show results of our algorithm relighting real indoor
scenes and performing free-viewpoint navigation with complex and realistic
glossy reflections, which so far remained out of reach for view-synthesis
techniques.
|
The temperature dependence of quantum Hall conductivities is studied in the
context of the AdS/CMT paradigm using a model with a bulk theory consisting of
(3+1)-dimensional Einstein-Maxwell action coupled to a dilaton and an axion,
with a negative cosmological constant. We consider a solution which has a
Lifshitz-like geometry with a dyonic black brane in the bulk. There is an
$Sl(2,R)$ action in the bulk corresponding to electromagnetic duality, which
maps between classical solutions, and is broken to $Sl(2,Z)$ by Dirac
quantisation of dyons. This bulk $Sl(2,Z)$ action translates to an action of
the modular group on the 2-dimensional transverse conductivities. The
temperature dependence of the infra-red conductivities is then linked to
modular forms via gradient flow and the resulting flow diagrams show remarkable
agreement with existing experimental data on the temperature flow of both
integral and fractional quantum Hall conductivities.
|
Accurate characterisation of small defects remains a challenge in
non-destructive testing (NDT). In this paper, a principal-component
parametric-manifold mapping approach is applied to single-frequency
eddy-current defect characterisation problems for surface breaking defects in a
planar half-space. A broad 1-8 MHz frequency-range FE-circuit model &
calibration approach is developed & validated to simulate eddy-current scans of
surface-breaking notch defects. This model is used to generate parametric
defect databases for surface-breaking defects in an aluminium planar half-space,
and defect characterisation of experimental measurements is performed.
Parametric-manifold mapping was conducted in N-dimensional principal component
space, reducing the dimensionality of the characterisation problem. In a study
characterising slot depth, the model & characterisation approach is shown to
invert the depth with greater accuracy than a simple amplitude inversion method,
with normalised percentage characterisation errors of 38% and 17%, respectively,
measured at 2.0 MHz across 5 slot depths between 0.26 and 2.15
mm. The approach is used to characterise the depth of a sloped slot
demonstrating good accuracy up to ~2.0 mm in depth over a broad range of
sub-resonance frequencies, indicating applications in geometric feature
inversion. Finally the technique is applied to finite rectangular notch defects
of surface extents smaller than the diameter of the inspection coil
(sub-aperture) over a range of frequencies. The results highlight the
limitations in characterising these defects and indicate how the inherent
instabilities in resonance can severely limit characterisation at these
frequencies.
|
In this work, we estimate how much bulk viscosity driven by Urca processes is
likely to affect the gravitational wave signal of a neutron star coalescence.
In the late inspiral, we show that bulk viscosity affects the binding energy at
fourth post-Newtonian (PN) order. Even though this effect is enhanced by the
square of the gravitational compactness, the coefficient of bulk viscosity is
likely too small to lead to observable effects in the waveform during the late
inspiral, when only considering the orbital motion itself. In the post-merger,
however, the characteristic time-scales and spatial scales are different,
potentially leading to the opposite conclusion. We post-process data from a
state-of-the-art equal-mass binary neutron star merger simulation to estimate
the effects of bulk viscosity (which was not included in the simulation
itself). In that scenario, we find that bulk viscosity can reach high values in
regions of the merger. We compute several estimates of how much it might
directly affect the global dynamics of the considered merger scenario, and find
that it could become significant. Even larger effects could arise in different
merger scenarios or in simulations that include non-linear effects. This
assessment is reinforced by a quantitative comparison with relativistic
heavy-ion collisions where such effects have been explored extensively.
|
Large-scale trademark retrieval is an important content-based image retrieval
task. A recent study shows that off-the-shelf deep features aggregated with
Regional-Maximum Activation of Convolutions (R-MAC) achieve state-of-the-art
results. However, R-MAC suffers in the presence of background clutter/trivial
regions and scale variance, and discards important spatial information. We
introduce three simple but effective modifications to R-MAC to overcome these
drawbacks. First, we propose the use of both sum and max pooling to minimise
the loss of spatial information. We also employ domain-specific unsupervised
soft-attention to eliminate background clutter and unimportant regions.
Finally, we add multi-resolution inputs to enhance the scale-invariance of
R-MAC. We evaluate these three modifications on the million-scale METU dataset.
Our results show that all modifications bring non-trivial improvements, and
surpass previous state-of-the-art results.
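The first modification, combining sum and max pooling, can be illustrated on a single feature map; this sketch ignores R-MAC's regional windows, the soft attention, and the multi-resolution inputs.

```python
import numpy as np

def sum_and_max_aggregate(feature_map):
    """Aggregate a CNN feature map with both sum and max pooling over spatial
    locations and concatenate the two descriptors (illustrative sketch of the
    first modification only).

    feature_map : (C, H, W) array of convolutional activations
    """
    flat = feature_map.reshape(feature_map.shape[0], -1)     # (C, H*W)
    desc = np.concatenate([flat.sum(axis=1), flat.max(axis=1)])
    return desc / (np.linalg.norm(desc) + 1e-12)             # L2-normalized descriptor
```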
|
This is the first one in a series of papers classifying the factorizations of
almost simple groups with nonsolvable factors. In this paper we deal with
almost simple linear groups.
|
In the crowded environment of bio-inspired population-based metaheuristics,
the Salp Swarm Optimization (SSO) algorithm recently appeared and immediately
gained a lot of momentum. Inspired by the peculiar spatial arrangement of salp
colonies, which form long chains following a leader, this algorithm
seems to provide interesting optimization performance. However, the original
work was characterized by some conceptual and mathematical flaws, which
influenced all ensuing papers on the subject. In this manuscript, we perform a
critical review of SSO, highlighting all the issues present in the literature
and their negative effects on the optimization process carried out by this
algorithm. We also propose a mathematically correct version of SSO, named
Amended Salp Swarm Optimizer (ASSO) that fixes all the discussed problems. We
benchmarked the performance of ASSO on a set of tailored experiments, showing
that it is able to achieve better results than the original SSO. Finally, we
performed an extensive study aimed at understanding whether SSO and its
variants provide advantages compared to other metaheuristics. The experimental
results, where SSO cannot outperform simple well-known metaheuristics, suggest
that the scientific community can safely abandon SSO.
|
Neural network architectures in natural language processing often use
attention mechanisms to produce probability distributions over input token
representations. Attention has empirically been demonstrated to improve
performance in various tasks, while its weights have been extensively used as
explanations for model predictions. Recent studies (Jain and Wallace, 2019;
Serrano and Smith, 2019; Wiegreffe and Pinter, 2019) have shown that attention cannot
generally be considered a faithful explanation (Jacovi and Goldberg, 2020)
across encoders and tasks. In this paper, we seek to improve the faithfulness
of attention-based explanations for text classification. We achieve this by
proposing a new family of Task-Scaling (TaSc) mechanisms that learn
task-specific non-contextualised information to scale the original attention
weights. Evaluation tests for explanation faithfulness show that the three
proposed variants of TaSc improve attention-based explanations across two
attention mechanisms, five encoders and five text classification datasets
without sacrificing predictive performance. Finally, we demonstrate that TaSc
consistently provides more faithful attention-based explanations compared to
three widely-used interpretability techniques.
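A minimal sketch of scaling attention with learned, non-contextualised token scores is given below; the class name, the embedding-based parameterization, and the final renormalisation are assumptions for illustration and mirror only the simplest linear form of TaSc.

```python
import torch
import torch.nn.functional as F

class TaScAttention(torch.nn.Module):
    """Scale attention weights by learned, non-contextualised per-token scores
    (illustrative sketch; the paper proposes three TaSc variants)."""
    def __init__(self, vocab_size):
        super().__init__()
        self.token_score = torch.nn.Embedding(vocab_size, 1)  # task-specific scalars

    def forward(self, attention, token_ids):
        # attention: (batch, seq) weights produced by any attention mechanism
        s = self.token_score(token_ids).squeeze(-1)            # (batch, seq)
        scaled = attention * s                                 # rescale per token
        return F.softmax(scaled, dim=-1)                       # renormalise
```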
|
Phishing attacks have evolved and increased over time and, for this reason,
the task of distinguishing between a legitimate site and a phishing site is
more and more difficult, fooling even the most expert users. The main proposals
focused on addressing this problem can be divided into four approaches:
list-based, URL-based, content-based, and hybrid. In this state-of-the-art
review, the most recent techniques using web content-based and hybrid
approaches for phishing detection are reviewed and compared.
|
Context: The energy-limited (EL) atmospheric escape approach is used to
estimate mass-loss rates for a broad range of planets that host
hydrogen-dominated atmospheres as well as for performing atmospheric evolution
calculations. Aims: We aim to study the applicability range of the EL
approximation. Methods: We revise the EL formalism and its assumptions. We also
compare its results with those of hydrodynamic simulations, employing a grid
covering planets with masses, radii, and equilibrium temperatures ranging
between 1 $M_{\oplus}$ and 39 $M_{\oplus}$, 1 $R_{\oplus}$ and 10 $R_{\oplus}$,
and 300 and 2000 K, respectively. Results: Within the grid boundaries, we find
that the EL approximation gives a correct order of magnitude estimate for
mass-loss rates for about 76% of the planets, but there can be departures from
hydrodynamic simulations by up to three orders of magnitude in individual
cases. Furthermore, we find that planets for which the mass-loss rates are
correctly estimated by the EL approximation to within one order of magnitude
have intermediate gravitational potentials as well as low-to-intermediate
equilibrium temperatures and irradiation fluxes of extreme ultraviolet and
X-ray radiation. However, for planets with low or high gravitational
potentials, or high equilibrium temperatures and irradiation fluxes, the
approximation fails in most cases. Conclusions: The EL approximation should not
be used for planetary evolution calculations that require computing mass-loss
rates for planets that cover a broad parameter space. In this case, it is very
likely that the EL approximation would at times return mass-loss rates of up to
several orders of magnitude above or below those predicted by hydrodynamic
simulations. For planetary atmospheric evolution calculations, interpolation
routines or approximations based on grids of hydrodynamic models should be used
instead.
|
Recently, much work has been done to investigate Galois module structure of
local field extensions, particularly through the use of Galois scaffolds. Given
a totally ramified $p$-extension of local fields $L/K$, a Galois scaffold gives
us a $K$-basis for $K[G]$ whose effect on the valuation of elements of $L$ is
easy to determine. In 2013, N.P. Byott and G.G. Elder gave sufficient
conditions for the existence of Galois scaffolds for cyclic extensions of
degree $p^2$ in characteristic $p$. We take their work and adapt it to cyclic
extensions of degree $p^2$ in characteristic $0$.
|
The discoveries of high-temperature superconductivity in H3S and LaH10 have
excited the search for superconductivity in compressed hydrides. In contrast to
rapidly expanding theoretical studies, high-pressure experiments on hydride
superconductors are expensive and technically challenging. Here we
experimentally discover superconductivity in two new phases, Fm-3m-CeH10 (SC-I
phase) and P63/mmc-CeH9 (SC-II phase) at pressures that are much lower (<100
GPa) than those needed to stabilize other polyhydride superconductors.
Superconductivity was evidenced by a sharp drop of the electrical resistance to
zero, and by the decrease of the critical temperature in deuterated samples and
in an external magnetic field. SC-I has Tc=115 K at 95 GPa, showing the expected
decrease on further compression due to a decrease of the electron-phonon coupling
(EPC) coefficient {\lambda} (from 2.0 at 100 GPa to 0.8 at 200 GPa). SC-II has
Tc = 57 K at 88 GPa, rapidly increasing to a maximum Tc ~100 K at 130 GPa, and
then decreasing on further compression. This maximum of Tc is due to a maximum
of {\lambda} at the phase transition from P63/mmc-CeH9 into a symmetry-broken
modification C2/c-CeH9. The pressure-temperature conditions of synthesis affect
the actual hydrogen content, and the actual value of Tc. Anomalously low
pressures of stability of cerium superhydrides make them appealing for studies
of superhydrides and for designing new superhydrides with even lower pressures
of stability.
|
We consider a linear non-local heat equation in a bounded domain
$\Omega\subset\mathbb{R}^d$, $d\geq 1$, with Dirichlet boundary conditions,
where the non-locality is given by the presence of an integral kernel.
Motivated by several applications in biological systems, in the present paper
we study some optimal control problems from a theoretical and numerical point
of view. In particular, we will employ the classical low-regret approach of
J.-L. Lions for treating the problem of incomplete data and provide a simple
computational implementation of the method. The effectiveness of the results
is illustrated by several examples.
|
Gardner conjectured that if two bounded measurable sets $A,B \subset
\mathbb{R}^n$ are equidecomposable by a set of isometries $\Gamma$ generating
an amenable group then $A$ and $B$ admit a measurable equidecomposition by all
isometries. Cie\'sla and Sabok asked if there is a measurable equidecomposition
using isometries only in the group generated by $\Gamma$. We answer this
question negatively.
|
Given a Riemannian manifold $M,$ and an open interval $I\subset\mathbb{R},$
we characterize nontrivial totally umbilical hypersurfaces of the product
$M\times I$ -- as well as of warped products $I\times_\omega M$ -- as those
which are local graphs built on isoparametric families of totally umbilical
hypersurfaces of $M.$ By means of this characterization, we fully extend to
$\mathbb{S}^n\times\mathbb{R}$ and $\mathbb{H}^n\times\mathbb{R}$ the results
by Souam and Toubiana on the classification of totally umbilical hypersurfaces
of $\mathbb{S}^2\times\mathbb{R}$ and $\mathbb{H}^2\times\mathbb{R}.$ It is
also shown that an analogous classification holds for arbitrary warped products
$I\times_\omega\mathbb{S}^n$ and $I\times_\omega\mathbb{H}^n.$
|
Zero-shot learning, the task of learning to recognize new classes not seen
during training, has received considerable attention in the case of 2D image
classification. However, despite the increasing ubiquity of 3D sensors, the
corresponding 3D point cloud classification problem has not been meaningfully
explored and introduces new challenges. In this paper, we identify some of the
challenges and apply 2D Zero-Shot Learning (ZSL) methods in the 3D domain to
analyze the performance of existing models. Then, we propose a novel approach
to address the issues specific to 3D ZSL. We first present an inductive ZSL
process and then extend it to the transductive ZSL and Generalized ZSL (GZSL)
settings for 3D point cloud classification. To this end, a novel loss function
is developed that simultaneously aligns seen semantics with point cloud
features and takes advantage of unlabeled test data to address some known
issues (e.g., the problems of domain adaptation, hubness, and data bias). While
designed for the particularities of 3D point cloud classification, the method
is shown to also be applicable to the more common use-case of 2D image
classification. An extensive set of experiments is carried out, establishing
state-of-the-art for ZSL and GZSL on synthetic (ModelNet40, ModelNet10, McGill)
and real (ScanObjectNN) 3D point cloud datasets.
|
We prove that it is consistent that Club Stationary Reflection and the
Special Aronszajn Tree Property simultaneously hold on $\omega_2$, thereby
contributing to the study of the tension between compactness and incompactness
in set theory. The poset which produces the final model follows the collapse of
a weakly compact cardinal first with an iteration of club adding (with
anticipation) and second with an iteration specializing Aronszajn trees.
In the first part of the paper, we prove a general theorem about specializing
Aronszajn trees after forcing with what we call $\mathcal{F}_{WC}$-Strongly
Proper posets. This type of poset, of which the Levy collapse is a degenerate
example, uses systems of exact residue functions to create many strongly
generic conditions. We prove a new result about stationary set preservation by
quotients of this kind of poset; as a corollary, we show that the original
Laver-Shelah model satisfies a strong stationary reflection principle, though
it fails to satisfy the full Club Stationary Reflection. In the second part, we
show that the composition of collapsing and club adding (with anticipation) is
an $\mathcal{F}_{WC}$-Strongly Proper poset. After proving a new result about
Aronszajn tree preservation, we show how to obtain the final model.
|
In this article, we prove Talenti's comparison theorem for the Poisson equation
on complete noncompact Riemannian manifolds with nonnegative Ricci curvature.
Furthermore, we obtain the Faber-Krahn inequality for the first eigenvalue of
the Dirichlet Laplacian, the $L^1$- and $L^\infty$-moment spectrum, and, in
particular, the Saint-Venant theorem for torsional rigidity and a reverse
H\"older inequality for eigenfunctions of the Dirichlet Laplacian.
|
IoT systems have been facing increasingly sophisticated technical problems
due to the growing complexity of these systems and their fast deployment
practices. Consequently, IoT managers have to judiciously detect failures
(anomalies) in order to reduce their cyber risk and operational cost. While
there is a rich literature on anomaly detection in many IoT-based systems,
there is no existing work that documents the use of ML models for anomaly
detection in digital agriculture and in smart manufacturing systems. These two
application domains pose certain salient technical challenges. First, in
agriculture, the data is often sparse, due to the vast areas of farms and the
requirement to keep the cost of monitoring low. Second, in both domains, there
are multiple types of sensors with varying capabilities and costs. The sensor
data characteristics change with the operating point of the environment or
machines, such as the RPM of a motor. The inferencing and the anomaly detection
processes therefore have to be calibrated for the operating point.
In this paper, we analyze data from sensors deployed in an agricultural farm
with data from seven different kinds of sensors, and from an advanced
manufacturing testbed with vibration sensors. We evaluate the performance of
ARIMA and LSTM models for predicting the time series of sensor data. Then,
considering the sparse data from one kind of sensor, we perform transfer
learning from a high data rate sensor. We then perform anomaly detection using
the predicted sensor data. Taken together, we show how in these two application
domains, predictive failure classification can be achieved, thus paving the way
for predictive maintenance.
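A common way to turn forecast residuals into anomaly flags is a robust threshold on the prediction error; the following sketch is a generic post-processing step and not necessarily the detection rule or per-operating-point calibration used in the paper.

```python
import numpy as np

def residual_anomalies(actual, predicted, k=3.0):
    """Flag time steps whose forecast residual exceeds k robust standard
    deviations (median absolute deviation based), a generic sketch of
    residual-based anomaly detection after ARIMA/LSTM forecasting."""
    resid = np.asarray(actual, dtype=float) - np.asarray(predicted, dtype=float)
    med = np.median(resid)
    mad = np.median(np.abs(resid - med)) + 1e-12
    robust_std = 1.4826 * mad                      # MAD-to-sigma scaling
    return np.abs(resid - med) > k * robust_std    # boolean anomaly mask
```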
|
Ultraperipheral collisions of high energy protons are a source of
approximately real photons colliding with each other. Photon fusion can result
in production of yet unknown charged particles in very clean events. The
cleanliness of such an event is due to the requirement that the protons survive
during the collision. The finite sizes of the protons reduce the probability of
such an outcome compared to point-like particles. We calculate the survival
factors and cross sections for the production of heavy charged particles at the
Large Hadron Collider.
|
Animals are capable of extreme agility, yet understanding their complex
dynamics, which have ecological, biomechanical and evolutionary implications,
remains challenging. Being able to study this incredible agility will be
critical for the development of next-generation autonomous legged robots. In
particular, the cheetah (Acinonyx jubatus) is supremely fast and maneuverable,
yet quantifying its whole-body 3D kinematic data during locomotion in the wild
remains a challenge, even with new deep learning-based methods. In this work we
present an extensive dataset of free-running cheetahs in the wild, called
AcinoSet, that contains 119,490 frames of multi-view synchronized high-speed
video footage, camera calibration files and 7,588 human-annotated frames. We
utilize markerless animal pose estimation to provide 2D keypoints. Then, we use
three methods that serve as strong baselines for 3D pose estimation tool
development: traditional sparse bundle adjustment, an Extended Kalman Filter,
and a trajectory optimization-based method we call Full Trajectory Estimation.
The resulting 3D trajectories, human-checked 3D ground truth, and an
interactive tool to inspect the data are also provided. We believe this dataset
will be useful for a diverse range of fields such as ecology, neuroscience,
robotics, biomechanics as well as computer vision.
|
We consider gauged U(1) extensions of the standard model of particle physics
with three right-handed sterile neutrinos and a singlet scalar. The neutrinos
obtain mass via the type I seesaw mechanism. We compute the one loop
corrections to the elements of the tree level mass matrix of the light
neutrinos and show explicitly the cancellation of the gauge dependent terms. We
present a general formula for the gauge-independent, finite one-loop
corrections for an arbitrary number of new U(1) groups, new complex scalars,
and sterile neutrinos. We estimate the size of the corrections relative to the tree
level mass matrix in a particular extension, the super-weak model.
|
Reliably assessing the error in an estimated vehicle position is integral for
ensuring the vehicle's safety in urban environments. Many existing approaches
use GNSS measurements to characterize protection levels (PLs) as probabilistic
upper bounds on the position error. However, GNSS signals might be reflected or
blocked in urban environments, and thus additional sensor modalities need to be
considered to determine PLs. In this paper, we propose a novel approach for
computing PLs by matching camera image measurements to a LiDAR-based 3D map of
the environment. We specify a Gaussian mixture model probability distribution
of position error using deep neural network-based data-driven models and
statistical outlier weighting techniques. From the probability distribution, we
compute the PLs by evaluating the position error bound using numerical
line-search methods. Through experimental validation with real-world data, we
demonstrate that the PLs computed from our method are reliable bounds on the
position error in urban environments.
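The final bound-evaluation step can be sketched for a one-dimensional Gaussian mixture error model; the function below and its integrity-risk parameter are illustrative, and constructing the mixture from the camera-to-map matching and outlier weighting is not shown.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def protection_level(weights, means, sigmas, integrity_risk=1e-5):
    """Smallest bound b with P(|position error| > b) <= integrity_risk under a
    1-D Gaussian mixture error model (sketch of the bound-evaluation step only)."""
    weights, means, sigmas = map(np.asarray, (weights, means, sigmas))

    def excess_risk(b):
        # Two-sided tail probability of the mixture minus the allowed risk.
        tail = np.sum(weights * (norm.cdf(-b, means, sigmas)
                                 + 1.0 - norm.cdf(b, means, sigmas)))
        return tail - integrity_risk

    b_hi = 100.0 * float(np.max(np.abs(means)) + np.max(sigmas))
    return brentq(excess_risk, 0.0, b_hi)   # root of the monotone tail function
```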
|
By following the paradigm of the global quantisation,
instead of the analysis under changes of coordinates, in this work we establish
a global analysis for the explicit computation of the Dixmier trace and the
Wodzicki residue of (elliptic and subelliptic) pseudo-differential operators on
compact Lie groups. The regularised determinant for the Dixmier trace is also
computed. We obtain these formulae in terms of the global symbol of the
corresponding operators. In particular, our approach links the Dixmier trace
and Wodzicki residue to the representation theory of the group. Although we
start by analysing the case of compact Lie groups, we also compute the Dixmier
trace and its regularised determinant on arbitrary closed manifolds $M$, for
the class of invariant pseudo-differential operators in terms of their
matrix-valued symbols. This analysis includes e.g. the family of positive and
elliptic pseudo-differential operators on $M$.
|
Recently, a universal formula for the Nicolai map in terms of a coupling flow
functional differential operator was found. We present the full perturbative
expansion of this operator in Yang-Mills theories where supersymmetry is
realized off-shell. Given this expansion, we develop a straightforward method
to compute the explicit Nicolai map to any order in the gauge coupling. Our
work extends the previously known construction method from the Landau gauge to
arbitrary gauges and from the gauge hypersurface to the full gauge-field
configuration space. As an example, we present the map in the axial gauge to
the second order.
|
In this paper, we investigate dynamic resource scheduling (i.e., joint user,
subchannel, and power scheduling) for downlink multi-channel non-orthogonal
multiple access (MC-NOMA) systems over time-varying fading channels.
Specifically, we address the weighted average sum rate maximization problem
with quality-of-service (QoS) constraints. In particular, to facilitate fast
resource scheduling, we focus on developing a very low-complexity algorithm. To
this end, by leveraging Lagrangian duality and the stochastic optimization
theory, we first develop an opportunistic MC-NOMA scheduling algorithm whereby
the original problem is decomposed into a series of subproblems, one for each
time slot. Accordingly, resource scheduling works in an online manner by
solving one subproblem per time slot, making it more applicable to practical
systems. Then, we further develop a heuristic joint subchannel assignment and
power allocation (Joint-SAPA) algorithm with very low computational complexity,
called Joint-SAPA-LCC, that solves each subproblem. Finally, through
simulation, we show that our Joint-SAPA-LCC algorithm provides good performance
comparable to the existing Joint-SAPA algorithms despite requiring much lower
computational complexity. We also demonstrate that our opportunistic MC-NOMA
scheduling algorithm in which the Joint-SAPA-LCC algorithm is embedded works
well while satisfying given QoS requirements.
|
The charmonium-like exotic states $Y(4230)$ and the less well-known $Y(4320)$,
produced in $e^+e^-$ collisions, are sources of positive-parity exotic hadrons
in association with photons or pseudoscalar mesons. We analyze the radiative
and pion decay channels in the compact tetraquark scheme, with a method that
proves to work equally well in the most studied $D^*\to \gamma/\pi+D$ decays.
The decay of the vector $Y$ into a pion and a $Z_c$ state requires a flip of
charge conjugation and isospin that is described appropriately in the formalism
used. Rates however are found to depend on the fifth power of pion momentum
which would make the final states $\pi Z_c(4020)$ strongly suppressed with
respect to $\pi Z_c(3900)$. The agreement with BES III data would be improved
considering the $\pi Z_c(4020)$ events to be fed by the tail of the $Y(4320)$
resonance under the $Y(4230)$. These results should renew interest in
further clarifying the emerging experimental picture in this mass region.
|
We propose a fully convolutional multi-person pose estimation framework using
dynamic instance-aware convolutions, termed FCPose. Different from existing
methods, which often require ROI (Region of Interest) operations and/or
grouping post-processing, FCPose eliminates the ROIs and grouping
post-processing with dynamic instance-aware keypoint estimation heads. The
dynamic keypoint heads are conditioned on each instance (person), and can
encode the instance concept in the dynamically-generated weights of their
filters. Moreover, with the strong representation capacity of dynamic
convolutions, the keypoint heads in FCPose are designed to be very compact,
resulting in fast inference and making FCPose have almost constant inference
time regardless of the number of persons in the image. For example, on the COCO
dataset, a real-time version of FCPose using the DLA-34 backbone infers about
4.5x faster than Mask R-CNN (ResNet-101) (41.67 FPS vs. 9.26 FPS) while
achieving improved performance. FCPose also offers better speed/accuracy
trade-off than other state-of-the-art methods. Our experiment results show that
FCPose is a simple yet effective multi-person pose estimation framework. Code
is available at: https://git.io/AdelaiDet
|
We discuss an approach to the computer assisted proof of the existence of
branches of stationary and periodic solutions for dissipative PDEs, using the
Brusselator system with diffusion and Dirichlet boundary conditions as an
example. We also consider the case where the branch of periodic solutions
emanates from a branch of stationary solutions through a Hopf bifurcation.
|
This article presents a randomized matrix-free method for approximating the
trace of $f({\bf A})$, where ${\bf A}$ is a large symmetric matrix and $f$ is a
function analytic in a closed interval containing the eigenvalues of ${\bf A}$.
Our method uses a combination of stochastic trace estimation (i.e.,
Hutchinson's method), Chebyshev approximation, and multilevel Monte Carlo
techniques. We establish general bounds on the approximation error of this
method by extending an existing error bound for Hutchinson's method to
multilevel trace estimators. Numerical experiments are conducted for common
applications such as estimating the log-determinant, nuclear norm, and Estrada
index, and triangle counting in graphs. We find that using multilevel
techniques can substantially reduce the variance of existing single-level
estimators.
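For reference, the single-level Hutchinson building block that the multilevel estimator refines looks as follows; the Chebyshev approximation of f and the multilevel variance reduction are omitted from this sketch.

```python
import numpy as np

def hutchinson_trace(apply_fA, n, num_samples=64, rng=None):
    """Plain single-level Hutchinson estimator of tr(f(A)) (baseline sketch).

    apply_fA : callable returning f(A) @ v for a length-n vector v
    n        : dimension of the symmetric matrix A
    """
    rng = rng or np.random.default_rng()
    total = 0.0
    for _ in range(num_samples):
        v = rng.choice([-1.0, 1.0], size=n)     # Rademacher probe vector
        total += v @ apply_fA(v)                # E[v^T f(A) v] = tr(f(A))
    return total / num_samples
```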
|
Computational Social Science (CSS), aiming at utilizing computational methods
to address social science problems, is a recently emerging and fast-developing
field. The study of CSS is data-driven and significantly benefits from the
availability of online user-generated contents and social networks, which
contain rich text and network data for investigation. However, these
large-scale and multi-modal data also present researchers with a great
challenge: how to represent data effectively to mine the meanings we want in
CSS? To explore the answer, we give a thorough review of data representations
in CSS for both text and network. Specifically, we summarize existing
representations into two schemes, namely symbol-based and embedding-based
representations, and introduce a series of typical methods for each scheme.
Afterwards, we present the applications of the above representations based on
the investigation of more than 400 research articles from 6 top venues involved
with CSS. From the statistics of these applications, we unearth the strengths of
each kind of representation and observe that embedding-based representations
have been emerging and attracting increasing attention over the last decade.
Finally, we discuss several key challenges and open issues for future
directions. This survey aims to provide a deeper understanding and more
advisable applications of data representations for CSS researchers.
|
In this paper, we develop a safe decision-making method for self-driving cars
in a multi-lane, single-agent setting. The proposed approach utilizes deep
reinforcement learning (RL) to achieve a high-level policy for safe tactical
decision-making. We address two major challenges that arise solely in
autonomous navigation. First, the proposed algorithm ensures that collisions
never happen, and therefore accelerates the learning process. Second, the
proposed algorithm takes into account the unobservable states in the
environment. These states appear mainly due to the unpredictable behavior of
other agents, such as cars and pedestrians, and make the Markov Decision
Process (MDP) problematic when dealing with autonomous navigation. Simulations
from a well-known self-driving car simulator demonstrate the applicability of
the proposed method.
|
We present a simple lithographic method for fabrication of microresonator
devices at the optical fiber surface. First, we undress the predetermined
surface areas of a fiber segment from the polymer coating with a focused CO2
laser beam. Next, using the remaining coating as a mask, we etch the fiber in a
hydrofluoric acid solution. Finally, we completely undress the fiber segment
from coating to create a chain of silica bottle microresonators with nanoscale
radius variation (SNAP microresonators). We demonstrate the developed method by
fabrication of a chain of five 1 mm long and 30 nm high microresonators at the
surface of a 125 micron diameter optical fiber and a single 0.5 mm long and 291
nm high microresonator at the surface of a 38 micron diameter fiber. As another
application, we fabricate a rectangular 5 mm long SNAP microresonator at the
surface of a 38 micron diameter fiber and investigate its performance as a
miniature delay line. The propagation of a 100 ps pulse with 1 ns delay, 0.035c
velocity, and negligible dispersion is demonstrated. In contrast to the
previously developed approaches in SNAP technology, the developed method allows
the introduction of much larger fiber radius variation ranging from nanoscale
to microscale.
|
This study follows many classical approaches to multi-object tracking (MOT)
that model the problem using dynamic graphical data structures, and adapts this
formulation to make it amenable to modern neural networks. Our main
contributions in this work are the creation of a framework based on dynamic
undirected graphs that represent the data association problem over multiple
timesteps, and a message passing graph neural network (MPNN) that operates on
these graphs to produce the desired likelihood for every association therein.
We also provide solutions and propositions for the computational problems that
need to be addressed to create a memory-efficient, real-time, online algorithm
that can reason over multiple timesteps, correct previous mistakes, update
beliefs, and handle missed/false detections. To demonstrate the efficacy of our
approach, we only use the 2D box location and object category ID to construct
the descriptor for each object instance. Despite this, our model performs on
par with state-of-the-art approaches that make use of additional sensors, as
well as multiple hand-crafted and/or learned features. This illustrates that
given the right problem formulation and model design, raw bounding boxes (and
their kinematics) from any off-the-shelf detector are sufficient to achieve
competitive tracking results on challenging MOT benchmarks.
|
Model-based investing using financial factors is evolving into a principal
method for quantitative investment. The main challenge lies in the selection of
effective factors towards excess market returns. Existing approaches, either
hand-picking factors or applying feature selection algorithms, do not
orchestrate both human knowledge and computational power. This paper presents
iQUANT, an interactive quantitative investment system that assists equity
traders to quickly spot promising financial factors from initial
recommendations suggested by algorithmic models, and conduct a joint refinement
of factors and stocks for investment portfolio composition. We work closely
with professional traders to assemble empirical characteristics of "good"
factors and propose effective visualization designs to illustrate the
collective performance of financial factors, stock portfolios, and their
interactions. We evaluate iQUANT through a formal user study, two case studies,
and expert interviews, using a real stock market dataset consisting of 3000
stocks times 6000 days times 56 factors.
|