This paper deals with perturbation theory for discrete spectra of linear
operators. To simplify the exposition, we consider here self-adjoint operators. This
theory is based on the Feshbach-Schur map and has three advantages over
standard perturbation theory: (a) it readily produces
rigorous estimates on eigenvalues and eigenfunctions with explicit constants;
(b) it is compact and elementary (it uses properties of norms and the
fundamental theorem of algebra about solutions of polynomial equations); and
(c) it is based on a self-contained formulation of a fixed point problem for
the eigenvalues and eigenfunctions, allowing for easy iterations. We apply our
abstract results to obtain rigorous bounds on the ground states of Helium-type
ions.
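For orientation, the Feshbach-Schur map underlying the method takes the
following standard form (our notation, a sketch of the general construction
rather than the paper's exact setup): for a projection $P$ onto a trial
eigenspace, with $Q=1-P$ and $Q(H-\lambda)Q$ invertible on $\mathrm{Ran}\,Q$,
\[
F_P(H-\lambda) \,=\, P(H-\lambda)P \,-\, PHQ\bigl(Q(H-\lambda)Q\bigr)^{-1}QHP\,,
\]
and $\lambda$ is an eigenvalue of $H$ exactly when $F_P(H-\lambda)$ has a
nontrivial kernel on $\mathrm{Ran}\,P$; this equivalence yields the fixed point
problem for eigenvalues and eigenfunctions mentioned in (c).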
|
We prove various low-energy decomposition results, showing that we can
decompose a finite set $A\subset \mathbb{F}_p$ satisfying $|A|<p^{5/8}$ into
$A = S\sqcup T$ so that, for a non-degenerate quadratic $f\in
\mathbb{F}_p[x,y]$, we have
\[ |\{(s_1,s_2,s_3,s_4)\in S^4 : s_1 + s_2 = s_3 + s_4\}| \ll |A|^{3 -
\frac15 + \varepsilon}
\] and
\[
|\{(t_1,t_2,t_3,t_4)\in T^4 : f(t_1, t_2) = f(t_3, t_4)\}|\ll |A|^{3 -
\frac15 + \varepsilon}\,.
\]
Variations include extending this result to large $A$ and a low-energy
decomposition involving additive energy of images of rational functions. This
gives a quantitative improvement to a result of Roche-Newton, Shparlinski and
Winterhof as well as a generalisation of a result of Rudnev, Shkredov and
Stevens.
We consider applications to conditional expanders, exponential sum estimates
and the finite field Littlewood problem. In particular, we improve results of
Mirzaei, Swaenepoel and Winterhof and Garcia.
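For concreteness, the additive energy bounded above can be computed directly in
small cases; a brute-force sketch (the prime $p$ and the toy set are
placeholders):

```python
from collections import Counter

def additive_energy(S, p):
    """Count quadruples (s1, s2, s3, s4) in S^4 with s1 + s2 = s3 + s4 (mod p)."""
    r = Counter((a + b) % p for a in S for b in S)  # r[x] = #{(a, b) : a + b = x}
    return sum(m * m for m in r.values())           # E+(S) = sum_x r[x]^2

print(additive_energy(range(10), 101))  # toy example in F_101
```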
|
To improve the viewer's Quality of Experience (QoE) and optimize computer
graphics applications, 3D model quality assessment (3D-QA) has become an
important task in the multimedia area. Point cloud and mesh are the two most
widely used digital representation formats of 3D models, the visual quality of
which is quite sensitive to lossy operations like simplification and
compression. Therefore, many related studies such as point cloud quality
assessment (PCQA) and mesh quality assessment (MQA) have been carried out to
measure the resulting visual quality degradations. However, a large part of
previous studies utilize full-reference (FR) metrics, meaning they may fail to
predict the quality level in the absence of the reference 3D model.
Furthermore, few 3D-QA metrics consider color information, which significantly
restricts their effectiveness and scope of application. In
this paper, we propose a no-reference (NR) quality assessment metric for
colored 3D models represented by both point cloud and mesh. First, we project
the 3D models from 3D space into quality-related geometry and color feature
domains. Then, the natural scene statistics (NSS) and entropy are utilized to
extract quality-aware features. Finally, the Support Vector Regressor (SVR) is
employed to regress the quality-aware features into quality scores. Our method
is mainly validated on the colored point cloud quality assessment database
(SJTU-PCQA) and the colored mesh quality assessment database (CMDM). The
experimental results show that the proposed method outperforms all the
state-of-the-art NR 3D-QA metrics and comes acceptably close to the
state-of-the-art FR 3D-QA metrics.
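A minimal sketch of the feature-and-regress pipeline described above; the
specific statistics, bin count, and toy labels are our assumptions, not the
paper's exact design:

```python
import numpy as np
from scipy.stats import entropy, kurtosis, skew
from sklearn.svm import SVR

def quality_features(values):
    """NSS-style and entropy features from one projected feature map
    (e.g., luminance or curvature values of a 3D model projection)."""
    hist, _ = np.histogram(values, bins=64, density=True)
    return np.array([
        np.mean(values), np.std(values),  # first and second moments
        skew(values), kurtosis(values),   # distribution-shape statistics
        entropy(hist + 1e-12),            # information entropy of the histogram
    ])

rng = np.random.default_rng(0)
X = np.stack([quality_features(rng.normal(size=5000)) for _ in range(50)])
y = rng.uniform(1, 5, size=50)            # placeholder subjective quality scores
model = SVR(kernel="rbf").fit(X, y)       # regress features into quality scores
print(model.predict(X[:3]))
```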
|
In the highly non-equilibrium conditions of laser-induced spin dynamics,
magnetic moments can only be obtained from spectral information, most
commonly from the spectroscopy of semi-core states using the so-called x-ray
magnetic circular dichroism (XMCD) sum rules. The validity of these sum
rules in tracking femtosecond spin dynamics remains, however, an open question.
Employing the time-dependent extension of density functional theory (TD-DFT) we
compare spectroscopically obtained moments with those directly calculated from
the TD-DFT densities. We find that for experimentally typical pump pulses these
two very distinct routes to the spin moment are, for Co and Ni, in excellent
agreement, validating the experimental approach. However, for short and intense
pulses or high fluence pulses of long duration the XMCD sum rules fail, with
errors exceeding 50\%. This failure persists only during the pulse and occurs
when the pump pulse excites charge out of the $d$-band and into $sp$-character
bands, invalidating the semi-core to $d$-state transitions assumed by the XMCD
sum rules.
|
Under stereo settings, the problems of image super-resolution (SR) and
disparity estimation are interrelated, in that the result of each could
help solve the other. The effective exploitation of correspondence between
different views facilitates SR performance, while high-resolution (HR)
features with richer details benefit correspondence estimation. Motivated by
this, we propose a Stereo Super-Resolution and Disparity
Estimation Feedback Network (SSRDE-FNet), which simultaneously handles
stereo image super-resolution and disparity estimation in a unified framework
and lets the two tasks interact to further improve each other's performance.
Specifically, the SSRDE-FNet is composed of two dual recursive sub-networks for
left and right views. Besides the cross-view information exploitation in the
low-resolution (LR) space, HR representations produced by the SR process are
utilized to perform HR disparity estimation with higher accuracy, through which
the HR features can be aggregated to generate a finer SR result. Afterward, the
proposed HR Disparity Information Feedback (HRDIF) mechanism delivers
information carried by HR disparity back to previous layers to further refine
the SR image reconstruction. Extensive experiments demonstrate the
effectiveness and superiority of SSRDE-FNet.
|
The open string field theory of Witten (SFT) has a close formal similarity
with Chern-Simons theory in three dimensions. This similarity is due to the
fact that the former theory has concepts corresponding to forms, exterior
derivative, wedge product and integration over the manifold. In this paper, we
introduce the interior product and the Lie derivative in the $KBc$ subsector of
SFT. The interior product in SFT is specified by a two-component "tangent
vector" and lowers the ghost number by one (like the ordinary interior product
maps a $p$-form to $(p-1)$-form). The Lie derivative in SFT is defined as the
anti-commutator of the interior product and the BRST operator. The important
property of these two operations is that they respect the $KBc$ algebra.
Deforming the original $(K,B,c)$ by using the Lie derivative, we can consider
infinitely many copies of the $KBc$ algebra, which we call the $KBc$ manifold. As
an application, we construct the Wilson line on the manifold, which could play
a role in reproducing degenerate fluctuation modes around a multi-brane
solution.
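In our own notation (a sketch, not necessarily the authors' conventions), the
definition mirrors Cartan's magic formula: for a two-component tangent vector
$V$, the Lie derivative is the anti-commutator
\[
\mathcal{L}_V \,\equiv\, \{i_V, Q_{\rm B}\} \,=\, i_V Q_{\rm B} + Q_{\rm B} i_V\,,
\]
with the BRST operator $Q_{\rm B}$ playing the role of the exterior derivative
$d$ in the ordinary formula $\mathcal{L}_X = d\,\iota_X + \iota_X\, d$.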
|
A common approach to solving physical reasoning tasks is to train a value
learner on example tasks. A limitation of such an approach is that it requires
learning about object dynamics solely from reward values assigned to the final
state of a rollout of the environment. This study aims to address this
limitation by augmenting the reward value with self-supervised signals about
object dynamics. Specifically, we train the model to characterize the
similarity of two environment rollouts, jointly with predicting the outcome of
the reasoning task. This similarity can be defined as a distance measure
between the trajectory of objects in the two rollouts, or learned directly from
pixels using a contrastive formulation. Empirically, we find that this approach
leads to substantial performance improvements on the PHYRE benchmark for
physical reasoning (Bakhtin et al., 2019), establishing a new state-of-the-art.
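For the pixel-based variant, a minimal sketch of a standard InfoNCE-style
contrastive loss over rollout embeddings (the encoder and the loss weight are
assumptions; the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def contrastive_rollout_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style loss: z_a[i] and z_b[i] embed two rollouts that should be
    judged similar; all other pairs in the batch act as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature        # (N, N) cosine-similarity matrix
    targets = torch.arange(z_a.size(0))         # positives on the diagonal
    return F.cross_entropy(logits, targets)

# joint objective (the 0.5 weight is an arbitrary placeholder):
# loss = outcome_loss + 0.5 * contrastive_rollout_loss(enc(roll_1), enc(roll_2))
```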
|
Network spaces are known to be a critical factor both in handcrafted
network design and in defining search spaces for Neural Architecture Search (NAS).
However, an effective space requires tremendous prior knowledge and/or manual
effort, and additional constraints are required to discover efficiency-aware
architectures. In this paper, we define a new problem, Network Space Search
(NSS), as searching for favorable network spaces instead of a single
architecture. We propose an NSS method to directly search for efficiency-aware
network spaces automatically, reducing the manual effort and immense cost in
discovering satisfactory ones. The resultant network spaces, named Elite
Spaces, are discovered from Expanded Search Space with minimal human expertise
imposed. The Pareto-efficient Elite Spaces are aligned with the Pareto front
under various complexity constraints and can further serve as NAS search
spaces, benefiting differentiable NAS approaches (e.g., on CIFAR-100, an
average 2.3% lower error rate and 3.7% closer adherence to the target
constraint than the baseline, with around 90% fewer samples required to find
satisfactory networks).
Moreover, our NSS approach is capable of searching for superior spaces in
future unexplored spaces, revealing great potential in searching for network
spaces automatically. Website: https://minhungchen.netlify.app/publication/nss.
|
This paper describes MagicPai's system for SemEval 2021 Task 7, HaHackathon:
Detecting and Rating Humor and Offense. This task aims to detect whether the
text is humorous and how humorous it is. There are four subtasks in the
competition. In this paper, we mainly present our solution, a multi-task
learning model based on adversarial examples, for subtasks 1a and 1b. More
specifically, we first vectorize the cleaned dataset and add perturbations
to obtain more robust embedding representations. We then correct the loss via
the confidence level. Finally, we perform interactive joint learning on
multiple tasks to capture the relationship between whether the text is humorous
and how humorous it is. The final result shows the effectiveness of our system.
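As an illustration, a minimal sketch of one common way to build adversarial
examples in embedding space (an FGM-style perturbation; the system's exact
scheme is an assumption on our part):

```python
import torch

def fgm_perturb(embeddings, loss, epsilon=1.0):
    """Step along the normalized gradient of the loss with respect to the
    embeddings, producing an adversarial example used for robust training."""
    grad, = torch.autograd.grad(loss, embeddings, retain_graph=True)
    norm = grad.norm()
    return embeddings + epsilon * grad / norm if norm > 0 else embeddings

# usage: the loss is recomputed on fgm_perturb(emb, loss) and added to the
# clean loss before the optimizer step (emb must require gradients)
```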
|
Mobile edge computing (MEC) has become a promising solution to utilize
distributed computing resources for supporting computation-intensive vehicular
applications in dynamic driving environments. To facilitate this paradigm, the
onsite resource trading serves as a critical enabler. However, dynamic
communication and resource conditions could introduce unpredictable trading
latency, trading failures, and unfair pricing into the conventional resource
trading process. To overcome these challenges, we introduce a novel
futures-based resource trading approach in edge computing-enabled Internet of
Vehicles (IoV), where a forward contract is used to facilitate resource-trading
negotiations between an MEC server (seller) and a vehicle (buyer) in a
given future term. Through estimating the historical statistics of future
resource supply and network conditions, we formulate futures-based resource
trading as an optimization problem aiming to maximize the seller's and the
buyer's expected utility, while applying risk evaluations to mitigate possible
losses incurred by the uncertainties in the system. To tackle this problem, we
propose an efficient bilateral negotiation approach that helps the
participants reach a consensus. Extensive simulations demonstrate that the
proposed futures-based resource trading brings considerable utilities to both
participants, while significantly outperforming the baseline methods on
critical factors, e.g., trading failures, fairness, negotiation latency, and
cost.
|
The silicon pixel sensor is the core component of the vertex detector for the
Circular Electron Positron Collider~(CEPC). The JadePix3 is a full-function
large-size CMOS chip designed for the CEPC vertex detector. To test all the
functions and the performance of this chip, we designed a test system based on
the IPbus framework. The test system controls the parameters and monitors the
status of the pixel chip. By integrating the jumbo frame feature into the IPbus
suite, the block read/write speed is further extended in order to meet the
specifications of the JadePix3. The robustness, scalability, and portability of
this system have been verified by pulse, cosmic-ray, and laser tests in the
laboratory. This paper summarizes the DAQ and control system of the JadePix3
and presents the first results of the tests.
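A minimal sketch of a block read with the IPbus uHAL Python bindings; the
connection file and node names below are illustrative placeholders, not the
actual JadePix3 address map:

```python
import uhal

# connection file and device/node identifiers are placeholders
manager = uhal.ConnectionManager("file://connections.xml")
hw = manager.getDevice("jadepix3")

# queue a block read from a FIFO-like node, then execute it in one dispatch;
# large blocks are where the jumbo-frame (MTU > 1500) extension pays off
data = hw.getNode("data.fifo").readBlock(4096)
hw.dispatch()
print(len(data.value()))
```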
|
We study the thermal phase transitions of a generic real scalar field,
without a $Z_2$-symmetry, referred to variously as an inert, sterile or singlet
scalar, or $\phi^3+\phi^4$ theory. Such a scalar field arises in a wide range
of models, including as the inflaton, or as a portal to the dark sector. At
high temperatures, we perform dimensional reduction, matching to an effective
theory in three dimensions, which we then study both perturbatively to
three-loop order and on the lattice. For strong first-order transitions, with
large tree-level cubic couplings, our lattice Monte Carlo simulations agree
with perturbation theory within error. However, as the size of the cubic
coupling decreases, relative to the quartic coupling, perturbation theory
becomes less and less reliable, breaking down completely in the approach to the
$Z_2$-symmetric limit, in which the transition is of second order.
Notwithstanding, the renormalisation group is shown to significantly extend the
validity of perturbation theory. Throughout, our calculations are made as
explicit as possible so that this article may serve as a guide for similar
calculations in other theories.
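For concreteness, the generic real-scalar potential in question can be written
(in one common normalization; our conventions, not necessarily the authors') as
\[
V(\phi) \,=\, \frac{1}{2}m^{2}\phi^{2} \,+\, \frac{g}{3!}\,\phi^{3} \,+\, \frac{\lambda}{4!}\,\phi^{4}\,,
\]
so the strength of the first-order transition is controlled by the cubic
coupling $g$ relative to the quartic coupling $\lambda$, and the
$Z_2$-symmetric limit with a second-order transition corresponds to $g\to 0$.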
|
With increasing adoption of face recognition systems, it is important to
ensure adequate performance of these technologies across demographic groups.
Recently, phenotypes such as skin-tone have been proposed as superior
alternatives to traditional race categories when exploring performance
differentials. However, there is little consensus regarding how to
appropriately measure skin-tone in evaluations of biometric performance or in
AI more broadly. In this study, we explore the relationship between
face-area-lightness-measures (FALMs) estimated from images and ground-truth
skin readings collected using a device designed to measure human skin. FALMs
estimated from different images of the same individual varied significantly
relative to ground-truth FALM. This variation was only reduced by greater
control of acquisition (camera, background, and environment). Next, we compare
ground-truth FALM to Fitzpatrick Skin Types (FST) categories obtained using the
standard, in-person, medical survey and show FST is poorly predictive of
skin-tone. Finally, we show how noisy estimation of FALM leads to errors
selecting explanatory factors for demographic differentials. These results
demonstrate that measures of skin-tone for biometric performance evaluations
must come from objective, characterized, and controlled sources. Further,
despite this being a currently practiced approach, estimating FST categories
and FALMs from uncontrolled imagery does not provide an appropriate measure of
skin-tone.
|
The quantum gravity vacuum must contain virtual fluctuations of black hole
microstates. These extended-size fluctuations get `crushed' when a closed
trapped surface forms, and turn into on-shell `fuzzball' states that resolve
the information puzzle. We argue that these same fluctuations can get
`stretched' by the anti-trapped surfaces in an expanding cosmology, and that
this stretching generates vacuum energy. The stretching happens when the Hubble
deceleration decreases quickly, which happens whenever the pressure drops
quickly. We thus get an inflation-scale vacuum energy when the heavy GUT-scale
particles become nonrelativistic, and again a small vacuum energy when the
radiation phase turns to dust. The expansion law in the radiation phase does
not allow stretching, in agreement with the observed irrelevance of vacuum
energy in that phase. The extra energy induced when the radiation phase changes
to dust may explain the tension in the Hubble constant between low and high
redshift data.
|
Bloch states of electrons in honeycomb two-dimensional crystals with
multi-valley band structure and broken inversion symmetry have orbital magnetic
moments of a topological nature. In crystals with two degenerate valleys, a
perpendicular magnetic field lifts the valley degeneracy via a Zeeman effect
due to these magnetic moments, leading to magnetoelectric effects which can be
leveraged for creating valleytronic devices. In this work, we demonstrate that
trilayer graphene with Bernal stacking, (ABA TLG) hosts topological magnetic
moments with a large and widely tunable valley g-factor, reaching a value of 1050
at the extreme of the studied parameter range. The reported experiment
consists of sublattice-resolved scanning tunneling spectroscopy under
perpendicular electric and magnetic fields that control the TLG bands. The
tunneling spectra agree very well with the results of theoretical modeling that
includes the full details of the TLG tight-binding model and accounts for a
quantum-dot-like potential profile formed electrostatically under the scanning
tunneling microscope tip.
|
Time-dependent orbital-free DFT is an efficient method for calculating the
dynamic properties of large scale quantum systems due to the low computational
cost compared to standard time-dependent DFT. We formalize this method by
mapping the real system of interacting fermions onto a fictitious system of
non-interacting bosons. The dynamic Pauli potential and associated kernel
emerge as key ingredients of time-dependent orbital-free DFT. Using the uniform
electron gas as a model system, we derive an approximate frequency-dependent
Pauli kernel. Pilot calculations suggest that space nonlocality is a key
feature for this kernel. Nonlocal terms arise already in the second order
expansion with respect to unitless frequency and reciprocal space variable
($\frac{\omega}{q\, k_F}$ and $\frac{q}{2\, k_F}$, respectively). Given the
encouraging performance of the proposed kernel, we expect it will lead to more
accurate orbital-free DFT simulations of nanoscale systems out of equilibrium.
Additionally, the proposed path to formulating nonadiabatic Pauli kernels
opens several avenues for further improvement that can be exploited in future
work.
|
Recognizing human grasping strategies is an important factor in robot
teaching as these strategies contain the implicit knowledge necessary to
perform a series of manipulations smoothly. This study analyzed the effects of
object affordance (a prior distribution of grasp types for each object) on
convolutional neural network (CNN)-based grasp-type recognition. To this end,
we created datasets of first-person grasping-hand images labeled with grasp
types and object names, and tested a recognition pipeline leveraging object
affordance. We evaluated scenarios with real and illusory objects to be
grasped, to consider a teaching condition in mixed reality where the lack of
visual object information can make the CNN recognition challenging. The results
show that object affordance guided the CNN in both scenarios, increasing the
accuracy by 1) excluding unlikely grasp types from the candidates and 2)
enhancing likely grasp types. In addition, the "enhancing effect" was more
pronounced with high degrees of grasp-type heterogeneity. These results
indicate the effectiveness of object affordance for guiding grasp-type
recognition in robot teaching applications.
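One simple way to realize the guidance described above (our own toy sketch;
the paper's pipeline may combine the terms differently):

```python
import numpy as np

def affordance_guided_scores(cnn_probs, affordance_prior):
    """Reweight CNN softmax outputs by an object-specific grasp-type prior:
    a zero prior excludes a grasp type, a large prior enhances likely ones."""
    combined = cnn_probs * affordance_prior
    return combined / combined.sum()            # renormalize to a distribution

grasp_types = ["power", "precision", "lateral"]
cnn_probs = np.array([0.40, 0.35, 0.25])        # from the grasping-hand image
prior = np.array([0.7, 0.3, 0.0])               # object never grasped laterally
print(dict(zip(grasp_types, affordance_guided_scores(cnn_probs, prior))))
```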
|
The field of two-dimensional topological semimetals, which emerged at the
intersection of two-dimensional materials and topological materials, has been
rapidly developing in recent years. In this article, we briefly review the
progress in this field. Our focus is on the basic concepts and notions, in
order to convey a coherent overview of the field. Some material examples are
discussed to illustrate the concepts. We discuss the outstanding problems in
the field that need to be addressed in future research.
|
Purpose. Early squamous cell neoplasia (ESCN) in the oesophagus is a highly
treatable condition. Lesions confined to the mucosal layer can be curatively
treated endoscopically. We build a computer-assisted detection (CADe) system
that can classify still images or video frames as normal or abnormal with high
diagnostic accuracy. Methods. We present a new benchmark dataset containing 68K
binary labeled frames extracted from 114 patient videos whose imaged areas have
been resected and correlated to histopathology. Our novel convolutional network
(CNN) architecture solves the binary classification task and explains what
features of the input domain drive the decision-making process of the network.
Results. The proposed method achieved an average accuracy of 91.7% compared to
the 94.7% achieved by a group of 12 senior clinicians. Our novel network
architecture produces deeply supervised activation heatmaps that suggest the
network is looking at intrapapillary capillary loop (IPCL) patterns when
predicting abnormality. Conclusion. We believe that this dataset and baseline
method may serve as a reference for future benchmarks on both video frame
classification and explainability in the context of ESCN detection. A future
work path of high clinical relevance is the extension of the classification to
ESCN types.
|
The Mennicke--Newman lemma for unimodular rows was used by W. van der Kallen to
give a group structure on the orbit set $\frac{Um_{n}(R)}{E_{n}(R)}$ for a
commutative noetherian ring of dimension $d\leq 2n-4$. In this paper, we
generalise the Mennicke--Newman lemma for $m\times n $ right invertible
matrices.
|
We consider the two dimensional surface quasi-geostrophic equations with
super-critical dissipation. For large initial data in critical Sobolev and
Besov spaces, we prove optimal Gevrey regularity with the same decay exponent
as the linear part. This settles several open problems in \cite{Bis14, BMS15}.
|
The Aldous-Broder algorithm is a famous algorithm used to sample a uniform
spanning tree of any finite connected graph G, but it is more general: it
states that, given a reversible Markov chain M on G started at r, the tree
rooted at r formed by the steps of first entrance into each node
(other than the root) has probability proportional to
$\prod_{e=(e_1,e_2)\in{\sf Edges}(t,r)} M_{e_1,e_2}$, where the edges are directed
toward r. As stated, it allows one to sample many distributions on the set of
spanning trees. In this paper we extend the Aldous-Broder theorem by dropping the
reversibility condition on M. We highlight that the general statement we prove
is not the same as the original one (but it coincides in the reversible case
with that of Aldous and Broder). We prove this extension in two ways: an
adaptation of the classical argument, which is purely probabilistic, and a new
proof based on combinatorial arguments. On the way we introduce a new
combinatorial object that we call the golf sequences.
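For concreteness, a minimal sketch of the classical (reversible-case)
algorithm; the adjacency list and root are toy inputs:

```python
import random

def aldous_broder(adj, root):
    """Sample a spanning tree by recording, for every vertex other than the
    root, the edge of first entrance of a random walk started at the root."""
    tree = {}                      # child -> parent, edges directed toward root
    current = root
    while len(tree) < len(adj) - 1:
        nxt = random.choice(adj[current])
        if nxt != root and nxt not in tree:
            tree[nxt] = current    # first entrance into nxt: keep this edge
        current = nxt
    return tree

# 4-cycle: the walk is reversible, so the sampled tree is uniform
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(aldous_broder(adj, 0))
```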
|
Predictor screening rules, which discard predictors from the design matrix
before fitting a model, have had sizable impacts on the speed with which
$\ell_1$-regularized regression problems, such as the lasso, can be solved.
Current state-of-the-art screening rules, however, have difficulties in dealing
with highly-correlated predictors, often becoming too conservative. In this
paper, we present a new screening rule to deal with this issue: the Hessian
Screening Rule. The rule uses second-order information from the model in order
to provide more accurate screening as well as higher-quality warm starts. In
our experiments on $\ell_1$-regularized least-squares (the lasso) and logistic
regression, we show that the rule outperforms all other alternatives in
simulated experiments with high correlation, as well as in the majority of real
datasets that we study.
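To fix ideas, here is the classical first-order screening heuristic (the
sequential strong rule) that the Hessian Screening Rule improves upon with
second-order information; this sketch is for intuition only and is not the
paper's rule:

```python
import numpy as np

def screen_predictors(X, residual, lam, lam_prev):
    """Sequential strong-rule check: keep predictor j only if its absolute
    correlation with the residual is at least 2*lam - lam_prev."""
    c = np.abs(X.T @ residual)
    return np.flatnonzero(c >= 2 * lam - lam_prev)  # survivors of the screen

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 500))
r = rng.normal(size=100)
lam_max = np.abs(X.T @ r).max()                     # first lambda on the path
print(len(screen_predictors(X, r, lam=0.9 * lam_max, lam_prev=lam_max)))
```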
|
It is not yet fully understood how planet formation affects the properties of
host stars, in or out of a cluster; however, abundance trends can help us
understand these processes. We present a detailed chemical abundance analysis
of six stars in Praesepe, a planet-hosting open cluster. Pr0201 is known to
host a close-in (period of 4.4 days) giant planet (mass of 0.54$\rm
M_{\rm{J}}$), while the other five cluster members in our sample (Pr0133,
Pr0081, Pr0208, Pr0051, and Pr0076) have no detected planets according to RV
measurements. Using high-resolution, high signal-to-noise echelle spectra
obtained with Keck/HIRES and a novel approach to equivalent width measurements
(XSpect-EW), we derived abundances of up to 20 elements spanning a range of
condensation temperatures (Tc). We find a mean cluster metallicity of [Fe/H] =
+0.21$\pm$0.02 dex, in agreement with most previous determinations. We find
most of our elements show a [X/Fe] scatter of $\sim$0.02-0.03 dex and conclude
that our stellar sample is chemically homogeneous. The Tc slope for the cluster
mean abundances is consistent with zero and none of the stars in our sample
individually exhibits a statistically significant Tc slope. Using a planet
engulfment model, we find that the planet host, Pr0201, shows no evidence of
the significant enrichment in refractory elements, relative to the cluster
mean, that would be consistent with a planetary accretion scenario.
|
The systematic trend in nuclear charge radii is of great interest due to the
universal shell effects and odd-even staggering (OES). A modified root mean
square (rms) charge radius formula that phenomenologically accounts for the
formation of neutron-proton ($np$) correlations is applied for the first time
to study the odd-$Z$ copper and indium isotopes. Theoretical results obtained
with the relativistic mean field (RMF) model with NL3, PK1 and NL3$^{*}$
parameter sets
are compared with the experimental data. Our results show that both OES and the
abrupt changes across the $N=50$ and $82$ shell closures are clearly reproduced
in the nuclear charge radii. The inverted parabolic-like behaviors of rms charge
radii can also be remarkably reproduced between two neutron magic numbers,
namely $N=28$ to $50$ for copper isotopes and $N=50$ to $82$ for indium
isotopes. Meanwhile, our conclusions show almost no dependence on the effective
forces. This means the $np$-correlations play an indispensable role in
quantitatively determining the fine structures of nuclear charge radii along
odd-$Z$ isotopic chains. The underlying mechanism is discussed.
|
For general spin systems, we prove that a contractive coupling for any local
Markov chain implies optimal bounds on the mixing time and the modified
log-Sobolev constant for a large class of Markov chains including the Glauber
dynamics, arbitrary heat-bath block dynamics, and the Swendsen-Wang dynamics.
This reveals a novel connection between probabilistic techniques for bounding
the convergence to stationarity and analytic tools for analyzing the decay of
relative entropy. As a corollary of our general results, we obtain
$O(n\log{n})$ mixing time and $\Omega(1/n)$ modified log-Sobolev constant of
the Glauber dynamics for sampling random $q$-colorings of an $n$-vertex graph
with constant maximum degree $\Delta$ when $q > (11/6 - \epsilon_0)\Delta$ for
some fixed $\epsilon_0>0$. We also obtain $O(\log{n})$ mixing time and
$\Omega(1)$ modified log-Sobolev constant of the Swendsen-Wang dynamics for the
ferromagnetic Ising model on an $n$-vertex graph of constant maximum degree
when the parameters of the system lie in the tree uniqueness region. At the
heart of our results are new techniques for establishing spectral independence
of the spin system and block factorization of the relative entropy. On one hand
we prove that a contractive coupling of a local Markov chain implies spectral
independence of the Gibbs distribution. On the other hand we show that spectral
independence implies factorization of entropy for arbitrary blocks,
establishing optimal bounds on the modified log-Sobolev constant of the
corresponding block dynamics.
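For concreteness, one step of the Glauber dynamics for $q$-colorings analyzed
above can be sketched as follows (a toy illustration, not the paper's code):

```python
import random

def glauber_step(colors, adj, q):
    """Pick a uniformly random vertex and resample its color uniformly from
    the colors not used by its neighbors (heat-bath update for colorings)."""
    v = random.randrange(len(colors))
    available = sorted(set(range(q)) - {colors[u] for u in adj[v]})
    colors[v] = random.choice(available)

# 5-cycle with maximum degree 2 and q = 4 colors, started at a proper coloring
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
colors = [0, 1, 0, 1, 2]
for _ in range(1000):
    glauber_step(colors, adj, 4)
print(colors)
```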
|
Distillation is the technique of training a "student" model based on examples
that are labeled by a separate "teacher" model, which itself is trained on a
labeled dataset. The most common explanations for why distillation "works" are
predicated on the assumption that the student is provided with \emph{soft} labels,
\eg probabilities or confidences, from the teacher model. In this work, we
show that, even when the teacher model is highly overparameterized and
provides \emph{hard} labels, using a very large held-out unlabeled dataset to
train the student model can result in a model that outperforms more
"traditional" approaches.
Our explanation for this phenomenon is based on recent work on "double
descent". It has been observed that, once a model's complexity roughly exceeds
the amount required to memorize the training data, increasing the complexity
\emph{further} can, counterintuitively, result in \emph{better} generalization.
Researchers have identified several settings in which it takes place, while
others have made various attempts to explain it (thus far, with only partial
success). In contrast, we avoid these questions, and instead seek to
\emph{exploit} this phenomenon by demonstrating that a highly-overparameterized
teacher can avoid overfitting via double descent, while a student trained on a
larger independent dataset labeled by this teacher will avoid overfitting due
to the size of its training set.
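A minimal sketch of the hard-label setup described above, on synthetic data
(model and dataset choices are ours, not the paper's):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# one data distribution; a small labeled set and a large unlabeled pool
X, y = make_classification(n_samples=60_000, n_features=20, random_state=0)
X_lab, y_lab = X[:500], y[:500]     # teacher's labeled training set
X_unlab = X[500:]                   # large held-out unlabeled set

# flexible (potentially overparameterized) teacher fit on the labeled set
teacher = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_lab, y_lab)

# student trained only on *hard* teacher labels over the large pool
student = LogisticRegression(max_iter=1000).fit(X_unlab, teacher.predict(X_unlab))
```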
|
The learning and usage of an API is supported by official documentation. Like
source code, API documentation is itself a software product. Several research
results show that bad design in API documentation can make the reuse of API
features difficult. Indeed, similar to code smells or code antipatterns, poorly
designed API documentation can also exhibit 'smells'. Such documentation smells
can be described as bad documentation styles that do not necessarily produce an
incorrect documentation but nevertheless make the documentation difficult to
properly understand and to use. Recent research on API documentation has
focused on finding content inaccuracies in API documentation and to complement
API documentation with external resources (e.g., crowd-shared code examples).
We are aware of no research that focused on the automatic detection of API
documentation smells. This paper makes two contributions. First, we produce a
catalog of five API documentation smells by consulting literature on API
documentation presentation problems. We create a benchmark dataset of 1,000 API
documentation units by exhaustively and manually validating the presence of the
five smells in Java official API reference and instruction documentation.
Second, we conduct a survey of 21 professional software developers to validate
the catalog. The developers agreed that they frequently encounter all five
smells in API official documentation and 95.2% of them reported that the
presence of the documentation smells negatively affects their productivity. The
participants wished for tool support to automatically detect and fix the smells
in API official documentation. We develop a suite of rule-based, deep and
shallow machine learning classifiers to automatically detect the smells. The
best performing classifier BERT, a deep learning model, achieves F1-scores of
0.75 - 0.97.
|
Given a countable metric space, we can consider its end. Then a basis of a
Hilbert space indexed by the metric space defines an end of the Hilbert space,
which is a new notion and different from an end as a metric space.
Such an indexed basis also defines unitary operators of finite propagation,
and these operators preserve an end of a Hilbert space. Then, we can define a
Hilbert bundle with end, which sheds light on new structures of Hilbert bundles.
In a special case, we can define characteristic classes of Hilbert bundles with
ends, which are new invariants of Hilbert bundles. We show Hilbert bundles with
ends appear in natural contexts. First, we generalize the pushforward of a
vector bundle along a finite covering to an infinite covering, which is a
Hilbert bundle with end under a mild condition. Then we compute characteristic
classes of some pushforwards along infinite coverings. Next, we will show the
spectral decompositions of nice differential operators give rise to Hilbert
bundles with ends, which elucidate new features of spectral decompositions. The
spectral decompositions we will consider are the Fourier transform and the
harmonic oscillators.
|
Federated Learning (FL) is a paradigm in Machine Learning (ML) that addresses
data privacy, security, access rights and access to heterogeneous information
issues by training a global model using distributed nodes. Despite its
advantages, there is an increased potential for cyberattacks on FL-based ML
techniques that can undermine the benefits. Model-poisoning attacks on FL
target the availability of the model. The adversarial objective is to disrupt
the training. We propose attestedFL, a defense mechanism that monitors the
training of individual nodes through state persistence in order to detect a
malicious worker. A fine-grained assessment of the history of the worker
permits the evaluation of its behavior in time and results in innovative
detection strategies. We present three lines of defense that aim at assessing
if the worker is reliable by observing if the node is really training,
advancing towards a goal. Our defense exposes an attacker's malicious behavior
and removes unreliable nodes from the aggregation process so that the FL
process converges faster. Through extensive evaluations against various
adversarial settings, attestedFL increased the accuracy of the model by
12% to 58% under different scenarios, such as attacks performed at different
stages of convergence, colluding attackers, and continuous attacks.
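A toy illustration of the "is the node really training, advancing towards a
goal" idea (attestedFL's actual assessment strategies are richer than this
sketch):

```python
import numpy as np

def seems_to_train(worker_models, global_model, tol=0.0):
    """A worker that is really training should, over time, not drift away
    from the global model: check the trend of its distance to it."""
    dists = [np.linalg.norm(w - global_model) for w in worker_models]
    slope = np.polyfit(range(len(dists)), dists, deg=1)[0]
    return slope <= tol                 # non-increasing distance looks honest

history = [np.array([3.0, 3.0]), np.array([2.0, 2.5]), np.array([1.0, 1.2])]
print(seems_to_train(history, global_model=np.zeros(2)))  # True: converging
```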
|
In many information extraction applications, entity linking (EL) has emerged
as a crucial task that allows leveraging information about named entities from
a knowledge base. In this paper, we address the task of multimodal entity
linking (MEL), an emerging research field in which textual and visual
information is used to map an ambiguous mention to an entity in a knowledge
base (KB). First, we propose a method for building a fully annotated Twitter
dataset for MEL, where entities are defined in a Twitter KB. Then, we propose a
model for jointly learning a representation of both mentions and entities from
their textual and visual contexts. We demonstrate the effectiveness of the
proposed model by evaluating it on the proposed dataset and highlight the
importance of leveraging visual information when it is available.
|
Recent advances in multi-task peer prediction have greatly expanded our
knowledge about the power of multi-task peer prediction mechanisms. Various
mechanisms have been proposed in different settings to elicit different types
of information. But we still lack understanding about when desirable mechanisms
will exist for a multi-task peer prediction problem. In this work, we study the
elicitability of multi-task peer prediction problems. We consider a designer
who has certain knowledge about the underlying information structure and wants
to elicit certain information from a group of participants. Our goal is to
infer the possibility of having a desirable mechanism based on the primitives
of the problem.
Our contribution is twofold. First, we provide a characterization of the
elicitable multi-task peer prediction problems, assuming that the designer only
uses scoring mechanisms. Scoring mechanisms are the mechanisms that reward
participants' reports for different tasks separately. The characterization uses
a geometric approach based on the power diagram characterization in the
single-task setting ([Lambert and Shoham, 2009, Frongillo and Witkowski,
2017]). For general mechanisms, we also give a necessary condition for a
multi-task problem to be elicitable.
Second, we consider the case when the designer aims to elicit some properties
that are linear in the participant's posterior about the state of the world. We
first show that in some cases, the designer basically can only elicit the
posterior itself. We then look into the case when the designer aims to elicit
the participants' posteriors. We give a necessary condition for the posterior
to be elicitable. This condition implies that the mechanisms proposed by Kong
and Schoenebeck are already the best we can hope for in their setting, in the
sense that their mechanisms can solve any problem instance that can possibly be
elicitable.
|
Semantic segmentation of building facades is significant in various
applications, such as urban building reconstruction and damage assessment. As
there is a lack of 3D point cloud datasets for fine-grained
building facades, we construct the first large-scale building facade point
cloud benchmark dataset for semantic segmentation. Existing methods of
semantic segmentation cannot fully mine the local neighborhood information of
point clouds. To address this problem, we propose a learnable attention module
that learns Dual Local Attention features, called DLA in this paper. The
proposed DLA module consists of two blocks, including the self-attention block
and attentive pooling block, which both embed an enhanced position encoding
block. The DLA module could be easily embedded into various network
architectures for point cloud segmentation, naturally resulting in a new 3D
semantic segmentation network with an encoder-decoder architecture, called
DLA-Net in this work. Extensive experimental results on our constructed
building facade dataset demonstrate that the proposed DLA-Net achieves better
performance than the state-of-the-art methods for semantic segmentation.
|
Academic plagiarism is a serious problem nowadays. Due to the existence of
inexhaustible sources of digital information, it is easier to plagiarize today
than ever before. Fortunately, plagiarism detection techniques
have improved and are powerful enough to detect attempted plagiarism in
education. We are now witnessing efficient plagiarism detection software in
action, such as Turnitin, iThenticate or SafeAssign. In the introduction we
explore software that is used within the Croatian academic community for
plagiarism detection in universities and/or in scientific journals. The
question is: is this enough? Current software has proven to be successful;
however, the problem of identifying paraphrased or obfuscated plagiarism
remains unresolved. In this paper we report on how semantic
similarity measures can be used in the plagiarism detection task.
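As a sketch of the idea (the embedding function below is a random stand-in
that must be replaced by a real model, e.g., averaged word vectors or a
transformer encoder, for the scores to be meaningful):

```python
import numpy as np

def embed(sentence):
    """Placeholder sentence embedding; substitute a real semantic model."""
    rng = np.random.default_rng(abs(hash(sentence)) % 2**32)
    return rng.normal(size=300)

def semantic_similarity(a, b):
    """Cosine similarity between sentence embeddings; a high score between
    lexically different sentences is what flags paraphrased plagiarism."""
    va, vb = embed(a), embed(b)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(semantic_similarity("The cat sat on the mat.",
                          "A feline rested upon the rug."))
```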
|
The thermodynamic limit for the efficiency of solar cells is predominantly
defined by the energy bandgap of the semiconductor used. In the case of organic
solar cells, both the energetics and kinetics of three different species play a
role: excitons, charge transfer (CT) states and charge separated states. In this work, we
clarify the effect of the relative energetics and kinetics of these species on
the recombination and generation dynamics. Making use of detailed balance, we
develop an analytical framework describing how the intricate interplay between
the different species influence the photocurrent generation, the recombination,
and the open-circuit voltage in organic solar cells. Furthermore, we clarify
the essential requirements for equilibrium between excitons, CT states and
charge carriers to occur. Finally, we find that the photovoltaic parameters are
not only determined by the relative energy level between the different states
but also by the kinetic rate constants. These findings provide vital insights
into the operation of state-of-the-art non-fullerene organic solar cells with low
offsets.
|
We investigated the electronic structure of the Si(111)--7$\times$7 surface
below 20 K by scanning tunneling and photoemission spectroscopies and by
density functional theory calculations. Previous experimental studies have
questioned the ground state of this surface, which is expected to be metallic
in a band picture because of the odd number of electrons per unit cell. Our
differential conductance spectra instead show the opening of an energy gap at
the Fermi level and a significant temperature dependence of the electronic
properties, especially for the adatoms at the center of the unfaulted half of
the unit cell. Complementary photoemission spectra with improved correction of
the surface photovoltage shift corroborate the differential conductance data
and demonstrate the absence of surface bands crossing the Fermi level at 17 K.
These consistent experimental observations point to an insulating ground state
and contradict the prediction of a metallic surface obtained by density
functional theory in the generalized gradient approximation. The calculations
indicate that this surface has or is near a magnetic instability, but remains
metallic in the magnetic phases even including correlation effects at
mean-field level. We discuss possible origins of the observed discrepancies
between experiments and calculations.
|
Challenging space mission scenarios include those in low-altitude orbits,
where the atmosphere creates significant drag on the spacecraft (S/C) and
forces its orbit to an early decay. For drag compensation, propulsion systems
are needed, requiring propellant to be carried on board. An atmosphere-breathing
electric propulsion system (ABEP) ingests the residual atmospheric particles
through an intake and uses them as propellant for an electric thruster.
Theoretically applicable to any planet with an atmosphere, the system might
allow orbiting for an unlimited time without carrying propellant. A new range
of altitudes for
continuous operation would become accessible, enabling new scientific missions
while reducing costs. Preliminary studies have shown that the collectible
propellant flow for an ion thruster (in LEO) might not be enough, and that
electrode erosion due to aggressive gases, such as atomic oxygen, will limit
the thruster lifetime. In this paper an inductive plasma thruster (IPT) is
considered for the ABEP system. The starting point is a small scale inductively
heated plasma generator IPG6-S. These devices are electrodeless and have
already shown high electric-to-thermal coupling efficiencies using O2 and CO2.
The system analysis is integrated with IPG6-S tests to assess mean
mass-specific energies of the plasma plume and estimate exhaust velocities.
|
Recent progress in artificial intelligence (AI) raises a wide array of
ethical and societal concerns. Accordingly, an appropriate policy approach is
needed today. While there has been a wave of scholarship in this field, the
research community at times appears divided amongst those who emphasize
near-term concerns, and those focusing on long-term concerns and corresponding
policy measures. In this paper, we seek to map and critically examine this
alleged gulf, with a view to understanding the practical space for
inter-community collaboration on AI policy. This culminates in a proposal to
make use of the legal notion of an incompletely theorized agreement. We propose
that on certain issue areas, scholars working with near-term and long-term
perspectives can converge and cooperate on selected mutually beneficial AI
policy projects all the while maintaining divergent perspectives.
|
Understanding how events are semantically related to each other is the
essence of reading comprehension. Recent event-centric reading comprehension
datasets focus mostly on event arguments or temporal relations. While these
tasks partially evaluate machines' ability to understand narratives,
human-like reading comprehension requires the capability to process event-based
information beyond arguments and temporal reasoning. For example, to understand
causality between events, we need to infer motivation or purpose; to establish
event hierarchy, we need to understand the composition of events. To facilitate
these tasks, we introduce ESTER, a comprehensive machine reading comprehension
(MRC) dataset for Event Semantic Relation Reasoning. The dataset leverages
natural language queries to reason about the five most common event semantic
relations, provides more than 6K questions and captures 10.1K event relation
pairs. Experimental results show that the current SOTA systems achieve 22.1%,
63.3%, and 83.5% for token-based exact-match, F1, and event-based HIT@1 scores,
which are all significantly below human performances (36.0%, 79.6%, 100%
respectively), highlighting our dataset as a challenging benchmark.
|
Artificial Recurrent Neural Networks are a powerful information processing
abstraction, and Reservoir Computing provides an efficient strategy to build
robust implementations by projecting external inputs into high dimensional
dynamical system trajectories. In this paper, we propose an extension of the
original approach, a local unsupervised learning mechanism we call Phase
Transition Adaptation, designed to drive the system dynamics towards the `edge
of stability'. Here, the complex behavior exhibited by the system elicits an
enhancement in its overall computational capacity. We show experimentally that
our approach consistently achieves its purpose over several datasets.
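For context, a minimal echo state network, the reservoir computing substrate
that Phase Transition Adaptation would tune (the adaptation rule itself is not
reproduced here; the spectral-radius scaling below is the stability knob it
targets):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (N, n_in))
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1: stable regime

def run_reservoir(u_seq):
    """Project an input sequence into a high-dimensional state trajectory."""
    x, states = np.zeros(N), []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# linear readout via ridge regression (task: 1-step-ahead prediction)
u = np.sin(np.linspace(0, 60, 600))
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
print(float(np.mean((X @ W_out - y) ** 2)))  # small training error
```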
|
In the early universe it is important to take into account quantum effects of
gravity to explain the features of inflation. In this paper, we consider
the M-theory effective action, which consists of 11-dimensional supergravity and
(Weyl)$^4$ terms. The equations of motion are solved perturbatively, and the
solution describes the inflation-like expansion in 4 dimensional spacetime.
Scalar and tensor perturbations around this background are evaluated
analytically and their behaviors are investigated numerically. If we assume
that these perturbations are constant at the beginning of the inflation,
spectral indices for scalar and tensor perturbations become almost scale
invariant.
|
We revisit the contour dynamics (CD) simulation method which is applicable to
large deformation of distribution function in the Vlasov-Poisson plasma with
the periodic boundary, where contours of distribution function are traced
without using spatial grids. The novelty of this study lies in the application of CD to
the one-dimensional Vlasov-Poisson plasma with the periodic boundary condition.
A major difficulty in applying the periodic boundary is how to deal with
contours when they cross the boundaries. This has been overcome by virtue of a
periodic Green's function, which effectively introduces the periodic boundary
condition without cutting or reallocating the contours. The simulation results
are confirmed by comparing with an analytical solution for the piece-wise
constant distribution function in the linear regime and a linear analysis of
the Landau damping. Also, particle trapping by Langmuir wave is successfully
reproduced in the nonlinear regime.
|
This is the second in a series of papers classifying the factorizations
of almost simple groups with nonsolvable factors. In this paper we deal with
almost simple unitary groups.
|
We derive the general continuum model for a bilayer system of staggered-flux
square lattices, with arbitrary elastic deformation in each layer. Applying
this general continuum model to the case where the two layers are rigidly
rotated relative to each other by a small angle, we obtain the band structure
of the twisted bilayer staggered-flux square lattice. We show that this band
structure exhibits a "magic continuum" in the sense that an exponential
reduction of the Dirac velocity and bandwidths occurs in a large parameter
regime. We show that the continuum model of the twisted bilayer system
effectively describes a massless Dirac fermion in a spatially modulating
magnetic field, whose renormalized Dirac velocity can be exactly calculated. We
further give an intuitive argument for the emergence of flattened bands near
half filling in the magic continuum and provide an estimation of the large
number of associated nearly-zero-energy states. We also show that the entire
band structure of the twisted bilayer system is free of band gaps due to
symmetry constraints.
|
White dwarf spectroscopy shows that nearly half of white dwarf atmospheres
contain metals that must have been accreted from planetary material that
survived the red giant phases of stellar evolution. We can use metal pollution
in white dwarf atmospheres as flags, signalling recent accretion, in order to
prioritize an efficient sample of white dwarfs to search for transiting
material. We present a search for planetesimals orbiting six nearby white
dwarfs with the CHEOPS spacecraft. The targets are relatively faint for CHEOPS,
$11$ mag $< G < 12.8$ mag. We use aperture photometry data products from the
CHEOPS mission as well as custom PSF photometry to search for periodic
variations in flux due to transiting planetesimals. We detect no significant
variations in flux that cannot be attributed to spacecraft systematics, despite
reaching a photometric precision of $<2$ ppt in 60 s exposures on each target.
We simulate observations to show that the small survey is sensitive primarily
to Moon-sized transiting objects with periods $3$ hr $< P < 10$ hr, with radii
$R \gtrsim 1000$ km.
|
Recent performance analysis of dual-function radar communications (DFRC)
systems, which embed information using phase shift keying (PSK) into
multiple-input multiple-output (MIMO) frequency hopping (FH) radar pulses,
shows promising results for addressing spectrum sharing issues between radar
and communications. However, the problem of decoding information at the
communication receiver remains challenging, since the DFRC transmitter is
typically assumed to transmit only information-embedded radar waveforms and not
a training sequence. We propose a novel method for decoding information at
the communication receiver without using training data, which is implemented
using a software-defined radio (SDR). The performance of the SDR implementation
is examined in terms of bit error rate (BER) as a function of signal-to-noise
ratio (SNR) for differential binary and quadrature phase shift keying
modulation schemes and compared with the BER versus SNR obtained with numerical
simulations.
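For intuition, a minimal differential BPSK decoder: the information sits in
the phase difference between consecutive symbols, so no training sequence or
absolute phase reference is needed (a toy sketch, not the SDR implementation):

```python
import numpy as np

def dbpsk_decode(symbols):
    """Bit 1 <=> phase flip of pi between consecutive received symbols."""
    diff = symbols[1:] * np.conj(symbols[:-1])  # per-symbol phase change
    return (diff.real < 0).astype(int)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 100)
phase = np.concatenate([[0.0], np.cumsum(np.pi * bits)])  # reference symbol first
tx = np.exp(1j * (phase + 0.7))                           # unknown carrier phase
rx = tx + 0.1 * (rng.normal(size=101) + 1j * rng.normal(size=101))
print((dbpsk_decode(rx) == bits).mean())                  # ~1.0 at this SNR
```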
|
Automatically configuring a robotic prosthesis to fit its user's needs and
physical conditions is a great technical challenge and a roadblock to the
adoption of the technology. Previously, we have successfully developed
reinforcement learning (RL) solutions toward addressing this issue. Yet, our
designs were based on using a subjectively prescribed target motion profile for
the robotic knee during level ground walking. This is not realistic for
different users and for different locomotion tasks. In this study, for the first
time, we investigated the feasibility of RL-enabled automatic configuration of
impedance parameter settings for a robotic knee to mimic the intact knee motion
in a co-adapting environment. We successfully achieved such tracking control by
an online policy iteration. We demonstrated our results in both OpenSim
simulations and two able-bodied (AB) subjects.
|
We present a systematic spectral-timing analysis of a fast
appearance/disappearance of a type-B quasi-periodic oscillation (QPO), observed
in four NICER observations of MAXI J1348-630. By comparing the spectra of the
period with and without the type-B QPO, we found that the main difference
appears at energy bands above ~2 keV, suggesting that the QPO emission is
dominated by the hard Comptonised component. During the transition, a change in
the relative contribution of the disk and Comptonised emission was observed.
The disk flux decreased while the Comptonised flux increased from non-QPO to
type-B QPO. However, the total flux did not change significantly in the NICER band.
Our results reveal that the type-B QPO is associated with a redistribution of
accretion power between the disk and Comptonised emission. When the type-B QPO
appears, more accretion power is dissipated into the Comptonised region than in
the disk. Our spectral fits give a hint that the increased Comptonised emission
may come from an additional component that is related to the base of the jet.
|
Blockchain (BC) technology can revolutionize the future of communications by
enabling decentralized and open sharing networks. In this paper, we propose the
application of BC to facilitate Mobile Network Operators (MNOs) and other
players such as Verticals or Over-The-Top (OTT) service providers to exchange
Radio Access Network (RAN) resources (e.g., infrastructure, spectrum) in a
secure, flexible and autonomous manner. In particular, we propose a BC-enabled
reverse auction mechanism for RAN sharing and dynamic users' service provision
in Beyond 5G networks, and we analyze its potential advantages with respect to
current service provisioning and RAN sharing schemes. Moreover, we study the
delay and overheads incurred by the BC in the whole process, when running over
both wireless and wired interfaces.
|
Humans can infer the 3D geometry of a scene from a sketch instead of a
realistic image, which indicates that spatial structure plays a fundamental
role in understanding the depth of scenes. We are the first to explore the
learning of a depth-specific structural representation, which captures the
essential feature for depth estimation and ignores irrelevant style
information. Our S2R-DepthNet (Synthetic to Real DepthNet) can be well
generalized to unseen real-world data directly even though it is only trained
on synthetic data. S2R-DepthNet consists of: a) a Structure Extraction (STE)
module which extracts a domain-invariant structural representation from an image
by disentangling the image into domain-invariant structure and domain-specific
style components, b) a Depth-specific Attention (DSA) module, which learns
task-specific knowledge to suppress depth-irrelevant structures for better
depth estimation and generalization, and c) a depth prediction module (DP) to
predict depth from the depth-specific representation. Without access to any
real-world images, our method even outperforms the state-of-the-art
unsupervised domain adaptation methods which use real-world images of the
target domain for training. In addition, when using a small amount of labeled
real-world data, we achieve state-of-the-art performance under the
semi-supervised setting. The code and trained models are available at
https://github.com/microsoft/S2R-DepthNet.
|
A search for Heavy Neutral Leptons has been performed with the ArgoNeuT
detector exposed to the NuMI neutrino beam at Fermilab. We search for the decay
signature $N \to \nu \mu^+ \mu^-$, considering decays occurring both inside
ArgoNeuT and in the upstream cavern. In the data, corresponding to an exposure
to $1.25 \times 10^{20}$ POT, zero passing events are observed, consistent with
the expected background. This measurement leads to a new constraint at 90\%
confidence level on the mixing angle $\left\vert U_{\tau N}\right\rvert^2$ of
tau-coupled Dirac Heavy Neutral Leptons with masses $m_N =$ 280 - 970 MeV,
assuming $\left\vert U_{eN}\right\rvert^2 = \left\vert U_{\mu N}\right\rvert^2
= 0$.
|
Cognitive Diagnosis Models (CDMs) are a special family of discrete latent
variable models widely used in educational, psychological and social sciences.
In many applications of CDMs, certain hierarchical structures among the latent
attributes are assumed by researchers to characterize their dependence
structure. Specifically, a directed acyclic graph is used to specify
hierarchical constraints on the allowable configurations of the discrete latent
attributes. In this paper, we consider the important yet unaddressed problem of
testing the existence of latent hierarchical structures in CDMs. We first
introduce the concept of testability of hierarchical structures in CDMs and
present sufficient conditions. Then we study the asymptotic behaviors of the
likelihood ratio test (LRT) statistic, which is widely used for testing nested
models. Due to the irregularity of the problem, the asymptotic distribution of
LRT becomes nonstandard and tends to provide unsatisfactory finite sample
performance under practical conditions. We provide statistical insights on such
failures, and propose to use parametric bootstrap to perform the testing. We
also demonstrate the effectiveness and superiority of parametric bootstrap for
testing the latent hierarchies over non-parametric bootstrap and the na\"ive
Chi-squared test through comprehensive simulations and an educational
assessment dataset.
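A minimal sketch of the parametric bootstrap calibration advocated above;
`fit_null`, `fit_alt`, and `simulate_null` are user-supplied placeholders
returning log-likelihoods, fitted parameters, and simulated data:

```python
import numpy as np

def bootstrap_lrt_pvalue(data, fit_null, fit_alt, simulate_null, B=500):
    """Calibrate the LRT by simulating from the fitted null model, since the
    asymptotic null distribution is nonstandard for latent hierarchies."""
    ll0, theta0 = fit_null(data)
    ll1, _ = fit_alt(data)
    lrt_obs = 2 * (ll1 - ll0)
    lrt_sim = []
    for _ in range(B):
        boot = simulate_null(theta0)    # data generated under H0
        lrt_sim.append(2 * (fit_alt(boot)[0] - fit_null(boot)[0]))
    return (1 + np.sum(np.array(lrt_sim) >= lrt_obs)) / (B + 1)
```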
|
Hypergraphs are a generalization of graphs in which edges can connect any
number of vertices. They allow the modeling of complex networks with
higher-order interactions, and their spectral theory studies the qualitative
properties that can be inferred from the spectrum, i.e. the multiset of the
eigenvalues, of an operator associated to a hypergraph. It is expected that a
small perturbation of a hypergraph, such as the removal of a few vertices or
edges, does not lead to a major change of the eigenvalues. In particular, it is
expected that the eigenvalues of the original hypergraph interlace the
eigenvalues of the perturbed hypergraph. Here we work on hypergraphs where, in
addition, each vertex--edge incidence is given a real number, and we prove
interlacing results for the adjacency matrix, the Kirchhoff Laplacian and the
normalized Laplacian. Tightness of the inequalities is also shown.
|
Recent FDA guidance on adaptive clinical trial designs defines bias as "a
systematic tendency for the estimate of treatment effect to deviate from its
true value", and states that it is desirable to obtain and report estimates of
treatment effects that reduce or remove this bias. In many adaptive designs,
the conventional end-of-trial point estimates of the treatment effects are
prone to bias, because they do not take into account the potential and realised
trial adaptations. While much of the methodological development on adaptive
designs has tended to focus on control of type I error rates and power
considerations, the question of biased estimation has in contrast received less
attention. This article addresses this issue by providing a comprehensive
overview of proposed approaches to remove or reduce the potential bias in point
estimation of treatment effects in an adaptive design, as well as illustrating
how to implement them. We first discuss how bias can affect standard estimators
and critically assess the negative impact this can have. We then describe and
compare proposed unbiased and bias-adjusted estimators of treatment effects for
different types of adaptive designs. Furthermore, we illustrate the computation
of different estimators in practice using a real trial example. Finally, we
propose a set of guidelines for researchers around the choice of estimators and
the reporting of estimates following an adaptive design.
|
In this paper, we present a Bayesian multipath-based simultaneous
localization and mapping (SLAM) algorithm that continuously adapts interacting
multiple models (IMM) parameters to describe the mobile agent state dynamics.
The time-evolution of the IMM parameters is described by a Markov chain and the
parameters are incorporated into the factor graph structure that represents the
statistical structure of the SLAM problem. The proposed belief propagation
(BP)-based algorithm adapts, in an online manner, to time-varying system models
by jointly inferring the model parameters along with the agent and map feature
states. The performance of the proposed algorithm is finally evaluated with a
simulated scenario. Our numerical simulation results show that the proposed
multipath-based SLAM algorithm is able to cope with strongly changing agent
state dynamics.
|
When two resonantly interacting modes are in contact with a thermostat, their
statistics is exactly Gaussian and the modes are statistically independent
despite strong interaction. Considering a noise-driven system, we show that when
one mode is pumped and another dissipates, the statistics (of such cascades) is
never close to Gaussian no matter the interaction/noise relation. One finds
substantial phase correlation in the limit of strong interaction (weak noise).
Surprisingly, for both cascades, the mutual information between modes increases
and entropy further decreases when interaction strength decreases. We use the
model to elucidate the fundamental problem of far-from-equilibrium physics:
where the information (entropy deficit) is encoded and how singular measures
form. For an instability-driven system (a laser), even a small added noise
leads to large fluctuations of the relative phase near the stability threshold,
while far from it we show that the conversion into the second harmonic is
weakly affected by noise.
|
We consider the jump telegraph process in which the switching intensities
depend on external shocks that are also accompanied by jumps. The incomplete
financial market
model based on this process is studied. The Esscher transform, which changes
only unobservable parameters, is considered in detail. The financial market
model based on this transform can price switching risks as well as jump risks
of the model.
|
The Legacy Survey of Space and Time (LSST) by the Vera C. Rubin Observatory
is expected to discover tens of millions of quasars. A significant fraction of
these could be powered by coalescing massive black hole (MBH) binaries, since
many quasars are believed to be triggered by mergers. We show that under
plausible assumptions about the luminosity functions, lifetimes, and binary
fractions of quasars, we expect the full LSST quasar catalogue to contain
between 20 and 100 million compact MBH binaries with masses
$M=10^{5-9}M_{\odot}$, redshifts $z=0-6$, and orbital periods $P=1-70$ days.
Their light curves are expected to be distinctly periodic and can be
confidently distinguished from stochastic red-noise variability, because LSST
will cover dozens, or even
hundreds of cycles. A very small subset of 10-150 ultra-compact ($P\lesssim1$
day) binary quasars among these will, over $\sim$5-15 years, evolve into the
mHz gravitational-wave (GW) frequency band and can be detected by
$\textit{LISA}$. They can therefore be regarded as "$\textit{LISA}$
verification binaries", analogous to short-period Galactic compact-object
binaries. The practical question is how to find these handful of "needles in
the haystack" among the large number of quasars: this will likely require a
tailored co-adding analysis optimised for this purpose.
|
Quantum spin models find applications in many different areas, such as
spintronics, high-Tc superconductivity, and even complex optimization problems.
However, studying their many-body behaviour, especially in the presence of
frustration, represents an outstanding computational challenge. To overcome it,
quantum simulators based on cold, trapped atoms and ions have been built,
already shedding light on many non-trivial phenomena. Unfortunately, the models
covered by these simulators are limited by the type of interactions that appear
naturally in these systems. Waveguide QED setups have recently been pointed out
as a powerful alternative due to the possibility of mediating more versatile
spin-spin interactions with tunable sign, range, and even chirality. Yet,
despite their potential, the many-body phases emerging from these systems have
been only scarcely explored. In this manuscript, we fill this gap by analyzing the
ground states of a general class of spin models that can be obtained in such
waveguide QED setups. Importantly, we find novel many-body phases different
from the ones obtained in other platforms, e.g., symmetry-protected topological
phases with large-period magnetic orderings, and explain the measurements
needed to probe them.
|
We establish regularity estimates, uniform with respect to the Mach number,
for the isentropic compressible Navier-Stokes system in smooth domains with
Navier-slip condition on the boundary in the general case of ill-prepared
initial data. To match the boundary layer effects due to the fast oscillations
and the ill-prepared initial data assumption, we prove uniform estimates in an
anisotropic functional framework with only one normal derivative close to the
boundary. This allows us to prove the local existence of a strong solution on a
time interval independent of the Mach number and to justify the incompressible
limit through a simple compactness argument.
|
C/2020 F3 (NEOWISE) was discovered in images from the Near Earth Object
program of the Wide-Field Infrared Survey Explorer (NEOWISE) taken on 27 March
2020 and has become the Great Comet of 2020. The Solar Wind ANisotropies (SWAN)
camera on the Solar and Heliospheric Observatory (SOHO) spacecraft, located in
a halo orbit around the Earth-Sun L1 Lagrange point, makes daily full-sky
images of hydrogen Lyman-alpha. Water production rates were determined from the
SWAN hydrogen Lyman-alpha brightness and spatial distribution of the comet
measured over a 4-month period of time on either side of the comet's perihelion
on 3 July 2020. The water production rate in s^-1 was moderately asymmetric
around perihelion and varied with the heliocentric distance, r, in au as
(6.9+/-0.5) x 10^28 r^-2.5+/-0.2 and (10.1+/-0.5) x 10^28 r^-3.5+/-0.1 before
and after perihelion, respectively. This is consistent with the comet having
been through the planetary region of the solar system on one or more previous
apparitions. Water production rates as large as 5.27 x 10^30 s^-1 were
determined shortly after perihelion, once the comet was outside the solar
avoidance area of SWAN, when the comet was 0.324 au from the Sun.
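For reference, the two power laws quoted above can be evaluated directly; this small snippet (central values only, r in au, rates in molecules s^-1) reproduces the near-perihelion magnitude:

```python
# Water production rate Q(r) from the SWAN fits quoted above.
def q_water(r_au, pre_perihelion=True):
    if pre_perihelion:
        return 6.9e28 * r_au ** -2.5    # (6.9 +/- 0.5) x 10^28 r^-2.5
    return 10.1e28 * r_au ** -3.5       # (10.1 +/- 0.5) x 10^28 r^-3.5

print(f"{q_water(0.324, pre_perihelion=False):.2e}")  # ~5.2e30 s^-1 at 0.324 au
```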
|
With the ChemCam instrument, laser-induced breakdown spectroscopy (LIBS) has
successively contributed to Mars exploration by determining elemental
compositions of the soil, crust and rocks. Two newly launched missions, the
Chinese Tianwen 1 and the American Perseverance, will further increase the
number of LIBS instruments on Mars after the planned landings in spring 2021.
Such an unprecedented situation requires a reinforced research effort on the
methods of
LIBS spectral data treatment. Although the matrix effects correspond to a
general issue in LIBS, they become accentuated in the case of rock analysis for
Mars exploration, because of the large variation of rock composition leading to
the chemical matrix effect, and the difference in morphology between laboratory
standard samples (in pressed pellet, glass or ceramics) used to establish
calibration models and natural rocks encountered on Mars, leading to the
physical matrix effect. The chemical matrix effect has been tackled in the
ChemCam project with large sets of laboratory standard samples offering a good
representation of various compositions of Mars rocks. The present work deals
with the physical matrix effect, which still awaits a satisfactory solution.
The approach consists in introducing transfer learning into LIBS data
treatment. For the specific case of total alkali-silica (TAS) classification of
natural rocks, the results show a significant improvement of the prediction
capacity of pellet sample-based models when trained together with suitable
information from rocks in a procedure of transfer learning. The correct
classification rate of rocks increases from 33.3% with a machine learning model
to 83.3% with a transfer learning model.
|
We propose a Multiscale Invertible Generative Network (MsIGN) and associated
training algorithm that leverages multiscale structure to solve
high-dimensional Bayesian inference. To address the curse of dimensionality,
MsIGN exploits the low-dimensional nature of the posterior, and generates
samples from coarse to fine scale (low to high dimension) by iteratively
upsampling and refining samples. MsIGN is trained in a multi-stage manner to
minimize the Jeffreys divergence, which avoids mode dropping in
high-dimensional cases. On two high-dimensional Bayesian inverse problems, we
show superior performance of MsIGN over previous approaches in posterior
approximation and multiple mode capture. On the natural image synthesis task,
MsIGN achieves superior performance in bits-per-dimension over baseline models
and yields great interpretability of its neurons in intermediate layers.
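A minimal Monte Carlo sketch of the Jeffreys divergence $J(p,q) = KL(p\|q) + KL(q\|p)$ that MsIGN minimizes; exact log-densities are assumed available, as they are for invertible generative models:

```python
import numpy as np
from scipy.stats import norm

def jeffreys_divergence(sample_p, sample_q, log_p, log_q):
    # J(p, q) = E_p[log p - log q] + E_q[log q - log p]
    kl_pq = np.mean(log_p(sample_p) - log_q(sample_p))
    kl_qp = np.mean(log_q(sample_q) - log_p(sample_q))
    return kl_pq + kl_qp

rng = np.random.default_rng(0)
p, q = norm(0, 1), norm(1, 1)
print(jeffreys_divergence(p.rvs(10_000, random_state=rng),
                          q.rvs(10_000, random_state=rng),
                          p.logpdf, q.logpdf))  # analytic value: 1.0
```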
|
Axion-like particles with ultralight mass ($\sim10^{-22}$ eV) are a possible
candidate for dark matter, known as fuzzy dark matter (FDM). These particles
form a Bose-Einstein condensate in the early universe, which can explain
the dark matter density distribution in galaxies at the present time. We study
the time evolution of ultralight axion-like field in the near region of a
strong gravitational wave (GW) source, such as binary black hole merger. We
show that GWs can lead to the generation of field excitations in a spherical
shell about the source that eventually propagate out of the shell to minimize
the energy density of the field configuration. These excitations are generated
towards the end of the merger, and in some cases even in the ringdown phase,
and can therefore provide a qualitatively distinct prediction for changes in
the GW waveform due to the presence of FDM. This would be helpful in
investigating the existence of FDM in galaxies.
|
Because of surface structural constraints and thermal management requirements,
visible-infrared compatible camouflage remains a great challenge. In this
study, we introduce a 2D periodic aperture array into ZnO/Ag/ZnO film to
realize visible-infrared compatible camouflage with a performance of thermal
management by utilizing the extraordinary optical transmission in a
dielectric/metal/dielectric (D/M/D) structure. Because of the high visible
transmittance of the D/M/D structure, when applied on a visible camouflage
coating, the beneath coating can be observed, realizing arbitrary visible
camouflage. Due to the perforated Ag layer, both low emittances in 3~5 {\mu}m,
8~14 {\mu}m for infrared camouflage and high emittance in 5~8 {\mu}m for heat
dissipation by radiation are achieved theoretically and experimentally. The
fabricated photonic crystal exhibits high-temperature infrared camouflage in
two atmospheric windows. With the same heating power of 0.40 W/cm^2, this
photonic crystal is 12.2 K cooler than a sample with a low-emittance surface.
The proposed visible-infrared compatible camouflage photonic crystal with the
performance of thermal management provides a guideline on coordinated control
of light and heat, indicating a potential application in energy & thermal
technologies.
|
Historically, to bound the mean for small sample sizes, practitioners have
had to choose between using methods with unrealistic assumptions about the
unknown distribution (e.g., Gaussianity) and methods like Hoeffding's
inequality that use weaker assumptions but produce much looser (wider)
intervals. Anderson (1969) proposed a mean confidence interval
strictly better than or equal to Hoeffding's whose only assumption is that the
distribution's support is contained in an interval $[a,b]$. For the first time
since then, we present a new family of bounds that compares favorably to
Anderson's. We prove that each bound in the family has {\em guaranteed
coverage}, i.e., it holds with probability at least $1-\alpha$ for all
distributions on an interval $[a,b]$. Furthermore, one of the bounds is tighter
than or equal to Anderson's for all samples. In simulations, we show that for
many distributions, the gain over Anderson's bound is substantial.
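A sketch of the Anderson-type construction the abstract compares against (not the paper's new family of bounds): shift the empirical CDF up by a one-sided DKW margin and integrate, giving a guaranteed-coverage lower bound on the mean for distributions supported on $[a,b]$:

```python
import numpy as np

def anderson_lower_bound(sample, a, b, alpha=0.05):
    """Lower (1 - alpha) confidence bound on the mean of a distribution
    supported on [a, b], via a one-sided DKW band around the empirical CDF."""
    x = np.sort(np.clip(sample, a, b))
    n = len(x)
    eps = np.sqrt(np.log(1.0 / alpha) / (2.0 * n))   # one-sided DKW margin
    # mu >= b - integral_a^b min(1, F_n(t) + eps) dt; F_n is piecewise constant
    grid = np.concatenate(([a], x, [b]))
    cdf_hi = np.minimum(1.0, eps + np.arange(n + 1) / n)
    return b - np.sum(cdf_hi * np.diff(grid))

rng = np.random.default_rng(0)
print(anderson_lower_bound(rng.uniform(0, 1, 100), 0.0, 1.0))  # < true mean 0.5
```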
|
Suppose that a statistician observes two independent variates $X_1$ and $X_2$
having densities $f_i(\cdot;\theta)\equiv f_i(\cdot-\theta)\ ,\ i=1,2$ ,
$\theta\in\mathbb{R}$. His purpose is to conduct a test for
\begin{equation*}
H:\theta=0 \ \ \text{vs.}\ \ K:\theta\in\mathbb{R}\setminus\{0\}
\end{equation*}
with a pre-defined significance level $\alpha\in(0,1)$.
Moran (1973) suggested a test which is based on a single split of the data,
\textit{i.e.,} to use $X_2$ in order to conduct a one-sided test in the
direction of $X_1$. Specifically, if $b_1$ and $b_2$ are the $(1-\alpha)$'th
and $\alpha$'th quantiles associated with the distribution of $X_2$ under $H$,
then Moran's test has a rejection zone
\begin{equation*}
(a,\infty)\times(b_1,\infty)\cup(-\infty,a)\times(-\infty,b_2)
\end{equation*}
where $a\in\mathbb{R}$ is a design parameter.
Motivated by this issue, the current work includes an analysis of a new
notion, \textit{regular admissibility} of tests. It turns out that the theory
regarding this kind of admissibility leads to a simple sufficient condition on
$f_1(\cdot)$ and $f_2(\cdot)$ under which Moran's test is inadmissible.
Furthermore, the same approach leads to a formal proof of the conjecture of
DiCiccio (2018), which asserts that the multi-dimensional version of Moran's
test is inadmissible when the observations are $d$-dimensional Gaussians.
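For concreteness, a sketch of Moran's single-split test with the rejection zone given above, in an illustrative Gaussian location case ($f_1 = f_2$ standard normal is an assumption; $a$ is the design parameter):

```python
from scipy.stats import norm

def moran_test(x1, x2, alpha=0.05, a=0.0):
    b1 = norm.ppf(1 - alpha)   # (1 - alpha) quantile of X2 under H
    b2 = norm.ppf(alpha)       # alpha quantile of X2 under H
    # Rejection zone: (a, inf) x (b1, inf)  union  (-inf, a) x (-inf, b2)
    return (x1 > a and x2 > b1) or (x1 < a and x2 < b2)

print(moran_test(1.2, 2.1))  # True: rejects H at alpha = 0.05
```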
|
We derive for the first time an effective neutrino evolution Hamiltonian
accounting for neutrino interactions with external magnetic field due to
neutrino charge radii and anapole moment. The results are interesting for
possible applications in astrophysics.
|
We consider agents in a social network competing to be selected as partners
in collaborative, mutually beneficial activities. We study this through a model
in which an agent i can initiate a limited number k_i>0 of games and selects
the ideal partners from its one-hop neighborhood. On the flip side it can
accept as many games offered from its neighbors. Each game signifies a
productive joint economic activity, and players attempt to maximize their
individual utilities. Unsurprisingly, more trustworthy agents are more
desirable as partners. Trustworthiness is measured by the game theoretic
concept of Limited-Trust, which quantifies the maximum cost an agent is willing
to incur in order to improve the net utility of all agents. Agents learn about
their neighbors' trustworthiness through interactions and their behaviors
evolve in response. Empirical trials performed on realistic social networks
show that when given the option, many agents become highly trustworthy; most or
all become highly trustworthy when knowledge of their neighbors'
trustworthiness is based on past interactions rather than known a priori. This
trustworthiness is not the result of altruism; instead, agents are
intrinsically motivated by competition to become trustworthy partners. Two
insights are
presented: first, trustworthy behavior drives an increase in the utility of all
agents, where maintaining a relatively modest level of trustworthiness may
easily improve net utility by as much as 14.5%. If only one agent exhibits
modest trust among self-centered ones, it can increase its average utility by
up to 25% in certain cases! Second, and counter-intuitively, when partnership
opportunities are abundant agents become less trustworthy.
|
In supervised learning, it is known that overparameterized neural networks
with one hidden layer provably and efficiently learn and generalize, when
trained using stochastic gradient descent with sufficiently small learning rate
and suitable initialization. In contrast, the benefit of overparameterization
in unsupervised learning is not well understood. Normalizing flows (NFs)
constitute an important class of models in unsupervised learning for sampling
and density estimation. In this paper, we theoretically and empirically analyze
these models when the underlying neural network is a one-hidden-layer
overparameterized network. Our main contributions are twofold: (1) On the one
hand, we provide theoretical and empirical evidence that for a class of NFs
containing most of the existing NF models, overparametrization hurts training.
(2) On the other hand, we prove that unconstrained NFs, a recently introduced
model, can efficiently learn any reasonable data distribution under minimal
assumptions when the underlying network is overparametrized.
|
In this paper, we study the galactic cosmic ray (GCR) variations over the
solar cycles 23 and 24, with measurements from the NASA's ACE/CRIS instrument
and the ground-based neutron monitors (NMs). The results show that the maximum
GCR intensities of heavy nuclei (nuclear charge 5-28, 50-500 MeV/nuc) at 1 AU
during the solar minimum in 2019-2020 break their previous records, exceeding
those recorded in 1997 and 2009 by ~25% and ~6%, respectively, and are at the
highest levels since the space age. However, the peak NM count rates are lower
than those in late 2009. The difference between GCR intensities and NM count
rates still remains to be explained. Furthermore, we find that the GCR
modulation environment during the solar minimum P24/25 is significantly
different from previous solar minima in several aspects, including remarkably
low sunspot numbers, extremely low inclination of the heliospheric current
sheet, rare coronal mass ejections, weak interplanetary magnetic field and
turbulence. These changes are conducive to reducing the level of solar
modulation, providing a plausible explanation for the record-breaking GCR
intensities in interplanetary space.
|
Quantum teleportation, the faithful transfer of an unknown input state onto a
remote quantum system, is a key component in long distance quantum
communication protocols and distributed quantum computing. At the same time,
high frequency nano-optomechanical systems hold great promise as nodes in a
future quantum network, operating on-chip at low-loss optical telecom
wavelengths with long mechanical lifetimes. Recent demonstrations include
entanglement between two resonators, a quantum memory and microwave to optics
transduction. Despite these successes, quantum teleportation of an optical
input state onto a long-lived optomechanical memory is an outstanding
challenge. Here we demonstrate quantum teleportation of a polarization-encoded
optical input state onto the joint state of a pair of nanomechanical
resonators. Our protocol also makes it possible, for the first time, to store
and retrieve an arbitrary qubit state in a dual-rail encoded optomechanical quantum
memory. This work demonstrates the full functionality of a single quantum
repeater node, and presents a key milestone towards applications of
optomechanical systems as quantum network nodes.
|
We investigate both linear and nonlinear stability aspects of rigid motions
(resp. M\"obius transformations) of $\mathbb{S}^{n-1}$ among Sobolev maps from
$\mathbb{S}^{n-1}$ into $\mathbb{R}^n$. Unlike results of a similar flavour for
maps defined on domains of $\mathbb{R}^n$ and mapping into $\mathbb{R}^n$, not
only is an isometric (resp. conformal) deficit necessary in this more flexible
setting, but also a deficit measuring the distortion of $\mathbb{S}^{n-1}$
under the maps in consideration. The latter is defined as an associated
isoperimetric type of deficit. We mostly focus on the case $n=3$, where we also
explain why the estimates are optimal in their corresponding settings. In the
isometric case the estimate holds true also when $n=2$ and generalizes in
dimensions $n\geq 4$ as well, if one requires a priori boundedness in a certain
higher Sobolev norm. We also obtain linear stability estimates for both cases
in all dimensions. These can be regarded as Korn-type inequalities for the
combination of the quadratic form associated with the isometric (resp.
conformal) deficit on $\mathbb{S}^{n-1}$ and the isoperimetric one.
|
It was Fano who first classified Enriques-Fano threefolds. However, his
arguments appear to contain several gaps. In this paper, we will verify some of
his assertions through the use of modern techniques.
|
We investigate the Nichols algebras $\mathfrak{B}(V_{abe})$ arising from the
Yetter-Drinfeld category of Suzuki algebras. The $4n$- and $n^2$-dimensional
Nichols algebras, which first appeared in \cite{Andruskiewitsch2018}, are
obtained again via a different method, and a connection is established between
the Nichols algebra $\mathfrak{B}(V_{abe})$ and a class of combinatorial
numbers on the subgroups of symmetric groups.
|
We present a bulk data collection service, Harvest, for energy constrained
wireless sensor nodes. To increase spatial reuse and thereby decrease latency,
Harvest performs concurrent, pipelined exfiltration from multiple nodes to a
base station. To this end, it uses a distance-k coloring of the nodes, notably
with a constant number of colors, which yields a TDMA schedule whereby nodes
can communicate concurrently with low packet losses due to collision. This
coloring is based on a randomized CSMA approach which does not exploit location
knowledge. Given a bounded degree of the network, each node waits only $O(1)$
time to obtain a unique color among its distance-k neighbors, in contrast to
the traditional deterministic distributed distance-k vertex coloring wherein
each node waits $O(\Delta^{2})$ time to obtain a color.
Harvest offers the option of limiting memory use to only a small constant
number of bytes or of improving latency with increased memory use; it can be
used with or without additional mechanisms for reliability of message
forwarding. We experimentally evaluate the performance of Harvest using 51
motes in the Kansei testbed. We also provide theoretical as well as
TOSSIM-based comparison of Harvest with Straw, an extant data collection
service implemented for TinyOS platforms that uses one-node-at-a-time
exfiltration. For networks with more than 3 hops, Harvest reduces the latency
by at least 33% as compared to that of Straw.
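A centralized simulation sketch of the randomized distance-k coloring underlying Harvest's TDMA schedule; the retry loop stands in for the CSMA-based distributed protocol, and the palette size is an illustrative assumption:

```python
import random
from collections import deque

def distance_k_neighbors(adj, v, k):
    """All vertices within k hops of v (excluding v), by breadth-first search."""
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d < k:
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, d + 1))
    return seen - {v}

def distance_k_coloring(adj, k, n_colors, rng=random.Random(0)):
    color = {v: rng.randrange(n_colors) for v in adj}
    while True:
        conflicted = [v for v in adj
                      if any(color[u] == color[v]
                             for u in distance_k_neighbors(adj, v, k))]
        if not conflicted:
            return color
        for v in conflicted:               # re-draw after a detected collision
            color[v] = rng.randrange(n_colors)

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a 4-node path graph
print(distance_k_coloring(path, k=2, n_colors=6))
```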
|
We present a sparse Gauss-Newton solver for accelerated sensitivity analysis
with applications to a wide range of equilibrium-constrained optimization
problems. Dense Gauss-Newton solvers have shown promising convergence rates for
inverse problems, but the cost of assembling and factorizing the associated
matrices has so far been a major stumbling block. In this work, we show how the
dense Gauss-Newton Hessian can be transformed into an equivalent sparse matrix
that can be assembled and factorized much more efficiently. This leads to
drastically reduced computation times for many inverse problems, which we
demonstrate on a diverse set of examples. We furthermore show links between
sensitivity analysis and nonlinear programming approaches based on Lagrange
multipliers and prove equivalence under specific assumptions that apply to our
problem setting.
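A hedged sketch of one ingredient of such sensitivity analysis: for min_p f(x(p)) subject to an equilibrium constraint g(x, p) = 0, the objective gradient follows from a single sparse factorization of dg/dx (the standard adjoint method). The paper's specific sparse reformulation of the Gauss-Newton Hessian is not reproduced here:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def objective_gradient(dgdx, dgdp, dfdx):
    """df/dp for f(x(p)) with g(x, p) = 0; dgdx sparse (n x n),
    dgdp dense (n x m), dfdx dense (n,)."""
    solve = spla.splu(sp.csc_matrix(dgdx)).solve   # factorize once, reuse
    lam = solve(dfdx, trans='T')                   # adjoint: dgdx^T lam = dfdx
    return -dgdp.T @ lam                           # chain rule through g = 0

A = sp.eye(5, format='csc') * 2.0                  # toy dg/dx
print(objective_gradient(A, np.ones((5, 2)), np.arange(5.0)))  # [-5., -5.]
```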
|
Phase-field modeling -- a continuous approach to discontinuities -- is
gaining popularity for simulating rock fractures due to its ability to handle
complex, discontinuous geometry without an explicit surface tracking algorithm.
None of the existing phase-field models, however, incorporates the impact of
surface roughness on the mechanical response of fractures -- such as elastic
deformability and shear-induced dilation -- despite the importance of this
behavior for subsurface systems. To fill this gap, here we introduce the first
framework for phase-field modeling of rough rock fractures. The framework
transforms a displacement-jump-based discrete constitutive model for
discontinuities into a strain-based continuous model, without any additional
parameter, and then casts it into a phase-field formulation for frictional
interfaces. We illustrate the framework by constructing a particular
phase-field form employing a rock joint model originally formulated for
discrete modeling. The results obtained by the new formulation show excellent
agreement with those of a well-established discrete method for a variety of
problems ranging from shearing of a single discontinuity to compression of
fractured rocks. It is further demonstrated that the phase-field formulation
can well simulate complex crack growth from rough discontinuities.
Consequently, our phase-field framework provides an unprecedented bridge
between a discrete constitutive model for rough discontinuities -- common in
rock mechanics -- and the continuous finite element method -- standard in
computational mechanics -- without any algorithm to explicitly represent
discontinuity geometry.
|
The main bottleneck for attosecond science experiments is the lack of a
high-peak-power isolated attosecond pulse source; generating an intense
attosecond pulse is therefore currently one of the highest-priority goals. In
this paper, we review a TW-class parallel three-channel
waveform synthesizer for generating a gigawatt-scale soft-x-ray isolated
attosecond pulse (IAP) using high-order harmonic generation (HHG).
Simultaneously, using several stabilization methods, namely, the
low-repetition-rate laser carrier-envelope phase stabilization, Mach-Zehnder
interferometer, balanced optical cross-correlator, and beam-pointing
stabilizer, we demonstrate a stable 50-mJ three-channel optical-waveform
synthesizer with a peak power at the multi-TW level. This optical-waveform
synthesizer is capable of creating a stable intense optical field for
generating an intense continuum harmonic beam thanks to the successful
stabilization of all the parameters. Furthermore, the precision control of
shot-to-shot reproducible synthesized waveforms is achieved. Through the HHG
process employing a loose-focusing geometry, an intense shot-to-shot stable
supercontinuum (50-70 eV) is generated in an argon gas cell. This continuum
spectrum supports an IAP with a transform-limited duration of 170 as and a
submicrojoule pulse energy, which allows the generation of a GW-scale IAP.
Another supercontinuum in the soft-x-ray region with higher photon energy of
approximately 100-130 eV is also generated in neon gas from the synthesizer.
The transform-limited pulse duration is 106 as. According to this work, the
enhancement of HHG output through optimized waveform synthesis is
experimentally proved. The high-energy multicycle pulse with 10-Hz repetition
rate is proved to have the same controllability for optimized waveform
synthesis for HHG as few- or subcycle pulses from a 1-kHz laser.
|
In a previous work I constructed R-negative scissors states in the Two-Rotors
Model in which the rotors are two-sided classical bodies. Such states have
|K| = 1, 0 components and negative space parity. Here I extend this work by
including reflections in a plane through the rotors' symmetry axes, obtaining
also
R-negative states of positive parity. I then associate reflections and
R-operations with corresponding operations on internal noncollective variables
of the rotors. I evaluate the electromagnetic transition strengths of these states and
compare them with the corresponding strengths of R-positive scissors modes.
|
We comment on two formal proofs of Fermat's sum of two squares theorem,
written using the Mathematical Components libraries of the Coq proof assistant.
The first one follows Zagier's celebrated one-sentence proof; the second
follows David Christopher's more recent proof relying on partition-theoretic
arguments. Both formal proofs rely on a general property of involutions of
finite sets, of independent interest. The proof technique consists for the most
part of automating recurrent tasks (such as case distinctions and computations
on natural numbers) via ad hoc tactics. With the same method, we also provide a
formal proof of another classical result on primes of the form $a^2 + 2 b^2$.
|
Networks describe the often complex relationships between individual
actors. In this work, we address the question of how to determine whether a
parametric model, such as a stochastic block model or latent space model, fits
a dataset well and will extrapolate to similar data. We use recent results in
random matrix theory to derive a general goodness-of-fit test for dyadic data.
We show that our method, when applied to a specific model of interest, provides
a straightforward, computationally fast way of selecting parameters in a
number of commonly used network models. For example, we show how to select the
dimension of the latent space in latent space models. Unlike other network
goodness-of-fit methods, our general approach does not require simulating from
a candidate parametric model, which can be cumbersome with large graphs, and
eliminates the need to choose a particular set of statistics on the graph for
comparison. It also allows us to perform goodness-of-fit tests on partial
network data, such as Aggregated Relational Data. We show with simulations that
our method performs well in many situations of interest. We analyze several
empirically relevant networks and show that our method leads to improved
community detection algorithms. R code to implement our method is available on
Github.
|
We studied simple random-walk models with asymmetric time delays. Stochastic
simulations were performed for hyperbolic-tangent fitness functions; to
obtain analytical results, we approximated them by step functions. A novel
behavior has been observed. Namely, the mean position of a walker depends on
time delays. This is a joint effect of both stochasticity and time delays
present in the system. We also observed that by appropriately shifting the
fitness functions we may reverse the effect of time delays: the mean position
of the walker changes sign.
|
Travel time and speed estimation are part of many intelligent transportation
applications. Existing estimation approaches rely on either function fitting or
aggregation and represent different trade-offs between generalizability and
accuracy. Function-fitting approaches learn functions that map feature vectors
of, e.g., routes, to travel time or speed estimates, which enables
generalization to unseen routes. However, mapping functions are imperfect and
offer poor accuracy in practice. Aggregation-based approaches instead form
estimates by aggregating historical data, e.g., traversal data for routes. This
enables very high accuracy given sufficient data. However, they rely on
simplistic heuristics when insufficient data is available, yielding poor
generalizability. We present a Unifying approach to Travel time and speed
Estimation (UniTE) that combines function-fitting and aggregation-based
approaches into a unified framework that aims to achieve the generalizability
of function-fitting approaches and the accuracy of aggregation-based
approaches. An empirical study finds that an instance of UniTE can improve the
accuracies of travel speed distribution and travel time estimation by $40-64\%$
and $3-23\%$, respectively, compared to using function fitting or aggregation
alone.
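A minimal sketch of the unification idea: back off from an aggregation-based estimate to a fitted model as traversal data thins out. The blending rule (weight n/(n + n0)) is an illustrative assumption, not UniTE's actual combination operator:

```python
import numpy as np

def unified_travel_time(route_key, features, history, model, n0=10):
    traversals = history.get(route_key, [])
    fitted = model(features)                  # function-fitting estimate
    if not traversals:
        return fitted                         # no data: generalize via model
    aggregated = float(np.mean(traversals))   # aggregation-based estimate
    w = len(traversals) / (len(traversals) + n0)
    return w * aggregated + (1 - w) * fitted  # data-rich: trust aggregation

history = {("A", "B"): [310.0, 295.0, 330.0]}      # seconds per traversal
print(unified_travel_time(("A", "B"), [5.2], history,
                          model=lambda f: 60.0 * f[0]))
```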
|
We present a theoretical calculation of the influence of ultraviolet
radiative pumping on the excitation of the rotational levels of the ground
vibrational state for HD molecules under conditions of the cold diffuse
interstellar medium (ISM). Two main excitation mechanisms have been taken into
account in our analysis: (i) collisions with atoms and molecules and (ii)
radiative pumping by the interstellar ultraviolet (UV) radiation field. The
calculation of the radiative pumping rate coefficients $\Gamma_{\rm ij}$
corresponding to Drane's model of the field of interstellar UV radiation,
taking into account the self-shielding of HD molecules, is performed. We found
that the population of the first HD rotational level ($J = 1$) is determined
mainly by radiative pumping rather than by collisions if the thermal gas
pressure $p_{\rm
th}\le10^4\left(\frac{I_{\rm{UV}}}{1}\right)\,\mbox{K\,cm}^{-3}$ and the column
density of HD satisfies $\log N({\rm{HD}})<15$. Under this constraint, the
populations of the HD rotational levels turn out to be a more sensitive
indicator of the UV radiation intensity than the fine-structure levels of
atomic carbon. We suggest that taking into account radiative pumping of HD
rotational levels may be important for the problem of the cooling of primordial
gas at high redshift: ultraviolet radiation from first stars can increase the
rate of HD cooling of the primordial gas in the early Universe.
|
This submission has been removed by arXiv administrators as the submitter did
not have the right to agree to the license at the time of submission.
|
This study aims at revealing the effect of composition on the mechanical and
antibacterial properties of hydroxyapatite/gray Titania coatings for biomedical
applications. HAp is a bioceramic material used as a plasma-sprayed coating to
promote osseointegration of femoral stems. Biomaterial coatings are fabricated
mostly using atmospheric plasma spray (APS). However, the conventional plasma
spray process requires hydroxyapatite powder with good flow, and to prepare
such free-flowing powder, agglomeration techniques such as spray drying,
fusing, and crushing are used. Therefore, it is impossible to spray nano-powder
using conventional methods. Here, we designed a suspension-feeding system to
feed nanoparticles using a liquid carrier. Suspension plasma spray (SPS)
successfully deposited a homogeneous HAp/gray Titania coating with low porosity
on the surface of titanium substrates. The microstructure of coatings with
different compositions was then characterized using scanning electron
microscopy, X-ray diffraction, and Raman spectroscopy to identify the crystal
structure. All results consistently demonstrated that SPS could transform Ti2O3
into TiO2 with mixed Magneli phases, such as Ti4O7 and Ti3O5, which usually
show photocatalytic activity. Interfacial strength, hardness, Young's modulus,
and fracture toughness were also improved with a high concentration of TiO2.
An antibacterial test with E. coli under LED light revealed that the SPSed
HAp/gray Titania coating could significantly enhance antibacterial properties.
The enhanced antibacterial properties can possibly be attributed to the
increased Magneli phases and to better bacterial adhesion caused by the
hydrophilic properties of submicron-size particles. SPS can thus fabricate a
visible-light-responsive antibacterial coating, which can be used for medical
devices.
|
In this paper, we investigate the hemodynamics of a left atrium (LA) by
proposing a computational model suitable to provide physically meaningful fluid
dynamics indications and detailed blood flow characterization. In particular,
we consider the incompressible Navier-Stokes equations in Arbitrary Lagrangian
Eulerian (ALE) formulation to deal with the LA domain under prescribed motion.
A Variational Multiscale (VMS) method is adopted to obtain a stable formulation
of the Navier-Stokes equations discretized by means of the Finite Element
method and to account for turbulence modeling based on Large Eddy Simulation
(LES). The aim of this paper is twofold: on one hand to improve the general
understanding of blood flow in the human LA in normal conditions; on the other,
to analyse the effects of the turbulence VMS-LES method on a situation of blood
flow which is neither laminar, nor fully turbulent, but rather transitional as
in LA. Our results suggest that if relatively coarse meshes are adopted, the
additional stabilization terms introduced by the VMS-LES method allow
transitional effects and cycle-to-cycle blood flow variations to be better
predicted than with the standard SUPG stabilization method.
|
The problem of showing the existence of localised modes in nonlinear lattices
has attracted considerable effort from both the physical and the mathematical
viewpoint, where a rich variety of methods has been employed. In
this paper we prove that a fixed point theory approach based on the celebrated
Schauder's Fixed Point Theorem may provide a general method to establish
concisely not only the existence of localised structures but also a required
rate of spatial localisation. As a case study we consider lattices of coupled
particles with nonlinear nearest neighbour interaction and prove the existence
of exponentially spatially localised breathers exhibiting either even-parity or
odd-parity symmetry under necessary non-resonant conditions, accompanied by a
proof of energy bounds for the solutions.
|
We have developed several autotuning benchmarks in CUDA that take into
account performance-relevant source-code parameters and reach near
peak-performance on various GPU architectures. We have used them during the
development and evaluation of a novel search method for tuning space proposed
in [1]. With our framework Kernel Tuning Toolkit, freely available on GitHub,
we measured computation times and hardware performance counters on several GPUs
for the complete tuning spaces of five benchmarks. These data, which we provide
here, might benefit research on search algorithms for the tuning spaces of GPU
codes or on the relation between applied code optimizations, hardware
performance counters, and GPU kernels' performance.
Moreover, we describe the scripts we used for robust evaluation of our
searcher and comparison to others in detail. In particular, the script that
simulates the tuning, i.e., replaces time-demanding compiling and executing the
tuned kernels with a quick reading of the computation time from our measured
data, makes it possible to inspect the convergence of tuning search over a
large number of experiments. These scripts, freely available with our other
codes, make it easier to experiment with search algorithms and compare them in
a robust way.
During our research, we generated models for predicting values of performance
counters from values of tuning parameters of our benchmarks. Here, we provide
the models themselves and describe the scripts we implemented for their
training. These data might benefit researchers who want to reproduce or build
on our research.
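A sketch of the simulated-tuning idea described above: the searcher replaces compilation and execution with a lookup into the exhaustively measured data. The data layout (a dict from configuration tuples to runtimes) is an assumption; the real scripts ship with the Kernel Tuning Toolkit:

```python
import random

def simulate_tuning(measured, n_iters=100, rng=random.Random(0)):
    """measured: {config: runtime}. Runs a random searcher over the measured
    tuning space and returns (best runtime, best config) without executing
    any kernel."""
    sampled = rng.sample(list(measured), min(n_iters, len(measured)))
    return min((measured[c], c) for c in sampled)

space = {(bs, unroll): 1.0 / (bs * (1 + 0.1 * unroll))   # toy runtime model
         for bs in (64, 128, 256) for unroll in (1, 2, 4)}
print(simulate_tuning(space, n_iters=5))
```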
|
We first consider a $d$-dimensional branching Brownian motion (BBM) evolving
in an expanding ball, where the particles are killed at the boundary of the
ball, and the expansion is subdiffusive in time. We study the large-time
asymptotic behavior of the mass inside the ball, and obtain a large-deviation
(LD) result as time tends to infinity on the probability that the mass is
atypically small. Then, we consider the problem of BBM among mild Poissonian
obstacles, where a random `trap field' in $\mathbb{R}^d$ is created via a
Poisson point process. The trap field consists of balls of fixed radius
centered at the atoms of the Poisson point process. The mild obstacle rule is
that when a particle is inside a trap, it branches at a lower rate, which is
allowed to be zero, whereas when outside the trap field it branches at the
normal rate. As an application of our LD result on the mass of BBM inside
expanding balls, we prove upper bounds on the LD probabilities for the mass of
BBM among mild obstacles, which we then use along with the Borel-Cantelli lemma
to prove the corresponding strong law of large numbers. Our results are
quenched, that is, they hold in almost every environment with respect to the
Poisson point process.
|
Exact solutions are obtained in the quadratic theory of gravity with a scalar
field for wave-like models of space-time that possess spatial homogeneity
symmetry and allow the integration of the equations of motion of test particles
in the
Hamilton-Jacobi formalism by the method of separation of variables with
separation of wave variables (Shapovalov spaces of type II). The form of the
scalar field and the scalar field functions included in the Lagrangian of the
theory are found. The obtained exact solutions can describe the primary
gravitational wave disturbances in the Universe (primary gravitational waves).
|
Massive scalar fields provide excellent dark matter candidates, whose
dynamics are often explored analytically and numerically using nonrelativistic
Schr\"{o}dinger-Poisson (SP) equations in a cosmological context. In this
paper, starting from the nonlinear and fully relativistic Klein-Gordon-Einstein
(KGE) equations in an expanding universe, we provide a systematic framework for
deriving the SP equations, as well as relativistic corrections to them, by
integrating out `fast modes' and including nonlinear metric and matter
contributions. We provide explicit equations for the leading-order relativistic
corrections, which provide insight into deviations from the SP equations as the
system approaches the relativistic regime. Upon including the leading-order
corrections, our equations are applicable beyond the domain of validity of the
SP system, and are simpler to use than the full KGE case in some contexts. As a
concrete application, we calculate the mass-radius relationship of solitons in
scalar dark matter and accurately capture the deviations of this relationship
from the SP system towards the KGE one.
|
The relationship between classical and quantum mechanics is usually
understood via the limit $\hbar \rightarrow 0$. This is the underlying idea
behind the quantization of classical objects. The apparent incompatibility of
general relativity with quantum mechanics and quantum field theory has
challenged this basic idea for many decades. We recently showed the emergence
of classical dynamics for very general quantum lattice systems with mean-field
interactions, without (complete) suppression of their quantum features, in the
infinite volume limit. This leads to a theoretical framework in which the
classical and quantum worlds are entangled. Such an entanglement is noteworthy
and is a consequence of the highly non-local character of mean-field
interactions. Therefore, this phenomenon should not be restricted to systems
with mean-field interactions only, but should also appear in presence of
interactions that are sufficiently long-range, yielding effective, classical
background fields, in the spirit of the Higgs mechanism of quantum field
theory. In order to present the result in a less abstract way than in its
original version, here we apply it to a concrete, physically relevant, example
and discuss, by this means, various important aspects of our general approach.
The model we consider is not exactly solvable and the particular results
obtained are new.
|
Aims: We investigate a new method for obtaining the plasma parameters of
solar prominences observed in the Mg II h&k spectral lines by comparing line
profiles from the IRIS satellite to a bank of profiles computed with a
one-dimensional non-local thermodynamic equilibrium (non-LTE) radiative
transfer code.
Methods: Using a grid of 1007 one-dimensional non-LTE radiative transfer
models, we apply this new method, matching computed spectra to observed line
profiles while accounting for line core shifts not present in the models. The
prominence observations were carried out by the IRIS satellite on 19 April
2018.
Results: The prominence is very dynamic with many flows. The models are able
to recover satisfactory matches in areas of the prominence where single line
profiles are observed. We recover: mean temperatures of 6000 to 50,000 K; mean
pressures of 0.01 to 0.5 dyne cm$^{-2}$; column masses of 3.7$\times10^{-8}$ to
5$\times10^{-4}$ g cm$^{-2}$; a mean electron density of 7.3$\times10^{8}$ to
1.8$\times10^{11}$ cm$^{-3}$; and an ionisation degree
${n_\text{HII}}/{n_\text{HI}}=0.03 - 4500$. The highest values for the
ionisation degree are found in areas where the line of sight crosses mostly
plasma from the PCTR, correlating with high mean temperatures and
correspondingly no H$\alpha$ emission.
Conclusions: This new method naturally returns information on how closely the
observed and computed profiles match, allowing the user to identify areas where
no satisfactory match between models and observations can be obtained. Regions
where satisfactory fits were found were more likely to contain a model
encompassing a PCTR. The line core shift can also be recovered from this new
method, and it shows a good qualitative match with the line core shift found
by the quantile method. This demonstrates the effectiveness of the
approach to line core shifts in the new method.
|
The objective of this work is to find temporal boundaries between signs in
continuous sign language. Motivated by the paucity of annotation available for
this task, we propose a simple yet effective algorithm to improve segmentation
performance on unlabelled signing footage from a domain of interest. We make
the following contributions: (1) We motivate and introduce the task of
source-free domain adaptation for sign language segmentation, in which labelled
source data is available for an initial training phase, but is not available
during adaptation. (2) We propose the Changepoint-Modulated Pseudo-Labelling
(CMPL) algorithm to leverage cues from abrupt changes in motion-sensitive
feature space to improve pseudo-labelling quality for adaptation. (3) We
showcase the effectiveness of our approach for category-agnostic sign
segmentation, transferring from the BSLCORPUS to the BSL-1K and
RWTH-PHOENIX-Weather 2014 datasets, where we outperform the prior state of the
art.
|
This paper presents a mathematical dynamic model of a quadrotor unmanned aerial
vehicle (QUAV) by using the symbolic regression approach and then proposes a
hierarchical control scheme for trajectory tracking. The symbolic regression
approach is capable of constructing analytical quadrotor dynamic equations only
through the collected data, which relieves the burden of first principle
modeling. To facilitate position tracking control of a QUAV, the design of the
controller is decomposed into two parts: a proportional-integral controller for
the position subsystem is first designed to obtain the desired horizontal
position, and the backstepping method for the attitude subsystem is developed
to ensure that the Euler angles and the altitude quickly converge to the
reference values. The effectiveness is verified through experiments on a
benchmark multicopter simulator.
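A sketch of the outer-loop PI position controller from the hierarchical scheme: it maps horizontal position error to references for the inner attitude loop (the gains and the direct error-to-reference mapping are illustrative assumptions):

```python
class PIPositionController:
    def __init__(self, kp=1.2, ki=0.05, dt=0.01):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = [0.0, 0.0]                  # accumulated x/y error

    def step(self, pos, pos_ref):
        cmd = []
        for i in (0, 1):                            # x and y channels
            err = pos_ref[i] - pos[i]
            self.integral[i] += err * self.dt
            cmd.append(self.kp * err + self.ki * self.integral[i])
        return cmd  # fed to the backstepping attitude loop as desired tilt

ctrl = PIPositionController()
print(ctrl.step(pos=(0.0, 0.0), pos_ref=(1.0, -0.5)))
```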
|
We link conditional weak mixing and ergodicity of the tensor product in Riesz
spaces. In particular, we characterise conditional weak mixing of a conditional
expectation preserving system by the ergodicity of its tensor product with
itself or other ergodic systems. In order to achieve this we characterise the
components of the weak order units in the tensor product of two Dedekind
complete Riesz spaces with weak order units.
|
Sum rules for structure functions and their twist-2 relations have important
roles in constraining their magnitudes and $x$ dependencies and in studying
higher-twist effects. The Wandzura-Wilczek (WW) relation and the
Burkhardt-Cottingham (BC) sum rule are such examples for the polarized
structure functions $g_1$ and $g_2$. Recently, new twist-3 and twist-4 parton
distribution functions were proposed for spin-1 hadrons, so that it became
possible to investigate spin-1 structure functions including higher-twist ones.
We show in this work that an analogous twist-2 relation and a sum rule exist
for the tensor-polarized parton distribution functions $f_{1LL}$ and $f_{LT}$,
where $f_{1LL}$ is a twist-2 function and $f_{LT}$ is a twist-3 one. Namely,
the twist-2 part of $f_{LT}$ is expressed by an integral of $f_{1LL}$ (or
$b_1$) and the integral of the function $f_{2LT} = (2/3) f_{LT} -f_{1LL}$ over
$x$ vanishes. If the parton-model sum rule for $f_{1LL}$ ($b_1$) is applied by
assuming vanishing tensor-polarized antiquark distributions, another sum rule
also exists for $f_{LT}$ itself. These relations should be valuable for
studying tensor-polarized distribution functions of spin-1 hadrons and for
separating twist-2 components from higher-twist terms, as the WW relation and
BC sum rule have been used for investigating $x$ dependence and higher-twist
effects in $g_2$. In deriving these relations, we indicate that four twist-3
multiparton distribution functions $F_{LT}$, $G_{LT}$, $H_{LL}^\perp$, and
$H_{TT}$ exist for tensor-polarized spin-1 hadrons. These multiparton
distribution functions are also interesting to probe multiparton correlations
in spin-1 hadrons.
|
We present a work-in-progress approach to improving driver attentiveness in
cars provided with automated driving systems. The approach is based on a
control loop that monitors the driver's biometrics (eye movement, heart rate,
etc.) and the state of the car; analyses the driver's attentiveness level using
a deep neural network; plans driver alerts and changes in the speed of the car
using a formally verified controller; and executes this plan using actuators
ranging from acoustic and visual to haptic devices. The paper presents (i) the
self-adaptive system formed by this monitor-analyse-plan-execute (MAPE) control
loop, the car and the monitored driver, and (ii) the use of probabilistic model
checking to synthesise the controller for the planning step of the MAPE loop.
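A schematic of one iteration of the MAPE loop described above; the component callables are placeholders standing in for the deep-neural-network analyser and the formally verified planner:

```python
def mape_step(monitor, analyse, plan, execute):
    biometrics, car_state = monitor()         # Monitor: driver + car sensors
    attentiveness = analyse(biometrics)       # Analyse: DNN attentiveness score
    actions = plan(attentiveness, car_state)  # Plan: alerts / speed changes
    execute(actions)                          # Execute: acoustic/visual/haptic

mape_step(monitor=lambda: ({"heart_rate": 64}, {"speed_kmh": 92}),
          analyse=lambda b: 0.7,              # placeholder score in [0, 1]
          plan=lambda a, s: ["haptic_alert"] if a < 0.8 else [],
          execute=print)
```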
|