Floquet engineering is the concept of tailoring a system by a periodic drive.
It has been highly successful in opening up new classes of Hamiltonians to
study with ultracold atoms in optical lattices, such as artificial gauge fields,
topological band structures and density-dependent tunneling. Furthermore,
driven systems provide new physics without a static counterpart, such as anomalous
Floquet topological insulators. In this review article, we provide an overview
of the exciting developments in the field and discuss the current challenges
and perspectives.
|
Convex clustering is an attractive clustering algorithm with favorable
properties such as efficiency and optimality owing to its convex formulation.
It is thought to generalize both k-means clustering and agglomerative
clustering. However, it is not known whether convex clustering preserves
desirable properties of these algorithms. A common expectation is that convex
clustering may learn difficult cluster types such as non-convex ones. Current
understanding of convex clustering is limited to only consistency results on
well-separated clusters. We present new insights into its solutions. We prove
that convex clustering can only learn convex clusters. We then show that the
clusters have disjoint bounding balls with significant gaps. We further
characterize the solutions, regularization hyperparameters, inclusterable cases
and consistency.
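For reference, the sum-of-norms formulation of convex clustering analysed here is the convex program
$$\min_{u_1,\dots,u_n}\ \frac{1}{2}\sum_{i=1}^{n}\|x_i-u_i\|_2^2+\lambda\sum_{i<j}w_{ij}\|u_i-u_j\|_2,$$
where samples $x_i$ whose optimal centroids $u_i$ coincide share a cluster; the path in $\lambda$ interpolates between $n$ singleton clusters at $\lambda=0$ and a single cluster as $\lambda\to\infty$.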
|
In this paper we derive sharp lower and upper bounds for the covariance of
two bounded random variables when knowledge about their expected values,
variances or both is available. When only the expected values are known, our
result can be viewed as an extension of the Bhatia-Davis Inequality for
variances. We also provide a number of different ways to standardize
covariance. For a pair of binary random variables, one of these standardized
measures of covariation agrees with a frequently used measure of dependence
between genetic variants.
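For context, the Bhatia-Davis inequality referred to above states that a random variable $X$ with $m\le X\le M$ and $\mathbb{E}[X]=\mu$ satisfies
$$\operatorname{Var}(X)\le(M-\mu)(\mu-m),$$
and the bounds derived here extend this kind of statement to the covariance of two bounded random variables.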
|
The production, application, and/or measurement of polarised X-/gamma rays
are key to the fields of synchrotron science and X-/gamma-ray astronomy. The
design, development and optimisation of experimental equipment utilised in
these fields typically relies on the use of Monte Carlo radiation transport
modelling toolkits such as Geant4. In this work the Geant4 "G4LowEPPhysics"
electromagnetic physics constructor has been reconfigured to offer a "best set"
of electromagnetic physics models for studies exploring the transport of low
energy polarised X-/gamma rays. An overview of the physics models implemented
in "G4LowEPPhysics", and it's experimental validation against Compton X-ray
polarimetry measurements of the BL38B1 beamline at the SPring-8 synchrotron
(Sayo, Japan) is reported. "G4LowEPPhysics" is shown to be able to reproduce
the experimental results obtained at the BL38B1 beamline (SPring-8) to within a
level of accuracy on the same order as Geant4's X-/gamma ray interaction
cross-sectional data uncertainty (approximately $\pm$ 5 \%).
|
The overwhelming amount of biomedical scientific texts calls for the
development of effective language models able to tackle a wide range of
biomedical natural language processing (NLP) tasks. The most recent dominant
approaches are domain-specific models, initialized with general-domain textual
data and then trained on a variety of scientific corpora. However, it has been
observed that for specialized domains in which large corpora exist, training a
model from scratch with just in-domain knowledge may yield better results.
Moreover, the increasing focus on the compute costs for pre-training recently
led to the design of more efficient architectures, such as ELECTRA. In this
paper, we propose a pre-trained domain-specific language model, called
ELECTRAMed, suited for the biomedical field. The novel approach inherits the
learning framework of the general-domain ELECTRA architecture, as well as its
computational advantages. Experiments performed on benchmark datasets for
several biomedical NLP tasks support the usefulness of ELECTRAMed, which sets
the novel state-of-the-art result on the BC5CDR corpus for named entity
recognition, and provides the best outcome in 2 out of the 5 runs of the 7th
BioASQ-factoid Challenge for the question answering task.
|
Payment channel networks are a promising approach to improve the scalability
of cryptocurrencies: they allow transactions to be performed in a peer-to-peer
fashion, along multi-hop routes in the network, without requiring consensus on
the blockchain. However, during the discovery of cost-efficient routes for the
transaction, critical information may be revealed about the transacting
entities.
This paper initiates the study of privacy-preserving route discovery
mechanisms for payment channel networks. In particular, we present LightPIR, an
approach which allows a source to efficiently discover a shortest path to its
destination without revealing any information about the endpoints of the
transaction. The two main observations which allow for an efficient solution in
LightPIR are that: (1) surprisingly, hub labelling algorithms - which were
developed to preprocess "street network like" graphs so one can later
efficiently compute shortest paths - also work well for the graphs underlying
payment channel networks, and that (2) hub labelling algorithms can be directly
combined with private information retrieval.
LightPIR relies on a simple hub labeling heuristic on top of existing hub
labeling algorithms which leverages the specific topological features of
cryptocurrency networks to further minimize storage and bandwidth overheads. In
a case study considering the Lightning network, we show that our approach is an
order of magnitude more efficient compared to a privacy-preserving baseline
based on using private information retrieval on a database that stores all
pairs shortest paths.
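To make observation (1) concrete, the sketch below (illustrative Python, not LightPIR's implementation) shows how a hub labelling answers a shortest-path distance query: by the cover property, the $u$-$v$ distance is the minimum of $d(u,h)+d(h,v)$ over hubs $h$ common to both labels.

```python
# Minimal sketch of a hub-labelling distance query (illustrative only;
# LightPIR's labelling heuristic and PIR layer are not reproduced here).

def hub_label_query(label_u, label_v):
    """Each label maps hub -> distance from the labelled node to that hub.

    For a correct hub labelling, the shortest u-v distance is the minimum of
    d(u, h) + d(h, v) over hubs h common to both labels (cover property).
    """
    best = float("inf")
    for hub, d_u in label_u.items():
        d_v = label_v.get(hub)
        if d_v is not None:
            best = min(best, d_u + d_v)
    return best

# Toy example with hand-made labels on a path graph a - b - c:
labels = {
    "a": {"a": 0, "b": 1},
    "b": {"b": 0},
    "c": {"b": 1, "c": 0},
}
print(hub_label_query(labels["a"], labels["c"]))  # 2
```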
|
As an essential characteristic of fractional calculus, the memory effect
serves as a key factor in dealing with diverse practical issues, and has thus
received extensive attention since its introduction. By combining the
fractional derivative with memory effects and grey modeling theory, this paper
aims to construct a unified framework for the commonly used fractional grey
models already in place. In particular, by taking different kernel and
normalization functions, this framework can deduce some other new fractional
grey models. To further improve the prediction performance, four popular
intelligent algorithms are employed to determine the emerging coefficients of
the unified fractional grey model, UFGM(1,1). Two published cases are then utilized to verify the
validity of the UFGM(1,1) model and explore the effects of fractional
accumulation order and initial value on the prediction accuracy, respectively.
Finally, this model is also applied to two real examples to further
demonstrate its efficacy and to show how to use the unified framework
in practical applications.
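As a concrete reference point, a minimal sketch of the fractional-order accumulation underlying such fractional grey models is given below (the kernel and normalization choices of the unified framework are not reproduced; $r=1$ recovers the ordinary first-order accumulation of GM(1,1)).

```python
# Sketch of the r-th order accumulated generating operation (r-AGO) used by
# fractional grey models; coefficients come from the Gamma-function
# generalisation of the binomial coefficient.
import numpy as np
from scipy.special import binom

def fractional_accumulation(x, r):
    """r-th order accumulation of a series x (r may be non-integer)."""
    x = np.asarray(x, dtype=float)
    out = np.empty(len(x))
    for j in range(len(x)):
        i = np.arange(j + 1)
        # weight of x[i] in the accumulated value at index j
        out[j] = np.sum(binom(j - i + r - 1, j - i) * x[i])
    return out

x = [1.0, 2.0, 3.0, 4.0]
print(fractional_accumulation(x, 1.0))   # [1, 3, 6, 10] -- ordinary 1-AGO
print(fractional_accumulation(x, 0.5))   # fractional-order accumulation
```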
|
B-splines are widely used in the fields of reverse engineering and
computer-aided design, due to their superior properties. Traditional B-spline
surface interpolation algorithms usually assume regularity of the data
distribution. In this paper, we introduce a novel B-spline surface
interpolation algorithm: KPI, which can interpolate sparsely and non-uniformly
distributed data points. As a two-stage algorithm, our method generates the
dataset out of the sparse data using Kriging, and uses the proposed KPI
(Key-Point Interpolation) method to generate the control points. Our algorithm
can be extended to higher dimensional data interpolation, such as
reconstructing dynamic surfaces. We apply the method to interpolating the
temperature of Shanxi Province. The generated dynamic surface accurately
interpolates the temperature data provided by the weather stations, and the
preserved dynamic characteristics can be useful for meteorology studies.
|
Software projects are regularly updated with new functionality and bug fixes
through so-called releases. In recent years, many software projects have been
shifting to shorter release cycles and this can affect the bug handling
activity. Past research has focused on the impact of switching from traditional
to rapid release cycles with respect to bug handling activity, but the effect
of the rapid release cycle duration has not yet been studied. We empirically
investigate releases of 420 open source projects with rapid release cycles to
understand the effect of variable and rapid release cycle durations on bug
handling activity. We group the releases of these projects into five categories
of release cycle durations. For each project, we investigate how the sequence
of releases is related to bug handling activity metrics and we study the effect
of the variability of cycle durations on bug fixing. Our results did not reveal
any statistically significant difference for the studied bug handling activity
metrics in the presence of variable rapid release cycle durations. This
suggests that the duration of fast release cycles does not seem to impact bug
handling activity.
|
We derive major parts of the eigenvalue spectrum of the operators on the
squashed seven-sphere that appear in the compactification of eleven-dimensional
supergravity. These spectra determine the mass spectrum of the fields in
$AdS_4$ and are important for the corresponding ${\mathcal N} =1$
supermultiplet structure. This work is a continuation of the work in [1] where
the complete spectrum of irreducible isometry representations of the fields in
$AdS_4$ was derived for this compactification. Some comments are also made
concerning the $G_2$ holonomy and its implications for the structure of the
operator equations on the squashed seven-sphere.
|
We present a quantum error correcting code with dynamically generated logical
qubits. When viewed as a subsystem code, the code has no logical qubits.
Nevertheless, our measurement patterns generate logical qubits, allowing the
code to act as a fault-tolerant quantum memory. Our particular code gives a
model very similar to the two-dimensional toric code, but each measurement is a
two-qubit Pauli measurement.
|
Dilatancy associated with fault slip produces a transient pore pressure drop
which increases frictional strength. This effect is analysed in a steadily
propagating rupture model that includes frictional weakening, slip-dependent
fault dilation and fluid flow. Dilatancy is shown to increase the stress
intensity factor required to propagate the rupture tip. With increasing rupture
speed, an undrained (strengthened) region develops near the tip and extends
beyond the frictionally weakened zone. Away from the undrained region, pore
fluid diffusion gradually recharges the fault and strength returns to the
drained, weakened value. For sufficiently large rupture dimensions, the
dilation-induced strength increase near the tip is equivalent to an increase in
toughness that is proportional to the square root of the rupture speed. In
general, dilation has the effect of increasing the stress required for rupture
growth by decreasing the stress drop along the crack. Thermal pressurisation
has the potential to compensate for the dilatant strengthening effect, at the
expense of an increased heating rate, which might lead to premature frictional
melting. Using reasonable laboratory parameters, the dilatancy-toughening
effect leads to rupture dynamics that is quantitatively consistent with the
dynamics of observed slow slip events in subduction zones.
|
We develop a geometrical micro-local analysis of contact Anosov flows, such as
the geodesic flow on a negatively curved manifold. We use the method of wave-packet
transform discussed in arXiv:1706.09307 and observe that the transfer operator
is well approximated (in the high frequency limit) by the quantization of an
induced transfer operator acting on sections of some vector bundle on the
trapped set. This gives a few important consequences: The discrete eigenvalues
of the generator of transfer operators, called Ruelle spectrum, are structured
into vertical bands. If the right-most band is isolated from the others, most
of the Ruelle spectrum in it concentrates along a line parallel to the imaginary
axis and, further, the density satisfies a Weyl law as the imaginary part tends
to infinity. Some of these results were announced in arXiv:1301.5525.
|
While sophisticated Visual Question Answering models have achieved remarkable
success, they tend to answer questions only according to superficial
correlations between question and answer. Several recent approaches have been
developed to address this language priors problem. However, most of them
predict the correct answer according to one best output without checking the
authenticity of answers. Besides, they only explore the interaction between
image and question, ignoring the semantics of candidate answers. In this paper,
we propose a select-and-rerank (SAR) progressive framework based on Visual
Entailment. Specifically, we first select the candidate answers relevant to the
question or the image, then we rerank the candidate answers by a visual
entailment task, which verifies whether the image semantically entails the
synthetic statement of the question and each candidate answer. Experimental
results show the effectiveness of our proposed framework, which establishes a
new state-of-the-art accuracy on VQA-CP v2 with a 7.55% improvement.
|
The architecture of circuital quantum computers requires computing layers
devoted to compiling high-level quantum algorithms into lower-level circuits of
quantum gates. The general problem of quantum compiling is to approximate any
unitary transformation that describes the quantum computation, as a sequence of
elements selected from a finite base of universal quantum gates. The existence
of an approximating sequence of one qubit quantum gates is guaranteed by the
Solovay-Kitaev theorem, which implies sub-optimal algorithms to establish it
explicitly. Since a unitary transformation may require significantly different
gate sequences, depending on the base considered, such a problem is of great
complexity and does not admit an efficient approximating algorithm. Therefore,
traditional approaches are time-consuming tasks, unsuitable to be employed
during quantum computation. We exploit the deep reinforcement learning method
as an alternative strategy, which has a significantly different trade-off
between search time and exploitation time. Deep reinforcement learning allows
creating single-qubit operations in real time, after an arbitrarily long training
period during which a strategy for creating sequences to approximate unitary
operators is built. The deep reinforcement learning based compiling method
allows for fast computation times, which could in principle be exploited for
real-time quantum compiling.
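To make the underlying search problem concrete, the toy sketch below approximates a target single-qubit unitary by sequences over the finite base $\{H, T\}$, scored with a phase-insensitive fidelity; a greedy search stands in for the trained deep reinforcement learning policy, which is not reproduced here.

```python
# Toy version of the sequence-search problem the deep-RL compiler tackles.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])   # T gate
BASE = {"H": H, "T": T}

def fidelity(u, v):
    """Phase-insensitive overlap |Tr(U^dagger V)| / 2 for 2x2 unitaries."""
    return abs(np.trace(u.conj().T @ v)) / 2

def apply_sequence(names):
    u = np.eye(2, dtype=complex)
    for name in names:
        u = BASE[name] @ u
    return u

# Greedy baseline: at each step append the gate that most improves fidelity
# (an RL policy replaces this myopic choice in the paper's approach).
target = apply_sequence(["H", "T", "H", "T", "T"])
seq, u = [], np.eye(2, dtype=complex)
for _ in range(10):
    name = max(BASE, key=lambda g: fidelity(target, BASE[g] @ u))
    u = BASE[name] @ u
    seq.append(name)
print(seq, fidelity(target, u))
```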
|
We solve the large deviations of the Kardar-Parisi-Zhang (KPZ) equation in
one dimension at short time by introducing an approach which combines field
theoretical, probabilistic and integrable techniques. We expand the program of
the weak noise theory, which maps the large deviations onto a non-linear
hydrodynamic problem, and unveil its complete solvability through a connection
to the integrability of the Zakharov-Shabat system. Exact solutions, depending
on the initial condition of the KPZ equation, are obtained using the inverse
scattering method and a Fredholm determinant framework recently developed.
These results, explicit in the case of the droplet geometry, open the path to
obtain the complete large deviations for general initial conditions.
|
Grounding natural language instructions on the web to perform previously
unseen tasks enables accessibility and automation. We introduce a task and
dataset to train AI agents from open-domain, step-by-step instructions
originally written for people. We build RUSS (Rapid Universal Support Service)
to tackle this problem. RUSS consists of two models: First, a BERT-LSTM with
pointers parses instructions to ThingTalk, a domain-specific language we design
for grounding natural language on the web. Then, a grounding model retrieves
the unique IDs of any webpage elements requested in ThingTalk. RUSS may
interact with the user through a dialogue (e.g. ask for an address) or execute
a web operation (e.g. click a button) inside the web runtime. To augment
training, we synthesize natural language instructions mapped to ThingTalk. Our
dataset consists of 80 different customer service problems from help websites,
with a total of 741 step-by-step instructions and their corresponding actions.
RUSS achieves 76.7% end-to-end accuracy predicting agent actions from single
instructions. It outperforms state-of-the-art models that directly map
instructions to actions without ThingTalk. Our user study shows that RUSS is
preferred by actual users over web navigation.
|
The goal of this paper is to open up a new research direction aimed at
understanding the power of preprocessing in speeding up algorithms that solve
NP-hard problems exactly. We explore this direction for the classic Feedback
Vertex Set problem on undirected graphs, leading to a new type of graph
structure called antler decomposition, which identifies vertices that belong to
an optimal solution. It is an analogue of the celebrated crown decomposition
which has been used for Vertex Cover. We develop the graph structure theory
around such decompositions and develop fixed-parameter tractable algorithms to
find them, parameterized by the number of vertices for which they witness
presence in an optimal solution. This reduces the search space of
fixed-parameter tractable algorithms parameterized by the solution size that
solve Feedback Vertex Set.
|
Experiments have shown that hepatitis C virus (HCV) infections in vitro
disseminate both distally via the release and diffusion of cell-free virus
through the medium, and locally via direct, cell-to-cell transmission. To
determine the relative contribution of each mode of infection to HCV
dissemination, we developed an agent-based model (ABM) that explicitly
incorporates both distal and local modes of infection. The ABM tracks the
concentration of extracellular infectious virus in the supernatant and the
number of intracellular HCV RNA segments within each infected cell over the
course of simulated in vitro HCV infections. Experimental data for in vitro HCV
infections conducted in the presence and absence of free-virus neutralizing
antibodies were used to validate the ABM and constrain the values of its
parameters. We found that direct, cell-to-cell infection accounts for 99%
(84%$-$100%, 95% credible interval) of infection events, making it the dominant
mode of HCV dissemination in vitro. Yet, when infection via the free-virus
route is blocked, a 57% reduction in the number of infection events at 72 hpi
is observed experimentally; a result consistent with that found by our ABM.
Taken together, these findings suggest that while HCV spread via cell-free
virus contributes little to the total number of infection events in vitro, it
plays a critical role in enhancing cell-to-cell HCV dissemination by providing
access to distant, uninfected areas, away from the already established large
infection foci.
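A deliberately minimal sketch of an ABM with the two transmission routes is given below; the grid size, rates, and probabilities are illustrative placeholders, not the calibrated parameters of the study.

```python
# Toy agent-based model with both HCV spread modes: local cell-to-cell
# transmission plus distal infection via a well-mixed supernatant.
import numpy as np

rng = np.random.default_rng(0)
N = 50                      # cells on an N x N monolayer
infected = np.zeros((N, N), dtype=bool)
infected[N // 2, N // 2] = True
virus = 0.0                 # extracellular infectious virus (arbitrary units)

P_CELL2CELL = 0.05          # per-neighbour infection probability / step
PRODUCTION, CLEARANCE = 1.0, 0.3
P_FREE = 1e-5               # per-(virion, cell) infection probability / step

for step in range(100):
    # local spread to the four nearest neighbours
    pad = np.pad(infected, 1)
    neighbours = (pad[:-2, 1:-1].astype(int) + pad[2:, 1:-1] +
                  pad[1:-1, :-2] + pad[1:-1, 2:])
    p_local = 1 - (1 - P_CELL2CELL) ** neighbours
    # distal spread via free virus in the supernatant
    p_free = 1 - np.exp(-P_FREE * virus)
    new = (~infected) & (rng.random((N, N)) < 1 - (1 - p_local) * (1 - p_free))
    infected |= new
    virus += PRODUCTION * infected.sum() - CLEARANCE * virus

print("infected cells:", infected.sum())
```

Blocking one route (setting `P_FREE = 0` or `P_CELL2CELL = 0`) gives a crude analogue of the neutralizing-antibody experiments used to constrain the model.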
|
We investigate the second harmonic generation of light field carrying orbital
angular momentum in bulk $\chi^{(2)}$ material. We show that due to
conservation of energy and momentum, the frequency-doubled light beam has a
modified spatial distribution and mode characteristics. Through rigorous phase
matching conditions, we demonstrate efficient mode and frequency conversion
based on three wave nonlinear optical mixing.
|
We develop two parallel machine-learning pipelines to estimate the
contribution of cosmic strings (CSs), conveniently encoded in their tension
($G\mu$), to the anisotropies of the cosmic microwave background radiation
observed by {\it Planck}. The first approach is tree-based and feeds on certain
map features derived by image processing and statistical tools. The second uses
a convolutional neural network with the goal of exploring possible non-trivial
features of the CS imprints. The two pipelines are trained on {\it Planck}
simulations and when applied to {\it Planck} \texttt{SMICA} map yield the
$3\sigma$ upper bound of $G\mu\lesssim 8.6\times 10^{-7}$. We also train and
apply the pipelines to make forecasts for futuristic CMB-S4-like surveys and
conservatively find their minimum detectable tension to be $G\mu_{\rm min}\sim
1.9\times 10^{-7}$.
|
We explore quantitative descriptors that herald when a many-particle system
in $d$-dimensional Euclidean space $\mathbb{R}^d$ approaches a hyperuniform
state as a function of the relevant control parameter. We establish
quantitative criteria to ascertain the extent of hyperuniform and
nonhyperuniform distance-scaling regimes in terms of the ratio $B/A$, where $A$
is the "volume" coefficient and $B$ is the "surface-area" coefficient associated with
the local number variance $\sigma^2(R)$ for a spherical window of radius $R$.
To complement the known direct-space representation of the coefficient $B$ in
terms of the total correlation function $h({\bf r})$, we derive its
corresponding Fourier representation in terms of the structure factor $S({\bf
k})$, which is especially useful when scattering information is available
experimentally or theoretically. We show that the free-volume theory of the
pressure of equilibrium packings of identical hard spheres that approach a
strictly jammed state either along the stable crystal or metastable disordered
branch dictates that such end states be exactly hyperuniform. Using the ratio
$B/A$, the hyperuniformity index $H$ and the direct-correlation function length
scale $\xi_c$, we study three different exactly solvable models as a function
of the relevant control parameter, either density or temperature, with end
states that are perfectly hyperuniform. We analyze equilibrium hard rods and
"sticky" hard-sphere systems in arbitrary space dimension $d$ as a function of
density. We also examine low-temperature excited states of many-particle
systems interacting with "stealthy" long-ranged pair interactions as the
temperature tends to zero. The capacity to identify hyperuniform scaling
regimes should be particularly useful in analyzing experimentally- or
computationally-generated samples that are necessarily of finite size.
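In the notation of this abstract, the large-$R$ asymptotics of the number variance defining $A$ and $B$ can be written schematically as
$$\sigma^2(R)\sim A\,R^{d}+B\,R^{d-1},\qquad R\to\infty,$$
so a hyperuniform state is one with $A=0$, equivalently $S(\mathbf k)\to 0$ as $|\mathbf k|\to 0$, leaving the surface-area term dominant and making the ratio $B/A$ diverge on approach to hyperuniformity.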
|
We develop an approach to choice principles and their contrapositive
bar-induction principles as extensionality schemes connecting an "intensional"
or "effective" view of respectively ill-and well-foundedness properties to an
"extensional" or "ideal" view of these properties. After classifying and
analysing the relations between different intensional definitions of
ill-foundedness and well-foundedness, we introduce, for a domain $A$, a
codomain $B$ and a "filter" $T$ on finite approximations of functions from $A$
to $B$, a generalised form GDC$_{A,B,T}$ of the axiom of dependent choice and
dually a generalised bar induction principle GBI$_{A,B,T}$ such that:
GDC$_{A,B,T}$ intuitionistically captures the strength of
$\bullet$ the general axiom of choice expressed as $\forall a\,\exists b\, R(a,
b) \Rightarrow\exists\alpha\,\forall a\, R(a,\alpha(a))$ when $T$ is a
filter that derives point-wise from a relation $R$ on $A \times B$ without
introducing further constraints,
$\bullet$ the Boolean Prime Filter Theorem / Ultrafilter Theorem if $B$ is
the two-element set $\mathbb{B}$ (for a constructive definition of prime
filter),
$\bullet$ the axiom of dependent choice if $A = \mathbb{N}$,
$\bullet$ Weak K{\"o}nig's Lemma if $A = \mathbb{N}$ and $B = \mathbb{B}$ (up
to weak classical reasoning).
GBI$_{A,B,T}$ intuitionistically captures the strength of
$\bullet$ G{\"o}del's completeness theorem in the form validity implies
provability for entailment relations if $B = \mathbb{B}$,
$\bullet$ bar induction when $A = \mathbb{N}$,
$\bullet$ the Weak Fan Theorem when $A = \mathbb{N}$ and $B = \mathbb{B}$.
Contrastingly, even though GDC$_{A,B,T}$ and GBI$_{A,B,T}$ smoothly capture
several variants of choice and bar induction, some instances are inconsistent,
e.g. when $A$ is $\mathbb{B}^\mathbb{N}$ and $B$ is $\mathbb{N}$.
|
The Cyber Science Lab (CSL) and Smart Cyber-Physical System (SCPS) Lab at the
University of Guelph conducted a market study of cybersecurity technology
adoption and requirements for smart and precision farming in Canada. We
conducted 17 stakeholder/key opinion leader interviews in Canada and the USA,
as well as conducting extensive secondary research, to complete this study.
Each interview generally required 15-20 minutes to complete. Interviews were
conducted using a client-approved interview guide. Secondary and primary
research focussed on the following areas of investigation: market size and
segmentation; market forecast and growth rate; competitive landscape; market
challenges/barriers to entry; market trends/growth drivers; and
adoption/commercialization of the technology.
|
AGBs and YSOs often share the same domains in IR color-magnitude or
color-color diagrams leading to potential mis-classification. We extracted a
list of AGB interlopers from the published YSO catalogues using periodogram
analysis of NEOWISE time-series data. YSO IR variability is typically
stochastic and linked to episodic mass accretion. Furthermore, most variable
YSOs are at an early evolutionary stage, with significant surrounding envelope
and/or disk material. In contrast, AGBs are often identified by a well-defined
sinusoidal variability with periods of a few hundred days. From our
periodogram analysis of all known low mass YSOs in the Gould Belt, we find 85
AGB candidates, out of which 62 were previously classified as late-stage Class
III YSOs. Most of these new AGB candidates have similar IR colors to O-rich
AGBs. We observed 73 of these AGB candidates in the H2O, CH3OH and SiO maser
lines to further reveal their nature. The SiO maser emission was detected in 10
sources, confirming them as AGBs since low mass YSOs, especially Class III
YSOs, do not show such maser emission. The H2O and CH3OH maser lines were not
detected in any of our targets.
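As an illustration of the screening step, the sketch below runs a Lomb-Scargle periodogram on a synthetic, irregularly sampled light curve with an AGB-like period of a few hundred days; the cadence, amplitude, and frequency grid are illustrative, not the survey's actual values.

```python
# Periodogram check of the kind used to flag AGB-like sinusoidal variability.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 2000, 120))       # days, irregular sampling
period_true = 350.0                          # AGB-like, a few hundred days
mag = (10 + 0.4 * np.sin(2 * np.pi * t / period_true)
       + 0.05 * rng.normal(size=t.size))

frequency, power = LombScargle(t, mag).autopower(
    minimum_frequency=1 / 1000, maximum_frequency=1 / 50)
best_period = 1 / frequency[np.argmax(power)]
print(f"best period: {best_period:.0f} d, peak power: {power.max():.2f}")
```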
|
In multi-component scalar dark matter scenarios, a single $Z_N$ ($N\geq 4$)
symmetry may account for the stability of different dark matter particles. Here
we study the case where $N$ is even ($N=2n$) and two species, a complex scalar
and a real scalar, contribute to the observed dark matter density. We perform a
phenomenological analysis of three scenarios based on the $Z_4$ and $Z_6$
symmetries, characterizing their viable parameter spaces and analyzing their
detection prospects. Our results show that, thanks to the new interactions
allowed by the $Z_{2n}$ symmetry, current experimental constraints can be
satisfied over a wider range of dark matter masses, and that these scenarios
may lead to observable signals in direct detection experiments. Finally, we
argue that these three scenarios serve as prototypes for other two-component
$Z_{2n}$ models with one complex and one real dark matter particle.
|
Astrophysical black holes are thought to be the Kerr black holes predicted by
general relativity, but macroscopic deviations from the Kerr solution can be
expected from a number of scenarios involving new physics. In Paper I, we
studied the reflection features in NuSTAR and XMM-Newton spectra of the
supermassive black hole at the center of the galaxy MCG-06-30-15 and we
constrained a set of deformation parameters proposed by Konoplya, Rezzolla &
Zhidenko (Phys. Rev. D93, 064015, 2016). In the present work, we analyze the
X-ray data of a stellar-mass black hole within the same theoretical framework
in order to probe a different curvature regime. We consider a NuSTAR
observation of the X-ray binary EXO 1846-031 during its outburst in 2019. As in
the case of Paper I, all our fits are consistent with the Kerr black hole
hypothesis, but some deformation parameters cannot be constrained well.
|
Intuitively, one would expect the accuracy of a trained neural network's
prediction on a test sample to correlate with how densely that sample is
surrounded by seen training samples in representation space. In this work we
provide theory and experiments that support this hypothesis. We propose an
error function for piecewise linear neural networks that takes a local region
in the network's input space and outputs smooth empirical training error, which
is an average of empirical training errors from other regions weighted by
network representation distance. A bound on the expected smooth error for each
region scales inversely with training sample density in representation space.
Empirically, we verify this bound is a strong predictor of the inaccuracy of
the network's prediction on test samples. For unseen test sets, including those
with out-of-distribution samples, ranking test samples by their local region's
error bound and discarding samples with the highest bounds raises prediction
accuracy by up to 20% in absolute terms, on image classification datasets.
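A coarse, hypothetical analogue of this ranking-and-discarding step is sketched below: each test sample is scored by a representation-distance-weighted average of training errors and the worst-scoring fraction is discarded. The paper's bound is defined per linear region of the network, which this simplification ignores.

```python
# Simplified proxy for the smooth-error ranking described above.
import numpy as np

def smooth_error_scores(test_feats, train_feats, train_errors, scale=1.0):
    """Distance-weighted average of training errors for each test feature."""
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    w = np.exp(-d / scale)                     # closer training samples weigh more
    return (w * train_errors[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 16))       # e.g. penultimate-layer features
train_errors = rng.random(500) * 0.2
test_feats = rng.normal(size=(100, 16))

scores = smooth_error_scores(test_feats, train_feats, train_errors)
keep = scores.argsort()[: int(0.8 * len(scores))]   # drop the worst 20%
print("kept", len(keep), "test samples")
```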
|
Current deep learning models for classification tasks in computer vision are
trained using mini-batches. In the present article, we take advantage of the
relationships between samples in a mini-batch, using graph neural networks to
aggregate information from similar images. This helps mitigate the adverse
effects of alterations to the input images on classification performance.
Diverse experiments on image-based object and scene classification show that
this approach not only improves a classifier's performance but also increases
its robustness to image perturbations and adversarial attacks. Further, we also
show that mini-batch graph neural networks can help to alleviate the problem of
mode collapse in Generative Adversarial Networks.
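A minimal sketch of the idea, assuming a simple kNN graph over the mini-batch and mean-neighbour aggregation (the paper's module may differ):

```python
# Build a kNN graph over one mini-batch's features and mix each sample's
# features with the mean of its neighbours' features.
import torch

def minibatch_knn_aggregate(feats, k=4, alpha=0.5):
    """feats: (B, D) mini-batch features; returns smoothed features."""
    d = torch.cdist(feats, feats)              # (B, B) pairwise distances
    d.fill_diagonal_(float("inf"))             # no self-edges
    idx = d.topk(k, largest=False).indices     # (B, k) nearest neighbours
    neigh = feats[idx].mean(dim=1)             # mean over neighbours
    return (1 - alpha) * feats + alpha * neigh # residual mixing

feats = torch.randn(32, 128)                   # e.g. CNN embeddings of a batch
out = minibatch_knn_aggregate(feats)
print(out.shape)                               # torch.Size([32, 128])
```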
|
Two main methods have been proposed to derive the acoustical radiation force
and torque applied by an arbitrary acoustic field on a particle: The first one
relies on the plane wave angular spectrum decomposition of the incident field
(see [Sapozhnikov and Bailey, J. Acoust. Soc. Am. 133, 661 (2013)] for the
force and [Gong and Baudoin, J. Acoust. Soc. Am. 148, 3131 (2020)] for the
torque), while the second one relies on the decomposition of the incident field
into a sum of spherical waves, the so-called multipole expansion (see [Silva,
J. Acoust. Soc. Am. 130, 3541 (2011)] and [Baresch et al., J. Acoust. Soc. Am.
133, 25 (2013)] for the force, and [Silva et al., EPL 97, 54003 (2012)] and
[Gong et al., Phys. Rev. Applied 11, 064022 (2019)] for the torque). In this
paper, we formally establish the equivalence between the expressions obtained
with these two methods for both the force and torque.
|
B-doped $\delta$-layers were fabricated in Si(100) using BCl$_{3}$ as a
dopant precursor in ultrahigh vacuum. BCl$_{3}$ adsorbed readily at room
temperature, as revealed by scanning tunneling microscopy (STM) imaging.
Annealing at elevated temperatures facilitated B incorporation into the Si
substrate. Secondary ion mass spectrometry (SIMS) depth profiling demonstrated
a peak B concentration $>$ 1.2(1) $\times$ 10$^{21}$ cm$^{-3}$ with a total
areal dose of 1.85(1) $\times$ 10$^{14}$ cm$^{-2}$ resulting from a 30 L
BCl$_{3}$ dose at 150 $^{\circ}$C. Hall bar measurements of a similar sample
were performed at 3.0 K revealing a sheet resistance of $R_{\mathrm{s}}$ = 1.91
k$\Omega\square^{-1}$, a hole concentration of $n$ = 1.90 $\times$ 10$^{14}$
cm$^{-2}$ and a hole mobility of $\mu$ = 38.0 cm$^{2}$V$^{-1}$s$^{-1}$ without
performing an incorporation anneal. Further, the conductivity of several
B-doped $\delta$-layers showed a log dependence on temperature suggestive of a
two-dimensional system. Selective-area deposition of BCl$_{3}$ was also
demonstrated using both H- and Cl-based monatomic resists. In comparison to a
dosed area on bare Si, adsorption selectivity ratios for H and Cl resists were
determined by SIMS to be 310(10):1 and 1529(5):1, respectively, further
validating the use of BCl$_{3}$ as a dopant precursor for atomic precision
fabrication of acceptor-doped devices in Si.
|
This paper proposes a novel way to solve transient linear and non-linear
solid dynamics for compressible, nearly incompressible, and incompressible
material in the updated Lagrangian framework for tetrahedral unstructured
finite elements. It consists of a mixed formulation in both displacement and
pressure, where the momentum equation of the continuum is complemented with a
pressure equation that handles incompressibility inherently. It is obtained
through the deviatoric and volumetric split of the stress, which enables us to
solve the problem in the incompressible limit. The Variational Multi-Scale
(VMS) method is developed based on the orthogonal decomposition of the
variables, which damps out spurious pressure fields for piecewise linear
tetrahedral elements. Various numerical examples are presented to assess the
robustness, accuracy and capabilities of our scheme in bending dominated
problems, and for complex geometries.
|
Coronary artery disease (CAD) has posed a leading threat to the lives of
cardiovascular disease patients worldwide for a long time. Therefore, automated
diagnosis of CAD has indispensable significance in clinical medicine. However,
the complexity of coronary artery plaques that cause CAD makes the automatic
detection of coronary artery stenosis in Coronary CT angiography (CCTA) a
difficult task. In this paper, we propose a Transformer network (TR-Net) for
the automatic detection of significant stenosis (i.e. luminal narrowing > 50%)
while practically completing the computer-assisted diagnosis of CAD. The
proposed TR-Net introduces a novel Transformer, and tightly combines
convolutional layers and Transformer encoders, allowing their advantages to be
demonstrated in the task. By analyzing semantic information sequences, TR-Net
can fully understand the relationship between image information in each
position of a multiplanar reformatted (MPR) image, and accurately detect
significant stenosis based on both local and global information. We evaluate
our TR-Net on a dataset of 76 patients annotated by
experienced radiologists. Experimental results illustrate that our TR-Net has
achieved better results in ACC (0.92), Spec (0.96), PPV (0.84), F1 (0.79) and
MCC (0.74) indicators compared with the state-of-the-art methods. The source
code is publicly available from the link (https://github.com/XinghuaMa/TR-Net).
|
We introduce large-scale Augmented Granger Causality (lsAGC) as a method for
connectivity analysis in complex systems. The lsAGC algorithm combines
dimension reduction with source time-series augmentation and uses predictive
time-series modeling for estimating directed causal relationships among
time-series. This method is a multivariate approach, since it is capable of
identifying the influence of each time-series on any other time-series in the
presence of all other time-series of the underlying dynamic system. We
quantitatively evaluate the performance of lsAGC on synthetic directional
time-series networks with known ground truth. As a reference method, we compare
our results with cross-correlation, which is typically used as a standard
measure of connectivity in the functional MRI (fMRI) literature. Using
extensive simulations for a wide range of time-series lengths and two different
signal-to-noise ratios of 5 and 15 dB, lsAGC consistently outperforms
cross-correlation at accurately detecting network connections, using Receiver
Operator Characteristic Curve (ROC) analysis, across all tested time-series
lengths and noise levels. In addition, as an outlook to possible clinical
application, we perform a preliminary qualitative analysis of connectivity
matrices for fMRI data of Autism Spectrum Disorder (ASD) patients and typical
controls, using a subset of 59 subjects of the Autism Brain Imaging Data
Exchange II (ABIDE II) data repository. Our results suggest that lsAGC, by
extracting sparse connectivity matrices, may be useful for network analysis in
complex systems, and may be applicable to clinical fMRI analysis in future
research, such as targeting disease-related classification or regression tasks
on clinical data.
|
Frustrated Mott insulators, such as the transition metal dichalcogenide
1T-TaS$_{2}$, present an ideal platform for the experimental realization of the
disorder-induced insulator-metal transition. In this letter we present the
first non-perturbative theoretical investigation of the disorder-induced
insulator-metal transition in copper (Cu) intercalated 1T-TaS$_{2}$, in the
framework of Anderson-Hubbard model on a triangular lattice. Based on the
magnetic, spectroscopic and transport signatures we map out the thermal phase
diagram of this system. Our results show that over a regime of moderate
disorder strength this material hosts an antiferromagnetic metal. The emergent
metal is a non-Fermi liquid, governed by resilient quasiparticles that survive
as the relevant low-energy excitations even after the breakdown of the
Fermi-liquid description. The system undergoes a crossover from a
non-Fermi-liquid metal to a bad metallic phase as a function of temperature. Our results on
spectral line shape are found to be in excellent agreement with the
experimental observations on Cu intercalated 1T-TaS$_{2}$. The optical and
spectroscopic signatures discussed in this letter are expected to serve as
important benchmark for future experiments on this and related class of
materials. The numerical technique discussed herein serves as a computational
breakthrough to address systems for which most of the existing methods fall
short.
|
In this work, we propose a Model Predictive Control (MPC)-based Reinforcement
Learning (RL) method for Autonomous Surface Vehicles (ASVs). The objective is
to find an optimal policy that optimizes the closed-loop performance of a
simplified freight mission, including collision-free path following, autonomous
docking, and a skillful transition between them. We use a parametrized
MPC-scheme to approximate the optimal policy, which considers
path-following/docking costs and states (position, velocity)/inputs (thruster
force, angle) constraints. The Least Squares Temporal Difference (LSTD)-based
Deterministic Policy Gradient (DPG) method is then applied to update the policy
parameters. Our simulation results demonstrate that the proposed MPC-LSTD-based
DPG method could improve the closed-loop performance during learning for the
freight mission problem of ASVs.
|
Pore structures and gas transport properties in porous separators for polymer
electrolyte fuel cells are evaluated both experimentally and through
simulations. In the experiments, the gas permeabilities of two porous samples,
a conventional sample and one with low electrical resistivity, are measured by
a capillary flow porometer, and the pore size distributions are evaluated with
mercury porosimetry. Local pore structures are directly observed with micro
X-ray computed tomography (CT). In the simulations, the effective diffusion
coefficients of oxygen and the air permeability in porous samples are
calculated using random walk Monte Carlo simulations and computational fluid
dynamics (CFD) simulations, respectively, based on the X-ray CT images. The
calculated porosities and air permeabilities of the porous samples are in good
agreement with the experimental values. The simulation results also show that
the in-plane permeability is twice the through-plane permeability in the
conventional sample, whereas it is slightly higher in the low-resistivity
sample. The results of this study show that CFD simulation based on micro X-ray
CT images makes it possible to evaluate anisotropic gas permeabilities in
anisotropic porous media.
|
We describe how some problems (interpretability, lack of object-orientedness)
of modern deep networks could potentially be solved by adapting a biologically
plausible saccadic mechanism of perception. A sketch of such a saccadic vision
model is proposed. Proof-of-concept experimental results are provided to
support the proposed approach.
|
The knapsack problem for groups was introduced by Miasnikov, Nikolaev, and
Ushakov. It is defined for each finitely generated group $G$ and takes as input
group elements $g_1,\ldots,g_n,g\in G$ and asks whether there are
$x_1,\ldots,x_n\ge 0$ with $g_1^{x_1}\cdots g_n^{x_n}=g$. We study the knapsack
problem for wreath products $G\wr H$ of groups $G$ and $H$. Our main result is
a characterization of those wreath products $G\wr H$ for which the knapsack
problem is decidable. The characterization is in terms of decidability
properties of the individual factors $G$ and $H$. To this end, we introduce two
decision problems, the intersection knapsack problem and its restriction, the
positive intersection knapsack problem. Moreover, we apply our main result to
$H_3(\mathbb{Z})$, the discrete Heisenberg group, and to Baumslag-Solitar
groups $\mathsf{BS}(1,q)$ for $q\ge 1$. First, we show that the knapsack
problem is undecidable for $G\wr H_3(\mathbb{Z})$ for any $G\ne 1$. This
implies that for $G\ne 1$ and for infinite and virtually nilpotent groups $H$,
the knapsack problem for $G\wr H$ is decidable if and only if $H$ is virtually
abelian and solvability of systems of exponent equations is decidable for $G$.
Second, we show that the knapsack problem is decidable for
$G\wr\mathsf{BS}(1,q)$ if and only if solvability of systems of exponent
equations is decidable for $G$.
|
The first of a two-part series, this paper assumes that a weak local energy
decay estimate holds and proves that solutions to the linear wave equation in
$\mathbb R^{1+3}$ with variable coefficients, first-order terms, and a potential
decay at a rate depending on how rapidly the coefficients of the metric,
first-order terms, and potential decay at spatial infinity. We prove results
for both stationary and nonstationary metrics. The proof uses local energy
decay to prove an initial decay rate, and then uses the one-dimensional
reduction repeatedly to achieve the full decay rate.
|
The effect of coupling between pairing and quadrupole triaxial shape
vibrations on the low-energy collective states of $\gamma$-soft nuclei is
investigated using a model based on the framework of nuclear energy density
functionals (EDFs). Employing a constrained self-consistent mean-field (SCMF)
method that uses universal EDFs and pairing interactions, potential energy
surfaces of characteristic $\gamma$-soft Os and Pt nuclei with $A\approx190$
are calculated as functions of the pairing and triaxial quadrupole
deformations. Collective spectroscopic properties are computed using a
number-nonconserving interacting boson model (IBM) Hamiltonian, with parameters
determined by mapping the SCMF energy surface onto the expectation value of the
Hamiltonian in the boson condensate state. It is shown that, by simultaneously
considering both the shape and pairing collective degrees of freedom, the
EDF-based IBM successfully reproduces data on collective structures based on
low-energy $0^{+}$ states, as well as $\gamma$-vibrational bands.
|
We consider the capacity of entanglement in models related to the
gravitational phase transitions. The capacity is labeled by the replica
parameter, which plays a role similar to that of the inverse temperature in
thermodynamics. In the end-of-the-world brane model of a radiating black hole,
the capacity has a peak around the Page time, indicating the phase transition
between replica wormhole geometries of different types of topology. Similarly,
in a moving mirror model describing Hawking radiation the capacity typically
shows a discontinuity when the dominant saddle switches between two phases,
which can be seen as a formation of island regions. In either case we find the
capacity can be an invaluable diagnostic for a black hole evaporation process.
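For orientation, in the standard conventions the capacity of entanglement is the variance of the modular Hamiltonian $K=-\log\rho$, and with the replica parameter $n$ playing the role of inverse temperature it takes the thermodynamic form
$$C_E(n)=n^2\,\partial_n^2\log\operatorname{Tr}\rho^{\,n},\qquad C_E(1)=\langle K^2\rangle-\langle K\rangle^2,$$
mirroring the heat capacity $C=\beta^2\partial_\beta^2\log Z$.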
|
The tension between inferences of the Hubble constant ($H_0$) is found in a large
array of dataset combinations. Modification to the late expansion history is
the most direct solution to this discrepancy. In this work we examine the
viability of restoring the cosmological concordance within the scenarios of
late dark energy. We explore two representative parameterizations: a novel
version of transitional dark energy (TDE) and modified emergent dark energy
(MEDE). We find that the main anchors for the cosmic distance scale, namely the
cosmic microwave background (CMB), baryon acoustic oscillation (BAO), and SNe Ia
calibrated by Cepheids, form an ``impossible trinity'': it is plausible to
reconcile any two of them but unlikely to accommodate them all. In particular,
the tension between BAO and the calibrated SNe Ia cannot be reconciled within
the scenarios of late dark energy. Nevertheless, we still find positive
evidence for the TDE model in the analysis of all dataset combinations, and with
the exclusion of the BOSS datasets, the tension with SH0ES drops from
$3.1\sigma$ to $1.1\sigma$. For the MEDE model, the tension with $H_0$ is much
alleviated by the exclusion of the SNe dataset. Unfortunately, in both the TDE
and MEDE scenarios, the $S_8$ tension is neither relieved nor exacerbated.
|
The Raman peak position and linewidth provide insight into phonon
anharmonicity and electron-phonon interactions (EPI) in materials. For
monolayer graphene, prior first-principles calculations have yielded decreasing
linewidth with increasing temperature, which is opposite to measurement
results. Here, we explicitly consider four-phonon anharmonicity, phonon
renormalization, and electron-phonon coupling, and find all to be important to
successfully explain both the $G$ peak frequency shift and linewidths in our
suspended graphene sample at a wide temperature range. Four-phonon scattering
contributes a prominent linewidth that increases with temperature, while
temperature dependence from EPI is found to be reversed above a doping
threshold ($\hbar\omega_G/2$, with $\omega_G$ being the frequency of the $G$
phonon).
|
We propose a novel neural network module that transforms an existing
single-frame semantic segmentation model into a video semantic segmentation
pipeline. In contrast to prior works, we strive towards a simple, fast, and
general module that can be integrated into virtually any single-frame
architecture. Our approach aggregates a rich representation of the semantic
information in past frames into a memory module. Information stored in the
memory is then accessed through an attention mechanism. In contrast to previous
memory-based approaches, we propose a fast local attention layer, providing
temporal appearance cues in the local region of prior frames. We further fuse
these cues with an encoding of the current frame through a second
attention-based module. The segmentation decoder processes the fused
representation to predict the final semantic segmentation. We integrate our
approach into two popular semantic segmentation networks: ERFNet and PSPNet. We
observe an improvement in segmentation performance on Cityscapes by 1.7% and
2.1% in mIoU respectively, while increasing the inference time of ERFNet by
only 1.5 ms.
|
We study the problem of dynamically trading multiple futures whose underlying
asset price follows a multiscale central tendency Ornstein-Uhlenbeck (MCTOU)
model. Under this model, we derive the closed-form no-arbitrage prices for the
futures contracts. Applying a utility maximization approach, we solve for the
optimal trading strategies under different portfolio configurations by
examining the associated system of Hamilton-Jacobi-Bellman (HJB) equations. The
optimal strategies depend on not only the parameters of the underlying asset
price process but also the risk premia embedded in the futures prices.
Numerical examples are provided to illustrate the investor's optimal positions
and optimal wealth over time.
|
Let $ K $ be a number field over $ \mathbb{Q} $ and let $ a_K(m) $ denote the
number of integral ideals of $ K $ of norm equal to $ m\in\mathbb{N} $. In this
paper we obtain asymptotic formulae for sums of the form $ \sum_{m\leq X}
a^l_K(m) $, thereby generalizing previous works on the problem. Previously,
such asymptotics were known only in the case when $ K $ is Galois or when $K$
is a non-normal cubic extension and $ l=2,3 $. The present work subsumes both
these cases.
|
Automatic transcription of monophonic/polyphonic music is a challenging task
due to the lack of availability of large amounts of transcribed data. In this
paper, we propose a data augmentation method that converts natural speech to
singing voice based on vocoder based speech synthesizer. This approach, called
voice to singing (V2S), performs the voice style conversion by modulating the
F0 contour of the natural speech with that of a singing voice. The V2S model
based style transfer can generate good quality singing voice thereby enabling
the conversion of large corpora of natural speech to singing voice that is
useful in building an E2E lyrics transcription system. In our experiments on
monophonic singing voice data, the V2S style transfer provides a significant
gain (relative improvements of 21%) for the E2E lyrics transcription system. We
also discuss additional components like transfer learning and lyrics based
language modeling to improve the performance of the lyrics transcription
system.
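A rough sketch of the F0-swap at the core of V2S, using a WORLD vocoder, is shown below. The `pyworld` binding, the file names, and the simple interpolation-based alignment of the two F0 contours are illustrative assumptions, not the paper's implementation.

```python
# Swap the F0 contour of natural speech with that of a singing voice via a
# WORLD vocoder analysis/synthesis round trip (inputs assumed mono).
import numpy as np
import pyworld as pw
import soundfile as sf

speech, fs = sf.read("speech.wav")           # hypothetical input files
singing, _ = sf.read("singing.wav")

f0_sp, sp, ap = pw.wav2world(speech.astype(np.float64), fs)
f0_sing, _, _ = pw.wav2world(singing.astype(np.float64), fs)

# Resample the singing F0 contour onto the speech frame grid.
n = len(f0_sp)
f0_new = np.interp(np.linspace(0, 1, n),
                   np.linspace(0, 1, len(f0_sing)), f0_sing)
f0_new[f0_sp == 0] = 0.0                     # keep unvoiced frames unvoiced

out = pw.synthesize(f0_new, sp, ap, fs)      # speech timbre, singing melody
sf.write("v2s_output.wav", out, fs)
```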
|
This paper provides a multivariate extension of Bertoin's pathwise
construction of a L\'evy process conditioned to stay positive/negative. The
processes thus obtained, conditioned to stay in half-spaces, are closely related to
the original process on a compact time interval seen from its directional
extremal points. In the case of a correlated Brownian motion the law of the
conditioned process is obtained by a linear transformation of a standard
Brownian motion and an independent Bessel-3 process. Further motivation is
provided by a limit theorem corresponding to zooming in on a L\'evy process
with a Brownian part at the point of its directional infimum. Applications to
zooming in at the point furthest from the origin are envisaged.
|
The US Census Bureau plans to protect the privacy of 2020 Census respondents
through its Disclosure Avoidance System (DAS), which attempts to achieve
differential privacy guarantees by adding noise to the Census microdata. By
applying redistricting simulation and analysis methods to DAS-protected 2010
Census data, we find that the protected data are not of sufficient quality for
redistricting purposes. We demonstrate that the injected noise makes it
impossible for states to accurately comply with the One Person, One Vote
principle. Our analysis finds that the DAS-protected data are biased against
certain areas, depending on voter turnout and partisan and racial composition,
and that these biases lead to large and unpredictable errors in the analysis of
partisan and racial gerrymanders. Finally, we show that the DAS algorithm does
not universally protect respondent privacy. Based on the names and addresses of
registered voters, we are able to predict their race as accurately using the
DAS-protected data as when using the 2010 Census data. Despite this, the
DAS-protected data still yield inaccurate estimates of the number of
majority-minority districts. We conclude with recommendations for how the
Census Bureau should proceed with privacy protection for the 2020 Census.
|
For analytic functions $g$ on the unit disc with non-negative Maclaurin
coefficients, we describe the boundedness and compactness of the integral
operator $T_g(f)(z)=\int_0^zf(\zeta)g'(\zeta)\,d\zeta$ from a space $X$ of
analytic functions in the unit disc to $H^\infty$, in terms of neat and useful
conditions on the Maclaurin coefficients of $g$. The choices of $X$ that will
be considered contain the Hardy and the Hardy-Littlewood spaces, the
Dirichlet-type spaces $D^p_{p-1}$, as well as the classical Bloch and BMOA
spaces.
|
We consider a multiphysics model for the flow of Newtonian fluid coupled with
Biot consolidation equations through an interface, and incorporating total
pressure as an unknown in the poroelastic region. A new mixed-primal finite
element scheme is proposed solving for the pairs fluid velocity - pressure and
displacement - total poroelastic pressure using Stokes-stable elements, and
where the formulation does not require Lagrange multipliers to set up the usual
transmission conditions on the interface. The stability and well-posedness of
the continuous and semi-discrete problems are analysed in detail. Our numerical
study is framed in the context of applicative problems pertaining to
heterogeneous geophysical flows and to eye poromechanics. For the latter, we
investigate different interfacial flow regimes in Cartesian and axisymmetric
coordinates that could eventually help describe early morphologic changes
associated with glaucoma development in canine species.
|
In this work, we explore macroscopic transport phenomena associated with a
rotational system in the presence of an external orthogonal electromagnetic
field. Simply based on the lowest Landau level approximation, we derive
nontrivial expressions for chiral density and various currents consistently by
adopting small angular velocity expansion or Kubo formula. While the generation
of anomalous electric current is due to the pseudo gauge field effect of the
spin-rotation coupling, the chiral density and current can be simply explained
with the help of Lorentz boosts. Finally, Lorentz covariant forms can be
obtained by unifying our results and the magnetovorticity effect.
|
We find an explicit formula that produces inductively the elliptic stable
envelopes of an arbitrary Nakajima variety associated to a quiver Q from the
ones of those Nakajima varieties whose framing vectors are the fundamental
vectors of the quiver Q, i.e. the dimension vectors with just one unitary
nonzero entry. The result relies on abelianization of stable envelopes. As an
application, we combine our result with Smirnov's formula for the elliptic
stable envelopes of the Hilbert scheme of points on the plane to produce the
elliptic stable envelopes of the instanton moduli space.
|
Small-scale magnetic fields are not only the fundamental element of the solar
magnetism, but also closely related to the structure of the solar atmosphere.
The observations have shown that there is a ubiquitous tangled small-scale
magnetic field with a strength of 60 $\sim$ 130\,G in the canopy forming layer
of the quiet solar photosphere. On the other hand, the multi-dimensional MHD
simulations show that the convective overshooting expels the magnetic field to
form the magnetic canopies at a height of about 500\,km in the upper
photosphere. However, the distribution of such small-scale ``canopies'' in the
solar photosphere cannot be rigorously constrained by either observations or
numerical simulations. Based on standard stellar models, we identify that these
magnetic canopies can act as a global magnetic-arch splicing layer, and find
that the reflections of the solar p-mode oscillations at this magnetic-arch
splicing layer result in a significant improvement in the discrepancy between
the observed and calculated p-mode frequencies. The location of the
magnetic-arch splicing layer is determined at a height of about 630\,km, and
the inferred strength of the magnetic field is about 90\,G. These features of
the magnetic-arch splicing layer derived independently in the present study are
quantitatively in agreement with the presence of small-scale magnetic canopies
as those obtained by the observations and 3-D MHD simulations.
|
In many real-world scenarios, the utility of a user is derived from the
single execution of a policy. In this case, to apply multi-objective
reinforcement learning, the expected utility of the returns must be optimised.
Various scenarios exist where a user's preferences over objectives (also known
as the utility function) are unknown or difficult to specify. In such
scenarios, a set of optimal policies must be learned. However, settings where
the expected utility must be maximised have been largely overlooked by the
multi-objective reinforcement learning community and, as a consequence, a set
of optimal solutions has yet to be defined. In this paper we address this
challenge by proposing first-order stochastic dominance as a criterion to build
solution sets to maximise expected utility. We also propose a new dominance
criterion, known as expected scalarised returns (ESR) dominance, that extends
first-order stochastic dominance to allow a set of optimal policies to be
learned in practice. We then define a new solution concept called the ESR set,
which is a set of policies that are ESR dominant. Finally, we define a new
multi-objective distributional tabular reinforcement learning (MOT-DRL)
algorithm to learn the ESR set in a multi-objective multi-armed bandit setting.
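For intuition on the dominance criteria above, here is a minimal sketch (ours, not the paper's code) that checks first-order stochastic dominance between two empirical return distributions; ESR dominance extends this idea to distributions over multi-objective returns.

```python
import numpy as np

def first_order_dominates(x, y, grid_size=200):
    # X FSD-dominates Y iff CDF_X(z) <= CDF_Y(z) for every threshold z,
    # i.e. X never places more probability mass below any return level.
    zs = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid_size)
    cdf_x = np.searchsorted(np.sort(x), zs, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), zs, side="right") / len(y)
    return bool(np.all(cdf_x <= cdf_y))

returns_a = np.random.default_rng(0).normal(1.0, 1.0, 10_000)
returns_b = returns_a - 0.5  # uniformly worse returns
print(first_order_dominates(returns_a, returns_b))  # True
```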
|
This paper presents a theoretical framework for the design and analysis of
gradient descent-based algorithms for coverage control tasks involving robot
swarms. We adopt a multiscale approach to analysis and design to ensure
consistency of the algorithms in the large-scale limit. First, we represent the
macroscopic configuration of the swarm as a probability measure and formulate
the macroscopic coverage task as the minimization of a convex objective
function over probability measures. We then construct a macroscopic dynamics
for swarm coverage, which takes the form of a proximal descent scheme in the
$L^2$-Wasserstein space. Our analysis exploits the generalized geodesic
convexity of the coverage objective function, proving convergence in the
$L^2$-Wasserstein sense to the target probability measure. We then obtain a
consistent gradient descent algorithm in the Euclidean space that is
implementable by a finite collection of agents, via a "variational"
discretization of the macroscopic coverage objective function. We establish the
convergence properties of the gradient descent and its behavior in the
continuous-time and large-scale limits. Furthermore, we establish a connection
with well-known Lloyd-based algorithms, seen as a particular class of
algorithms within our framework, and demonstrate our results via numerical
experiments.
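To connect with the Lloyd-based special case mentioned above, here is a minimal sketch of a sample-based Lloyd-type descent step; the Monte Carlo discretization of the target measure, the step size and the sample counts are our assumptions, not the paper's setup.

```python
import numpy as np

def lloyd_coverage_step(agents, samples, step=0.5):
    # Assign each sample of the target measure to its nearest agent,
    # then move every agent toward the centroid of its assigned mass.
    d = np.linalg.norm(samples[:, None, :] - agents[None, :, :], axis=2)
    owner = d.argmin(axis=1)
    new_agents = agents.copy()
    for i in range(len(agents)):
        cell = samples[owner == i]
        if len(cell) > 0:
            new_agents[i] += step * (cell.mean(axis=0) - agents[i])
    return new_agents

rng = np.random.default_rng(1)
target = rng.normal(size=(5000, 2))        # samples from the target measure
swarm = rng.uniform(-3, 3, size=(20, 2))   # initial swarm configuration
for _ in range(50):
    swarm = lloyd_coverage_step(swarm, target)
```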
|
The higher-order generalized singular value decomposition (HO-GSVD) is a
matrix factorization technique that extends the GSVD to $N \ge 2$ data
matrices, and can be used to identify shared subspaces in multiple large-scale
datasets with different row dimensions. The standard HO-GSVD factors $N$
matrices $A_i\in\mathbb{R}^{m_i\times n}$ as $A_i=U_i\Sigma_i V^\text{T}$, but
requires that each of the matrices $A_i$ has full column rank. We propose a
reformulation of the HO-GSVD that extends its applicability to rank-deficient
data matrices $A_i$. If the matrix of stacked $A_i$ has full rank, we show that
the properties of the original HO-GSVD extend to our reformulation. The HO-GSVD
captures shared right singular vectors of the matrices $A_i$, and we show that
our method also identifies directions that are unique to the image of a single
matrix. We also extend our results to the higher-order cosine-sine
decomposition (HO-CSD), which is closely related to the HO-GSVD. Our extension
of the standard HO-GSVD allows its application to datasets with $m_i < n$, such
as those encountered in bioinformatics, neuroscience, control theory or
classification problems.
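For reference, a numpy sketch of one common formulation of the standard (full-column-rank) HO-GSVD that the paper generalizes; the exact averaging convention for the matrix $S$ is an assumption here.

```python
import numpy as np

def ho_gsvd(As):
    # Standard HO-GSVD sketch: each A_i must have full column rank.
    N = len(As)
    Ss = [A.T @ A for A in As]
    n = Ss[0].shape[0]
    S = np.zeros((n, n))
    for i in range(N):
        for j in range(i + 1, N):
            S += Ss[i] @ np.linalg.inv(Ss[j]) + Ss[j] @ np.linalg.inv(Ss[i])
    S /= N * (N - 1)
    # Shared right basis V from the (generally non-symmetric) eigenproblem.
    _, V = np.linalg.eig(S)
    V = np.real(V)
    Us, Sigmas = [], []
    for A in As:
        B = A @ np.linalg.inv(V).T          # B_i = A_i V^{-T} = U_i Sigma_i
        sig = np.linalg.norm(B, axis=0)     # generalized singular values
        Us.append(B / sig)
        Sigmas.append(np.diag(sig))
    return Us, Sigmas, V

rng = np.random.default_rng(0)
As = [rng.normal(size=(m, 4)) for m in (6, 8, 10)]   # full column rank w.h.p.
Us, Sigmas, V = ho_gsvd(As)
print(np.allclose(As[0], Us[0] @ Sigmas[0] @ V.T))   # True: A_i = U_i Sigma_i V^T
```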
|
Multi-frame human pose estimation in complicated situations is challenging.
Although state-of-the-art human joints detectors have demonstrated remarkable
results for static images, their performance falls short when we apply these
models to video sequences. Prevalent shortcomings include the failure to handle
motion blur, video defocus, or pose occlusions, arising from the inability to
capture the temporal dependency among video frames. On the other hand,
directly employing conventional recurrent neural networks incurs empirical
difficulties in modeling spatial contexts, especially for dealing with pose
occlusions. In this paper, we propose a novel multi-frame human pose estimation
framework, leveraging abundant temporal cues between video frames to facilitate
keypoint detection. Three modular components are designed in our framework. A
Pose Temporal Merger encodes keypoint spatiotemporal context to generate
effective searching scopes while a Pose Residual Fusion module computes
weighted pose residuals in dual directions. These are then processed via our
Pose Correction Network for efficient refinement of pose estimates. Our method
ranks No.1 in the Multi-frame Person Pose Estimation Challenge on the
large-scale benchmark datasets PoseTrack2017 and PoseTrack2018. We have
released our code, hoping to inspire future research.
|
Versions of the following problem appear in several topics such as Gamma
Knife radiosurgery, studying objects with the X-ray transform, the 3SUM
problem, and $k$-linear degeneracy testing. Suppose there are $n$ points on
a plane whose specific locations are unknown. We are given all the lines that
go through the points with a given slope. We show that the minimum number of
slopes needed, in general, to find all the point locations is $n+1$ and we
provide an algorithm to do so.
|
Speech disorders often occur at the early stage of Parkinson's disease (PD).
Speech impairments could serve as indicators of the disorder for early
diagnosis while motor symptoms are not yet obvious. In this study, we constructed a new
speech corpus of Mandarin Chinese and addressed classification of patients with
PD. We implemented classical machine learning methods with ranking algorithms
for feature selection, convolutional and recurrent deep networks, and an
end-to-end system. Our classification accuracy significantly surpassed that of
state-of-the-art studies. The result suggests that free talk has stronger
classification power than standard speech tasks, which could help the design of
future speech tasks for efficient early diagnosis of the disease. Based on
existing classification methods and our natural speech study, the automatic
detection of PD from daily conversation could be accessible to the majority of
the clinical population.
|
In this paper we consider convex co-compact subgroups of the projective
linear group. We prove that such a group is relatively hyperbolic with respect
to a collection of virtually Abelian subgroups of rank two if and only if each
open face in the ideal boundary has dimension at most one. We also introduce
the "coarse Hilbert dimension" of a subset of a convex set and use it to
characterize when a naive convex co-compact subgroup is word hyperbolic or
relatively hyperbolic with respect to a collection of virtually Abelian
subgroups of rank two.
|
Let $I(G)^{[k]}$ denote the $k$th squarefree power of the edge ideal of $G$.
When $G$ is a forest, we provide a sharp upper bound for the regularity of
$I(G)^{[k]}$ in terms of the $k$-admissible matching number of $G$. For any
positive integer $k$, we classify all forests $G$ such that $I(G)^{[k]}$ has
linear resolution. We also give a combinatorial formula for the regularity of
$I(G)^{[2]}$ for any forest $G$.
|
Recently, deep learning approaches have become the main research frontier for
biological image reconstruction and enhancement problems thanks to their high
performance, along with their ultra-fast inference times. However, due to the
difficulty of obtaining matched reference data for supervised learning, there
has been increasing interest in unsupervised learning approaches that do not
need paired reference data. In particular, self-supervised learning and
generative models have been successfully used for various biological imaging
applications. In this paper, we overview these approaches from a coherent
perspective in the context of classical inverse problems, and discuss their
applications to biological imaging, including electron, fluorescence and
deconvolution microscopy, optical diffraction tomography and functional
neuroimaging.
|
Given a graph with a source vertex $s$, the Single Source Replacement Paths
(SSRP) problem is to compute, for every vertex $t$ and edge $e$, the length
$d(s,t,e)$ of a shortest path from $s$ to $t$ that avoids $e$. A Single-Source
Distance Sensitivity Oracle (Single-Source DSO) is a data structure that
answers queries of the form $(t,e)$ by returning the distance $d(s,t,e)$. We
show how to deterministically compress the output of the SSRP problem on
$n$-vertex, $m$-edge graphs with integer edge weights in the range $[1,M]$ into
a Single-Source DSO of size $O(M^{1/2}n^{3/2})$ with query time
$\widetilde{O}(1)$. The space requirement is optimal (up to the word size) and
our techniques can also handle vertex failures.
Chechik and Cohen [SODA 2019] presented a combinatorial, randomized
$\widetilde{O}(m\sqrt{n}+n^2)$ time SSRP algorithm for undirected and
unweighted graphs. Grandoni and Vassilevska Williams [FOCS 2012, TALG 2020]
gave an algebraic, randomized $\widetilde{O}(Mn^\omega)$ time SSRP algorithm
for graphs with integer edge weights in the range $[1,M]$, where $\omega<2.373$
is the matrix multiplication exponent. We derandomize both algorithms for
undirected graphs in the same asymptotic running time and apply our compression
to obtain deterministic Single-Source DSOs. The $\widetilde{O}(m\sqrt{n}+n^2)$
and $\widetilde{O}(Mn^\omega)$ preprocessing times are polynomial improvements
over previous $o(n^2)$-space oracles.
On sparse graphs with $m=O(n^{5/4-\varepsilon}/M^{7/4})$ edges, for any
constant $\varepsilon > 0$, we reduce the preprocessing to randomized
$\widetilde{O}(M^{7/8}m^{1/2}n^{11/8})=O(n^{2-\varepsilon/2})$ time. This is
the first truly subquadratic time algorithm for building Single-Source DSOs on
sparse graphs.
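For orientation, the naive SSRP baseline below simply reruns Dijkstra once per avoided edge; this is a sketch of the problem statement, not of the algorithms discussed above, and assumes an undirected adjacency-list representation.

```python
import heapq

def dijkstra(n, adj, s, banned=None):
    # Standard Dijkstra; 'banned' is a single edge (u, v) to avoid.
    dist = [float("inf")] * n
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if banned in ((u, v), (v, u)):
                continue
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def ssrp_naive(n, adj, s, edges):
    # d(s, t, e) for every edge e. Answers differ from d(s, t) only when
    # e lies on a shortest path, which the fast algorithms exploit.
    return {e: dijkstra(n, adj, s, banned=e) for e in edges}

adj = [[(1, 1), (2, 4)], [(0, 1), (2, 1)], [(0, 4), (1, 1)]]
print(ssrp_naive(3, adj, 0, [(0, 1), (0, 2), (1, 2)])[(0, 1)])  # [0, 5, 4]
```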
|
End-to-end DNN architectures have pushed the state-of-the-art in speech
technologies, as well as in other spheres of AI, leading researchers to train
more complex and deeper models. These improvements came at the cost of
transparency. DNNs are innately opaque and difficult to interpret. We no longer
understand what features are learned, where they are preserved, and how they
inter-operate. Such an analysis is important for better model understanding,
debugging and to ensure fairness in ethical decision making. In this work, we
analyze the representations trained within deep speech models, towards the task
of speaker recognition, dialect identification and reconstruction of masked
signals. We carry a layer- and neuron-level analysis on the utterance-level
representations captured within pretrained speech models for speaker, language
and channel properties. We study: is this information captured in the learned
representations? where is it preserved? how is it distributed? and can we
identify a minimal subset of network that posses this information. Using
diagnostic classifiers, we answered these questions. Our results reveal: (i)
channel and gender information is omnipresent and is redundantly distributed
(ii) complex properties such as dialectal information is encoded only in the
task-oriented pretrained network and is localised in the upper layers (iii) a
minimal subset of neurons can be extracted to encode the predefined property
(iv) salient neurons are sometimes shared between properties and can highlights
presence of biases in the network. Our cross-architectural comparison indicates
that (v) the pretrained models captures speaker-invariant information and (vi)
the pretrained CNNs models are competitive to the Transformers for encoding
information for the studied properties. To the best of our knowledge, this is
the first study to investigate neuron analysis on the speech models.
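In the spirit of the diagnostic-classifier methodology above, a minimal sketch (the layer features and labels are placeholders, not the paper's setup): train a linear probe on frozen layer activations and read its held-out accuracy as a measure of how much property information the layer encodes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_layer(activations, labels):
    # activations: (num_utterances, dim) frozen features from one layer;
    # labels: the property of interest (e.g. gender, dialect, channel).
    X_tr, X_te, y_tr, y_te = train_test_split(
        activations, labels, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)   # high accuracy => property is encoded

rng = np.random.default_rng(0)
fake_acts = rng.normal(size=(500, 128))      # placeholder features
fake_labels = rng.integers(0, 2, size=500)   # placeholder property
print(probe_layer(fake_acts, fake_labels))   # ~0.5: nothing encoded
```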
|
AOSAT is a python package for the analysis of single-conjugate adaptive
optics (SCAO) simulation results. Python is widely used in the astronomical
community these days, and AOSAT may be used stand-alone, integrated into a
simulation environment, or can easily be extended according to a user's needs.
Standalone operation requires the user to supply the residual wavefront frames
produced by the SCAO simulation package used, the aperture mask (pupil) used
for the simulation, and a custom setup file describing the simulation/analysis
configuration. In its standard form, AOSAT's "tearsheet" functionality will
then run all standard analyzers, providing an informative plot collection on
properties such as the point-spread function (PSF) and its quality, residual
tip-tilt, the impact of pupil fragmentation, residual optical aberration modes
both static and dynamic, the expected high-contrast performance of suitable
instrumentation with and without coronagraphs, and the power spectral density
of residual wavefront errors.
AOSAT fills the gap between the simple numerical outputs provided by most
simulation packages, and the full-scale deployment of instrument simulators and
data reduction suites operating on SCAO residual wavefronts. It enables
instrument designers and end-users to quickly judge the impact of design or
configuration decisions on the final performance of down-stream
instrumentation.
|
It has recently been pointed out that Gaia is capable of detecting a
stochastic gravitational wave background in the sensitivity band between the
frequency of pulsar timing arrays and LISA. We argue that Gaia and THEIA have
great potential for early universe cosmology, since such a frequency range is
ideal for probing phase transitions in asymmetric dark matter, SIMP and the
cosmological QCD transition. Furthermore, there is the potential for detecting
primordial black holes in the solar mass range produced during such an early
universe transition and distinguishing them from those expected from the QCD
epoch. Finally, we discuss the potential for Gaia and THEIA to probe
topological defects and the ability of Gaia to potentially shed light on the
recent NANOGrav results.
|
This paper studies the recovery of a joint piece-wise linear trend from a
time series using an L1 regularization approach, called L1 trend filtering
(Kim, Koh and Boyd, 2009). We provide some sufficient conditions under which an
L1 trend filter can be well-behaved in terms of mean estimation and change
point detection. The result is two-fold: for mean estimation, an almost optimal
consistency rate is obtained; for change point detection, the direction of the
slope change can be recovered with high probability. In addition, we show that
the weak irrepresentable condition, a necessary condition for the LASSO model
to be sign consistent (Zhao and Yu, 2006), is not necessary for consistent
change point detection. The performance of the L1 trend filter is evaluated in
some finite-sample simulation studies.
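For concreteness, L1 trend filtering solves $\min_x \tfrac12\|y-x\|_2^2 + \lambda\|Dx\|_1$ with $D$ the second-difference operator; a minimal sketch using cvxpy (the data and the value of $\lambda$ are placeholders):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 300
t = np.arange(n, dtype=float)
y = np.piecewise(t, [t < 100, (t >= 100) & (t < 200), t >= 200],
                 [lambda t: 0.05 * t, lambda t: 5 - 0.02 * (t - 100),
                  lambda t: 3 + 0.04 * (t - 200)]) + rng.normal(0, 0.3, n)

# Second-difference operator: (Dx)_i = x_i - 2 x_{i+1} + x_{i+2}.
D = np.diff(np.eye(n), 2, axis=0)

x = cp.Variable(n)
lam = 50.0  # placeholder regularization strength
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - x) + lam * cp.norm1(D @ x))).solve()
trend = x.value  # piecewise-linear fit; kinks mark slope change points
```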
|
It has recently been shown that superconductivity in magic-angle twisted
trilayer graphene survives to in-plane magnetic fields that are well in excess
of the Pauli limit, and much stronger than the in-plane critical magnetic
fields of magic-angle twisted bilayer graphene. The difference is surprising
because twisted bilayers and trilayers both support the magic-angle flat bands
thought to be the fountainhead of twisted graphene superconductivity. We show
here that the difference in critical magnetic fields can be traced to a
$\mathcal{C}_2 \mathcal{M}_{h}$ symmetry in trilayers that survives in-plane
magnetic fields, and also relative displacements between top and bottom layers
that are not under experimental control at present. A gate electric field
breaks the $\mathcal{C}_2 \mathcal{M}_{h}$ symmetry and therefore limits the
in-plane critical magnetic field.
|
We analyze possibilities of second-order quantifier elimination for formulae
containing parameters -- constants or functions. For this, we use a constraint
resolution calculus obtained from specializing the hierarchical superposition
calculus. If saturation terminates, we analyze possibilities of obtaining
weakest constraints on parameters which guarantee satisfiability. If the
saturation does not terminate, we identify situations in which finite
representations of infinite saturated sets exist. We identify situations in
which entailment between formulae expressed using second-order quantification
can be effectively checked. We illustrate the ideas on a series of examples
from wireless network research.
|
In the context of supervised learning of a function by a Neural Network (NN),
we claim and empirically justify that a NN yields better results when the
distribution of the data set focuses on regions where the function to learn is
steeper. We first translate this assumption into a mathematically workable form
using a Taylor expansion. Then, theoretical derivations allow us to construct a
methodology that we call Variance Based Samples Weighting (VBSW). VBSW uses the
local variance of the labels to weight the training points. This methodology is
general, scalable, cost-effective, and significantly increases the performance
of a large class of NNs for various classification and regression tasks on
image, text and multivariate data. We highlight its benefits with experiments
involving NNs ranging from shallow linear NNs to ResNet and BERT.
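A minimal sketch of the local-variance weighting idea (the neighborhood size and the exact weighting rule are our assumptions, not necessarily the paper's):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def vbsw_weights(X, y, k=10):
    # Weight each training point by the variance of the labels in its
    # k-nearest neighborhood: steep/rough regions get larger weights.
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, idx = nn.kneighbors(X)
    w = y[idx].var(axis=1)
    return w / w.mean()   # normalize so the average weight is 1

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.tanh(5 * X[:, 0])             # steep near 0, flat elsewhere
weights = vbsw_weights(X, y)         # pass as sample_weight to the loss
```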
|
Multiple transition phenomena in divalent Eu compound EuAl$_4$ with the
tetragonal structure were investigated via the single-crystal time-of-flight
neutron Laue technique. At 30.0 K below a charge-density-wave (CDW) transition
temperature of $T_{\rm CDW}$ = 140 K, superlattice peaks emerge near nuclear
Bragg peaks described by an ordering vector $q_{\rm CDW}$=(0 0 ${\delta}_c$)
with ${\delta}_c{\sim}$0.19. In contrast, magnetic peaks appear at $q_2 =
({\delta}_2 {\delta}_2 0)$ with ${\delta}_2$ = 0.085 in a magnetically ordered
phase at 13.5 K below $T_{\rm N1}$ = 15.4 K. By further cooling to below
$T_{\rm N3}$ = 12.2 K, the magnetic ordering vector changes into $q_1 =
({\delta}_1 0 0)$ with ${\delta}_1$ = 0.17 at 11.5 K and slightly shifts to
${\delta}_1$ = 0.194 at 4.3 K. No distinct change in the magnetic Bragg peak
was detected at $T_{\rm N2}$=13.2 K and $T_{\rm N4}$=10.0 K. The structural
modulation below $T_{\rm CDW}$ with $q_{\rm CDW}$ is characterized by the
absence of the superlattice peak along the (0 0 $l$) axis. As a similar CDW
transition was observed in SrAl$_4$, the structural modulation with $q_{\rm
CDW}$ could be mainly ascribed to the displacement of Al ions within the
tetragonal $ab$-plane. Complex magnetic transitions are in stark contrast to a
simple collinear magnetic structure in isovalent EuGa$_4$. This could stem from
different electronic structures with the CDW transition between two compounds.
|
We propose a gauged $B-L$ extension of the standard model (SM) where light
neutrinos are of Dirac type by virtue of tiny Yukawa couplings with the SM
Higgs. To achieve leptogenesis, we include additional heavy Majorana fermions
without introducing any $B-L$ violation by two units. An additional scalar
doublet with appropriate $B-L$ charge can allow heavy fermion coupling with the
SM leptons so that out of equilibrium decay of the former can lead to
generation of lepton asymmetry. Due to the $B-L$ gauge interactions of the
decaying fermion, the criteria of successful Dirac leptogenesis can also
constrain the gauge sector couplings so as to keep the corresponding washout
processes under control. The same $B-L$ gauge sector parameter space can also
be constrained from dark matter requirements if the latter is assumed to be a
SM singlet particle with non-zero $B-L$ charge. The same $B-L$ gauge
interactions also lead to additional thermalised relativistic degrees of
freedom $\Delta N_{\rm eff}$ from light Dirac neutrinos which are tightly
constrained by Planck 2018 data. While parameter space satisfying the criteria
of successful low-scale Dirac leptogenesis, dark matter and $\Delta N_{\rm
eff}$ survives even after incorporating the latest collider bounds, all the
currently allowed parameters can be probed by future measurements of $\Delta
N_{\rm eff}$.
|
In two-dimensional loop models, the scaling properties of critical random
curves are encoded in the correlators of connectivity operators. In the dense
O($n$) loop model, any such operator is naturally associated to a standard
module of the periodic Temperley-Lieb algebra. We introduce a new family of
representations of this algebra, with connectivity states that have two marked
points, and argue that they define the fusion of two standard modules. We
obtain their decomposition on the standard modules for generic values of the
parameters, which in turn yields the structure of the operator product
expansion of connectivity operators.
|
We seek to investigate the scalability of neuromorphic computing for computer
vision, with the objective of replicating non-neuromorphic performance on
computer vision tasks while reducing power consumption. We convert the deep
Artificial Neural Network (ANN) architecture U-Net to a Spiking Neural Network
(SNN) architecture using the Nengo framework. Both rate-based and spike-based
models are trained and optimized for benchmarking performance and power, using
a modified version of the ISBI 2D EM Segmentation dataset consisting of
microscope images of cells. We propose a partitioning method to optimize
inter-chip communication to improve speed and energy efficiency when deploying
multi-chip networks on the Loihi neuromorphic chip. We explore the advantages
of regularizing firing rates of Loihi neurons for converting ANN to SNN with
minimum accuracy loss and optimized energy consumption. We propose a
percentile-based regularization loss function to keep the spiking rate of each
neuron within a desired range. The SNN is converted directly from the
corresponding ANN, and demonstrates semantic segmentation similar to that of
the ANN using the same number of neurons and weights. However, the neuromorphic
implementation on the
Intel Loihi neuromorphic chip is over 2x more energy-efficient than
conventional hardware (CPU, GPU) when running online (one image at a time).
These power improvements are achieved without sacrificing the task performance
accuracy of the network, and when all weights (Loihi, CPU, and GPU networks)
are quantized to 8 bits.
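One plausible reading of the percentile-based rate regularizer is sketched below; the choice of percentiles, rate targets and quadratic penalty are assumptions on our part: penalize the upper and lower percentiles of the firing-rate distribution when they leave the desired band.

```python
import torch

def percentile_rate_loss(spikes, r_min=0.01, r_max=0.2, q=0.99):
    # spikes: (time, neurons) binary spike trains from one forward pass.
    rates = spikes.float().mean(dim=0)           # per-neuron firing rates
    hi = torch.quantile(rates, q)                # upper-percentile rate
    lo = torch.quantile(rates, 1.0 - q)          # lower-percentile rate
    # Quadratic penalty only when the percentiles leave [r_min, r_max].
    return torch.relu(hi - r_max) ** 2 + torch.relu(r_min - lo) ** 2

spikes = (torch.rand(100, 512) < 0.3).float()    # placeholder spike data
loss = percentile_rate_loss(spikes)              # add to the task loss
```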
|
We put forward the concept of work extraction from thermal noise by
phase-sensitive (homodyne) measurements of the noisy input followed by
(outcome-dependent) unitary manipulations of the post-measured state. For
optimized measurements, noise input with more than one quantum on average is
shown to yield heat-to-work conversion with efficiency and power that grow with
the mean number of input quanta, detector efficiency and its inverse
temperature. This protocol is shown to be advantageous compared to common
models of information and heat engines.
|
Different regimes of entanglement growth under measurement have been
demonstrated for quantum many-body systems, with an entangling phase for low
measurement rates and a disentangling phase for high rates (quantum Zeno
effect). Here we study entanglement growth in a disordered Bose-Fermi mixture
with the bosons playing the role of the effective self-induced measurement for
the fermions. Due to the interplay between the disorder and a non-Abelian
symmetry, the model features an entanglement growth resonance when the
boson-fermion interaction strength is varied. With the addition of a magnetic
field, the model acquires a dynamical symmetry leading to experimentally
measurable long-time local oscillations. At the entanglement growth resonance,
we demonstrate the emergence of the cleanest oscillations. Furthermore, we show
that this resonance is distinct from both noise enhanced transport and a
standard stochastic resonance. Our work paves the way for experimental
realizations of self-induced correlated phases in multi-species systems.
|
Saturn's E ring consists of micron-sized particles launched from Enceladus by
that moon's geological activity. A variety of small-scale structures in the
E-ring's brightness have been attributed to tendrils of material recently
launched from Enceladus. However, one of these features occurs at a location
where Enceladus' gravitational perturbations should concentrate background
E-ring particles into structures known as satellite wakes. While satellite
wakes have been observed previously in ring material drifting past other moons,
these E-ring structures would be the first examples of wakes involving
particles following horseshoe orbits near Enceladus' orbit. The predicted
intensity of these wake signatures is particularly sensitive to the fraction of
E-ring particles on orbits with low eccentricities and semi-major axes just
outside of Enceladus' orbit, and so detailed analyses of these and other
small-scale E-ring features should place strong constraints on the orbital
properties and evolution of E-ring particles.
|
In \cite{BH20} an elegant choice-free construction of a canonical extension
of a boolean algebra $B$ was given as the boolean algebra of regular open
subsets of the Alexandroff topology on the poset of proper filters of $B$. We
make this construction point-free by replacing the Alexandroff space of proper
filters of $B$ with the free frame $\mathcal{L}$ generated by the bounded
meet-semilattice of all filters of $B$ (ordered by reverse inclusion) and prove
that the booleanization of $\mathcal{L}$ is a canonical extension of $B$. Our
main result generalizes this approach to the category
$\boldsymbol{\mathit{ba}\ell}$ of bounded archimedean $\ell$-algebras, thus
yielding a point-free construction of canonical extensions in
$\boldsymbol{\mathit{ba}\ell}$. We conclude by showing that the algebra of
normal functions on the Alexandroff space of proper archimedean $\ell$-ideals
of $A$ is a canonical extension of $A\in\boldsymbol{\mathit{ba}\ell}$, thus
providing a generalization of the result of \cite{BH20} to
$\boldsymbol{\mathit{ba}\ell}$.
|
We report on the measurement of inclusive charmless semileptonic B decays $B
\to X_{u} \ell \nu$. The analysis makes use of hadronic tagging and is
performed on the full data set of the Belle experiment comprising 772 million
$B\bar{B}$ pairs. In the proceedings, the preliminary results of measurements
of partial branching fractions and the CKM matrix element $|V_{ub}|$ are
presented.
|
Superconductivity and magnetism are generally incompatible because of the
opposing requirement on electron spin alignment. When combined, they produce a
multitude of fascinating phenomena, including unconventional superconductivity
and topological superconductivity. The emergence of two-dimensional (2D) layered
superconducting and magnetic materials that can form nanoscale junctions with
atomically sharp interfaces presents an ideal laboratory to explore new
phenomena from coexisting superconductivity and magnetic ordering. Here we
report tunneling spectroscopy under an in-plane magnetic field of
superconductor-ferromagnet-superconductor (S/F/S) tunnel junctions that are
made of 2D Ising superconductor NbSe2 and ferromagnetic insulator CrBr3. We
observe nearly 100% tunneling anisotropic magnetoresistance (AMR), that is, a
difference in tunnel resistance upon changing the magnetization direction from
out-of-plane to in-plane. The giant tunneling AMR is induced by
superconductivity, particularly, a result of interfacial magnetic exchange
coupling and spin-dependent quasiparticle scattering. We also observe an
intriguing magnetic hysteresis effect in superconducting gap energy and
quasiparticle scattering rate with a critical temperature that is 2 K below the
superconducting transition temperature. Our study paves the way for exploring
superconducting spintronics and unconventional superconductivity in van der
Waals heterostructures.
|
This paper explores how different ideas of racial equity in machine learning,
in justice settings in particular, can present trade-offs that are difficult to
solve computationally. Machine learning is often used in justice settings to
create risk assessments, which are used to determine interventions, resources,
and punitive actions. Overall aspects and performance of these machine
learning-based tools, such as distributions of scores, outcome rates by levels,
and the frequency of false positives and true positives, can be problematic
when examined by racial group. Models that produce different distributions of
scores or produce a different relationship between level and outcome are
problematic when those scores and levels are directly linked to the restriction
of individual liberty and to the broader context of racial inequity. While
computation can help highlight these aspects, data and computation are unlikely
to solve them. This paper explores where values and mission might have to fill
the spaces computation leaves.
|
Affect recognition based on subjects' facial expressions has been a topic of
major research in the attempt to generate machines that can understand the way
subjects feel, act and react. In the past, due to the unavailability of large
amounts of data captured in real-life situations, research has mainly focused
on controlled environments. However, recently, social media platforms have
been widely used. Moreover, deep learning has emerged as a means to solve
visual analysis and recognition problems. This paper exploits these advances
and presents significant contributions for affect analysis and recognition
in-the-wild. Affect analysis and recognition can be seen as a dual knowledge
generation problem, involving: i) creation of new, large and rich in-the-wild
databases and ii) design and training of novel deep neural architectures that
are able to analyse affect over these databases and to successfully generalise
their performance on other datasets. The paper focuses on large in-the-wild
databases, i.e., Aff-Wild and Aff-Wild2 and presents the design of two classes
of deep neural networks trained with these databases. The first class refers to
uni-task affect recognition, focusing on prediction of the valence and arousal
dimensional variables. The second class refers to estimation of all main
behavior tasks, i.e. valence-arousal prediction; categorical emotion
classification in seven basic facial expressions; facial Action Unit detection.
A novel multi-task and holistic framework is presented which is able to jointly
learn and effectively generalize and perform affect recognition over all
existing in-the-wild databases. Large experimental studies illustrate the
achieved performance improvement over the existing state-of-the-art in affect
recognition.
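A minimal sketch of a multi-task training objective combining the three behavior tasks named above; the loss weights, head names and output shapes are assumptions, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def multitask_affect_loss(preds, targets, w_va=1.0, w_expr=1.0, w_au=1.0):
    # preds/targets hold the three heads: continuous valence-arousal,
    # 7-way basic expression logits, and multi-label Action Units.
    loss_va = F.mse_loss(preds["va"], targets["va"])
    loss_expr = F.cross_entropy(preds["expr"], targets["expr"])
    loss_au = F.binary_cross_entropy_with_logits(preds["au"], targets["au"])
    return w_va * loss_va + w_expr * loss_expr + w_au * loss_au

batch = 8
preds = {"va": torch.randn(batch, 2), "expr": torch.randn(batch, 7),
         "au": torch.randn(batch, 12)}
targets = {"va": torch.rand(batch, 2) * 2 - 1,   # valence, arousal in [-1, 1]
           "expr": torch.randint(0, 7, (batch,)),
           "au": torch.randint(0, 2, (batch, 12)).float()}
print(multitask_affect_loss(preds, targets))
```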
|
The Segerdahl process (Segerdahl (1955)), characterized by exponential claims
and affine drift, has drawn a considerable amount of interest (see, for
example, Tichy (1984); Avram and Usabel (2008)) due to its economic interest
(it is the simplest risk process which takes into account the effect of
interest rates). It is also the simplest non-Levy, non-diffusion example of a
spectrally negative Markov risk model. Note that for both spectrally negative
Levy and diffusion processes, first passage theories which are based on
identifying two basic monotone harmonic functions/martingales have been
developed. This means that for these processes many control problems involving
dividends, capital injections, etc., may be solved explicitly once the two
basic functions have been obtained. Furthermore, extensions to general
spectrally negative Markov processes are possible (Landriault et al. (2017);
Avram et al. (2018); Avram and Goreac (2019); Avram et al. (2019b)).
Unfortunately, methods for computing the basic functions are still lacking
outside the Levy and diffusion classes, with the notable exception of the
Segerdahl process, for which the ruin probability has been computed (Paulsen
and Gjessing (1997)). However, there is a striking lack of numerical results in
both cases. This motivated us to review several approaches, with the purpose of
drawing attention to connections between them and to underlying open problems.
|
Ultralight bosons are possible fundamental building blocks of nature, and
promising dark matter candidates. They can trigger superradiant instabilities
of spinning black holes (BHs) and form long-lived "bosonic clouds" that slowly
dissipate energy through the emission of gravitational waves (GWs). Previous
studies constrained ultralight bosons by searching for the stochastic
gravitational wave background (SGWB) emitted by these sources in LIGO data,
focusing on the most unstable dipolar and quadrupolar modes. Here we focus on
scalar bosons and extend previous works by: (i) studying in detail the impact
of higher modes in the SGWB; (ii) exploring the potential of future proposed
ground-based GW detectors, such as the Neutron Star Extreme Matter Observatory,
the Einstein Telescope and Cosmic Explorer, to detect this SGWB. We find that
higher modes largely dominate the SGWB for bosons with masses $\gtrsim
10^{-12}$ eV, which is particularly relevant for future GW detectors. By
estimating the signal-to-noise ratio of this SGWB, due to both stellar-origin
BHs and from a hypothetical population of primordial BHs, we find that future
ground-based GW detectors could observe or constrain bosons in the mass range
$\sim [7\times 10^{-14}, 2\times 10^{-11}]$ eV and significantly improve on
current and future constraints imposed by LIGO and Virgo observations.
|
Army cadets obtain occupations through a centralized process. Three
objectives -- increasing retention, aligning talent, and enhancing trust --
have guided reforms to this process since 2006. West Point's mechanism for the
Class of 2020 exacerbated challenges implementing Army policy aims. We
formulate these desiderata as axioms and study their implications theoretically
and with administrative data. We show that the Army's objectives not only
determine an allocation mechanism, but also a specific priority policy, a
uniqueness result that integrates mechanism and priority design. These results
led to a re-design of the mechanism, now adopted at both West Point and ROTC.
|
We associate a certain tensor product lattice to any primitive integer
lattice and ask about its typical shape. These lattices are related to the
tangent bundle of Grassmannians and their study is motivated by Peyre's
programme on "freeness" for rational points of bounded height on Fano
varieties.
|
Static (DC) and dynamic (AC, at 14 MHz and 8 GHz) magnetic susceptibilities
of single crystals of a ferromagnetic superconductor,
$\textrm{EuFe}_{2}(\textrm{As}_{1-x}\textrm{P}_{x})_{2}$ (x = 0.23), were
measured in pristine state and after different doses of 2.5 MeV electron or 3.5
MeV proton irradiation. The superconducting transition temperature, $T_{c}(H)$,
shows an extraordinarily large decrease. It starts at
$T_{c}(H=0)\approx24\:\textrm{K}$ in the pristine sample for both AC and DC
measurements, but moves to almost half of that value after moderate irradiation
dose. Our results suggest that in
$\textrm{EuFe}_{2}(\textrm{As}_{1-x}\textrm{P}_{x})_{2}$ superconductivity is
affected by local-moment ferromagnetism mostly via the spontaneous internal
magnetic fields induced by the FM subsystem. Another mechanism is revealed upon
irradiation, where magnetic defects created in the ordered $\text{Eu}^{2+}$ lattice
act as efficient pairbreakers leading to a significant $T_{c}$ reduction upon
irradiation compared to other 122 compounds. On the other hand, the exchange
interactions seem to be weakly screened by the superconducting phase leading to
a modest increase of $T_{m}$ (less than 1 K) after the irradiation drives
$T_{c}$ to below $T_{m}$. The results suggest that FM and SC phases coexist
microscopically in the same volume.
|
We propose a new network architecture, the Fractal Pyramid Networks (PFNs),
for pixel-wise prediction tasks as an alternative to the widely used
encoder-decoder structure. In the encoder-decoder structure, the input is
processed by an encoding-decoding pipeline that tries to produce a single
semantic, large-channel feature. In contrast, our proposed PFNs hold multiple
information processing pathways and encode the information into multiple
separate small-channel features. On the task of self-supervised monocular depth
estimation, even without ImageNet pretraining, our models can compete with or
outperform the state-of-the-art methods on the KITTI dataset with far fewer
parameters. Moreover, the visual quality of the prediction is significantly
improved. The experiment on semantic segmentation provides evidence that the
PFNs can be applied to other pixel-wise prediction tasks, and demonstrates that
our models can capture more global structure information.
|
Black-box quantum state preparation is a fundamental primitive in quantum
algorithms. Starting from Grover, a series of techniques have been devised to
reduce the complexity. In this work, we propose to perform black-box state
preparation using the technique of linear combination of unitaries (LCU). We
provide two algorithms based on different structures of LCU. Our algorithms
improve upon the existing best results by reducing the required additional
qubits and Toffoli gates to 2log(n) and n, respectively, for bit precision
n. We demonstrate the algorithms using the IBM Quantum Experience cloud
services. The further reduced complexity of the present algorithms brings the
black-box quantum state preparation closer to reality.
|
Three-dimensional topological insulators (TIs) host helical Dirac surface
states at the interface with a trivial insulator. In quasi-one-dimensional TI
nanoribbon structures the wave function of surface charges extends
phase-coherently along the perimeter of the nanoribbon, resulting in a
quantization of transverse surface modes. Furthermore, as the inherent
spin-momentum locking results in a Berry phase offset of $\pi$ for
self-interfering charge carriers, an energy gap within the surface state
dispersion appears and all states become spin-degenerate. We investigate and
compare the magnetic field dependent surface state dispersion in selectively
deposited Bi$_2$Te$_3$ TI micro- and nanoribbon structures by analysing the
gate voltage dependent magnetoconductance at cryogenic temperatures. While in
wide microribbon devices the field effect mainly changes the amount of bulk
charges close to the top surface we identify coherent transverse surface states
along the perimeter of the nanoribbon devices responding to a change in top
gate potential. We quantify the energetic spacing in between these quantized
transverse subbands by using an electrostatic model that treats an initial
difference in charge carrier densities on the top and bottom surface as well as
remaining bulk charges. In the gate voltage dependent transconductance we find
oscillations that change their relative phase by $\pi$ at half-integer values
of the magnetic flux quantum applied coaxial to the nanoribbon, which is a
signature for a magnetic flux dependent topological phase transition in narrow,
selectively deposited TI nanoribbon devices.
|
Interference between light waves is one of the most widely known phenomena in
physics and is widely used in modern optics, ranging from precise detection
at the nanoscale to gravitational-wave observation. Akin to light, both
classical and quantum interference between surface plasmon polaritons (SPPs)
has been demonstrated. However, active subcycle control of SPP interference in
time (usually less than several femtoseconds in the visible range) is still
missing, which hinders the ultimate manipulation of SPP interference on
ultrafast time scales. In this paper, the interference between SPPs launched by
a hole dimer, which was excited by a grazing-incidence free electron beam
without direct contact, was manipulated through both propagation
and initial phase difference control. Particularly, using cathodoluminescence
spectroscopy, the appearance of higher interference orders was observed
through propagation phase control by increasing the separation distance of the
dimer. Meanwhile, a peak-valley-peak evolution at a certain wavelength
through changing the accelerating voltages was observed, which originates from
the initial phase difference control of hole launched SPPs. In particular, the
time resolution of this kind of control is shown to be in the ultrafast
attosecond (as) region. Our work suggests that fast electron beams can be an
efficient tool to control polariton interference on a subcycle time scale,
which can potentially be used in ultrafast optical processing or sensing.
|
We investigate Bayesian predictive distributions for Wishart distributions
under the Kullback--Leibler divergence. We consider a recently introduced class
of prior distributions, called the family of enriched standard conjugate prior
distributions, and compare the Bayesian predictive distributions based on these
prior distributions. We study the performance of the Bayesian predictive
distribution based on the reference prior distribution in the family. We show
that there exists a prior distribution in the family that dominates the
reference prior distribution.
|
The construction of an ontology of scientific knowledge objects, presented
here, is part of the development of an approach oriented towards the
visualization of scientific knowledge. It is motivated by the fact that the
concepts that are used to organize scientific knowledge (theorem, law,
experiment, proof, etc.) appear in existing ontologies, but none of these
ontologies is centered on this topic and presents them in a simple and easily
understandable organization. This ontology has been constructed by 1) selecting
concepts that appear in high level ontologies or in ontologies of knowledge
objects of specific fields and 2) by interviewing scientists in different
fields. We have aligned this ontology with some of the sources used, which has
allowed us to verify its consistency with respect to them. The validation of
the ontology consists in using it to formalize knowledge from various sources,
which we have begun to do in the field of physics.
|
We consider an input-constrained differential-drive robot with actuator
dynamics. For this system, we establish asymptotic stability of the origin on
arbitrary compact, convex sets using Model Predictive Control (MPC) without
stabilizing terminal conditions despite the presence of state constraints and
actuator dynamics. We note that the problem without those two additional
ingredients was essentially solved beforehand, despite the fact that the
linearization is not stabilizable. We propose an approach successfully solving
the task at hand by combining the theory of barriers to characterize the
viability kernel and an MPC framework based on so-called cost controllability.
Moreover, we present a numerical case study to derive quantitative bounds on
the required length of the prediction horizon. To this end, we investigate the
boundary of the viability kernel and a neighbourhood of the origin, i.e. the
most interesting areas.
|
Anomaly detection is a crucial and challenging subject that has been studied
within diverse research areas. In this work, we explore the task of log anomaly
detection (especially computer system logs and user behavior logs) by analyzing
logs' sequential information. We propose LAMA, a multi-head attention based
sequential model to process log streams as template activity (event) sequences.
A next event prediction task is applied to train the model for anomaly
detection. Extensive empirical studies demonstrate that our new model
outperforms existing log anomaly detection methods including statistical and
deep learning methodologies, validating the effectiveness of our proposed
method in learning sequence patterns of log data.
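To illustrate the template-activity next-event formulation above, a minimal sketch of a multi-head-attention next-event predictor and its anomaly score; the architecture details and hyperparameters are assumptions, not LAMA's actual configuration.

```python
import torch
import torch.nn as nn

class NextEventModel(nn.Module):
    # Embed event ids, apply self-attention, predict the next event.
    def __init__(self, n_events, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(n_events, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, n_events)

    def forward(self, seq):                 # seq: (batch, window) of event ids
        h = self.embed(seq)
        h, _ = self.attn(h, h, h)
        return self.out(h[:, -1])           # logits for the next event

def anomaly_score(model, seq, next_event):
    # Low probability of the observed next event => anomalous.
    with torch.no_grad():
        probs = torch.softmax(model(seq), dim=-1)
    return -torch.log(probs[torch.arange(seq.size(0)), next_event])

model = NextEventModel(n_events=50)
window = torch.randint(0, 50, (4, 20))     # four sliding windows of events
nxt = torch.randint(0, 50, (4,))
print(anomaly_score(model, window, nxt))   # untrained: roughly -log(1/50)
```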
|
In this paper, we consider fully connected feed-forward deep neural networks
where weights and biases are independent and identically distributed according
to Gaussian distributions. Extending previous results (Matthews et al.,
2018a;b; Yang, 2019) we adopt a function-space perspective, i.e. we look at
neural networks as infinite-dimensional random elements on the input space
$\mathbb{R}^I$. Under suitable assumptions on the activation function we show
that: i) a network defines a continuous Gaussian process on the input space
$\mathbb{R}^I$; ii) a network with re-scaled weights converges weakly to a
continuous Gaussian process in the large-width limit; iii) the limiting
Gaussian process has almost surely locally $\gamma$-H\"older continuous paths,
for $0 < \gamma <1$. Our results contribute to recent theoretical studies on
the interplay between infinitely wide deep neural networks and Gaussian
processes by establishing weak convergence in function-space with respect to a
stronger metric.
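A quick numerical illustration of the large-width behavior described above, using a one-hidden-layer network with the usual $1/\sqrt{\text{width}}$ re-scaling of the output layer; the specific widths and activation are our choices.

```python
import numpy as np

def sample_network_output(x, width, rng):
    # One-hidden-layer network with i.i.d. Gaussian weights/biases and
    # 1/sqrt(width) scaling of the output layer (NNGP-style normalization).
    W1 = rng.normal(size=(width, x.shape[0]))
    b1 = rng.normal(size=width)
    W2 = rng.normal(size=width) / np.sqrt(width)
    return W2 @ np.tanh(W1 @ x + b1)

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])
for width in (10, 100, 10_000):
    outs = np.array([sample_network_output(x, width, rng) for _ in range(2000)])
    # As the width grows, the distribution of f(x) across random draws of
    # the network approaches a centered Gaussian with a fixed variance.
    print(width, outs.mean().round(3), outs.std().round(3))
```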
|
In this article, we wish to establish some first order differential
subordination relations for certain Carath\'{e}odory functions with nice
geometrical properties. Moreover, several implications are determined so that
the normalized analytic function belongs to various subclasses of starlike
functions.
|
We study the Du Bois complex of a hypersurface $Z$ in a smooth complex
algebraic variety in terms of the minimal exponent $\widetilde{\alpha}(Z)$ and
give various applications. We show that if $\widetilde{\alpha}(Z)\geq p+1$,
then the canonical morphism $\Omega_Z^p\to \underline{\Omega}_Z^p$ is an
isomorphism. On the other hand, if $Z$ is singular and
$\widetilde{\alpha}(Z)>p\geq 2$, then ${\mathcal
H}^{p-1}(\underline{\Omega}_Z^{n-p})\neq 0$.
|