The analysis of Magnetic Resonance Imaging (MRI) sequences enables clinical
professionals to monitor the progression of a brain tumor. As interest in
automating brain MRI volume analysis grows, it becomes important to have each
sequence reliably identified. However, the unstandardized naming of MRI
sequences makes their identification difficult for automated systems, and
makes it difficult for researchers to generate or use datasets for machine
learning research. To address this, we propose a system for identifying
types of brain MRI sequences based on deep learning. By training a
Convolutional Neural Network (CNN) based on 18-layer ResNet architecture, our
system can classify a volumetric brain MRI as a FLAIR, T1, T1c or T2 sequence,
or whether it does not belong to any of these classes. The network was
evaluated on publicly available datasets comprising both pre-processed
(BraTS) and non-pre-processed (TCGA-GBM) image types with diverse
acquisition protocols, requiring only a few slices of the volume for training.
Our system can classify among sequence types with an accuracy of 96.81%.
|
Neural machine translation (NMT) has recently gained widespread attention
because of its high translation accuracy. However, it shows poor performance in
the translation of long sentences, which is a major issue in low-resource
languages. It is assumed that this issue is caused by an insufficient number of
long sentences in the training data. Therefore, this study proposes a simple
data augmentation method to handle long sentences. In this method, we use only
the given parallel corpora as the training data and generate long sentences by
concatenating two sentences. Based on the experimental results, we confirm
improvements in long sentence translation by the proposed data augmentation
method, despite its simplicity. Moreover, the translation quality is further
improved when the proposed method is combined with back-translation.
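A minimal sketch of the described augmentation, assuming pairs are drawn at random from the given parallel corpus and joined with a space (the sampling strategy and separator are assumptions, not details from the abstract):

```python
import random

def augment_by_concatenation(parallel_corpus, n_augmented, seed=0):
    """Generate synthetic long sentence pairs by concatenating two
    randomly chosen pairs from the given parallel corpus."""
    rng = random.Random(seed)
    augmented = []
    for _ in range(n_augmented):
        (src_a, tgt_a), (src_b, tgt_b) = rng.sample(parallel_corpus, 2)
        augmented.append((src_a + " " + src_b, tgt_a + " " + tgt_b))
    return augmented

corpus = [
    ("ich bin hier", "i am here"),
    ("das ist gut", "that is good"),
    ("wir gehen heim", "we go home"),
]
extra = augment_by_concatenation(corpus, n_augmented=2)
for src, tgt in extra:
    print(src, "->", tgt)
```

Since only the original corpus is used, no external data is required; the synthetic long pairs are simply appended to the training data.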
|
It is commonly known that Killing vectors and tensors are in one-to-one
correspondence with polynomial first integrals of the geodesic equation. In
this work, metrics admitting nonpolynomial first integrals of the geodesic
equation are constructed, each of which reveals a chain of generalised
Killing vectors.
|
Motivated by the philosophy that $C^*$-algebras reflect noncommutative
topology, we investigate the stable homotopy theory of the (opposite) category
of $C^*$-algebras. We focus on $C^*$-algebras which are non-commutative
CW-complexes in the sense of [ELP]. We construct the stable $\infty$-category
of noncommutative CW-spectra, which we denote by $\mathtt{NSp}$. Let
$\mathcal{M}$ be the full spectral subcategory of $\mathtt{NSp}$ spanned by
"noncommutative suspension spectra" of matrix algebras. Our main result is that
$\mathtt{NSp}$ is equivalent to the $\infty$-category of spectral presheaves on
$\mathcal{M}$.
To prove this we first prove a general result which states that any compactly
generated stable $\infty$-category is naturally equivalent to the
$\infty$-category of spectral presheaves on a full spectral subcategory spanned
by a set of compact generators. This is an $\infty$-categorical version of a
result by Schwede and Shipley [ScSh1]. In proving this we use the language of
enriched $\infty$-categories as developed by Hinich [Hin2,Hin3].
We end by presenting a "strict" model for $\mathcal{M}$. That is, we define a
category $\mathcal{M}_s$ strictly enriched in a certain monoidal model category
of spectra $\mathtt{Sp^M}$. We give a direct proof that the category of
$\mathtt{Sp^M}$-enriched presheaves $\mathcal{M}_s^{op}\to\mathtt{Sp^M}$ with
the projective model structure models $\mathtt{NSp}$ and conclude that
$\mathcal{M}_s$ is a strict model for $\mathcal{M}$.
|
We consider a hierarchy of four typed call-by-value languages with either
higher-order or ground-type references and with either callcc or no control
operator. Our first result is a fully abstract trace model for the most
expressive setting, featuring both higher-order references and callcc,
constructed in the spirit of operational game semantics. Next we examine the
impact of suppressing higher-order references and callcc in contexts and
provide an operational explanation for the game-semantic conditions known as
visibility and bracketing, respectively. This allows us to refine the original
model to provide fully abstract trace models of interaction with contexts that
need not use higher-order references or callcc. Along the way, we discuss the
relationship between error- and termination-based contextual testing in each
case, and relate the two to trace and complete trace equivalence
respectively. Overall, the paper provides a systematic development of
operational game semantics for all four cases, which represent the state-based
face of the so-called semantic cube.
|
We describe a functional framework suitable to the analysis of the
Cahn-Hilliard equation on an evolving surface whose evolution is assumed to be
given \textit{a priori}. The model is derived from balance laws for an order
parameter with an associated Cahn-Hilliard energy functional and we establish
well-posedness for general regular potentials, satisfying some prescribed
growth conditions, and for two singular nonlinearities -- the thermodynamically
relevant logarithmic potential and a double obstacle potential. We identify,
for the singular potentials, necessary conditions on the initial data and the
evolution of the surfaces for global-in-time existence of solutions, which
arise from the fact that integrals of solutions are preserved over time, and
prove well-posedness for initial data on a suitable set of admissible initial
conditions. We then briefly describe an alternative derivation leading to a
model that instead preserves a weighted integral of the solution, and explain
how our arguments can be adapted in order to obtain global-in-time existence
without restrictions on the initial conditions. Some illustrative examples and
further research directions are given in the final sections.
|
While unbiased machine learning models are essential for many applications,
bias is a human-defined concept that can vary across tasks. Given only
input-label pairs, algorithms may lack sufficient information to distinguish
stable (causal) features from unstable (spurious) features. However, related
tasks often share similar biases -- an observation we may leverage to develop
stable classifiers in the transfer setting. In this work, we explicitly inform
the target classifier about unstable features in the source tasks.
Specifically, we derive a representation that encodes the unstable features by
contrasting different data environments in the source task. We achieve
robustness by clustering data of the target task according to this
representation and minimizing the worst-case risk across these clusters. We
evaluate our method on both text and image classifications. Empirical results
demonstrate that our algorithm is able to maintain robustness on the target
task, outperforming the best baseline by 22.9% in absolute accuracy across 12
transfer settings. Our code is available at https://github.com/YujiaBao/Tofu.
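The worst-case-risk objective over inferred clusters can be illustrated with a toy sketch (the clustering step itself and the loss values below are assumptions for illustration, not the paper's actual pipeline):

```python
from collections import defaultdict

def worst_case_risk(losses, clusters):
    """Average the loss within each cluster and return the worst
    (largest) cluster-level risk, the quantity minimized during training."""
    per_cluster = defaultdict(list)
    for loss, c in zip(losses, clusters):
        per_cluster[c].append(loss)
    risks = {c: sum(v) / len(v) for c, v in per_cluster.items()}
    worst = max(risks, key=risks.get)
    return risks[worst], risks

# toy example: cluster 1 (e.g. examples lacking the spurious feature)
# incurs higher loss, so it dominates the worst-case objective
losses   = [0.2, 0.3, 0.9, 1.1]
clusters = [0,   0,   1,   1]
worst, risks = worst_case_risk(losses, clusters)
print(worst, risks)
```

Minimizing this maximum, rather than the average, prevents the classifier from sacrificing the harder cluster for overall accuracy.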
|
A comb domain is defined to be the entire complex plane with a collection of
vertical slits, symmetric over the real axis, removed. In this paper, we
consider the question of determining whether the exit time of planar Brownian
motion from such a domain has finite $p$-th moment. This question has been
addressed before in relation to starlike domains, but these previous results do
not apply to comb domains. Our main result is a sufficient condition on the
location of the slits which ensures that the $p$-th moment of the exit time is
finite. Several auxiliary results are also presented, including a construction
of a comb domain whose exit time has infinite $p$-th moment for all $p \geq
1/2$.
|
Word embeddings are often used in natural language processing as a means to
quantify relationships between words. More generally, these same word embedding
techniques can be used to quantify relationships between features. In this
paper, we first consider multiple different word embedding techniques within
the context of malware classification. We use hidden Markov models to obtain
embedding vectors in an approach that we refer to as HMM2Vec, and we generate
vector embeddings based on principal component analysis. We also consider the
popular neural network based word embedding technique known as Word2Vec. In
each case, we derive feature embeddings based on opcode sequences for malware
samples from a variety of different families. We show that these feature
embeddings yield better classification accuracy than baseline HMM experiments
that use the opcode sequences directly. These results show that word
embeddings can be a useful feature
engineering step in the field of malware analysis.
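Word2Vec-style embeddings are trained on (center, context) pairs; a minimal sketch of extracting such pairs from an opcode sequence (the window size and the opcode names here are illustrative assumptions):

```python
def skipgram_pairs(opcodes, window=2):
    """Produce (center, context) opcode pairs for Word2Vec-style training
    by pairing each opcode with its neighbors inside a sliding window."""
    pairs = []
    for i, center in enumerate(opcodes):
        for j in range(max(0, i - window), min(len(opcodes), i + window + 1)):
            if j != i:
                pairs.append((center, opcodes[j]))
    return pairs

seq = ["mov", "push", "call", "ret"]
pairs = skipgram_pairs(seq, window=1)
print(pairs)
```

The resulting pairs would feed a skip-gram model, whose learned vectors then serve as per-opcode features for the downstream malware classifier.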
|
Machine learning inference is increasingly being executed locally on mobile
and embedded platforms, due to the clear advantages in latency, privacy and
connectivity. In this paper, we present approaches for online resource
management in heterogeneous multi-core systems and show how they can be applied
to optimise the performance of machine learning workloads. Performance can be
defined using platform-dependent (e.g. speed, energy) and platform-independent
(accuracy, confidence) metrics. In particular, we show how a Deep Neural
Network (DNN) can be dynamically scaled to trade off these various
performance metrics. Achieving consistent performance when executing on
different platforms is necessary yet challenging, due to the different
resources provided and their capability, and their time-varying availability
when executing alongside other workloads. Managing the interface between
available hardware resources (often numerous and heterogeneous in nature),
software requirements, and user experience is increasingly complex.
|
Basic physics of drift-wave turbulence and zonal flows has long been studied
within the framework of wave-kinetic theory. Recently, this framework has been
re-examined from first principles, which has led to more accurate yet still
tractable "improved" wave-kinetic equations. In particular, these equations
reveal an important effect of the zonal-flow "curvature" (the second radial
derivative of the flow velocity) on dynamics and stability of drift waves and
zonal flows. We overview these recent findings and present a consolidated
high-level picture of (mostly quasilinear) zonal-flow physics within reduced
models of drift-wave turbulence.
|
The search for a single material system that simultaneously exhibits a
topological phase and intrinsic superconductivity has so far met with limited
success, although such a system is far more favorable, especially for quantum
device applications. Apart from artificially engineering topological
superconductivity in heterostructures, an alternative is to induce
superconductivity in topological materials by pressure or other clean
techniques.
Here, based on first-principles calculations, we first show that
quasi-one-dimensional compound (NbSe4)2I represents a rare example of a chiral
Weyl semimetal in which the set of symmetry-related Weyl points (WPs) exhibit
the same chiral charge at a certain energy. The net chiral charge (NCC) below
the Fermi level EF (or any given energy) can be tuned by pressure. In
addition, pressure induces a partial disorder that is accompanied by the
emergence of superconductivity. Although the iodine sub-lattice amorphizes
under high pressure, the one-dimensional NbSe4 chains in (NbSe4)2I remain
intact and provide a superconducting channel in one dimension. Our combined
theoretical and experimental research provides critical insight into a new phase
of the one-dimensional system, in which distinctive phase transitions and
correlated topological states emerge upon compression.
|
Most chatbot literature that focuses on improving the fluency and coherence
of a chatbot is dedicated to making chatbots more human-like. However, very
little work delves into what really separates humans from chatbots -- humans
intrinsically understand the effect their responses have on the interlocutor
and often respond with an intention such as proposing an optimistic view to
make the interlocutor feel better. This paper proposes an innovative framework
to train chatbots to possess human-like intentions. Our framework includes a
guiding chatbot and an interlocutor model that plays the role of humans. The
guiding chatbot is assigned an intention and learns to induce the interlocutor
to reply with responses matching the intention, for example, long responses,
joyful responses, responses with specific words, etc. We examined our framework
using three experimental setups and evaluated the guiding chatbot with four
different metrics to demonstrate flexibility and performance advantages.
Additionally, we performed trials with human interlocutors to substantiate the
guiding chatbot's effectiveness in influencing the responses of humans to a
certain extent. Code will be made available to the public.
|
We study the key features of the Josephson transport through a curved
semiconducting nanowire. Based on numerical simulations and analytical
estimates within the framework of the Bogoliubov-de Gennes equations we find
the ground-state phase difference $\varphi_0$ between the superconducting leads
tuned by the spin splitting field $h$ driving the system from the topologically
trivial to the nontrivial superconducting state. The phase $\varphi_0$ vanishes
for rather small $h$, grows in a certain field range around the topological
transition, and then saturates at large $h$ in the Kitaev regime. Both the
subgap and the continuum quasiparticle levels are responsible for the above
behavior of the anomalous Josephson phase. It is demonstrated that the
crossover region on $\varphi_0(h)$ dependencies reveals itself in the
superconducting diode effect. The resulting tunable phase battery can be used
as a probe of topological transitions in Majorana networks and can become a
useful element of various quantum computation devices.
|
In this paper we give an alternative construction of a certain class of
Deformed Double Current Algebras. These algebras are deformations of $U({\rm
End}(\Bbbk^r)[x,y])$ and they were initially defined and studied by N. Guay in
his papers. Here we construct them as algebras of endomorphisms in Deligne
category. We do this by taking an ultraproduct of spherical subalgebras of the
extended Cherednik algebras of finite rank.
|
Call centers, in which human operators attend to clients using textual chat,
are very common in modern e-commerce. Training enough skilled operators who are
able to provide good service is a challenge. We suggest an algorithm and a
method to train and implement an assisting agent that provides on-line advice
to operators while they attend clients. The agent is domain-independent and can
be introduced to new domains without major efforts in design, training and
organizing structured knowledge of the professional discipline. We demonstrate
the applicability of the system in an experiment that realizes its full
life-cycle on a specific domain and analyze its capabilities.
|
Currently several Bayesian approaches are available to estimate large sparse
precision matrices, including Bayesian graphical Lasso (Wang, 2012), Bayesian
structure learning (Banerjee and Ghosal, 2015), and graphical horseshoe (Li et
al., 2019). Although these methods have exhibited nice empirical performances,
in general they are computationally expensive. Moreover, we have limited
knowledge about the theoretical properties, e.g., posterior contraction rate,
of graphical Bayesian Lasso and graphical horseshoe. In this paper, we propose
a new method that integrates some commonly used continuous shrinkage priors
into a quasi-Bayesian framework featuring a pseudo-likelihood. Under mild
conditions, we establish an optimal posterior contraction rate for the proposed
method. Compared to existing approaches, our method has two main advantages.
First, our method is computationally more efficient while achieving similar
error rate; second, our framework is more amenable to theoretical analysis.
Extensive simulation experiments and the analysis of a real data set are
supportive of our theoretical results.
|
3D face recognition has shown its potential in many application scenarios.
Among numerous 3D face recognition methods, deep-learning-based methods have
developed vigorously in recent years. In this paper, an end-to-end deep
learning network entitled Sur3dNet-Face for point-cloud-based 3D face
recognition is proposed. The network uses PointNet as the backbone, which is a
successful point cloud classification solution but does not work properly in
face recognition. Supplemented with modifications in network architecture and a
few-data guided learning framework based on Gaussian process morphable model,
the backbone is successfully modified for 3D face recognition. Different from
existing methods training with a large amount of data in multiple datasets, our
method uses Spring2003 subset of FRGC v2.0 for training which contains only 943
facial scans, and the network is well trained with the guidance of such a small
amount of real data. Without fine-tuning on the test set, the Rank-1
Recognition Rate (RR1) is achieved as follows: 98.85% on FRGC v2.0 dataset and
99.33% on the Bosphorus dataset, which demonstrates the effectiveness and
potential of our method.
|
In this perspective we discuss recent theoretical and experimental concepts
giving a route to a better understanding of conventional and unconventional
pairing mechanisms between opposite-spin fermions arising in one-dimensional
mesoscopic systems. With special attention, we focus on the problem of
experimental detectability of correlations between particles. We argue that
state-of-the-art experiments with few ultracold fermions may finally break an
impasse and give pioneering and unquestionable verification of the existence of
correlated pairs with non-zero center-of-mass momentum.
|
Current intent classification approaches assign binary intent class
memberships to natural language utterances while disregarding the inherent
vagueness in language and the corresponding vagueness in intent class
boundaries. In this work, we propose a scheme to address the ambiguity in
single-intent as well as multi-intent natural language utterances by creating
degree memberships over fuzzified intent classes. To our knowledge, this is the
first work to address and quantify the impact of the fuzzy nature of natural
language utterances over intent category memberships. Additionally, our
approach overcomes the sparsity of multi-intent utterance data to train
classification models by using a small database of single intent utterances to
generate class memberships over multi-intent utterances. We evaluate our
approach over two task-oriented dialog datasets, across different fuzzy
membership generation techniques and approximate string similarity measures.
Our results reveal the impact of lexical overlap between utterances of
different intents, and the underlying data distributions, on the fuzzification
of intent memberships. Moreover, we evaluate the accuracy of our approach by
comparing the defuzzified memberships to their binary counterparts, across
different combinations of membership functions and string similarity measures.
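As a hedged illustration of generating degree memberships over intent classes from a small single-intent database via approximate string similarity (the exemplar utterances, the choice of `difflib.SequenceMatcher`, and the max-then-normalize aggregation are all assumptions, not the paper's exact method):

```python
from difflib import SequenceMatcher

def fuzzy_memberships(utterance, exemplars):
    """Assign degree memberships over intent classes by scoring the
    utterance against single-intent exemplar utterances and
    normalizing the best per-class scores to sum to one."""
    scores = {}
    for intent, examples in exemplars.items():
        scores[intent] = max(
            SequenceMatcher(None, utterance.lower(), ex.lower()).ratio()
            for ex in examples
        )
    total = sum(scores.values())
    return {intent: s / total for intent, s in scores.items()}

exemplars = {
    "book_flight": ["book a flight to boston", "i need a plane ticket"],
    "check_weather": ["what is the weather today", "weather forecast please"],
}
m = fuzzy_memberships("book a flight and tell me the weather", exemplars)
print(m)
```

A multi-intent utterance like the one above receives non-trivial membership in both classes, rather than a single binary label.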
|
We study neologism use in two samples of early English correspondence, from
1640--1660 and 1760--1780. Of especial interest are the early adopters of new
vocabulary, the social groups they represent, and the types and functions of
their neologisms. We describe our computer-assisted approach and note the
difficulties associated with massive variation in the corpus. Our findings
include that while male letter-writers tend to use neologisms more frequently
than women, the eighteenth century seems to have provided more opportunities
for women and the lower ranks to participate in neologism use as well. In both
samples, neologisms most frequently occur in letters written between close
friends, which could be due to this less stable relationship triggering more
creative language use. In the seventeenth-century sample, we observe the
influence of the English Civil War, while the eighteenth-century sample appears
to reflect the changing functions of letter-writing, as correspondence is
increasingly being used as a tool for building and maintaining social
relationships in addition to exchanging information.
|
With the aim of matching a pair of instances from two different modalities,
cross modality mapping has attracted growing attention in the computer vision
community. Existing methods usually formulate the mapping function as the
similarity measure between the pair of instance features, which are embedded to
a common space. However, we observe that the relationships among the instances
within a single modality (intra relations) and those between the pair of
heterogeneous instances (inter relations) are insufficiently explored in
previous approaches. Motivated by this, we redefine the mapping function with
relational reasoning via graph modeling, and further propose a GCN-based
Relational Reasoning Network (RR-Net) in which inter and intra relations are
efficiently computed to universally resolve the cross modality mapping problem.
Concretely, we first construct two kinds of graph, i.e., Intra Graph and Inter
Graph, to respectively model intra relations and inter relations. Then RR-Net
updates all the node features and edge features in an iterative manner for
learning intra and inter relations simultaneously. Last, RR-Net outputs the
probabilities over the edges which link a pair of heterogeneous instances to
estimate the mapping results. Extensive experiments on three example tasks,
i.e., image classification, social recommendation and sound recognition,
clearly demonstrate the superiority and universality of our proposed model.
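A minimal, scalar-feature sketch of the alternating node/edge update idea (the actual RR-Net uses learned GCN layers over feature vectors; the toy update rules and graph below are illustrative assumptions):

```python
def update_graph(nodes, edges, steps=2, alpha=0.5):
    """Alternately refresh edge features from their endpoint nodes and
    node features from their incident edges -- a minimal message-passing
    loop in the spirit of joint intra/inter relation learning."""
    edge_feat = {e: 0.0 for e in edges}
    for _ in range(steps):
        # edge update: mean of the two endpoint node features
        for (u, v) in edges:
            edge_feat[(u, v)] = (nodes[u] + nodes[v]) / 2
        # node update: blend each node with the mean of its incident edges
        for n in nodes:
            incident = [f for (u, v), f in edge_feat.items() if n in (u, v)]
            if incident:
                nodes[n] = (1 - alpha) * nodes[n] + alpha * sum(incident) / len(incident)
    return nodes, edge_feat

# toy bipartite graph linking one "image" node to two "label" nodes
nodes = {"img": 1.0, "label_a": 0.0, "label_b": 4.0}
edges = [("img", "label_a"), ("img", "label_b")]
nodes, edge_feat = update_graph(nodes, edges, steps=1)
print(nodes, edge_feat)
```

In the full model, the final edge features would be scored to produce the matching probabilities between heterogeneous instance pairs.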
|
High-altitude balloon experiments are becoming very popular among
universities and research institutes as they can be used for testing
instruments eventually intended for space, and for simple astronomical
observations of Solar System objects like the Moon, comets, and asteroids,
difficult to observe from the ground due to the atmosphere. Further, they are
one of the best platforms for atmospheric studies. In this experiment, we
build a simple 1U CubeSat with a total payload weight of 4.9 kg and, by
flying it on a high-altitude balloon to an altitude of about 30 km, examine
how some parameters, such as magnetic field, humidity, temperature, and
pressure, vary as a function of altitude. We also calibrate the magnetometer
to remove the hard-iron and soft-iron errors. Such experiments and studies
through stratospheric balloon flights can also be used to study the
performance of easily available commercial sensors in extreme conditions.
first flight, which helped us study the functionality of the various sensors
and electronics at low temperatures reaching about -40 degrees Celsius.
Further, the motion of the payload was tracked throughout the flight. This
experiment took place on 8 March 2020 from the CREST campus of the Indian
Institute of Astrophysics, Bangalore. Using the results from this flight, we
identify and rectify the errors to obtain better results from the subsequent
flights.
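A common hard-iron/soft-iron correction (not necessarily the exact calibration procedure used here) estimates per-axis offsets from the midpoints of the min/max readings and simple scale factors from the per-axis half-ranges; a minimal sketch with toy readings:

```python
def calibrate_magnetometer(samples):
    """Estimate hard-iron offsets (per-axis midpoints) and simple
    soft-iron scale factors (per-axis radius normalization) from raw
    magnetometer samples collected while rotating the sensor."""
    axes = list(zip(*samples))                       # x, y, z columns
    offsets = [(max(a) + min(a)) / 2 for a in axes]  # hard-iron bias
    radii = [(max(a) - min(a)) / 2 for a in axes]    # per-axis half-range
    avg_radius = sum(radii) / len(radii)
    scales = [avg_radius / r for r in radii]         # soft-iron correction
    corrected = [
        tuple((v - o) * s for v, o, s in zip(sample, offsets, scales))
        for sample in samples
    ]
    return offsets, scales, corrected

# toy readings: a unit sphere shifted by (10, -5, 2) and stretched by 2
# along x, mimicking hard- and soft-iron distortion
raw = [(12, -5, 2), (8, -5, 2), (10, -4, 2), (10, -6, 2), (10, -5, 3), (10, -5, 1)]
offsets, scales, corrected = calibrate_magnetometer(raw)
print(offsets)   # [10.0, -5.0, 2.0]
```

A full soft-iron correction would fit an ellipsoid to the samples; the per-axis scaling above is the simplest usable approximation.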
|
Matrix elements between nonorthogonal Slater determinants represent an
essential component of many emerging electronic structure methods. However,
evaluating nonorthogonal matrix elements is conceptually and computationally
harder than their orthogonal counterparts. While several different approaches
have been developed, these are predominantly derived from the first-quantised
generalised Slater-Condon rules and usually require biorthogonal occupied
orbitals to be computed for each matrix element. For coupling terms between
nonorthogonal excited configurations, a second-quantised approach such as the
nonorthogonal Wick's theorem is more desirable, but this fails when the two
reference determinants have a zero many-body overlap. In this contribution, we
derive an entirely generalised extension to the nonorthogonal Wick's theorem
that is applicable to all pairs of determinants with nonorthogonal orbitals.
Our approach creates a universal methodology for evaluating any nonorthogonal
matrix element and allows Wick's theorem and the generalised Slater-Condon
rules to be unified for the first time. Furthermore, we present a simple
well-defined protocol for deriving arbitrary coupling terms between
nonorthogonal excited configurations. In the case of overlap and one-body
operators, this protocol recovers efficient formulae with reduced scaling,
promising significant computational acceleration for methods that rely on such
terms.
|
We present a new class of Langevin based algorithms, which overcomes many of
the known shortcomings of popular adaptive optimizers that are currently used
for the fine-tuning of deep learning models. Its underpinning theory relies on
recent advances in Euler's polygonal approximations for stochastic differential
equations (SDEs) with monotone coefficients. As a result, it inherits the
stability properties of tamed algorithms, while it addresses other known
issues, e.g. vanishing gradients in neural networks. In particular, we provide
a nonasymptotic analysis and full theoretical guarantees for the convergence
properties of an algorithm of this novel class, which we named TH$\varepsilon$O
POULA (or, simply, TheoPouLa). Finally, several experiments are presented with
different types of deep learning models, which show the superior performance of
TheoPouLa over many popular adaptive optimization algorithms.
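The exact TheoPouLa update rule is not given in the abstract; the following is a generic tamed Euler scheme for Langevin dynamics that illustrates the taming idea (the taming form, constants, and toy objective are all assumptions):

```python
import math
import random

def tamed_langevin_step(theta, grad, step, beta=1e8, rng=random):
    """One tamed Euler step for Langevin dynamics: the gradient is
    divided by (1 + step * |grad|), so the update stays bounded even
    when the gradient is enormous -- the stabilization that plain
    Euler schemes for monotone-coefficient SDEs lack."""
    norm = math.sqrt(sum(g * g for g in grad))
    tamed = [g / (1.0 + step * norm) for g in grad]
    noise = math.sqrt(2.0 * step / beta)  # Langevin noise scale
    return [t - step * g + noise * rng.gauss(0.0, 1.0)
            for t, g in zip(theta, tamed)]

# toy run: minimize f(theta) = 0.5 * |theta|^2, whose gradient is theta
rng = random.Random(0)
theta = [10.0, -10.0]
for _ in range(500):
    theta = tamed_langevin_step(theta, grad=list(theta), step=0.1, rng=rng)
print(theta)
```

Because the effective step size shrinks automatically where the gradient is large, the iterates cannot blow up, which is the stability property inherited from tamed algorithms.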
|
Distributing points on a (possibly high-dimensional) sphere with minimal
energy is a long-standing problem in and outside the field of mathematics. This
paper considers a novel energy function that arises naturally from statistics
and combinatorial optimization, and studies its theoretical properties. Our
result solves both the exact optimal spherical point configurations in certain
cases and the minimal energy asymptotics under general assumptions. Connections
between our results and the L1-Principal Component Analysis and Quasi-Monte
Carlo methods are also discussed.
|
Detecting cancers at early stages can dramatically reduce mortality rates.
Therefore, practical cancer screening at the population level is needed. Here,
we develop a comprehensive detection system to classify all common cancer
types. By integrating artificial intelligence deep learning neural network and
noncoding RNA biomarkers selected from massive data, our system can accurately
distinguish cancer from healthy subjects with an ROC AUC of 96.3% (Area Under
the Curve of a Receiver Operating Characteristic curve). Intriguingly, with no
more than 6 biomarkers, our approach can easily discriminate any individual
cancer type vs normal with 99% to 100% AUC. Furthermore, a comprehensive
marker panel can simultaneously multi-classify all common cancers with a
stable 78% accuracy across heterologous cancerous tissues and conditions. This
provides a valuable framework for large-scale cancer screening. The AI models
and plots of results are available at https://combai.org/ai/cancerdetection/
|
We present a multi-instrument spectroscopic analysis of the unique Li/Na-rich
giant star 25664 in Omega Centauri using spectra acquired with FLAMES-GIRAFFE,
X-SHOOTER, UVES and HARPS. Li and Na abundances have been derived from the UVES
spectrum using transitions weakly sensitive to non-local thermodynamic
equilibrium and assumed isotopic ratio. This new analysis confirms the
surprising Li and Na abundances of this star (A(Li) =+2.71+-0.07 dex,
[Na/Fe]=+1.00+-0.05 dex). Additionally, we provide new pieces of evidence for
its chemical characterisation. The 12C/13C isotopic ratio (15+-2) shows that
this star has not yet undergone the extra-mixing episode usually associated
with the red giant branch bump. Therefore, we can rule out the scenario of
efficient deep extra-mixing during the red giant branch phase envisaged to
explain the high Li and Na abundances. Also, the star exhibits high abundances
of both C and N ([C/Fe]=+0.45+-0.16 dex and [N/Fe]=+0.99+-0.20 dex), not
compatible with the typical C-N anticorrelation observed in globular cluster
stars. We found evidence of a radial velocity variability in 25664, suggesting
that the star could be part of a binary system, likely having accreted material
from a more massive companion when the latter was evolving in the AGB phase.
Viable candidates for the donor star are AGB stars with 3-4 Msun and super-AGB
stars (~7-8 Msun), both able to produce Li- and Na-rich material.
Alternatively, the star could have formed from the pure ejecta of a super-AGB
star, before dilution with primordial gas occurred.
|
A sample of 46 stars, host of exoplanets, is used to search for a connection
between their formation process and the formation of the planets rotating
around them. Separating our sample in two, stars hosting high-mass exoplanets
(HMEs) and low-mass exoplanets (LMEs), we found the former to be more massive
and to rotate faster than the latter. We also found the HMEs to have higher
orbital angular momentum than the LMEs and to have lost more angular momentum
through migration. These results are consistent with the view that the more
massive and faster-rotating the star, the more massive and faster-rotating was
its protoplanetary disk, and the more efficient the extraction of angular
momentum from the planets.
|
We present 63 new multi-site radial velocity measurements of the K1III giant
HD 76920, which was recently reported to host the most eccentric planet known
to orbit an evolved star. We focussed our observational efforts on the time
around the predicted periastron passage and achieved near-continuous phase
coverage of the corresponding radial velocity peak. By combining our radial
velocity measurements from four different instruments with previously published
ones, we confirm the highly eccentric nature of the system, and find an even
higher eccentricity of $e=0.8782 \pm 0.0025$, an orbital period of
$415.891^{+0.043}_{-0.039}\,\mathrm{d}$, and a minimum mass of
$3.13^{+0.41}_{-0.43}\,\mathrm{M_J}$ for the planet. The uncertainties in the
orbital elements are greatly reduced, especially for the period and
eccentricity. We also performed a detailed spectroscopic analysis to derive
atmospheric stellar parameters, and thus the fundamental stellar parameters
($M_*, R_*, L_*$), taking into account the parallax from Gaia DR2, and
independently determined the stellar mass and radius using asteroseismology.
Intriguingly, at periastron the planet comes to within 2.4 stellar radii of its
host star's surface. However, we find that the planet is not currently
experiencing any significant orbital decay and will not be engulfed by the
stellar envelope for at least another $50-80$ Myr. Finally, while we calculate
a relatively high transit probability of $16\%$, we did not detect a transit in
the TESS photometry.
|
In this paper, we first address the general fractional integrals and
derivatives with the Sonine kernels that possess the integrable singularities
of power function type at the point zero. Both particular cases and
compositions of these operators are discussed. Then we proceed with a
construction of an operational calculus of the Mikusi\'nski type for the
general fractional derivatives with the Sonine kernels. This operational
calculus is applied for analytical treatment of some initial value problems for
the fractional differential equations with the general fractional derivatives.
The solutions are expressed in the form of convolution series that generalize
the power series for the exponential and the Mittag-Leffler functions.
|
Beam matching is a common technique that is routinely employed in accelerator
design with the aim of minimizing beam losses and preserving beam
brightness. Despite being widely used, a full theoretical understanding of beam
matching in 6D remains elusive. Here, we present an analytical treatment of 6D
beam matching of a high-intensity beam onto an RF structure. We begin our
analysis within the framework of a linear model, and apply the averaging method
to a set of 3D beam envelope equations. Accordingly, we obtain a matched
solution that is comprised of smoothed envelopes and periodic terms, describing
envelope oscillations with the period of the focusing structure. We then
consider the nonlinear regime, where the beam size is comparable with the
separatrix size. Starting with a Hamiltonian analysis in 6D phase space, we
attain a self-consistent beam profile and show that it is significantly
different from the commonly used ellipsoidal shape. Subsequently, we analyze
the special case of an equilibrium with equal space charge depression between
all degrees of freedom. A comparison of beam dynamics for equipartitioned,
equal-space-charge-depression, and equal-emittance beams is given. Finally, we
present experimental results on beam matching in the LANSCE linac.
|
Due to the global pandemic, in March 2020 we in academia and industry were
abruptly forced into working from home. Yet teaching never stopped, and neither
did developing software, fixing software, and expanding into new markets.
Demands for flexible ways of working, responding to new requirements, have
never been so high. How did we manage to continue working, when we had to
suddenly switch all communication to online and virtual forms of contact? In
this short paper we describe how Ocuco Ltd., a medium-sized organization
headquartered in Ireland, managed our software development teams--distributed
throughout Ireland, Europe, Asia and America during the COVID-19 pandemic. We
describe how we expanded, kept our customers happy, and our teams motivated. We
made changes, some large, such as providing emergency financial support; others
small, like implementing regular online social pizza evenings. Technology and
process changes were minor, an advantage of working in globally distributed
teams since 2016, when development activities were coordinated according to the
Scaled Agile Framework (SAFe). The results of implementing the changes were
satisfying; productivity went up, we gained new customers, and preliminary
results from our wellness survey indicate that everyone feels extremely
well-supported by management to achieve their goals. However, the anonymised
survey responses did show some developers' anxiety levels were slightly raised,
and many are working longer hours. Administering this survey proved very
beneficial: now that we know about these issues, we can act on them.
|
We show that the cross section for diffractive dissociation of a small onium
off a large nucleus at total rapidity $Y$ and requiring a minimum rapidity gap
$Y_{\text{gap}}$ can be identified, in a well-defined parametric limit, with a
simple classical observable on the stochastic process representing the
evolution of the state of the onium, as its rapidity increases, in the form of
color dipole branchings: It formally coincides with twice the probability that
an even number of these dipoles effectively participate in the scattering, when
viewed in a frame in which the onium is evolved to the rapidity
$Y-Y_{\text{gap}}$. Consequently, finding asymptotic solutions to the
Kovchegov-Levin equation, which rules the $Y$-dependence of the diffractive
cross section, boils down to solving a probabilistic problem. Such a
formulation authorizes the derivation of a parameter-free analytical expression
for the gap distribution. Interestingly enough, events in which many dipoles
interact simultaneously play an important role, since the distribution of the
number $k$ of dipoles participating in the interaction turns out to be
proportional to $1/[k(k-1)]$.
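As a side check, the stated $1/[k(k-1)]$ law over $k \ge 2$ is already a normalized probability distribution, since the sum telescopes; a minimal illustration (not from the paper):

```python
from fractions import Fraction

def partial_sum(K):
    """Partial sum of p(k) = 1/(k(k-1)) over k = 2..K.
    Since 1/(k(k-1)) = 1/(k-1) - 1/k, the sum telescopes to 1 - 1/K,
    so the distribution over k >= 2 is normalized without any prefactor."""
    return sum(Fraction(1, k * (k - 1)) for k in range(2, K + 1))
```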
|
In this paper, the effect on the ultrasonic attenuation of the grain size
heterogeneity in polycrystals is analyzed. First, new analytical developments
allowing the extension of the unified theory of Stanke and Kino to general
grain size distributions are presented. It is then shown that one can
additively decompose the attenuation coefficient provided that groups of grains
are defined. Second, the study is specialized to a bimodal distribution of the
grain size for which microstructures are numerically modeled by means of the
software Neper. The additive partition of the attenuation coefficient into
contributions coming from large and small grains motivates the derivation of an
optimization procedure for characterizing the grain size distribution. The
aforementioned approach, which is based on a least squares minimization, is at
last presented and illustrated on both analytical and numerical attenuation
data. It is thus shown that the method provides satisfactory approximations of
volume fractions of large grains and modal equivalent diameters from the
frequency-dependent attenuation coefficient.
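The inversion described above can be sketched as a one-parameter least-squares fit; the additive two-population model and all names here are an illustrative simplification, not the authors' implementation:

```python
import numpy as np

def fit_volume_fraction(alpha_meas, alpha_large, alpha_small):
    """Recover the volume fraction phi of large grains from the
    frequency-dependent attenuation, assuming the additive partition
    alpha(f) = phi * alpha_L(f) + (1 - phi) * alpha_S(f).
    Closed-form least-squares solution in the single unknown phi."""
    d = alpha_large - alpha_small
    phi = d @ (alpha_meas - alpha_small) / (d @ d)
    return float(np.clip(phi, 0.0, 1.0))
```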
|
This paper proposes an enhanced coarray transformation model (EDCTM) and a
mixed greedy maximum likelihood algorithm called List-Based Maximum Likelihood
Orthogonal Matching Pursuit (LBML-OMP) for direction-of-arrival estimation with
non-uniform linear arrays (NLAs). The proposed EDCTM approach obtains improved
estimates when Khatri-Rao product-based models are used to generate difference
coarrays under the assumption of uncorrelated sources. In the proposed LBML-OMP
technique, for each iteration a set of candidates is generated based on the
correlation-maximization between the dictionary and the residue vector.
LBML-OMP then chooses the best candidate based on a reduced-complexity
asymptotic maximum likelihood decision rule. Simulations show the improved
results of EDCTM over existing approaches and that LBML-OMP outperforms
existing sparse recovery algorithms as well as Spatial Smoothing Multiple
Signal Classification with NLAs.
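A list-based greedy selection of this kind can be sketched as follows; here a residual-norm rule stands in for the paper's reduced-complexity asymptotic ML decision rule, and all names are illustrative:

```python
import numpy as np

def lb_omp(A, y, k, n_cand=3):
    """Sketch of list-based OMP: at each iteration the n_cand atoms with
    the highest correlation to the residual form a candidate list, and the
    candidate giving the smallest least-squares residual is kept (a simple
    stand-in for an asymptotic ML decision rule)."""
    support = []
    r = y.copy()
    for _ in range(k):
        corr = np.abs(A.conj().T @ r)
        if support:
            corr[support] = -np.inf       # do not reselect chosen atoms
        cands = np.argsort(corr)[-n_cand:]
        best, best_res = None, np.inf
        for j in cands:
            s = support + [int(j)]
            x, *_ = np.linalg.lstsq(A[:, s], y, rcond=None)
            res = np.linalg.norm(y - A[:, s] @ x)
            if res < best_res:
                best, best_res = int(j), res
        support.append(best)
        x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x
    return sorted(support)
```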
|
Let $0 \in \Gamma$ and $\Gamma \setminus \{0\}$ be an abelian group under
multiplication, where $\Gamma \setminus \{0\} \subseteq \{ z\in \mathbb{C}:
|z|=1 \}$. Define $\mathcal{H}_{n}(\Gamma)$ to be the set of all $n\times n$
Hermitian matrices with entries in $\Gamma$, whose diagonal entries are zero.
We introduce the notion of switching equivalence on $\mathcal{H}_{n}(\Gamma)$.
We find a characterization, in terms of fundamental cycles of graphs, of
switching equivalence of matrices in $\mathcal{H}_{n}(\Gamma)$. We give
sufficient conditions to characterize the cospectral matrices in
$\mathcal{H}_{n}(\Gamma)$. We find bounds on the number of switching
equivalence classes of all mixed graphs with the same underlying graph. We also
provide the size of all switching equivalence classes of mixed cycles, and give
a formula that calculates the size of a switching equivalence class of a mixed
planar graph. We also discuss the action of the automorphism group of a graph on
switching equivalence classes.
|
In photosynthetic organisms, light energy is absorbed by the antenna complexes
and transmitted by excitons; it is either absorbed by the reaction centers
(RCs), which are thereby closed, or emitted as fluorescence. The basic
components of the dynamics of light absorption have been integrated into a
simple model of exciton migration, which contains two parameters: the exciton
hopping probability and the exciton lifetime. Under continuous illumination
the fraction of closed RCs,
$x$, continuously increases and at a critical threshold, $x_c$, a percolation
transition takes place. Performing extensive Monte Carlo simulations we study
the properties of the transition in this correlated percolation model. We
measure the spanning probability in the vicinity of $x_c$, as well as the
fractal properties of the critical percolating cluster, both in the bulk and at
the surface.
|
Consider a truck filled with boxes of varying size and unknown mass and an
industrial robot with end-effectors that can unload multiple boxes from any
reachable location. In this work, we investigate how the robot, with the help
of a simulator, would learn to maximize the number of boxes unloaded by each
action. Most high-fidelity robotic simulators like ours are time-consuming.
Therefore, we investigate the above learning problem with a focus on minimizing
the number of simulation runs required. The optimal decision-making problem
under this setting can be formulated as a multi-class classification problem.
However, obtaining the outcome of any action requires us to run the
time-consuming simulator, thereby restricting the amount of training data that
can be collected. Thus, we need a data-efficient approach to learn the
classifier and generalize it with a minimal amount of data. A high-fidelity
physics-based simulator is typical for complex manipulation tasks
involving multi-body interactions. To this end, we train an optimal decision
tree as the classifier, and for each branch of the decision tree, we reason
about the confidence in the decision using a Probably Approximately Correct
(PAC) framework to determine whether more simulator data will help reach a
certain confidence level. This provides us with a mechanism to evaluate when
simulation can be avoided for certain decisions, and when simulation will
improve the decision making. For the truck unloading problem, our experiments
show that a significant reduction in simulator runs can be achieved using the
proposed method as compared to naively running the simulator to collect data to
train equally performing decision trees.
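The PAC-style stopping criterion can be illustrated with a generic Hoeffding bound (a sketch of the idea, not the paper's exact rule):

```python
import math

def needs_more_runs(n, eps=0.05, delta=0.05):
    """Hoeffding-style PAC check for one decision-tree branch: with n
    i.i.d. simulator runs, the empirical accuracy of the branch's decision
    is within +/- eps of the true accuracy with probability at least
    1 - 2*exp(-2*n*eps**2).  Return True while that guarantee is weaker
    than the requested confidence 1 - delta, i.e. while more simulator
    runs would still help."""
    if n == 0:
        return True
    confidence = 1.0 - 2.0 * math.exp(-2.0 * n * eps ** 2)
    return confidence < 1.0 - delta
```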
|
We report on a Rosenbluth separation using previously published data by the
CLAS collaboration in Hall B, Jefferson Lab for exclusive $\pi^{0}$ deeply
virtual electroproduction (DVEP) from the proton at a mean $Q^{2}$ of $\approx$
2 (GeV/c)$^{2}$. The central question we address is the applicability of
factorization in $\pi^0$ DVEP at these kinematics. The results of our
Rosenbluth separation clearly demonstrate the dominance of the longitudinal
contribution to the cross section. The extracted longitudinal and transverse
contributions are in agreement with previous data from Hall A at Jefferson Lab,
but over a much wider $-t$ range (0.12 - 1.8 (GeV/c)$^{2}$). The measured
dominance of the longitudinal contribution at $Q^{2} \approx$ 2 (GeV/c)$^{2}$
is consistent with the expectation of the handbag factorization theorem. We
find that $\sigma_L(t) \sim 1/(-t)$ for $-t >$ 0.5 (GeV/c)$^2$. Determination
of both longitudinal and transverse contributions to the deeply virtual
$\pi^{0}$ electroproduction cross section allows extraction of additional GPDs.
|
Accurate segmentation for medical images is important for clinical diagnosis.
Existing automatic segmentation methods are mainly based on fully supervised
learning and have an extremely high demand for precise annotations, which are
very costly and time-consuming to obtain. To address this problem, we propose
an automatic CT segmentation method based on weakly supervised learning, by
which one could train an accurate segmentation model only with weak annotations
in the form of bounding boxes. The proposed method is composed of two steps: 1)
generating pseudo masks with bounding box annotations by k-means clustering,
and 2) iteratively training a 3D U-Net convolutional neural network as a
segmentation model. Some data pre-processing methods are used to improve
performance. The method was validated on four datasets containing three types
of organs with a total of 627 CT volumes. For liver, spleen and kidney
segmentation, it achieved an accuracy of 95.19%, 92.11%, and 91.45%,
respectively. Experimental results demonstrate that our method is accurate,
efficient, and suitable for clinical use.
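Step 1 of the pipeline can be sketched as a per-box intensity clustering; this minimal 2-means on a single bounding box is illustrative only, with hypothetical names, not the authors' code:

```python
import numpy as np

def pseudo_mask_from_box(volume, box, n_iter=10):
    """Generate a pseudo mask for one organ: cluster voxel intensities
    inside the bounding box into two groups with 2-means and keep the
    brighter cluster as foreground (assumes the organ is brighter than
    its surroundings; real CT pipelines add pre-processing)."""
    z0, z1, y0, y1, x0, x1 = box
    roi = volume[z0:z1, y0:y1, x0:x1].astype(float)
    vals = roi.ravel()
    c = np.array([vals.min(), vals.max()])           # initial centroids
    for _ in range(n_iter):
        labels = np.abs(vals[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                c[k] = vals[labels == k].mean()
    fg = int(np.argmax(c))                           # brighter cluster
    mask = np.zeros(volume.shape, dtype=np.uint8)
    mask[z0:z1, y0:y1, x0:x1] = (labels == fg).reshape(roi.shape)
    return mask
```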
|
A key open issue in condensed matter physics is how quantum and classical
correlations emerge in an unconventional superconductor from the underlying
normal state. We study this problem in a doped Mott insulator with information
theory tools on the two-dimensional Hubbard model at finite temperature with
cluster dynamical mean-field theory. We find that the local entropy detects the
superconducting state and that the difference in the local entropy between the
superconducting and normal states follows the same difference in the potential
energy. We find that the thermodynamic entropy is suppressed in the
superconducting state and monotonically decreases with decreasing doping. The
maximum in entropy found in the normal state above the overdoped region of the
superconducting dome is obliterated by superconductivity. The total mutual
information, which quantifies quantum and classical correlations, is amplified
in the superconducting state of the doped Mott insulator for all doping levels,
and shows a broad peak versus doping, as a result of competing quantum and
classical effects.
|
Knowledge Graph (KG) alignment aims at finding equivalent entities and
relations (i.e., mappings) between two KGs. The existing approaches utilize
either reasoning-based or semantic embedding-based techniques, but few studies
explore their combination. In this demonstration, we present PRASEMap, an
unsupervised KG alignment system that iteratively computes the Mappings with
both Probabilistic Reasoning (PR) And Semantic Embedding (SE) techniques.
PRASEMap can support various embedding-based KG alignment approaches as the SE
module, and enables easy human-computer interaction that additionally provides
an option for users to feed the mapping annotations back to the system for
better results. The demonstration showcases these features via a stand-alone
Web application with user-friendly interfaces. The demo is available at
https://prasemap.qizhy.com.
|
Face forgery detection is raising ever-increasing interest in computer vision
since facial manipulation technologies cause serious worries. Though recent
works have reached sound achievements, there are still unignorable problems: a)
learned features supervised by softmax loss are separable but not
discriminative enough, since softmax loss does not explicitly encourage
intra-class compactness and inter-class separability; and b) fixed filter banks
and hand-crafted features are insufficient to capture forgery patterns of
frequency from diverse inputs. To compensate for such limitations, a novel
frequency-aware discriminative feature learning framework is proposed in this
paper. Specifically, we design a novel single-center loss (SCL) that only
compresses intra-class variations of natural faces while boosting inter-class
differences in the embedding space. In such a case, the network can learn more
discriminative features with less optimization difficulty. Besides, an adaptive
frequency feature generation module is developed to mine frequency clues in a
completely data-driven fashion. With the above two modules, the whole framework
can learn more discriminative features in an end-to-end manner. Extensive
experiments demonstrate the effectiveness and superiority of our framework on
three versions of the FF++ dataset.
|
When collaborating with an AI system, we need to assess when to trust its
recommendations. If we mistakenly trust it in regions where it is likely to
err, catastrophic failures may occur, hence the need for Bayesian approaches
for probabilistic reasoning in order to determine the confidence (or epistemic
uncertainty) in the probabilities in light of the training data. We propose an
approach to overcome the independence assumption behind most of the approaches
dealing with a large class of probabilistic reasoning that includes Bayesian
networks as well as several instances of probabilistic logic. We provide an
algorithm for Bayesian learning from sparse, albeit complete, observations, and
for deriving inferences and their confidences keeping track of the dependencies
between variables when they are manipulated within the unifying computational
formalism provided by probabilistic circuits. Each leaf of such circuits is
labelled with a beta-distributed random variable that provides us with an
elegant framework for representing uncertain probabilities. We achieve better
estimation of epistemic uncertainty than state-of-the-art approaches, including
highly engineered ones, while being able to handle general circuits and with
just a modest increase in the computational effort compared to using point
probabilities.
|
We propose and demonstrate a single-photon sensitive technique for optical
vibrometry. It uses high speed photon counting to sample the modulated
backscattering from a vibrating target. Designed for remote vibration sensing
with ultralow photon flux, we show that this technique can detect small
displacements down to 110 nm and resolve vibration frequencies from DC up to
several kilohertz, with less than 0.01 detected photons per pulse. This
single-photon sensitive optical vibrometry may find important applications in
acousto-optic sensing and imaging, especially in photon-starved environments.
|
We present a novel algorithm to solve a non-linear system of equations, whose
solution can be interpreted as a tight lower bound on the vector of expected
hitting times of a Markov chain whose transition probabilities are only
partially specified. We also briefly sketch how this method can be modified to
solve a conjugate system of equations that gives rise to the corresponding
upper bound. We prove the correctness of our method, and show that it converges
to the correct solution in a finite number of steps under mild conditions on
the system. We compare the runtime complexity of our method to a previously
published method from the literature, and identify conditions under which our
novel method is more efficient.
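The lower-bound system can be solved by a plain fixed-point iteration, which illustrates the object being computed (the paper's method converges in finitely many steps; this generic sketch does not). Transition probabilities are given as elementwise row bounds `lo` and `hi`:

```python
import numpy as np

def min_expectation(lo, hi, h):
    """Choose a distribution p with lo <= p <= hi and sum(p) = 1 that
    minimizes p . h, by greedily allocating the remaining mass to the
    smallest entries of h."""
    p = lo.copy()
    rem = 1.0 - p.sum()
    for j in np.argsort(h):
        add = min(hi[j] - lo[j], rem)
        p[j] += add
        rem -= add
    return p @ h

def lower_hitting_times(lo, hi, target, n_iter=1000, tol=1e-10):
    """Value-iteration sketch of the non-linear system for lower bounds
    on expected hitting times: h <- 1 + min_p p . h on non-target states,
    with h(target) = 0."""
    n = lo.shape[0]
    h = np.zeros(n)
    for _ in range(n_iter):
        new = np.zeros(n)
        for i in range(n):
            if i == target:
                continue
            new[i] = 1.0 + min_expectation(lo[i], hi[i], h)
        done = np.max(np.abs(new - h)) < tol
        h = new
        if done:
            break
    return h
```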
|
In its most traditional setting, the main concern of optimization theory is
the search for optimal solutions for instances of a given computational
problem. A recent trend of research in artificial intelligence, called solution
diversity, has focused on the development of notions of optimality that may be
more appropriate in settings where subjectivity is essential. The idea is that
instead of aiming at the development of algorithms that output a single optimal
solution, the goal is to investigate algorithms that output a small set of
sufficiently good solutions that are sufficiently diverse from one another. In
this way, the user has the opportunity to choose the solution that is most
appropriate to the context at hand. It also displays the richness of the
solution space.
When combined with techniques from parameterized complexity theory, the
paradigm of diversity of solutions offers a powerful algorithmic framework to
address problems of practical relevance. In this work, we investigate the
impact of this combination in the field of Kemeny Rank Aggregation, a
well-studied class of problems lying in the intersection of order theory and
social choice theory and also in the field of order theory itself. In
particular, we show that the Kemeny Rank Aggregation problem is fixed-parameter
tractable with respect to natural parameters that provide natural
formalizations of the notions of diversity and of a sufficiently good solution.
Our main results work both when considering the traditional setting of
aggregation over linearly ordered votes, and in the more general setting where
votes are partially ordered.
|
In this paper we present a novel approach to emulating a universal quantum
computer with a classical system, one that uses a signal of bounded duration
and amplitude to represent an arbitrary quantum state. The signal may be of any
modality (e.g., acoustic, electromagnetic, etc), but we focus our discussion
here on electronic signals. Unitary gate operations are performed using analog
electronic circuit devices, such as four-quadrant multipliers, operational
amplifiers, and analog filters, although non-unitary operations may be
performed as well. In this manner, the Hilbert space structure of the quantum
state, as well as a universal set of gate operations, may be fully emulated
classically. The required bandwidth scales exponentially with the number of
qubits, however, thereby limiting the scalability of the approach, but the
intrinsic parallelism, ease of construction, and classical robustness to
decoherence may nevertheless lead to capabilities and efficiencies rivaling
that of current high performance computers.
|
We present a scalable technique for upper bounding the Lipschitz constant of
generative models. We relate this quantity to the maximal norm over the set of
attainable vector-Jacobian products of a given generative model. We approximate
this set by layerwise convex approximations using zonotopes. Our approach
generalizes and improves upon prior work using zonotope transformers, and we
extend it to Lipschitz estimation of neural networks with large output
dimension.
This provides efficient and tight bounds on small networks and can scale to
generative models on VAE and DCGAN architectures.
|
Despite quantum networking concepts, designs, and hardware becoming
increasingly mature, there is no consensus on the optimal wavelength for
free-space systems. We present an in-depth analysis of a daytime free-space
quantum channel as a function of wavelength and atmospheric spatial coherence
(Fried coherence length). We choose decoy-state quantum key distribution bit
yield as a performance metric in order to reveal the ideal wavelength choice
for an actual qubit-based protocol under realistic atmospheric conditions. Our
analysis represents a rigorous framework to analyze requirements for spatial,
spectral, and temporal filtering. These results will help guide the development
of free-space quantum communication and networking systems. In particular, our
results suggest that shorter wavelengths in the optical band should be
considered for free-space quantum communication systems. Our results are also
interpreted in the context of atmospheric compensation by higher-order adaptive
optics.
|
Duality in the entanglement of identical particles manifests that
entanglement in only one variable can be revealed at a time. We demonstrate
this using polarization and orbital angular momentum (OAM) variables of
indistinguishable photons generated from parametric down conversion. We show
polarization entanglement by sorting photons in even and odd OAM basis, while
sorting them in two orthogonal polarization modes reveals the OAM entanglement.
The duality assisted observation of entanglement can be used as a verification
for the preservation of quantum indistinguishability over communication
channels. Indistinguishable photons entangled in complementary variables could
also evoke interest in distributed quantum sensing protocols and remote
entanglement generation.
|
Customer product reviews play a role in improving the quality of products and
services for business organizations or their brands. Complaining is an attitude
that expresses dissatisfaction with an event or a product not meeting customer
expectations. In this paper, we build an Open-domain Complaint Detection dataset
(UIT-ViOCD), including 5,485 human-annotated reviews on four categories about
product reviews on e-commerce sites. After the data collection phase, we
proceed to the annotation task and achieve an inter-annotator agreement (Am)
of 87%. Then, we present an extensive methodology for our research purposes and
achieve an F1-score of 92.16% for identifying complaints. With these results,
in the
future, we aim to build a system for open-domain complaint detection in
E-commerce websites.
|
We construct a new set of asymptotically flat, static vacuum solutions to the
Einstein equations in dimensions 4 and 5, which may be interpreted as a
superposition of positive and negative mass black holes. The resulting
spacetimes are axisymmetric in 4-dimensions and bi-axisymmetric in
5-dimensions, and are regular away from the negative mass singularities, for
instance conical singularities are absent along the axes. In 5-dimensions, the
topologies of signed mass black holes used in the construction may be either
spheres $S^3$ or rings $S^1 \times S^2$; in particular, the negative mass
static black ring solution is introduced. A primary observation that
facilitates the superposition is the fact that, in Weyl-Papapetrou coordinates,
negative mass singularities arise as overlapping singular support for a
particular type of Green's function. Furthermore, a careful analysis of conical
singularities along axes is performed, and formulas are obtained for their
propagation across horizons, negative mass singularities, and corners. The
methods are robust, and may be used to construct a multitude of further
examples. Lastly, we show that balancing does not occur between any two signed
mass black holes of the type studied here in 4 dimensions, while in 5
dimensions two-body balancing is possible.
|
We study static and spherically symmetric charged stars with a nontrivial
profile of the scalar field $\phi$ in Einstein-Maxwell-scalar theories. The
scalar field is coupled to a $U(1)$ gauge field $A_{\mu}$ with the form
$-\alpha(\phi)F_{\mu \nu}F^{\mu \nu}/4$, where $F_{\mu
\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu} A_{\mu}$ is the field strength
tensor. Analogous to the case of charged black holes, we show that this type of
interaction can induce spontaneous scalarization of charged stars under the
conditions $({\rm d}\alpha/{\rm d}\phi) (0)=0$ and $({\rm d}^2\alpha/{\rm
d}\phi^2) (0)>0$. For the coupling $\alpha (\phi)=\exp (-\beta \phi^2/M_{\rm
pl}^2)$, where $\beta~(<0)$ is a coupling constant and $M_{\rm pl}$ is a
reduced Planck mass, there is a branch of charged star solutions with a
nontrivial profile of $\phi$ approaching $0$ toward spatial infinity, besides a
branch of general relativistic solutions with a vanishing scalar field, i.e.,
solutions in the Einstein-Maxwell model. As the ratio $\rho_c/\rho_m$ between
charge density $\rho_c$ and matter density $\rho_m$ increases toward its
maximum value, the mass $M$ of charged stars in general relativity tends to be
enhanced due to the increase of repulsive Coulomb force against gravity. In
this regime, the appearance of nontrivial branches induced by negative $\beta$
of order $-1$ effectively reduces the Coulomb force for a wide range of central
matter densities, leading to charged stars with smaller masses and radii in
comparison to those in the general relativistic branch. Our analysis indicates
that spontaneous scalarization of stars can be induced not only by the coupling
to curvature invariants but also by the scalar-gauge coupling in Einstein
gravity.
|
The purpose of this study is to explore the relationship between the first
affiliation and the corresponding affiliation at different levels via
scientometric analysis. We select over 18 million papers in the core collection
database of Web of Science (WoS) published from 2000 to 2015, and measure the
percentage of match between the first and the corresponding affiliation at the
country and institution level. We find that a paper's first affiliation and
the corresponding affiliation are highly consistent at the country level, with
over 98% of the match on average. However, the match at the institution level
is much lower, which varies significantly with time and country. Hence, for
studies at the country level, using the first and corresponding affiliations
are almost the same. But we may need to take more cautions to select
affiliation when the institution is the focus of the investigation. In the
meanwhile, we find some evidence that the recorded corresponding information in
the WoS database has undergone some changes since 2013, which sheds light on
future studies on the comparison of different databases or the affiliation
accuracy of WoS. Our finding relies on the records of WoS, which may not be
entirely accurate. Given the scale of the analysis, our findings can serve as a
useful reference for further studies when country allocation or institute
allocation is needed. Existing studies on comparisons of straight counting
methods usually cover a limited number of papers, a particular research field
or a limited range of time. More importantly, the counts alone cannot
sufficiently tell whether the corresponding and first affiliations are
similar. This
paper uses a metric similar to Jaccard similarity to measure the percentage of
the match and performs a comprehensive analysis based on a large-scale
bibliometric database.
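The Jaccard-like match measure described above can be sketched as follows (illustrative names, e.g. country codes):

```python
def match_percentage(first, corresponding):
    """Jaccard-like overlap for one paper between the set of
    first-author affiliations and the set of corresponding-author
    affiliations (at country or institution level)."""
    a, b = set(first), set(corresponding)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def average_match(papers):
    """Mean match over a list of (first, corresponding) pairs."""
    return sum(match_percentage(f, c) for f, c in papers) / len(papers)
```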
|
The development of intelligent tutoring systems has greatly influenced the way
students learn and practice, increasing their learning efficiency. An
intelligent tutoring system must model learners' mastery of the knowledge
before providing feedback and advice to learners, so the class of algorithms
called "knowledge tracing" is particularly important. This paper proposes Deep
Self-Attentive Knowledge Tracing (DSAKT) based on the data of PTA, an online
assessment system used by students in many universities in China, to help these
students learn more efficiently. Experimentation on the data of PTA shows that
DSAKT outperforms other knowledge tracing models with an improvement in AUC of
2.1% on average, and the model also performs well on the ASSIST
dataset.
|
The Landau-Lifshitz equation governing magnetization dynamics is written in
terms of the amplitudes of normal modes associated with the micromagnetic
system's appropriate ground state. This results in a system of nonlinear
ordinary differential equations (ODEs), the right-hand side of which can be
expressed as the sum of a linear term and nonlinear terms with increasing order
of nonlinearity (quadratic, cubic, etc.). The application of the method to
nanostructured magnetic systems demonstrates that the accurate description of
magnetization dynamics requires a limited number of normal modes, which results
in a considerable improvement in computational speed. The proposed method can
be used to obtain a reduced-order dynamical description of magnetic
nanostructures, which allows one to adjust the accuracy between low-dimensional
models, such as macrospin, and micromagnetic models with full spatial
discretization. This new paradigm for micromagnetic simulations is tested for
three problems relevant to the areas of spintronics and magnonics: directional
spin-wave coupling in magnonic waveguides, high power ferromagnetic resonance
in a magnetic nanodot, and injection-locking in spin-torque nano-oscillators.
The case studies considered demonstrate the validity of the proposed approach
to systematically obtain an intermediate order dynamical model based on normal
modes for the analysis of magnetic nanosystems. The time-consuming calculation
of the normal modes has to be done only once for the system. These modes
can be used to optimize and predict the system response for all possible
time-varying external excitations (magnetic fields, spin currents). This is of
utmost importance for applications where fast and accurate system simulations
are required, such as in electronic circuits including magnetic devices.
|
We investigate the motivation and means through which individuals expand
their skill-set by analyzing a survey of applicants from the Facebook Jobs
product. Individuals who report being influenced by their networks or local
economy are over 29% more likely to have a postsecondary degree, but peer
effects still exist among those who do not acknowledge such influences. Users
with postsecondary degrees are more likely to upskill in general, by continuing
coursework or applying to higher-skill jobs, though the latter is more common
among users across all education backgrounds. These findings indicate that
policies aimed at connecting individuals with different educational backgrounds
can encourage upskilling. Policies that encourage users to enroll in coursework
may not be as effective among individuals with a high school degree or less.
Instead, connecting such individuals to opportunities that value skills
acquired outside of a formal education, and allow for on-the-job training, may
be more effective.
|
Single-photon emitters are essential for enabling several emerging
applications in quantum information technology, quantum sensing and quantum
communication. Scalable photonic platforms capable of hosting intrinsic or
directly embedded sources of single-photon emission are of particular interest
for the realization of integrated quantum photonic circuits. Here, we report on
the first-time observation of room-temperature single-photon emitters in
silicon nitride (SiN) films grown on silicon dioxide substrates. As SiN has
recently emerged as one of the most promising materials for integrated quantum
photonics, the proposed platform is suitable for scalable fabrication of
quantum on-chip devices. Photophysical analysis reveals bright (>$10^5$
counts/s), stable, linearly polarized, and pure quantum emitters in SiN films
with the value of the second-order autocorrelation function at zero time delay
$g^{(2)}(0)$ below 0.2 at room temperature. The emission is suggested to
originate from a specific defect center in silicon nitride due to the narrow
wavelength distribution of the observed luminescence peak. Single-photon
emitters in silicon nitride have the potential to enable direct, scalable and
low-loss integration of quantum light sources with the well-established
photonic on-chip platform.
|
Quantum techniques can be used to enhance the signal-to-noise ratio in
optical imaging. Leveraging the latest advances in single photon avalanche
diode array cameras and multi-photon detection techniques, here we introduce a
super-sensitive phase imager, which uses space-polarization hyper-entanglement
to operate over a large field-of-view without the need for scanning.
We show quantum-enhanced imaging of birefringent and non-birefringent phase
samples over large areas, with sensitivity improvements over equivalent
classical measurements carried out with equal number of photons. The practical
applicability is demonstrated by imaging a biomedical protein microarray
sample. Our quantum-enhanced phase imaging technology is inherently scalable to
high resolution images, and represents an essential step towards practical
quantum imaging.
|
Interactive Task Learning (ITL) is an emerging research agenda that studies
the design of complex intelligent robots that can acquire new knowledge through
natural human teacher-robot learner interactions. ITL methods are particularly
useful for designing intelligent robots whose behavior can be adapted by humans
collaborating with them. Various research communities are contributing methods
for ITL and a large subset of this research is \emph{robot-centered} with a
focus on developing algorithms that can learn online, quickly. This paper
studies the ITL problem from a \emph{human-centered} perspective to provide
guidance for robot design so that human teachers can naturally teach ITL
robots. In this paper, we present 1) a qualitative bidirectional analysis of an
interactive teaching study (N=10) through which we characterize various aspects
of actions intended and executed by human teachers when teaching a robot; 2) an
in-depth discussion of the teaching approach employed by two participants to
understand the need for personal adaptation to individual teaching styles; and
3) requirements for ITL robot design based on our analyses and informed by a
computational theory of collaborative interactions, SharedPlans.
|
The Debye sheath is shown to vanish completely in magnetised plasmas for a
sufficiently small electron gyroradius and small angle between the magnetic
field and the wall. This angle depends on the current onto the wall. When the
Debye sheath vanishes, there is still a potential drop between the wall and the
plasma across the magnetic presheath. The magnetic field angle corresponding to
sheath collapse is shown to be much smaller than previous estimates, scaling
with the electron-ion mass ratio and not with the square root of the mass
ratio. This is shown to be a consequence of the finite ion orbit width effects,
which are not captured by fluid models. The wall potential with respect to the
bulk plasma at which the Debye sheath vanishes is calculated. Above this wall
potential, it is possible that the Debye sheath will invert.
|
The celebrated Curry--Howard correspondence tells us that types are intuitionistic
propositions and that, in constructive mathematics, a proof of a proposition can be
seen as a kind of construction, or witness, conveying the information content of the
proposition. We demonstrate how useful this point of view is as a guiding
principle for developing dependently typed programs.
|
Manifold learning algorithms are valuable tools for the analysis of
high-dimensional data, many of which include a step where nearest neighbors of
all observations are found. This can present a computational bottleneck when
the number of observations is large or when the observations lie in more
general metric spaces, such as statistical manifolds, which require all
pairwise distances between observations to be computed. We resolve this problem
by using a broad range of approximate nearest neighbor algorithms within
manifold learning algorithms and evaluating their impact on embedding accuracy.
We use approximate nearest neighbors for statistical manifolds by exploiting
the connection between Hellinger/Total variation distance for discrete
distributions and the L2/L1 norm. Via a thorough empirical investigation based
on the benchmark MNIST dataset, it is shown that approximate nearest neighbors
lead to substantial improvements in computational time with little to no loss
in the accuracy of the embedding produced by a manifold learning algorithm.
This result is robust to the use of different manifold learning algorithms, to
the use of different approximate nearest neighbor algorithms, and to the use of
different measures of embedding accuracy. The proposed method is applied to
learning statistical manifolds from data on distributions of electricity usage. This
application demonstrates how the proposed methods can be used to visualize and
identify anomalies and uncover underlying structure within high-dimensional
data in a way that is scalable to large datasets.
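The connection the abstract exploits is a standard identity: for discrete distributions, the Hellinger distance is the L2 distance between elementwise square roots (up to a factor of 1/sqrt(2)), and total variation is half the L1 distance, so off-the-shelf L2/L1 approximate nearest neighbor structures can be applied after a square-root transform. A minimal numpy sketch of the identity (the example distributions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(16))  # two arbitrary discrete distributions
q = rng.dirichlet(np.ones(16))

# Hellinger distance between p and q ...
hellinger = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
# ... equals the L2 distance between sqrt(p) and sqrt(q), scaled by 1/sqrt(2):
l2_on_sqrt = np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0)

# Total variation distance equals half the L1 distance between p and q:
tv = 0.5 * np.sum(np.abs(p - q))
l1_half = 0.5 * np.linalg.norm(p - q, ord=1)
```

In practice one would build any L2-based ANN index (e.g. a KD-tree) on the square-root-transformed rows and query it directly.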
|
With the rapid advancement of technology, different biometric user
authentication and identification systems are emerging. Traditional biometric
systems like face, fingerprint, and iris recognition, keystroke dynamics, etc.
are prone to cyber-attacks and suffer from different disadvantages.
Electroencephalography (EEG) based authentication has shown promise in
overcoming these limitations. However, EEG-based authentication is less
accurate due to signal variability at different psychological and physiological
conditions. On the other hand, keystroke dynamics-based identification offers
high accuracy but suffers from different spoofing attacks. To overcome these
challenges, we propose a novel multimodal biometric system combining EEG and
keystroke dynamics. Firstly, a dataset was created by acquiring both keystroke
dynamics and EEG signals from 10 users with 500 trials per user at 10 different
sessions. Different statistical, time, and frequency domain features were
extracted and ranked from the EEG signals and key features were extracted from
the keystroke dynamics. Different classifiers were trained, validated, and
tested for both individual and combined modalities for two different
classification strategies - personalized and generalized. Results show that
very high accuracy can be achieved both in generalized and personalized cases
for the combination of EEG and keystroke dynamics. The identification and
authentication accuracies were found to be 99.80% and 99.68% for Extreme
Gradient Boosting (XGBoost) and Random Forest classifiers, respectively which
outperform the individual modalities by a significant margin (around 5
percentage points). We also developed a binary template-matching algorithm, which
achieves 93.64% accuracy while being six times faster. The proposed method is
secure and reliable for any kind of biometric authentication.
|
We report here the discovery of a hot Jupiter at an orbital period of
$3.208666\pm0.000016$ days around TOI-1789 (TYC 1962-00303-1, $TESS_{mag}$ =
9.1) based on the TESS photometry, ground-based photometry, and high-precision
radial velocity observations. The high-precision radial velocity observations
were obtained from the high-resolution spectrographs, PARAS at Physical
Research Laboratory (PRL), India, and TCES at Th\"uringer Landessternwarte
Tautenburg (TLS), Germany, and the ground-based transit observations were
obtained using the 0.43~m telescope at PRL with the Bessel-$R$ filter. The host
star is a slightly evolved ($\log{g_*}$ = $3.939^{+0.024}_{-0.046}$), late
F-type ($T_{eff}$ = $5984^{+55}_{-57}$ K), metal-rich star ([Fe/H] =
$0.370^{+0.073}_{-0.089}$ dex) with a radius of $R_{*}$ =
$2.172^{+0.037}_{-0.035}$ \(R_\odot\) located at a distance of
$223.56^{+0.91}_{-0.90}$ pc. The simultaneous fitting of the multiple light
curves and the radial velocity data of TOI-1789 reveals that TOI-1789b has a
mass of $M_{P}$ = $0.70\pm0.16 $ $M_{J}$, a radius of $R_{P}$ =
$1.40^{+0.22}_{-0.13}$ $R_{J}$, and a bulk density of $\rho_P$ =
$0.31^{+0.15}_{-0.13}$ g cm$^{-3}$ with an orbital separation of a =
$0.04873^{+0.00065}_{-0.0016}$ AU. This puts TOI-1789b in the category of
inflated hot Jupiters. It is one of the few nearby evolved stars with a
close-in planet. The detection of such systems will contribute to our
understanding of mechanisms responsible for inflation in hot Jupiters and also
provide an opportunity to understand the evolution of planets around stars
leaving the main sequence branch.
|
We present a method for approximating outcomes of road traffic simulations
using BERT-based models, which may find applications in, e.g., optimizing
traffic signal settings, especially with the presence of autonomous and
connected vehicles. The experiments were conducted on a dataset generated using
runs of the Traffic Simulation Framework software on a realistic road network. The
BERT-based models were compared with 4 other types of machine learning models
(LightGBM, fully connected neural networks and 2 types of graph neural
networks) and gave the best results in terms of all the considered metrics.
|
Optical antennas made of low-loss dielectrics have several advantages over
plasmonic antennas, including high radiative quantum efficiency, negligible
heating and excellent photostability. However, due to weak spatial confinement,
conventional dielectric antennas fail to offer light-matter interaction
strengths on par with those of plasmonic antennas. We propose here an
all-dielectric antenna configuration that can support strongly confined modes
($V\sim10^{-4}\lambda_{0}^3$) while maintaining unity antenna quantum
efficiency. This configuration consists of a high-index pillar structure with a
transverse gap that is filled with a low-index material, where the contrast of
indices induces a strong enhancement of the electric field perpendicular to the
gap. We provide a detailed explanation of the operation principle of such
Photonic Gap Antennas (PGAs) based on the dispersion relation of symmetric and
asymmetric horizontal slot-waveguides. To discuss the properties of PGAs, we
consider silicon pillars with air or CYTOP as the gap-material. We show by
full-wave simulations that PGAs with an emitter embedded in the gap can enhance
the spontaneous emission rate by a factor of $\sim$1000 for air gaps and
$\sim$400 for CYTOP gaps over a spectral bandwidth of $\Delta\lambda\approx300$
nm at $\lambda=1.25$ \textmu m. Furthermore, the PGAs can be designed to
provide unidirectional out-of-plane radiation across a substantial portion of
their spectral bandwidth. This is achieved by setting the position of the gap
at an optimized off-centered position of the pillar so as to properly break the
vertical symmetry of the structure. We also demonstrate that, when acting as
receivers, PGAs can lead to a near-field intensity enhancement by a factor of
$\sim$3000 for air gaps and $\sim$1200 for CYTOP gaps.
|
We study the multiple populations of $\omega$ Cen by using the abundances of
Fe, C, N, O, Mg, Al, Si, K, Ca, and Ce from the high-resolution, high
signal-to-noise (S/N$>$70) spectra of 982 red giant stars observed by the
SDSS-IV/APOGEE-2 survey. We find that the shape of the Al-Mg and N-C
anticorrelations changes as a function of metallicity, continuous for the
metal-poor groups, but bimodal (or unimodal) at high metallicities. There are
four Fe populations, similar to what has been found in previously published
investigations, but we find seven populations based on Fe, Al, and Mg
abundances. The evolution of Al in $\omega$ Cen is compared to its evolution in
the Milky Way and in five representative globular clusters. We find that the
distribution of Al in metal-rich stars of $\omega$ Cen closely follows what is
observed in the Galaxy. Other $\alpha-$elements and C, N, O, and Ce are also
compared to the Milky Way, and significantly elevated abundances are observed
over what is found in the thick disk for almost all elements. However, we also
find some stars with high metallicity and low [Al/Fe], suggesting that $\omega$
Cen could be the remnant core of a dwarf galaxy, but the existence of these
peculiar stars needs an independent confirmation. We also confirm the increase
in the sum of CNO as a function of metallicity previously reported in the
literature and find that the [C/N] ratio appears to show opposite correlations
between Al-poor and Al-rich stars as a function of metallicity.
|
A weak topological insulator in dimension $3$ is known to have a
topologically protected gapless mode along the screw dislocation. In this paper
we formulate and prove this fact with the language of C*-algebra K-theory. The
proof is based on the coarse index theory of the helical surface.
|
A study of the dynamical formation of networks of friends and enemies in
social media, in this case Twitter, is presented. We characterise the single
node properties of such networks, such as the clustering coefficient and the degree,
to investigate the structure of links. The results indicate that the network is
made from three kinds of nodes: a first group with high clustering coefficient
but very small degree, a second group with zero clustering coefficient and
variable degree, and finally, a third group in which the clustering coefficient as a
function of the degree decays as a power law. This third group represents
$\sim2\%$ of the nodes and is characteristic of dynamical networks with
feedback. This part of the lattice seemingly represents strongly interacting
friends in a real social network.
|
We consider the problem of recovering equations of motion from multivariate
time series of oscillators interacting on sparse networks. We reconstruct the
network from an initial guess which can include expert knowledge about the
system such as main motifs and hubs. When sparsity is taken into account the
number of data points needed is drastically reduced when compared to the
least-squares recovery. We show that the sparse solution is stable under basis
extensions, that is, once the correct network topology is obtained, the result
does not change if further motifs are considered.
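The abstract does not spell out its reconstruction algorithm, but the idea of sparse recovery of equations of motion over a library of candidate terms can be sketched with sequential thresholded least squares (a standard sparse-regression technique; the function names, toy system, and threshold below are illustrative assumptions, not the authors' method):

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequential thresholded least squares: find sparse Xi with Theta @ Xi ~ dXdt.
    Small coefficients are zeroed and the remaining ones are refit."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

# Toy 2-node network: dx/dt = -2x + y, dy/dt = x - 3y,
# with a spurious x*y coupling term in the candidate library.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
Theta = np.column_stack([X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
true_Xi = np.array([[-2.0, 1.0], [1.0, -3.0], [0.0, 0.0]])
dXdt = Theta @ true_Xi
Xi = stlsq(Theta, dXdt)
```

With noiseless data the sparse solution recovers the true coefficients and correctly zeroes the spurious coupling, mirroring the stability-under-basis-extension property described above.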
|
This paper focuses on finite-time in-network computation of linear transforms
of distributed graph data. Finite-time transform computation problems are of
interest in graph-based computing and signal processing applications in which
the objective is to compute, by means of distributed iterative methods, various
(linear) transforms of the data distributed at the agents or nodes of the
graph. While the finite-time computation of consensus-type, or more generally
rank-one, transforms has been studied, systematic approaches toward the scalable
computation of general linear transforms, specifically in the case of
heterogeneous agent objectives in which each agent is interested in obtaining a
different linear combination of the network data, are relatively less explored.
In this paper, by employing ideas from algebraic geometry, we develop a
systematic characterization of linear transforms that are amenable to
distributed in-network computation in finite-time using linear iterations.
Further, we consider the general case of directed inter-agent communication
graphs. Specifically, it is shown that \emph{almost all} linear transformations
of data distributed on the nodes of a digraph containing a Hamiltonian cycle
may be computed using at most $N$ linear distributed iterations. Finally, by
studying an associated matrix factorization based reformulation of the
transform computation problem, we obtain, as a by-product, certain results and
characterizations on sparsity-constrained matrix factorization that are of
independent interest.
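A toy illustration of the Hamiltonian-cycle result: on a directed $n$-node cycle, a general linear transform $y = Wx$ of the node data can be computed in $n$ linear iterations by circulating values around the cycle while each node accumulates a weighted copy. This is a simplified sketch, not the paper's general algebraic-geometric construction:

```python
import numpy as np

def ring_transform(x, W):
    """Compute y = W @ x in n linear iterations on a directed n-node cycle.
    Each node forwards the value it holds to its successor and accumulates a
    weighted copy, so after n steps node i has seen every x_j exactly once."""
    n = len(x)
    y = np.zeros(n)
    held = x.copy()        # value each node currently holds
    src = np.arange(n)     # original index of the value each node holds
    for _ in range(n):
        y += W[np.arange(n), src] * held
        held = np.roll(held, 1)   # pass values along the cycle
        src = np.roll(src, 1)
    return y

rng = np.random.default_rng(2)
W = rng.normal(size=(5, 5))   # heterogeneous objectives: one row per agent
x = rng.normal(size=5)
y = ring_transform(x, W)
```

After the $n$-th iteration every node $i$ holds $y_i = \sum_j W_{ij} x_j$, matching the at-most-$N$ iteration bound stated for digraphs containing a Hamiltonian cycle.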
|
Not only does cold climate pose a problem for outdoor plants during winter in
the northern hemisphere, but for indoor plants as well: low sunlight, low
humidity, and simultaneous cold breezes from windows and heat from radiators
all cause problems for indoor plants. People often treat their indoor plants
like mere decoration, which can often lead to health issues for the plant or
even death of the plant, especially during winter. A plant monitoring system
was developed to solve this problem, collecting information on plants' indoor
environmental conditions (light, humidity, and temperature) and providing this
information in an accessible format for the user. Preliminary functional tests
were conducted in similar settings where the system would be used. In addition,
the concept was evaluated by interviewing an expert in the field of
horticulture.
The evaluation results indicate that this kind of system could prove useful;
however, the tests indicated that the system requires further development to
achieve more practical value and wider usage.
|
Most work in NLP makes the assumption that it is desirable to develop
solutions in the native language in question. There is consequently a strong
trend towards building native language models even for low-resource languages.
This paper questions this development, and explores the idea of simply
translating the data into English, thereby enabling the use of pretrained, and
large-scale, English language models. We demonstrate empirically that a large
English language model coupled with modern machine translation outperforms
native language models in most Scandinavian languages. The exception to this is
Finnish, which we assume is due to inferior translation quality. Our results
suggest that machine translation is a mature technology, which raises a serious
counter-argument for training native language models for low-resource
languages. This paper therefore strives to make a provocative but important
point. As English language models are improving at an unprecedented pace, which
in turn improves machine translation, it is from an empirical and environmental
stand-point more effective to translate data from low-resource languages into
English, than to build language models for such languages.
|
In neural machine translation, cross entropy (CE) is the standard loss
function in two training methods of auto-regressive models, i.e., teacher
forcing and scheduled sampling. In this paper, we propose mixed cross entropy
loss (mixed CE) as a substitute for CE in both training approaches. In teacher
forcing, the model trained with CE regards the translation problem as a
one-to-one mapping process, while in mixed CE this process can be relaxed to
one-to-many. In scheduled sampling, we show that mixed CE has the potential to
encourage the training and testing behaviours to be similar to each other, more
effectively mitigating the exposure bias problem. We demonstrate the
superiority of mixed CE over CE on several machine translation datasets, WMT'16
Ro-En, WMT'16 Ru-En, and WMT'14 En-De in both teacher forcing and scheduled
sampling setups. Furthermore, in WMT'14 En-De, we also find mixed CE
consistently outperforms CE on a multi-reference set as well as a challenging
paraphrased reference set. We also find that the model trained with mixed CE is
able to provide a better probability distribution defined over the translation
output space. Our code is available at https://github.com/haorannlp/mix.
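One plausible reading of mixed CE in the teacher-forcing setting is a convex combination of the negative log-likelihoods of the gold token and of the model's own most likely token, which relaxes the one-to-one mapping toward one-to-many. This is an illustrative assumption only; the authors' exact formulation is in their repository linked above:

```python
import numpy as np

def mixed_ce(logits, gold, m=0.5):
    """Illustrative mixed cross entropy (assumed form): convex combination of
    the gold-token NLL and the NLL of the model's own argmax prediction.
    logits: (T, V) array; gold: (T,) int array; m=1 recovers standard CE."""
    shifted = logits - logits.max(axis=-1, keepdims=True)       # stable log-softmax
    logp = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    t = np.arange(len(gold))
    pred = logits.argmax(axis=-1)
    return -(m * logp[t, gold] + (1.0 - m) * logp[t, pred]).mean()

rng = np.random.default_rng(3)
logits = rng.normal(size=(7, 11))     # 7 time steps, vocabulary of 11
gold = rng.integers(0, 11, size=7)
```

Since the argmax token is by definition the most likely one, this mixed loss is never larger than standard CE for any mixing weight.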
|
Current storytelling systems focus more on generating stories with coherent
plots regardless of the narration style, which is important for controllable
text generation. Therefore, we propose a new task, stylized story generation,
namely generating stories with a specified style given a leading context. To
tackle the problem, we propose a novel generation model that first plans the
stylized keywords and then generates the whole story with the guidance of the
keywords. Besides, we propose two automatic metrics to evaluate the consistency
between the generated story and the specified style. Experiments demonstrate
that our model can controllably generate emotion-driven or event-driven stories
based on the ROCStories dataset (Mostafazadeh et al., 2016). Our study presents
insights for stylized story generation in further research.
|
In this paper, we completely solve the problem when a Cantor dynamical system
$(X,f)$ can be embedded in $\mathbb{R}$ with vanishing derivative everywhere.
For this purpose, we construct a refining sequence of marked clopen partitions
of $X$ which is adapted to a dynamical system of this kind. It turns out that
there is a huge class of such systems.
|
Recently, distributed controller architectures have been quickly gaining
popularity in Software-Defined Networking (SDN). However, the use of
distributed controllers introduces a new and important Request Dispatching (RD)
problem with the goal for every SDN switch to properly dispatch their requests
among all controllers so as to optimize network performance. This goal can be
fulfilled by designing an RD policy to guide distribution of requests at each
switch. In this paper, we propose a Multi-Agent Deep Reinforcement Learning
(MA-DRL) approach to automatically design RD policies with high adaptability
and performance. This is achieved through a new problem formulation in the form
of a Multi-Agent Markov Decision Process (MA-MDP), a new adaptive RD policy
design and a new MA-DRL algorithm called MA-PPO. Extensive simulation studies
show that our MA-DRL technique can effectively train RD policies to
significantly outperform man-made policies, model-based policies, as well as RD
policies learned via single-agent DRL algorithms.
|
We extend the Fring-Tenney approach of constructing invariants for constant-mass
time-dependent systems to the case of a particle with time-dependent mass. From a
coupled set of equations described in terms of guiding parameter functions, we
track down a modified Ermakov-Pinney equation involving a time-dependent mass
function. As a concrete example we focus on an exponential choice of the mass
function.
|
We consider the problem of collective exploration of a known $n$-node
edge-weighted graph by $k$ mobile agents that have limited energy but are
capable of energy transfers. The agents are initially placed at an arbitrary
subset of nodes in the graph, and each agent has an initial, possibly
different, amount of energy. The goal of the exploration problem is for every
edge in the graph to be traversed by at least one agent. The amount of energy
used by an agent to travel distance $x$ is proportional to $x$. In our model,
the agents can {\em share} energy when co-located: when two agents meet, one
can transfer part of its energy to the other.
For an $n$-node path, we give an $O(n+k)$ time algorithm that either finds an
exploration strategy, or reports that one does not exist. For an $n$-node tree
with $\ell $ leaves, we give an $O(n+ \ell k^2)$ algorithm that finds an
exploration strategy if one exists. Finally, for the general graph case, we
show that the problem of deciding if exploration is possible by energy-sharing
agents is NP-hard, even for 3-regular graphs. In addition, we show that it is
always possible to find an exploration strategy if the total energy of the
agents is at least twice the total weight of the edges; moreover, this is
asymptotically optimal.
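The sufficient condition stated above (total energy at least twice the total edge weight) yields a trivial feasibility certificate for general graphs. A minimal sketch (the function name and edge-list representation are illustrative):

```python
def exploration_surely_feasible(edges, energies):
    """Sufficient (not necessary) condition from the result above: with energy
    sharing, exploration is always possible when the total agent energy is at
    least twice the total edge weight of the graph.

    edges: iterable of (u, v, weight) tuples; energies: per-agent initial energy."""
    total_weight = sum(w for _, _, w in edges)
    return sum(energies) >= 2 * total_weight
```

Note that a `False` answer does not rule out exploration: below this threshold feasibility may still hold (the path and tree algorithms above decide it exactly), and for general graphs the decision problem is NP-hard.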
|
We study some semi-linear equations for the $(m,p)$-Laplacian operator on
locally finite weighted graphs. We prove existence of weak solutions for all
$m\in\mathbb{N}$ and $p\in(1,+\infty)$ via a variational method already known
in the literature by exploiting the continuity properties of the energy
functionals involved. When $m=1$, we also establish a uniqueness result in the
spirit of the Brezis-Strauss Theorem. We finally provide some applications of
our main results by dealing with some Yamabe-type and Kazdan-Warner-type
equations on locally finite weighted graphs.
|
Recent work in AI safety has highlighted that in sequential decision making,
objectives are often underspecified or incomplete. This gives discretion to the
acting agent to realize the stated objective in ways that may result in
undesirable outcomes. We contend that to learn to act safely, a reinforcement
learning (RL) agent should include contemplation of the impact of its actions
on the wellbeing and agency of others in the environment, including other
acting agents and reactive processes. We endow RL agents with the ability to
contemplate such impact by augmenting their reward based on expectation of
future return by others in the environment, providing different criteria for
characterizing impact. We further endow these agents with the ability to
differentially factor this impact into their decision making, manifesting
behavior that ranges from self-centred to self-less, as demonstrated by
experiments in gridworld environments.
|
Most modern reinforcement learning algorithms optimize a cumulative
single-step cost along a trajectory. The optimized motions are often
'unnatural', representing, for example, behaviors with sudden accelerations
that waste energy and lack predictability. In this work, we present a novel
paradigm of controlling nonlinear systems via the minimization of the Koopman
spectrum cost: a cost over the Koopman operator of the controlled dynamics.
This induces a broader class of dynamical behaviors that evolve over stable
manifolds such as nonlinear oscillators, closed loops, and smooth movements. We
demonstrate that some dynamics realizations that are not possible with a
cumulative cost are feasible in this paradigm. Moreover, we present a provably
efficient online learning algorithm for our problem that enjoys a sub-linear
regret bound under some structural assumptions.
|
Bayesian optimization (BO) is a popular method for optimizing
expensive-to-evaluate black-box functions. BO budgets are typically given in
iterations, which implicitly assumes each evaluation has the same cost. In
fact, in many BO applications, evaluation costs vary significantly in different
regions of the search space. In hyperparameter optimization, the time spent on
neural network training increases with layer size; in clinical trials, the
monetary cost of drug compounds varies; and in optimal control, control actions
have differing complexities. Cost-constrained BO measures convergence with
alternative cost metrics such as time, money, or energy, for which the sample
efficiency of standard BO methods is ill-suited. For cost-constrained BO, cost
efficiency is far more important than sample efficiency. In this paper, we
formulate cost-constrained BO as a constrained Markov decision process (CMDP),
and develop an efficient rollout approximation to the optimal CMDP policy that
takes both the cost and future iterations into account. We validate our method
on a collection of hyperparameter optimization problems as well as a sensor set
selection application.
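A common myopic baseline for cost-constrained BO, against which rollout-style policies are usually compared, is expected improvement per unit cost. The sketch below is that baseline, not the CMDP rollout policy proposed above; the candidate values are illustrative:

```python
import numpy as np
from math import erf

def ei_per_unit_cost(mu, sigma, f_best, cost):
    """Expected improvement (for maximization) divided by evaluation cost:
    a simple myopic cost-aware acquisition rule.
    mu, sigma: GP posterior mean/std at the candidate points."""
    z = (mu - f_best) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))  # standard normal cdf
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)       # standard normal pdf
    ei = (mu - f_best) * Phi + sigma * phi
    return ei / cost

# Two candidates: the second has higher EI but costs 10x more to evaluate.
mu = np.array([1.0, 1.2])
sigma = np.array([0.1, 0.1])
cost = np.array([1.0, 10.0])
scores = ei_per_unit_cost(mu, sigma, f_best=0.9, cost=cost)
```

Plain EI would select the expensive point; normalizing by cost flips the choice, illustrating why cost efficiency rather than sample efficiency drives the policy design.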
|
Annotated datasets have become one of the most crucial preconditions for the
development and evaluation of machine learning-based methods designed for the
automated interpretation of remote sensing data. In this paper, we review the
historic development of such datasets, discuss their features based on a few
selected examples, and address open issues for future developments.
|
We investigate special solutions to the Bethe Ansatz equations (BAE) for open
integrable $XXZ$ Heisenberg spin chains containing phantom (infinite) Bethe
roots. The phantom Bethe roots do not contribute to the energy of the Bethe
state, so the energy is determined exclusively by the remaining regular
excitations. We rederive the phantom Bethe roots criterion and focus on BAE
solutions for mixtures of phantom roots and regular (finite) Bethe roots. We
prove that in the presence of phantom Bethe roots, all eigenstates are split
between two invariant subspaces, spanned by chiral shock states. Bethe
eigenstates are described by two complementary sets of Bethe Ansatz equations
for regular roots, one for each invariant subspace. The respective
"semi-phantom" Bethe vectors are states of chiral nature, with chirality
properties getting less pronounced when more regular Bethe roots are added. For
the easy plane case "semi-phantom" Bethe states carry nonzero magnetic current,
and are characterized by quasi-periodic modulation of the magnetization
profile, the most prominent example being the spin helix states (SHS). We
illustrate our results investigating "semi-phantom" Bethe states generated by
one regular Bethe root (the other Bethe roots being phantom), whose invariant
subspace has a simple structure, in full detail. We obtain the explicit
expressions for Bethe vectors, and calculate the simplest correlation
functions, including the spin-current for all the states in the single particle
multiplet.
|
Lunar explorations have provided us with information about the Moon's abundant
resources, which can be utilized as lunar-derived commodities in orbiting
resource depots. To reduce the energy requirements of a launcher to send these
commodities from the lunar surface to the space depots, this paper explores the
application of the electromagnetic acceleration principle and provides an
assessment of the actual technical characteristics of the launcher's
installation to ensure the acceleration of a payload with a mass of 1,500 kg to
a speed of 2,200 m/s (circumlunar orbit speed). To realize a lightweight (fewer
materials and less energy) support structure for the electromagnetic launcher
that meets strength requirements, a minimum-mass principle for tensegrity
structures without global buckling has been developed and applied to support the
electromagnetic acceleration device. Therefore, this paper proposes and
develops a minimal mass electromagnetic tensegrity lunar launcher. We first
demonstrate the mechanics of launcher and payload, how a payload can be
accelerated to a specific velocity, and how a payload carrier can be recycled
for another launch. Then, a detailed discussion on the lunar launch system,
procedures of propulsion, the required mass, and energy of the launch barrel
are given. The governing equations of the minimal-mass tensegrity design
algorithm, with gravity and without global buckling, are derived. Finally, a case study is
conducted to show a feasible structure design, the required mass, and energy.
The principles developed in this paper are also applicable to the rocket launch
system, space elevator, space train transportation, interstellar payload
package delivery, etc.
|
We present high-fidelity numerical simulations of expiratory biosol transport
during normal breathing under indoor, stagnant air conditions with and without
a facial mask. We investigate mask efficacy in suppressing the spread of saliva
particles, which underpins existing social distancing recommendations. The
present simulations incorporate the effect of human anatomy and consider a
spectrum of saliva particulate sizes that ranges from 0.1 micrometers to 10
micrometers while accounting also for their evaporation. The simulations
elucidate the vorticity dynamics of human breathing and show that without a
facial mask, saliva particulates could travel over 2.2 m away from the person.
However, a non-medical grade face mask can drastically reduce saliva
particulate propagation to 0.72 m away from the person. This study provides new
quantitative evidence that facial masks can successfully suppress the spreading
of saliva particulates due to normal breathing in indoor environments.
|
Domain walls in fractional quantum Hall ferromagnets are gapless helical
one-dimensional channels formed at the boundaries of topologically distinct
quantum Hall (QH) liquids. Na\"{i}vely, these helical domain walls (hDWs)
constitute two counter-propagating chiral states with opposite spins. Coupled
to an s-wave superconductor, helical channels are expected to lead to
topological superconductivity with high-order non-Abelian excitations. Here we
investigate transport properties of hDWs in the $\nu=2/3$ fractional QH regime.
Experimentally we found that current carried by hDWs is substantially smaller
than the prediction of the na\"{i}ve model. Luttinger liquid theory of the
system reveals redistribution of currents between quasiparticle charge, spin
and neutral modes, and predicts the reduction of the hDW current. Inclusion of
spin-non-conserving tunneling processes reconciles theory with experiment. The
theory confirms emergence of spin modes required for the formation of
fractional topological superconductivity.
|
We calculate the vacuum (Casimir) energy for a scalar field with $\phi^4$
self-interaction in (1+1) dimensions non-perturbatively, i.e., to all orders of
the self-interaction. We consider massive and massless fields in a finite box
with Dirichlet boundary conditions and on the whole axis as well. For strong
coupling, the vacuum energy is negative indicating some instability.
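For reference, the standard free-field benchmark against which the interacting result can be compared: for a massless scalar with Dirichlet conditions on a box of width $a$ in (1+1) dimensions (with $\hbar = c = 1$), mode summation with zeta-function regularization gives

```latex
E_0 = \frac{1}{2}\sum_{n=1}^{\infty}\frac{\pi n}{a}
    \;\xrightarrow{\ \zeta\text{-reg.}\ }\;
    \frac{\pi}{2a}\,\zeta(-1) = -\frac{\pi}{24a}.
```

The interacting, all-orders result discussed above is a correction to this well-known baseline.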
|
Multi-input multi-output (MIMO) systems' capability of using separate
signals brings many advantages to radar signal processing and time-frequency
analysis. In this paper, a variety of properties of MIMO ambiguity functions
related to representations of the Heisenberg group are given. Some of the
existing results for SIMO ambiguity functions are generalized to the MIMO case.
The combined effect of separate signals is investigated.
|
Recently, autonomous driving has made substantial progress in addressing the
most common traffic scenarios like intersection navigation and lane changing.
However, most of these successes have been limited to scenarios with
well-defined traffic rules and require minimal negotiation with other vehicles.
In this paper, we introduce a previously unconsidered, yet everyday,
high-conflict driving scenario requiring negotiations between agents of equal
rights and priorities. There exists no centralized control structure and we do
not allow communications. Therefore, it is unknown if other drivers are willing
to cooperate, and if so to what extent. We train policies to robustly negotiate
with opposing vehicles of an unobservable degree of cooperativeness using
multi-agent reinforcement learning (MARL). We propose Discrete Asymmetric Soft
Actor-Critic (DASAC), a maximum-entropy off-policy MARL algorithm allowing for
centralized training with decentralized execution. We show that using DASAC we
are able to successfully negotiate and traverse the scenario considered over
99% of the time. Our agents are robust to an unknown timing of opponent
decisions, an unobservable degree of cooperativeness of the opposing vehicle,
and previously unencountered policies. Furthermore, they learn to exhibit
human-like behaviors such as defensive driving, anticipating solution options
and interpreting the behavior of other agents.
|
Reservoir computers (RC) are a form of recurrent neural network (RNN) used
for forecasting time-series data. As with all RNNs, selecting the
hyperparameters presents a challenge when training on new inputs. We present a
method based on generalized synchronization (GS) that gives direction in
designing and evaluating the architecture and hyperparameters of an RC. The
'auxiliary method' for detecting GS provides a computationally efficient
pre-training test that guides hyperparameter selection. Furthermore, we provide
a metric for RC using the reproduction of the input system's Lyapunov
exponents that demonstrates robustness in prediction.
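The auxiliary method mentioned above can be illustrated with a minimal echo-state-network sketch (the reservoir size, spectral radius, input scaling, and tolerance here are illustrative assumptions, not the paper's setup): two copies of the same reservoir are driven by the same input from different initial states, and convergence of their states indicates generalized synchronization with the input.

```python
import numpy as np

def auxiliary_gs_test(n=100, rho=0.9, steps=500, seed=0, tol=1e-6):
    """Auxiliary test for generalized synchronization (GS): drive two
    identical reservoirs with the same input from different initial
    states and check whether their states converge."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n, n))
    W *= rho / max(abs(np.linalg.eigvals(W)))  # set spectral radius
    w_in = rng.normal(size=n) * 0.1            # input weights
    u = rng.normal(size=steps)                 # driving input signal
    x1 = rng.normal(size=n)                    # two distinct initial states
    x2 = rng.normal(size=n)
    for t in range(steps):
        x1 = np.tanh(W @ x1 + w_in * u[t])
        x2 = np.tanh(W @ x2 + w_in * u[t])
    return bool(np.linalg.norm(x1 - x2) < tol)  # True -> GS holds

print(auxiliary_gs_test())  # a contracting reservoir (rho < 1) synchronizes
```

Because the test needs no training of output weights, it can cheaply screen candidate hyperparameter settings before any fitting is done.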
|
Structural defects and chemical impurities exist in organic semiconductors,
acting as trap centers for excited states. This work presents a novel
analytical model to calculate the trapping and detrapping rates between two
Gaussian densities of states. The Miller-Abrahams rate and Fermi-Dirac statistics are
employed in this model. The introduction of effective filled and empty sites
for correlated bands greatly simplifies the expression of recombination rate. A
technology computer-aided design simulator was used to simulate the donor-like
traps in an organic semiconductor DPP-DTT based thin-film transistor, showing
good agreement with the measured transfer characteristic.
|
Recent work has shown that in a dataset of user ratings on items there exists
a group of Core Users who hold most of the information necessary for
recommendation. This set of Core Users can be as small as 20 percent of the
users. Core Users can be used to make predictions for out-of-sample users
without much additional work. Since Core Users substantially shrink a ratings
dataset without much loss of information, they can be used to improve
recommendation efficiency. We propose a method, combining latent factor models,
ensemble boosting and K-means clustering, to generate a small set of Artificial
Core Users (ACUs) from real Core User data. Our ACUs have dense rating
information, and improve the recommendation performance of real Core Users
while remaining interpretable.
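As a sketch of the clustering step, core-user latent factor vectors could be compressed into centroids that act as artificial users (a plain NumPy k-means; the dimensions, k, and toy data are illustrative assumptions, and the paper's boosting stage is omitted):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means: returns the centroids of k clusters of the rows of X."""
    # Simple deterministic init: spread initial centroids across the data.
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d2 = ((X[:, None] - centroids[None]) ** 2).sum(axis=-1)
        labels = np.argmin(d2, axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

# Toy "core-user latent factors": two well-separated groups in 2-D.
rng = np.random.default_rng(1)
factors = np.vstack([rng.normal(0, 0.1, (50, 2)),
                     rng.normal(5, 0.1, (50, 2))])
acus = kmeans(factors, k=2)  # each centroid is one Artificial Core User
```

Each centroid aggregates the rating signal of its cluster, which is why a handful of such artificial users can stand in for a much larger set of real ones.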
|
Asteroseismic measurements enable inferences of the underlying stellar
structure, such as the density and the speed of sound at various points within
the interior of the star. This provides an opportunity to test stellar
evolution theory by assessing whether the predicted structure of a star agrees
with the measured structure. Thus far, this kind of inverse analysis has only
been applied to the Sun and three solar-like main-sequence stars. Here we
extend the technique to stars on the subgiant branch, and apply it to one of
the best-characterized subgiants of the Kepler mission, HR 7322. The
observation of mixed oscillation modes in this star facilitates inferences of
the conditions of its inert helium core, nuclear-burning hydrogen shell, and
the deeper parts of its radiative envelope. We find that despite significant
differences in the mode frequencies, the structure near the center of this
star does not differ significantly from the predicted structure.
|
In many engineering systems operating with a working fluid, the best
efficiency is reached close to a condition of flow separation, which makes its
prediction crucial in industry. Provided that wall-based quantities can
be measured, we know today how to obtain good predictions for two- and
three-dimensional steady and periodic flows. In these flows, the separation is
defined on a fixed line attached to a material surface. The last case to
elucidate is the one where this line is no longer attached to the wall but on
the contrary is contained within the flow. This moving separation is probably,
however, the most common case of separation in natural flows and industrial
applications. Since this case has received less attention during the past few
years, we propose in this study to examine some properties of moving separation
in two-dimensional, unsteady flows where the separation does not leave a
signature on the wall. Since in this framework separation can be extracted by
using a Lagrangian frame where the separation profile can be viewed as a
hyperbolic unstable manifold, we propose a method to extract the separation
point defined by the Lagrangian saddle point that belongs to this unstable
manifold. In practice, the separation point and profile are initially extracted
by detecting the most attracting Lagrangian coherent structure near the wall,
and can then be advected in time for following instants. It is found that
saddle points, which initially act as separation points in the viscous wall
flow region, remarkably preserve their hyperbolicity even if they are ejected
from the wall toward the inviscid region. Two test cases are studied, the
creeping flow of a rotating and translating cylinder close to a wall, and the
unsteady separation in the boundary layer generated by a planar jet impinging
onto a plane wall.
|
In this paper we continue a long line of work on representing the cut
structure of graphs. We classify the types of minimum vertex cuts, and the
possible relationships between multiple minimum vertex cuts.
As a consequence of these investigations, we exhibit a simple $O(\kappa
n)$-space data structure that can quickly answer pairwise
$(\kappa+1)$-connectivity queries in a $\kappa$-connected graph. We also show
how to compute the "closest" $\kappa$-cut to every vertex in near linear
$\tilde{O}(m+\mathrm{poly}(\kappa)n)$ time.
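The paper's $O(\kappa n)$-space structure is not reproduced here, but the underlying object, a minimum vertex cut between a pair of vertices, can be sketched with off-the-shelf tools (networkx is an assumed dependency, and the graph is a toy example):

```python
import networkx as nx

# A 4-cycle: the two non-adjacent vertex pairs are each separated
# by a minimum vertex cut of size 2 (Menger's theorem).
G = nx.cycle_graph(4)               # edges 0-1, 1-2, 2-3, 3-0
cut = nx.minimum_node_cut(G, 0, 2)  # smallest set whose removal disconnects 0 and 2
print(cut)                          # -> {1, 3}
```

A pairwise $(\kappa+1)$-connectivity query, as in the abstract, asks whether such a cut of size at most $\kappa$ separates a given pair; the data structure answers this without recomputing cuts per query.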
|