Let $G$ be a non-compact connected semisimple real Lie group with finite
center. Suppose $L$ is a non-compact connected closed subgroup of $G$ acting
transitively on a symmetric space $G/H$ such that $L\cap H$ is compact. We
study the action on $L/L\cap H$ of a Dirac operator $D_{G/H}(E)$ acting on
sections of an $E$-twist of the spin bundle over $G/H$. As a byproduct, in the
case of $(G,H,L)=(SL(2,{\mathbb R})\times SL(2,{\mathbb R}),\Delta(SL(2,{\mathbb R})),SL(2,{\mathbb R})\times SO(2))$, we identify certain representations of $L$ which lie in the kernel of
$D_{G/H}(E)$.
|
Many real-world systems can be represented as temporal networks, in which nodes play markedly different structural and functional roles and edges represent the relationships between nodes. Identifying critical nodes can help us control the spread of public opinions or epidemics, predict leading figures in academia, target advertisements for various commodities, and so on. However, it is rather difficult to identify critical nodes in temporal networks because the network structure changes over time. In this paper, taking the sequential topological information of temporal networks into account, we propose a novel and effective learning framework that combines graph convolutional networks (GCNs) and recurrent neural networks (RNNs) to identify nodes with the best spreading ability. The effectiveness of the approach is evaluated with a weighted Susceptible-Infected-Recovered (SIR) model.
Experimental results on four real-world temporal networks demonstrate that the
proposed method outperforms both traditional and deep learning benchmark
methods in terms of the Kendall $\tau$ coefficient and top $k$ hit rate.
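As a rough illustration of this type of framework (a sketch of our own, not the authors' exact architecture; PyTorch Geometric, the layer sizes, and the GRU choice are assumptions), a per-snapshot GCN encoder can feed a recurrent network that scores each node's spreading ability:

```python
# Hypothetical sketch: a per-snapshot GCN encoder feeding a GRU over time,
# producing one spreading-ability score per node. Sizes are illustrative.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv  # assumes PyTorch Geometric is installed

class TemporalNodeRanker(nn.Module):
    def __init__(self, num_features, hidden=64):
        super().__init__()
        self.gcn = GCNConv(num_features, hidden)              # spatial encoder
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)   # temporal encoder
        self.score = nn.Linear(hidden, 1)                     # per-node score

    def forward(self, snapshots):
        # snapshots: list of (x [N, F], edge_index [2, E]), one per time step
        embs = [torch.relu(self.gcn(x, ei)) for x, ei in snapshots]
        seq = torch.stack(embs, dim=1)              # [N, T, hidden]
        out, _ = self.rnn(seq)                      # evolve node states in time
        return self.score(out[:, -1]).squeeze(-1)   # rank nodes by final state
```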
|
It has recently been shown that two-loop kite-type diagrams can be computed
analytically in terms of iterated integrals with algebraic kernels. This result
was obtained using a new integral representation for two-loop sunset subgraphs.
In this paper, we have developed a similar representation for a three-loop
banana integral in $d = 2-2\varepsilon$ dimensions. This representation can be generalized up to any given order in the $\varepsilon$-expansion and can be evaluated numerically both below and above the threshold. We also demonstrate
how this result can be used to compute more complex three-loop integrals
containing the three-loop banana as a subgraph.
|
We develop a trust-region method for minimizing the sum of a smooth term $f$
and a nonsmooth term $h$, both of which can be nonconvex. Each iteration of
our method minimizes a possibly nonconvex model of $f + h$ in a trust region.
The model coincides with $f + h$ in value and subdifferential at the center. We
establish global convergence to a first-order stationary point when $f$
satisfies a smoothness condition that holds, in particular, when it has
Lipschitz-continuous gradient, and $h$ is proper and lower semicontinuous. The model of $h$ is required to be proper, lower semicontinuous, and prox-bounded.
Under these weak assumptions, we establish a worst-case $O(1/\epsilon^2)$
iteration complexity bound that matches the best known complexity bound of
standard trust-region methods for smooth optimization. We detail a special
instance, named TR-PG, in which we use a limited-memory quasi-Newton model of
$f$ and compute a step with the proximal gradient method, resulting in a
practical proximal quasi-Newton method. We establish similar convergence
properties and complexity bound for a quadratic regularization variant, named
R2, and provide an interpretation as a proximal gradient method with adaptive
step size for nonconvex problems. R2 may also be used to compute steps inside
the trust-region method, resulting in an implementation named TR-R2. We
describe our Julia implementations and report numerical results on inverse
problems from sparse optimization and signal processing. Both TR-PG and TR-R2
exhibit promising performance and compare favorably with two linesearch
proximal quasi-Newton methods based on convex models.
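For intuition, the proximal gradient iteration that underlies the step computation can be sketched as follows (a minimal sketch assuming a least-squares $f$ and an $\ell_1$ term $h$; this is our own illustration, not the paper's Julia implementation):

```python
# Minimal proximal-gradient sketch for min f(x) + h(x), with f smooth and
# h(x) = lam * ||x||_1 (illustrative choices, not the paper's test problems).
import numpy as np

def prox_l1(x, t):
    # Proximal operator of t * ||.||_1: soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(grad_f, x0, lam, step, iters=500):
    x = x0.copy()
    for _ in range(iters):
        x = prox_l1(x - step * grad_f(x), step * lam)  # forward-backward step
    return x

# Example: f(x) = 0.5 * ||Ax - b||^2, so grad_f(x) = A.T @ (A @ x - b).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 100)), rng.standard_normal(40)
x = proximal_gradient(lambda x: A.T @ (A @ x - b),
                      np.zeros(100), lam=0.1,
                      step=1.0 / np.linalg.norm(A, 2) ** 2)  # 1/L step size
```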
|
Coronavirus disease 2019 (COVID-19) is a Public Health Emergency of International Concern that has infected more than 40 million people across 188 countries and territories. Owing to its high diagnostic accuracy and robustness, chest computed tomography (CT) imaging has become an indispensable tool for COVID-19 mass testing. Recently, deep learning approaches have become an effective means of automatic screening of medical images and are also being considered for COVID-19 diagnosis. However, the high infection risk involved with COVID-19 leads to a relative sparseness of collected labeled data, limiting the performance of such methodologies. Moreover, accurately labeling CT images requires the expertise of radiologists, making the process expensive and time-consuming. To tackle these issues, we propose a supervised domain adaptation based COVID-19 CT diagnostic method that performs effectively when only a small number of labeled CT scans are available. To compensate for the sparseness of labeled data, the proposed
method utilizes a large amount of synthetic COVID-19 CT images and adjusts the
networks from the source domain (synthetic data) to the target domain (real
data) with a cross-domain training mechanism. Experimental results show that
the proposed method achieves state-of-the-art performance on few-shot COVID-19
CT imaging based diagnostic tasks.
|
We provide moment bounds for expressions of the type $(X^{(1)} \otimes \dots
\otimes X^{(d)})^T A (X^{(1)} \otimes \dots \otimes X^{(d)})$ where $\otimes$
denotes the Kronecker product and $X^{(1)}, \dots, X^{(d)}$ are random vectors
with independent, mean 0, variance 1, subgaussian entries. The bounds are tight
up to constants depending on $d$ for the case of Gaussian random vectors. Our
proof also provides a decoupling inequality for expressions of this type. Using
these bounds, we obtain new, improved concentration inequalities for
expressions of the form $\|B (X^{(1)} \otimes \dots \otimes X^{(d)})\|_2$.
|
This paper argues that continual learning methods can benefit by splitting
the capacity of the learner across multiple models. We use statistical learning
theory and experimental analysis to show how multiple tasks can interact with
each other in a non-trivial fashion when a single model is trained on them. The
generalization error on a particular task can improve when it is trained with
synergistic tasks, but can also deteriorate when trained with competing tasks.
This theory motivates our method, named Model Zoo, which, inspired by the
boosting literature, grows an ensemble of small models, each of which is
trained during one episode of continual learning. We demonstrate that Model Zoo
obtains large gains in accuracy on a variety of continual learning benchmark
problems.
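A schematic of the episodic-ensemble idea (our own sketch under assumptions; the actual Model Zoo also selects which past tasks each new model revisits, which is omitted here):

```python
# Illustrative continual-learning ensemble: one small model per episode,
# predictions averaged at inference. This sketch assumes every episode
# shares the same label set; Model Zoo's task-selection logic is omitted.
import numpy as np
from sklearn.neural_network import MLPClassifier

class ModelZooSketch:
    def __init__(self):
        self.models = []

    def train_episode(self, X, y):
        # Grow the ensemble with one small model fit on this episode's data.
        m = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
        m.fit(X, y)
        self.models.append(m)

    def predict_proba(self, X):
        # Ensemble by averaging class probabilities across episode models.
        return np.mean([m.predict_proba(X) for m in self.models], axis=0)
```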
|
Pretrained language models are now of widespread use in Natural Language
Processing. Despite their success, applying them to low-resource languages is still a huge challenge. Although multilingual models hold great promise, applying them to a specific low-resource language, e.g. Roman Urdu, can be
excessive. In this paper, we show how the code-switching property of languages
may be used to perform cross-lingual transfer learning from a corresponding
high resource language. We also show how this transfer learning technique
termed Bilingual Language Modeling can be used to produce better performing
models for Roman Urdu. To enable training and experimentation, we also present
a collection of novel corpora for Roman Urdu extracted from various sources and
social networking sites, e.g. Twitter. We train monolingual, multilingual, and bilingual models of Roman Urdu; on the Masked Language Modeling (MLM) task, the proposed bilingual model achieves 23% accuracy, compared to 2% and 11% for the monolingual and multilingual models, respectively.
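A minimal sketch of masked-language-model training of the kind evaluated here (the checkpoint name, corpus path, and hyperparameters are placeholders, not the authors' setup):

```python
# Hypothetical MLM fine-tuning sketch with Hugging Face Transformers.
# Checkpoint and corpus path are placeholders, not the paper's actual setup.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

ds = load_dataset("text", data_files={"train": "roman_urdu_corpus.txt"})
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=128),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-roman-urdu", num_train_epochs=1),
    train_dataset=ds["train"],
    # Randomly mask 15% of tokens; the model learns to reconstruct them.
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()
```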
|
With the consolidation of deep learning in drug discovery, several novel
algorithms for learning molecular representations have been proposed. Despite
the interest of the community in developing new methods for learning molecular
embeddings and their theoretical benefits, comparing molecular embeddings with
each other and with traditional representations is not straightforward, which
in turn hinders the process of choosing a suitable representation for QSAR
modeling. A reason behind this issue is the difficulty of conducting a fair and
thorough comparison of the different existing embedding approaches, which
requires numerous experiments on various datasets and training scenarios. To
close this gap, we reviewed the literature on methods for molecular embeddings
and reproduced three unsupervised and two supervised molecular embedding
techniques recently proposed in the literature. We compared these five methods
concerning their performance in QSAR scenarios using different classification
and regression datasets. We also compared these representations to traditional
molecular representations, namely molecular descriptors and fingerprints. Contrary to the expected outcome, our experimental setup, consisting of over 25,000 trained models and statistical tests, revealed that the predictive
performance using molecular embeddings did not significantly surpass that of
traditional representations. While supervised embeddings yielded competitive
results compared to those using traditional molecular representations,
unsupervised embeddings tended to perform worse than traditional
representations. Our results highlight the need for conducting a careful
comparison and analysis of the different embedding techniques prior to using
them in drug design tasks, and motivate a discussion about the potential of
molecular embeddings in computer-aided drug design.
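For context, a traditional-fingerprint QSAR baseline of the kind the embeddings are compared against can be set up as follows (a minimal sketch; the molecules, labels, and model choice are illustrative):

```python
# Illustrative QSAR baseline with Morgan fingerprints (ECFP-like) and a
# random forest; the SMILES list and activity labels are placeholders.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def morgan_fp(smiles, radius=2, n_bits=2048):
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, radius,
                                                          nBits=n_bits))

smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCN"]   # placeholder molecules
y = np.array([0, 1, 0, 1])                        # placeholder activity labels
X = np.stack([morgan_fp(s) for s in smiles])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=2).mean())    # the baseline to beat
```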
|
We employ neural networks for classification of data of the TUS fluorescence
telescope, the world's first orbital detector of ultra-high energy cosmic rays.
We focus on two particular types of signals in the TUS data: track-like flashes produced by cosmic ray hits on the photodetector, and flashes originating from distant lightning strikes. We demonstrate that even simple neural networks, combined with certain conventional methods of data analysis, can be highly effective in classifying the data of fluorescence telescopes.
|
The performance of natural language generation systems has improved
substantially with modern neural networks. At test time they typically employ
beam search to avoid locally optimal but globally suboptimal predictions.
However, due to model errors, a larger beam size can lead to deteriorating
performance according to the evaluation metric. For this reason, it is common
to rerank the output of beam search, but this relies on beam search to produce
a good set of hypotheses, which limits the potential gains. Other alternatives
to beam search require changes to the training of the model, which restricts
their applicability compared to beam search. This paper proposes incremental
beam manipulation, i.e. reranking the hypotheses in the beam during decoding
instead of only at the end. This way, hypotheses that are unlikely to lead to a
good final output are discarded, and in their place hypotheses that would have
been ignored will be considered instead. Applying incremental beam manipulation
leads to an improvement of 1.93 and 5.82 BLEU points over vanilla beam search
for the test sets of the E2E and WebNLG challenges respectively. The proposed
method also outperformed a strong reranker by 1.04 BLEU points on the E2E
challenge, while being on par with it on the WebNLG dataset.
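In pseudocode, the idea amounts to inserting a rerank step at selected time steps inside the beam loop (a schematic sketch; `expand` and `rerank` are placeholders standing in for the model's step function and the learned reranker):

```python
# Schematic beam search with incremental manipulation: the beam is reranked
# during decoding, not only at the end. Hypotheses are assumed to carry a
# cumulative `logprob` attribute; `expand` and `rerank` are placeholders.
def beam_search_with_manipulation(start, expand, rerank, beam_size,
                                  manipulate_at, max_len):
    beam = [start]
    for t in range(max_len):
        candidates = [h2 for h in beam for h2 in expand(h)]  # grow hypotheses
        candidates.sort(key=lambda h: h.logprob, reverse=True)
        if t in manipulate_at:
            # Rerank a wider pool of partial hypotheses so likely-bad ones are
            # dropped early, freeing slots for ones vanilla search would lose.
            pool = candidates[:2 * beam_size]
            beam = sorted(pool, key=rerank, reverse=True)[:beam_size]
        else:
            beam = candidates[:beam_size]
    return max(beam, key=rerank)
```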
|
We consider the problem of soft scattering for the analogue of pion states in
gauge-fermion theories which approach a conformal fixed point in the infrared
limit. Introducing a fermion mass into such a theory will explicitly break both
scale invariance and chiral symmetry, leading to confinement and a spectrum of
bound states. We argue that in such a theory, the pion scattering length
diverges in the limit of zero fermion mass, in sharp contrast to QCD-like
theories where the chiral Lagrangian predicts a vanishing scattering length. We
demonstrate this effect both with a simple dimensional argument, and in a
generalized linear sigma model which we argue can be used to describe the
interactions of light scalar and pseudoscalar bound states in the soft limit of
a mass-deformed infrared-conformal theory. As a result, lattice calculations of
pion scattering lengths could be a sensitive probe for infrared scale
invariance in gauge-fermion theories.
|
Context: Interest in software engineering (SE) methodologies and tools has
been complemented in recent years by research efforts oriented towards
understanding the human processes involved in software development. This shift
has been imperative given reports of inadequately performing teams and the
consequent growing emphasis on individuals and team relations in contemporary
SE methods. Objective: While software repositories have frequently been studied
with a view to explaining such human processes, research has tended to use
primarily quantitative analysis approaches. There is concern, however, that
such approaches can provide only a partial picture of the software process.
Given the way human behavior is nuanced within psychological and social
contexts, it has been asserted that a full understanding may only be achieved
through deeper contextual enquiries. Method: We have followed such an approach and have applied data mining, social network analysis (SNA), psycholinguistic analysis and directed content analysis (CA) to study the way core developers at IBM Rational Jazz
contribute their social and intellectual capital, and have compared the
attitudes, interactions and activities of these members to those of their less
active counterparts. Results: Among our results, we uncovered that Jazz's core
developers worked across multiple roles, and were crucial to their teams'
organizational, intra-personal and inter-personal processes. Additionally,
although these individuals were highly task- and achievement-focused, they were
also largely responsible for maintaining positive team atmosphere, and for
providing context awareness in support of their colleagues. Conclusion: Our
results suggest that high-performing distributed agile teams rely on both
individual and collective efforts, as well as organizational environments that
promote informal and organic work structures. (Abridged)
|
Thanks to missions like Kepler and TESS, we now have access to tens of
thousands of high precision, fast cadence, and long baseline stellar
photometric observations. In principle, these light curves encode a vast amount
of information about stellar variability and, in particular, about the
distribution of starspots and other features on their surfaces. Unfortunately,
the problem of inferring stellar surface properties from a rotational light
curve is famously ill-posed, as it often does not admit a unique solution.
Inference about the number, size, contrast, and location of spots can therefore
depend very strongly on the assumptions of the model, the regularization
scheme, or the prior. The goal of this paper is twofold: (1) to explore the
various degeneracies affecting the stellar light curve "inversion" problem and
their effect on what can and cannot be learned from a stellar surface given
unresolved photometric measurements; and (2) to motivate ensemble analyses of
the light curves of many stars at once as a powerful data-driven alternative to
common priors adopted in the literature. We further derive novel results on the
dependence of the null space on stellar inclination and limb darkening and show
that single-band photometric measurements cannot uniquely constrain quantities
like the total spot coverage without the use of strong priors. This is the
first in a series of papers devoted to the development of novel algorithms and
tools for the analysis of stellar light curves and spectral time series, with
the explicit goal of enabling statistically robust inference about their
surface properties.
|
Graph representation of structured data can facilitate the extraction of stereoscopic features, and it has demonstrated excellent performance when combined with deep learning systems, in the form of the so-called Graph Neural Networks (GNNs). Choosing a promising architecture for constructing GNNs can be cast as a hyperparameter optimisation problem, a very challenging task due to the size of
the underlying search space and high computational cost for evaluating
candidate GNNs. To address this issue, this research presents a novel genetic
algorithm with a hierarchical evaluation strategy (HESGA), which combines the
full evaluation of GNNs with a fast evaluation approach. By using full
evaluation, a GNN is represented by a set of hyperparameter values and trained
on a specified dataset, and root mean square error (RMSE) will be used to
measure the quality of the GNN represented by the set of hyperparameter values
(for regression problems). In the proposed fast evaluation process, training is interrupted at an early stage, and the difference in RMSE values between the starting and interrupted epochs is used as a fast score, which indicates the potential of the GNN under consideration. To coordinate both types of
evaluations, the proposed hierarchical strategy uses the fast evaluation in a
lower level for recommending candidates to a higher level, where the full
evaluation will act as a final assessor to maintain a group of elite
individuals. To validate the effectiveness of HESGA, we apply it to optimise
two types of deep graph neural networks. The experimental results on three
benchmark datasets demonstrate its advantages compared to Bayesian
hyperparameter optimisation.
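The two-level evaluation can be sketched as follows (a minimal illustration under assumptions; `train_epochs` is a placeholder for training the GNN encoded by hyperparameters `hp` for a given number of epochs and returning validation RMSE):

```python
# Sketch of HESGA-style hierarchical evaluation. `train_epochs(hp, n)` is a
# placeholder that trains the GNN encoded by `hp` for n epochs and returns
# the validation RMSE.
def fast_score(hp, train_epochs, early=5):
    rmse_start = train_epochs(hp, 1)
    rmse_early = train_epochs(hp, early)
    return rmse_start - rmse_early      # a large drop = promising candidate

def hierarchical_select(population, train_epochs, k_fast=10, k_full=3,
                        full_epochs=200):
    # Lower level: cheap fast scores recommend candidates to the upper level.
    shortlist = sorted(population,
                       key=lambda hp: fast_score(hp, train_epochs),
                       reverse=True)[:k_fast]
    # Upper level: full training acts as the final assessor of the elites.
    return sorted(shortlist,
                  key=lambda hp: train_epochs(hp, full_epochs))[:k_full]
```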
|
Building on the success of the ADReSS Challenge at Interspeech 2020, which
attracted the participation of 34 teams from across the world, the ADReSSo
Challenge targets three difficult automatic prediction problems of societal and
medical relevance, namely: detection of Alzheimer's Dementia, inference of
cognitive testing scores, and prediction of cognitive decline. This paper
presents these prediction tasks in detail, describes the datasets used, and
reports the results of the baseline classification and regression models we
developed for each task. A combination of acoustic and linguistic features
extracted directly from audio recordings, without human intervention, yielded a
baseline accuracy of 78.87% for the AD classification task, an MMSE prediction root mean squared error (RMSE) of 5.28, and 68.75% accuracy for the cognitive
decline prediction task.
|
A solution is proposed to a longstanding open problem in kinetic theory,
namely, given any set of realizable velocity moments up to order $2n$, a closure for the moment of order $2n+1$ is constructed for which the moment system found from the free-transport term in the one-dimensional (1-D) kinetic equation is globally hyperbolic and in conservative form. In prior work, the hyperbolic quadrature method of moments (HyQMOM) was introduced to close this moment system up to fourth order ($n \le 2$). Here, HyQMOM is reformulated and extended to arbitrary even-order moments. The HyQMOM closure is defined based on the properties of the monic orthogonal polynomials $Q_n$ that are uniquely defined by the velocity moments up to order $2n-1$. Thus, HyQMOM is strictly a moment closure and does not rely on the reconstruction of a velocity distribution function with the same moments. On the boundary of moment space, $n$ double roots of the characteristic polynomial $P_{2n+1}$ are the roots of $Q_n$, while in the interior, $P_{2n+1}$ and $Q_n$ share $n$ roots. The remaining $n+1$ roots of $P_{2n+1}$ bound and separate the roots of $Q_n$. An efficient algorithm, based on the Chebyshev algorithm, for computing the moment of order $2n+1$ from the moments up to order $2n$ is developed. The analytical solution to a 1-D Riemann problem is used to demonstrate convergence of the HyQMOM closure with increasing $n$.
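For orientation, the monic orthogonal polynomials $Q_k$ above satisfy the standard three-term recurrence, whose coefficients the Chebyshev algorithm computes from the moments (a textbook relation; the notation is ours):

```latex
% Three-term recurrence for the monic orthogonal polynomials Q_k associated
% with the moment sequence; a_k, b_k follow from the moments up to order 2n-1.
\begin{aligned}
Q_{-1}(v) &= 0, \qquad Q_0(v) = 1, \\
Q_{k+1}(v) &= (v - a_k)\,Q_k(v) - b_k\,Q_{k-1}(v), \qquad k = 0, 1, \dots, n-1.
\end{aligned}
```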
|
Renormalisation group (RG) methods provide one of the most important
techniques for analysing the physics of many-body systems, both analytically
and numerically. By iterating an RG map, which "coarse-grains" the description
of a many-body system and generates a flow in the parameter space, physical
properties of interest can be extracted even for complex models. RG analysis
also provides an explanation of physical phenomena such as universality. Many
systems exhibit simple RG flows, but more complicated -- even chaotic --
behaviour is also known. Nonetheless, the structure of such RG flows can still
be analysed, elucidating the physics of the system, even if specific
trajectories may be highly sensitive to the initial point. In contrast, recent
work has shown that important physical properties of quantum many-body systems,
such as the spectral gap and phase diagram, can be uncomputable.
In this work, we show that such undecidable systems exhibit a novel type of
RG flow, revealing a qualitatively different and more extreme form of
unpredictability than chaotic RG flows. In contrast to chaotic RG flows in
which initially close points can diverge exponentially, trajectories under
these novel uncomputable RG flows can remain arbitrarily close together for an
uncomputable number of iterations, before abruptly diverging to different fixed
points that are in separate phases. The structure of such uncomputable RG flows
is so complex that it cannot be computed or approximated, even in principle. We
give a mathematically rigorous construction of the block-renormalisation-group
map for the original undecidable many-body system that appeared in the
literature (Cubitt, P\'erez-Garcia, Wolf, Nature 528, 207-211 (2015)). We prove
that each step of this RG map is computable, and that it converges to the
correct fixed points, yet the resulting RG flow is uncomputable.
|
This article studies estimation of a stationary autocovariance structure in
the presence of an unknown number of mean shifts. Here, a Yule-Walker moment
estimator for the autoregressive parameters in a dependent time series
contaminated by mean shift changepoints is proposed and studied. The estimator
is based on first order differences of the series and is proven consistent and
asymptotically normal when the number of changepoints $m$ and the series length $N$ satisfy $m/N \rightarrow 0$ as $N \rightarrow \infty$.
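A schematic of the classical Yule-Walker step applied to the differenced series (our own illustration; note that differencing alters the autocovariance structure, which is precisely why the paper develops a corrected estimator rather than the naive one below):

```python
# Naive Yule-Walker estimation of AR(p) coefficients from the first
# differences of a series. Differencing removes mean shifts but biases the
# autocovariances, so the paper's corrections are needed in practice.
import numpy as np

def yule_walker_on_differences(x, p):
    d = np.diff(x)                          # differencing removes mean shifts
    d = d - d.mean()
    n = len(d)
    gamma = np.array([np.dot(d[:n - k], d[k:]) / n for k in range(p + 1)])
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, gamma[1:])    # solve the Yule-Walker system

rng = np.random.default_rng(1)
x = np.zeros(2000)
for t in range(1, 2000):                    # AR(1) series with phi = 0.6
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
x[1000:] += 5.0                             # a single mean-shift changepoint
print(yule_walker_on_differences(x, p=1))   # biased without the corrections
```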
|
The quality of controlling a system of optical cavities in the
Tavis-Cummings-Hubbard (TCH) model is estimated with the examples of quantum
gates, quantum walks on graphs, and of the detection of singlet states. This
type of control of complex systems is important for quantum computing, for the
optical interpretation of mechanical movements, and for quantum cryptography,
where singlet states of photons and charges play an essential role. It has been
found that the main reason for the decrease of the control quality in the TCH model is the finite width of the atomic spectral lines, which is itself related to the time-energy uncertainty relation. This paper evaluates the
quality of a CSign-type quantum gate based on asynchronous atomic excitations
and on the optical interpretation of the motion of a free particle.
|
It is important to calculate and analyze temperature and humidity prediction accuracies in quantitative meteorological forecasting. This study adapts extant neural network methods to improve predictive accuracy. To this end, we analyze and explore the predictive accuracy and performance of neural networks using two combined meteorological factors (temperature and humidity). Simulation studies are performed by applying the artificial neural network (ANN), deep neural network (DNN), extreme learning machine (ELM), long short-term memory (LSTM), and long short-term memory with peephole connections (LSTM-PC) machine learning methods, and the prediction accuracies of the methods are compared with one another. Data are extracted from low-frequency time series of ten metropolitan cities of South Korea from March 2014 to February 2020 to validate our observations. In tests of the robustness of the methods, LSTM is found to outperform the other four methods in predictive accuracy. In particular, the temperature prediction of LSTM in summer in Tongyeong has a root mean squared error (RMSE) of 0.866, lower than that of the other neural network methods, while the mean absolute percentage error (MAPE) of LSTM for humidity prediction in summer in Mokpo is 5.525, significantly better than in the other metropolitan cities.
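A bare-bones LSTM forecaster of the kind compared in the study (a sketch under assumptions; the window length, layer size, and synthetic data are illustrative, not the study's configuration):

```python
# Illustrative LSTM regressor for (temperature, humidity) sequences.
# Window size, units, and the synthetic data below are placeholders.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(series, window=24):
    # Sliding windows of past observations -> next-step temperature target.
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:, 0]
    return X, y

series = np.random.default_rng(0).standard_normal((1000, 2))  # (temp, humid)
X, y = make_windows(series)

model = Sequential([LSTM(32, input_shape=(24, 2)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```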
|
The nonequilibrium dynamics of vortices in 2D quantum fluids can be predicted
by accounting for the way in which vortex ellipticity is coupled to the
gradient in background fluid density. In the absence of nonlinear interactions,
a harmonically trapped fluid can be analyzed analytically to show that single
vortices will move in an elliptic trajectory that has the same orientation and
aspect ratio as the vortex projection itself. This allows the vortex
ellipticity to be estimated through observation of its trajectory. A
combination of analysis and numerical simulation is then used to show that
nonlinear interactions cause the vortex orientation to precess, and that the
rate of vortex precession is once again mimicked by a precession of the
elliptical trajectory. Both vortex ellipticity and rate of precession can
therefore be inferred by observing its motion in a trap. An ability to
anticipate and control local vortex structure and vortex trajectory is expected
to prove useful in designing few-vortex systems in which ellipticity is a
ubiquitous, as-yet-unharnessed feature.
|
Identifying the mutations that drive cancer growth is key in clinical
decision making and precision oncology. As driver mutations confer selective
advantage and thus have an increased likelihood of occurrence, frequency-based
statistical models are currently favoured. These methods are not suited to rare, low-frequency driver mutations. An alternative approach is to use functional-impact scores; however, methods using this approach are highly prone to false positives. In this paper, we propose a novel combination
method for driver mutation identification, which uses the power of both
statistical modelling and functional-impact based methods. Initial results show
this approach outperforms the state-of-the-art methods in terms of precision,
and provides comparable performance in terms of area under receiver operating
characteristic curves (AU-ROC). We believe that data-driven systems based on
machine learning, such as these, will become an integral part of precision
oncology in the near future.
|
Considering the non-centrosymmetric, non-magnetic double Weyl semimetal (WSM) SrSi$_2$, we investigate the electron and hole pockets in the bulk Fermi surface, which enables us to characterize the material as a type-I WSM. We study the structural handedness of the material and correlate it with the distinct surface Fermi surfaces at two opposite surfaces following an energy evolution. The Fermi arc singlet becomes a doublet with the onset of spin-orbit coupling, in accordance with the topological charge of the Weyl nodes (WNs). A finite energy separation between WNs of opposite chirality in SrSi$_2$ allows us to compute the circular photogalvanic effect (CPGE). Using the three-band formula, we show that the CPGE is only quantized when the Fermi level is chosen in the vicinity of the WN residing at the higher energy. Surprisingly, for the other WN of opposite chirality at the lower energy, the CPGE is not found to be quantized. Such behavior of the CPGE is in complete contrast to time-reversal-breaking WSMs, where the CPGE is quantized to two opposite plateaus depending on the topological charge of the activated WN. We further analyze our findings by examining the momentum-resolved CPGE. Finally, we show that the two-band formula for the CPGE is not able to capture the quantization that is apprehended by the three-band formula.
|
In this article a surprising result is demonstrated using the neural tangent kernel. This kernel is defined as the inner product of the gradients of an underlying model with respect to its parameters, evaluated at pairs of training points. This kernel is
used to perform kernel regression. The surprising thing is that the accuracy of
that regression is independent of the accuracy of the underlying network.
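Concretely, the empirical NTK and the kernel regression built from it can be computed as follows (a minimal sketch with a toy one-hidden-layer network of our own choosing; the width, data, and small ridge term are illustrative):

```python
# Empirical neural tangent kernel (NTK) of a toy one-hidden-layer network,
# used for kernel (ridge) regression. Width and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
m = 256                                    # hidden width
w = rng.standard_normal(m)                 # input-to-hidden weights (1D input)
v = rng.standard_normal(m)                 # hidden-to-output weights

def grad_theta(x):
    # Gradient of f(x) = (1/sqrt(m)) * sum_i v_i * tanh(w_i * x)
    # with respect to all parameters (v, w), stacked into one vector.
    h = np.tanh(w * x)
    return np.concatenate([h, v * (1 - h**2) * x]) / np.sqrt(m)

def ntk(xs, zs):
    # NTK entries are inner products of parameter gradients.
    G1 = np.stack([grad_theta(x) for x in xs])
    G2 = np.stack([grad_theta(z) for z in zs])
    return G1 @ G2.T

x_train = np.linspace(-2, 2, 20)
y_train = np.sin(x_train)
K = ntk(x_train, x_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(K)), y_train)  # small ridge
x_test = np.linspace(-2, 2, 101)
y_pred = ntk(x_test, x_train) @ alpha       # NTK kernel regression
```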
|
Theoretical models show that the power of relativistic jets of active
galactic nuclei depends on the spin and mass of the central supermassive black
holes, as well as the accretion. Here we report an analysis of archival
observations of a sample of blazars. We find a significant correlation between
jet kinetic power and the spin of supermassive black holes. At the same time,
we use multiple linear regression to analyze the relationship between jet
kinetic power and accretion, spin and black hole mass. We find that the spin of
supermassive black holes and accretion make the most important contributions to the jet kinetic power. The contribution rates of both the spin of supermassive
black holes and accretion are more than 95\%. These results suggest that the
spin energy of supermassive black holes powers the relativistic jets. The jet
production efficiency of almost all Fermi blazars can be explained by
moderately thin magnetically arrested accretion disks around rapidly spinning
black holes.
|
By viewing $\tilde{A}$ and $\tilde{D}$ type cluster algebras as triangulated
surfaces, we find all cluster variables in terms of either (i) the frieze
pattern (or bipartite belt) or (ii) the periodic quantities previously found
for the cluster map associated with these frieze patterns. We show that these
cluster variables form friezes which are precisely the ones found in [1] by
applying the cluster character to the associated cluster category.
|
The representation of ground states of fermionic quantum impurity problems as superpositions of Gaussian states has recently been given a rigorous mathematical foundation [S. Bravyi and D. Gosset, Comm. Math. Phys. 356, 451 (2017)]. It is natural to ask how many parameters are required for an efficient variational scheme based on this representation. An upper bound is $\mathcal O(N^2)$, where $N$ is the system size, which corresponds to the number of parameters needed to specify an arbitrary Gaussian state. We provide an
alternative representation, with more favorable scaling, only requiring
$\mathcal O(N)$ parameters, that we illustrate for the interacting resonant
level model. We achieve the reduction by associating mean-field-like parent
Hamiltonians with the individual terms in the superposition, using physical
insight to retain only the most relevant channels in each parent Hamiltonian.
We benchmark our variational ansatz against the Numerical Renormalization
Group, and compare our results to existing variational schemes of a similar
nature to ours. Apart from the ground state energy, we also study the spectrum
of the correlation matrix -- a very stringent measure of accuracy. Our approach
outperforms some existing schemes and remains quantitatively accurate in the
numerically challenging near-critical regime.
|
In this manuscript we show that the metric mean dimension of a free semigroup action satisfies three variational principles: (a) the first one is based on a definition of Shapira's entropy, introduced in \cite{SH} for a single dynamical system and extended to semigroup actions in this note; (b) the second one concerns a definition of Katok's entropy for a free semigroup action introduced in \cite{CRV-IV}; (c) lastly, we consider the local entropy function for a free semigroup action and show that the metric mean dimension satisfies a variational principle in terms of this function. Our results are inspired by those obtained in \cite{LT2019}, \cite{VV}, \cite{GS1} and \cite{RX}.
|
Machine learning is a modern approach to problem-solving and task automation.
In particular, machine learning is concerned with the development and
applications of algorithms that can recognize patterns in data and use them for
predictive modeling. Artificial neural networks are a particular class of
machine learning algorithms and models that evolved into what is now described
as deep learning. Given the computational advances made in the last decade,
deep learning can now be applied to massive data sets and in innumerable
contexts. Therefore, deep learning has become its own subfield of machine
learning. In the context of biological research, it has been increasingly used
to derive novel insights from high-dimensional biological data. To make the
biological applications of deep learning more accessible to scientists who have
some experience with machine learning, we solicited input from a community of
researchers with varied biological and deep learning interests. These
individuals collaboratively contributed to this manuscript's writing using the
GitHub version control platform and the Manubot manuscript generation toolset.
The goal was to articulate a practical, accessible, and concise set of
guidelines and suggestions to follow when using deep learning. In the course of
our discussions, several themes became clear: the importance of understanding
and applying machine learning fundamentals as a baseline for utilizing deep
learning, the necessity for extensive model comparisons with careful
evaluation, and the need for critical thought in interpreting results generated
by deep learning, among others.
|
We study the barotropic compressible Navier-Stokes equations with Navier-type
boundary condition in a two-dimensional simply connected bounded domain with
$C^{\infty}$ boundary $\partial\Omega.$ By some new estimates on the boundary
related to the Navier-type slip boundary condition, the classical solution to
the initial-boundary-value problem of this system exists globally in time
provided the initial energy is suitably small even if the density has large
oscillations and contains vacuum states. Furthermore, we also prove that the oscillation of the density will grow unboundedly in the long run at an exponential rate provided vacuum (even at a single point) appears initially. To the best of our knowledge,
this is the first result concerning the global existence of classical solutions
to the compressible Navier-Stokes equations with Navier-type slip boundary
condition and the density containing vacuum initially for general 2D bounded
smooth domains.
|
Community detection is a key task to further understand the function and the
structure of complex networks. Therefore, a strategy used to assess this task
must be able to avoid biased and incorrect results that might invalidate
further analyses or applications that rely on such communities. Two widely used
strategies to assess this task are generally known as structural and
functional. The structural strategy basically consists in detecting and
assessing such communities by using multiple methods and structural metrics. On
the other hand, the functional strategy might be used when ground truth data
are available to assess the detected communities. However, the evaluation of
communities based on such strategies is usually done in experimental
configurations that are largely susceptible to biases, a situation that is
inherent to algorithms, metrics and network data used in this task.
Furthermore, such strategies are not systematically combined in a way that
allows for the identification and mitigation of bias in the algorithms, metrics
or network data to converge into more consistent results. In this context, the
main contribution of this article is an approach that supports a robust quality
evaluation when detecting communities in real-world networks. In our approach,
we measure the quality of a community by applying the structural and functional
strategies, and the combination of both, to obtain different pieces of
evidence. Then, we consider the divergences and the consensus among the pieces
of evidence to identify and overcome possible sources of bias in community
detection algorithms, evaluation metrics, and network data. Experiments
conducted with several real and synthetic networks provided results that show
the effectiveness of our approach to obtain more consistent conclusions about
the quality of the detected communities.
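A toy version of gathering structural and functional evidence for one detected partition (our own sketch; the paper's consensus analysis across methods, metrics, and networks is more elaborate):

```python
# Toy combination of structural (modularity) and functional (NMI against
# ground truth) evidence for a detected partition; consensus logic omitted.
import networkx as nx
from networkx.algorithms import community
from sklearn.metrics import normalized_mutual_info_score

G = nx.karate_club_graph()
truth = [G.nodes[v]["club"] for v in G]           # ground-truth labels

detected = community.greedy_modularity_communities(G)
labels = [next(i for i, c in enumerate(detected) if v in c) for v in G]

structural = community.modularity(G, detected)            # structural evidence
functional = normalized_mutual_info_score(truth, labels)  # functional evidence
print(f"modularity={structural:.3f}, NMI={functional:.3f}")
```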
|
In this note, we consider the complexity of maintaining the longest
increasing subsequence (LIS) of an array under (i) inserting an element, and
(ii) deleting an element of an array. We show that no algorithm can support
queries and updates in time $\mathcal{O}(n^{1/2-\epsilon})$ and
$\mathcal{O}(n^{1/3-\epsilon})$ for the dynamic LIS problem, for any constant
$\epsilon>0$, when the elements are weighted or the algorithm supports
1D-queries (on subarrays), respectively, assuming the All-Pairs Shortest Paths
(APSP) conjecture or the Online Boolean Matrix-Vector Multiplication (OMv)
conjecture. The main idea in our construction comes from the work of Abboud and
Dahlgaard [FOCS 2016], who proved conditional lower bounds for dynamic planar
graph algorithms. However, this needs to be appropriately adjusted and
translated to obtain an instance of the dynamic LIS problem.
|
We present a high-resolution (R=140,000) spectrum of the bright quasar
HE0001-2340 (z=2.26), obtained with ESPRESSO at the Very Large Telescope. We
analyse three systems at z=0.45, z=1.65, and z=2.19 using multiple-component
Voigt-profile fitting. We also compare our spectrum with those obtained with
VLT/UVES, covering a total period of 17 years. We disentangle turbulent and
thermal broadening in many components spread over about 400 km/s in the z~2.19
sub-DLA system. We derive an average temperature of 16000+/-1300 K, i.e., about
twice the canonical value of the warm neutral medium in the Galactic
interstellar medium. A comparison with other high-z, low-metallicity absorbers
reveals an anti-correlation between gas temperature and total HI column
density. Although requiring confirmation, this could be the first observational
evidence of a thermal decrease with galacto-centric distance, i.e., we may be
witnessing a thermal transition between the circum-galactic medium and the
cooler ISM. We revisit the Mg isotopic ratios at z=0.45 and z=1.65 and
constrain them to be xi = (26Mg+25Mg)/24Mg <0.6 and <1.4 in these two systems,
respectively. These values are consistent with the standard Solar ratio, i.e.,
we do not confirm strong enhancement of heavy isotopes previously inferred from
UVES data. Finally, we confirm the partial coverage of the quasar emission-line
region by a FeI-bearing cloud in the z=0.45 system and present evidence for
velocity sub-structure of the gas that has Doppler parameters of the order of
only ~0.3 km/s. This work demonstrates the uniqueness of high-fidelity,
high-resolution optical spectrographs on large telescopes as tools to
investigate the thermal state of the gas in and around galaxies as well as its
spatial and velocity structure on small scales, and to constrain the associated
stellar nucleosynthetic history. [abridged]
|
The anomalous dimensions for the interpolating currents of baryons are
indispensable inputs in a serious analysis of baryon QCD sum rules. However,
the results in the literature are vague. In view of this, in this work, we
investigate the one-loop anomalous dimensions for some interpolating currents
such as those of $\Lambda_{Q}$ and the proton. This work is also of pedagogical significance.
|
Four adaptations of the smoothed aggregation algebraic multigrid (SA-AMG)
method are proposed with an eye towards improving the convergence and
robustness of the solver in situations when the discretization matrix contains
many weak connections. These weak connections can cause higher than expected
levels of fill-in within the coarse discretization matrices and can also give
rise to sub-optimal smoothing within the prolongator smoothing phase. These
smoothing drawbacks are due to the relatively small size of some diagonal
entries within the filtered matrix that one obtains after dropping the weak
connections. The new algorithms consider modifications to the Jacobi-like step
that defines the prolongator smoother, modifications to the filtered matrix,
and also direct modifications to the resulting grid transfer operators.
Numerical results are given illustrating the potential benefits of the proposed
adaptations.
|
Materials with strong magnetoresistive responses are the backbone of
spintronic technology, magnetic sensors, and hard drives. Among them, manganese
oxides with a mixed valence and a cubic perovskite structure stand out due to
their colossal magnetoresistance (CMR). A double exchange interaction underlies
the CMR in manganates, whereby charge transport is enhanced when the spins on
neighboring Mn3+ and Mn4+ ions are parallel. Prior efforts to find different
materials or mechanisms for CMR resulted in a much smaller effect. Here we show
an enormous CMR at low temperatures in EuCd2P2 without manganese, oxygen, mixed
valence, or cubic perovskite structure. EuCd2P2 has a layered trigonal lattice
and exhibits antiferromagnetic ordering at 11 K. The magnitude of CMR (104
percent) in as-grown crystals of EuCd2P2 rivals the magnitude in optimized thin
films of manganates. Our magnetization, transport, and synchrotron X-ray data
suggest that strong magnetic fluctuations are responsible for this phenomenon.
The realization of CMR at low temperatures without heterovalency leads to a new
regime for materials and technologies related to antiferromagnetic spintronics.
|
Group sequential design (GSD) is widely used in clinical trials in which
correlated tests of multiple hypotheses are used. Multiple primary objectives
resulting in tests with known correlations include evaluating 1) multiple
experimental treatment arms, 2) multiple populations, 3) the combination of
multiple arms and multiple populations, or 4) any asymptotically multivariate
normal tests. In this paper, we focus on the first 3 of these and extend the
framework of the weighted parametric multiple test procedure from fixed designs
with a single analysis per objective to a GSD setting where different
objectives may be assessed at the same or different times, each in a group
sequential fashion. Pragmatic methods for design and analysis of weighted
parametric group sequential design (WPGSD) under closed testing procedures are
proposed to maintain the strong control of familywise Type I error rate (FWER)
when correlations between tests are incorporated. This results in the ability
to relax testing bounds compared to designs not fully adjusting for known
correlations, increasing power or allowing decreased sample size. We illustrate
the proposed methods using clinical trial examples and conduct a simulation
study to evaluate the operating characteristics.
|
The equilibrium of magneto-elastic rods, formed of an elastic matrix
containing a uniform distribution of paramagnetic particles, that are subject
to terminal loads and are immersed in a uniform magnetic field, is studied. The
deduced nonlinear equilibrium equations are fully consistent with Kirchhoff's
theory in the sense that they hold at the same order of magnitude. Exact
solutions of those equations in terms of Weierstrass elliptic functions are
presented with reference to magneto-elastic cantilevers that undergo planar
deformations under the action of a terminal force and a magnetic field whose
directions are either parallel or orthogonal. The exact solutions are applied
to the study of a problem of remotely controlled deformation of a rod and to a
bifurcation problem in which the end force and the magnetic field act as an
imperfection parameter and a bifurcation parameter, respectively.
|
The likelihood-informed subspace (LIS) method offers a viable route to
reducing the dimensionality of high-dimensional probability distributions
arising in Bayesian inference. LIS identifies an intrinsic low-dimensional
linear subspace where the target distribution differs the most from some
tractable reference distribution. Such a subspace can be identified using the
leading eigenvectors of a Gram matrix of the gradient of the log-likelihood
function. Then, the original high-dimensional target distribution is
approximated through various forms of marginalization of the likelihood
function, in which the approximated likelihood only has support on the
intrinsic low-dimensional subspace. This approximation enables the design of
inference algorithms that can scale sub-linearly with the apparent
dimensionality of the problem. Intuitively, the accuracy of the approximation,
and hence the performance of the inference algorithms, are influenced by three
factors -- the dimension truncation error in identifying the subspace, Monte
Carlo error in estimating the Gram matrices, and Monte Carlo error in
constructing marginalizations. This work establishes a unified framework to
analyze each of these three factors and their interplay. Under mild technical
assumptions, we establish error bounds for a range of existing dimension
reduction techniques based on the principle of LIS. Our error bounds also
provide useful insights into the accuracy of these methods. In addition, we
analyze the integration of LIS with sampling methods such as Markov Chain Monte
Carlo (MCMC) and sequential Monte Carlo (SMC). We also demonstrate the
applicability of our analysis on a linear inverse problem with Gaussian prior,
which shows that all the estimates can be dimension-independent if the prior
covariance is a trace-class operator.
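In its simplest form, the subspace identification step above reduces to an eigendecomposition of a Monte Carlo Gram matrix (a minimal sketch under assumptions; the log-likelihood gradient and sampling distribution below are illustrative placeholders):

```python
# Minimal likelihood-informed subspace (LIS) sketch: build the Gram matrix of
# log-likelihood gradients at reference samples, keep leading eigenvectors.
# The gradient and the toy linear-Gaussian setup below are placeholders.
import numpy as np

def lis_basis(grad_log_lik, samples, rank):
    grads = np.stack([grad_log_lik(x) for x in samples])
    H = grads.T @ grads / len(samples)      # Monte Carlo Gram matrix
    vals, vecs = np.linalg.eigh(H)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:rank]]            # columns span the estimated LIS

# Toy example: the likelihood informs only a few directions of a 50-dim space.
rng = np.random.default_rng(0)
d, r = 50, 3
A = rng.standard_normal((r, d))             # data depend on only 3 directions
y = rng.standard_normal(r)
grad = lambda x: A.T @ (y - A @ x)          # gradient of Gaussian log-likelihood
U = lis_basis(grad, rng.standard_normal((500, d)), rank=r)
```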
|
The asteroid (16) Psyche is the largest of the M-type asteroids, which have
been hypothesized to be the cores of disrupted planetesimals and the parent
bodies of the iron meteorites. While recent evidence has accumulated against a
pure metal composition for Psyche, its spectrum and radar properties remain
anomalous. We observed (16) Psyche in thermal emission with the Atacama Large
(sub-)Millimeter Array (ALMA) at a resolution of 30 km over 2/3 of its
rotation. The diurnal temperature variations are at the $\sim$10 K level over
most of the surface and are best fit by a smooth surface with a thermal inertia
of 280$\pm$100 J m$^{-2}$ K$^{-1}$ s$^{-1/2}$. We measure a millimeter
emissivity of 0.61$\pm$0.02, which we interpret via a model that treats the
surface as a porous mixture of silicates and metals, where the latter may take
the form of iron sulfides/oxides or alternatively as conducting metallic
inclusions. The emissivity indicates a metal content of no less than 20\% and
potentially much higher, but the polarized emission that should be present for
a surface with $\geq$20\% metal content is almost completely absent. This
requires a highly scattering surface, which may be due to the presence of
reflective metallic inclusions. If such is the case, a consequence is that
metal-rich asteroids may produce less polarized emission than metal-poor
asteroids, exactly the opposite prediction from standard theory, arising from
the dominance of scattering over the bulk material properties.
|
For a Lie group $G$, let $B_{com} G$ be the classifying space for
commutativity. Let $E_{com} G$ be the total space of the principal $G$-bundle
associated to $B_{com} G$. In this article, we present a computation of the
cohomology of $E_{com} U(3)$ using the spectral sequence associated to a
homotopy colimit. As a part of our computation we will also compute the
integral cohomology of $U(3)/N(T)$ and $(U(3)/T) \times_{\Sigma_3} (U(3)/T)$
where $T$ is a maximal torus of $U(3)$ with normalizer $N(T)$.
|
In this paper, the well-known Faulkner construction is revisited and adapted
to include the super case, which gives a natural correspondence between
generalized Jordan (super)pairs and faithful Lie (super)algebra (super)modules,
under certain constraints (bilinear forms with properties analogous to the ones
of a Killing form are required, and only finite-dimensional objects are
considered). We always assume that the base field has characteristic different
from $2$.
It is also proven that associated objects in this Faulkner correspondence
have isomorphic automorphism group schemes. Finally, this correspondence will
be used to transfer the construction of the tensor product to the class of
generalized Jordan (super)pairs with "good" bilinear forms.
|
The important, and often dominant, role of tunneling in low temperature
kinetics has resulted in numerous theoretical explorations into the methodology
for predicting it. Nevertheless, there are still key aspects of the derivations
that are lacking, particularly for non-separable systems in the low temperature
regime, and further explorations of the physical factors affecting the
tunneling rate are warranted. In this work we obtain a closed-form rate
expression for the tunneling rate constant that is a direct analog of the
rigid-rotor-harmonic-oscillator expression. This expression introduces a novel
"entanglement factor" that modulates the reaction rate. Furthermore, we are
able to extend this expression, which is valid for non-separable systems at low
temperatures, to properly account for the conservation of angular momentum. In
contrast, previous calculations have considered only vibrational transverse
modes and so effectively employ a decoupled rotational partition function for
the orientational modes. We also suggest a simple theoretical model to describe
the tunneling effects in the vicinity of the crossover temperature (the
temperature where tunneling becomes the dominating mechanism). This model
allows one to naturally classify, interpret, and predict experimental data.
Among other things, it quantitatively explains in simple terms the so-called
"quantum bobsled" effect, also known as the negative centrifugal effect, which
is related to curvature of the reaction path. Taken together, the expressions
obtained here allow one to predict the thermal and $E$-resolved rate constants
over broad ranges of temperatures and energies.
|
Convolutional Neural Networks (CNNs) are the state-of-the-art algorithms for
the processing of images. However, the configuration and training of these networks is a complex task requiring deep domain knowledge, experience and much trial and error. Using genetic algorithms, competitive CNN topologies for image recognition can be produced for any specific purpose; however, in previous work this has come at a high computational cost. In this work, two novel approaches to the use of these algorithms are presented, effective in reducing complexity and training time by nearly 20%. This is accomplished via
regularisation directly on training time, and the use of partial training to
enable early ranking of individual architectures. Both approaches are validated
on the benchmark CIFAR10 data set, and maintain accuracy.
|
In this paper, the structure of the second relative homology and the relative
stem cover of the direct sum of two pairs of Leibniz algebras are determined by
means of the non-abelian tensor product of Leibniz algebras. We also
characterize all pairs of finite dimensional nilpotent Leibniz algebras such
that...
|
We study two actions of the (degree 0) Picard group on the set of the
spanning trees of a finite ribbon graph. It is known that these two actions,
denoted $\beta_q$ and $\rho_q$ respectively, are independent of the base vertex
$q$ if and only if the ribbon graph is planar. Baker and Wang conjectured that
in a nonplanar ribbon graph without multiple edges there always exists a vertex
$q$ for which $\rho_q\neq\beta_q$. We prove the conjecture and extend it to a
class of ribbon graphs with multiple edges. We also give explicit examples
exploring the relationship between the two torsor structures in the nonplanar
case.
|
In this paper, we consider Legendre trajectories of trans-$S$-manifolds. We
obtain curvature characterizations of these curves and give a classification
theorem. We also investigate Legendre curves whose Frenet frame fields are linearly dependent on certain combinations of the characteristic vector fields of the trans-$S$-manifold.
|
Recent research has shown that it is possible to find interpretable
directions in the latent spaces of pre-trained Generative Adversarial Networks
(GANs). These directions enable controllable image generation and support a
wide range of semantic editing operations, such as zoom or rotation. The
discovery of such directions is often done in a supervised or semi-supervised
manner and requires manual annotations, which limits their use in practice. In
comparison, unsupervised discovery allows finding subtle directions that are
difficult to detect a priori. In this work, we propose a contrastive
learning-based approach to discover semantic directions in the latent space of
pre-trained GANs in a self-supervised manner. Our approach finds semantically
meaningful dimensions comparable with state-of-the-art methods.
|
The confirmation of the discrepancy with the Standard Model predictions in
the anomalous magnetic moment by the Muon g-2 experiment at Fermilab points to
a low scale of new physics. Flavour symmetries broken at low energies can
account for this discrepancy but these models are much more restricted, as they
would also generate off-diagonal entries in the dipole moment matrix.
Therefore, if we assume that the observed discrepancy in the muon $g-2$ is
explained by the contributions of a low-energy flavor symmetry, lepton flavour
violating processes can constrain the structure of the lepton mass matrices and
therefore the flavour symmetries themselves predicting these structures. We
apply these ideas to several discrete flavour symmetries popular in the
leptonic sector, such as $\Delta (27)$, $A_4$, and $A_5 \ltimes {\rm CP}$.
|
In this paper, we propose a novel discriminative model for online behavioral
analysis with application to emotion state identification. The proposed model
is able to extract more discriminative characteristics from behavioral data
effectively and find the direction of optimal projection efficiently to satisfy
requirements of online data analysis, leading to better utilization of the
behavioral information to produce more accurate recognition results.
|
Explainable artificial intelligence is the attempt to elucidate the workings
of systems too complex to be directly accessible to human cognition through
suitable side-information referred to as "explanations". We present a trainable
explanation module for convolutional image classifiers we call bounded logit
attention (BLA). The BLA module learns to select a subset of the convolutional
feature map for each input instance, which then serves as an explanation for
the classifier's prediction. BLA overcomes several limitations of the
instancewise feature selection method "learning to explain" (L2X) introduced by
Chen et al. (2018): 1) BLA scales to real-world sized image classification
problems, and 2) BLA offers a canonical way to learn explanations of variable
size. Due to its modularity BLA lends itself to transfer learning setups and
can also be employed as a post-hoc add-on to trained classifiers. Beyond
explainability, BLA may serve as a general purpose method for differentiable
approximation of subset selection. In a user study we find that BLA
explanations are preferred over explanations generated by the popular
(Grad-)CAM method.
|
Deep Neural Networks are known to be vulnerable to small, adversarially
crafted, perturbations. The current most effective defense methods against
these adversarial attacks are variants of adversarial training. In this paper,
we introduce a radically different defense trained only on clean images: a
sparse coding based frontend which significantly attenuates adversarial attacks
before they reach the classifier. We evaluate our defense on CIFAR-10 dataset
under a wide range of attack types (including $L_\infty$, $L_2$, and $L_1$ bounded attacks), demonstrating its promise as a general-purpose approach for defense.
|
A classical theorem on character degrees states that if a finite group has
fewer than four character degrees, then the group is solvable. We prove a
corresponding result on character values by showing that if a finite group has
fewer than eight character values in its character table, then the group is
solvable. This confirms a conjecture of T. Sakurai. We also classify
non-solvable groups with exactly eight character values.
|
We show that weak equivalences in a (cofibrantly generated) left Bousfield
localization of the projective model category of simplicial presheaves can be
characterized by a local lifting property if and only if the localization is
exact.
|
Accurate liver and lesion segmentation from computed tomography (CT) images is highly demanded in clinical practice for assisting the diagnosis and assessment of hepatic tumor disease. However, automatic liver and lesion
segmentation from contrast-enhanced CT volumes is extremely challenging due to
the diversity in contrast, resolution, and quality of images. Previous methods
based on UNet for 2D slice-by-slice or 3D volume-by-volume segmentation either
lack sufficient spatial contexts or suffer from high GPU computational cost,
which limits the performance. To tackle these issues, we propose a novel
context-aware PolyUNet for accurate liver and lesion segmentation. It jointly
explores structural diversity and consecutive t-adjacent slices to enrich
feature expressive power and spatial contextual information while avoiding the
overload of GPU memory consumption. In addition, we utilize a zoom-out/in
scheme and a two-stage refinement strategy to exclude irrelevant contexts and
focus on the specific region for fine-grained segmentation. Our method achieved very
competitive performance at the MICCAI 2017 Liver Tumor Segmentation (LiTS)
Challenge among all tasks with a single model and ranked the $3^{rd}$,
$12^{th}$, $2^{nd}$, and $5^{th}$ places in the liver segmentation, lesion
segmentation, lesion detection, and tumor burden estimation, respectively.
|
Fusing live fluoroscopy images with a 3D rotational reconstruction of the
vasculature makes it possible to navigate endovascular devices in minimally
invasive neuro-vascular treatment while reducing the usage of harmful iodine
contrast medium. The alignment of the fluoroscopy images and the 3D reconstruction is
initialized using the sensor information of the X-ray C-arm geometry. Patient
motion is then corrected by an image-based registration algorithm, based on a
gradient difference similarity measure using digital reconstructed radiographs
of the 3D reconstruction. This algorithm does not require the vessels in the
fluoroscopy image to be filled with iodine contrast agent, but rather relies on
gradients in the image (bone structures, sinuses) as landmark features. This
paper investigates the accuracy, robustness and computation time aspects of the
image-based registration algorithm. In phantom experiments, 97% of the
registration attempts passed the success criterion of a residual registration
error of less than 1 mm translation and 3{\deg} rotation. The paper establishes
a new method for validation of 2D-3D registration without requiring changes to
the clinical workflow, such as attaching fiducial markers. As a consequence,
this method can be retrospectively applied to pre-existing clinical data. For
clinical data experiments, 87% of the registration attempts passed the
criterion of a residual translational error of < 1 mm, and 84% possessed a
rotational error of < 3{\deg}.
|
We determine the Waring ranks of all sextic binary forms using a Geometric
Invariant Theory approach. In particular, we shed new light on a claim by E. B.
Elliott at the end of the 19th century concerning the binary sextics with
Waring rank 3.
|
We demonstrate that single crystals of methylammonium lead bromide (MAPbBr3)
could be grown directly on vertically aligned carbon nanotube (VACNT) forests.
The fast-growing MAPbBr3 single crystals engulfed the protogenetic inclusions
in the form of individual CNTs, thus resulting in a three-dimensionally
enlarged photosensitive interface. Photodetector devices were obtained,
detecting low light intensities (~20 nW) from the UV range to 550 nm. Moreover, a
photocurrent was recorded at zero external bias voltage which points to the
plausible formation of a p-n junction resulting from interpenetration of
MAPbBr3 single crystals into the VACNT forest. This reveals that vertically
aligned CNTs can be used as electrodes in operationally stable perovskite-based
optoelectronic devices and can serve as a versatile platform for future
selective electrode development.
|
We present the Galactic Radio Explorer (GReX), an all-sky monitor to probe
the brightest bursts in the radio sky. Building on the success of STARE2, we
will search for fast radio bursts (FRBs) emitted from Galactic magnetars as
well as bursts from nearby galaxies. GReX will search down to ten-microsecond
time resolution, allowing us to find new super giant radio pulses from Milky
Way pulsars and study their broadband emission. The proposed instrument will
employ ultra-wide band (0.7-2 GHz) feeds coupled to a high performance
(receiver temperature 10 K) low noise amplifier (LNA) originally developed for
the DSA-110 and DSA-2000 projects. In GReX Phase I (GReX-I), unit systems will
be deployed at Owens Valley Radio Observatory (OVRO) and Big Smoky Valley,
Nevada. Phase II will expand the array, placing feeds in India, Australia, and
elsewhere in order to build up to continuous coverage of nearly 4$\pi$
steradians and to increase our exposure to the Galactic plane. We model the
local magnetar population to produce forecasts for GReX, finding that the
improved sensitivity and increased exposure to the Galactic plane could lead to
dozens of FRB-like bursts per year.
|
Although travelling faster than the speed of light in vacuum is not
physically allowed, the analogous bound in a medium can be exceeded by a moving
particle. For an electron in a dielectric material this leads to the emission
of photons, which is usually referred to as Cherenkov radiation. In this article a
related mathematical system for waves in inhomogeneous anisotropic medium with
a maximum of three polarisation directions is studied. The waves are assumed to
satisfy $P^k_j u_k (x,t) = S_j(x,t)$, where $P$ is a vector-valued wave
operator that depends on a Riemannian metric and $S $ is a point source that
moves at speed $\beta < c$ in given direction $\theta \in \mathbb{S}^2$. The
phase velocity $v_{\text{phase}}$ is described by the metric and depends on
both location and direction of motion. In regions where
$v_{\text{phase}}(x,\theta) < \beta < c$ holds, the source generates a
cone-shaped front of singularities that propagate according to the underlying
geometry. We introduce a model for a measurement setup that applies the
mechanism and show that the Riemannian metric inside a bounded region can be
reconstructed from partial boundary measurements. The result suggests that
Cherenkov type radiation can be applied to detect internal geometric properties
of an inhomogeneous anisotropic target from a distance.
|
Camera pose estimation in known scenes is a 3D geometry task recently tackled
by multiple learning algorithms. Many regress precise geometric quantities,
like poses or 3D points, from an input image. This either fails to generalize
to new viewpoints or ties the model parameters to a specific scene. In this
paper, we go Back to the Feature: we argue that deep networks should focus on
learning robust and invariant visual features, while the geometric estimation
should be left to principled algorithms. We introduce PixLoc, a scene-agnostic
neural network that estimates an accurate 6-DoF pose from an image and a 3D
model. Our approach is based on the direct alignment of multiscale deep
features, casting camera localization as metric learning. PixLoc learns strong
data priors by end-to-end training from pixels to pose and exhibits exceptional
generalization to new scenes by separating model parameters and scene geometry.
The system can localize in large environments given coarse pose priors but also
improve the accuracy of sparse feature matching by jointly refining keypoints
and poses with little overhead. The code will be publicly available at
https://github.com/cvg/pixloc.
|
High precision CCD observations of six totally eclipsing contact binaries
were presented and analyzed. It is found that only one target is an A-type
contact binary (V429 Cam), while the others are W-type contact ones. By
analyzing the times of light minima, we discovered that two of them exhibit
secular period increase while three manifest long-term period decrease. For
V1033 Her, a cyclic variation superimposed on the long-term increase was
discovered. By comparing the Gaia distances with those calculated from the
absolute parameters of 173 contact binaries, we found that the Gaia distance
can be applied to estimate absolute parameters for most contact binaries. The
absolute parameters of our six targets were estimated using their Gaia distances. The
evolutionary status of contact binaries was studied; we found that the A- and
W-subtype contact binaries may have different formation channels. The
relationship between the spectroscopic and photometric mass ratios for 101
contact binaries was presented. It is discovered that the photometric mass
ratios are in good agreement with the spectroscopic ones for almost all the
totally eclipsing systems, which corresponds to the results derived by
Pribulla et al. and Terrell & Wilson.
|
User activities generate a significant number of poor-quality or irrelevant
images and data vectors that cannot be processed in the main data processing
pipeline or included in the training dataset. Such samples can be found with
manual analysis by an expert or with anomaly detection algorithms. There are
several formal definitions of anomalous samples. For neural networks,
anomalous samples are usually defined as out-of-distribution samples. This work
proposes methods for supervised and semi-supervised detection of
out-of-distribution samples in image datasets. Our approach extends a typical
neural network that solves the image classification problem. Thus, after the
extension, one neural network can solve image classification and anomaly
detection problems simultaneously. The proposed methods are based on the
center loss and its effect on the deep feature distribution in the last hidden
layer of the neural network. This paper provides an analysis of the proposed
methods for LeNet and EfficientNet-B0 on the MNIST and ImageNet-30 datasets.
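For concreteness, a minimal sketch of the center loss and a distance-to-center
rule for flagging out-of-distribution samples follows; the threshold `tau` and
the nearest-center criterion are assumptions rather than the paper's exact
decision rule.

```python
import torch

def center_loss(features, labels, centers):
    """L_c = mean over the batch of 0.5 * ||x_i - c_{y_i}||^2."""
    return 0.5 * ((features - centers[labels]) ** 2).sum(dim=1).mean()

def is_ood(features, centers, tau):
    """Flag samples whose distance to the nearest class center exceeds tau."""
    d = torch.cdist(features, centers)      # (batch, num_classes)
    return d.min(dim=1).values > tau

feats, centers = torch.randn(8, 16), torch.randn(10, 16)
labels = torch.randint(0, 10, (8,))
loss = center_loss(feats, labels, centers)  # added to the classification loss
flags = is_ood(feats, centers, tau=5.0)     # candidate out-of-distribution samples
```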
|
Deep learning (DL) has emerged as a powerful tool for accelerated MRI
reconstruction, but these methods often necessitate a database of fully-sampled
measurements for training. Recent self-supervised and unsupervised learning
approaches enable training without fully-sampled data. However, a database of
undersampled measurements may not be available in many scenarios, especially
for scans involving contrast or recently developed translational acquisitions.
Moreover, database-trained models may not generalize well when the unseen
measurements differ in terms of sampling pattern, acceleration rate, SNR, image
contrast, and anatomy. Such challenges necessitate a new methodology that can
enable scan-specific DL MRI reconstruction without any external training
datasets. In this work, we propose a zero-shot self-supervised learning
approach to perform scan-specific accelerated MRI reconstruction to tackle
these issues. The proposed approach splits available measurements for each scan
into three disjoint sets. Two of these sets are used to enforce data
consistency and define loss during training, while the last set is used to
establish an early stopping criterion. In the presence of models pre-trained on
a database with different image characteristics, we show that the proposed
approach can be combined with transfer learning to further improve
reconstruction quality.
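The following sketch illustrates the three-way split of acquired k-space
indices described above; the undersampling pattern and the 60/30/10 split
ratios are illustrative assumptions, not the paper's prescribed values.

```python
import numpy as np

rng = np.random.default_rng(42)
mask = rng.random((256, 256)) < 0.25        # toy undersampling pattern
acquired = np.argwhere(mask)                # measured k-space locations
rng.shuffle(acquired)                       # random partition of the indices

n = len(acquired)
consistency_set = acquired[: int(0.6 * n)]          # enforces data consistency
loss_set = acquired[int(0.6 * n): int(0.9 * n)]     # defines the training loss
stop_set = acquired[int(0.9 * n):]                  # early-stopping criterion
assert n == len(consistency_set) + len(loss_set) + len(stop_set)
```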
|
We study the collision frequencies of particles in the weakly and highly
ionized plasmas with the power-law q-distributions in nonextensive statistics.
We derive the average collision frequencies of neutral-neutral particle,
electron-neutral particle, ion-neutral particle, electron-electron, ion-ion and
electron-ion, respectively, in the q-distributed plasmas. We show that the
average collision frequencies depend strongly on the q-parameter in a complex
form, and thus their properties differ significantly from those in
Maxwell-distributed plasmas. These new average collision frequencies are
important for accurately studying transport properties in complex plasmas with
non-Maxwellian/power-law velocity distributions.
|
Generalized zero-shot learning (GZSL) aims to classify samples under the
assumption that some classes are not observable during training. To bridge the
gap between the seen and unseen classes, most GZSL methods attempt to associate
the visual features of seen classes with attributes or to generate unseen
samples directly. Nevertheless, the visual features used in the prior
approaches do not necessarily encode semantically related information that the
shared attributes refer to, which degrades the model generalization to unseen
classes. To address this issue, in this paper, we propose a novel semantics
disentangling framework for the generalized zero-shot learning task (SDGZSL),
where the visual features of unseen classes are first estimated by a
conditional VAE and then factorized into semantic-consistent and
semantic-unrelated latent vectors. In particular, a total correlation penalty
is applied to guarantee the independence between the two factorized
representations, and the semantic consistency of the disentangled features is
measured by a derived relation network. Extensive experiments conducted on four GZSL
benchmark datasets have evidenced that the semantic-consistent features
disentangled by the proposed SDGZSL are more generalizable in tasks of
canonical and generalized zero-shot learning. Our source code is available at
https://github.com/uqzhichen/SDGZSL.
|
While symmetry has been exploited to analyze synchronization patterns in
complex networks, the identification of symmetries in large networks remains a
challenge. In this work we present a new method, namely eigenvector-based
analysis, to identify symmetries in general complex networks and,
incorporating the method of eigenvalue analysis, investigate the formation and
transition of synchronization patterns. The efficiency of the proposed method
is validated by both artificial and empirical network models consisting of
coupled chaotic oscillators. Furthermore, we generalize the method to networks
of weighted couplings in which no strict symmetry exists but synchronization
clusters are still well organized, with the predictions agreeing with the
results of direct simulations. Our study provides a new approach for
identifying network symmetries, and paves the way to the investigation of
synchronization patterns in large complex networks.
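A heuristic sketch of eigenvector-based symmetry detection is given below: for
a simple eigenvalue, an automorphism can only flip the sign of the
corresponding eigenvector, so nodes in the same symmetry orbit share the same
profile of absolute eigenvector components. This toy version (degenerate
eigenvalues are not handled, and the tolerance is a crude rounding) is our
illustration, not the authors' full method.

```python
import numpy as np

# 4-node path graph: nodes 0 <-> 3 and 1 <-> 2 are related by the mirror symmetry.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
vals, vecs = np.linalg.eigh(A)             # eigenvalues of a path are all simple
profiles = np.round(np.abs(vecs), 8)       # row i: |component| profile of node i
_, orbit = np.unique(profiles, axis=0, return_inverse=True)
print(orbit)                               # equal labels = candidate orbit mates
```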
|
We revisit the notions of the quantum-mechanical sojourn time in the context
of quantum clocks to enquire whether the sojourn time can be clocked without
the clock affecting the dynamics of the wave motion. Upon recognizing that the
positivity of conditional sojourn time is not ensured even in the case of
physically co-evolving clock mechanisms, we trace its origins to the
non-trivial inadvertent scattering arising from the disparity, however weak,
engendered by the very clock potential. Specifically, our investigations focus
on the Larmor spin rotation-based unitary clock where the alleviation of these
unphysical contributions has been achieved by correcting the mathematical
apparatus of extracting the sojourn times. The corrections have been obtained
for both the spin precession-based and spin alignment-based scenarios. The
sojourn times so obtained are found to have proper high- and low-energy limits
and turn out to be positive definite for an arbitrary potential. The formalism
provided here is general and applies equally to unitary as well as
non-unitary clocks where the clock-induced perturbations couple to the system
Hamiltonian.
|
The damage mechanisms and load redistribution of high strength TC17 titanium
alloy/unidirectional SiC fibre composite (fibre diameter = 100 $\mu$m) under
high temperature (350 {\deg}C) fatigue cycling have been investigated in situ
using synchrotron X-ray computed tomography (CT) and X-ray diffraction (XRD)
for high cycle fatigue (HCF) under different stress amplitudes. The
three-dimensional morphology of the crack and fibre fractures has been mapped
by CT. During stable growth, matrix cracking dominates with the crack
deflecting (by 50-100 $\mu$m in height) when bypassing bridging fibres. A small
number of bridging fibres have fractured close to the matrix crack plane
especially under relatively high stress amplitude cycling. Loading to the peak
stress led to rapid crack growth accompanied by a burst of fibre fractures.
Many of the fibre fractures occurred 50-300 $\mu$m from the matrix crack plane
during rapid growth, in contrast to that in the stable growth stage, leading to
extensive fibre pull-out on the fracture surface. The changes in fibre loading,
interfacial stress, and the extent of fibre-matrix debonding in the vicinity of
the crack have been mapped for the fatigue cycle and after the rapid growth by
high spatial resolution XRD. The fibre/matrix interfacial sliding extends up to
600 $\mu$m (in the stable growth zone) or 700 $\mu$m (in the rapid growth zone)
either side of the crack plane. The direction of interfacial shear stress
reverses with the loading cycle, with the maximum frictional sliding stress
reaching ~55 MPa in both the stable growth and rapid growth regimes.
|
HardWare-aware Neural Architecture Search (HW-NAS) has recently gained
tremendous attention by automating the design of DNNs deployed in more
resource-constrained daily life devices. Despite its promising performance,
developing optimal HW-NAS solutions can be prohibitively challenging as it
requires cross-disciplinary knowledge in the algorithm, micro-architecture, and
device-specific compilation. First, to determine the hardware-cost to be
incorporated into the NAS process, existing works mostly adopt either
pre-collected hardware-cost look-up tables or device-specific hardware-cost
models. Both of them limit the development of HW-NAS innovations and impose a
barrier-to-entry to non-hardware experts. Second, similar to generic NAS, it
can be notoriously difficult to benchmark HW-NAS algorithms due to their
significant required computational resources and the differences in adopted
search spaces, hyperparameters, and hardware devices. To this end, we develop
HW-NAS-Bench, the first public dataset for HW-NAS research, which aims to
democratize HW-NAS research to non-hardware experts and make HW-NAS research
more reproducible and accessible. To design HW-NAS-Bench, we carefully
collected the measured/estimated hardware performance of all the networks in
the search spaces of both NAS-Bench-201 and FBNet, on six hardware devices that
fall into three categories (i.e., commercial edge devices, FPGA, and ASIC).
Furthermore, we provide a comprehensive analysis of the collected measurements
in HW-NAS-Bench to provide insights for HW-NAS research. Finally, we
demonstrate exemplary user cases to (1) show that HW-NAS-Bench allows
non-hardware experts to perform HW-NAS by simply querying it and (2) verify
that dedicated device-specific HW-NAS can indeed lead to optimal accuracy-cost
trade-offs. The codes and all collected data are available at
https://github.com/RICE-EIC/HW-NAS-Bench.
|
Tumour spheroids have the potential to be used as preclinical
chemosensitivity assays. However, the production of three dimensional (3D)
tumour spheroids remains challenging as not all tumour cell lines form
spheroids with regular morphologies and spheroid transfer often induces
disaggregation. In the field of pancreatic cancer, the MiaPaCa-2 cell line is
an interesting model for research but it is known for its difficulty to form
stable spheroids; also, when formed, spheroids from this cell line are weak and
arduous to manage and to harvest for further analyses such as multiple staining
and imaging. In this work, we compared different methods (i.e. hanging drop,
round-bottom wells and Matrigel embedding, each of them with or without
methylcellulose in the media) to evaluate which one best overcame these
limitations. Morphometric analysis indicated that the hanging drop in the
presence of methylcellulose led to well-organized spheroids; interestingly,
quantitative PCR (qPCR) analysis reflected the morphometric characterization,
indicating that the same spheroids expressed the highest values of CD44,
VIMENTIN, TGF beta1 and Ki67. In addition, we investigated the generation of MiaPaCa-2
spheroids when cultured on substrates of different hydrophobicity, in order to
minimize the area in contact with the culture media and to further improve
spheroid formation.
|
We propose a method to probe chameleon particles predicted in $F(R)$ gravity
as a model of modified gravity, based on the concept of a stimulated
pulsed-radar collider. We analyze the chameleon mechanism induced by an ambient
environment consisting of focused photon beams and a dilute gas surrounding a
place where stimulated photon-photon scatterings occur. We then discuss how to
extract the characteristic feature of the chameleon signal. We find that a
chameleon with a varying mass around $(0.1-1)\,\mu$eV in a viable model of
$F(R)$ gravity is testable by searching for the steep, 10th-power pressure
dependence of the signal yield.
|
We propose a robust and efficient way to store and transport quantum
information via one-dimensional discrete time quantum walks. We show how to
attain an effective dispersionless wave packet evolution using only two types
of local unitary operators (quantum coins or gates), properly engineered to act
at predetermined times and at specific lattice sites during the system's time
evolution. In particular, we show that a qubit initially localized about a
Gaussian distribution can be almost perfectly confined for long times or sent
hundreds of lattice sites away from its original location and later almost
perfectly reconstructed using only Hadamard and $\sigma_x$ gates.
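For readers unfamiliar with discrete-time quantum walks, the following
self-contained sketch evolves a walker on a line with a Hadamard coin applied
uniformly at every step; the site- and time-dependent coin engineering that
produces the dispersionless evolution described above is not implemented here.

```python
import numpy as np

N, steps = 201, 60
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard coin
psi = np.zeros((N, 2), dtype=complex)            # amplitudes: (site, spin)
psi[N // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # symmetric initial state

for _ in range(steps):
    psi = psi @ H.T                              # coin toss on every site
    up, down = np.roll(psi[:, 0], 1), np.roll(psi[:, 1], -1)
    psi = np.stack([up, down], axis=1)           # spin-conditional shift

prob = (np.abs(psi) ** 2).sum(axis=1)
print(prob.sum())                                # ~1.0: unitarity preserved
```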
|
Nested stochastic modeling has been on the rise in many fields of the
financial industry. Such modeling arises whenever certain components of a
stochastic model are stochastically determined by other models. There are at
least two main areas of applications, including (1) portfolio risk management
in the banking sector and (2) principle-based reserving and capital
requirements in the insurance sector. As financial instrument values often
change with economic fundamentals, the risk management of a portfolio (outer
loop) often requires the assessment of financial positions subject to changes
in risk factors in the immediate future. The valuation of a financial position
(inner loop) is based on projections of cashflows and risk factors into the
distant future. The nesting of such stochastic modeling can be computationally
challenging.
Most existing techniques to speed up nested simulations are based on curve
fitting. The main idea is to establish a functional relationship between the
inner loop estimator and the risk factors by running a limited set of economic
scenarios and, instead of running inner loop simulations, to make inner loop
estimations by feeding other scenarios into the fitted curve. This paper presents a
non-conventional approach based on the concept of sample recycling. Its essence
is to run inner loop estimation for a small set of outer loop scenarios and to
find inner loop estimates under other outer loop scenarios by recycling those
known inner loop paths. This new approach can be much more efficient when
traditional techniques are difficult to implement in practice.
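One plausible instantiation of the recycling idea is sketched below, assuming
Gaussian inner-loop shocks so that paths simulated under a pivot outer
scenario can be reweighted to other outer scenarios by an exact likelihood
ratio; the payoff and the drift values are illustrative, not taken from the
paper.

```python
import numpy as np

rng = np.random.default_rng(1)
pivot_mu, n_paths = 0.0, 10_000
shocks = rng.normal(pivot_mu, 1.0, n_paths)   # inner-loop paths, simulated once
payoff = np.maximum(shocks - 0.5, 0.0)        # toy option-like inner payoff

def recycled_value(mu):
    """Inner-loop estimate under outer scenario `mu`, reusing the pivot paths
    via the exact N(mu,1)/N(pivot_mu,1) likelihood ratio."""
    lr = np.exp(shocks * (mu - pivot_mu) - 0.5 * (mu**2 - pivot_mu**2))
    return float(np.mean(payoff * lr))

for mu in (0.0, 0.2, 0.5):                    # other outer-loop scenarios
    print(mu, recycled_value(mu))
```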
|
Reconstruction of object or scene surfaces has tremendous applications in
computer vision, computer graphics, and robotics. In this paper, we study a
fundamental problem in this context about recovering a surface mesh from an
implicit field function whose zero-level set captures the underlying surface.
To achieve the goal, existing methods rely on traditional meshing algorithms;
while promising, they suffer from loss of precision learned in the implicit
surface networks, due to the use of discrete space sampling in marching cubes.
Given that an MLP with Rectified Linear Unit (ReLU) activations partitions
its input space into a number of linear regions, we are motivated to connect
this local linearity with the same property possessed by the desired polygon
mesh. More specifically, we identify from the linear regions,
partitioned by an MLP based implicit function, the analytic cells and analytic
faces that are associated with the function's zero-level isosurface. We prove
that under mild conditions, the identified analytic faces are guaranteed to
connect and form a closed, piecewise planar surface. Based on the theorem, we
propose an algorithm of analytic marching, which marches among analytic cells
to exactly recover the mesh captured by an implicit surface network. We also
show that our theory and algorithm are equally applicable to advanced MLPs with
shortcut connections and max pooling. Given the parallel nature of analytic
marching, we contribute AnalyticMesh, a software package that supports
efficient meshing of implicit surface networks via CUDA parallel computing, and
mesh simplification for efficient downstream processing. We apply our method to
different settings of generative shape modeling using implicit surface
networks. Extensive experiments demonstrate our advantages over existing
methods in terms of both meshing accuracy and efficiency.
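The local linearity that underlies analytic marching can be checked directly:
within one activation pattern (an "analytic cell") a ReLU MLP is an affine
map. The sketch below verifies this for a tiny random network; the actual
algorithm that marches between cells and assembles analytic faces is
substantially more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 3)), rng.standard_normal(8)
w2, b2 = rng.standard_normal(8), rng.standard_normal()

def mlp(x):
    h = np.maximum(W1 @ x + b1, 0.0)           # one hidden ReLU layer
    return w2 @ h + b2

def pattern(x):
    return (W1 @ x + b1 > 0).astype(float)     # identifies the linear region

x = rng.standard_normal(3)
s = pattern(x)
# Within this region the network equals the affine map a.x + c:
a = (w2 * s) @ W1
c = (w2 * s) @ b1 + b2
assert np.allclose(mlp(x), a @ x + c)          # local linearity verified
```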
|
Faced with a considerable lack of resources in African languages to carry out
work in Natural Language Processing (NLP), Natural Language Understanding (NLU)
and artificial intelligence, the research teams of the NTeALan association have
set themselves the objective of building open-source platforms for the
collaborative construction of lexicographic data in African languages. In this
article, we present our first reports after two years of collaborative
construction of lexicographic resources useful for African NLP tools.
|
A significant number of those killed in traffic accidents annually are
pedestrians. Most of these accidents occur when a pedestrian is about to cross
the street. One of the most hazardous areas for pedestrians crossing the street
is midblock crosswalks. These areas are often uncontrolled and there is no
separate phase for pedestrian crossing. As a result, using gaps created among
vehicles is often the only opportunity for pedestrians to cross the street.
This research aims to investigate the factors affecting pedestrian gap
acceptance at uncontrolled midblock crosswalks. In this regard, several
variables such as individual, environmental, and traffic variables were
considered, and the related data were extracted by filming the location. After
extracting the data from video images, they were analyzed using a logit
regression model, and the behavior of pedestrians while crossing the street was
modeled as a probabilistic function of pedestrian gap acceptance. The results
illustrate that the conflict flow rate, the temporal gap among vehicles, and
driver yielding behavior are the most important factors affecting pedestrian
behavior. They also show that the speed and type of vehicles do not affect the
pedestrian's decision to accept or reject a gap, and that individual
characteristics such as the age and gender of pedestrians do not have a
significant relationship with pedestrian gap acceptance.
|
We construct for the first time conditionally decomposable $d$-polytopes for
$d \ge 4$. These examples have any number of vertices from $4d-4$ upwards. A
polytope is said to be conditionally decomposable if one polytope
combinatorially equivalent to it is decomposable (with respect to the Minkowski
sum) and another one combinatorially equivalent to it is indecomposable. In our
examples, one has $4d-2$ vertices and is the sum of a line segment and a
bi-pyramid over a prism. The others have $4d-4$ vertices, and one of them
is the sum of a line segment and a $2$-fold pyramid over a prism. We show that
the latter examples have the minimum number of vertices among all conditionally
decomposable $d$-polytopes that have a line segment for a summand.
|
We consider various asymptotic scaling limits $N\to\infty$ for the $2N$
complex eigenvalues of non-Hermitian random matrices in the symmetry class of
the symplectic Ginibre ensemble. These are known to be integrable, forming
Pfaffian point processes, and we obtain limiting expressions for the
corresponding kernel for different potentials. The first part is devoted to the
symplectic Ginibre ensemble with a Gaussian potential. We obtain the
asymptotics at the edge of the spectrum in the vicinity of the real line. The unifying form
of the kernel allows us to make contact with the bulk scaling along the real
line and with the edge scaling away from the real line, where we recover the
known determinantal process of the complex Ginibre ensemble. Part two covers
ensembles of Mittag-Leffler type with a singularity at the origin. For
potentials $Q(\zeta)=|\zeta|^{2\lambda}-(2c/N)\log|\zeta|$, with $\lambda>0$
and $c>-1$, the limiting kernel obeys a linear differential equation of
fractional order $1/\lambda$ at the origin. For integer $m=1/\lambda$ it can be
solved in terms of Mittag-Leffler functions. In the last part, we derive
Ward's equation for a general class of potentials as a tool to investigate
universality. This allows us to determine the functional form of kernels that
are translation invariant up to their integration domain.
|
We report novel observational evidence on the evolutionary status of
lithium-rich giant stars by combining asteroseismic and lithium abundance data.
Comparing observations and models of the asteroseismic gravity-mode period
spacing $\Delta\Pi_{1}$, we find that super-Li-rich giants (SLR, A(Li)~$>
3.2$~dex) are almost exclusively young red-clump (RC) stars. Depending on the
exact phase of evolution, which requires more data to refine, SLR stars are
either (i) less than $\sim 2$~Myr or (ii) less than $\sim40$~Myr past the main
core helium flash (CHeF). Our observations set a strong upper limit for the
time of the inferred Li-enrichment phase of $< 40$~Myr post-CHeF, lending
support to the idea that lithium is produced around the time of the CHeF. In
contrast, the more evolved RC stars ($> 40$~Myr post-CHeF) generally have low
lithium abundances (A(Li)~$<1.0$~dex). Between the young, super-Li-rich phase,
and the mostly old, Li-poor RC phase, there is an average reduction of lithium
by about 3 orders of magnitude. This Li-destruction may occur rapidly. We find
the situation to be less clear with stars having Li abundances between the two
extremes of super-Li-rich and Li-poor. This group, the `Li-rich' stars ($3.2
>$~A(Li)~$> 1.0$~dex), shows a wide range of evolutionary states.
|
The Coulomb Branch Formula conjecturally expresses the refined Witten index
for $N=4$ Quiver Quantum Mechanics as a sum over multi-centered collinear black
hole solutions, weighted by so-called `single-centered' or `pure-Higgs'
indices, and suitably modified when the quiver has oriented cycles. On the
other hand, localization expresses the same index as an integral over the
complexified Cartan torus and auxiliary fields, which by Stokes' theorem leads
to the famous Jeffrey-Kirwan residue formula. Here, by evaluating the same
integral using steepest descent methods, we show the index is in fact given by
a sum over deformed multi-centered collinear solutions, which encompasses both
regular and scaling collinear solutions. As a result, we confirm the Coulomb
Branch Formula for Abelian quivers in the presence of oriented cycles, and
identify the origin of the pure-Higgs and minimal modification terms as coming
from collinear scaling solutions. For cyclic Abelian quivers, we observe that
part of the scaling contributions reproduce the stacky invariants for trivial
stability, a mathematically well-defined notion whose physical significance had
remained obscure.
|
We present an S$_4$ flavour symmetric model within a minimal seesaw framework
resulting in mass matrices that lead to TM$_1$ mixing. Minimal seesaw is
realized by adding two right-handed neutrinos to the Standard Model. The model
predicts Normal Hierarchy (NH) for neutrino masses. Using the constrained
six-dimensional parameter space, we have evaluated the effective Majorana
neutrino mass, which is the parameter of interest in neutrinoless double beta
decay experiments. The possibility of explaining baryogenesis via resonant
leptogenesis is also examined within the model. A non-zero, resonantly enhanced
CP asymmetry generated from the decay of right-handed neutrinos at the TeV
scale is studied, considering flavour effects. The evolution of the lepton
asymmetry is discussed by solving the set of Boltzmann equations numerically,
and we obtain the value of the baryon asymmetry to be $\lvert \eta_B \rvert = 6.3
\times 10^{-10}$.
|
Automatic speech recognition (ASR) is widely used in consumer electronics.
ASR greatly improves the utility and accessibility of technology, but usually
the output is only word sequences without punctuation. This can result in
ambiguity in inferring user-intent. We first present a transformer-based
approach for punctuation prediction that achieves 8% improvement on the IWSLT
2012 TED Task, beating the previous state of the art [1]. We next describe our
multimodal model that learns from both text and audio, which achieves 8%
improvement over the text-only algorithm on an internal dataset for which we
have both the audio and transcriptions. Finally, we present an approach to
learning a model using contextual dropout that allows us to handle variable
amounts of future context at test time.
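A minimal sketch of the contextual-dropout idea follows, assuming it amounts
to randomly truncating the future context during training so the model
tolerates variable look-ahead at test time; the window size and padding
convention are assumptions, not the authors' exact recipe.

```python
import torch

def drop_future_context(tokens, target_pos, pad_id=0, max_future=8):
    """tokens: (B, T). Keep a random number (0..max_future) of tokens after
    target_pos per sample, emulating variable look-ahead at inference."""
    out = tokens.clone()
    B, T = tokens.shape
    for b in range(B):
        k = int(torch.randint(0, max_future + 1, (1,)))
        out[b, min(T, target_pos + 1 + k):] = pad_id
    return out

x = torch.arange(1, 21).repeat(4, 1)         # four copies of a 20-token sequence
print(drop_future_context(x, target_pos=9))  # differing amounts of future kept
```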
|
Semiconductor quantum dots, where electrons or holes are isolated via
electrostatic potentials generated by surface gates, are promising building
blocks for semiconductor-based quantum technology. Here, we investigate double
quantum dot (DQD) charge qubits in GaAs, capacitively coupled to high-impedance
SQUID array and Josephson junction array resonators. We tune the strength of
the electric dipole interaction between the qubit and the resonator in situ
using surface gates. We characterize the qubit-resonator coupling strength,
qubit decoherence, and detuning noise affecting the charge qubit for different
electrostatic DQD configurations. We find that all quantities can be tuned
systematically over more than one order of magnitude, resulting in reproducible
decoherence rates $\Gamma_2/2\pi \lesssim 5$ MHz in the limit of high interdot
capacitance. Conversely, by reducing the interdot capacitance, we can increase
the DQD electric dipole strength, and therefore its coupling to the resonator.
By employing a Josephson junction array resonator with an impedance of $\sim4$
k$\Omega$ and a resonance frequency of $\omega_r/2\pi \sim 5.6$ GHz, we observe
a coupling strength of $g/2\pi \sim 630$ MHz, demonstrating the possibility to
achieve the ultrastrong coupling regime (USC) for electrons hosted in a
semiconductor DQD. These results are essential for further increasing the
coherence of quantum dot based qubits and investigating USC physics in
semiconducting QDs.
|
The process of the excitation of an electromagnetic field by a relativistic
electron bunch at the input of a semi-infinite plasma waveguide is
investigated. The shape and intensity of the short transition electromagnetic
pulse are determined, and its evolution during propagation in the plasma
waveguide is studied. In addition, the influence of the plasma boundary on the
spatial structure of plasma wake oscillations in the waveguide is considered.
|
We revisit the classic Susceptible-Infected-Recovered (SIR) epidemic model
and one of its nonlocal variations recently developed in \cite{Guan}. We
introduce several new approaches to derive exact analytical solutions in the
classical situation and analyze the corresponding effective approximations in
the nonlocal setting. An interesting new feature of the nonlocal models,
compared with the classic SIR model, is the appearance of multiple peak
solutions for the infected population. We provide several rigorous results on
the existence and non-existence of peak solutions with sharp asymptotics.
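For reference, the classical SIR system and its standard first integral, which
is the usual starting point for exact solutions:

```latex
% Classical SIR system (fixing notation for the exact-solution discussion):
\begin{align*}
  \frac{dS}{dt} &= -\beta S I, &
  \frac{dI}{dt} &= \beta S I - \gamma I, &
  \frac{dR}{dt} &= \gamma I.
\end{align*}
% Dividing the first equation by the third eliminates time:
\[
  \frac{dS}{dR} = -\frac{\beta}{\gamma}\, S
  \quad\Longrightarrow\quad
  S(t) = S(0)\, e^{-\beta\left(R(t)-R(0)\right)/\gamma},
\]
% which reduces the system to a single ODE for R(t).
```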
|
Brunel harmonics appear in the optical response of an atom in the process of
laser-induced ionization, when the electron leaves the atom and is accelerated
in the strong optical field. In contrast to recollision-based harmonics, the
Brunel mechanism does not require the electron to return to the core. Here we
show that in the presence of a strong ionizing terahertz (THz) field, even a
weak driving field at optical frequencies allows for generating Brunel
harmonics effectively. The strong ionizing THz pump suppresses recollisions,
making the Brunel mechanism dominant in a wide spectral range. High-order
Brunel harmonics may form a coherent carrier-envelope-phase-insensitive
supercontinuum, compressible into an isolated pulse with a duration down to 100
attoseconds.
|
Privacy Policies are the legal documents that describe the practices that an
organization or company has adopted in the handling of the personal data of its
users. But as policies are legal documents, they are often written in
extensive legal jargon that is difficult to understand. Though work has been
done on privacy policies, none of it caters to the problem of verifying whether
a given privacy policy adheres to the data protection laws of a given country
or state. We aim to bridge that gap by providing a framework that analyzes
privacy policies in light of various data protection laws, such as the General
Data Protection Regulation (GDPR). To achieve that, we first labeled both the
privacy policies and the laws. A correlation scheme was then developed to map
the contents of a privacy policy to the appropriate segments of law that a
policy must conform to. Finally, we check the compliance of the privacy
policy's text with the corresponding text of the law using NLP techniques. By
using such a tool, users would be better equipped to understand how their
personal data is managed. For now, we have provided a mapping for the GDPR and
PDPA, but other laws can easily be incorporated into the already-built
pipeline.
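As a toy illustration of the compliance-checking step, the sketch below scores
a policy segment against a law segment with TF-IDF cosine similarity; the
actual framework's labeling and correlation scheme are richer, and the
segments here are paraphrased examples, not quotations.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policy_segment = "We retain personal data for as long as your account is active."
law_segment = ("Personal data shall be kept no longer than is necessary "
               "for the purposes for which it is processed.")  # cf. GDPR Art. 5

vec = TfidfVectorizer().fit([policy_segment, law_segment])
X = vec.transform([policy_segment, law_segment])
score = cosine_similarity(X[0], X[1])[0, 0]
print(f"similarity = {score:.2f}")  # a low score flags the pair for review
```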
|
We present a scalar-tensor theory of gravity on a torsion-free and metric
compatible Lyra manifold. This is obtained by generalizing the concept of
physical reference frame by considering a scale function defined over the
manifold. The choice of a specific frame induces a local base, naturally
non-holonomic, whose structure constants give rise to extra terms in the
expression of the connection coefficients and in the expression for the
covariant derivative. In the Lyra manifold, transformations between reference
frames involving both coordinates and scale change the transformation law of
tensor fields, when compared to those of the Riemann manifold. From a direct
generalization of the Einstein-Hilbert minimal action coupled with a matter
term, it was possible to build a Lyra invariant action, which gives rise to the
associated Lyra Scalar-Tensor theory of gravity (LyST), with field equations
for $g_{\mu\nu}$ and $\phi$. These equations have a well-defined Newtonian
limit, from which it can be seen that both the metric and the scale play a role
in the description of the gravitational interaction. We present a spherically
symmetric solution for the LyST gravity field equations. It depends on two
parameters, $m$ and $r_{L}$, whose physical meaning is carefully investigated.
We highlight the properties of the LyST spherically symmetric line element and
compare it to the Schwarzschild solution.
|
Tubercles are modifications to the leading edge of an airfoil in the form of
blunt wave-like serrations. Several studies on the effect of tubercles on
isolated airfoils have shown a beneficial effect in the post-stall regime, such
as reduced drag and increased lift, leading to a delay of stall. The prospect of
delaying stall is particularly attractive to designers of axial compressors in
gas turbines, as this leads to designs with higher loading and therefore higher
pressure rise with fewer stages. In the present study, experiments
were performed on a cascade of airfoils with NACA 65209 profile with different
tubercle geometries. The measurements were made over an exit plane using a
five-hole probe to compare the cascade performance parameters. Additionally,
hot-wire measurements were taken near the blade surface to understand the
nature of the flow in the region close to the tubercles. Oil-flow visualization
on the cascade end wall reveals the flow through the passage of blades with and
without tubercles. For the cascade considered, the estimated stall angle for
the best performing set of blades is found to increase to 8.6{\deg}, compared
with 6.0{\deg} for the unmodified blade. Application of such structures in
axial compressor blades may well lead to suppression of stall in axial
compressors and extend the operating range.
|
Hexagonal rare-earth ferrite RFeO$_3$ family represents a unique class of
multiferroics exhibiting weak ferromagnetism, and a strong coupling between
magnetism and structural trimerization is predicted. However, the hexagonal
structure of RFeO$_3$ remains metastable under conventional conditions. We have
succeeded in stabilizing the hexagonal structure of polycrystalline YbFeO$_3$
by partial Sc substitution of Yb. Using bulk magnetometry and neutron
diffraction, we find that Yb$_{0.42}$Sc$_{0.58}$FeO$_3$ orders into a canted
antiferromagnetic state with the Néel temperature $T_N \sim 165$ K, below which
the $Fe^{3+}$ moments form the triangular configuration in the $ab$-plane and
their in-plane projections are parallel to the [100] axis, consistent with
magnetic space group $P$6$_{3}$$c'm'$. It is determined that the spin-canting
is aligned along the $c$-axis, giving rise to the weak ferromagnetism.
Furthermore, the $Fe^{3+}$ moments reorient toward a new direction below the
reorientation temperature $T_R \sim 40$ K, satisfying the magnetic subgroup
$P$6$_{3}$, while the $Yb^{3+}$ moments order independently and
ferrimagnetically along the $c$-axis at the characteristic temperature
$T_{Yb} \sim 15$ K. Interestingly, reproducible modulation of electric polarization induced
by magnetic field at low temperature is achieved, suggesting that the delicate
structural distortion associated with two-up/one-down buckling of the
Yb/Sc-planes and tilting of the FeO$_5$ bipyramids may mediate the coupling
between ferroelectric and magnetic orders under magnetic field. The present
work represents substantial progress in the search for high-temperature
multiferroics in hexagonal ferrites and related materials.
|
High quality facial image editing is a challenging problem in the movie
post-production industry, requiring a high degree of control and identity
preservation. Previous works that attempt to tackle this problem may suffer
from the entanglement of facial attributes and the loss of the person's
identity. Furthermore, many algorithms are limited to a certain task. To tackle
these limitations, we propose to edit facial attributes via the latent space of
a StyleGAN generator, by training a dedicated latent transformation network and
incorporating explicit disentanglement and identity preservation terms in the
loss function. We further introduce a pipeline to generalize our face editing
to videos. Our model achieves a disentangled, controllable, and
identity-preserving facial attribute editing, even in the challenging case of
real (i.e., non-synthetic) images and videos. We conduct extensive experiments
on image and video datasets and show that our model outperforms other
state-of-the-art methods in visual quality and quantitative evaluation. Source
codes are available at https://github.com/InterDigitalInc/latent-transformer.
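Schematically, editing via a latent transformation network can be pictured as
in the sketch below, where the edited code is fed back to the frozen StyleGAN
generator; the architecture, the step size `alpha`, and the residual form are
illustrative assumptions, and the disentanglement and identity losses are only
indicated in comments.

```python
import torch
import torch.nn as nn

class LatentTransformer(nn.Module):
    """Residual edit in latent space: w' = w + alpha * T(w)."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, w: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
        return w + alpha * self.net(w)   # move along a learned attribute direction

w = torch.randn(1, 512)                  # a latent code from the GAN's W space
w_edit = LatentTransformer()(w, alpha=0.8)
# w_edit is decoded by the frozen generator; training would add
# disentanglement and identity-preservation terms to the loss.
```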
|
Not all topics are equally "flammable" in terms of toxicity: a calm
discussion of turtles or fishing less often fuels inappropriate toxic dialogues
than a discussion of politics or sexual minorities. We define a set of
sensitive topics that can yield inappropriate and toxic messages and describe
the methodology of collecting and labeling a dataset for appropriateness. While
toxicity in user-generated data is well-studied, we aim at defining a more
fine-grained notion of inappropriateness. The core of inappropriateness is that
it can harm the reputation of a speaker. This is different from toxicity in two
respects: (i) inappropriateness is topic-related, and (ii) an inappropriate
message may be non-toxic but still unacceptable. We collect and release two
datasets for Russian: a topic-labeled dataset and an appropriateness-labeled
dataset. We also release pre-trained classification models trained on this
data.
|
Evolution equations for leading twist operators in high orders of
perturbation theory can be restored from the spectrum of anomalous dimensions
and the calculation of the special conformal anomaly at one order less using
conformal symmetry of QCD at the Wilson-Fisher critical point at non-integer
$d=4-2\epsilon$ space-time dimensions. In this work we generalize this
technique to axial-vector operators. We calculate the corresponding three-loop
evolution kernels in Larin's scheme and derive explicit expressions for the
finite renormalization kernel that describes the difference from the vector
case, restoring the conventional ${\overline{\mathrm{MS}}}$ scheme. The results are
directly applicable to deeply-virtual Compton scattering and the transition
form factor $\gamma^*\gamma\to\pi$.
|
In one complex variable, the cross ratio is a well-known quantity associated
with four given points in the complex plane that remains invariant under linear
fractional maps. In particular, if one knows where three points in the complex
plane are mapped under a linear fractional map, one can use this invariance to
explicitly determine the map and to show that linear fractional maps are
$3$-transitive. In this paper, we define a generalized cross ratio and
determine some of its basic properties. In particular, we determine which
hypotheses must be made to guarantee that our generalized cross ratio is well
defined. We thus obtain a class of maps that obey similar transitivity
properties as in one complex dimension, under some more restrictive conditions.
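For orientation, recall the classical cross ratio in one common convention and
its invariance under linear fractional maps:

```latex
\[
  (z_1, z_2; z_3, z_4) \;=\;
  \frac{(z_1 - z_3)(z_2 - z_4)}{(z_1 - z_4)(z_2 - z_3)},
  \qquad
  (T z_1, T z_2; T z_3, T z_4) = (z_1, z_2; z_3, z_4)
\]
% for every linear fractional map T(z) = (a z + b)/(c z + d) with
% a d - b c \neq 0, provided the four points are pairwise distinct.
```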
|
Automated story generation remains a difficult area of research because it
lacks strong objective measures. Generated stories may be linguistically sound,
but in many cases suffer poor narrative coherence required for a compelling,
logically-sound story. To address this, we present Fabula Entropy Indexing
(FEI), an evaluation method to assess story coherence by measuring the degree
to which human participants agree with each other when answering true/false
questions about stories. We devise two theoretically grounded measures of
reader question-answering entropy, the entropy of world coherence (EWC), and
the entropy of transitional coherence (ETC), focusing on global and local
coherence, respectively. We evaluate these metrics by testing them on
human-written stories and comparing against the same stories that have been
corrupted to introduce incoherencies. We show that in these controlled studies,
our entropy indices provide a reliable objective measure of story coherence.
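As an illustration of the underlying quantity, the sketch below computes the
Shannon entropy of readers' true/false answers to a single question; the exact
aggregation into EWC and ETC in the paper may differ.

```python
import numpy as np

def answer_entropy(answers):
    """Shannon entropy (bits) of true/false answers from several readers."""
    answers = np.asarray(answers, dtype=bool)
    p = answers.mean()
    if p in (0.0, 1.0):
        return 0.0                     # perfect agreement: zero entropy
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

# Five readers answer one question about a story:
print(answer_entropy([True, True, True, True, False]))   # ~0.72 bits
print(answer_entropy([True, False, True, False, True]))  # ~0.97 bits
```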
|
Learning individualized treatment rules (ITRs) is an important topic in
precision medicine. Current literature mainly focuses on deriving ITRs from a
single source population. We consider the observational data setting when the
source population differs from a target population of interest. We assume
subject covariates are available from both populations, but treatment and
outcome data are only available from the source population. Although adjusting
for differences between source and target populations can potentially lead to
an improved ITR for the target population, it can substantially increase the
variability in ITR estimation. To address this dilemma, we develop a weighting
framework that aims to tailor an ITR for a given target population and protect
against high variability due to superfluous covariate shift adjustments. Our
method seeks covariate balance over a nonparametric function class
characterized by a reproducing kernel Hilbert space and can improve many ITR
learning methods that rely on weights. We show that the proposed method
encompasses importance weights and the so-called overlap weights as two extreme
cases, allowing for a better bias-variance trade-off in between. Numerical
examples demonstrate that the use of our weighting method can greatly improve
ITR estimation for the target population compared with other weighting methods.
|
We calculate the mass spectrum and the structure of the positronium system at
a strong coupling in a basis light-front approach. We start from the
light-front QED Hamiltonian and retain one dynamical photon in our basis. We
perform the fermion mass renormalization associated with the nonperturbative
fermion self-energy correction. We present the resulting mass spectrum and wave
functions for the selected low-lying states. Next, we apply this approach to
QCD and calculate the heavy meson system with one dynamical gluon retained. We
illustrate the obtained mass spectrum and wave functions for the selected
low-lying states.
|
We study compressible types in the context of (local and global) NIP. By
extending a result in machine learning theory (the existence of a bound on the
recursive teaching dimension), we prove density of compressible types. Using
this, we obtain explicit uniform honest definitions for NIP formulas (answering
a question of Eshel and the second author), and build compressible models in
countable NIP theories.
|