Objective: Magnetic Resonance Spectroscopy (MRS) is a noninvasive tool for
revealing metabolic information. One challenge of MRS is the relatively low
Signal-to-Noise Ratio (SNR) due to the low concentrations of metabolites. To
improve the SNR, the most common approach is to average signals acquired
multiple times. The data acquisition time, however, increases proportionally,
making the scan uncomfortable or even unbearable for the subject. Methods: By
exploiting the multiply sampled data, a deep learning denoising approach is
proposed to learn a mapping from the low-SNR signal to the high-SNR one.
Results: Results on simulated and in vivo data show that the proposed method
significantly reduces the data acquisition time with only slightly compromised
metabolic accuracy. Conclusion: A deep learning denoising method was proposed
to significantly shorten the time of data acquisition, while maintaining signal
accuracy and reliability. Significance: This work provides a solution to the
fundamental low-SNR problem in MRS with artificial intelligence.
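As a minimal illustration of the low-SNR-to-high-SNR mapping described above,
the sketch below trains a small 1D convolutional denoiser on paired spectra.
The architecture, sizes, and data here are illustrative assumptions, not the
paper's model.

```python
# A minimal sketch (not the paper's architecture): a 1D convolutional
# network that maps a low-SNR MRS spectrum to a high-SNR target,
# trained on pairs built from few-average vs. many-average signals.
import torch
import torch.nn as nn

class SpectralDenoiser(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size=9, padding=4),
        )

    def forward(self, x):
        # x: (batch, 1, n_points) low-SNR spectrum; learn the residual
        return x + self.net(x)

model = SpectralDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
low_snr = torch.randn(8, 1, 1024)    # stand-in for few-average signals
high_snr = torch.randn(8, 1, 1024)   # stand-in for many-average targets
loss = nn.functional.mse_loss(model(low_snr), high_snr)
loss.backward()
opt.step()
```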
|
We present an approach for encoding visual task relationships to improve
model performance in an Unsupervised Domain Adaptation (UDA) setting. Semantic
segmentation and monocular depth estimation are shown to be complementary
tasks; in a multi-task learning setting, a proper encoding of their
relationships can further improve performance on both tasks. Motivated by this
observation, we propose a novel Cross-Task Relation Layer (CTRL), which encodes
task dependencies between the semantic and depth predictions. To capture the
cross-task relationships, we propose a neural network architecture that
contains task-specific and cross-task refinement heads. Furthermore, we propose
an Iterative Self-Learning (ISL) training scheme, which exploits semantic
pseudo-labels to provide extra supervision on the target domain. We
experimentally observe improvements in both tasks' performance because the
complementary information present in these tasks is better captured.
Specifically, we show that: (1) our approach improves performance on all tasks
when they are complementary and mutually dependent; (2) the CTRL helps to
improve the performance of both the semantic segmentation and depth estimation tasks in
the challenging UDA setting; (3) the proposed ISL training scheme further
improves the semantic segmentation performance. The implementation is available
at https://github.com/susaha/ctrl-uda.
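For intuition, the sketch below shows one way task-specific heads and a
cross-task refinement head can be wired together; layer shapes and names are
illustrative assumptions, and the authors' actual CTRL implementation lives in
the repository linked above.

```python
# Illustrative sketch of task-specific heads plus cross-task refinement,
# in the spirit of CTRL; not the authors' code.
import torch
import torch.nn as nn

class TwoTaskHead(nn.Module):
    def __init__(self, in_ch: int, num_classes: int):
        super().__init__()
        self.semantic = nn.Conv2d(in_ch, num_classes, 1)  # segmentation logits
        self.depth = nn.Conv2d(in_ch, 1, 1)               # depth regression
        # cross-task refinement consumes both predictions jointly
        self.refine = nn.Conv2d(num_classes + 1, num_classes + 1, 3, padding=1)

    def forward(self, feats):
        sem, dep = self.semantic(feats), self.depth(feats)
        refined = self.refine(torch.cat([sem, dep], dim=1))
        # residual refinement of each task from the joint representation
        return sem + refined[:, :-1], dep + refined[:, -1:]

head = TwoTaskHead(in_ch=256, num_classes=19)
sem, dep = head(torch.randn(2, 256, 64, 128))
```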
|
We propose a bootstrap program for CFTs near intersecting boundaries which
form a co-dimension 2 edge. We describe the kinematical setup and show that
bulk 1-pt functions and bulk-edge 2-pt functions depend on a non-trivial
cross-ratio and on the angle between the boundaries. Using the boundary OPE
(BOE) with respect to each boundary, we derive two independent conformal block
expansions for these correlators. The matching of the two BOE expansions leads
to a crossing equation. We analytically solve this equation in several simple
cases, notably for a free bulk field, where we recover Feynman-diagrammatic
results by Cardy.
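Schematically, and with illustrative notation (the precise blocks and
coefficients are defined in the paper), the crossing equation equates the two
BOE expansions of the same correlator:

```latex
% Schematic form of the crossing equation: expanding a bulk one-point
% function in the BOE of either boundary must give the same answer,
\[
  \langle \mathcal{O}(x) \rangle
  = \sum_{\widehat{\mathcal{O}}} \mu^{(1)}_{\widehat{\mathcal{O}}}\,
    f_{\widehat{\mathcal{O}}}(\xi, \theta)
  = \sum_{\widehat{\mathcal{O}}'} \mu^{(2)}_{\widehat{\mathcal{O}}'}\,
    f_{\widehat{\mathcal{O}}'}(\xi, \theta),
\]
% with $\xi$ the cross-ratio, $\theta$ the angle between the boundaries,
% $\mu^{(i)}$ the BOE coefficients with respect to boundary $i$, and $f$
% the corresponding conformal blocks.
```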
|
With the development of modern radio interferometers, wide-field continuum
surveys have been planned and undertaken, for which accurate wide-field imaging
methods are essential. Based on the widely used W-stacking method, we propose a
new wide-field imaging algorithm that can synthesize visibility data from a
model of the sky brightness via degridding and construct dirty maps from
measured visibility data via gridding. The results carry the smallest
approximation error yet achieved relative to the exact calculation using the
direct Fourier transform. In contrast to the original W-stacking method, the new
algorithm performs least-misfit optimal gridding (and degridding) in all three
directions, and is capable of achieving much higher accuracy than is feasible
with the original algorithm. In particular, accuracy at the level of single
precision arithmetic is readily achieved by choosing a least-misfit convolution
function of width W=7 and an image cropping parameter of x0=0.25. If the
accuracy required is only that attained by the original W-stacking method, the
computational cost for both the gridding and FFT steps can be substantially
reduced using the proposed method by making an appropriate choice of the width
and image cropping parameters.
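As a much-simplified illustration of convolutional gridding with a width-W
kernel, the 1D sketch below spreads each visibility over its nearest W grid
cells; the Gaussian kernel is a placeholder, not the least-misfit convolution
function.

```python
# 1D sketch of convolutional gridding (illustrative only: a Gaussian
# placeholder stands in for the least-misfit kernel, and the full 3D
# W-stacking machinery is omitted).
import numpy as np

def grid_visibilities(u, vis, n_grid, du, W=7):
    """Spread each visibility onto the nearest W grid cells."""
    grid = np.zeros(n_grid, dtype=complex)
    taps = np.arange(W) - W // 2
    for ui, vi in zip(u, vis):
        k = int(round(ui / du)) + n_grid // 2     # nearest grid index
        frac = ui / du - round(ui / du)           # sub-cell offset
        c = np.exp(-0.5 * (taps - frac) ** 2)     # placeholder kernel
        grid[k + taps] += vi * c / c.sum()
    return grid

u = np.array([3.1, -7.6, 12.2])        # baseline coordinates (wavelengths)
vis = np.array([1 + 1j, 0.5j, -0.2])   # measured visibilities
g = grid_visibilities(u, vis, n_grid=256, du=0.5)
dirty = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(g)))  # dirty map (1D)
```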
|
We give lower bounds on the performance of two of the most popular sampling
methods in practice, the Metropolis-adjusted Langevin algorithm (MALA) and
multi-step Hamiltonian Monte Carlo (HMC) with a leapfrog integrator, when
applied to well-conditioned distributions. Our main result is a nearly-tight
lower bound of $\widetilde{\Omega}(\kappa d)$ on the mixing time of MALA from
an exponentially warm start, matching a line of algorithmic results up to
logarithmic factors and answering an open question of Chewi et al. We also
show that a polynomial dependence on dimension is necessary for the relaxation
time of HMC under any number of leapfrog steps, and bound the gains achievable
by changing the step count. Our HMC analysis draws upon a novel connection
between leapfrog integration and Chebyshev polynomials, which may be of
independent interest.
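For reference, the leapfrog integrator named above is standard; a textbook
sketch follows (this is the integrator itself, not the paper's lower-bound
construction).

```python
# Textbook leapfrog integrator used inside multi-step HMC for
# H(x, p) = U(x) + |p|^2 / 2: K steps of step size eta.
import numpy as np

def leapfrog(x, p, grad_U, eta, K):
    p = p - 0.5 * eta * grad_U(x)       # initial half step in momentum
    for _ in range(K - 1):
        x = x + eta * p                 # full step in position
        p = p - eta * grad_U(x)         # full step in momentum
    x = x + eta * p
    p = p - 0.5 * eta * grad_U(x)       # final half step in momentum
    return x, p

# Example: standard Gaussian target, U(x) = |x|^2 / 2, grad_U(x) = x.
x, p = np.ones(3), 0.5 * np.ones(3)
x_new, p_new = leapfrog(x, p, grad_U=lambda x: x, eta=0.1, K=10)
```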
|
Generalized numbers, arithmetic operators and derivative operators, grouped
in four classes based on symmetry features, are introduced. Their building
element is the pair of $q$-logarithm/$q$-exponential inverse functions. Some of
the objects were previously described in the literature, while others are newly
defined. Commutativity, associativity and distributivity, and also a pair of
linear/nonlinear derivatives are observed within each class. Two entropic
functionals emerge from the formalism, one of which is the nonadditive Tsallis
entropy.
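For reference, the standard definitions of the building blocks named above are
(conventions may differ from the paper's):

```latex
\[
  \ln_q x = \frac{x^{1-q} - 1}{1-q}, \qquad
  e_q^{x} = \bigl[\,1 + (1-q)\,x\,\bigr]_{+}^{\frac{1}{1-q}},
\]
% mutual inverses that reduce to $\ln x$ and $e^{x}$ as $q \to 1$.
% The nonadditive Tsallis entropy built from them is
\[
  S_q = k\,\frac{1 - \sum_i p_i^{\,q}}{q - 1}.
\]
```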
|
Stellar activity due to different processes (magnetic activity, photospheric
flows) affects the measurement of radial velocities (RV). Radial velocities
have been widely used to detect exoplanets, although the stellar signal
significantly impacts the detection and characterisation performance,
especially for low-mass planets. On the other hand, RV time series are also
very rich in information on stellar processes. In this lecture, I review the
context of RV observations, describe how radial velocities are measured, and
the properties of typical observations. I present the challenges represented by
stellar activity for exoplanet studies, and describe the processes at play.
Finally, I review the approaches which have been developed, including
observations and simulations, as well as solar and stellar comparisons.
|
In this paper, a unified batch-online learning approach is introduced to
learn a linear representation of nonlinear system dynamics using the Koopman
operator. The presented system modeling approach leverages a novel incremental
Koopman-based update law that retrieves a mini-batch of samples stored in a
memory to minimize not only the instantaneous Koopman operator
identification error but also the identification errors for the batch of
retrieved samples. Discontinuous modifications of gradient flows are presented
for the online update law to assure finite-time convergence under
easy-to-verify conditions defined on the batch of data. Therefore, this unified
online-batch framework allows performing a joint sample- and time-domain
analysis of the convergence of the Koopman operator's parameters. More specifically, it is
shown that if the collected mini-batch of samples guarantees a rank condition,
then finite-time guarantee in the time domain can be certified and the settling
time depends on the quality of collected samples being reused in the update
law. Moreover, the efficiency of the proposed Koopman-based update law is
further analyzed by showing that the identification regret in continuous time
grows sub-linearly with time. Furthermore, to avoid learning corrupted dynamics
due to the selection of an inappropriate set of Koopman observables, a
higher-layer meta learner employs a discrete Bayesian optimization algorithm to
obtain the best library of observable functions for the operator. Since
finite-time convergence of the Koopman model for each set of observables is
guaranteed under a rank condition on the stored data, the fitness of each set
of observables can be obtained based on the identification error on the stored
samples in the proposed framework, even without implementing any controller
based on the learned system.
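To make the batch identification step concrete, the sketch below fits a
Koopman matrix to a mini-batch of lifted samples by plain least squares; the
observable library is an assumption, and the paper's incremental update law,
finite-time analysis, and Bayesian observable selection are not reproduced.

```python
# Minimal batch sketch of Koopman operator identification on lifted
# samples (plain least squares; illustrative observable library).
import numpy as np

def lift(x):
    """Illustrative observables: [x1, x2, x1^2, x1*x2]."""
    x1, x2 = x
    return np.array([x1, x2, x1 ** 2, x1 * x2])

# mini-batch of stored state transitions (x_k -> x_{k+1})
states = [np.array([0.9, 0.1]), np.array([0.7, 0.3]),
          np.array([0.5, 0.2]), np.array([0.3, 0.4]),
          np.array([0.2, 0.1])]
Phi_X = np.stack([lift(x) for x in states[:-1]])   # (n_samples, n_obs)
Phi_Y = np.stack([lift(x) for x in states[1:]])

# K minimizes ||Phi_X K - Phi_Y||_F; the rank of Phi_X plays the role
# of the rank condition on stored data mentioned above.
K, *_ = np.linalg.lstsq(Phi_X, Phi_Y, rcond=None)
residual = np.linalg.norm(Phi_X @ K - Phi_Y)   # batch identification error
```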
|
Recently, Ni and Pan proved a $q$-congruence on certain sums involving
central $q$-binomial coefficients, which was conjectured by Guo. In this paper,
we give a generalization of this $q$-congruence and confirm another
$q$-congruence, also conjectured by Guo. Our proof uses Ni and Pan's technique
and a simple $q$-congruence observed by Guo and Schlosser.
|
The current COVID-19 pandemic has shown us that we are still facing
unpredictable challenges in our society. The necessary constraints on social
interactions have heavily affected how we envision and prepare the future of
social robots and artificial agents in general. Adapting current affective perception
models towards constrained perception based on the hard separation between
facial perception and affective understanding would help us to provide robust
systems. In this paper, we perform an in-depth analysis of how recognizing
affect from persons with masks differs from general facial expression
perception. We evaluate how the recently proposed FaceChannel adapts towards
recognizing facial expressions from persons with masks. In our analysis, we
evaluate different training and fine-tuning schemes to understand better the
impact of masked facial expressions. We also perform specific feature-level
visualization to demonstrate how the inherent capabilities of the FaceChannel
to learn and combine facial features change when in a constrained social
interaction scenario.
|
The segmentation of coronary arteries by convolutional neural network is
promising yet requires a large amount of labor-intensive manual annotations.
Transferring knowledge from retinal vessels in widely available public labeled
fundus images (FIs) has the potential to reduce the annotation requirement for
coronary artery segmentation in X-ray angiograms (XAs) due to their common
tubular structures. However, this is challenged by the cross-anatomy domain shift
caused by the intrinsically different vesselness characteristics of different
anatomical regions, compounded by differing imaging protocols. To solve this
problem, we propose a Semi-Supervised Cross-Anatomy Domain Adaptation (SS-CADA)
which requires only limited annotations for coronary arteries in XAs. With the
supervision from a small number of labeled XAs and publicly available labeled
FIs, we propose a vesselness-specific batch normalization (VSBN) to
individually normalize feature maps for them considering their different
cross-anatomic vesselness characteristics. In addition, to further facilitate
the annotation efficiency, we employ a self-ensembling mean-teacher (SEMT) to
exploit abundant unlabeled XAs by imposing a prediction consistency constraint.
Extensive experiments show that our SS-CADA is able to solve the challenging
cross-anatomy domain shift, achieving accurate segmentation for coronary
arteries given only a small number of labeled XAs.
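The sketch below illustrates the general pattern of domain-specific batch
normalization that VSBN builds on: shared convolution weights with a separate
BN branch per anatomy. Names and structure are assumptions, not the authors'
code.

```python
# Illustrative domain-specific batch normalization: one BN branch per
# anatomy (XAs vs. FIs), shared convolution weights.
import torch
import torch.nn as nn

class VesselBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)  # shared weights
        self.bn = nn.ModuleDict({
            "xa": nn.BatchNorm2d(out_ch),  # X-ray angiogram statistics
            "fi": nn.BatchNorm2d(out_ch),  # fundus image statistics
        })

    def forward(self, x, domain: str):
        return torch.relu(self.bn[domain](self.conv(x)))

block = VesselBlock(1, 32)
xa_feats = block(torch.randn(4, 1, 128, 128), domain="xa")
fi_feats = block(torch.randn(4, 1, 128, 128), domain="fi")
```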
|
Semantic segmentation using convolutional neural networks (CNN) is a crucial
component in image analysis. Training a CNN to perform semantic segmentation
requires a large amount of labeled data, where the production of such labeled
data is both costly and labor intensive. Semi-supervised learning algorithms
address this issue by utilizing unlabeled data and so reduce the amount of
labeled data needed for training. In particular, data augmentation techniques
such as CutMix and ClassMix generate additional training data from existing
labeled data. In this paper we propose a new approach for data augmentation,
termed ComplexMix, which incorporates aspects of CutMix and ClassMix with
improved performance. The proposed approach has the ability to control the
complexity of the augmented data while attempting to be semantically-correct
and address the tradeoff between complexity and correctness. The proposed
ComplexMix approach is evaluated on a standard dataset for semantic
segmentation and compared to other state-of-the-art techniques. Experimental
results show that our method yields improvement over state-of-the-art methods
on standard datasets for semantic image segmentation.
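To convey the flavor of class-based mixing, the sketch below pastes the pixels
of a random half of the classes from one labeled image onto another, in the
spirit of ClassMix; ComplexMix's complexity control is not reproduced here.

```python
# Simplified class-based mixing sketch (ClassMix-style); illustrative.
import numpy as np

def class_mix(img_a, lbl_a, img_b, lbl_b, rng):
    classes = rng.permutation(np.unique(lbl_a))
    chosen = classes[: max(1, len(classes) // 2)]   # half of A's classes
    mask = np.isin(lbl_a, chosen)                   # pixels of chosen classes
    img = np.where(mask[..., None], img_a, img_b)   # paste A onto B
    lbl = np.where(mask, lbl_a, lbl_b)
    return img, lbl

rng = np.random.default_rng(0)
img_a, img_b = rng.random((64, 64, 3)), rng.random((64, 64, 3))
lbl_a, lbl_b = rng.integers(0, 5, (64, 64)), rng.integers(0, 5, (64, 64))
mixed_img, mixed_lbl = class_mix(img_a, lbl_a, img_b, lbl_b, rng)
```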
|
We study reinforcement learning (RL) with linear function approximation under
the adaptivity constraint. We consider two popular limited adaptivity models:
the batch learning model and the rare policy switch model, and propose two
efficient online RL algorithms for episodic linear Markov decision processes,
where the transition probability and the reward function can be represented as
a linear function of some known feature mapping. Specifically, for the batch
learning model, our proposed LSVI-UCB-Batch algorithm achieves an $\tilde
O(\sqrt{d^3H^3T} + dHT/B)$ regret, where $d$ is the dimension of the feature
mapping, $H$ is the episode length, $T$ is the number of interactions and $B$
is the number of batches. Our result suggests that it suffices to use only
$\sqrt{T/dH}$ batches to obtain $\tilde O(\sqrt{d^3H^3T})$ regret. For the rare
policy switch model, our proposed LSVI-UCB-RareSwitch algorithm enjoys an
$\tilde O(\sqrt{d^3H^3T[1+T/(dH)]^{dH/B}})$ regret, which implies that $dH\log
T$ policy switches suffice to obtain the $\tilde O(\sqrt{d^3H^3T})$ regret. Our
algorithms achieve the same regret as the LSVI-UCB algorithm (Jin et al.,
2019), yet with a substantially smaller amount of adaptivity. We also establish
a lower bound for the batch learning model, which suggests that the dependency
on $B$ in our regret bound is tight.
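A quick check of the claimed batch count: plugging $B = \sqrt{T/(dH)}$ into
the LSVI-UCB-Batch bound makes the two terms match,

```latex
\[
  \frac{dHT}{B} = dHT\sqrt{\frac{dH}{T}} = \sqrt{d^3H^3T},
\]
```

so $\tilde O(\sqrt{d^3H^3T} + dHT/B) = \tilde O(\sqrt{d^3H^3T})$ with only
$\sqrt{T/(dH)}$ batches.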
|
The consortium of the European project 16NRM05 designed a novel ionisation
vacuum gauge in which the electrons take a straight path from the emitting
cathode through the ionisation space into a Faraday cup. Compared to existing
ionisation vacuum gauges, this has the advantage that the electron path length
is well defined. It is independent of the point and angle of emission and is
not affected by space charge around the collector. In addition, the electrons
do not hit the anode where they can be reflected, generate secondary electrons
or cause desorption of neutrals or ions. This design was chosen in order to
develop a more stable ionisation vacuum gauge suitable as a reference standard
in the range of 10^-6 Pa to 10^-2 Pa for the calibration of other vacuum gauges
and quadrupole mass spectrometers. Prototype gauges were produced by two
different manufacturers and showed predictable sensitivities with a very small
spread (< 1.5%), very good short-term repeatability (< 0.05%) and
reproducibility (< 1%), even after changing the emission cathode and drop-down
tests. These characteristics make the gauge also attractive for industrial
applications, because a gauge exchange does not require calibration or
re-adjustment of a process.
|
In this work we present a detailed analysis of variational quantum phase
estimation (VQPE), a method based on real-time evolution for ground and excited
state estimation on near-term hardware. We derive the theoretical ground on
which the approach stands, and demonstrate that it provides one of the most
compact variational expansions to date for solving strongly correlated
Hamiltonians. At the center of VQPE lies a set of equations, with a simple
geometrical interpretation, which provides conditions for the time evolution
grid in order to decouple eigenstates out of the set of time evolved expansion
states, and connects the method to the classical filter diagonalization
algorithm. Further, we introduce what we call the unitary formulation of VQPE,
in which the number of matrix elements that need to be measured scales linearly
with the number of expansion states, and we provide an analysis of the effects
of noise which substantially improves previous considerations. The unitary
formulation allows for a direct comparison to iterative phase estimation. Our
results mark VQPE as both a natural and highly efficient quantum algorithm for
ground and excited state calculations of general many-body systems. We
demonstrate a hardware implementation of VQPE for the transverse field Ising
model. Further, we illustrate its power on a paradigmatic example of strong
correlation (Cr2 in the SVP basis set), and show that it is possible to reach
chemical accuracy with as few as ~50 timesteps.
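In standard subspace notation (details and conventions as in the paper), the
variational problem underlying such real-time-evolution expansions is the
generalized eigenvalue problem built from the time-evolved states:

```latex
% With expansion states $|\phi_j\rangle = e^{-iHt_j}|\Phi\rangle$, solve
\[
  \mathbf{H}\,\mathbf{C} = \mathbf{S}\,\mathbf{C}\,\mathbf{E},
  \qquad
  H_{jk} = \langle \phi_j | H | \phi_k \rangle, \quad
  S_{jk} = \langle \phi_j | \phi_k \rangle .
\]
% Both matrices depend only on the differences $t_j - t_k$, which is the
% structure the unitary formulation exploits so that the number of
% measured matrix elements scales linearly with the expansion size.
```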
|
Polynomial factorization in conventional sense is an ill-posed problem due to
its discontinuity with respect to coefficient perturbations, making it a
challenge for numerical computation using empirical data. As a regularization,
this paper formulates the notion of numerical factorization based on the
geometry of polynomial spaces and the stratification of factorization
manifolds. Furthermore, this paper establishes the existence, uniqueness,
Lipschitz continuity, condition number, and convergence of the numerical
factorization to the underlying exact factorization, leading to a robust and
efficient algorithm with a MATLAB implementation capable of accurate polynomial
factorizations using floating point arithmetic even if the coefficients are
perturbed.
|
In this paper we propose a novel approach to realize forecast verification.
Specifically, we introduce a strategy for assessing the severity of forecast
errors based on the evidence that, on the one hand, a false alarm just
anticipating an occurring event is better than one in the middle of consecutive
non-occurring events, and that, on the other hand, a miss of an isolated event
has a worse impact than a miss of a single event that is part of several
consecutive occurrences. Relying on this idea, we introduce a novel definition
of confusion matrix and skill scores giving greater importance to the value of
the prediction rather than to its quality. Then, we introduce a deep ensemble
learning procedure for binary classification, in which the probabilistic
outcomes of a neural network are clustered via optimization of these
value-weighted skill scores. Finally, we show the performance of this approach
on three applications concerned with pollution, space weather and
stock price forecasting.
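As a toy illustration of valuing errors by their temporal context, the sketch
below charges a false alarm less when it immediately precedes an occurring
event and charges a miss more when the event is isolated; the specific weights
are invented for illustration, not the paper's value-weighted scores.

```python
# Toy context-weighted error counting (weights are illustrative).
import numpy as np

def weighted_errors(y_true, y_pred):
    fa_cost, miss_cost = 0.0, 0.0
    n = len(y_true)
    for t in range(n):
        window = y_true[max(0, t - 1):t + 2]        # local context
        if y_pred[t] == 1 and y_true[t] == 0:
            # cheaper false alarm if it just anticipates an event
            fa_cost += 0.5 if (t + 1 < n and y_true[t + 1] == 1) else 1.0
        elif y_pred[t] == 0 and y_true[t] == 1:
            # costlier miss if the event is isolated in its window
            miss_cost += 1.0 if window.sum() > 1 else 2.0
    return fa_cost, miss_cost

y_true = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0])
y_pred = np.array([0, 1, 1, 0, 1, 0, 0, 0, 0])
print(weighted_errors(y_true, y_pred))   # (0.5, 3.0)
```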
|
This article aims to numerically investigate the combustion phenomenon of
coaxial gaseous CH4/LOx flames at supercritical pressures. The choice of turbulence
model, real gas model, and chemical kinetics model are the critical parameters
in numerical simulations of cryogenic combustion at high pressure. At this
supercritical operating pressure, the ideal gas law does not remain valid for
such cases. Therefore, we have systematically carried out a comparative study
to analyze the importance of real gas models, turbulence parameters, and
chemical kinetics at such conditions. The comparison of real gas models with
the NIST database reveals better conformity of SRK (Soave Redlich Kwong
Equation of State (EoS)) model predictions with the database. Further, the
computed results indicate that the standard k-ε turbulence model with a modified
constant better captures the flame shape and temperature peak position compared
to other RANS-based turbulence models when invoking the non-premixed steady
β-PDF flamelet model for simulating the combustion process. Furthermore, a
comparative study of two different chemical kinetics models indicates
that the reduced Jones Lindstedt mechanism can accurately predict the flame
characteristics with the least computational cost. Finally, we have studied the
effect of chamber pressure and LOx inlet temperature on the flame
characteristics. The flame characteristics exhibit a strong sensitivity towards
the chamber pressure due to the weakening of the pseudo-boiling effect with an
increase in pressure. As a consequence of lower turbulent rates of energy and
mass transfer through the transcritical mixing layer, the flame spreading
becomes narrower at elevated pressure and temperature, thereby yielding an
increased flame length at transcritical conditions.
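For reference, the SRK model referenced above has the standard textbook form
(the paper's implementation details may differ):

```latex
\[
  p = \frac{RT}{V_m - b} - \frac{a\,\alpha(T)}{V_m\,(V_m + b)},
  \qquad
  \alpha(T) = \Bigl[1 + m\bigl(1 - \sqrt{T/T_c}\bigr)\Bigr]^2,
\]
% with $a$ and $b$ fixed by the critical constants and
% $m = 0.480 + 1.574\,\omega - 0.176\,\omega^2$ for acentric factor $\omega$.
```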
|
We study the effect of changes in the parameters of a two-dimensional
potential energy surface on the phase space structures relevant for chemical
reaction dynamics. The changes in the potential energy are representative of
chemical reactions such as isomerization between two structural conformations
or dissociation of a molecule with an intermediate. We present a two degrees of
freedom quartic Hamiltonian that shows pitchfork bifurcation when the
parameters are varied and we derive the bifurcation criteria relating the
parameters. Next, we describe the phase space structures - unstable periodic
orbits and their associated invariant manifolds, and phase space dividing
surfaces - for systems whose trajectories can undergo reaction, defined as the
crossing of a potential energy barrier. Finally, we quantify the reaction
dynamics for these systems by obtaining the directional flux and gap time
distribution to illustrate the dependence on total energy and the coupling
strength between the two degrees of freedom.
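An illustrative (not necessarily the paper's) Hamiltonian of the type
described above is

```latex
\[
  H = \frac{p_x^2}{2} + \frac{p_y^2}{2}
      + \frac{x^4}{4} - \frac{\mu}{2}\,x^2
      + \frac{\omega^2}{2}\,y^2 + \frac{\varepsilon}{2}\,x^2 y^2 ,
\]
% whose equilibria on the $x$-axis satisfy $x(x^2 - \mu) = 0$: a single
% well for $\mu \le 0$ bifurcates into a double well separated by a
% saddle at the origin as $\mu$ crosses zero (a pitchfork), with
% $\varepsilon$ coupling the two degrees of freedom.
```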
|
Services on the public Internet are frequently scanned, then subject to
brute-force and denial-of-service attacks. We would like to run such services
stealthily, available to friends but hidden from adversaries. In this work, we
propose a moving target defense named "Chhoyhopper" that utilizes the vast IPv6
address space to conceal publicly available services. The client and server
hop to different IPv6 addresses in a pattern based on a shared, pre-distributed
secret and the time-of-day. By hopping over a /64 prefix, services cannot be
found by active scanners, and passively observed information is useless after
two minutes. We demonstrate our system with SSH, and show that it can be
extended to other applications.
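A minimal sketch of the hopping idea (illustrative, not the authors'
implementation): both endpoints derive the low 64 bits of the address within a
fixed /64 prefix from an HMAC of the shared secret and the current time slot.

```python
# Time-slotted IPv6 address hopping sketch; prefix, secret, and slot
# length are illustrative assumptions.
import hashlib
import hmac
import ipaddress
import time

PREFIX = ipaddress.IPv6Network("2001:db8:1234:5678::/64")  # example /64
SECRET = b"pre-distributed shared secret"
SLOT_SECONDS = 120   # observed addresses go stale after two minutes

def current_address(now=None):
    slot = int((time.time() if now is None else now) // SLOT_SECONDS)
    digest = hmac.new(SECRET, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    iid = int.from_bytes(digest[:8], "big")   # 64-bit interface identifier
    return PREFIX.network_address + iid       # stays inside the /64

# Client and server compute the same address independently:
print(current_address())
```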
|
Most video super-resolution methods focus on restoring high-resolution video
frames from low-resolution videos without taking into account compression.
However, most videos on the web or mobile devices are compressed, and the
compression can be severe when the bandwidth is limited. In this paper, we
propose a new compression-informed video super-resolution model to restore
high-resolution content without introducing artifacts caused by compression.
The proposed model consists of three modules for video super-resolution:
bi-directional recurrent warping, detail-preserving flow estimation, and
Laplacian enhancement. All these three modules are used to deal with
compression properties such as the location of the intra-frames in the input
and smoothness in the output frames. For thorough performance evaluation, we
conducted extensive experiments on standard datasets with a wide range of
compression rates, covering many real video use cases. We showed that our
method not only recovers high-resolution content on uncompressed frames from
the widely-used benchmark datasets, but also achieves state-of-the-art
performance in super-resolving compressed videos based on numerous quantitative
metrics. We also evaluated the proposed method by simulating streaming from
YouTube to demonstrate its effectiveness and robustness. The source codes and
trained models are available at
https://github.com/google-research/google-research/tree/master/comisr.
|
The number of zeros seen by a particle around small clusters of other particles
is encoded in the root partition, and partly characterizes the correlations in
fractional quantum Hall trial wavefunctions. We explore a generalization
wherein we consider the counting of zeros seen by a cluster of particles on
another cluster. The numbers of such zeros between clusters in the Laughlin
wavefunctions are fully determined by the root partition. However, such a
counting is unclear for general Jain states where a polynomial expansion is
difficult. Here we consider the simplest state beyond the Laughlin
wavefunction, namely a state containing a single quasiparticle of the Laughlin
state. We show numerically and analytically that in the trial wavefunction for
the quasiparticle of the Laughlin state, counting of zeros seen by a cluster on
another cluster depends on the relative dimensions of the two clusters. We
further ask if the patterns in the counting of zeros extend, in at least an
approximate sense, to wavefunctions beyond the trial states. Using numerical
computations in systems up to $N=9$, we present results for the statistical
distribution of zeros around particle clusters at the center of an FQH droplet
in the ground state of a Hamiltonian that is perturbed away from the $V_1$
interaction (short-range repulsion). Evolution of this distribution with the
strength of the perturbation shows that the counting of zeros is altered by
even a weak perturbation away from the parent Hamiltonian, though the
perturbations do not change the phase of the system.
|
Stochastic high dimensional bandit problems with low dimensional structures
are useful in different applications such as online advertising and drug
discovery. In this work, we propose a simple unified algorithm for such
problems and present a general analysis framework for the regret upper bound of
our algorithm. We show that under some mild unified assumptions, our algorithm
can be applied to different high dimensional bandit problems. Our framework
utilizes the low dimensional structure to guide the parameter estimation in the
problem, therefore our algorithm achieves the best regret bounds in the LASSO
bandit, as well as novel bounds in the low-rank matrix bandit, the group sparse
matrix bandit, and in a new problem: the multi-agent LASSO bandit.
|
Agriculture is the foundation of human civilization. However, the rapid
increase and aging of the global population pose challenges to this cornerstone
by demanding more healthy and fresh food. Internet of Things (IoT) technology
makes modern autonomous greenhouses a viable and reliable engine of food
production. However, the educated and skilled labor capable of overseeing
high-tech greenhouses is scarce. Artificial intelligence (AI) and cloud
computing technologies are promising solutions for precision control and
high-efficiency production in such controlled environments. In this paper, we
propose a smart agriculture solution, namely iGrow: (1) we use IoT and cloud
computing technologies to measure, collect, and manage growing data, to support
iteration of our decision-making AI module, which consists of an incremental
model and an optimization algorithm; (2) we propose a three-stage incremental
model based on accumulating data, enabling growers/central computers to
schedule control strategies conveniently and at low cost; (3) we propose a
model-based iterative optimization algorithm, which can dynamically optimize
the greenhouse control strategy in real-time production. In the simulated
experiment, evaluation results show the accuracy of our incremental model is
comparable to an advanced tomato simulator, while our optimization algorithms
can beat the champion of the 2nd Autonomous Greenhouse Challenge. Compelling
results from the A/B test in real greenhouses demonstrate that our solution
significantly increases production (commercially sellable fruits) (+ 10.15%)
and net profit (+ 87.07%) with statistical significance compared to planting
experts.
|
Recent urbanization has coincided with the enrichment of geotagged data, such
as street view and point-of-interest (POI). Region embedding enhanced by the
richer data modalities has enabled researchers and city administrators to
understand the built environment, socioeconomics, and the dynamics of cities
better. While some efforts have been made to simultaneously use multi-modal
inputs, existing methods can be improved by incorporating different measures of
'proximity' in the same embedding space - leveraging not only the data that
characterizes the regions (e.g., street view, local businesses pattern) but
also those that depict the relationship between regions (e.g., trips, road
network). To this end, we propose a novel approach to integrate multi-modal
geotagged inputs as either node or edge features of a multi-graph based on
their relations with the neighborhood region (e.g., tiles, census block, ZIP
code region, etc.). We then learn the neighborhood representation based on a
contrastive-sampling scheme from the multi-graph. Specifically, we use street
view images and POI features to characterize neighborhoods (nodes) and use
human mobility to characterize the relationship between neighborhoods (directed
edges). We show the effectiveness of the proposed methods with quantitative
downstream tasks as well as qualitative analysis of the embedding space: The
embedding we trained outperforms the ones using only unimodal data as regional
inputs.
|
Context. The recent discovery of much greater magnetic flux cancellation
taking place at the photosphere than previously realised has led us in our
previous works to suggest magnetic reconnection driven by flux cancellation as
the cause of a wide range of dynamic phenomena, including jets of various kinds
and solar atmospheric heating. Aims. Previously, the theory considered energy
release at a two-dimensional current sheet. Here we develop the theory further
by extending it to an axisymmetric current sheet in three dimensions without
resorting to complex variable theory. Methods. We analytically study
reconnection and treat the current sheet as a three-dimensional structure. We
apply the theory to the cancellation of two fragments of equal but opposite
flux that approach one another and are located in an overlying horizontal
magnetic field. Results. The energy release occurs in two phases. During Phase
1, a separator is formed and reconnection is driven at it as it rises to a
maximum height and then moves back down to the photosphere, heating the plasma
and accelerating a plasma jet as it does so. During Phase 2 the fluxes cancel
in the photosphere and accelerate a mixture of cool and hot plasma upwards.
|
Topological spin textures can be found in both two-dimensional and
three-dimensional nanostructures, which are of great importance to advanced
spintronic applications. Here we report the current-induced skyrmion tube
dynamics in three-dimensional synthetic antiferromagnetic (SyAF) bilayer and
multilayer nanostructures. It is found that the SyAF skyrmion tube made of
thinner sublayer skyrmions is more stable during its motion, which ensures that
a higher speed of the skyrmion tube can be reached effectively at larger
driving current. In the SyAF multilayer with a given total thickness, the
current-induced deformation of the SyAF skyrmion tube decreases with an
increasing number of interfaces; namely, the rigidity of the SyAF skyrmion tube
with a given thickness increases with the number of ferromagnetic (FM) layers.
For the SyAF multilayer with an even number of FM layers, the skyrmion Hall
effect can be eliminated when the thicknesses of all FM layers are identical.
A larger damping parameter leads to smaller deformation and a slower speed of
the SyAF skyrmion tube, while a larger field-like torque leads to larger
deformation and a higher speed. Our results are useful for
understanding the dynamic behaviors of three-dimensional topological spin
textures and may provide guidelines for building SyAF spintronic devices.
|
The liquid-solid diffusion couple technique, supported by phenomenological
analysis and nano-indentation tests, is proposed on account of the relatively
low melting point of Mg to explore diffusion mobility and creep
deformation. The potential of this strategy is demonstrated in Mg-Ga hcp
alloys, where both the Ga solute (i.e., impurity) and Mg solvent diffusion in
hcp Mg-Ga alloys were unveiled. This was followed by mapping the compressive
creep behavior via nanoindentation along composition arrays within the same Mg-Ga couple
sample. The compressive creep resistance of Mg-Ga hcp alloys increased with the
Ga content, and this enhancement was similar to the one found in Mg-Zn alloys
and superior to the one reported in Mg-Al alloys, though Al is a slower
impurity diffuser in hcp-Mg than Zn and Ga. Therefore, the solvent diffusion
and its variation with composition, rather than the solute diffusion, was suggested
to govern the creep properties at high temperatures and low stresses.
|
Irrigation decision systems and water need models have been important
research topics in agriculture since the 1990s. They improve the efficiency of
crop yields, promote appropriate use of water and thus help prevent water
scarcity in some regions. In this paper, a comprehensive survey of water need
models depending on crop growth and of irrigation decision systems has been
conducted based on mathematical modelling. The following outcomes and
solutions are the main contributions. Crop growth models, and correspondingly
water need models, suffer from un-modeled dynamics of the environment and a
lack of sensory devices. A literature review with the latest developments in
water need models, irrigation decision systems, and applied control methods,
together with discussions, is expected to be useful for future strategies.
|
The bulk of computational approaches for modeling physical systems in
materials science derive from either analytical (i.e. physics based) or
data-driven (i.e. machine-learning based) origins. In order to combine the
strengths of these two approaches, we advance a novel machine learning approach
for solving equations of the generalized Lippmann-Schwinger (L-S) type. In this
paradigm, a given problem is converted into an equivalent L-S equation and
solved as an optimization problem, where the optimization procedure is
calibrated to the problem at hand. As part of a learning-based loop unrolling,
we use a recurrent convolutional neural network to iteratively solve the
governing equations for a field of interest. This architecture leverages the
generalizability and computational efficiency of machine learning approaches,
but also permits a physics-based interpretation. We demonstrate our learning
approach on the two-phase elastic localization problem, where it achieves
excellent accuracy on the predictions of the local (i.e., voxel-level) elastic
strains. Since numerous governing equations can be converted into an equivalent
L-S form, the proposed architecture has potential applications across a range
of multiscale materials phenomena.
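The sketch below shows the loop-unrolling pattern in miniature: a shared
convolutional update applied repeatedly to the current field estimate. The
shapes and the update block are illustrative assumptions; the actual L-S
operator and network are simplified away.

```python
# Minimal learning-based loop unrolling for a fixed-point-style update.
import torch
import torch.nn as nn

class UnrolledSolver(nn.Module):
    def __init__(self, channels: int = 16, iterations: int = 5):
        super().__init__()
        self.iterations = iterations
        self.update = nn.Sequential(           # shared (recurrent) update
            nn.Conv3d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, microstructure):
        field = torch.zeros_like(microstructure)    # initial field estimate
        for _ in range(self.iterations):
            inp = torch.cat([microstructure, field], dim=1)
            field = field + self.update(inp)        # fixed-point style step
        return field

solver = UnrolledSolver()
strain = solver(torch.randn(1, 1, 16, 16, 16))      # voxel-level prediction
```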
|
We discuss the possibility of unifying in a simple and economical manner the
Yukawa couplings of third generation fermions in a non-supersymmetric SO(10)
model with an intermediate symmetry breaking, focusing on two possible patterns
with intermediate Pati-Salam and minimal left-right groups. For this purpose,
we start with a two Higgs doublet model at the electroweak scale and assume a
minimal Yukawa sector at the high energy scales. We first enforce gauge
coupling unification at the two-loop level by including the threshold
corrections in the renormalisation group running which are generated by the
heavy fields that appear at the intermediate symmetry breaking scale. We then
study the running of the Yukawa couplings of the top quark, bottom quark and
tau lepton at two-loops in these two breaking schemes, when the appropriate
matching conditions are imposed. We find that the unification of the third
family Yukawa couplings can be achieved while retaining a viable spectrum,
provided that the ratio of the vacuum expectation values of the two Higgs
doublet fields is large, $\tan\beta \approx 60$.
|
Time series forecasting is a growing domain with diverse applications.
However, changes of the system behavior over time due to internal or external
influences are challenging. Therefore, predictions of a previously learned
forecasting model might not be useful anymore. In this paper, we present
EVent-triggered Augmented Refitting of Gaussian Process Regression for Seasonal
Data (EVARS-GPR), a novel online algorithm that is able to handle sudden shifts
in the target variable scale of seasonal data. For this purpose, EVARS-GPR
combines online change point detection with a refitting of the prediction
model using data augmentation for samples prior to a change point. Our
experiments on simulated data show that EVARS-GPR is applicable for a wide
range of output scale changes. EVARS-GPR has on average a 20.8 % lower RMSE on
different real-world datasets compared to methods with a similar computational
resource consumption. Furthermore, we show that our algorithm leads to a
six-fold reduction of the averaged runtime in relation to all comparison
partners with a periodical refitting strategy. In summary, we present a
computationally efficient online forecasting algorithm for seasonal time
series with changes of the target variable scale and demonstrate its
functionality on simulated as well as real-world data. All code is publicly
available on GitHub: https://github.com/grimmlab/evars-gpr.
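The refit step can be sketched as follows (simplified: the real EVARS-GPR
change point detector and augmentation policy are in the repository above;
here the change index and scale ratio are given, and pre-change samples are
simply rescaled before refitting).

```python
# Schematic event-triggered refit with output-scale augmentation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def refit_on_change(X, y, change_idx, scale_ratio):
    """Refit a GPR after a change point, rescaling earlier samples."""
    y_aug = y.copy()
    y_aug[:change_idx] *= scale_ratio       # augment pre-change samples
    model = GaussianProcessRegressor()
    model.fit(X, y_aug)
    return model

t = np.arange(100, dtype=float).reshape(-1, 1)
y = np.sin(t.ravel() / 5.0)
y[60:] *= 3.0                               # sudden output-scale shift
model = refit_on_change(t, y, change_idx=60, scale_ratio=3.0)
prediction = model.predict(t[-5:])
```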
|
We derive gain-tuning rules for the positive and negative spatial-feedback
loops of a spatially-distributed filter to change the resolution of its spatial
band-pass characteristic according to a wavelet zoom, while preserving
temporal stability. The filter design is inspired by the canonical spatial
feedback structure of the primary visual cortex and is motivated by
understanding attentional control of visual resolution. Besides biology, our
control-theoretical design strategy is relevant for the development of
neuromorphic multiresolution distributed sensors through the feedback
interconnection of elementary spatial transfer functions and gain tuning.
|
Neural network based Artificial Intelligence (AI) has reported increasing
scales in experiments. However, this paper raises a rarely reported stage in
such experiments, called Post-Selection, to alert the reader to several
possible protocol flaws that may result in misleading results. All AI methods
fall into two broad schools, connectionist and symbolic. Post-Selection falls
into two kinds, Post-Selection Using Validation Sets (PSUVS) and Post-Selection
Using Test Sets (PSUTS). Each kind has two types of post-selectors, machines
and humans. The connectionist school has received criticism for its "black box"
and now for its Post-Selection; but the seemingly "clean" symbolic school seems
more brittle because of its human PSUTS. This paper first presents a controversial
view: all static "big data" are non-scalable. We then analyze why
error-backprop from randomly initialized weights suffers from severe local
minima, why PSUVS lacks cross-validation, why PSUTS violates well-established
protocols, and why every paper involved should transparently report the
Post-Selection stage. To avoid future pitfalls in AI competitions, this paper
proposes new AI metrics, called developmental errors, for all networks
trained, under Three Learning Conditions: (1) an incremental learning
architecture (due to a "big data" flaw), (2) a training experience and (3) a
limited amount of computational resources. Developmental Networks avoid
Post-Selections because they automatically discover context-rules on the fly by
generating emergent Turing machines (not black boxes) that are optimal in the
sense of maximum-likelihood across lifetime, conditioned on the Three Learning
Conditions.
|
We study quantum transport in disordered systems with particle-hole symmetric
Hamiltonians. The particle-hole symmetry is spontaneously broken after
averaging with respect to disorder, and the resulting massless mode is treated
in a random-phase representation of the invariant measure of the
symmetry-group. We compute the resulting fermionic functional integral of the
average two-particle Green's function in a perturbation theory around the
diffusive limit. The results up to two-loop order show that the corrections
vanish, indicating that the diffusive quantum transport is robust. On the other
hand, the diffusion coefficient depends strongly on the particle-hole symmetric
Hamiltonian we choose to study. This reveals a connection between the
underlying microscopic theory and the classical long-scale metallic behaviour
of these systems.
|
We prove a Darboux-Jouanolou type theorem on the algebraic integrability of
polynomial differential $r$-forms over arbitrary fields ($r\geq 1$). We also
investigate Darboux's method for producing integrating factors.
|
The photoelectric conversion efficiency of a solar cell is dependent on its
temperature. When the solar radiation is incident on the photovoltaics (PV)
panel, a large portion of it is absorbed by the underlying material which
increases its internal energy leading to the generation of heat. An overheated
PV panel results in a decline in its performance which calls for an efficient
cooling mechanism that can offer an optimum output of the electrical power. In
the present numerical work, thermal management with a porous nanochannel
device capable of dissipating high heat fluxes is employed to regulate the
temperature of a commercial PV panel by integrating the device on the back face
of the panel. The spatial and temporal variation of the PV surface temperature
is obtained by solving the energy balance equation numerically. By evaluating
the steady-state PV surface temperature with and without thermal management,
the extent of cooling and the resulting enhancement in the electrical power
output is studied in detail. The nanochannels device is found to reduce the PV
surface temperature significantly with an average cooling of 31.5 °C.
Additionally, the enhancement in the electrical power output by ~33% and the
reduction in the response time to 1/8th highlight the potential of using porous
nanochannels as a thermal management device. Furthermore, the numerical method
is used to develop a universal curve which can predict the extent of PV cooling
for any generic thermal management device.
|
In the recent period, with many social platforms emerging, the relationships
among various units can be framed as positive, negative, or absent. These
units can be individuals, countries, or other entities that form the basic
structural components of a signed network. Such signed networks picture a
dynamic characteristic of the resulting graph, in which only a few
combinations of signs are admissible, which brings the structural balance
theorem into the picture. Structural balance theory affirms that signed social
networks tend to be organized so as to avoid conflictual situations,
corresponding to cycles of unstable relations. The aim of structural balance in
networks is to find proper partitions of nodes that guarantee equilibrium in
the system, so that only a few combinations of signed triangles are permitted
in the graph. Most works in this field have either explained the importance of
signed graphs or have applied the balance theorem to solve problems. Following
recent trends, with nations competing to be superior, the question of whether
WW-III (World War III) could happen comes to many minds. Our paper aims at
answering some interesting questions about a hypothetical World War III. In
this project we construct a signed graph depicting the potential participating
countries as nodes and predict the most plausible coalitions of countries that
would form during such a war. We also visually depict the number of communities
that would be formed in this war and the participating countries in each
community.
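As a small illustration of the balance condition used in such analyses, a
signed triangle is balanced exactly when the product of its edge signs is
positive; the toy graph below is invented for illustration.

```python
# Check the structural balance of every triangle in a toy signed graph.
import itertools
import networkx as nx

G = nx.Graph()
G.add_edge("A", "B", sign=+1)
G.add_edge("B", "C", sign=-1)
G.add_edge("A", "C", sign=-1)   # triangle A-B-C: (+, -, -) is balanced
G.add_edge("C", "D", sign=+1)
G.add_edge("B", "D", sign=-1)   # triangle B-C-D: (-, +, -) is balanced

for tri in itertools.combinations(G.nodes, 3):
    edges = list(itertools.combinations(tri, 2))
    if all(G.has_edge(u, v) for u, v in edges):
        product = 1
        for u, v in edges:
            product *= G[u][v]["sign"]
        print(tri, "balanced" if product > 0 else "unbalanced")
```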
|
A mixed multigraph is a multigraph which may contain both undirected and
directed edges. An orientation of a mixed multigraph $G$ is an assignment of
exactly one direction to each undirected edge of $G$. A mixed multigraph $G$
can be oriented to a strongly connected digraph if and only if $G$ is
bridgeless and strongly connected [Boesch and Tindell, Am. Math. Mon., 1980].
For each $r \in \mathbb{N}$, let $f(r)$ denote the smallest number such that
any strongly connected bridgeless mixed multigraph with radius $r$ can be
oriented to a digraph of radius at most $f(r)$. We improve the current best
upper bound of $4r^2+4r$ on $f(r)$ [Chung, Garey and Tarjan, Networks, 1985] to
$1.5 r^2 + r + 1$. Our upper bound is tight up to a multiplicative factor of
$1.5$ since, for every $r \in \mathbb{N}$, there exists an undirected bridgeless
graph of radius $r$ such that every orientation of it has radius at least $r^2
+ r$ [Chv\'atal and Thomassen, J. Comb. Theory. Ser. B., 1978]. We prove a
marginally better lower bound, $f(r) \geq r^2 + 3r + 1$, for mixed multigraphs.
While this marginal improvement does not help with asymptotic estimates, it
clears a natural suspicion that, like undirected graphs, $f(r)$ may be equal to
$r^2 + r$ even for mixed multigraphs. En route, we show that if each edge of
$G$ lies in a cycle of length at most $\eta$, then the oriented radius of $G$
is at most $1.5 r \eta$. All our proofs are constructive and lend themselves to
polynomial time algorithms.
|
Recovery of a 3D head model including the complete face and hair regions is
still a challenging problem in computer vision and graphics. In this paper, we
consider this problem using only a few multi-view portrait images as input.
Previous multi-view stereo methods, based either on optimization strategies or
deep learning techniques, struggle with low-frequency geometric structures,
producing unclear head shapes and inaccurate reconstructions in hair regions.
To tackle this problem, we propose a prior-guided implicit neural
rendering network. Specifically, we model the head geometry with a learnable
signed distance field (SDF) and optimize it via an implicit differentiable
renderer with the guidance of some human head priors, including the facial
prior knowledge, head semantic segmentation information and 2D hair orientation
maps. The utilization of these priors can improve the reconstruction accuracy
and robustness, leading to a high-quality integrated 3D head model. Extensive
ablation studies and comparisons with state-of-the-art methods demonstrate that
our method can generate high-fidelity 3D head geometries with the guidance of
these priors.
|
During the first wave of Covid-19, information decoupling could be observed in
the flow of news media content. The corollary of the content alignment within
and between news sources experienced by readers (i.e., all news transformed
into Corona-news) was that the novelty of news content went down as media
focused monotonically on the pandemic event. This all-important Covid-19 news
theme turned out to be quite persistent as the pandemic continued, resulting in
the paradoxical situation, from a news media perspective, where the same news
was repeated over and over. This information phenomenon, where novelty
decreases and persistence increases, has previously been used to track change
in news media, but in this study we specifically test the claim that the
information decoupling behavior of media can be used to reliably detect change
in news media content originating in a negative event, using a Bayesian
approach to change point detection.
|
Retrieval is a crucial stage in web search that identifies a small set of
query-relevant candidates from a billion-scale corpus. Discovering more
semantically-related candidates in the retrieval stage is very promising to
expose more high-quality results to the end users. However, building and
deploying effective retrieval models for semantic matching in a real search
engine remains a non-trivial challenge. In this paper, we describe the
retrieval system that we developed and deployed in Baidu Search. The system
exploits the recent state-of-the-art Chinese pretrained language model, namely
Enhanced Representation through kNowledge IntEgration (ERNIE), which
facilitates the system with expressive semantic matching. In particular, we
developed an ERNIE-based retrieval model, which is equipped with 1) expressive
Transformer-based semantic encoders, and 2) a comprehensive multi-stage
training paradigm. More importantly, we present a practical system workflow for
deploying the model in web-scale retrieval. Eventually, the system is fully
deployed into production, where rigorous offline and online experiments were
conducted. The results show that the system can perform high-quality candidate
retrieval, especially for those tail queries with uncommon demands. Overall,
the new retrieval system facilitated by pretrained language model (i.e., ERNIE)
can largely improve the usability and applicability of our search engine.
|
How risks are managed implicitly and explicitly at multiple levels of agile
projects has not been extensively studied and there is a need to investigate
how risk management can be used in large agile projects. This is the objective
of this exploratory study which investigates the following research question:
How does a large software/hardware development project using agile practices
manage uncertainty at project/subproject and work package levels?
|
In Chen-Cramer Crypto 2006 paper \cite{cc} algebraic geometric secret sharing
schemes were proposed such that the "Fundamental Theorem in
Information-Theoretically Secure Multiparty Computation" by Ben-Or, Goldwasser
and Wigderson \cite{BGW88} and Chaum, Cr\'{e}peau and Damg{\aa}rd \cite{CCD88}
can be established over constant-size base finite fields. These algebraic
geometric secret sharing schemes, defined by a curve of genus $g$ over a
constant-size finite field ${\bf F}_q$, are quasi-threshold in the following
sense: any subset of $u \leq T-1$ players (non-qualified) has no information on
the secret, and any subset of $u \geq T+2g$ players (qualified) can reconstruct
the secret. It is natural to ask how far from threshold these quasi-threshold
secret sharing schemes are: how many subsets of $u \in [T, T+2g-1]$ players can
recover the secret or have no information of the secret?
In this paper it is proved that almost all subsets of $u \in [T,T+g-1]$
players have no information of the secret and almost all subsets of $u \in
[T+g,T+2g-1]$ players can reconstruct the secret when the size $q$ goes to
infinity and the genus satisfies $\lim \frac{g}{\sqrt{q}}=0$. Then algebraic
geometric secret sharing schemes over large finite fields are asymptotically
threshold in this case. We also analyze the case when the size $q$ of the base
field is fixed and the genus goes to infinity.
|
In this work, we calibrate the relationship between Halpha emission and M
dwarf ages. We compile a sample of 892 M dwarfs with Halpha equivalent width
(HaEW) measurements from the literature that are either co-moving with a white
dwarf of known age (21 stars) or in a known young association (871 stars). In
this sample we identify 7 M dwarfs that are new candidate members of known
associations. By dividing the stars into active and inactive categories
according to their HaEW and spectral type (SpT), we find that the fraction of
active dwarfs decreases with increasing age, and the form of the decline
depends on SpT. Using the compiled sample of age-calibrators we find that HaEW
and fractional Halpha luminosity (LHaLbol) decrease with increasing age. HaEW
for SpT<M7 decreases gradually up until ~1Gyr. For older ages, we found only
two early M dwarfs which are both inactive and seem to continue the gradual
decrease. We also found 14 mid-type M dwarfs, of which 11 are inactive and
present a significant decrease of HaEW, suggesting that the magnetic activity
decreases rapidly after ~1Gyr. We fit LHaLbol versus age with a broken
power-law and find an index of -0.11 (+0.02/-0.01) for ages <~776Myr. The index
becomes much steeper at older ages; however, a lack of field age-calibrators
leaves this part of the
relation far less constrained. Finally, from repeated independent measurements
for the same stars we find that 94% of these have a level of HaEW variability
<=5A at young ages (<1Gyr).
|
Atomic layer deposition (ALD) provides uniform and conformal thin films that
are of interest for a range of applications. To better understand the
properties of amorphous ALD films, we need improved understanding of their
local atomic structure. Previous work demonstrated measurement of how the local
atomic structure of ALD-grown aluminum oxide (AlOx) evolves in operando during
growth by employing synchrotron high energy X-ray diffraction (HE-XRD). In this
work, we report on efforts to employ electron diffraction pair distribution
function (ePDF) measurements using more broadly available transmission electron
microscope (TEM) instrumentation to study the atomic structure of amorphous
ALD-AlOx. We observe electron beam damage in the ALD-coated samples during ePDF
at ambient temperature and successfully mitigate this beam damage using ePDF at
cryogenic temperatures (cryo-ePDF). We employ cryo-ePDF and Reverse Monte Carlo
(RMC) modeling to obtain structural models of ALD-AlOx coatings formed at a
range of deposition temperatures from 150-332{\deg}C. From these model
structures, we derive structural metrics including stoichiometry, pair
distances, and coordination environments in the ALD-AlOx films as a function of
deposition temperature. The structural variations we observe with growth
temperature are consistent with temperature-dependent changes in the surface
hydroxyl density on the growth surface. The sample preparation and cryo-ePDF
procedures we report here can be used for routine measurement of ALD-grown
amorphous thin films to improve our understanding of the atomic structure of
these materials, establish structure-property relationships, and help
accelerate the timescale for the application of ALD to address technological
needs.
|
Hermite reciprocity refers to a series of natural isomorphisms involving
compositions of symmetric, exterior, and divided powers of the standard
$SL_2$-representation. We survey several equivalent constructions of these
isomorphisms, as well as their recent applications to Green's Conjecture on
syzygies of canonical curves. The most geometric approach to Hermite
reciprocity is based on an idea of Voisin to realize certain multilinear
constructions cohomologically by working on a Hilbert scheme of points. We
explain how in the case of ${\bf P}^1$ this can be reformulated in terms of
cohomological properties of Schwarzenberger bundles. We then proceed to study
these bundles from several perspectives:
We show that their exterior powers have supernatural cohomology, arising as
special cases of a construction of Eisenbud and Schreyer.
We recover basic properties of secant varieties $\Sigma$ of rational normal
curves (normality, Cohen-Macaulayness, rational singularities) by considering
their desingularizations via Schwarzenberger bundles, and applying the
Kempf-Weyman geometric technique.
We show that Hermite reciprocity is equivalent to the self-duality of the
unique rank one Ulrich module on the affine cone $\widehat{\Sigma}$ of some
secant variety, and we explain how for a Schwarzenberger bundle of rank $k$ and
degree $d\ge k$, Hermite reciprocity can be viewed as the unique (up to
scaling) non-zero section of $(Sym^k\mathcal{E})(-d+k-1)$.
|
The design of 6th Generation (6G) wireless networks points towards flexible
connect-and-compute technologies capable of supporting innovative services and use
cases. Targeting the 2030 horizon, 6G networks are poised to pave the way for
sustainable human-centered smart societies and vertical industries, such that
wireless networks will be transformed into a distributed smart connectivity
infrastructure, where new terminal types are embedded in the daily environment.
In this context, the RISE-6G project aims at investigating innovative solutions
that capitalize on the latest advances in the emerging technology of
Reconfigurable Intelligent Surfaces (RISs), which offers dynamic and
goal-oriented radio wave propagation control, enabling the concept of the
wireless environment as a service. The project will focus on: i) the realistic
modeling of RIS-assisted signal propagation, ii) the investigation of the
fundamental limits of RIS-empowered wireless communications and sensing, and
iii) the design of efficient algorithms for orchestrating networking RISs, in
order to implement intelligent, sustainable, and dynamically programmable
wireless environments enabling diverse services that go well beyond the 5G
capabilities. RISE-6G will offer two unprecedented proof-of-concepts for
realizing controlled wireless environments in near-future use cases.
|
The quantum effective action yields equations of motion and correlation
functions including all quantum corrections. We discuss here how it also
encodes Noether currents at the full quantum level. This holds both for
covariantly conserved currents associated to real symmetries that leave the
action invariant as well as for non-conserved Noether currents associated to
extended symmetry transformations which change the action, but in a specific
way. We then discuss in particular symmetries and extended symmetries
associated to space-time geometry for relativistic quantum field theories.
These encompass local dilatations or Weyl gauge transformations, local Lorentz
transformations and local shear transformations. Together they constitute the
symmetry group of the frame bundle GL$(d)$. The corresponding non-conserved
Noether currents are the dilatation or Weyl current, the spin current and the
shear current for which divergence-type equations of motion are obtained from
the quantum effective action.
|
Questions about a text or an image that cannot be answered raise distinctive
issues for an AI. This note discusses the problem of unanswerable questions in
VQA (visual question answering), in QA (question answering), and in AI
generally.
|
We show the existence of a compact K\"ahler manifold which does not fit in a
proper flat family over an irreducible base with one projective (possibly
singular) fiber. We also give a topological version of this statement. This
strengthens our earlier counterexamples to the Kodaira algebraic approximation
problem.
|
Multiple teams participate in a random competition. In each round the winner
receives one point. We study the times until ties occur among teams. We
construct martingales and supermartingales that enable us to prove results
regarding these stopping times. The problems studied in this paper are
motivated by their applications to databases and their storage engines that are
based on augmented balanced search trees. The ties in the competitions are
related to the necessary rebalancing operations that have to be executed on the
database.
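As a hedged illustration of the setting (not the paper's martingale arguments), here is a short Monte Carlo sketch of the stopping time until two teams first tie, under the assumptions of three teams, distinct initial scores, and a uniformly random winner each round:

import random

def time_to_tie(scores, rng, max_rounds=10**6):
    scores = list(scores)
    for t in range(1, max_rounds + 1):
        scores[rng.randrange(len(scores))] += 1  # the round's winner gets one point
        if len(set(scores)) < len(scores):       # some pair of teams is now tied
            return t
    return None

rng = random.Random(0)
samples = [time_to_tie((0, 1, 2), rng) for _ in range(10_000)]
print(sum(samples) / len(samples))  # empirical mean of the stopping time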
|
This Letter reports results from the first long-baseline search for sterile
antineutrino mixing in an accelerator-based antineutrino-dominated beam. The
rate of neutral-current interactions in the two NOvA detectors, at distances of
1 km and 810 km from the beam source, is analyzed using an exposure of
$12.51\times10^{20}$ protons-on-target from the NuMI beam at Fermilab running
in antineutrino mode. A total of $121$ neutral-current candidates are
observed at the Far Detector, compared to a prediction of
$122\pm11$(stat.)$\pm15$(syst.) assuming mixing between three active flavors.
No evidence for $\bar{\nu}_{\mu}\rightarrow\bar{\nu}_{s}$ oscillation is
observed. Interpreting this result within a 3+1 model, constraints are placed
on the mixing angles ${\theta}_{24} < 25^{\circ}$ and ${\theta}_{34} <
32^{\circ}$ at the 90% C.L. for $0.05$eV$^{2} \leq \Delta m^{2}_{41} \leq
0.5$eV$^{2}$, the range of mass splittings that produces no significant
oscillations at the Near Detector. These are the first 3+1 confidence limits
set using long-baseline accelerator antineutrinos.
|
The recovery of freshly fallen meteorites from tracked and triangulated
meteors is critical to determining their source asteroid families. However,
locating meteorite fragments in strewn fields remains a challenge with very few
meteorites being recovered from the meteors triangulated in past and ongoing
meteor camera networks. We examined if locating meteorites can be automated
using machine learning and an autonomous drone. Drones can be programmed to fly
a grid search pattern and take systematic pictures of the ground over a large
survey area. Those images can be analyzed using a machine learning classifier
to identify meteorites in the field among many other features. Here, we
describe a proof-of-concept meteorite classifier that deploys off-line a
combination of different convolutional neural networks to recognize meteorites
from images taken by drones in the field. The system was implemented in a
conceptual drone setup and tested in the suspected strewn field of a recent
meteorite fall near Walker Lake, Nevada.
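A minimal sketch of this kind of pipeline (the tiny stand-in networks, tile size, and decision threshold are illustrative assumptions, not the authors' architecture): tile a drone image and average the scores of an ensemble of CNN classifiers.

import torch
import torch.nn as nn

def make_dummy_cnn():
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
    )

def meteorite_scores(image, models, tile=64):
    # image: (3, H, W) tensor; returns an ensemble-averaged probability per tile.
    scores = {}
    _, h, w = image.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = image[:, y:y + tile, x:x + tile].unsqueeze(0)
            with torch.no_grad():
                p = torch.stack([torch.sigmoid(m(patch)) for m in models]).mean()
            scores[(y, x)] = float(p)
    return scores

models = [make_dummy_cnn().eval() for _ in range(2)]
img = torch.rand(3, 256, 256)  # stand-in for a drone survey image
hits = {k: v for k, v in meteorite_scores(img, models).items() if v > 0.5}
print(len(hits), "candidate tiles")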
|
Detecting and understanding rotation in stellar interiors is nowadays one of
the unsolved problems in stellar physics. Asteroseismology has been able to
provide insights on rotation for the Sun, solar-like stars, and compact objects
like white dwarfs. However, this is still very difficult for intermediate-mass
stars. These stars are moderate-to-rapid rotators. Rotation splits and shifts
the oscillation modes, which makes the oscillation spectrum more complex and
harder to interpret. Here we study the oscillation patterns of a sample of
benchmark $\delta$~Sct stars belonging to eclipsing binary systems with the
objective of finding the frequency spacing related to the rotational splitting
($\delta r$). For this task, we combine three techniques: the Fourier
transform, the autocorrelation function, and the histogram of frequency
differences. The last two showed a similar behaviour. For most of the stars, it
was necessary to determine the large separation ($\Delta\nu$) before spotting
$\delta r$. This is the first time we can clearly state that one of the
periodicities present in the p~mode oscillation spectra of $\delta$~Sct stars
corresponds to the rotational splitting. This is true independently of the
stellar rotation rate. These promising results pave the way to find a robust
methodology to determine rotational splittings from the oscillation spectra of
$\delta$~Sct stars and, thus, to understand the rotational profile of
intermediate-mass pulsating stars.
|
We introduce a general reduction strategy that enables one to search for
solutions of parameterized linear difference equations in difference rings.
Here we assume that the ring itself can be decomposed into a direct sum of
integral domains (using idempotent elements) that enjoy certain technical
features, and that the coefficients of the difference equation are not
degenerate. Using this mechanism we can reduce the problem of finding solutions
in a ring (with zero-divisors) to searching for solutions in several copies of
integral domains. Utilizing existing solvers in this integral domain setting,
we obtain a general solver where the components of the linear difference
equations and the solutions can be taken from difference rings that are built
e.g., by $R\Pi\Sigma$-extensions over $\Pi\Sigma$-fields. This class of
difference rings contains, e.g., nested sums and products, products over roots
of unity and nested sums defined over such objects.
|
We study counting propositional logic as an extension of propositional logic
with counting quantifiers. We prove that the complexity of the underlying
decision problem perfectly matches the appropriate level of Wagner's counting
hierarchy, but also that the resulting logic admits a satisfactory
proof-theoretical treatment. From the latter, a type system for a probabilistic
lambda-calculus is derived in the spirit of the Curry-Howard correspondence,
showing the potential of counting propositional logic as a useful tool in
several fields of theoretical computer science.
|
Multi-Party Quantum Computation (MPQC) has attracted a lot of attention as a
potential killer-app for quantum networks through its ability to preserve the
privacy and integrity of the highly valuable computations they would enable.
Contributing to the latest challenges in this field, we present a composable
protocol achieving blindness and verifiability even in the case of a single
honest client. The security of our protocol is reduced, in an
information-theoretically secure way, to that of a classical composable Secure
Multi-Party Computation (SMPC) used to coordinate the various parties. Our
scheme thus provides a statistically secure upgrade of such a classical scheme to
a quantum one with the same level of security.
In addition, (i) the clients can delegate their computation to a powerful
fully fault-tolerant server and only need to perform single qubit operations to
unlock the full potential of multi-party quantum computation; (ii) the amount
of quantum communication with the server is reduced to sending quantum states
at the beginning of the computation and receiving the output states at the end,
which is optimal and removes the need for interactive quantum communication;
and (iii) it has a low constant multiplicative qubit overhead compared to the
single-client delegated protocol it is built upon.
The main technical ingredient of our paper is the bootstrapping of the MPQC
construction by Double Blind Quantum Computation, a new composable resource for
blind multiparty quantum computation, which demonstrates the surprising fact
that the full protocol does not require verifiability of all components to
achieve security.
|
CeRh$_2$As$_2$, a non-symmorphic heavy fermion material, was recently
reported to host a remarkable phase diagram with two superconducting phases. In
this material, the two inequivalent Ce sites per unit cell, related by
inversion symmetry, introduce a sublattice structure corresponding to an extra
internal degree of freedom. Here we propose a classification of the possible
superconducting states in CeRh$_2$As$_2$ from the perspective of the two Ce sites.
Based on the superconducting fitness analysis and the quasiclassical
Eilenberger equations, we discuss two limits: Rashba spin-orbit coupling and
inter-layer hopping dominated normal state. In both limits, we are able to find
two scenarios that generate phase diagrams in qualitative agreement with
experiments: i) intra-sublattice pairing with an even-odd transition under
magnetic field, and ii) inter-sublattice pairing with an odd-odd transition
under magnetic field.
|
We present a differentiable soft-body physics simulator that can be composed
with neural networks as a differentiable layer. In contrast to other
differentiable physics approaches that use explicit forward models to define
state transitions, we focus on implicit state transitions defined via function
minimization. Implicit state transitions appear in implicit numerical
integration methods, which offer the benefits of large time steps and excellent
numerical stability, but require a special treatment to achieve
differentiability due to the absence of an explicit differentiable forward
pass. In contrast to other implicit differentiation approaches that require
explicit formulas for the force function and the force Jacobian matrix, we
present an energy-based approach that allows us to compute these derivatives
automatically and in a matrix-free fashion via reverse-mode automatic
differentiation. This allows for more flexibility and productivity when
defining physical models and is particularly important in the context of neural
network training, which often relies on reverse-mode automatic differentiation
(backpropagation). We demonstrate the effectiveness of our differentiable
simulator in policy optimization for locomotion tasks and show that it achieves
better sample efficiency than model-free reinforcement learning.
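The implicit-differentiation idea can be made concrete with a small hedged sketch (assumptions: jax as the autodiff framework, a toy quadratic energy, and a plain gradient-descent inner solve; the paper's simulator uses real soft-body energies and implicit integrators). The parameter gradient is obtained matrix-free from Hessian-vector products and a conjugate-gradient solve:

import jax
import jax.numpy as jnp
from jax.scipy.sparse.linalg import cg

def energy(x_next, x_prev, theta):
    # Toy stand-in for an implicit-integration energy whose minimizer is the next state.
    return 0.5 * jnp.sum((x_next - x_prev) ** 2) + 0.5 * theta * jnp.sum(x_next ** 2)

def step(x_prev, theta, iters=100, lr=0.1):
    # Inner solve: x_next = argmin_x E(x, x_prev, theta).
    g = jax.grad(energy, argnums=0)
    x = x_prev
    for _ in range(iters):
        x = x - lr * g(x, x_prev, theta)
    return x

def d_loss_d_theta(x_prev, theta, dl_dx):
    # Implicit function theorem: dL/dtheta = -(H^{-1} dL/dx) . d(grad_x E)/dtheta,
    # computed matrix-free via Hessian-vector products and conjugate gradients.
    x_star = step(x_prev, theta)
    g = jax.grad(energy, argnums=0)
    hvp = lambda v: jax.jvp(lambda x: g(x, x_prev, theta), (x_star,), (v,))[1]
    u, _ = cg(hvp, dl_dx)
    return -jax.grad(lambda th: jnp.dot(g(x_star, x_prev, th), u))(theta)

print(d_loss_d_theta(jnp.ones(3), 0.5, jnp.ones(3)))  # matches -3/(1+theta)^2

Because only Hessian-vector products are needed, the force Jacobian never has to be written down explicitly, which is the flexibility the abstract emphasizes.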
|
Background: The early stage of defect prediction in the software development
life cycle can reduce testing effort and ensure the quality of software. Due to
the lack of historical data within the same project, Cross-Project Defect
Prediction (CPDP) has become a popular research topic among researchers. CPDP
trains classifiers on the labeled data sets of one project to predict faults
in another project. Goals: Software Defect Prediction (SDP) data sets consist
of manually designed static features, which are software metrics. In CPDP,
source and target project data divergence is the major challenge in achieving
high performance. In this paper, we propose a Generative Adversarial Network
(GAN)-based data transformation to reduce data divergence between source and
target projects. Method: We apply the Generative Adversarial Method, where the
labeled source data sets are treated as real data, while the target data sets
are treated as fake data. The discriminator measures the quality of the domain
adaptation through its loss function. Through the generator, the target data
sets adapt to the source project domain; finally, a machine learning
classifier (i.e., Naive Bayes) is applied to classify faulty modules. Results: Our result
shows that it is possible to predict defects based on the Generative
Adversarial Method. Our model performs quite well in a cross-project
environment when we choose JDT as the target data set. However, all chosen data
sets face a large class imbalance problem, which affects the performance
of our model.
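A hedged sketch of this GAN-style feature alignment (architecture sizes, optimizer settings, and the final Naive Bayes step are illustrative assumptions, not the authors' exact setup): source-project metrics act as "real" data and generated target features as "fake".

import torch
import torch.nn as nn

n_metrics = 20  # assumed number of static software metrics per module
G = nn.Sequential(nn.Linear(n_metrics, 32), nn.ReLU(), nn.Linear(32, n_metrics))
D = nn.Sequential(nn.Linear(n_metrics, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

source = torch.randn(256, n_metrics)  # stand-in for labeled source-project metrics
target = torch.randn(256, n_metrics)  # stand-in for unlabeled target-project metrics

for _ in range(200):
    # Discriminator: source metrics are "real" (1), transformed target are "fake" (0).
    loss_d = (bce(D(source), torch.ones(256, 1)) +
              bce(D(G(target).detach()), torch.zeros(256, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator: push transformed target features towards the source domain.
    loss_g = bce(D(G(target)), torch.ones(256, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# G(target) would then be classified with a Naive Bayes model fit on the source
# labels, e.g. sklearn.naive_bayes.GaussianNB.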
|
We present a number of low-resource approaches to the tasks of the Zero
Resource Speech Challenge 2021. We build on the unsupervised representations of
speech proposed by the organizers as a baseline, derived from CPC and clustered
with the k-means algorithm. We demonstrate that simple methods of refining
those representations can narrow the gap, or even improve upon the solutions
which use a high computational budget. The results lead to the conclusion that
the CPC-derived representations are still too noisy for training language
models, but stable enough for simpler forms of pattern matching and retrieval.
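A minimal sketch of the baseline pipeline the abstract builds on (the feature array, dimensions, and number of clusters are stand-in assumptions): frame-level CPC features are quantized with k-means into discrete pseudo-units.

import numpy as np
from sklearn.cluster import KMeans

cpc_features = np.random.randn(5000, 256)  # stand-in for (frames, dims) CPC output
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(cpc_features)
units = kmeans.predict(cpc_features)       # one discrete unit per frame
print(units[:20])  # the resulting pseudo-text feeds language modeling or retrieval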
|
Convolutional neural networks (CNNs), inspired by biological visual cortex
systems, are a powerful category of artificial neural networks that can extract
the hierarchical features of raw data to greatly reduce the network parametric
complexity and enhance the predicting accuracy. They are of significant
interest for machine learning tasks such as computer vision, speech
recognition, playing board games and medical diagnosis. Optical neural networks
offer the promise of dramatically accelerating computing speed to overcome the
inherent bandwidth bottleneck of electronics. Here, we demonstrate a universal
optical vector convolutional accelerator operating beyond 10 TeraOPS (TOPS:
operations per second), generating convolutions of images of 250,000 pixels
with 8 bit resolution for 10 kernels simultaneously, enough for facial image
recognition. We then use the same hardware to sequentially form a deep optical
CNN with ten output neurons, achieving successful recognition of the full set
of 10 digits on 900-pixel handwritten digit images with 88% accuracy. Our results are
based on simultaneously interleaving temporal, wavelength and spatial
dimensions enabled by an integrated microcomb source. This approach is scalable
and trainable to much more complex networks for demanding applications such as
unmanned vehicles and real-time video recognition.
|
This study investigates the structure of Arf rings. From the perspective of
ring extensions, a decomposition of integrally closed ideals is given. Using
this, we present a kind of prime ideal decomposition for such ideals in Arf
rings, and determine their structure in the case where both $R$ and the
integral closure of $R$ are local rings.
|
Energy-efficiency is a key concern for neural network applications. To
alleviate this issue, hardware acceleration using FPGAs or GPUs can provide
better energy-efficiency than general-purpose processors. However, further
improvement of the energy-efficiency of such accelerators will be extremely
beneficial, especially for deploying neural networks in power-constrained edge
computing environments. In this paper, we experimentally explore the potential
of device-level energy-efficiency techniques (e.g., supply voltage underscaling,
frequency scaling, and data quantization) for representative off-the-shelf
FPGAs compared to GPUs. Frequency scaling in both platforms can improve the
power and energy consumption, but with a performance overhead; e.g., in GPUs it
improves the power consumption and GOPs/J by up to 34% and 28%, respectively.
However, leveraging reduced-precision instructions improves power (up to 13%),
energy (up to 20%), and performance (up to 7%) simultaneously, with negligible
reduction in neural network accuracy.
|
Quantum Hall states -- the progenitors of the growing family of topological
insulators -- are a rich source of exotic quantum phases. The nature of these
states is reflected in the gapless edge modes, which in turn can be classified
as integer (carrying electrons), fractional (carrying fractional charges), and
neutral (carrying excitations with zero net charge but a well-defined amount of
heat). The latter two may obey anyonic statistics, which can be abelian or
non-abelian. The most-studied putative non-abelian state is the spin-polarized
filling factor {\nu}=5/2, whose charge e/4 quasiparticles are accompanied by
neutral modes. This filling, however, permits different possible topological
orders, which can be abelian or non-abelian. While numerical calculations favor
the non-abelian anti-Pfaffian (A-Pf) order to have the lowest energy, recent
thermal conductance measurements suggested the experimentally realized order to
be the particle-hole Pfaffian (PH-Pf) order. It has been suggested that lack of
thermal equilibration among the different edge modes of the A-Pf order can
account for this discrepancy. The identification of the topological order is
crucial for the interpretation of braiding (interference) operations, better
understanding of the thermal equilibration process, and the reliability of the
numerical studies. We developed a new method that helps identify the
topological order of the {\nu}=5/2 state. We created an interface between two
2D half-planes, one hosting the {\nu}=5/2 state and the other an integer
{\nu}=3 state; the interface supported a fractional {\nu}=1/2 charge mode with
1/2 quantum conductance and a neutral Majorana mode. The presence of the
Majorana mode, probed by measuring noise, propagating in the opposite direction
to the charge mode, asserted the presence of the PH-Pf order but not that of
the A-Pf order.
|
We present a novel method for graph partitioning, based on reinforcement
learning and graph convolutional neural networks. Our approach is to
recursively partition coarser representations of a given graph. The neural
network is implemented using SAGE graph convolution layers, and trained using
an advantage actor critic (A2C) agent. We present two variants, one for finding
an edge separator that minimizes the normalized cut or quotient cut, and one
that finds a small vertex separator. The vertex separators are then used to
construct a nested dissection ordering to permute a sparse matrix so that its
triangular factorization will incur less fill-in. The partitioning quality is
compared with partitions obtained using METIS and SCOTCH, and the nested
dissection ordering is evaluated in the sparse solver SuperLU. Our results show
that the proposed method achieves similar partitioning quality as METIS and
SCOTCH. Furthermore, the method generalizes across different classes of graphs,
and works well on a variety of graphs from the SuiteSparse sparse matrix
collection.
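For intuition, a hedged sketch of recursive bisection for a nested dissection ordering, using a spectral (Fiedler-vector) separator in place of the paper's learned agent; the toy dense-Laplacian eigensolver and path graph are illustrative assumptions.

import numpy as np

def fiedler_split(adj, nodes):
    # Split `nodes` by the sign of the Fiedler vector of the induced subgraph.
    sub = adj[np.ix_(nodes, nodes)]
    lap = np.diag(sub.sum(axis=1)) - sub
    _, vecs = np.linalg.eigh(lap)
    f = vecs[:, 1]  # second-smallest eigenvector of the graph Laplacian
    return nodes[f < 0], nodes[f >= 0]

def nested_dissection(adj, nodes, min_size=4):
    if len(nodes) <= min_size:
        return list(nodes)
    left, right = fiedler_split(adj, nodes)
    if len(left) == 0 or len(right) == 0:  # degenerate split: stop recursing
        return list(nodes)
    # Vertex separator: nodes of `left` with at least one neighbour in `right`.
    sep_mask = adj[np.ix_(left, right)].sum(axis=1) > 0
    sep, interior = left[sep_mask], left[~sep_mask]
    return (nested_dissection(adj, interior) +
            nested_dissection(adj, right) + list(sep))  # separator ordered last

# Toy example: path graph on 16 nodes.
n = 16
adj = np.zeros((n, n))
idx = np.arange(n - 1)
adj[idx, idx + 1] = adj[idx + 1, idx] = 1
print(nested_dissection(adj, np.arange(n)))

Ordering the separator vertices last is what limits fill-in when the permuted sparse matrix is factorized.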
|
Given a transitive DG-Lie algebroid $(\mathcal{A}, \rho)$ over a smooth
separated scheme $X$ of finite type over a field $\mathbb{K}$ of characteristic
$0$ we define a notion of connection $\nabla \colon
\mathbf{R}\Gamma(X,\mathrm{Ker} \rho) \to \mathbf{R}\Gamma
(X,\Omega_X^1[-1]\otimes \mathrm{Ker} \rho)$ and construct an $L_\infty$
morphism between DG-Lie algebras $f \colon \mathbf{R}\Gamma(X, \mathrm{Ker}
\rho) \rightsquigarrow\mathbf{R}\Gamma(X, \Omega_X^{\leq 1} [2])$ associated to
a connection and to a cyclic form on the DG-Lie algebroid. In this way, we
obtain a lifting of the first component of the modified Buchweitz-Flenner
semiregularity map in the algebraic context, which has an application to the
deformation theory of coherent sheaves on $X$ admitting a finite locally free
resolution. Another application is to the deformations of (Zariski) principal
bundles on $X$.
|
During the global spread of COVID-19, Japan has been among the top countries
to maintain a relatively low number of infections, despite implementing limited
institutional interventions. Using a Tokyo Metropolitan dataset, this study
investigated how these limited intervention policies have affected public
health and economic conditions in the COVID-19 context. A causal loop analysis
suggested that there were risks to prematurely terminating such interventions.
On the basis of this result and subsequent quantitative modelling, we found
that a pre-emptive stay-at-home request was effective only in the short term
and its termination was followed by a resurgence in the number of positive
cases, whereas an additional request had only a limited negative add-on effect
on economic measures (e.g. the number of electronic word-of-mouth (eWOM)
communications and restaurant visits). These findings suggest the superiority of a mild and
continuous intervention as a long-term countermeasure under epidemic pressures
when compared to strong intermittent interventions.
|
We characterize the membership of Hankel operators with general symbols in
the Schatten Classes $S^p,\, p\in(0,1),$ of the large Bergman spaces
$A^2_{\omega}$. The case $p\geq 1$ was proved by Lin and Rochberg.
|
A model for ripple formation on liquid surfaces exposed to an external laser
or particle beam and a variable ground is developed. The external incident beam
is hereby mechanically coupled to the liquid surface due to surface roughness.
Starting from the Navier-Stokes equation, the coupled equations for the velocity
potential and the surface height are derived in a shallow-water approximation
with special attention to viscosity. The resulting equations obey conservation
laws for volume and momentum where characteristic potentials for gravitation
and surface tension are identified analogously to conservative forces. The
approximate solutions are discussed in the context of ripple formation in laser
materials processing involving melting of a surface by a laser beam. Linear
stability analysis provides the formation of a damped wave modified by an
interplay between the external beam, the viscosity, and the surface tension.
The limit of small viscosity leads to damped gravitational waves and the limit
of high viscosity to capillary waves. The resulting wavelengths are of the
order of those of the ripples occurring in laser welding experiments, hinting
at the involvement of hydrodynamic processes in their origin. By discussing the
response of the system to external periodic excitations with the help of
Floquet multipliers, we show that the ripple formation could be triggered by a
periodically modulated external beam, e.g. by appropriate repetition rates of an
incident laser beam. The weak nonlinear stability analysis provides ranges
where hexagonal or stripe structures can appear. The orientation of stripe
structures and ripples are shown to be dependent on the incident angle of the
laser or particle beam, for which a minimal angle is reported. Numerical
simulations confirm the findings and allow us to describe the influence of
variable grounds.
|
Physically-motivated and mathematically robust atom-centred representations
of molecular structures are key to the success of modern atomistic machine
learning (ML) methods. They lie at the foundation of a wide range of methods to
predict the properties of both materials and molecules as well as to explore
and visualize the chemical compound and configuration space. Recently, it has
become clear that many of the most effective representations share a
fundamental formal connection: that they can all be expressed as a
discretization of N-body correlation functions of the local atom density,
suggesting the opportunity of standardizing and, more importantly, optimizing
the calculation of such representations. We present an implementation, named
librascal, whose modular design lends itself both to developing refinements to
the density-based formalism and to rapid prototyping for new developments of
rotationally equivariant atomistic representations. As an example, we discuss
SOAP features, perhaps the most widely used member of this family of
representations, to show how the expansion of the local density can be
optimized for any choice of radial basis set. We discuss the representation in
the context of a kernel ridge regression model, commonly used with SOAP
features, and analyze how the computational effort scales for each of the
individual steps of the calculation. By applying data reduction techniques in
feature space, we show how to further reduce the total computational cost by
up to a factor of 4 or 5 without affecting the model's symmetry properties and
without significantly impacting its accuracy.
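A minimal sketch of the kernel ridge regression setting mentioned above (stand-in arrays replace actual SOAP feature vectors computed by librascal; the feature dimension, kernel choice, degree, and regularization are all assumptions):

import numpy as np
from sklearn.kernel_ridge import KernelRidge

X_train = np.random.randn(100, 300)  # stand-in for per-structure SOAP features
y_train = np.random.randn(100)       # stand-in for reference energies
X_test = np.random.randn(10, 300)

# A low-degree polynomial kernel on invariant features is a common choice for
# SOAP-based models; the hyperparameters here are illustrative only.
model = KernelRidge(kernel="poly", degree=2, alpha=1e-8)
model.fit(X_train, y_train)
print(model.predict(X_test))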
|
Modeling of work systems occurs for all sorts of reasons. Requirements need
to be expressed. A pre-existing situation may need to be charted and analyzed.
Early design decisions may be captured using architecture principles. Detailed
design may be worked out. We all regard these activities as essentially being
forms of modeling. In the work systems modeling library, we consider work
system engineering from a modeling perspective. In the field of work system
engineering, a whole plethora of modeling methods is available to system
engineers and architects. Each of these methods can be used to model some
(aspects) of a domain related to an existing and/or a planned work system. The
aspects may refer to requirements, architecture, design, processing, data,
etc. In other words, these methods are essentially all intended to model
different aspects of work systems and/or their context. The aim of the work
systems modeling library (WSML) is to bring together methodical knowledge
concerning the modeling of work systems.
|
We present a deep radio-polarimetric observation of the stellar bow shock
EB27 associated to the massive star BD+43 3654. This is the only stellar bow
shock confirmed to have non-thermal radio emission. We used the Jansky Very
Large Array in S band (2--4 GHz) to test whether this synchrotron emission is
polarised. The unprecedented sensitivity achieved allowed us to map even the
fainter regions of the bow shock, revealing that the more diffuse emission is
steeper and the bow shock brighter than previously reported. No linear
polarisation is detected in the bow shock above 0.5%, although we detected
polarised emission from two southern sources, probably extragalactic in nature.
We modeled the intensity and morphology of the radio emission to better
constrain the magnetic field and injected power in relativistic electrons.
Finally, we derived a set of more precise parameters for the system EB27-BD+43
3654 using Gaia Early Data Release 3, including the spatial velocity. The new
trajectory, back in time, intersects the core of the Cyg OB2 association.
|
A debate is emerging regarding the recent inconsistent results of different
studies for the Cosmic Star Formation Rate Density (CSFRD) at high-z. We employ
UV and IR datasets to investigate the star formation rate function (SFRF) at
${\rm z \sim 0-9}$. We find that the SFRFs derived from the dust corrected
${\rm UV}$ (${\rm UV_{corr}}$) data contradict those from IR on some key issues
since they are described by different distributions (Schechter vs double-power
law), imply different physics for galaxy formation (${\rm UV_{corr}}$ data
suggest a SFR limit/strong mechanism that diminish the number density of high
star forming systems with respect IR) and compare differently with the stellar
mass density evolution obtained from SED fitting (${\rm UV_{corr}}$ is in
agreement, while IR in tension up to 0.5 dex). However, both tracers agree on a
constant CSFRD evolution at ${\rm z \sim 1-4}$ and point to a plateau instead
of a peak. In addition, using both indicators we demonstrate that the evolution
of the {\it observed} CSFRD can be described by only {\bf 2} parameters and a
function that has the form of a Gamma distribution (${\bf \Gamma(a,bt)}$). In
contrast to previous parameterizations used in the literature our framework
connects the parameters to physical properties like the star formation rate
depletion time and cosmic baryonic gas density. The build-up of stellar mass
occurs in $\Gamma(a,bt)$-distributed steps and is the result of gas consumption
up to the limit that there is no eligible gas for SF at $t = {\rm \infty}$,
resulting in a final cosmic stellar mass density of $\sim 0.5 \times 10^9 \,
{\rm \frac{M_{\odot}}{Mpc^3}}$.
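To make the two-parameter form concrete, the following is a sketch of such a parameterization (an assumption about notation: $\gamma(a,bt)$ denotes the lower incomplete gamma function, normalized by $\Gamma(a)$, and $\rho_\infty$ the final stellar mass density quoted above):
\[
\rho_\star(t) = \rho_\infty \, \frac{\gamma(a,\,bt)}{\Gamma(a)}, \qquad
{\rm CSFRD}(t) = \frac{d\rho_\star}{dt} = \rho_\infty \, \frac{b\,(bt)^{a-1} e^{-bt}}{\Gamma(a)},
\]
i.e. a Gamma-distribution-shaped star formation history whose rise, plateau, and decline are controlled solely by $a$ and $b$.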
|
We investigate the collective decay dynamics of atoms with a generic
multilevel structure (angular momenta $F\leftrightarrow F'$) coupled to two
light modes of different polarization inside a cavity. In contrast to two-level
atoms, we find that multilevel atoms can harbour eigenstates that are perfectly
dark to cavity decay even within the subspace of permutationally symmetric
states (collective Dicke manifold). The dark states arise from destructive
interference between different internal transitions and are shown to be
entangled. Remarkably, the superradiant decay of multilevel atoms can end up
stuck in one of these dark states, where a macroscopic fraction of the atoms
remains excited. This opens the door to the preparation of entangled dark
states of matter through collective dissipation useful for quantum sensing and
quantum simulation. Our predictions should be readily observable in current
optical cavity experiments with alkaline-earth atoms or Raman-dressed
transitions.
|
A logical function can be used to characterize a property of a state of a
Boolean network (BN), which is considered as an aggregation of states. To
describe the dynamics of a set of logical functions, which characterize the
properties of interest of a BN, the invariant subspace containing the set of
logical functions is proposed, and its properties are investigated. The
invariant subspace of a Boolean control network (BCN) is then also proposed;
the dynamics of the invariant subspace of a BCN are likewise invariant.
Finally, using the outputs as the set of logical functions, the minimum
realization of a BCN is proposed, which provides a possible solution to
overcome the computational complexity of large-scale BNs/BCNs.
|
We show that the BFV quantization scheme can be implemented in the
nonprojectable 2+1 Horava theory. This opens the possibility of imposing more
general gauge conditions in the quantization of this theory. The BFV
quantization is based on the canonical formalism, which is suitable to
incorporate the measure associated to the second-class constraints that the
theory has. Special features of the Hamiltonian density and the matrix of
second-class constraints allow that the system be involutive in terms of Dirac
brackets, which is a nontrivial requisite for implementing the BFV formalism.
We present the BRST symmetry transformations in the canonical variables. The
theory is of rank one, in the classification introduced by Fradkin and
Fradkina. The gauge-fixing conditions of the BFV formalism originally called
relativistic can be implemented in the nonprojectable Horava theory, extended to
nonrelativistic forms.
the projectable theory can be included among these gauges.
|
Cell instance segmentation in fluorescence microscopy images is becoming
essential for cancer dynamics and prognosis. Data extracted from cancer
dynamics allows us to understand and accurately model different metabolic
processes such as proliferation. This enables customized and more precise
cancer treatments. However, accurate cell instance segmentation, necessary for
further cell tracking and behavior analysis, is still challenging in scenarios
with high cell concentration and overlapping edges. Within this framework, we
propose a novel cell instance segmentation approach based on the well-known
U-Net architecture. To enforce the learning of morphological information per
pixel, a deep distance transformer (DDT) acts as a backbone model. The DDT
output is subsequently used to train a top-model. The following top-models are
considered: a three-class (\emph{i.e.,} foreground, background and cell border)
U-Net, and a watershed transform. The obtained results suggest a performance
boost over traditional U-Net architectures. This opens an interesting research
line around the idea of injecting morphological information into a fully
convolutional model.
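A hedged sketch of a watershed top-model over a predicted distance map, in the spirit of the abstract (a synthetic map of two overlapping blobs stands in for the DDT output, and the peak and threshold settings are assumptions):

import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic "predicted distance map": two overlapping circular cells.
yy, xx = np.mgrid[0:100, 0:100]
dist = np.maximum(30 - np.hypot(yy - 40, xx - 40), 30 - np.hypot(yy - 60, xx - 65))
dist = np.clip(dist, 0, None)

mask = dist > 0  # predicted foreground
peaks = peak_local_max(dist, min_distance=10, labels=mask.astype(int))
markers = np.zeros(dist.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-dist, markers, mask=mask)  # one label per cell instance
print(np.unique(labels))

Seeding the watershed from distance-map maxima is what lets touching cells with overlapping edges be split into separate instances.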
|
This paper reports on the second "Throughput" phase of the Tracking Machine
Learning (TrackML) challenge on the Codalab platform. As in the first
"Accuracy" phase, the participants had to solve a difficult experimental
problem linked to tracking accurately the trajectory of particles as e.g.
created at the Large Hadron Collider (LHC): given O($10^5$) points, the
participants had to connect them into O($10^4$) individual groups that
represent the particle trajectories which are approximated helical. While in
the first phase only the accuracy mattered, the goal of this second phase was a
compromise between the accuracy and the speed of inference. Both were measured
on the Codalab platform where the participants had to upload their software.
The best three participants had solutions with good accuracy and a speed an
order of magnitude faster than the state of the art when the challenge was
designed. Although the core algorithms were less diverse than in the first
phase, a diversity of techniques has been used; these are described in this
paper. The performance of the algorithms is analysed in depth and lessons are
derived.
|
The population behavior of the quantum levels of two coupled flux qubits is
studied as a function of the external driving field characteristics. Explicit
expressions for the multiphoton transition probabilities at an arbitrary
control field amplitude are obtained for the case of small tunnel splitting
energies. We describe the controllable features of their formation, and thereby
of creating or destroying entanglement, by tuning the system bias on the direct
inter-level transition and during the transition through intermediate states.
We find a feature of qubit population inversion that results in the positions
of the resonances being independent of the qubit coupling strength. Using the
Floquet--Markov equation we numerically demonstrate that the positions of
multiphoton resonances are stable against dissipative processes.
|
Investigating the problem of setting control limits in the case of parameter
uncertainty is more accessible when monitoring the variance because only one
parameter has to be estimated. Simply ignoring the induced uncertainty
frequently leads to control charts with poor false alarm performances.
Adjusting the unconditional in-control (IC) average run length (ARL) makes the
situation even worse. Guaranteeing a minimum conditional IC ARL with some given
probability is another very popular approach to solving these difficulties.
However, it is very conservative as well as more complex and more difficult to
communicate. We utilize the probability of a false alarm within the planned
number of points to be plotted on the control chart. It turns out that
adjusting this probability produces notably different limit adjustments
compared to controlling the unconditional IC ARL. We then develop numerical
algorithms to determine the respective modifications of the upper and two-sided
exponentially weighted moving average (EWMA) charts based on the sample
variance for normally distributed data. These algorithms are made available
within an R package. Finally, the impacts of the EWMA smoothing constant and
the size of the preliminary sample on the control chart design and its
performance are studied.
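A hedged sketch of the adjustment criterion just described, simplified to the known-parameter case (the smoothing constant, sample size, limits, and planned horizon are illustrative assumptions): Monte Carlo estimation of the probability of a false alarm within the planned number of plotted points for an upper EWMA chart of the sample variance.

import numpy as np

def false_alarm_prob(limit, lam=0.1, n=5, n_points=100, reps=20_000, seed=0):
    rng = np.random.default_rng(seed)
    alarms = 0
    for _ in range(reps):
        z = 1.0  # EWMA statistic of S^2, in-control variance normalized to 1
        for _ in range(n_points):
            s2 = rng.chisquare(n - 1) / (n - 1)  # in-control sample variance
            z = (1 - lam) * z + lam * s2
            if z > limit:  # upper control limit crossed: false alarm
                alarms += 1
                break
    return alarms / reps

# Widen the limit until the within-horizon false alarm probability is acceptable.
for ucl in (1.3, 1.4, 1.5):
    print(ucl, false_alarm_prob(ucl))

The full procedure in the paper additionally accounts for the variance being estimated from a preliminary sample, which this sketch omits.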
|
Four stars pulsating simultaneously with a dominant period
$P_D$$\in$(0.28,0.39) d and an {\it additional} period $P_A$$\in$(0.20,0.27) d
have been identified from among the more than 3000 RR Lyrae stars observed by
the Kepler space telescope during NASA's K2 Mission. All four stars are located
in the direction of the Galactic Bulge and have period ratios, $P_A$/$P_D$,
significantly smaller than those of most double-mode RR Lyrae (RRd) stars:
$P_A$/$P_D$$\in$(0.694,0.710) vs. $P_1$/$P_0$$\in$(0.726,0.748). Three of the
stars are faint ($\langle V\rangle$ = 18--20 mag) and distant and are among the `peculiar'
RRd (pRRd) stars discovered by Prudil et al. (2017); the fourth star, EPIC
216764000 (=V1125 Sgr), is a newly discovered pRRd star several magnitudes
brighter than the other three stars. In this paper the high-precision
long-cadence K2 photometry is analyzed in detail and used to study the
cycle-to-cycle light variations. The pulsational characteristics of pRRd stars
are compared with those of `classical' and `anomalous' RRd (cRRd, aRRd) stars.
The conclusion by Prudil et al. that pRRd stars form a separate group of
double-mode pulsators and are not simply very-short-period cRRd stars is
confirmed. V1127 Aql and AH Cam are identified as other probable members of the
class of pRRd stars.
|
Recent investigations into the inner-workings of state-of-the-art large-scale
pre-trained Transformer-based Natural Language Understanding (NLU) models
indicate that they appear to know humanlike syntax, at least to some extent. We
provide novel evidence that complicates this claim: we find that
state-of-the-art Natural Language Inference (NLI) models assign the same labels
to permuted examples as they do to the original, i.e. they are largely
invariant to random word-order permutations. This behavior notably differs from
that of humans, who struggle with ungrammatical sentences.
severity of this issue, we propose a suite of metrics and investigate which
properties of particular permutations lead models to be word-order invariant.
In the MNLI dataset, for example, we find almost all (98.7%) examples contain
at least one permutation which elicits the gold label. Models are sometimes
even able to assign gold labels to permutations that they originally failed to
predict correctly. We provide a comprehensive empirical evaluation of this
phenomenon, and further show that this issue exists for both Transformers and
pre-Transformer RNN / ConvNet based encoders, as well as across multiple
languages (English and Mandarin Chinese). Our code and data are available at
https://github.com/facebookresearch/unlu.
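A minimal sketch of the kind of permutation probe described above (not the paper's exact metrics): count the examples with at least one word-order permutation that still elicits the gold label. model_predict is a hypothetical stand-in for an NLI model; here it is a trivial stub so the sketch runs.

import random

def model_predict(premise, hypothesis):
    return "entailment"  # stub; a real probe would call an NLI model here

def has_label_preserving_permutation(premise, hypothesis, gold, n_perms=20, seed=0):
    rng = random.Random(seed)
    tokens = hypothesis.split()
    for _ in range(n_perms):
        perm = tokens[:]
        rng.shuffle(perm)  # random word-order permutation of the hypothesis
        if model_predict(premise, " ".join(perm)) == gold:
            return True
    return False

examples = [("A man is sleeping.", "A person rests.", "entailment")]
hits = sum(has_label_preserving_permutation(p, h, g) for p, h, g in examples)
print(f"{hits}/{len(examples)} examples have a gold-label permutation")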
|
The coexistence of ferroelectric and topological orders in two-dimensional
(2D) atomic crystals allows non-volatile and switchable quantum spin Hall
states. Here we offer a general design principle for 2D bilayer
heterostructures that can host ferroelectricity and nontrivial band topology
simultaneously using only topologically trivial building blocks. The built-in
electric field arising from the out-of-plane polarization across the
heterostructure enables robust control of the band gap size and band
inversion strength, which can be utilized to manipulate topological phase
transitions. Using first-principles calculations, we demonstrate a series of
bilayer heterostructures are 2D ferroelectric topological insulators (2DFETIs)
characterized with a direct coupling between band topology and polarization
state. We propose a few 2DFETI-based quantum electronics including domain-wall
quantum circuits and topological memristor.
|
Motivated by their success in the single-objective domain, we propose a very
simple linear programming-based matheuristic for tri-objective binary integer
programming. To tackle the problem, we obtain lower bound sets by means of the
vector linear programming solver Bensolve. Then, simple heuristic approaches,
such as rounding and path relinking, are applied to this lower bound set to
obtain high-quality approximations of the optimal set of trade-off solutions.
The proposed algorithm is compared to a recently suggested algorithm which is,
to the best of our knowledge, the only existing matheuristic method for
tri-objective integer programming. Computational experiments show that our
method produces a better approximation of the true Pareto front using
significantly less time than the benchmark method on standard benchmark
instances for the three-objective knapsack problem.
|
The incompressible Navier-Stokes equations are solved in a channel, using a
Discontinuous Galerkin method over staggered grids. The resulting linear
systems are studied both in terms of the structure and in terms of the spectral
features of the related coefficient matrices. In fact, the resulting matrices
are of block type, each block showing Toeplitz-like, band, and tensor structure
at the same time. Using this rich matrix-theoretic information and the
Toeplitz and Generalized Locally Toeplitz technology, a quite complete spectral
analysis is presented, with the target of designing and analyzing fast
iterative solvers for the associated large linear systems. Quite promising
numerical results are presented, commented, and critically discussed for
elongated two- and three-dimensional geometries.
|
The study examined the participation of female students of South Eastern
Nigerian tertiary institutions in Information and Communication Technologies
(ICTs). The study discussed the attendant gender divide in ICTs participation,
reasons for low female participation in ICT, consequences of not bridging the
divide and ways of encouraging female participation in ICT. A structured
questionnaire was used to elicit information from respondents. A multi-stage
random sampling technique was used in the selection of respondents. One hundred
and thirty-six (136) undergraduate female students of tertiary institutions in
South Eastern Nigeria constituted the study sample. The data collected were analysed
using descriptive statistics. Findings suggest that high cost of ICT and high
level of male dominance, which made females think that ICT is for males were
the major reasons for low female participation in ICT. Reducing the cost of
information technology and parental involvement in their children's choice of
course of study were suggested as ways to encourage female participation in
Information and Communication Technologies.
|
This paper introduces RyanSpeech, a new speech corpus for research on
automated text-to-speech (TTS) systems. Publicly available TTS corpora are
often noisy, recorded with multiple speakers, or lack quality male speech data.
In order to meet the need for a high quality, publicly available male speech
corpus within the field of speech recognition, we have designed and created
RyanSpeech which contains textual materials from real-world conversational
settings. These materials contain over 10 hours of a professional male voice
actor's speech recorded at 44.1 kHz. This corpus's design and pipeline make
RyanSpeech ideal for developing TTS systems in real-world applications. To
provide a baseline for future research, protocols, and benchmarks, we trained 4
state-of-the-art speech models and a vocoder on RyanSpeech. The results show a
mean opinion score (MOS) of 3.36 for our best model. We have made both the
corpus and the trained models available for public use.
|
In this paper the growth of a 16 mm diameter Ce-doped Tl$_2$NaYCl$_6$ crystal
and its characterization as a gamma-ray detector are reported. With a 16 mm
dia. x 8 mm cylindrical sample, an energy resolution of 4.1% (FWHM) at 662 keV
and a light yield of 27,800 ph/MeV are measured. Decay times of 91 ns (34%), 462 ns (52%), and 2.1
microseconds (15%) are calculated. The x-ray excited emission spectrum exhibits
bands that are similar to other Tl-based elpasolite scintillators like
Tl$_2$NaYCl$_6$:Ce.
|
Intelligent reflecting surface (IRS) is a novel burgeoning concept, which
possesses advantages in enhancing wireless communication and user localization,
while maintaining low hardware cost and energy consumption. Herein, we
establish an IRS-aided mmWave-MIMO based joint localization and communication
system (IMM-JLCS), and probe into its performance evaluation and optimization
design. Specifically, first, we provide the signal, channel and estimation
error models, and contrive the working process of the IMM-JLCS in detail. Then,
by configuring appropriate IRS phase shifts, we derive the closed-form
expressions of the Cramer-Rao Lower Bound (CRLB) of the position/orientation
estimation errors and the effective achievable data rate (EADR), with respect
to the time allocation ratio of the beam alignment and localization stage
(BALS). Subsequently, we investigate the trade-off between the two performance
metrics, for which we propose a joint optimization algorithm. Finally, we carry
out simulations and comparisons to view the trade-off and validate the
effectiveness of the proposed algorithm, in the presence of distinct levels of
estimation uncertainty and user mobility. Our results demonstrate that the
proposed algorithm can find the joint optimal solution for the
position/orientation estimation accuracy and EADR, with its optimization
performance being robust to slight localization or channel estimation errors
and user mobility.
|
In this paper we present an extended variant of low rank parity check
(LRPC) codes, which have received significant interest in recent years. It is
shown that the extension indeed yields a superfamily of LRPC codes, which are
termed low row rank parity check codes. The decoding method of the proposed
codes is also investigated.
|
Many facts come with an expiration date, from the name of the President to
the basketball team Lebron James plays for. But language models (LMs) are
trained on snapshots of data collected at a specific moment in time, and this
can limit their utility, especially in the closed-book setting where the
pretraining corpus must contain the facts the model should memorize. We
introduce a diagnostic dataset aimed at probing LMs for factual knowledge that
changes over time and highlight problems with LMs at either end of the spectrum
-- those trained on specific slices of temporal data, as well as those trained
on a wide range of temporal data. To mitigate these problems, we propose a
simple technique for jointly modeling text with its timestamp. This improves
memorization of seen facts from the training time period, as well as
calibration on predictions about unseen facts from future time periods. We also
show that models trained with temporal context can be efficiently ``refreshed''
as new data arrives, without the need for retraining from scratch.
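A minimal sketch of the "jointly modeling text with its timestamp" idea (the prefix format and yearly bucketing are illustrative assumptions, not the paper's exact scheme): prepend each training example with its time context so the LM can condition on it.

def add_temporal_context(text: str, year: int) -> str:
    # Prepend the timestamp as plain tokens the LM can attend to.
    return f"year: {year} text: {text}"

train_docs = [
    ("Lebron James plays for the Cavaliers.", 2016),
    ("Lebron James plays for the Lakers.", 2019),
]
for text, year in train_docs:
    print(add_temporal_context(text, year))

# At inference time the same prefix ("year: 2021 text: ...") steers the model
# towards facts valid in the requested period; "refreshing" the model then
# amounts to continuing training on newly timestamped data.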
|
Next-generation space observatories will conduct the first systematic surveys
of terrestrial exoplanet atmospheres and search for evidence of life beyond
Earth. While in-depth observations of the nearest habitable worlds may yield
enticing results, there are fundamental questions about planetary habitability
and evolution which can only be answered through population-level studies of
dozens to hundreds of terrestrial planets. To determine the requirements for
next-generation observatories to address these questions, we have developed
Bioverse. Bioverse combines existing knowledge of exoplanet statistics with a
survey simulation and hypothesis testing framework to determine whether
proposed space-based direct imaging and transit spectroscopy surveys will be
capable of detecting various hypothetical statistical relationships between the
properties of terrestrial exoplanets. Following a description of the code, we
apply Bioverse to determine whether an ambitious direct imaging or transit
survey would be able to determine the extent of the circumstellar habitable
zone and study the evolution of Earth-like planets. Given recent evidence that
Earth-sized habitable zone planets are likely much rarer than previously
believed (Pascucci et al. 2019), we find that space missions with large search
volumes will be necessary to study the population of terrestrial and habitable
worlds. Moving forward, Bioverse provides a methodology for performing trade
studies of future observatory concepts to maximize their ability to address
population-level questions, including and beyond the specific examples explored
here.
|
We study a model of interacting neurons. The structure of this neural system
is composed of two layers of neurons such that the neurons of the first layer
send their spikes to the neurons of the second one: if $N$ is the number of
neurons of the first layer, at each spiking time of the first layer, every
neuron of both layers receives an amount of potential of the form $U/\sqrt{N},$
where $U$ is a centered random variable. This kind of structure of neurons can
model a part of the structure of the visual cortex: the first layer represents
the primary visual cortex V1 and the second one the visual area V2. In the
model, we study the "averaged effect" of the neurons of the first layer on a
single neuron of the second layer. The theoretical model consists in two
stochastic processes, one modelling the membrane potential of the neurons of
the first layer, and the other the membrane potential of the particular neuron
of the second layer. We prove the convergence of these processes as the number
of neurons~$N$ goes to infinity and obtain a convergence speed. The proofs rely
on arguments similar to those used in [Erny, L\"ocherbach, Loukianova (2022)]:
the convergence speed of the semigroups of the processes is obtained from the
convergence speed of their infinitesimal generators using a Trotter-Kato
formula, and from the regularity of the limit semigroup. Contrarily to the
situation in [Erny, L\"ocherbach, Loukianova (2022)], the stochastic flow of
the limit process is not continuous, and we need to use a Girsanov theorem for
jump processes to recover the regularity of the limit semigroup from the
regularity of the stochastic flow of an auxiliary process.
|
The total atomization energy of a molecule is the thermochemical cognate of
the heat of formation in the gas phase, its most fundamental thermochemical
property. We decompose it into different components and provide a survey of
them. It emerges that the connected triple excitations contribution is the
third most important one, about an order of magnitude less important than the
"big two" contributions (mean-field Hartree-Fock and valence CCSD correlation),
but 1-2 orders of magnitude more important than the remainder. For the 200
total atomization energies of small molecules in the W4-17 benchmark, we have
investigated the basis set convergence of the connected triple excitations
contribution (T). Achieving basis set convergence for the valence triple
excitations energy is much easier than for the valence singles and doubles
correlation energy. Using reference data obtained from spdfghi and spdfghik
basis sets, we show that extrapolation from quintuple-zeta and sextuple-zeta
yields values within about 0.004 kcal/mol RMS. Convergence to within about 0.01
kcal/mol is achievable with quadruple- and quintuple-zeta basis sets, and to
within about 0.05 kcal/mol with triple- and quadruple-zeta basis sets. It
appears that radial flexibility in the basis set is more important here than
adding angular momenta L: apparently, replacing nZaPa basis sets with
truncations of 7ZaPa at L=n gains about one angular momentum for small values
of n. We end the article with a brief outlook for the future of accurate
electronic structure calculations.
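As a worked illustration of the two-point extrapolation mentioned above (a sketch assuming the common inverse-cube convergence in the highest angular momentum $L$; the abstract does not state the exponent actually used):
\[
E_{(T)}^{\infty} \approx \frac{L^{3}\, E_{(T)}(L) - (L-1)^{3}\, E_{(T)}(L-1)}{L^{3} - (L-1)^{3}},
\]
so the quintuple/sextuple-zeta pair corresponds to $L=6$, giving $E_{(T)}^{\infty} \approx (216\,E_6 - 125\,E_5)/91$.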
|
Verifying integrity of software execution in low-end micro-controller units
(MCUs) is a well-known open problem. The central challenge is how to securely
detect software exploits with minimal overhead, since these MCUs are designed
for low cost, low energy and small size. Some recent work yielded inexpensive
hardware/software co-designs for remotely verifying code and execution
integrity. In particular, a means of detecting unauthorized code modifications
and control-flow attacks were proposed, referred to as Remote Attestation (RA)
and Control-Flow Attestation (CFA), respectively. Despite this progress,
detection of data-only attacks remains elusive. Such attacks exploit software
vulnerabilities to corrupt intermediate computation results stored in data
memory, changing neither the program code nor its control flow. Motivated by
lack of any current techniques (for low-end MCUs) that detect these attacks, in
this paper we propose, implement and evaluate DIALED, the first Data-Flow
Attestation (DFA) technique applicable to the most resource-constrained
embedded devices (e.g., TI MSP430). DIALED works in tandem with a companion CFA
scheme to detect all (currently known) types of runtime software exploits at
fairly low cost.
|
To understand and infer meaning in language, neural models have to learn
complicated nuances. Discovering distinctive linguistic phenomena from data is
not an easy task. For instance, lexical ambiguity is a fundamental feature of
language which is challenging to learn. Even more prominently, inferring the
meaning of rare and unseen lexical units is difficult with neural networks.
Meaning is often determined from context. With context, languages allow meaning
to be conveyed even when the specific words used are not known by the reader.
To model this learning process, a system has to learn from a few instances in
context and be able to generalize well to unseen cases. The learning process is
hindered when training data is scarce for a task. Even with sufficient data,
learning patterns for the long tail of the lexical distribution is challenging.
In this thesis, we focus on understanding the potential of context in
neural models and design augmentation models to benefit from it. We focus on
machine translation as an important instance of the more general language
understanding problem. To translate from a source language to a target
language, a neural model has to understand the meaning of constituents in the
provided context and generate constituents with the same meanings in the target
language. This task accentuates the value of capturing nuances of language and
the necessity of generalization from few observations. The main problem we
study in this thesis is what neural machine translation models learn from data
and how we can devise more focused contexts to enhance this learning. Looking
more in-depth into the role of context and the impact of data on learning
models is essential to advance the NLP field. Moreover, it helps highlight the
vulnerabilities of current neural networks and provides insights into designing
more robust models.
|
In this conference paper I present the first full global fit of a dark matter
effective field theory with the global fitting framework GAMBIT. I show the
results of exhaustive parameter space explorations of the effective dark matter
model, including a general set of operators up to dimension 7, and using the
most up-to-date constraints from direct and indirect detection of dark matter,
relic abundance requirements and collider searches for dark matter candidates.
|
In this paper, we examine linear conditions on finite sets of points in
projective space implied by the Cayley-Bacharach condition. In particular, by
bounding the number of points satisfying the Cayley-Bacharach condition, we
force them to lie on unions of low-dimensional linear spaces. These results are
motivated by investigations into degrees of irrationality of complete
intersections, which are controlled by minimum-degree rational maps to
projective space. As an application of our main theorem, we describe the fibers
of such maps for certain complete intersections of codimension two.
|