The framework of database repairs and consistent answers to queries is a
principled approach to managing inconsistent databases. We describe the first
system able to compute the consistent answers of general aggregation queries
with the COUNT(A), COUNT(*), SUM(A), MIN(A), and MAX(A) operators, and with or
without grouping constructs. Our system uses reductions to optimization
versions of Boolean satisfiability (SAT) and then leverages powerful SAT
solvers. We carry out an extensive set of experiments on both synthetic and
real-world data that demonstrate the usefulness and scalability of this
approach.
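The system above reduces consistent query answering to optimization variants of SAT; on toy instances, though, the repair semantics can be illustrated by brute force. The sketch below (hypothetical data, not the paper's SAT-based system) enumerates all repairs of a relation with primary-key violations and reports the range of SUM over them:

```python
from itertools import product

def repairs(tuples, key):
    # group tuples by key; a repair keeps exactly one tuple per key group
    groups = {}
    for t in tuples:
        groups.setdefault(t[key], []).append(t)
    for choice in product(*groups.values()):
        yield list(choice)

# toy relation R(name, salary) with name as key; "alice" violates the key
R = [("alice", 10), ("alice", 20), ("bob", 5)]
sums = {sum(t[1] for t in rep) for rep in repairs(R, 0)}
counts = {len(rep) for rep in repairs(R, 0)}
lo, hi = min(sums), max(sums)
print(lo, hi, counts)  # SUM(salary) ranges over [15, 25]; COUNT(*) is certainly 2
```

Because every repair keeps exactly one tuple per key, COUNT(*) has the certain answer 2, while SUM(salary) is only bounded by the range [15, 25]; the paper's system computes such bounds without enumerating the exponentially many repairs.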
|
We consider the Max-Cut problem. Let $G = (V,E)$ be a graph with adjacency
matrix $(a_{ij})_{i,j=1}^{n}$. Burer, Monteiro & Zhang proposed to find, for
$n$ angles $\left\{\theta_1, \theta_2, \dots, \theta_n\right\} \subset [0,
2\pi]$, minima of the energy $$ f(\theta_1, \dots, \theta_n) = \sum_{i,j=1}^{n}
a_{ij} \cos{(\theta_i - \theta_j)}$$ because configurations achieving a global
minimum lead to a partition of size at least $0.878\cdot$Max-Cut(G). This
approach is known to be computationally viable and leads to very good results
in practice. We prove that by replacing $\cos{(\theta_i - \theta_j)}$ with an
explicit function $g_{\varepsilon}(\theta_i - \theta_j)$, global minima of this
new functional lead to a partition of size at least
$(1-\varepsilon)\cdot$Max-Cut(G). This suggests some interesting
algorithms that perform well. It also shows that the problem of finding
approximate global minima of energy functionals of this type is NP-hard in
general.
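The energy minimization above can be sketched with plain gradient descent on the angles followed by random hyperplane rounding (a toy illustration using the cosine energy, not the authors' $g_{\varepsilon}$ construction):

```python
import numpy as np

def energy(theta, A):
    # f(theta) = sum_{i,j} a_ij cos(theta_i - theta_j)
    d = theta[:, None] - theta[None, :]
    return np.sum(A * np.cos(d))

def minimize_angles(A, steps=2000, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, A.shape[0])
    for _ in range(steps):
        d = theta[:, None] - theta[None, :]
        grad = -2 * np.sum(A * np.sin(d), axis=1)  # symmetric A
        theta -= lr * grad
    return theta

def round_to_cut(theta, A, trials=100):
    # split the circle at random angles t; keep the best induced cut
    best, rng = -1, np.random.default_rng(1)
    for t in rng.uniform(0, np.pi, trials):
        side = np.cos(theta - t) >= 0
        best = max(best, np.sum(A[np.ix_(side, ~side)]))
    return best

# 4-cycle: Max-Cut = 4 (the graph is bipartite)
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A[i, j] = A[j, i] = 1
theta = minimize_angles(A)
cut = round_to_cut(theta, A)
print(f"energy={energy(theta, A):.2f}, cut={cut}")
```

On this bipartite toy graph the minimizing angles alternate by $\pi$, so any rounding threshold recovers the full cut of 4.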
|
Reducing the formation of cracks during growth of GaInP/GaInAs/Ge 3-junction
solar cells on Ge/Si virtual substrates has been attempted by thinning the
structure, namely the Ge bottom cell and the GaInAs middle cell. The
theoretical analysis performed using realistic device parameters indicates that
the GaInAs middle cell can be drastically thinned to 1000 nm while increasing
its In content to 8% with an efficiency loss in the 3-junction cell below 3%.
The experimental results show that the formation of macroscopic cracks is
prevented in thinned GaInAs/Ge 2-junction and GaInP/GaInAs/Ge 3-junction cells.
These prototype crack-free multijunction cells demonstrate the concept and were
used to rule out any possible component integration issue. The performance
metrics are limited by the high threading dislocation density, over $2\times
10^{7}$ cm$^{-2}$, in the virtual substrates used, but an almost
current-matched, crack-free, thinned 3-junction solar cell is demonstrated,
and the pathway towards solar cells with higher voltages is identified.
|
Supertree methods are tree reconstruction techniques that combine several
smaller gene trees (possibly on different sets of species) to build a larger
species tree. The question of interest is whether the reconstructed supertree
converges to the true species tree as the number of gene trees increases (that
is, the consistency of supertree methods). In this paper, we are particularly
interested in the convergence rate of the maximum likelihood supertree.
Previous studies on the maximum likelihood supertree approach often formulate
the question of interest as a discrete problem and focus on reconstructing the
correct topology of the species tree. Aiming to reconstruct both the topology
and the branch lengths of the species tree, we propose an analytic approach for
analyzing the convergence of the maximum likelihood supertree method.
Specifically, we consider each tree as one point of a metric space and prove
that the distance between the maximum likelihood supertree and the species tree
converges to zero at a polynomial rate under some mild conditions. We further
verify these conditions for the popular exponential error model of gene trees.
|
Semantic segmentation in autonomous driving predominantly focuses on learning
from large-scale data with a closed set of known classes without considering
unknown objects. Motivated by safety reasons, we address the video
class-agnostic segmentation task, which considers unknown objects outside the closed
set of known classes in our training data. We propose a novel auxiliary
contrastive loss to learn the segmentation of known classes and unknown
objects. Unlike previous work in contrastive learning that samples the anchor,
positive and negative examples on an image level, our contrastive learning
method leverages pixel-wise semantic and temporal guidance. We conduct
experiments on Cityscapes-VPS by withholding four classes from training and
show a gain from the auxiliary contrastive loss in the segmentation of both
known and unknown objects. We further release a large-scale synthetic
dataset for different autonomous driving scenarios that includes distinct and
rare unknown objects. We conduct experiments on the full synthetic dataset and
a reduced small-scale version, and show that contrastive learning is more
effective in small-scale datasets. Our proposed models, dataset, and code will
be released at https://github.com/MSiam/video_class_agnostic_segmentation.
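The contrastive idea of pulling an anchor toward a positive example and away from negatives can be illustrated with an InfoNCE-style loss on embedding vectors (a generic sketch with synthetic embeddings, not the paper's exact auxiliary loss or its pixel-wise sampling scheme):

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss on a single anchor
    (a generic sketch, not the paper's exact auxiliary loss)."""
    def sim(u, v):  # cosine similarity
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
a = rng.normal(size=8)
pos = a + 0.05 * rng.normal(size=8)            # same class: nearby embedding
negs = [rng.normal(size=8) for _ in range(5)]  # other classes: random embeddings
loss = info_nce(a, pos, negs)
print(loss)  # small when the positive is close to the anchor
```

In the paper's setting, anchors, positives, and negatives would be individual pixel embeddings selected with semantic and temporal guidance rather than whole-image features.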
|
Modern applications, such as social networking systems and e-commerce
platforms, are centered around large-scale storage systems for storing and
retrieving data. In the presence of concurrent accesses, these storage systems
trade off isolation for performance. The weaker the isolation level, the more
behaviors a storage system is allowed to exhibit and it is up to the developer
to ensure that their application can tolerate those behaviors. However, these
weak behaviors occur only rarely in practice, and outside the control of
the application, making it difficult for developers to test the robustness of
their code against weak isolation levels.
This paper presents MonkeyDB, a mock storage system for testing
storage-backed applications. MonkeyDB supports a Key-Value interface as well as
SQL queries under multiple isolation levels. It uses a logical specification of
the isolation level to compute, on a read operation, the set of all possible
return values. MonkeyDB then returns a value randomly from this set. We show
that MonkeyDB provides good coverage of weak behaviors, which is complete in
the limit. We test a variety of applications for assertions that fail only
under weak isolation. MonkeyDB is able to break each of those assertions in a
small number of attempts.
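The core idea, returning on each read a value drawn from the set of allowed return values, can be caricatured in a few lines. The toy store below over-approximates weak isolation by letting a read return any value ever written to a key (a sketch only; MonkeyDB computes the precise set from a logical specification of the isolation level):

```python
import random

class WeakKV:
    """Toy mock store: a read may return any value ever written to the key,
    crudely over-approximating weakly isolated behaviours."""
    def __init__(self, seed=None):
        self.history = {}
        self.rng = random.Random(seed)

    def write(self, key, value):
        self.history.setdefault(key, []).append(value)

    def read(self, key):
        # return a randomly chosen member of the allowed-values set
        return self.rng.choice(self.history[key])

db = WeakKV(seed=0)
db.write("x", 1)
db.write("x", 2)
reads = {db.read("x") for _ in range(50)}
print(reads)  # repeated reads surface both stale and latest values
```

Running an application's assertions against such a store quickly exposes code that silently assumes it will always observe the latest write.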
|
The COVID-19 disease spreads swiftly, and nearly three months after the first
positive case was confirmed in China, Coronavirus started to spread all over
the United States. Some states and counties reported high numbers of positive
cases and deaths, while others reported lower COVID-19 related case and
mortality counts. In this paper, the factors that could affect the risk of
COVID-19 infection and mortality were analyzed at the county level. An
innovative method combining K-means clustering with several classification
models is used to determine the most critical factors. Results showed that
mean temperature, percent of people below poverty, percent of adults with
obesity, air pressure, population density, wind speed, longitude, and percent
of uninsured people were the most significant attributes.
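The two-stage pipeline, clustering regions and then ranking the features that separate the clusters, might be sketched as follows (synthetic two-feature data; the paper's actual county features and classification models are not reproduced here):

```python
import numpy as np

def kmeans(X, k, iters=50):
    # farthest-point initialisation, then Lloyd iterations
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

# toy "county" data: feature 0 separates the two risk groups, feature 1 does not
rng = np.random.default_rng(1)
low = rng.normal([0, 0], 0.3, (50, 2))
high = rng.normal([3, 0], 0.3, (50, 2))
X = np.vstack([low, high])

labels, centers = kmeans(X, 2)
# rank features by between-cluster spread relative to total spread
score = centers.var(axis=0) / X.var(axis=0)
print(score.argmax())  # feature 0 is the most discriminative
```

In place of the variance-ratio score, the paper trains classification models on the cluster labels and reads off feature importances; the clustering step is the same in spirit.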
|
Gravitational-wave sources can serve as standard sirens to probe cosmology by
measuring their luminosity distance and redshift. Such standard sirens are also
useful to probe theories beyond general relativity with a modified
gravitational-wave propagation. Many previous studies on the latter assume
multi-messenger observations so that the luminosity distance can be measured
with gravitational waves while the redshift is obtained by identifying sources'
host galaxies from electromagnetic counterparts. Given that gravitational-wave
events of binary neutron star coalescences with associated electromagnetic
counterpart detections are expected to be rather rare, it is important to
examine the possibility of using standard sirens with gravitational-wave
observations alone to probe gravity. In this paper, we achieve this by
extracting the redshift from the tidal measurement of binary neutron stars that
was originally proposed within the context of gravitational-wave cosmology
(another approach is to correlate "dark sirens" with galaxy catalogs that we do
not consider here). We consider not only observations with ground-based
detectors (e.g. Einstein Telescope) but also multi-band observations between
ground-based and space-based (e.g. DECIGO) interferometers. We find that such
multi-band observations with the tidal information can constrain a parametric
non-Einsteinian deviation in the luminosity distance (due to the modified
friction in the gravitational wave evolution) more stringently than the case
with electromagnetic counterparts by a factor of a few. We also map the
above-projected constraints on the parametric deviation to those on specific
theories and phenomenological models beyond general relativity to put the
former into context.
|
Galaxy groups host the majority of matter and more than half of all the
galaxies in the Universe. Their hot ($10^7$ K), X-ray emitting intra-group
medium (IGrM) reveals emission lines typical of many elements synthesized by
stars and supernovae. Because their gravitational potentials are shallower than
those of rich galaxy clusters, groups are ideal targets for studying, through
X-ray observations, feedback effects, which leave important marks on their gas
and metal contents. Here, we review the history and present status of the
chemical abundances in the IGrM probed by X-ray spectroscopy. We discuss the
limitations of our current knowledge, in particular due to uncertainties in the
modeling of the Fe-L shell by plasma codes, and coverage of the volume beyond
the central region. We further summarize the constraints on the abundance
pattern at the group mass scale and the insight it provides into the history of
chemical enrichment. Parallel to the observational efforts, we review the
progress made by both cosmological hydrodynamical simulations and controlled
high-resolution 3D simulations to reproduce the radial distribution of metals
in the IGrM, the dependence on system mass from group to cluster scales, and
the role of AGN and SN feedback in producing the observed phenomenology.
Finally, we highlight future prospects in this field, where progress will be
driven both by a much richer sample of X-ray emitting groups identified with
eROSITA, and by a revolution in the study of X-ray spectra expected from
micro-calorimeters onboard XRISM and ATHENA.
|
Vaccines are an important public health measure, but vaccine hesitancy and
refusal can create clusters of low vaccine coverage and reduce the
effectiveness of vaccination programs. Social media provides an opportunity to
estimate emerging risks to vaccine acceptance by including geographical
location and detailing vaccine-related concerns. Methods for classifying social
media posts, such as vaccine-related tweets, use language models (LMs) trained
on general domain text. However, measuring vaccine sentiment at scale is
challenging because tweets lack tonal stress and gestural cues, and additional
information about the user, e.g., past tweets or social connections, is not
always available. Another challenge for LMs is the lack of the commonsense
knowledge that is apparent in user metadata, i.e., emoticons, positive and
negative words, etc. In this study, to classify vaccine sentiment tweets with
limited information, we present a novel end-to-end framework of
interconnected components that uses a domain-specific LM trained on
vaccine-related tweets and models commonsense knowledge with a bidirectional
gated recurrent network (CK-BiGRU) with context-aware attention. We further
leverage syntactical, user metadata and sentiment information to capture the
sentiment of a tweet. We experiment on two popular vaccine-related Twitter
datasets and demonstrate that our proposed approach outperforms
state-of-the-art models in identifying pro-vaccine, anti-vaccine and neutral
tweets.
|
Opacity is an information flow property that captures the notion of plausible
deniability in dynamic systems, that is, whether an intruder can deduce that
"secret" behavior has occurred. In this paper we provide a general framework
of opacity to unify the many notions of opacity that exist for discrete
event systems. We use this framework to discuss language-based and state-based
notions of opacity over automata. We present several methods for language-based
opacity verification, and a general approach to transform state-based notions
into language-based ones. We demonstrate this approach for current-state and
initial-state opacity, unifying existing results. We then investigate the
notions of K-step opacity. We provide a language-based view of K-step opacity
encompassing two existing notions and two new ones. We then analyze the
corresponding language-based verification methods both formally and with
numerical examples. In each case, the proposed methods offer significant
reductions in runtime and space complexity.
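For current-state opacity, the standard observer-based verification can be sketched on a toy automaton: build the intruder's state estimates by subset construction over observable events and check that no reachable estimate falls entirely inside the secret set (a minimal illustration, not the paper's optimized methods):

```python
from itertools import chain

def observer_opaque(states, trans, init, secret, observable):
    """Current-state opacity check via the observer (subset construction).
    trans: dict (state, event) -> set of successor states."""
    def ur(S):  # unobservable reach of a set of states
        stack, seen = list(S), set(S)
        while stack:
            q = stack.pop()
            for (p, e), nxt in trans.items():
                if p == q and e not in observable:
                    for r in nxt - seen:
                        seen.add(r)
                        stack.append(r)
        return frozenset(seen)

    start = ur({init})
    frontier, estimates = [start], {start}
    while frontier:
        S = frontier.pop()
        for e in observable:
            nxt = set(chain.from_iterable(trans.get((q, e), set()) for q in S))
            if nxt:
                T = ur(nxt)
                if T not in estimates:
                    estimates.add(T)
                    frontier.append(T)
    # opaque iff the intruder is never certain the current state is secret
    return all(not S <= secret for S in estimates)

# toy automaton: event "a" is observable, "u" is unobservable
trans = {(0, "u"): {1}, (0, "a"): {2}, (1, "a"): {3}}
opaque = observer_opaque({0, 1, 2, 3}, trans, 0, secret={3}, observable={"a"})
print(opaque)  # after "a" the estimate is {2, 3}, so the secret state 3 stays hidden
```

The observer has exponentially many estimates in the worst case, which is precisely the complexity the paper's language-based methods aim to reduce.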
|
The numerical integration of an analytical function $f(x)$ using a finite set
of equidistant points can be performed with quadrature formulas such as the
Newton-Cotes formulas. Unlike Gaussian quadrature formulas, however,
higher-order Newton-Cotes formulas are not stable, which limits the usable
order of such formulas. Existing work showed that, by using orthogonal
polynomials,
stable high-order quadrature formulas with equidistant points can be developed.
We improve upon such work by making use of (orthogonal) Gram polynomials and
deriving an iterative algorithm, together allowing us to reduce the
space-complexity of the original algorithm significantly.
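The underlying idea, replacing exact high-degree interpolation with a stable lower-degree least-squares fit on equidistant points, can be sketched as follows (a generic least-squares quadrature, not the paper's Gram-polynomial iteration):

```python
import numpy as np

def ls_quadrature(f, a, b, n, deg):
    """Integrate f on [a, b] from n equidistant samples via a degree-`deg`
    least-squares polynomial fit. Choosing deg well below n-1 tames the
    instability of high-order Newton-Cotes (a sketch of the idea only)."""
    x = np.linspace(a, b, n)
    coeffs = np.polyfit(x, f(x), deg)  # least-squares polynomial fit
    antider = np.polyint(coeffs)       # antiderivative coefficients
    return np.polyval(antider, b) - np.polyval(antider, a)

approx = ls_quadrature(np.cos, 0.0, np.pi / 2, n=33, deg=8)
print(abs(approx - 1.0))  # integral of cos on [0, pi/2] is exactly 1
```

The Gram (discrete orthogonal) polynomials let the same least-squares fit be built by a stable three-term recurrence instead of a normal-equations solve, which is where the space-complexity savings come from.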
|
Predicting customers' future purchases and lifetime value is a key task for
managing marketing campaigns and optimizing marketing spend. This task is
specifically challenging when the relationships between the customer and the
firm are of a noncontractual nature and therefore the future purchases need to
be predicted based mostly on historical purchases. This work compares two
approaches to predict customer future purchases, first using a
'buy-till-you-die' statistical model to predict customer behavior and later
using a neural network on the same dataset and comparing the results. This
comparison will lead to both a quantitative and a qualitative analysis of
those two methods, as well as recommendations on how to proceed in different
cases and opportunities for future research.
|
Human affect is complex and an active research domain in affective
computing. Affects are traditionally determined through self-report based
psychometric questionnaires or through facial expression recognition. However,
a few state-of-the-art studies have shown the possibility of
recognizing human affects from psychophysiological and neurological signals. In
this article, electroencephalogram (EEG) signals are used to recognize human
affects. EEG signals of 100 participants are collected while
they watch one-minute video stimuli chosen to induce different affective
states. The emotionally tagged videos cover a range of affects,
including happy, sad, disgust, and peaceful. The experimental stimuli are
collected and analyzed intensively. The interrelationship between the EEG
signal frequencies and the ratings given by the participants are taken into
consideration for classifying affective states. Advanced feature extraction
techniques are applied along with the statistical features to prepare a fused
feature vector of affective state recognition. Factor analysis methods are also
applied to select discriminative features. Finally, several popular supervised
machine learning classifiers are applied to recognize different affective
states from the discriminative feature vector. Based on the experiments, the
designed random forest classifier achieves 89.06% accuracy in classifying four
basic affective states.
|
Nonlinear light-matter interactions in structured materials are the source of
exciting properties and enable vanguard applications in photonics. However, the
magnitude of nonlinear effects is generally small, thus requiring high optical
intensities for their manifestation at the nanoscale. Here, we reveal a large
nonlinear response of monolayer hexagonal boron nitride (hBN) in the
mid-infrared phonon-polariton region, triggered by the strongly anharmonic
potential associated with atomic vibrations in this material. We present robust
first-principles theory predicting a threshold light field $\sim40\,$MV/m to
produce order-unity effects in Kerr nonlinearities and harmonic generation,
which are made possible by a combination of the long lifetimes exhibited by
optical phonons and the strongly asymmetric landscape of the configuration
energy in hBN. We further foresee polariton blockade at the few-quanta level in
nanometer-sized structures. In addition, by mixing static and optical fields,
the strong nonlinear response of monolayer hBN gives rise to substantial
frequency shifts of optical phonon modes, exceeding their spectral width for
in-plane DC fields that are attainable using lateral gating technology. We
therefore predict a practical scheme for electrical tunability of the
vibrational modes with potential interest in mid-infrared optoelectronics. The
strong nonlinear response, low damping, and robustness of hBN polaritons set
the stage for the development of applications in light modulation, sensing, and
metrology, while triggering the search for intense vibrational nonlinear
response in other ionic materials.
|
The present paper is devoted to the description of local and 2-local
automorphisms on Cayley algebras over an arbitrary field $\mathbb{F}$. Given a
Cayley algebra $\mathcal{C}$ with norm $n$, let $O(\mathcal{C},n)$ be the
corresponding orthogonal group. We prove that the group of all local
automorphisms of $\mathcal{C}$ coincides with the group $\{\varphi\in
O(\mathcal{C},n)\mid \varphi(1)=1\}.$ Further we prove that the behavior of
2-local automorphisms depends on the Cayley algebra being split or division.
Every 2-local automorphism on the split Cayley algebra is an automorphism;
that is, they form the exceptional Lie group $G_2(\mathbb{F})$ if
$\textrm{char}\,\mathbb{F}\neq 2,3$. On the other hand, on division Cayley
algebras over a field $\mathbb{F}$, the groups of 2-local automorphisms and
local automorphisms coincide, and they are isomorphic to the group
$\{\varphi\in O(\mathcal{C},n)\mid \varphi(1)=1\}.$
|
At the fundamental level, quantum communication is ultimately limited by
noise. For instance, quantum signals cannot be amplified without the
introduction of noise in the amplified states. Furthermore, photon loss reduces
the signal-to-noise ratio, accentuating the effect of noise. Thus, most of the
efforts in quantum communications have been directed towards overcoming noise
to achieve longer communication distances, larger secret key rates, or to
operate in noisier environmental conditions. Here, we propose and
experimentally demonstrate a platform for quantum communication based on
ultrafast optical techniques. In particular, our scheme enables the
experimental realization of high rates and quantum signal filtering approaching
a single spectro-temporal mode, resulting in a dramatic reduction in channel
noise. By experimentally realizing a 1-ps optically induced temporal gate, we
show that ultrafast time filtering can result in an improvement in noise
tolerance by a factor of up to 1200 compared to a 2-ns electronic filter
enabling daytime quantum key distribution or quantum communication in bright
fibers.
|
Big data has been gaining overwhelming attention over the last decade. Almost
all the fields of science and technology have experienced a considerable impact
from it. The cloud computing paradigm has been targeted for big data processing
and mining in a more efficient manner using the plethora of resources available
from computing nodes to efficient storage. Cloud data mining introduces the
concept of performing data mining and analytics of huge data in the cloud,
leveraging cloud resources. But can we do better? Yes, of course! The main
contribution of this chapter is the identification of four game-changing
technologies for the acceleration of computing and analysis of data mining
tasks in the cloud. Graphics Processing Units can be used to further accelerate
the mining or analytic process, which is called GPU accelerated analytics.
Further, Approximate Computing can also be introduced into big data analytics
to bring efficiency to the process by reducing time and energy, and hence
facilitating greenness in the entire computing process. Quantum Computing is a
paradigm that is gaining pace in recent times and can also enable efficient
and fast big data analytics. We have surveyed
these three technologies and established their importance in big data mining
with a holistic architecture by combining these three game-changers with the
perspective of big data. We have also talked about another future technology,
i.e., Neural Processing Units or Neural accelerators for researchers to explore
the possibilities. A brief explanation of big data and cloud data mining
concepts is also presented here.
|
The ferromagnetic superconductor URhGe has an orthorhombic structure and
possesses spontaneous magnetisation along the $c$-axis. A magnetic field
directed along the $b$-axis suppresses ferromagnetism in the $c$-direction and
leads to a metamagnetic transition into a polarised paramagnetic state in the
$b$-direction. A theory of these phenomena, based on the specific magnetic
anisotropy of this material in the $(b,c)$ plane, is given. The line of the
first-order metamagnetic transition ends at a critical point. A Van der
Waals-type description of the behaviour of physical properties near this point
is developed. The triplet superconducting state destroyed by the orbital
effect is recreated in the vicinity of the transition. It is shown that the
reentrance of superconductivity is caused by the sharp increase of the
magnetic susceptibility in the $b$-direction near the metamagnetic transition.
The specific behaviour of the upper critical field in the direction of
spontaneous magnetisation in UCoGe and in UGe$_2$, related to the field
dependence of the magnetic susceptibility, is discussed.
|
Let $p$ be a prime number, and let $k$ be an algebraically closed field of
characteristic $p$. We show that the tame fundamental group of a smooth affine
curve over $k$ is a projective profinite group. We prove that the fundamental
group of a smooth projective variety over $k$ is finitely presented. More
generally we prove that the tame fundamental group of a smooth quasi-projective
variety over $k$ which admits a good compactification is finitely presented.
|
Community question answering and discussion platforms such as Reddit, Yahoo!
answers or Quora provide users the flexibility of asking open-ended questions
to a large audience, and replies to such questions may be useful both to the
user and the community on certain topics such as health, sports or finance.
Given the recent events around COVID-19, some of these platforms have attracted
2000+ questions from users about several aspects associated with the disease.
Given the impact of this disease on the general public, in this work we
investigate
ways to improve the ranking of user generated answers on COVID-19. We
specifically explore the utility of external technical sources of side
information (such as CDC guidelines or WHO FAQs) in improving answer ranking on
such platforms. We found that ranking user answers based on question-answer
similarity is not sufficient, and existing models cannot effectively exploit
external (side) information. In this work, we demonstrate the effectiveness of
different attention based neural models that can directly exploit side
information available in technical documents or verified forums (e.g., research
publications on COVID-19 or WHO website). Augmented with a temperature
mechanism, the attention based neural models can selectively determine the
relevance of side information for a given user question, while ranking answers.
|
To celebrate Hans Frauenfelder's achievements, we examine energy(-like)
"landscapes" for complex living systems. Energy landscapes summarize all
possible dynamics of some physical systems. Energy(-like) landscapes can
explain some biomolecular processes, including gene expression and, as
Frauenfelder showed, protein folding. But energy-like landscapes and existing
frameworks like statistical mechanics seem impractical for describing many
living systems. Difficulties stem from living systems being high dimensional,
nonlinear, and governed by many, tightly coupled constituents that are noisy.
The predominant modeling approach is devising differential equations that are
tailored to each living system. This ad hoc approach faces the notorious
"parameter problem": models have numerous nonlinear, mathematical functions
with unknown parameter values, even for describing just a few intracellular
processes. One cannot measure many intracellular parameters or can only measure
them as snapshots in time. Another modeling approach uses cellular automata to
represent living systems as discrete dynamical systems with binary variables.
Quantitative (Hamiltonian-based) rules can dictate cellular automata (e.g.,
Cellular Potts Model). But numerous biological features, in current practice,
are qualitatively described rather than quantitatively (e.g., gene is (highly)
expressed or not (highly) expressed). Cellular automata governed by verbal
rules are useful representations for living systems and can mitigate the
parameter problem. However, they can yield complex dynamics that are difficult
to understand, because many of the existing mathematical tools and theorems
apply to continuous but not discrete dynamical systems. Recent studies found
ways to overcome this challenge by discovering a predictive "landscape" that
yields low-dimensional representations of cellular automata dynamics. We review
these studies.
|
The second phase of the APOGEE survey is providing near-infrared,
high-resolution, high signal-to-noise spectra of stars in the halo, disk, bar
and bulge of the Milky Way. The near-infrared spectral window is especially
important in the study of the Galactic bulge, where stars are obscured by the
dust and gas of the disk in its line-of-sight. We present a chemical
characterisation of the globular cluster NGC 6544 with high-resolution
spectroscopy. The characterisation of the cluster's chemical fingerprint,
given its status of "interloper" towards the Galactic bulge and the clear
signatures of tidal disruption in its core, is crucial for future chemical
tagging efforts.
Cluster members were selected from the DR16 of the APOGEE survey, using
chemo-dynamical criteria of individual stars. A sample of 23 members of the
cluster was selected. An analysis considering intra-cluster abundance
variations and known anticorrelations is given. According to the RGB content of
the cluster, the iron content and $\alpha$-enhancement are [Fe/H] $= -1.44 \pm
0.04$ dex and [$\alpha$/Fe] $= 0.20 \pm 0.04$ dex, respectively. Cluster
members show a significant spread in [Fe/H] and [Al/Fe] that is larger than
expected based on measurement errors. An [Al/Fe] spread, a signal of an Mg-Al
anticorrelation, is observed and used to constrain the cluster mass budget;
variations in the C, N, Mg, Si, K, Ca, and Ce elements are also discussed.
Across all the analysed evolutionary stages (RGB and AGB), about two thirds
(14 out of 23) of the stars show distinct chemical patterns, possibly
associated with second-generation stars.
|
The transition to a low-carbon economy is one of the ambitions of the
European Union for 2030. Biobased industries play an essential role in this
transition. However, there has been an on-going discussion about the actual
benefit of using biomass to produce biobased products, specifically the use of
agricultural materials (e.g., corn and sugarcane). This paper presents the
environmental impact assessment of 30% and 100% biobased PET (polyethylene
terephthalate) production using EU biomass supply chains (e.g., sugar beet,
wheat, and Miscanthus). An assessment integrating the life cycle assessment
methodology with global sensitivity assessment is presented as an
early-stage support tool to propose and select supply chains that improve the
environmental performance of biobased PET production. From the results,
Miscanthus is the best option for the production of biobased PET: promoting EU
local supply chains, reducing greenhouse gas (GHG) emissions (process and
land-use change), and generating lower impacts in midpoint categories related
to resource depletion, ecosystem quality, and human health. This tool can help
improve the environmental performance of processes that could boost the shift
to a low-carbon economy.
|
We demonstrate a low-noise short-wavelength infrared (SWIR) Sb-based type II
superlattice (T2SL) avalanche photodiode (APD). The SWIR GaSb/(AlAsSb/GaSb)
APD structure was designed based on impact ionization engineering and grown by
molecular beam epitaxy on a GaSb substrate. At room temperature, the device
exhibits a 50% cut-off wavelength of 1.74 micron. The device was revealed to
have an electron-dominated avalanche mechanism with a gain value of 48 at room
temperature. The electron and hole impact ionization coefficients were
calculated and compared to give a better prospect of the performance of the
device. Low excess noise, as characterized by a carrier ionization ratio of
~0.07, has been achieved.
|
Multilingual pretrained language models have demonstrated remarkable
zero-shot cross-lingual transfer capabilities. Such transfer emerges by
fine-tuning on a task of interest in one language and evaluating on a distinct
language, not seen during the fine-tuning. Despite promising results, we still
lack a proper understanding of the source of this transfer. Using a novel layer
ablation technique and analyses of the model's internal representations, we
show that multilingual BERT, a popular multilingual language model, can be
viewed as the stacking of two sub-networks: a multilingual encoder followed by
a task-specific language-agnostic predictor. While the encoder is crucial for
cross-lingual transfer and remains mostly unchanged during fine-tuning, the
task predictor is of little importance for the transfer and can be reinitialized
during fine-tuning. We present extensive experiments with three distinct tasks,
seventeen typologically diverse languages and multiple domains to support our
hypothesis.
|
We present a thermodynamics experiment suitable for first year undergraduate
students employing Stirling Engines to create a demonstration of energy
transformation and to measure the mechanical efficiency of such engines. Using
an inexpensive transparent chambered Stirling Engine, students can connect
concepts such as the theoretical pressure-volume diagram with the physical
movements of the engine's pistons and the resultant useful output work of a
spinning wheel. We found that the majority of students successfully complete
this experiment, obtaining results similar to the authors' own. In
addition to the core thermodynamics lesson, this experiment incorporates DC
circuits, oscilloscopes, and data analysis so it can be integrated into a wider
undergraduate physics course to combine the teaching of multiple subjects.
|
Video super-resolution has recently become one of the most important
mobile-related problems due to the rise of video communication and streaming
services. While many solutions have been proposed for this task, the majority
of them are too computationally expensive to run on portable devices with
limited hardware resources. To address this problem, we introduce the first
Mobile AI challenge, where the target is to develop end-to-end deep
learning-based video super-resolution solutions that can achieve real-time
performance on mobile GPUs. The participants were provided with the REDS
dataset and trained their models to perform efficient 4X video upscaling. The
runtime of all models was evaluated on the OPPO Find X2 smartphone with the
Snapdragon 865 SoC capable of accelerating floating-point networks on its
Adreno GPU. The proposed solutions are fully compatible with any mobile GPU and
can upscale videos to HD resolution at up to 80 FPS while demonstrating high
fidelity results. A detailed description of all models developed in the
challenge is provided in this paper.
|
We investigated magnetic textures in a Sc-doped hexaferrite film by means of
phase microscopy (PM) with a hole-free phase plate in a transmission electron
microscope. In a zero magnetic field, the stripe-shaped magnetic domains
coexist with magnetic bubbles. The magnetization in both types of domains is
oriented perpendicular to the film, and the domain walls have an in-plane
magnetization. In the remanent state at 9.2 mT, several magnetic bubbles formed
alongside stripe-shaped magnetic domains, and the out-of-plane component in the
stripe-shaped domains gradually appeared as the film thickness increased. As
the film thickness increases further, the magnetic
bubbles with clockwise or counter-clockwise spin helicities formed a triangular
lattice. These results in the remanent state suggest that the domain wall energy
in the magnetic bubble domains is lower in the thicker region.
|
This paper addresses the use of data-driven evolving techniques applied to
fault prognostics. In such problems, accurate predictions of multiple steps
ahead are essential for the Remaining Useful Life (RUL) estimation of a given
asset. Fault prognostics solutions must be able to model the typical
nonlinear behavior of the degradation processes of these assets and be
adaptable to each unit's particularities. In this context, Evolving Fuzzy
Systems (EFSs) are models capable of representing such behaviors, in addition
to being able to deal with the non-stationary behavior also present in these
problems. Moreover, a methodology to recursively track the model's estimation
error is presented as a way to quantify the uncertainties that propagate in
the long-term predictions. NASA's well-established Li-ion battery data
set is used to evaluate the models. The experiments indicate that generic EFSs
can take advantage of both historical and stream data to estimate the RUL and
its uncertainty.
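One simple way to realize such recursive error tracking is sketched below. This is an illustrative assumption, not the exact EFS methodology of the abstract: the forgetting factor `lam` and the rule for widening multi-step prediction intervals with the tracked error variance are both hypothetical choices.

```python
import math

def update_error_variance(var_prev, error, lam=0.95):
    """Exponentially weighted recursive estimate of the squared one-step
    prediction error (lam is an assumed forgetting factor)."""
    return lam * var_prev + (1.0 - lam) * error ** 2

def prediction_interval(forecast, var, steps_ahead, z=1.96):
    """Widen the interval as uncertainty accumulates over the horizon."""
    half_width = z * math.sqrt(var * steps_ahead)
    return forecast - half_width, forecast + half_width

# Track error variance over a stream of (prediction, observation) pairs.
var = 0.0
for pred, obs in [(1.0, 1.1), (1.2, 1.0), (1.4, 1.5)]:
    var = update_error_variance(var, obs - pred)

# Ten-steps-ahead interval around a hypothetical long-term forecast.
lo, hi = prediction_interval(1.6, var, steps_ahead=10)
```

The interval grows with the prediction horizon, mirroring how uncertainty propagates in long-term RUL estimates.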
|
The mutual information is a measure of classical and quantum correlations of
great interest in quantum information. It is also relevant in quantum many-body
physics, by virtue of satisfying an area law for thermal states and bounding
all correlation functions. However, calculating it exactly or approximately is
often challenging in practice. Here, we consider alternative definitions based
on R\'enyi divergences. Their main advantage over their von Neumann counterpart
is that they can be expressed as a variational problem whose cost function can
be efficiently evaluated for families of states like matrix product operators
while preserving all desirable properties of a measure of correlations. In
particular, we show that they obey a thermal area law in great generality, and
that they upper bound all correlation functions. We also investigate their
behavior on certain tensor network states and on classical thermal
distributions.
|
A dilute suspension of motile micro-organisms subjected to a strong ambient
flow, such as algae in the ocean, can be modelled as a population of
non-interacting, orientable active Brownian particles (ABPs). Using the
Smoluchowski equation (i.e. Fokker-Planck equation in space and orientation),
one can describe the non-trivial transport phenomena of ABPs such as taxes and
shear-induced migration. This work transforms the Smoluchowski equation into a
transport equation, in which the drifts and dispersions can be further
approximated as a function of the local flow field. The new model can be
applied to any global flow field due to its local nature, unlike previous
methods such as those utilising the generalised Taylor dispersion theory. The
transformation shows that the overall drift includes both the biased motility
of individual particles in the presence of taxis and the shear-induced
migration in the absence of taxis. In addition, it uncovers other new drifts
and dispersions caused by the interactions between the orientational dynamics
and the passive advection/diffusion of ABPs. Finally, the performance of this
model is assessed using examples of gyrotactic suspensions, where the proposed
model is demonstrated to be most accurate when the biased motility of particles
(i.e. taxis) is weak.
|
With the rapid advancement of information and communication technologies,
many researchers have adopted alternative data sources from private data
vendors to study human movement dynamics in response to large-scale natural or
societal events. Big geosocial data such as georeferenced tweets are publicly
available and dynamically evolving as real-world events are happening, making
it more likely to capture the real-time sentiments and responses of
populations. However, precisely geolocated geosocial data is scarce and biased
toward urban population centers. In this research, we developed a big geosocial
data analytical framework for extracting human movement dynamics in response to
large-scale events from publicly available georeferenced tweets. The framework
includes a two-stage data collection module that collects data in a more
targeted fashion in order to mitigate the data scarcity issue of georeferenced
tweets; in addition, a variable bandwidth kernel density estimation (VB-KDE)
approach was adopted to fuse georeference information at different spatial
scales, further augmenting the signals of human movement dynamics contained in
georeferenced tweets. To correct for the sampling bias of georeferenced tweets,
we adjusted the number of tweets for different spatial units (e.g., county,
state) by population. To demonstrate the performance of the proposed analytic
framework, we chose an astronomical event that occurred nationwide across the
United States, i.e., the 2017 Great American Eclipse, as an example event and
studied the human movement dynamics in response to this event. However, this
analytic framework can easily be applied to other types of large-scale events
such as hurricanes or earthquakes.
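A variable-bandwidth KDE of the kind mentioned above can be sketched as follows. This is a hypothetical minimal implementation: the per-point bandwidths standing in for different geolocation precisions (precise point coordinates versus coarse county-level centroids) are assumptions, not the paper's calibration.

```python
import numpy as np

def vb_kde(eval_pts, samples, bandwidths):
    """Variable-bandwidth Gaussian KDE in 2D: each sample carries its own
    bandwidth, so coarsely geolocated points (e.g. county centroids) spread
    their mass wider than precise point coordinates."""
    eval_pts = np.atleast_2d(eval_pts)   # (m, 2) evaluation locations
    samples = np.atleast_2d(samples)     # (n, 2) sample locations
    h = np.asarray(bandwidths)           # (n,) one bandwidth per sample
    d2 = ((eval_pts[:, None, :] - samples[None, :, :]) ** 2).sum(-1)  # (m, n)
    kern = np.exp(-d2 / (2 * h**2)) / (2 * np.pi * h**2)
    return kern.mean(axis=1)

# Two precisely geolocated tweets and one county-level tweet (coarse bandwidth).
samples = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
bandwidths = np.array([0.2, 0.2, 2.0])
density = vb_kde(np.array([[0.0, 0.0], [5.0, 5.0]]), samples, bandwidths)
```

The density is higher near the cluster of precise points than near the single coarse point, illustrating how bandwidth choice fuses information at different spatial scales.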
|
The execution of similar units can be compared by their internal behaviors to
determine the causes of their potential performance issues. For instance, by
examining the internal behaviors of different fast or slow web requests more
closely and by clustering and comparing their internal executions, one can
determine what causes some requests to run slowly or behave in unexpected ways.
In this paper, we propose a method of extracting the internal behavior of web
requests as well as introduce a pipeline that detects performance issues in web
requests and provides insights into their root causes. First, low-level and
fine-grained information regarding each request is gathered by tracing both the
user space and the kernel space. Second, further information is extracted and
fed into an outlier detector. Finally, these outliers are then clustered by
their behavior, and each group is analyzed separately. Experiments revealed
that this pipeline is indeed able to detect slow web requests and provide
additional insights into their true root causes. Notably, we were able to
identify a real PHP cache contention using the proposed approach.
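The detect-then-cluster stages of the pipeline can be illustrated with a toy sketch. The field names, the z-score threshold, and the pre-aggregated per-event timings are all hypothetical; the actual system gathers this information by tracing user space and kernel space.

```python
import statistics
from collections import defaultdict

def detect_slow_requests(requests, z_thresh=2.0):
    """Flag requests whose duration is a z-score outlier, then group the
    outliers by their dominant internal event for per-cause analysis."""
    durations = [r["duration_ms"] for r in requests]
    mu = statistics.mean(durations)
    sigma = statistics.stdev(durations)
    outliers = [r for r in requests
                if sigma > 0 and (r["duration_ms"] - mu) / sigma > z_thresh]
    clusters = defaultdict(list)
    for r in outliers:
        # Dominant event = the internal event that consumed the most time.
        cause = max(r["events"], key=r["events"].get)
        clusters[cause].append(r["id"])
    return dict(clusters)

# Twenty fast requests and one slow request dominated by lock contention.
requests = [
    {"id": i, "duration_ms": 10.0, "events": {"cpu": 9, "io": 1}}
    for i in range(20)
]
requests.append({"id": 99, "duration_ms": 250.0,
                 "events": {"cpu": 20, "lock_contention": 230}})
clusters = detect_slow_requests(requests)
```

Grouping outliers by dominant cause lets each cluster (here, lock contention) be analyzed separately, as in the paper's root-cause analysis.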
|
The fundamental problem of stabilizing a general non-affine continuous-time
nonlinear system is investigated via piecewise affine linear models (PALMs) in
this paper. A novel integral sliding-mode parallel control (ISMPC) approach is
developed, where an uncertain piecewise affine system (PWA) is constructed to
model a non-affine continuous-time nonlinear system equivalently on a compact
region containing the origin. A piecewise integral sliding-mode parallel
controller is designed to globally stabilize the uncertain PWA and,
consequently, to semi-globally stabilize the original nonlinear system. The
proposed scheme enjoys two favorable features: i) some restrictions on the
system input channel are eliminated, making the developed method less
restrictive than the published approaches; and ii) it conveniently handles
both matched and unmatched uncertainties of the system. Moreover, we
provide discussions about the universality analysis of the developed control
strategy for two kinds of typical nonlinear systems. Simulation results from
two numerical examples further demonstrate the performance of the developed
control approach.
|
In this paper we report results of a numerical investigation of turbulent
natural gas combustion for a jet in a coflow of lean combustion products in the
Delft-Jet-in-Hot-Coflow (DJHC) burner, which emulates MILD (Moderate or Intense
Low Oxygen Dilution) combustion behavior. The focus is on assessing the
performance of the Eddy Dissipation Concept (EDC) model in combination with
two-equation turbulence models and chemical kinetic schemes for about 20
species (Correa mechanism and DRM19 mechanism) by comparing predictions with
experimental measurements. We study two different flame conditions
corresponding to two different oxygen levels (7.6% and 10.9% by mass) in the
hot coflow, and two jet Reynolds numbers (Re=4100 and Re=8800). The mean
velocity and turbulent kinetic energy predicted by different turbulence models
are in good agreement with data without exhibiting large differences among the
model predictions. The realizable k-$\epsilon$ model exhibits better performance
in the prediction of entrainment. The EDC combustion model predicts ignition
too early, leading to a peak in the radial mean temperature profile at too low
an axial distance. However, the model correctly predicts the experimentally observed
decreasing trend of lift-off height with jet Reynolds number. A detailed
analysis of the mean reaction rate of the EDC model is made, and a low
turbulent Reynolds number effect is identified as a possible cause of the
deviations between model predictions and experiments. Using modified EDC model
constants, the premature ignition prediction can be avoided. The results are
weakly sensitive to the sub-model for laminar viscosity and laminar diffusion
fluxes.
|
The automorphism group $Aut(X,\mu)$ of a compact, complete metric space $X$
with a Radon measure $\mu$ is a subgroup of $\mathcal{U}(L^2(X,\mu))$, the
unitary group of operators on $L^2(X,\mu)$. The $Aut(X,\mu)$-action on the
generalized space $\mathcal{M}(X)$ is a proper action. Hence, there exists a
slice at each point of the generalized space $\mathcal{M}(X)$. A measure
groupoid (virtual group) is subsequently employed to analyze the resulting dynamical
system as that of the ergodic action of the commutative algebra (a lattice)
$C(X)$ on the generalized space $\mathcal{M}(X)$ which is represented on a
commutative von Neumann algebra.
|
Drying colloidal droplets have a wide range of applications from medical
diagnostics to coatings for industries. This paper explores the effects of the
substrate temperature (ranging from $25$ to $55 ^{\circ}$C) and various initial
concentrations ($\phi$) of $1$ to $20$ wt% of lysozyme in an aqueous solution
on its drying and final dried film state using bright-field optical microscopy.
The $\phi$ range is divided into three regimes: ultra-concentrated ($17$ $<$
$\phi$ $\leq$ $20$ wt%), concentrated ($9$ $<$ $\phi$ $\leq$ $17$ wt%) and
diluted ($1$ $\leq$ $\phi$ $\leq$ $9$ wt%). In the diluted and concentrated
regimes, the movement of the fluid front slows down in a later non-linear
region as the front carries and deposits protein molecules until the supply in
solution is exhausted. In the ultra-concentrated regime, the fluid front moves
linearly throughout the drying process. The deposition of protein onto the
surface by the fluid front
creates the "coffee-ring" and increases with increasing $\phi$. A dimple is
observed in a central mound-structure, which grows with increasing $\phi$. As
the temperature increases the drying rate increases, reducing the time for
convective flow and the deposition of material at the fluid front.
Interestingly, at (T, $\phi$) = ($55 ^{\circ}$C, $20$ wt%), the droplet forms
the thickest film with suppressed ring formation. The dimple diminishes in the
ultra-concentrated regime, whereas it changes to an expanded spot in some
samples of the diluted and concentrated regimes with elevated temperatures.
Both initial concentration and substrate temperature lead to surface tension
and temperature gradients across the droplet, affecting the morphological
patterns. This study provides insights into protein-protein and
protein-substrate interactions driven by temperature and concentration for
biomedical and biosensing applications.
|
The recently introduced anomaly-free twistor string in four dimensions is
shown to be defined not just in flat but also in curved twistor space. Further,
arguments are given that the classical limit of the corresponding string field
theory, if it exists, is related to general relativity, in particular to the
Isenberg and Yasskin construction using teleparallel gravity. For spacetimes of
Petrov type D with two shear-free null congruences the construction can be
simplified using two-dimensional twistor manifolds.
|
We revisit here congruence relations for B\"uchi automata, which play a
central role in automata-based verification. The size of the classical
congruence relation is in $3^{\mathcal{O}(n^2)}$, where $n$ is the number of
states of a given B\"uchi automaton $\mathcal{A}$. Here we present improved
congruence relations that can be exponentially coarser than the classical one.
We further give asymptotically optimal congruence relations of size
$2^{\mathcal{O}(n \log n)}$. Based on these optimal congruence relations, we
obtain an optimal translation from B\"uchi automata to a family of
deterministic finite automata (FDFA) that accepts the complementary language.
To the best of our knowledge, our construction is the first direct and optimal
translation from B\"uchi automata to FDFAs.
|
COVID-19 has aided the spread of racism, as well as national insecurity,
distrust of immigrants, and general xenophobia, all of which may be linked to
the rise in anti-Asian hate crimes during the pandemic. Coronavirus Disease
2019 (COVID-19) is thought to have originated in late December 2019 in Wuhan,
China, and quickly spread across the world during the spring months of 2020.
Asian Americans recorded an increase in racially based hate crimes, including
physical abuse and intimidation, as COVID-19 spread throughout the United
States. This research study was conducted by high school students in the Bay
Area to compare the intention and characteristics of hate crimes against Asian
Americans to hate crimes against African Americans. According to studies of
both victim-related and most offender-related variables, hate crimes against
Asian Americans have been rapidly growing in the United States and vary from
those against African Americans. This leads to an investigation into the racial
disparity between Asian American offenders and those of other races. The nature
and characteristics of hate crimes against Asian Americans are compared to
those of hate crimes against African Americans in our research. According to
studies of all victim-related factors, hate crimes against Asian Americans are
similar to those against African Americans. Hate crimes against Asian
Americans, on the other hand, vary greatly from hate crimes against African
Americans in terms of the offender's ethnicity and all incident-related
variables.
|
For a sum of squares domain of finite D'Angelo 1-type at the origin, we show
that the polynomial model obtained from the computation of the Catlin multitype
at the origin of such a domain is likewise a sum of squares domain. We also
prove, under the same finite type assumption, that the multitype is an
invariant of the ideal of holomorphic functions defining the domain. Both
results are proven using Martin Kolar's algorithm for the computation of the
multitype introduced in [13]. Given a sum of squares domain, we rewrite the
Kolar algorithm in terms of ideals of holomorphic functions and also introduce
an approach that explicitly constructs the homogeneous polynomial
transformations used in the algorithm.
|
Image Captioning, or the automatic generation of descriptions for images, is
one of the core problems in Computer Vision and has seen considerable progress
using deep learning techniques. We propose to use an Inception-ResNet
convolutional neural network as the encoder to extract features from images,
hierarchical context-based word embeddings for word representations, and a
deep stacked Long Short-Term Memory network as the decoder, in addition to
using image data augmentation to avoid over-fitting. For data augmentation,
we use horizontal and vertical flipping, in addition to perspective
transformations on the images. We evaluate our proposed methods with two
image captioning
frameworks: Encoder-Decoder and Soft Attention. Evaluation on widely used
metrics has shown that our approach leads to considerable improvement in model
performance.
|
Over the last decade, the vector-apodizing phase plate (vAPP) coronagraph has
been developed from concept to on-sky application in many high-contrast imaging
systems on 8-m class telescopes. The vAPP is a geometric-phase-patterned
coronagraph that is inherently broadband, and its manufacturing is enabled only
by direct-write technology for liquid-crystal patterns. The vAPP generates two
coronagraphic PSFs that cancel starlight on opposite sides of the point spread
function (PSF) and have opposite circular polarization states. The efficiency,
that is the amount of light in these PSFs, depends on the retardance offset
from half-wave of the liquid-crystal retarder. Using different liquid-crystal
recipes to tune the retardance, different vAPPs operate with high efficiencies
($>96\%$) in the visible and thermal infrared (0.55 $\mu$m to 5 $\mu$m). Since
2015, seven vAPPs have been installed in a total of six different instruments,
including Magellan/MagAO, Magellan/MagAO-X, Subaru/SCExAO, and LBT/LMIRcam.
Using two integral field spectrographs installed on the latter two instruments,
these vAPPs can provide low-resolution spectra (R$\sim$30) between 1 $\mu$m and
5 $\mu$m. We review the design process, development, commissioning, on-sky
performance, and first scientific results of all commissioned vAPPs. We report
on the lessons learned and conclude with perspectives for future developments
and applications.
|
Several methods are available in the literature to stochastically compare
random variables and random vectors. We introduce the notion of asymptotic
stochastic order for random processes and define four such orders. Various
properties and interrelations of the orders are discussed. Sufficient
conditions for these orders to hold for certain stochastic processes, evolving
from some statistical entities of interest, are derived.
|
We reduce the problem of quantization of the Yang-Mills field Hamiltonian to
a problem for defining a probability measure on an infinite-dimensional space
of gauge equivalence classes of connections on $\mathbb{R}^3$. We suggest a
formally self-adjoint expression for the quantized Yang-Mills Hamiltonian as an
operator on the corresponding Lebesgue $L^2$-space. In the case when the
Yang-Mills field is associated to the Abelian group $U(1)$ we define the
probability measure which depends on two real parameters $m>0$ and $c\neq 0$.
This yields a non-standard quantization of the Hamiltonian of the
electromagnetic field, and the associated probability measure is Gaussian. The
corresponding quantized Hamiltonian is a self-adjoint operator in a Fock space,
the spectrum of which is $\{0\}\cup[\frac12m, \infty)$, i.e., it has a gap.
|
The vast majority of stars in galaxy groups are contained within their
constituent galaxies. Some small fraction of stars is expected, however, to
follow the global dark matter potential of the group. In compact groups,
interactions between the galaxies should be frequent. This leads to more
intensive stripping of material from the group members, which finally forms an
intra-group light component (IGL). Therefore, the distribution of the IGL
should be related to the distribution of the total mass in the compact group
and its dynamical status. In this study we consider the distribution and
fraction of the IGL in a sample of 36 Hickson compact groups (HCGs). We use
deep observations of these compact groups (down to surface brightness $\sim 28$
mag\,arcsec$^{-2}$ in the $r$ band) obtained with the WISE $28$-inch telescope.
For five HCGs with a bright symmetric IGL component, we carry out
multicomponent photometric decomposition to simultaneously fit the galaxy
profiles and the IGL. For the remaining groups, we only fit the profiles of
their constituent galaxies. We find that the mean surface brightness of the IGL
correlates with the mean morphology of the group: it becomes brighter in the
groups with a larger fraction of early-type galaxies. On the other hand, the
IGL brightness depends on the total luminosity of the group. The IGL profile
tends to have a S\'ersic index $n\sim0.5-1$, which is generally consistent with
the mass density profile of dark matter haloes in compact groups obtained from
cosmological simulations.
|
A scheme for the enhanced generation of higher photon-number states is
realized, using an optical time-multiplexing setting that exploits a parametric
down-conversion source for iterative state generation. We use a quantum
feedback mechanism for already generated photons to induce self-seeding of the
consecutive nonlinear process, enabling us to coherently add photons to the
light that propagates in the feedback loop. The addition can be carried out for
any chosen number of round trips, resulting in a successive buildup of
multiphoton states. Our system is only limited by loop losses. The looped
design is rendered possible by a carefully engineered waveguide source that is
compatible with and preserves the shape of the propagating mode. We compare the
fidelities and success probabilities of our protocol with the common direct
heralding of photon-number states. This comparison reveals that, for the same
fidelity, our feedback-based setup significantly enhances success
probabilities, which is vital for efficient utilization in quantum
technologies. Moreover, quantum characteristics of the produced states are
analyzed, and the flexibility of producing higher photon-number states with our
setup beyond the common direct heralding is demonstrated.
|
Attribute-Based Encryption (ABE) is an emerging cryptographic technique that
allows one to embed a fine-grained access control mechanism into encrypted
data. In this paper we propose a novel ABE scheme called SEA-BREW (Scalable and
Efficient Abe with Broadcast REvocation for Wireless networks), which is suited
for Internet of Things (IoT) and Industrial IoT (IIoT) applications. In
contrast to state-of-the-art ABE schemes, ours is capable of securely
performing key revocations with a single short broadcast message, instead of a
number of unicast messages that is linear with the number of nodes. This is
desirable for low-bitrate Wireless Sensor and Actuator Networks (WSANs), which
are often at the heart of (I)IoT systems. In SEA-BREW, sensors, actuators, and
users can exchange encrypted data via a cloud server, or directly via wireless
if they belong to the same WSAN. We formally prove that our scheme is secure
also in case of an untrusted cloud server that colludes with a set of users,
under the generic bilinear group model. We show by simulations that our scheme
requires a constant computational overhead on the cloud server with respect to
the complexity of the access control policies. This is in contrast to
state-of-the-art solutions, which require instead a linear computational
overhead.
|
Stock trend forecasting, aiming at predicting the stock future trends, is
crucial for investors to seek maximized profits from the stock market. Many
event-driven methods utilized events extracted from news, social media, and
discussion boards to forecast the stock trend in recent years. However, existing
event-driven methods have two main shortcomings: 1) overlooking the influence
of event information differentiated by the stock-dependent properties; 2)
neglecting the effect of event information from other related stocks. In this
paper, we propose a relational event-driven stock trend forecasting (REST)
framework, which can address the shortcomings of existing methods. To remedy the
first shortcoming, we propose to model the stock context and learn the effect
of event information on the stocks under different contexts. To address the
second shortcoming, we construct a stock graph and design a new propagation
layer to propagate the effect of event information from related stocks. The
experimental studies on real-world data demonstrate the effectiveness of our
REST framework. The results of investment simulation show that our framework
can achieve a higher return of investment than baselines.
|
The success of neural network embeddings has entailed a renewed interest in
using knowledge graphs for a wide variety of machine learning and information
retrieval tasks. In particular, recent recommendation methods based on graph
embeddings have shown state-of-the-art performance. In general, these methods
encode latent rating patterns and content features. Differently from previous
work, in this paper, we propose to exploit embeddings extracted from graphs
that combine information from ratings and aspect-based opinions expressed in
textual reviews. We then adapt and evaluate state-of-the-art graph embedding
techniques over graphs generated from Amazon and Yelp reviews on six domains,
outperforming baseline recommenders. Additionally, our method has the advantage
of providing explanations that involve the coverage of aspect-based opinions
given by users about recommended items.
|
We address the question of heat transport in out-of-equilibrium systems.
The experimental set-up consists of two coupled granular gas Non-Equilibrium
Steady State (NESS) heat baths, in which Brownian-like rotors are embedded.
These rotors are electro-mechanically coupled through a resistor $R$, thanks
to DC micro-motors, such that energy flows between them. The average flux
depends linearly on the difference of the baths' temperatures. Varying $R$
allows us to extrapolate to the non-dissipative coupling limit ($R\rightarrow0$).
We show that, in this limit, the heat flux obeys the Fluctuation Theorem, in a
form proposed by Jarzynski and W\'ojcik in $2004$ for the fluctuations of the
flux between finite size equilibrium heat baths.
|
A small-cell network with multiple transmitters and unreliable wireless
backhaul is considered for secrecy enhancement. The small-cell network is
operating under a spectrum sharing agreement with a primary network in a
cognitive radio system. A constraint on the desired outage probability at the
primary receiver is assumed as a part of the spectrum sharing agreement. The
reliability of the wireless backhaul links is modeled by a set of independent
and identically distributed Bernoulli random variables. Sub-optimal and optimal
small-cell transmitter selection (TS) schemes are proposed to improve the
performance of the system, depending on the availability of channel state
information. Selection schemes are designed for the scenario where knowledge is
available regarding which backhaul links are active. The corresponding secrecy
outage probabilities along with their asymptotic expressions are derived. It is
shown that the secrecy performance is significantly improved compared to the
case where knowledge of the active backhaul links is unavailable.
|
This work presents a system identification procedure based on Convolutional
Neural Networks (CNN) for human posture control using the DEC (Disturbance
Estimation and Compensation) parametric model. The modular structure of the
proposed control model inspired the design of a modular identification
procedure, in the sense that the same neural network is used to identify the
parameters of the modules controlling different degrees of freedom. In this way,
the presented examples of body sway induced by external stimuli provide several
training samples at once.
|
Coronal holes are the observational manifestation of the solar magnetic field
open to the heliosphere and are of pivotal importance for our understanding of
the origin and acceleration of the solar wind. Observations from space missions
such as the Solar Dynamics Observatory now allow us to study coronal holes in
unprecedented detail. Instrumental effects and other factors, however, pose a
challenge to automatically detect coronal holes in solar imagery. The science
community addresses these challenges with different detection schemes. Until
now, little attention has been paid to assessing the disagreement between these
schemes. In this COSPAR ISWAT initiative, we present a comparison of nine
automated detection schemes widely applied in solar and space science. We
study, specifically, a prevailing coronal hole observed by the Atmospheric
Imaging Assembly instrument on 2018 May 30. Our results indicate that the
choice of detection scheme has a significant effect on the location of the
coronal hole boundary. Physical properties in coronal holes such as the area,
mean intensity, and mean magnetic field strength vary by a factor of up to 4.5
between the maximum and minimum values. We conclude that our findings are
relevant for coronal hole research from the past decade, and are therefore of
interest to the solar and space research community.
|
Variance-based global sensitivity analysis, in particular Sobol' analysis, is
widely used for determining the importance of input variables to a
computational model. Sobol' indices can be computed cheaply based on spectral
methods like polynomial chaos expansions (PCE). Another choice is the recently
developed Poincar\'e chaos expansions (PoinCE), whose orthonormal
tensor-product basis is generated from the eigenfunctions of one-dimensional
Poincar\'e differential operators. In this paper, we show that the Poincar\'e
basis is the unique orthonormal basis with the property that partial
derivatives of the basis form again an orthogonal basis with respect to the
same measure as the original basis. This special property makes PoinCE ideally
suited for incorporating derivative information into the surrogate modelling
process. Assuming that partial derivative evaluations of the computational
model are available, we compute spectral expansions in terms of Poincar\'e
basis functions or basis partial derivatives, respectively, by sparse
regression. We show on two numerical examples that the derivative-based
expansions provide accurate estimates for Sobol' indices, even outperforming
PCE in terms of bias and variance. In addition, we derive an analytical
expression based on the PoinCE coefficients for a second popular sensitivity
index, the derivative-based sensitivity measure (DGSM), and explore its
performance as upper bound to the corresponding total Sobol' indices.
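For any orthonormal spectral expansion (PCE and PoinCE alike), Sobol' indices follow directly from sums of squared coefficients. A minimal sketch with hypothetical coefficients for a two-variable model (the coefficient values are illustrative, not from the paper):

```python
def sobol_from_coefficients(coeffs):
    """Compute first-order and total Sobol' indices from an orthonormal
    spectral expansion. `coeffs` maps multi-indices (alpha_1, ..., alpha_d)
    to expansion coefficients; the all-zero multi-index is the mean term."""
    d = len(next(iter(coeffs)))
    # Total variance: squared coefficients of all non-constant terms.
    var = sum(c**2 for a, c in coeffs.items() if any(a))
    first, total = [], []
    for i in range(d):
        # First-order: terms involving variable i alone.
        fi = sum(c**2 for a, c in coeffs.items()
                 if a[i] > 0 and all(a[j] == 0 for j in range(d) if j != i))
        # Total: all terms involving variable i (alone or in interactions).
        ti = sum(c**2 for a, c in coeffs.items() if a[i] > 0)
        first.append(fi / var)
        total.append(ti / var)
    return first, total

# Hypothetical 2D expansion: mean, two main effects, one interaction term.
coeffs = {(0, 0): 1.0, (1, 0): 2.0, (0, 1): 1.0, (1, 1): 1.0}
first, total = sobol_from_coefficients(coeffs)
```

With these coefficients the total variance is 6, giving first-order indices (4/6, 1/6) and total indices (5/6, 2/6); the same bookkeeping applies whether the coefficients come from a PCE or a PoinCE fit.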
|
The Simon tensor gives rise to a local characterization of the Kerr-NUT
family in the stationary class of vacuum spacetimes. We find that a symmetric
and traceless tensor in the quotient space of the stationary Killing trajectory
offers a useful alternative to the Simon tensor. Our tensor is distinct from
the spatial dual of the Simon tensor and makes the geometric properties of
the three-dimensional quotient space more manifest. The reconstruction
procedure of the metric for which the generalized Simon tensor vanishes is
spelled out in detail. We give a four dimensional description of this tensor in
terms of the Coulomb part of the imaginary selfdual Weyl tensor, which
corresponds to the generalization of the three-index tensor defined by Mars.
This allows us to establish a new and simple criterion for the Kerr-NUT family:
the gradient of the Ernst potential becomes the non-null eigenvector of the
Coulomb part of the imaginary selfdual Weyl tensor. We also discuss the ${\rm
SU}(1,2)$ covariant extension of the obstruction tensor into the
Einstein-Maxwell system as an intrinsic characterization of the Kerr-Newman-NUT
family.
|
We study the use of the Euler characteristic for multiparameter topological
data analysis. The Euler characteristic is a classical, well-understood
topological invariant that has appeared in numerous applications, including in
the context of random fields. The goal of this paper is to extend the use of
the Euler characteristic to higher-dimensional parameter spaces. While
topological data analysis of higher-dimensional parameter spaces using stronger
invariants such as homology continues to be the subject of intense research,
the Euler characteristic is more manageable theoretically and computationally, and
this analysis can be seen as an important intermediary step in multi-parameter
topological data analysis. We show the usefulness of the techniques using
artificially generated examples, and a real-world application of detecting
diabetic retinopathy in retinal images.
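The computational tractability can be illustrated with a toy sketch (the example complexes and grades below are illustrative, not from the paper): the Euler characteristic of a complex is a plain alternating sum over cells, and evaluating it on a grid of parameter pairs yields a simple multiparameter invariant.

```python
def euler_characteristic(simplices):
    # alternating sum over cells: chi = sum over simplices of (-1)^dim
    return sum((-1) ** (len(s) - 1) for s in simplices)

# hollow triangle (3 vertices + 3 edges): chi = 0, like a circle
hollow = [(0,), (1,), (2,), (0, 1), (1, 2), (0, 2)]
# filled triangle (add the 2-cell): chi = 1, like a disk
filled = hollow + [(0, 1, 2)]

def euler_surface(graded, grid):
    """Bifiltration sketch: each simplex appears at a grade (a, b); the
    Euler characteristic evaluated over a grid of parameter pairs gives
    a two-parameter summary."""
    return {(s, t): euler_characteristic(
                [sx for sx, (a, b) in graded if a <= s and b <= t])
            for s, t in grid}
```

No boundary matrices or persistence computations are needed, which is the sense in which the invariant is computationally manageable.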
|
Photometric observations of the double-mode pulsator VX Hya are presented.
They are analyzed with a stroboscopic method, complemented by Fourier analysis.
|
Charged excitons (trions) are essential for the optical spectra in low
dimensional doped monolayers (ML) of transition metal dichalcogenides (TMDC).
Using a direct diagonalization of the three-body Hamiltonian, we explore the
low-lying trion states in four types of TMDC MLs. We show that the trion fine
structure results from the interplay between the spin-valley fine structure of
the single-particle bands and the exchange interaction between the composing
particles. We demonstrate that by varying the doping and the dielectric
environment, the trion energy fine structure can be tuned, leading to anti-crossings
of the bright and dark states with substantial implications for the optical
spectra of TMDC MLs.
|
Diabetic Retinopathy (DR) is a leading cause of vision loss globally. Yet
despite its prevalence, the majority of affected people lack access to the
specialized ophthalmologists and equipment required for assessing their
condition. This can lead to delays in the start of treatment, thereby lowering
their chances for a successful outcome. Machine learning systems that
automatically detect the disease in eye fundus images have been proposed as a
means of facilitating access to DR severity estimates for patients in remote
regions or even for complementing the human expert's diagnosis. In this paper,
we propose a machine learning system for the detection of referable DR in
fundus images that is based on the paradigm of multiple-instance learning. By
extracting local information from image patches and combining it efficiently
through an attention mechanism, our system is able to achieve high
classification accuracy. Moreover, it can highlight potential image regions
where DR manifests through its characteristic lesions. We evaluate our approach
on publicly available retinal image datasets, in which it exhibits near
state-of-the-art performance, while also producing interpretable visualizations
of its predictions.
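The attention-based combination of patch information can be sketched as follows: each image patch gets an embedding, attention weights are computed and normalized across patches, and the weighted sum is classified, with the weights doubling as a lesion-localization map. This is a toy numpy version with random (untrained) projection matrices; names and dimensions are illustrative, not the system's actual architecture.

```python
import numpy as np

def attention_mil_pool(patches, V, w):
    """Attention-based MIL pooling: combine patch embeddings into one bag
    embedding; the attention weights indicate which patches matter."""
    scores = w @ np.tanh(V @ patches.T)        # one score per patch
    a = np.exp(scores - scores.max())
    a /= a.sum()                               # softmax attention weights
    return a @ patches, a                      # bag embedding, weights

rng = np.random.default_rng(0)
patches = rng.normal(size=(6, 8))              # 6 patches, 8-dim embeddings
V = rng.normal(size=(4, 8))                    # illustrative projection
w = rng.normal(size=4)                         # illustrative attention vector
bag, weights = attention_mil_pool(patches, V, w)
```

In a trained system, `V` and `w` are learned jointly with the patch encoder and the final classifier, and `weights` can be rendered as a heatmap over the fundus image.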
|
We consider Bayesian optimization in settings where observations can be
adversarially biased, for example by an uncontrolled hidden confounder. Our
first contribution is a reduction of the confounded setting to the dueling
bandit model. Then we propose a novel approach for dueling bandits based on
information-directed sampling (IDS). Thereby, we obtain the first efficient
kernelized algorithm for dueling bandits that comes with cumulative regret
guarantees. Our analysis further generalizes a previously proposed
semi-parametric linear bandit model to non-linear reward functions, and
uncovers interesting links to doubly-robust estimation.
|
This paper studies inference in linear models whose parameter of interest is
a high-dimensional matrix. We focus on the case where the high-dimensional
matrix parameter is well-approximated by a ``spiked low-rank matrix'' whose
rank grows slowly compared to its dimensions and whose nonzero singular values
diverge to infinity. We show that this framework covers a broad class of
latent-variable models, which can accommodate matrix completion problems, factor
models, varying coefficient models, principal components analysis with missing
data, and heterogeneous treatment effects. For inference, we propose a new
``rotation-debiasing'' method for product parameters initially estimated using
nuclear norm penalization. We present general high-level results under which
our procedure provides asymptotically normal estimators. We then present
low-level conditions under which we verify the high-level conditions in a
treatment effects example.
|
Let $D\ne \mathbb{C}$ be a simply connected domain and $f$ be the Riemann
mapping from $\mathbb{D}$ onto $D$. The Hardy number of $D$ is the supremum of
all $p$ for which $f$ belongs to the Hardy space ${H^p}\left( \mathbb{D}
\right)$. A comb domain is the entire plane minus an infinite number of
vertical rays symmetric with respect to the real axis. In this paper we prove
that for any $p\in [1,+\infty]$, there is a comb domain with Hardy number equal
to $p$, and this result is sharp. It is known that the Hardy number is related
to the moments of the exit time of Brownian motion from the domain. In
particular, our result implies that given $ p < q$ there exists a comb domain
with finite $p$-th moment but infinite $q$-th moment if and only if $q\geq
1/2$. This answers a question posed by Boudabra and Markowsky.
|
In this paper, we introduce a new technique that combines two popular methods
to estimate uncertainty in object detection. Quantifying uncertainty is
critical in real-world robotic applications. Traditional detection models can
be ambiguous even when they provide a high-probability output. Robot actions
based on high-confidence, yet unreliable predictions, may result in serious
repercussions. Our framework employs deep ensembles and Monte Carlo dropout for
approximating predictive uncertainty, and it improves upon the uncertainty
estimation quality of the baseline method. The proposed approach is evaluated
on publicly available synthetic image datasets captured from sequences of
video.
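A minimal sketch of combining the two uncertainty estimators, using toy linear "detectors" in place of real detection networks (all names and numbers here are illustrative assumptions): each ensemble member is run with several random dropout masks, and the spread of all outputs serves as the predictive uncertainty.

```python
import numpy as np

def predict_with_uncertainty(ensemble, x, passes=30, drop=0.3, seed=0):
    """Deep ensembles + MC dropout: collect outputs from every ensemble
    member under `passes` random dropout masks; mean is the prediction,
    standard deviation is the uncertainty estimate."""
    rng = np.random.default_rng(seed)
    outputs = []
    for w in ensemble:                      # ensemble: epistemic diversity
        for _ in range(passes):             # dropout: per-member stochasticity
            mask = rng.random(w.shape) >= drop
            outputs.append(float((w * mask) @ x) / (1 - drop))  # inverted scaling
    outputs = np.array(outputs)
    return outputs.mean(), outputs.std()

# toy "ensemble" of two linear scorers and one input
ensemble = [np.array([0.2, -0.1, 0.4]), np.array([0.25, -0.05, 0.35])]
mean, std = predict_with_uncertainty(ensemble, np.array([1.0, 2.0, 0.5]))
```

A robot policy could then defer or ask for help whenever `std` exceeds a calibrated threshold instead of acting on a confidently wrong detection.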
|
Domain generalization is challenging due to the domain shift and the
uncertainty caused by the inaccessibility of target domain data. In this paper,
we address both challenges with a probabilistic framework based on variational
Bayesian inference, by incorporating uncertainty into neural network weights.
We couple domain invariance with variational Bayesian inference in a
probabilistic formulation. This enables us to explore domain-invariant learning in a
principled way. Specifically, we derive domain-invariant representations and
classifiers, which are jointly established in a two-layer Bayesian neural
network. We empirically demonstrate the effectiveness of our proposal on four
widely used cross-domain visual recognition benchmarks. Ablation studies
validate the synergistic benefits of our Bayesian treatment when jointly
learning domain-invariant representations and classifiers for domain
generalization. Further, our method consistently delivers state-of-the-art mean
accuracy on all benchmarks.
|
Photoexcitation is well-known to trigger electronic metastable states and
lead to phenomena like long-lived photoluminescence and photoconductivity. In
contrast, persistent photo-response due to ionic metastable states is rare. In
this work, we report persistent structural and ferroelectric photo-responses
due to proton metastable states via a nuclear quantum mechanism in
ferroelectric croconic acid, in which the proton-transfer origin of
ferroelectricity is important for the ionic metastable states. We show that,
after photoexcitation, the changes of structural and ferroelectric properties
relax in about 1000 s, while the photoconductivity decays within 1 s,
indicating the dominant ionic origin of the responses. The photogenerated
internal bias field that survives polarization switching process suggests
another proton transfer route and metastable state, in addition to the
metastable states resulting from proton transfer along the hydrogen bonds
proposed previously. Analysis based on the Franck-Condon principle reveals the
quantum mechanical nature of the proton-transfer process both within and
outside the hydrogen bonds, where the small mass of the proton and the
significant change of the potential landscape due to the excited electronic
states are key. The demonstration of persistent photo-responses due to the proton
metastable states unveils a nuclear quantum mechanism for photo-tunability of
materials, which is expected to impact many material properties sensitive to
ionic positions.
|
Developing a reinforcement learning (RL) agent capable of performing complex
control tasks directly from high-dimensional observations such as raw pixels
remains a challenge, as efforts continue towards improving sample efficiency
and generalization. This paper considers a learning framework, the Curiosity
Contrastive Forward Dynamics Model (CCFDM), for achieving more sample-efficient
RL directly from raw pixels. CCFDM incorporates a forward dynamics model (FDM)
and performs contrastive learning to train its deep convolutional neural
network-based image encoder (IE) to extract spatial and temporal information
conducive to sample-efficient RL. In addition, during training, CCFDM provides
intrinsic rewards based on the FDM prediction error, encouraging the curiosity
of the RL agent and improving exploration. The diverse and less repetitive
observations provided by both our exploration strategy and the data
augmentation available in contrastive learning improve not only sample
efficiency but also generalization. Existing model-free RL methods, such as
Soft Actor-Critic, built on top of CCFDM outperform prior state-of-the-art
pixel-based RL methods on the DeepMind Control Suite benchmark.
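The curiosity bonus can be sketched as follows, with toy stand-ins for the learned encoder and FDM (the interfaces and the `scale` parameter are assumptions for illustration, not the paper's exact implementation): the forward model predicts the next latent state, and its prediction error is added to the task reward.

```python
import numpy as np

def intrinsic_reward(fdm, encoder, obs, action, next_obs, scale=0.1):
    """Curiosity bonus: the forward dynamics model predicts the next latent
    state; its squared prediction error (novelty) rewards exploration."""
    z, z_next = encoder(obs), encoder(next_obs)
    z_pred = fdm(z, action)
    return scale * float(np.sum((z_pred - z_next) ** 2))

# toy stand-ins for the learned image encoder and forward dynamics model
encoder = lambda o: np.tanh(o)
fdm = lambda z, a: z + 0.1 * a

r_int = intrinsic_reward(fdm, encoder,
                         np.zeros(4), np.ones(4), 0.2 * np.ones(4))
```

During training the agent would maximize the task reward plus `r_int`, so poorly predicted (novel) transitions are visited more often.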
|
In a decentralized system with $m$ machines, we study the selfish scheduling
problem where each user strategically chooses which machine to use. Each
machine incurs a cost, which is a function of the total load assigned to it,
and some cost-sharing mechanism distributes this cost among the machine's
users. The users choose a machine aiming to minimize their own share of the
cost, so the cost-sharing mechanism induces a game among them. We approach this
problem from the perspective of a designer who can select which cost-sharing
mechanism to use, aiming to minimize the price of anarchy (PoA) of the induced
games.
Recent work introduced the class of \emph{resource-aware} cost-sharing
mechanisms, whose decisions can depend on the set of machines in the system,
but are oblivious to the total number of users. These mechanisms can guarantee
low PoA bounds for instances where the cost functions of the machines are all
convex or concave, but can suffer from very high PoA for cost functions that
deviate from these families.
In this paper we show that if we enhance the class of resource-aware
mechanisms with some prior information regarding the users, then they can
achieve low PoA for a much more general family of cost functions. We first show
that, as long as the mechanism knows just two of the participating users, it
can assign special roles to them and ensure a constant PoA. We then extend
this idea to settings where the mechanism has access to the probability with
which each user is present in the system. For all these instances, we provide a
mechanism that achieves an expected PoA that is logarithmic in the expected
number of users.
|
This work is devoted to the structure of the time-discrete Green-Naghdi
equations including bathymetry. We use the projection structure of the
equations to characterize homogeneous and inhomogeneous boundary conditions for
which the semi-discrete equations are well-posed. This structure allows us to
propose efficient and robust numerical treatment of the boundary conditions
that ensures entropy stability of the scheme by construction. Numerical
evidence is provided to illustrate that our approach is suitable for situations
of practical interest that are not covered by existing theory.
|
We describe a new phenomenon in models of coalescence and fragmentation, that
of gel-shatter cycles. These are dynamical, unforced, stochastic cycles in
which slow, approximately deterministic coalescence up to and beyond gelation
is followed by abrupt random shattering. We describe their appearance in
simulations of stochastic models with multiplicative kernels for coalescence
and spontaneous fragmentation into monomers (`shattering'). The regime in which
such cycles occur is characterized by a cyclicity order parameter, and we
provide a simple scaling argument which describes both this regime and those
which border it.
|
We show that every graded ideal of a Leavitt path algebra is graded
isomorphic to a Leavitt path algebra. It is known that a graded ideal $I$ of a
Leavitt path algebra is isomorphic to the Leavitt path algebra of a graph,
known as the generalized hedgehog graph, which is defined based on certain sets
of vertices uniquely determined by $I$. However, this isomorphism may not be
graded. We show that, by replacing the short "spines" of the generalized hedgehog
graph with possibly fewer, but then necessarily longer spines, we obtain a
graph (which we call the porcupine graph) such that its Leavitt path algebra is
graded isomorphic to $I$. Our proof adapts to show that for every closed
gauge-invariant ideal $J$ of a graph $C^*$-algebra, there is a gauge-invariant
$*$-isomorphism mapping the graph $C^*$-algebra of the porcupine graph of $J$
onto $J.$
|
We introduce methods for deriving analytic solutions from
differential-algebraic systems of equations (DAEs), as well as methods for
deriving governing equations for analytic characterization, a task currently
limited to very small systems because it is carried out by hand. Analytic solutions
to the system and analytic characterization through governing equations provide
insights into the behaviors of DAEs as well as the parametric regions of
operation for each potential behavior. For each system (DAEs), and choice of
dependent variable, there is a corresponding governing equation, a univariate
ODE or PDE that is typically of higher order than the constitutive
equations of the system. We first introduce a direct formulation for
representing systems of linear DAEs. Unlike state space formulations, our
formulation follows very directly from the system of constitutive equations
without the need for introducing state variables or singular matrices. Using
this formulation for the system of constitutive equations (DAEs), we develop
methods for deriving analytic expressions for the full solution (complementary
and particular) for all dependent variables of systems that consist of constant
coefficient ordinary-DAEs and special cases of partial-DAEs. We also develop
methods for deriving the governing equation for a chosen dependent variable for
the constant coefficient ordinary-DAEs and partial-DAEs as well as special
cases of variable coefficient DAEs. The methods can be automated with symbolic
coding environments, thereby allowing systems of any size to be handled while
retaining the analytic nature. This is relevant for interpretable modeling,
analytic characterization and estimation, and engineering design in which the
objective is to tune parameter values to achieve specific behavior. Such
insights cannot directly be obtained using numerical simulations.
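For a toy constant-coefficient ordinary-DAE (an assumed example, not one taken from the paper), the elimination that produces a univariate governing equation can be automated in a symbolic environment such as SymPy:

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x'), sp.Function('y')

# toy system: one differential equation and one algebraic constraint
eq_diff = sp.Eq(x(t).diff(t) + 2 * x(t) - y(t), 0)   # x' + 2x - y = 0
eq_alg = sp.Eq(y(t) - 3 * x(t), 0)                   # y - 3x = 0

# eliminate y to obtain the univariate governing equation for x
y_expr = sp.solve(eq_alg, y(t))[0]            # y = 3*x
governing = eq_diff.subs(y(t), y_expr)        # x' - x = 0
sol = sp.dsolve(governing, x(t))              # x(t) = C1*exp(t)
```

Here the governing equation happens to have the same order as the constitutive equations; in larger systems the symbolic elimination generally raises the order, which is why automation matters.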
|
The next decade will be an exciting period for solar astrophysics, as new
ground- and space-based instrumentation will provide unprecedented observations
of the solar atmosphere and heliosphere. The synergy between modeling effort
and comprehensive analysis of observations is crucial for the understanding of
the physical processes behind the observed phenomena. However, the
unprecedented wealth of data on one hand, and the complexity of the physical
phenomena on the other, require the development of new approaches in both data
analysis and numerical modeling. In this white paper, we summarize recent
numerical achievements to reproduce structure, dynamics, and observed phenomena
from the photosphere to the low corona and outline challenges we expect to face
for the interpretation of future observations.
|
We study the algebraic properties of binary relations whose underlying
digraph is smooth, that is, has no source or sink. Such objects have been
studied as surjective hyper-operations (shops) on the corresponding vertex set,
and as binary relations that are defined everywhere and whose inverse is also
defined everywhere. In the latter formulation, they have been called
multipermutations. We study the lattice structure of sets (monoids) of
multipermutations over an n-element domain. Through a Galois connection, these
monoids form the algebraic counterparts to sets of relations closed under
definability in positive first-order logic without equality. The first side of
this Galois connection has been elaborated previously; here we show the other
side. We study the inverse property on multipermutations and how it connects
our monoids to groups. We use our results to give a simple dichotomy theorem
for the evaluation problem of positive first-order logic without equality on
the class of structures whose preserving multipermutations form a monoid closed
under inverse. These problems turn out either to be in Logspace or to be
Pspace-complete. We go on to study the monoid of all multipermutations on an
n-element domain, under usual composition of relations. We characterise its
Green relations and regular elements, and show that it does not admit a
generating set of size polynomial in n.
|
Portfolio optimization methods suffer from a catalogue of known problems,
mainly due to the facts that pair correlations of asset returns are unstable,
and that extremal risk measures such as maximum drawdown are difficult to
predict due to the non-Gaussianity of portfolio returns. In order to look at
optimal portfolios for arbitrary risk penalty functions, we construct portfolio
shapes where the penalty is proportional to a moment of the returns of
arbitrary order $p>2$. The resulting component weight in the portfolio
scales sub-linearly with its return, with the power-law $w \propto
\mu^{1/(p-1)}$. This leads to significantly improved diversification when
compared to Kelly portfolios, due to the dilution of the winner-takes-all
effect. In the limit of penalty order $p\rightarrow\infty$, we recover the
simple trading heuristic whereby assets are allocated a fixed positive weight
when their return exceeds the hurdle rate, and zero otherwise. Infinite order
power-law portfolios thus fall into the class of perfectly diversified
portfolios.
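A minimal sketch of the resulting allocation rule (the return figures are illustrative, and the construction of the penalty functional itself is not reproduced): weights scale as $\mu^{1/(p-1)}$ above the hurdle rate and vanish below it.

```python
import numpy as np

def power_law_weights(mu, p, hurdle=0.0):
    """Power-law portfolio sketch: w_i proportional to mu_i^(1/(p-1)) for
    expected returns above the hurdle rate, zero otherwise."""
    mu = np.asarray(mu, dtype=float)
    # clip before the fractional power so sub-hurdle assets never produce NaN
    w = np.where(mu > hurdle, np.clip(mu, 0.0, None) ** (1.0 / (p - 1)), 0.0)
    return w / w.sum()

# p = 3: weights scale as sqrt(mu), diluting the winner-takes-all effect
w3 = power_law_weights([0.01, 0.04, 0.09], p=3)    # ratios 1 : 2 : 3
```

As `p` grows, the exponent $1/(p-1)$ tends to zero and the weights flatten towards the equal-weight heuristic over all assets clearing the hurdle.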
|
Optical wireless communication (OWC) meets the demands of the future
sixth-generation mobile network (6G), as it operates at several hundred
terahertz and has the potential to enable data rates on the order of Tbps.
However, most beam steering OWC technologies require high-accuracy positioning
and high-speed control. Resonant beam communication (RBCom), as one kind of
non-positioning OWC technology, has been proposed for high-rate mobile
communications. The mobility of RBCom relies on its self-alignment
characteristic where no positioning is required. In a previous study, an
external-cavity second-harmonic-generation (SHG) RBCom system has been proposed
for eliminating the echo interference inside the resonator. However, its energy
conversion efficiency and complexity are of concern. In this paper, we propose
an intra-cavity SHG RBCom system to simplify the system design and improve the
energy conversion efficiency. We elaborate on the system structure and
establish an analytical model. Numerical results show that, compared with the
external-cavity design, the proposed intra-cavity system consumes less energy
to reach the same channel capacity at the receiver.
|
A foundational question in the theory of linear compartmental models is how
to assess whether a model is identifiable -- that is, whether parameter values
can be inferred from noiseless data -- directly from the combinatorics of the
model. We completely answer this question for those models (with one input and
one output) in which the underlying graph is a bidirectional tree. Such models
include two families of models appearing often in biological applications:
catenary and mammillary models. Our proofs are enabled by two supporting
results, which are interesting in their own right. First, we give the first
general formula for the coefficients of input-output equations (certain
equations that can be used to determine identifiability). Second, we prove that
identifiability is preserved when a model is enlarged in specific ways
involving adding a new compartment with a bidirected edge to an existing
compartment.
|
Biometric technologies, especially face recognition, have become an essential
part of identity management systems worldwide. In deployments of biometrics,
secure storage of biometric information is necessary in order to protect the
users' privacy. In this context, biometric cryptosystems are designed to meet
key requirements of biometric information protection enabling a
privacy-preserving storage and comparison of biometric data.
This work investigates the application of a well-known biometric
cryptosystem, i.e. the improved fuzzy vault scheme, to facial feature vectors
extracted through deep convolutional neural networks. To this end, a feature
transformation method is introduced which maps fixed-length real-valued deep
feature vectors to integer-valued feature sets. As part of said feature
transformation, a detailed analysis of different feature quantisation and
binarisation techniques is conducted. At key binding, obtained feature sets are
locked in an unlinkable improved fuzzy vault. For key retrieval, the efficiency
of different polynomial reconstruction techniques is investigated. The proposed
feature transformation method and template protection scheme are agnostic of
the biometric characteristic. In experiments, an unlinkable improved deep face
fuzzy vault-based template protection scheme is constructed employing features
extracted with a state-of-the-art deep convolutional neural network trained
with the additive angular margin loss (ArcFace). For the best configuration, a
false non-match rate below 1% at a false match rate of 0.01% is achieved in
cross-database experiments on the FERET and FRGCv2 face databases. On average,
a security level of up to approximately 28 bits is obtained. This work presents
an effective face-based fuzzy vault scheme providing privacy protection of
facial reference data as well as digital key derivation from faces.
|
A mapping from Fock-space boson states to qubits is given, and an underlying
digital quantum simulation algorithm for bosons is derived. We realize the
algorithm with matrix product states (MPS), simulating the real-time dynamics
of Yukawa coupling for various initial states and coupling constants. This
proposal may be realized on superconducting NISQ (noisy intermediate-scale
quantum) computers in the near future.
|
Over the past few years, graph neural networks (GNN) and label
propagation-based methods have made significant progress in addressing node
classification tasks on graphs. However, in addition to their reliance on
elaborate architectures and algorithms, there are several key technical details
that are frequently overlooked, and yet nonetheless can play a vital role in
achieving satisfactory performance. In this paper, we first summarize a series
of existing tricks-of-the-trade, and then propose several new ones related to
label usage, loss function formulation, and model design that can significantly
improve various GNN architectures. We empirically evaluate their impact on
final node classification accuracy by conducting ablation studies and
demonstrate consistently-improved performance, often to an extent that
outweighs the gains from more dramatic changes in the underlying GNN
architecture. Notably, many of the top-ranked models on the Open Graph
Benchmark (OGB) leaderboard and KDDCUP 2021 Large-Scale Challenge MAG240M-LSC
benefit from the techniques we introduce.
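As one illustration of a label-usage trick, the sketch below propagates training labels along the graph in the spirit of Correct-and-Smooth-style post-processing (an illustrative stand-in, not the paper's exact recipes): training labels are anchored and diffused over normalized adjacency, giving unlabeled nodes soft label estimates that can be combined with model predictions.

```python
import numpy as np

def label_propagation(adj, labels, train_mask, alpha=0.9, iters=50):
    """Diffuse one-hot training labels over the graph: each iteration mixes
    neighbor averages (weight alpha) with the anchored training labels."""
    y = np.where(train_mask[:, None], labels, 0.0)   # zero out unknown labels
    deg = adj.sum(axis=1)
    dinv = np.where(deg > 0, 1.0 / deg, 0.0)         # row-normalization
    p = y.copy()
    for _ in range(iters):
        p = alpha * (dinv[:, None] * (adj @ p)) + (1 - alpha) * y
    return p

# path graph 0-1-2; nodes 0 and 2 labeled with different classes
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
labels = np.array([[1, 0], [0, 0], [0, 1]], dtype=float)
probs = label_propagation(adj, labels, np.array([True, False, True]))
```

In practice such propagated labels are added as input features or used to smooth a GNN's outputs, which is one of the inexpensive adjustments the paper evaluates.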
|
Hypothesis. Isoniazid is one of the primary drugs used in tuberculosis
treatment. Isoniazid encapsulation in liposomal vesicles can improve drug
therapeutic index and minimize toxic and side effects. In this work, we
consider mixtures of hydrogenated soy phosphatidylcholine/phosphatidylglycerol
(HSPC/DPPG) to obtain novel biocompatible liposomes for isoniazid pulmonary
delivery. Our goal is to understand if the entrapped drug affects bilayer
structure.
Experiments. HSPC-DPPG unilamellar liposomes are prepared and characterized
by dynamic light scattering, $\zeta$-potential, fluorescence anisotropy and
Transmission Electron Microscopy. Isoniazid encapsulation is determined by UV
and Laser Transmission Spectroscopy. Calorimetry, light scattering and Surface
Pressure measurements are used to gain insight into the adsorption and thermodynamic
properties of lipid bilayers in the presence of the drug.
Findings. We find that INH-lipid interaction can increase the entrapment
capability of the carrier due to isoniazid adsorption. The preferential
INH-HSPC dipole-dipole interaction promotes modification of lipid packing and
ordering and favors the condensation of a HSPC-richer phase in molar excess of
DPPG. Our findings highlight the importance of fundamental investigations of
drug-lipid interactions for the optimal design of liposomal nanocarriers.
|
Studies of disordered spin chains have recently experienced a renewed
interest, inspired by the question of the extent to which exact numerical
calculations comply with the existence of a many-body localization phase
transition. For the paradigmatic random field Heisenberg spin chains, many
intriguing features were observed when the disorder is considerable compared to
the spin interaction strength. Here, we introduce a phenomenological theory
that may explain some of those features. The theory is based on the proximity
to the noninteracting limit, in which the system is an Anderson insulator.
Taking the spin imbalance as an exemplary observable, we demonstrate that the
proximity to the local integrals of motion of the Anderson insulator determines
the dynamics of the observable at infinite temperature. In finite interacting
systems our theory quantitatively describes its integrated spectral function
for a wide range of disorders.
|
We investigated the adsorption of iodine on silver (111) in ultra-high
vacuum. Using low-temperature scanning tunneling microscopy (LT-STM)
measurements we catalog the complex surface structures on the local scale. We
identified three distinct phases with increasing iodine coverage, which we
tentatively associate with three phases previously reported in LEED experiments
(sqrt(3) x sqrt(3) R30, "triangular", "hexagonal"). We used Fourier space and
real space analysis to fully characterize each phase. While Fourier analysis
most easily connects our measurements to previous LEED studies, the real space
inspection reveals local variations in the superstructures of the "hexagonal"
and "triangular" phases. The latter, observed here for the first time by
LT-STM, is stabilized by one or two adatoms sitting at the center of a
rosette-like iodine reconstruction. The most striking discovery is that
variations in the adatom separation of the "triangular" phase reconstruct the
Ag(111) surface lattice.
|
We investigate the impact of different assumptions in the modeling of
one-loop galaxy bias on the recovery of cosmological parameters, as a follow up
of the analysis done in the first paper of the series at fixed cosmology. We
use three different synthetic galaxy samples whose clustering properties match
the ones of the CMASS and LOWZ catalogues of BOSS and the SDSS Main Galaxy
Sample. We investigate the relevance of allowing for either short range
non-locality or scale-dependent stochasticity by fitting the real-space galaxy
auto power spectrum or the combination of galaxy-galaxy and galaxy-matter power
spectrum. From a comparison among the goodness-of-fit ($\chi^2$), unbiasedness
of cosmological parameters (FoB), and figure-of-merit (FoM), we find that a
four-parameter model (linear, quadratic, cubic non-local bias, and constant
shot-noise) with fixed quadratic tidal bias provides a robust modelling choice
for the auto power spectrum of the three samples, up to $k_{\rm
max}=0.3\,h\,\mathrm{Mpc}^{-1}$ and for an effective volume of
$6\,h^{-3}\,\mathrm{Gpc}^3$. Instead, a joint analysis of the two observables
fails at larger scales, and a model extension with either higher derivatives or
scale-dependent shot-noise is necessary to reach a similar $k_{\rm max}$, with
the latter providing the most stable results. These findings are obtained with
three, either hybrid or perturbative, prescriptions for the matter power
spectrum, \texttt{RESPRESSO}, gRPT and EFT. In all cases, the inclusion of
scale-dependent shot-noise increases the range of validity of the model in
terms of FoB and $\chi^2$. Interestingly, these model extensions with
additional free parameters do not necessarily lead to an increase in the
maximally achievable FoM for the cosmological parameters
$\left(h,\,\Omega_ch^2,\,A_s\right)$, which are generally consistent with those
of the simpler model at smaller $k_{\rm max}$.
|
We consider the two-body problem in a periodic potential, and study the
bound-state dispersion of a spin-$\uparrow$ fermion that is interacting with a
spin-$\downarrow$ fermion through a short-range attractive interaction. Based
on a variational approach, we obtain the exact solution of the dispersion in
the form of a set of self-consistency equations, and apply it to tight-binding
Hamiltonians with onsite interactions. We pay special attention to the
bipartite lattices with a two-point basis that exhibit time-reversal symmetry,
and show that the lowest-energy bound states disperse quadratically with
momentum, whose effective-mass tensor is partially controlled by the quantum
metric tensor of the underlying Bloch states. In particular, we apply our
theory to the Mielke checkerboard lattice, and study the special role played by
the interband processes in producing a finite effective mass for the bound
states in a non-isolated flat band.
|
The final aim of this paper is to expand the sparse group of X-ray binaries
with gamma-ray counterparts as laboratories to study high-energy processes
under physical conditions that periodically repeat. A follow-up of a candidate
system has been carried out. We applied both photometric and spectroscopic
techniques in the optical domain together with a period analysis using the
phase dispersion minimization and CLEAN methods. A tentative period search was
also conducted in the gamma-ray domain. Our main result is having established
the binary nature of the optical star and X-ray emitter HD 3191 towards the
Fermi gamma-ray source 4FGL J0035.8+6131, the latter previously proposed to be
associated with a blazar of unknown type. An orbital period close to 16 days
is reported for HD 3191 together with a likely rotation, or pulsation, period
of about 0.6 d. Although no convincing evidence for the orbital cycle has been
found in the Fermi light curve up to now, the confirmed presence of a high-mass
X-ray binary towards 4FGL J0035.8+6131 now strengthens the need for caution
about its true nature.
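The phase dispersion minimization (PDM) technique used for the period search above can be illustrated with a short toy implementation. This is a minimal pure-Python sketch, not the authors' pipeline: the synthetic light curve, noise level, and trial-period grid are all made up, with the injected signal merely chosen to mimic the reported 0.6 d periodicity.

```python
# Illustrative phase dispersion minimization (PDM) sketch on synthetic data.
import math, random

def pdm_statistic(times, mags, period, n_bins=10):
    """Ratio of the pooled within-bin variance (on folded phases) to the total
    variance. Near a true period, similar magnitudes fall into the same phase
    bin, so the statistic dips well below 1."""
    mean = sum(mags) / len(mags)
    total_var = sum((m - mean) ** 2 for m in mags) / len(mags)
    bins = [[] for _ in range(n_bins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[min(int(phase * n_bins), n_bins - 1)].append(m)
    num, cnt = 0.0, 0
    for b in bins:
        if len(b) > 1:
            mu = sum(b) / len(b)
            num += sum((m - mu) ** 2 for m in b)
            cnt += len(b)
    return (num / cnt) / total_var

def pdm_search(times, mags, trial_periods):
    return min(trial_periods, key=lambda p: pdm_statistic(times, mags, p))

# Synthetic, unevenly sampled light curve with an injected 0.6 d modulation.
random.seed(0)
times = [random.uniform(0, 30) for _ in range(500)]
mags = [math.sin(2 * math.pi * t / 0.6) + random.gauss(0, 0.1) for t in times]
trials = [0.3 + 0.001 * i for i in range(700)]  # 0.3-1.0 d grid
best = pdm_search(times, mags, trials)
print(round(best, 3))
```

Note that the half-period alias (0.3 d) is rejected automatically: folding at half the true period maps opposite halves of the cycle into the same bin, inflating the within-bin variance.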
|
This study addresses the question of whether visually grounded speech
recognition (VGS) models learn to capture sentence semantics without access to
any prior linguistic knowledge. We produce synthetic and natural spoken
versions of a well known semantic textual similarity database and show that our
VGS model produces embeddings that correlate well with human semantic
similarity judgements. Our results show that a model trained on a small
image-caption database outperforms two models trained on much larger databases,
indicating that database size is not all that matters. We also investigate the
importance of having multiple captions per image and find that this is indeed
helpful even if the total number of images is lower, suggesting that
paraphrasing is a valuable learning signal. While the general trend in the
field is to create ever larger datasets to train models on, our findings
indicate that other characteristics of the database can be just as important.
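The kind of evaluation described above, correlating model similarity scores with human judgements, can be sketched compactly. Everything below is hypothetical: the 2-D "embeddings" and human ratings are toy placeholders, not the paper's data or model.

```python
# Illustrative evaluation sketch: correlate cosine similarity of sentence
# embeddings with human similarity ratings via Spearman's rank correlation.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def spearman(xs, ys):
    """Spearman rank correlation; assumes no ties among the values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

pairs = [  # (embedding pair, human score in [0, 5]) -- all made-up numbers
    (([1.0, 0.1], [0.9, 0.2]), 4.8),
    (([1.0, 0.0], [0.0, 1.0]), 0.5),
    (([0.5, 0.5], [0.6, 0.4]), 4.0),
    (([0.2, 1.0], [1.0, 0.1]), 1.0),
]
sims = [cosine(e1, e2) for (e1, e2), _ in pairs]
human = [h for _, h in pairs]
print(round(spearman(sims, human), 2))  # → 1.0 on this toy set
```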
|
Commonly used automatic speech recognition (ASR) systems can be classified
into frame-synchronous and label-synchronous categories, based on whether the
speech is decoded on a per-frame or per-label basis. Frame-synchronous systems,
such as traditional hidden Markov model systems, can easily incorporate
existing knowledge and can support streaming ASR applications.
Label-synchronous systems, based on attention-based encoder-decoder models, can
jointly learn the acoustic and language information with a single model, which
can be regarded as audio-grounded language models. In this paper, we propose
rescoring the N-best hypotheses or lattices produced by a first-pass
frame-synchronous system with a label-synchronous system in a second-pass. By
exploiting the complementary modelling of the different approaches, the
combined two-pass systems achieve competitive performance without using any
extra speech or text data on two standard ASR tasks. For the 80-hour AMI IHM
dataset, the combined system has a 13.7% word error rate (WER) on the
evaluation set, which is up to a 29% relative WER reduction over the individual
systems. For the 300-hour Switchboard dataset, the WERs of the combined system
are 5.7% and 12.1% on Switchboard and CallHome subsets of Hub5'00, and 13.2%
and 7.6% on Switchboard Cellular and Fisher subsets of RT03, up to a 33%
relative reduction in WER over the individual systems.
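The second-pass combination described above amounts to reranking first-pass hypotheses with an interpolated score. The sketch below is a minimal illustration with made-up hypotheses and log-scores; the interpolation weight `lam` is a hypothetical tuning parameter, not a value from the paper.

```python
# Hypothetical two-pass rescoring sketch: interpolate first-pass
# (frame-synchronous) and second-pass (label-synchronous) log scores for each
# N-best hypothesis and pick the best-scoring one.
def rescore(nbest, lam=0.5):
    """nbest: list of (hypothesis, first_pass_logp, second_pass_logp)."""
    return max(nbest, key=lambda h: lam * h[1] + (1 - lam) * h[2])

nbest = [  # made-up hypotheses and scores
    ("the cat sat on the mat", -12.3, -10.1),
    ("the cat sat on a mat",   -11.9, -13.5),
    ("a cat sat on the mat",   -13.0, -9.8),
]
best, *_ = rescore(nbest, lam=0.5)
print(best)  # → the cat sat on the mat
```

In practice `lam` would be tuned on a development set, and the second-pass scores would come from an attention-based encoder-decoder scoring each hypothesis against the audio.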
|
In this paper, we study the general form of three-point functions of
conserved current multiplets $S_{\alpha(k)}= S_{(\alpha_1 \dots \alpha_k)}$ of
arbitrary rank in four-dimensional ${\mathcal N}=1$ superconformal theory. We
find that the correlation function of three such operators $\langle
\bar{S}_{\dot{\alpha}(k)} (z_1) S_{\beta(k+l)} (z_2) \bar{S}_{\dot{\gamma}(l)}
(z_3) \rangle$ is fixed by the superconformal symmetry up to a single complex
coefficient though the precise form of the correlator depends on the values of
$k$ and $l$. In addition, we present the general structure of mixed correlators
of the form $\langle \bar{S}_{\dot{\alpha}(k)} (z_1) S_{\alpha(k)} (z_2) L(z_3)
\rangle$ and $\langle \bar{S}_{\dot{\alpha}(k)} (z_1) S_{\alpha(k)} (z_2)
J_{\gamma \dot{\gamma}} (z_3) \rangle$, where $L$ is the flavour current
multiplet and $J_{\gamma \dot{\gamma}}$ is the supercurrent.
|
Machine Learning (ML) techniques are becoming an invaluable support for
network intrusion detection, especially in revealing anomalous flows, which
often hide cyber-threats. Typically, ML algorithms are exploited to
classify/recognize data traffic on the basis of statistical features such as
inter-arrival times, packet length distribution, mean number of flows, etc.
Dealing with the vast diversity and number of features that typically
characterize data traffic is a hard problem. This results in the following
issues: i) the presence of so many features leads to lengthy training processes
(particularly when features are highly correlated), while prediction accuracy
does not proportionally improve; ii) some of the features may introduce bias
during the classification process, particularly those that bear little relation
to the data traffic to be classified. To address these issues, by reducing the feature
space and retaining only the most significant features, Feature Selection (FS)
becomes a crucial pre-processing step in network management and, specifically,
for the purposes of network intrusion detection. In this review paper, we
complement other surveys in multiple ways: i) evaluating more recent datasets
(in contrast to the obsolete KDD 99) by means of a designed-from-scratch
Python-based procedure; ii) providing a synopsis of most credited FS approaches
in the field of intrusion detection, including Multi-Objective Evolutionary
techniques; iii) assessing various experimental analyses such as feature
correlation, time complexity, and performance. Our comparisons offer useful
guidelines to network/security managers who are considering the incorporation
of ML concepts into network intrusion detection, where trade-offs between
performance and resource consumption are crucial.
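The filter-style end of the FS spectrum surveyed above can be illustrated in a few lines: rank features by a simple relevance score against the label and keep the top k. This toy uses Pearson correlation on made-up flow statistics; real FS pipelines (wrappers, multi-objective evolutionary methods) are far more elaborate.

```python
# Illustrative filter-based feature selection: rank features by absolute
# Pearson correlation with the label, keep the top-k.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_top_k(features, labels, k):
    """features: dict name -> list of values; labels: list of 0/1."""
    ranked = sorted(features, key=lambda f: -abs(pearson(features[f], labels)))
    return ranked[:k]

# Toy flow records: 'iat_mean' tracks the label, 'pkt_len' is pure noise.
labels = [0, 0, 0, 1, 1, 1]
features = {
    "iat_mean": [0.1, 0.2, 0.1, 0.9, 0.8, 1.0],
    "pkt_len":  [500, 510, 490, 505, 495, 500],
}
print(select_top_k(features, labels, 1))  # → ['iat_mean']
```

Filters like this are cheap but blind to feature interactions, which is precisely why the survey also covers wrapper and multi-objective approaches.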
|
Finding a reduction of complex, high-dimensional dynamics to its essential,
low-dimensional "heart" remains a challenging yet necessary prerequisite for
designing efficient numerical approaches. Machine learning methods have the
potential to provide a general framework to automatically discover such
representations. In this paper, we consider multiscale stochastic systems with
local slow-fast time scale separation and propose a new method to encode in an
artificial neural network a map that extracts the slow representation from the
system. The architecture of the network consists of an encoder-decoder pair
that we train in a supervised manner to learn the appropriate low-dimensional
embedding in the bottleneck layer. We test the method on a number of examples
that illustrate the ability to discover a correct slow representation.
Moreover, we provide an error measure to assess the quality of the embedding
and demonstrate that pruning the network can pinpoint the essential coordinates
of the system needed to build the slow representation.
|
In this paper, we establish optimal Berry--Esseen bounds for generalized
$U$-statistics. The proof is based on a new Berry--Esseen theorem for the
exchangeable pair approach in Stein's method under a general linearity
condition. As applications, optimal convergence rates of the normal
approximation for subgraph counts in Erd\H{o}s--R\'enyi graphs and graphon
random graphs are obtained.
|
High-rate generation of hybrid photon-matter entanglement remains a
fundamental building block of quantum network architectures enabling protocols
such as quantum secure communication or quantum distributed computing. While a
tremendous effort has been made to overcome technological constraints limiting
the efficiency and coherence times of current systems, an important
complementary approach is to employ parallel and multiplexed architectures.
Here we follow this approach experimentally demonstrating the generation of
bipartite polarization-entangled photonic states across more than 500 modes,
with a programmable delay for the second photon enabled by qubit storage in a
wavevector multiplexed cold-atomic quantum memory. We demonstrate a
Clauser-Horne-Shimony-Holt (CHSH) inequality violation by over 3 standard
deviations, persisting for at least 45 {\mu}s of storage time for half of the
modes. The ability
to shape hybrid entanglement between the polarization and wavevector degrees of
freedom provides not only multiplexing capabilities but also brings prospects
for novel protocols.
|
A cohomology theory, associated to an $n$-Lie algebra and a representation
space of it, is introduced. It is observed that this cohomology theory is
able to encode generalized derivation extensions, and that it
coincides, for $n=3$, with the known cohomology of $n$-Lie algebras. The
abelian extensions and infinitesimal deformations of $n$-Lie algebras, on the
other hand, are shown to be characterized by the usual cohomology of $n$-Lie
algebras. Furthermore, the Hochschild-Serre spectral sequence of the Lie
algebra cohomology is upgraded to the level of $n$-Lie algebras, and is applied
to the cohomology of generalized derivation extensions.
|
The scaled relative graph (SRG) of an operator is a subset of the complex
plane. It captures several salient features of an operator, such as
contractiveness, and can be used to reveal the geometric nature of many of the
inequality based arguments used in the convergence analyses of fixed point
iterations. In this paper we show that the SRG of a linear operator can be
determined from the numerical range of a closely related linear operator.
Furthermore, we demonstrate that the SRG of a linear operator has a range of
spectral and convexity properties, and satisfies an analogue of Hildebrandt's
theorem.
|
Several approaches to the formulation of a fractional theory of calculus of
"variable order" have appeared in the literature over the years. Unfortunately,
most of these proposals lack a rigorous mathematical framework. We consider an
alternative view on the problem, originally proposed by G. Scarpi in the early
seventies, based on a naive modification of the representation in the Laplace
domain of standard kernel functions involved in (constant-order) fractional
calculus. We frame Scarpi's ideas within the recent theory of General Fractional
Derivatives and Integrals, which mostly relies on the Sonine condition, and
investigate the main properties of the emerging variable-order operators. Then,
taking advantage of powerful and easy-to-use numerical methods for the
inversion of Laplace transforms of functions defined in the Laplace domain, we
discuss some practical applications of the variable-order Scarpi integral and
derivative.
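Numerical Laplace inversion of the kind mentioned above can be done with several classical schemes. The sketch below uses the Gaver-Stehfest algorithm as one generic, easy-to-implement example (the paper's authors may well use a different scheme, such as Talbot contour methods), verified here on a textbook transform pair rather than on Scarpi's variable-order kernels.

```python
# Generic Gaver-Stehfest numerical inversion of a Laplace-domain function.
from math import factorial, log

def stehfest_coeffs(N):
    """Stehfest weights V_1..V_N (N must be even)."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k)
                  / (factorial(N // 2 - k) * factorial(k) * factorial(k - 1)
                     * factorial(i - k) * factorial(2 * k - i)))
        V.append((-1) ** (N // 2 + i) * s)
    return V

def invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s), sampled on the real axis."""
    ln2 = log(2.0)
    V = stehfest_coeffs(N)
    return (ln2 / t) * sum(V[i] * F((i + 1) * ln2 / t) for i in range(N))

# Sanity check on a known pair: F(s) = 1/s^2  <->  f(t) = t.
print(invert(lambda s: 1.0 / s ** 2, 2.0))
```

Gaver-Stehfest only needs real-axis samples of F(s), which makes it convenient when the transform is defined, as in Scarpi's construction, directly in the Laplace domain; its accuracy degrades for non-smooth f, which is where contour-based methods become preferable.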
|
Time-dependent dynamical properties of a fluid cannot be estimated from a
single configuration without performing a simulation. Here we show, however,
that the scaling properties of both structure and dynamics can be predicted
from a single configuration. The new method is demonstrated to work very well
for equilibrium dynamics of the Kob-Andersen Binary Lennard-Jones mixture.
Furthermore, the method is applied to isobaric cooling where the liquid falls
out of equilibrium and forms a glass, demonstrating that the method requires
neither equilibrium nor constant volume conditions to work, in contrast to
existing methods.
|
Given an undirected graph, $G$, and vertices, $s$ and $t$ in $G$, the
tracking paths problem is that of finding the smallest subset of vertices in
$G$ whose intersection with any $s$-$t$ path results in a unique sequence. This
problem is known to be NP-complete and has applications to animal migration
tracking and detecting marathon course-cutting, but its approximability is
largely unknown. In this paper, we address this latter issue, giving novel
algorithms having approximation ratios of $(1+\epsilon)$, $O(\lg OPT)$ and
$O(\lg n)$, for $H$-minor-free, general, and weighted graphs, respectively. We
also give a linear kernel for $H$-minor-free graphs and make improvements to
the quadratic kernel for general graphs.
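The defining property of a tracking set can be checked directly, if expensively, by enumerating all $s$-$t$ paths and testing that their tracker subsequences are pairwise distinct. The brute-force sketch below (exponential time, illustration only) does exactly that on a small hypothetical graph; the approximation algorithms in the paper of course avoid such enumeration.

```python
# Brute-force verification that a candidate set of trackers gives every
# s-t simple path a distinct subsequence of trackers.
def all_simple_paths(adj, s, t):
    paths, stack = [], [(s, [s])]
    while stack:
        u, path = stack.pop()
        if u == t:
            paths.append(path)
            continue
        for nxt in adj[u]:
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return paths

def is_tracking_set(adj, s, t, trackers):
    seqs = [tuple(v for v in p if v in trackers)
            for p in all_simple_paths(adj, s, t)]
    return len(seqs) == len(set(seqs))

# A 4-cycle with s = 0, t = 3: the two s-t paths are 0-1-3 and 0-2-3.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(is_tracking_set(adj, 0, 3, {1}))    # sequences (1,) vs () → True
print(is_tracking_set(adj, 0, 3, set()))  # both paths map to () → False
```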
|
We analyze the well-known equivalence between the quadratic Kontsevich-Penner
and Hermitian matrix models from the point of view of superintegrability
relations, i.e. explicit formulas for character averages. This is not that
trivial on the Kontsevich side and seems important for further studies of
various deformations of Kontsevich models. In particular, the Brezin-Hikami
extension of the above equivalence becomes straightforward.
|