WALOP (Wide-Area Linear Optical Polarimeter)-South, to be mounted on the 1m
SAAO telescope in South Africa, is the first of the two WALOP instruments currently
under development for carrying out the PASIPHAE survey. Scheduled for
commissioning in the year 2021, the WALOP instruments will be used to measure
the linear polarization of around $10^{6}$ stars in the SDSS-r broadband with
$0.1~\%$ polarimetric accuracy, covering 4000 square degrees in the Galactic
polar regions. The combined capabilities of one-shot linear polarimetry, high
polarimetric accuracy ($< 0.1~\%$) and polarimetric sensitivity ($< 0.05~\%$),
and a large field of view (FOV) of $35\times35~arcminutes$ make WALOP-South a
unique astronomical instrument. In a single exposure, it is designed to measure
the Stokes parameters $I$, $q$ and $u$ in the SDSS-r broadband and narrowband
filters between $500-700~nm$. During each measurement, four images of the full
field corresponding to the polarization angles of $0^{\circ}$, $45^{\circ}$,
$90^{\circ}$ and $135^{\circ}$ will be imaged on four detectors, and carrying out
differential photometry on these images will yield the Stokes parameters.
Major challenges in designing the WALOP-South instrument include (a) in the
optical design, correcting for the spectral dispersion introduced by the large
split-angle Wollaston prisms used as polarization analyzers as well as
aberrations from the wide field, and (b) producing an optomechanical design
that adheres to the tolerances required to obtain good imaging and polarimetric
performance under all temperature conditions as well as telescope pointing
positions. We present the optical and optomechanical design for WALOP-South
which overcomes these challenges.
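For illustration, here is a minimal sketch of how differential photometry on the four polarization channels yields the normalized Stokes parameters. It is not the instrument pipeline; the channel fluxes, their calibration, and the toy source are assumptions.

```python
import numpy as np

def stokes_from_four_channels(f0, f45, f90, f135):
    """Normalized Stokes parameters from fluxes measured at 0/45/90/135 degrees."""
    i = 0.5 * (f0 + f45 + f90 + f135)            # total intensity
    q = (f0 - f90) / (f0 + f90)                  # normalized Stokes q
    u = (f45 - f135) / (f45 + f135)              # normalized Stokes u
    p = np.hypot(q, u)                           # degree of linear polarization
    theta = 0.5 * np.degrees(np.arctan2(u, q))   # polarization angle in degrees
    return i, q, u, p, theta

# toy example: a source with 1% polarization at a 20-degree angle
I_true = 1000.0
q_true = 0.01 * np.cos(np.radians(40))
u_true = 0.01 * np.sin(np.radians(40))
f0, f90 = 0.5 * I_true * (1 + q_true), 0.5 * I_true * (1 - q_true)
f45, f135 = 0.5 * I_true * (1 + u_true), 0.5 * I_true * (1 - u_true)
print(stokes_from_four_channels(f0, f45, f90, f135))  # recovers I=1000, p=0.01, theta=20
```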
|
As a maintainer of an open source software project, you are usually happy
about contributions in the form of pull requests that bring the project a step
forward. Past studies have shown that when reviewing a pull request, not only
its content is taken into account, but also, for example, the social
characteristics of the contributor. Whether a contribution is accepted and how
long this takes therefore depends not only on the content of the contribution.
So far, however, we have only indications that pull requests from bots may be
prioritized lower, even if the bots are explicitly deployed by the development
team and are considered useful.
One goal of the bot research and development community is to design helpful
bots to effectively support software development in a variety of ways. To get
closer to this goal, in this GitHub mining study, we examine the measurable
differences in how maintainers interact with manually created pull requests
from humans compared to those created automatically by bots.
About one third of all pull requests on GitHub currently come from bots.
While pull requests from humans are accepted and merged in 72.53% of all cases,
this applies to only 37.38% of bot pull requests. Furthermore, it takes
significantly longer for a bot pull request to be interacted with and for it to
be merged, even though bot pull requests contain fewer changes on average than
human ones. These results suggest that bots have yet to realize their full
potential.
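As a hedged illustration of the kind of measurement such a mining study performs, the sketch below computes merge rate and time to first interaction grouped by author type. The data frame and column names are hypothetical, not the study's actual pipeline.

```python
import pandas as pd

# Hypothetical pull-request table: one row per PR with author type, merge flag, timestamps.
prs = pd.DataFrame({
    "author_type": ["human", "human", "bot", "bot"],
    "merged": [True, False, True, False],
    "created_at": pd.to_datetime(["2021-01-01", "2021-01-02", "2021-01-03", "2021-01-04"]),
    "first_interaction_at": pd.to_datetime(["2021-01-01", "2021-01-05",
                                            "2021-01-10", "2021-01-20"]),
})

prs["hours_to_first_interaction"] = (
    prs["first_interaction_at"] - prs["created_at"]
).dt.total_seconds() / 3600

summary = prs.groupby("author_type").agg(
    merge_rate=("merged", "mean"),
    median_hours_to_first_interaction=("hours_to_first_interaction", "median"),
)
print(summary)
```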
|
The structure of protostellar cores can often be approximated by isothermal
Bonnor-Ebert spheres (BES) which are stabilized by an external pressure. For
the typical pressure of $10^4k_B\,\mathrm{K\,cm^{-3}}$ to
$10^5k_B\,\mathrm{K\,cm^{-3}}$ found in molecular clouds, cores with masses
below $1.5\,{\rm M_\odot}$ are stable against gravitational collapse. In this
paper, we analyze the efficiency of triggering a gravitational collapse by a
nearby stellar wind, which represents an interesting scenario for triggered
low-mass star formation. We analytically derive a new stability criterion for a
BES compressed by a stellar wind, which depends on its initial nondimensional
radius $\xi_{max}$. If the stability limit is violated, the wind triggers a core
collapse. Otherwise, the core is destroyed by the wind. We estimate its
validity range to $2.5<\xi_{max}<4.2$ and confirm this in simulations with the
SPH code GADGET-3. The efficiency of triggering a gravitational collapse strongly
decreases for $\xi_{max}<2.5$ since in this case destruction and acceleration
of the whole sphere begin to dominate. We were unable to trigger a collapse for
$\xi_{max}<2$, which leads to the conclusion that a stellar wind can move the
smallest unstable stellar mass to $0.5\,\mathrm{M_\odot}$ and destabilizing
even smaller cores would require an external pressure larger than
$10^5k_B\,\mathrm{K\,cm^{-3}}$. For $\xi_{max}>4.2$ the expected wind strength
according to our criterion is small enough so that the compression is slower
than the sound speed of the BES and sound waves can be triggered. In this case,
our criterion somewhat underestimates the onset of collapse, and detailed
numerical analyses are required.
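For reference, the nondimensional radius $\xi$ used above is the standard Bonnor-Ebert variable of the isothermal Lane-Emden equation; this is textbook background, not a result of the abstract itself:
\[
\frac{1}{\xi^{2}}\frac{d}{d\xi}\left(\xi^{2}\frac{d\psi}{d\xi}\right)=e^{-\psi},
\qquad
\xi=\frac{r}{c_{s}}\sqrt{4\pi G\rho_{c}},
\qquad
\rho=\rho_{c}\,e^{-\psi},
\]
where $c_s$ is the isothermal sound speed and $\rho_c$ the central density; $\xi_{max}$ is the value of $\xi$ at the pressure-bounded outer radius of the sphere.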
|
Among the peculiarities of ODEs -- especially those of Mechanics -- besides the
problem of reducing them to quadratures and solving them either in series or in
closed form, one is faced with inversion, e.g. when one wishes to pass from
time as a function of the Lagrangian coordinates to the latter as functions of time.
This paper solves in almost closed form the system of nonlinear ODEs of the
2D-motion (say, coordinates $\theta$ and $\psi$) of a gravity-free double
pendulum (GFDP) not subjected to any force. In such a way its movement is ruled
by initial conditions only. The relevant strongly nonlinear ODEs have been
reduced to hyper-elliptic quadratures which, through the Integral
Representation Theorem (hereinafter IRT), have been expressed via the Lauricella
hypergeometric functions $F_D^{(j)}, j=3, 4, 5, 6$. The IRT has been applied
after a change of variable which improves their use and accelerates the series
convergence. $\psi$ is given in terms of $F_D^{(4)}$ -- which is inverted
by means of the Fourier series tool and put as an argument inside
$F_D^{(5)}$ -- thereby allowing the computation of $\theta$. We gain
insight into the time laws and trajectories of both bobs forming the
GFDP, which -- after the inversion -- is therefore completely solved in
explicit closed form. Suitable sample problems of the three possible cases of
motion are carried out and their analysis closes the work. The Lauricella
functions employed here to solve the differential equations -- in the absence of
specific software packages -- have been implemented thanks to some reduction
theorems which will form the object of a forthcoming paper. To the best of our
knowledge, this work adds a new contribution concerning the detection and
inversion of solutions of nonlinear Hamiltonian systems.
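For background, the standard Euler-type integral representation of the Lauricella function $F_D^{(n)}$ is quoted here only for orientation (valid for $\operatorname{Re} c > \operatorname{Re} a > 0$); it is a well-known formula, not a result of this paper:
\[
F_D^{(n)}(a; b_1,\ldots,b_n; c; x_1,\ldots,x_n)
= \frac{\Gamma(c)}{\Gamma(a)\,\Gamma(c-a)}
\int_0^1 u^{a-1}(1-u)^{c-a-1}\prod_{i=1}^{n}(1-x_i u)^{-b_i}\,du .
\]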
|
We show that adding differential privacy to Explainable Boosting Machines
(EBMs), a recent method for training interpretable ML models, yields
state-of-the-art accuracy while protecting privacy. Our experiments on multiple
classification and regression datasets show that DP-EBM models suffer
surprisingly little accuracy loss even with strong differential privacy
guarantees. In addition to high accuracy, two other benefits of applying DP to
EBMs are: a) trained models provide exact global and local interpretability,
which is often important in settings where differential privacy is needed; and
b) the models can be edited after training without loss of privacy to correct
errors which DP noise may have introduced.
|
Human personality traits are the key drivers behind our decision-making,
influencing our life path on a daily basis. Inference of personality traits,
such as Myers-Briggs Personality Type, as well as an understanding of
dependencies between personality traits and users' behavior on various social
media platforms is of crucial importance to modern research and industry
applications. The emergence of diverse and cross-purpose social media avenues
makes it possible to perform user personality profiling automatically and
efficiently based on data represented across multiple data modalities. However,
the research efforts on personality profiling from multi-source multi-modal
social media data are relatively sparse, and the level of impact of different
social network data on machine learning performance has yet to be
comprehensively evaluated. Furthermore, no such dataset exists in the research
community for benchmarking. This study is one of the first attempts
towards bridging such an important research gap. Specifically, in this work, we
infer the Myers-Briggs Personality Type indicators by applying a novel
multi-view fusion framework called "PERS" and comparing the performance
results not just across data modalities but also with respect to different
social network data sources. Our experimental results demonstrate PERS's
ability to learn from multi-view data for personality profiling by efficiently
leveraging the significantly different data arriving from diverse social
multimedia sources. We have also found that the selection of a machine learning
approach is of crucial importance when choosing social network data sources and
that people tend to reveal multiple facets of their personality in different
social media avenues. Our released social multimedia dataset facilitates future
research in this direction.
|
In his 1987 paper, Todorcevic remarks that Sierpinski's onto mapping
principle (1932) and the Erdos-Hajnal-Milner negative Ramsey relation (1966)
are equivalent to each other, and follow from the existence of a Luzin set.
Recently, Guzman and Miller showed that these two principles are also
equivalent to the existence of a nonmeager set of reals of cardinality
$\aleph_1$. We expand this circle of equivalences and show that these
propositions are equivalent also to the high-dimensional version of the
Erdos-Hajnal-Milner negative Ramsey relation, thereby improving a CH theorem of
Galvin (1980).
Then we consider the validity of these relations in the context of strong
colorings over partitions and prove the consistency of a positive Ramsey
relation, as follows: It is consistent with the existence of both a Luzin set
and of a Souslin tree that for some countable partition p, all colorings are
p-special.
|
Mastery of order-disorder processes in highly non-equilibrium nanostructured
oxides has significant implications for the development of emerging energy
technologies. However, we are presently limited in our ability to quantify and
harness these processes at high spatial, chemical, and temporal resolution,
particularly in extreme environments. Here we describe the percolation of
disorder at the model oxide interface LaMnO$_3$ / SrTiO$_3$, which we visualize
during in situ ion irradiation in the transmission electron microscope. We
observe the formation of a network of disorder during the initial stages of ion
irradiation and track the global progression of the system to full disorder. We
couple these measurements with detailed structural and chemical probes,
examining possible underlying defect mechanisms responsible for this unique
percolative behavior.
|
The Principal-Agent Theory model is widely used to explain the governance role
where there is a separation of ownership and control, as it defines clear
boundaries between governance and executives. However, examination of recent
corporate failure reveals the concerning contribution of the Board of Directors
to such failures and calls into question governance effectiveness in the
presence of a powerful and charismatic CEO. This study proposes a framework for
analyzing the relationship between the Board of Directors and the CEO, and how
certain relationships affect the power structure and behavior of the Board,
which leads to a role reversal in the Principal-Agent Theory, as the Board
assumes the role of the CEO's agent. This study's results may help raise a red
flag for board and leader behavior that may result in governance failure.
|
We prove that there are at least as many exact embedded Lagrangian fillings
as seeds for Legendrian links of affine type $\tilde{\mathsf{D}}
\tilde{\mathsf{E}}$. We also provide as many Lagrangian fillings with certain
symmetries as seeds of type $\tilde{\mathsf{B}}_n$, $\tilde{\mathsf{F}}_4$,
$\tilde{\mathsf{G}}_2$, and $\mathsf{E}_6^{(2)}$. These families are the first
known Legendrian links with infinitely many fillings that exhaust all seeds in
the corresponding cluster structures. Furthermore, we show that Legendrian
realization of Coxeter mutation of type $\tilde{\mathsf{D}}$ corresponds to the
Legendrian loop considered by Casals and Ng.
|
Most modern unsupervised domain adaptation (UDA) approaches are rooted in
domain alignment, i.e., learning to align source and target features to learn a
target domain classifier using source labels. In semi-supervised domain
adaptation (SSDA), when the learner can access a few target domain labels, prior
approaches have followed UDA theory to use domain alignment for learning. We
show that the case of SSDA is different and a good target classifier can be
learned without needing alignment. We use self-supervised pretraining (via
rotation prediction) and consistency regularization to achieve well separated
target clusters, aiding in learning a low error target classifier. With our
Pretraining and Consistency (PAC) approach, we achieve state of the art target
accuracy on this semi-supervised domain adaptation task, surpassing multiple
adversarial domain alignment methods, across multiple datasets. PAC, while
using simple techniques, performs remarkably well on large and challenging SSDA
benchmarks like DomainNet and Visda-17, often outperforming recent state of the
art by sizeable margins. Code for our experiments can be found at
https://github.com/venkatesh-saligrama/PAC
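As an illustration of the rotation-prediction pretext task mentioned above, here is a minimal sketch. The names `backbone`, `rot_head`, `optimizer`, and `images` are assumptions, and this is not the released PAC code.

```python
import torch
import torch.nn as nn

def rotate_batch(x):
    """Return 4 rotated copies (0/90/180/270 deg) of an NCHW batch plus rotation labels."""
    rotated = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)], dim=0)
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return rotated, labels

def rotation_pretraining_step(backbone, rot_head, images, optimizer):
    """One self-supervised step: predict which rotation was applied to each image."""
    images_rot, rot_labels = rotate_batch(images)
    logits = rot_head(backbone(images_rot))       # backbone features -> 4-way logits
    loss = nn.functional.cross_entropy(logits, rot_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```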
|
In this paper we derive quantitative estimates in the context of stochastic
homogenization for integral functionals defined on finite partitions, where the
random surface integrand is assumed to be stationary. Requiring the integrand
to satisfy in addition a multiscale functional inequality, we control
quantitatively the fluctuations of the asymptotic cell formulas defining the
homogenized surface integrand. As a byproduct we obtain a simplified cell
formula where we replace cubes by almost flat hyperrectangles.
|
Searches for periodicity in time series are often done with models of
periodic signals, whose statistical significance is assessed via false alarm
probabilities or Bayes factors. However, a statistically significant periodic
model might not originate from a strictly periodic source. In astronomy in
particular, one expects transient signals that show periodicity for a certain
amount of time before vanishing. This situation is encountered for instance in
the search for planets in radial velocity data. While planetary signals are
expected to have a stable phase, amplitude and frequency - except when strong
planet-planet interactions are present - signals induced by stellar activity
will typically not exhibit the same stability. In the present article, we
explore the use of periodic functions multiplied by time windows to diagnose
whether an apparently periodic signal is truly so. We suggest diagnostics to
check whether a signal is consistently present in the time series, and has a
stable phase, amplitude and period. The tests are expressed both in a
periodogram and Bayesian framework. Our methods are applied to the Solar
HARPS-N data as well as HD 215152, HD 69830 and HD 13808. We find that (i) the
HARPS-N Solar data exhibits signals at the Solar rotation period and its first
harmonic ($\sim$ 13.4 days). The frequency and phase of the 13.4 days signal
appear constant within the estimation uncertainties, but its amplitude presents
significant variations which can be mapped to activity levels. (ii) As
previously reported, we find four, three, and two planets orbiting HD 215152, HD
69830, and HD 13808, respectively.
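The following toy sketch illustrates the windowed-sinusoid idea on synthetic data: fit the same trial period separately in two time windows and compare the recovered amplitudes. It is illustrative only and is not the diagnostic statistic defined in the article; the data, window choice, and period are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 200, 300))          # observation times (days)
omega = 2 * np.pi / 13.4                       # trial angular frequency
# toy signal: present only during the first half of the time span, plus noise
y = np.where(t < 100, 2.0 * np.sin(omega * t), 0.0) + rng.normal(0, 0.5, t.size)

def fitted_amplitude(t, y, omega, mask):
    """Least-squares amplitude of a sinusoid restricted to a time window (mask)."""
    X = np.column_stack([np.cos(omega * t) * mask, np.sin(omega * t) * mask])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.hypot(*coef)

first_half = (t < 100).astype(float)
second_half = (t >= 100).astype(float)
print(fitted_amplitude(t, y, omega, first_half))   # close to 2: signal present
print(fitted_amplitude(t, y, omega, second_half))  # close to 0: signal absent
```

A signal whose amplitude differs strongly between windows would not be consistent with a strictly periodic, stable source.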
|
The task of multi-label image classification is to recognize all the object
labels presented in an image. Though advancing for years, small objects,
similar objects and objects with high conditional probability are still the
main bottlenecks of previous convolutional neural network (CNN) based models,
limited by convolutional kernels' representational capacity. Recent vision
transformer networks utilize the self-attention mechanism to extract features
at pixel granularity, which express richer local semantic information but are
insufficient for mining global spatial dependence. In
this paper, we point out the three crucial problems that CNN-based methods
encounter and explore the possibility of conducting specific transformer
modules to settle them. We put forward a Multi-label Transformer
architecture (MlTr) constructed with window partitioning, in-window pixel
attention, and cross-window attention, particularly improving the performance of
multi-label image classification tasks. The proposed MlTr shows
state-of-the-art results on various prevalent multi-label datasets such as
MS-COCO, Pascal-VOC, and NUS-WIDE with 88.5%, 95.8%, and 65.5% respectively.
The code will be available soon at https://github.com/starmemda/MlTr/
|
The purpose of this study is to examine Olympic champions' characteristics on
Instagram to first understand whether differences exist between male and female
athletes and then to find possible correlations between these characteristics.
We utilized a content analytic method to analyze Olympic gold medalists'
photographs on Instagram. In this way, we fetched data from the Instagram pages
of all those Rio2016 Olympic gold medalists who had their accounts publicly
available. The analysis of data revealed the existence of a positive monotonic
relationship between the ratio of following/follower and the ratio of
engagement to follower for male gold medalists, and a strong negative monotonic
relationship between age and the ratio of self-presenting posts for both male
and female gold medalists, which even takes a linear form for men. These
findings, aligned with the relevant theories and literature, may come together
to help athletes manage and expand their personal brand on social media.
|
The evolution of young stars and disks is driven by the interplay of several
processes, notably accretion and ejection of material. Critical to correctly
describe the conditions of planet formation, these processes are best probed
spectroscopically. About five hundred orbits of the Hubble Space Telescope
(HST) are being devoted in 2020-2022 to the ULLYSES public survey of about 70
low-mass (M<2Msun) young (age<10 Myr) stars at UV wavelengths. Here we present
the PENELLOPE Large Program that is being carried out at the ESO Very Large
Telescope (VLT) to acquire, contemporaneous to HST, optical ESPRESSO/UVES
high-resolution spectra to investigate the kinematics of the emitting gas, and
UV-to-NIR X-Shooter medium-resolution flux-calibrated spectra to provide the
fundamental parameters that HST data alone cannot provide, such as extinction
and stellar properties. The data obtained by PENELLOPE have no proprietary
time, and the fully reduced spectra are made available to the whole community.
Here, we describe the data and the first scientific analysis of the accretion
properties for the sample of thirteen targets located in the Orion OB1
association and in the sigma-Orionis cluster, observed in Nov-Dec 2020. We find
that the accretion rates are in line with those observed previously in
similarly young star-forming regions, with a variability of a factor <3 on a
timescale of days. The comparison of the fits to the continuum excess emission obtained
with a slab model on the X-Shooter spectra and the HST/STIS spectra shows a
shortcoming in the X-Shooter estimates of <10%, well within the assumed
uncertainty. Its origin can be either an incorrect UV extinction curve or the
simplicity of this modelling, and will be investigated in the course of the
PENELLOPE program. The combined ULLYSES and PENELLOPE data will be key for a
better understanding of the accretion/ejection mechanisms in young stars.
|
Advances in imagery at atomic and near-atomic resolution, such as cryogenic
electron microscopy (cryo-EM), have led to an influx of high resolution images
of proteins and other macromolecular structures to data banks worldwide.
Producing a protein structure from the discrete voxel grid data of cryo-EM maps
involves interpolation into the continuous spatial domain. We present a novel
data format called the neural cryo-EM map, which is formed from a set of neural
networks that accurately parameterize cryo-EM maps and provide native,
spatially continuous data for density and gradient. As a case study of this
data format, we create graph-based interpretations of high resolution
experimental cryo-EM maps. Normalized cryo-EM map values interpolated using the
non-linear neural cryo-EM format are more accurate, consistently scoring less
than 0.01 mean absolute error, than a conventional tri-linear interpolation,
which scores up to 0.12 mean absolute error. Our graph-based interpretations of
115 experimental cryo-EM maps from 1.15 to 4.0 Angstrom resolution provide high
coverage of the underlying amino acid residue locations, while accuracy of
nodes is correlated with resolution. The nodes of graphs created from atomic
resolution maps (higher than 1.6 Angstroms) provide greater than 99% residue
coverage as well as 85% full atomic coverage with a mean of 0.19 Angstrom
root mean squared deviation (RMSD). Other graphs have a mean 84% residue
coverage with less specificity of the nodes due to experimental noise and
differences of density context at lower resolutions. This work may be
generalized for transforming any 3D grid-based data format into a non-linear,
continuous, and differentiable format for downstream geometric deep
learning applications.
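A minimal sketch of the core idea follows: a coordinate network fitted to a voxel grid so that density and its spatial gradient become available at continuous positions. The architecture, training loop, and random stand-in grid are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NeuralMap(nn.Module):
    """Small MLP mapping normalized 3D coordinates to a density value."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, xyz):
        return self.net(xyz).squeeze(-1)

# stand-in for a cryo-EM voxel grid; a real map would be loaded from an MRC file
grid = torch.rand(32, 32, 32)
idx = torch.stack(torch.meshgrid(*[torch.arange(32)] * 3, indexing="ij"), dim=-1)
coords = idx.reshape(-1, 3).float() / 31.0        # voxel centers mapped to [0, 1]^3
target = grid.reshape(-1)

model = NeuralMap()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                              # fit the network to the grid values
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(coords), target)
    loss.backward()
    opt.step()

# density and spatial gradient at an arbitrary continuous point
p = torch.tensor([[0.37, 0.52, 0.81]], requires_grad=True)
density = model(p)
gradient = torch.autograd.grad(density.sum(), p)[0]
```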
|
Many real-life applications involve estimation of curves that exhibit
complicated shapes including jumps or varying-frequency oscillations. Practical
methods have been devised that can adapt to a locally varying complexity of an
unknown function (e.g. variable-knot splines, sparse wavelet reconstructions,
kernel methods or trees/forests). However, the overwhelming majority of
existing asymptotic minimaxity theory is predicated on homogeneous smoothness
assumptions. Focusing on locally Holderian functions, we provide new locally
adaptive posterior concentration rate results under the supremum loss for
widely used Bayesian machine learning techniques in white noise and
non-parametric regression. In particular, we show that popular spike-and-slab
priors and Bayesian CART are uniformly locally adaptive. In addition, we
propose a new class of repulsive partitioning priors which relate to variable
knot splines and which are exact-rate adaptive. For uncertainty quantification,
we construct locally adaptive confidence bands whose width depends on the local
smoothness and which achieve uniform asymptotic coverage under local
self-similarity. To illustrate that spatial adaptation is not at all automatic,
we provide lower-bound results showing that popular hierarchical Gaussian
process priors fall short of spatial adaptation.
|
Back-translation is an effective strategy to improve the performance of
Neural Machine Translation~(NMT) by generating pseudo-parallel data. However,
several recent works have found that better translation quality of the
pseudo-parallel data does not necessarily lead to better final translation
models, while lower-quality but more diverse data often yields stronger
results. In this paper, we propose a novel method to generate pseudo-parallel
data from a pre-trained back-translation model. Our method is a meta-learning
algorithm which adapts a pre-trained back-translation model so that the
pseudo-parallel data it generates would train a forward-translation model to do
well on a validation set. In our evaluations on both the standard WMT En-De'14
and WMT En-Fr'14 datasets, as well as in a multilingual translation setting, our
method leads to significant improvements over strong baselines. Our code will
be made available.
|
We investigate the problem of fast-forwarding quantum evolution, whereby the
dynamics of certain quantum systems can be simulated with gate complexity that
is sublinear in the evolution time. We provide a definition of fast-forwarding
that considers the model of quantum computation, the Hamiltonians that induce
the evolution, and the properties of the initial states. Our definition
accounts for any asymptotic complexity improvement of the general case and we
use it to demonstrate fast-forwarding in several quantum systems. In
particular, we show that some local spin systems whose Hamiltonians can be
taken into block diagonal form using an efficient quantum circuit, such as
those that are permutation-invariant, can be exponentially fast-forwarded. We
also show that certain classes of positive semidefinite local spin systems,
also known as frustration-free, can be polynomially fast-forwarded, provided
the initial state is supported on a subspace of sufficiently low energies.
Last, we show that all quadratic fermionic systems and number-conserving
quadratic bosonic systems can be exponentially fast-forwarded in a model where
quantum gates are exponentials of specific fermionic or bosonic operators,
respectively. Our results extend the classes of physical Hamiltonians that were
previously known to be fast-forwarded, while not necessarily requiring methods
that diagonalize the Hamiltonians efficiently. We further develop a connection
between fast-forwarding and precise energy measurements that also accounts for
polynomial improvements.
|
The omnipresence and ease of use of social networks have revolutionized the
generation and distribution of information in today's world. However, easy
access to information does not equal an increased level of public knowledge.
Unlike traditional media channels, social networks also facilitate faster and
wider spread of disinformation and misinformation. Viral spread of false
information has serious implications on the behaviors, attitudes and beliefs of
the public, and ultimately can seriously endanger the democratic processes.
Limiting false information's negative impact through early detection and
control of extensive spread presents the main challenge facing researchers
today. In this survey paper, we extensively analyze a wide range of different
solutions for the early detection of fake news in the existing literature. More
precisely, we examine Machine Learning (ML) models for the identification and
classification of fake news, online fake news detection competitions,
statistical outputs as well as the advantages and disadvantages of some of the
available data sets. Finally, we evaluate the online web browsing tools
available for detecting and mitigating fake news and present some open research
challenges.
|
To accommodate the explosive growth of the Internet-of-Things (IoT),
incorporating interference alignment (IA) into existing multiple access (MA)
schemes is under investigation. However, when IA is applied in MIMO networks to
improve the system capacity, a problem regarding information delay arises that
does not meet the low-latency requirement. Therefore, in this
paper, we first propose a new metric, degree of delay (DoD), to quantify the
issue of information delay, and characterize DoD for three typical transmission
schemes, i.e., TDMA, beamforming based TDMA (BD-TDMA), and retrospective
interference alignment (RIA). Analysis of DoD in these schemes shows that its
value mainly depends on three factors, i.e., the delay-sensitive factor, the size of the data
set, and queueing delay slot. The first two reflect the relationship between
quality of service (QoS) and information delay sensitivity, and normalize time
cost for each symbol, respectively. These two factors are independent of the
transmission schemes, and thus we aim to reduce the queueing delay slot to
improve DoD. Herein, three novel joint IA schemes are proposed for MIMO
downlink networks with different numbers of users. That is, hybrid antenna array
based partial interference elimination and retrospective interference
regeneration scheme (HAA-PIE-RIR), HAA based improved PIE and RIR scheme
(HAA-IPIE-RIR), and HAA based cyclic interference elimination and RIR scheme
(HAA-CIE-RIR). Based on the first scheme, the second scheme extends the
application scenarios from $2$-user to $K$-user while incurring a heavy
computational burden. The third scheme relieves this computational burden,
though it has a certain degree-of-freedom (DoF) loss due to insufficient
utilization of space resources.
|
We present new H$\alpha$ photometry for the Star-Formation Reference Survey
(SFRS), a representative sample of star-forming galaxies in the local Universe.
Combining these data with the panchromatic coverage of the SFRS, we provide
calibrations of H$\alpha$-based star-formation rates (SFRs) with and without
correction for the contribution of [$\rm N\,II$] emission. We consider the
effect of extinction corrections based on the Balmer decrement, infrared excess
(IRX), and spectral energy distribution (SED) fits. We compare the SFR
estimates derived from SED fits, polycyclic aromatic hydrocarbons, hybrid
indicators such as 24 $\mu$m + H$\alpha$, 8 $\mu$m + H$\alpha$, FIR + FUV, and
H$\alpha$ emission for a sample of purely star-forming galaxies. We provide a
new calibration for 1.4 GHz-based SFRs by comparing to the H$\alpha$ emission,
and we measure a dependence of the radio-to-H$\alpha$ emission ratio on
galaxy stellar mass. Active galactic nuclei introduce biases in the
calibrations of different SFR indicators but have only a minimal effect on the
inferred SFR densities from galaxy surveys. Finally, we quantify the
correlation between galaxy metallicity and extinction.
|
Fungi cells are capable of sensing extracellular cues through reception,
transduction and response systems which allow them to communicate with their
host and adapt to their environment. They display effective regulatory protein
expressions which enhance and regulate their response and adaptation to a
variety of triggers such as stress, hormones, light, chemicals and host
factors. In our recent studies, we have shown that $Pleurotus$ oyster fungi
generate electrical potential impulses in the form of spike events as a result
of their exposure to environmental, mechanical and chemical triggers,
demonstrating that it is possible to discern the nature of stimuli from the
fungal electrical responses. Harnessing the sensing and intelligence
capabilities of fungi, we explored the communication protocols of fungi as
reporters of human chemical secretions such as hormones, addressing the
question of whether fungi can sense human signals. We exposed $Pleurotus$ oyster fungi
to cortisol, directly applied to a surface of a hemp shavings substrate
colonised by fungi, and recorded the electrical activity of the fungi. The
response of fungi to cortisol was additionally studied through the application
of X-rays to identify changes in the fungal tissue, since cortisol received by
the substrate can inhibit the flow of calcium and, in turn, reduce its
physiological changes. This study could pave the way for future research on
adaptive fungal wearables capable of detecting the physiological states of
humans and on biosensors made of living fungi.
|
Developers of AI-Intensive Systems--i.e., systems that involve both
"traditional" software and Artificial Intelligence--are recognizing the need to
organize development systematically and use engineered methods and tools. Since
an AI-Intensive System (AIIS) relies heavily on software, it is expected that
Software Engineering (SE) methods and tools can help. However, AIIS development
differs from the development of "traditional" software systems in a few
substantial aspects. Hence, traditional SE methods and tools are not suitable
or sufficient by themselves and need to be adapted and extended. A quest for
"SE for AI" methods and tools has started. We believe that, in this effort, we
should learn from experience and avoid repeating some of the mistakes made in
the quest for SE in past years. To this end, a fundamental instrument is a set
of concepts and a notation to deal with AIIS and the problems that characterize
their development processes. In this paper, we propose to describe AIIS via a
notation that was proposed for SE and embeds a set of concepts that are
suitable to represent AIIS as well. We demonstrate the usage of the notation by
modeling some characteristics that are particularly relevant for AIIS.
|
Tables on the web constitute a valuable data source for many applications,
like factual search and knowledge base augmentation. However, as genuine tables
containing relational knowledge only account for a small proportion of tables
on the web, reliable genuine web table classification is a crucial first step
of table extraction. Previous works usually rely on explicit feature
construction from the HTML code. In contrast, we propose an approach for web
table classification by exploiting the full visual appearance of a table, which
works purely by applying a convolutional neural network on the rendered image
of the web table. Since these visual features can be extracted automatically,
our approach circumvents the need for explicit feature construction. A new
hand-labeled gold standard dataset containing HTML source code and images for 13,112
tables was generated for this task. Transfer learning techniques are applied to
the well-known VGG16 and ResNet50 architectures. The evaluation of CNN image
classification with a fine-tuned ResNet50 (F1 93.29%) shows that this approach
achieves results comparable to previous solutions using explicitly defined HTML
code based features. By combining visual and explicit features, an F-measure of
93.70% can be achieved by Random Forest classification, which beats current
state of the art methods.
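A minimal transfer-learning sketch in the spirit described above is given below. PyTorch/torchvision (>= 0.13) is used purely for illustration; the `train_loader` yielding batches of rendered table images with binary genuine/non-genuine labels is an assumption, and this is not the authors' exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet50 and freeze the backbone.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False
# Replace the classification head: genuine vs. non-genuine web table.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:     # train_loader is an assumed DataLoader
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```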
|
We show an application of a subdiffusion equation with Caputo fractional time
derivative with respect to another function $g$ to describe subdiffusion in a
medium having a structure evolving over time. In this case a continuous
transition from subdiffusion to other type of diffusion may occur. The process
can be interpreted as "ordinary" subdiffusion with fixed subdiffusion parameter
(subdiffusion exponent) $\alpha$ in which time scale is changed by the function
$g$. As an example, we consider the transition from "ordinary" subdiffusion to
ultraslow diffusion. The function $g$ generates the additional aging process
superimposed on the "standard" aging generated by "ordinary" subdiffusion. The
aging process is analyzed using the coefficient of relative aging of
$g$-subdiffusion with respect to "ordinary" subdiffusion. The method of
solving the $g$-subdiffusion equation is also presented.
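For orientation, one standard convention for the Caputo fractional derivative with respect to an increasing function $g$ (for $0<\alpha<1$) is quoted here as background rather than taken from the abstract:
\[
\frac{\partial^{\alpha}_{g}f(t)}{\partial t^{\alpha}}
=\frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}\bigl(g(t)-g(u)\bigr)^{-\alpha}f'(u)\,du ,
\]
which reduces to the ordinary Caputo derivative when $g(t)=t$.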
|
Galaxies can be classified as passive ellipticals or star-forming discs.
Ellipticals dominate at the high end of the mass range, and therefore there
must be a mechanism responsible for the quenching of star-forming galaxies.
This could either be due to the secular processes linked to the mass and star
formation of galaxies or to external processes linked to the surrounding
environment. In this paper, we analytically model the processes that govern
galaxy evolution and quantify their contribution. We have specifically studied
the effects of mass quenching, gas stripping, and mergers on galaxy quenching.
To achieve this, we first assumed a set of differential equations that describe
the processes that shape galaxy evolution. We then modelled the parameters of
these equations by maximising likelihood. These equations describe the
evolution of galaxies individually, but the parameters of the equations are
constrained by matching the extrapolated intermediate-redshift galaxies with
the low-redshift galaxy population. In this study, we modelled the processes
that change star formation and stellar mass in massive galaxies from the GAMA
survey between z~0.4 and the present. We identified and quantified the
contributions from mass quenching, gas stripping, and mergers to galaxy
quenching. The quenching timescale is on average 1.2 Gyr and a closer look
reveals support for the slow-then-rapid quenching scenario. The major merging
rate of galaxies is about once per 10~Gyr, while the rate of ram pressure
stripping is significantly higher. In galaxies with decreasing star formation,
we show that star formation is lost to fast quenching mechanisms such as ram
pressure stripping, countered by mergers, at a rate of about 41%
Gyr$^{-1}$, and to mass quenching at a rate of 49% Gyr$^{-1}$. (abridged)
|
Low-rank tensors are an established framework for high-dimensional
least-squares problems. We propose to extend this framework by including the
concept of block-sparsity. In the context of polynomial regression each
sparsity pattern corresponds to some subspace of homogeneous multivariate
polynomials. This allows us to adapt the ansatz space to align better with
known sample complexity results. The resulting method is tested in numerical
experiments and demonstrates improved computational resource utilization and
sample efficiency.
|
We construct the global phase portraits of inflationary dynamics in
teleparallel gravity models with a scalar field nonminimally coupled to torsion
scalar. The adopted set of variables can clearly distinguish between different
asymptotic states as fixed points, including the kinetic and inflationary
regimes. The key role in the description of inflation is played by the
heteroclinic orbits which run from the asymptotic saddle points to the late
time attractor point and are approximated by nonminimal slow roll conditions.
To seek the asymptotic fixed points we outline a heuristic method in terms of
the "effective potential" and "effective mass", which can be applied for any
nonminimally coupled theories. As particular examples we study positive
quadratic nonminimal couplings with quadratic and quartic potentials, and note
how the portraits differ qualitatively from the known scalar-curvature
counterparts. For quadratic models inflation can only occur at small nonminimal
coupling to torsion, as for larger coupling the asymptotic de Sitter saddle
point disappears from the physical phase space. Teleparallel models with
quartic potentials are not viable for inflation at all, since for small
nonminimal coupling the asymptotic saddle point exhibits weaker than
exponential expansion, and for larger coupling disappears too.
|
We consider a dynamic network of individuals that may hold one of two
different opinions in a two-party society. As a dynamical model, agents can
endlessly create and delete links to satisfy a preferred degree, and the
network is shaped by \emph{homophily}, a form of social interaction.
Characterized by the parameter $J \in [-1,1]$, the latter plays a role similar
to Ising spins: agents create links to others of the same opinion with
probability $(1+J)/2$, and delete them with probability $(1-J)/2$. Using Monte
Carlo simulations and mean-field theory, we focus on the network structure in
the steady state. We study the effects of $J$ on degree distributions and the
fraction of cross-party links. While the extreme cases of homophily or
heterophily ($J= \pm 1$) are easily understood to result in complete
polarization or anti-polarization, intermediate values of $J$ lead to
interesting features of the network. Our model exhibits the intriguing feature
of an "overwhelming transition" occurring when communities of different sizes
are subject to sufficient heterophily: agents of the minority group are
oversubscribed and their average degree greatly exceeds that of the majority
group. In addition, we introduce an original measure of polarization which
displays distinct advantages over the commonly used average edge homogeneity.
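The following toy Monte Carlo sketch illustrates the stated link dynamics. The preferred degree, system size, number of steps, and update order are assumptions; this is not the authors' simulation code or their mean-field treatment.

```python
import random

N, KAPPA, J, STEPS = 200, 10, 0.5, 200_000     # assumed toy parameters
opinion = [random.choice((-1, 1)) for _ in range(N)]
nbrs = [set() for _ in range(N)]

def p_add(i, j):
    """Probability of creating a link: (1+J)/2 for same opinion, (1-J)/2 otherwise."""
    return (1 + J) / 2 if opinion[i] == opinion[j] else (1 - J) / 2

for _ in range(STEPS):
    i = random.randrange(N)
    if len(nbrs[i]) < KAPPA:                   # below preferred degree: try to create a link
        j = random.randrange(N)
        if j != i and j not in nbrs[i] and random.random() < p_add(i, j):
            nbrs[i].add(j); nbrs[j].add(i)
    elif nbrs[i]:                              # at/above preferred degree: try to delete a link
        j = random.choice(tuple(nbrs[i]))
        if random.random() < 1 - p_add(i, j):  # deletion prob. is (1-J)/2 for same opinion
            nbrs[i].discard(j); nbrs[j].discard(i)

edges = sum(len(s) for s in nbrs) / 2
cross = sum(1 for i in range(N) for j in nbrs[i] if opinion[i] != opinion[j]) / 2
print(f"fraction of cross-party links: {cross / max(edges, 1):.3f}")
```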
|
Learning based representation has become the key to the success of many
computer vision systems. While many 3D representations have been proposed, how
to represent a dynamically changing 3D object remains an unaddressed problem.
In this paper, we introduce a compositional representation for 4D
captures, i.e. a deforming 3D object over a temporal span, that disentangles
shape, initial state, and motion respectively. Each component is represented by
a latent code via a trained encoder. To model the motion, a neural Ordinary
Differential Equation (ODE) is trained to update the initial state conditioned
on the learned motion code, and a decoder takes the shape code and the updated
state code to reconstruct the 3D model at each time stamp. To this end, we
propose an Identity Exchange Training (IET) strategy to encourage the network
to learn effectively decoupling each component. Extensive experiments
demonstrate that the proposed method outperforms existing state-of-the-art deep
learning based methods on 4D reconstruction, and significantly improves on
various tasks, including motion transfer and completion.
|
In this paper, we discuss the In\"on\"u-Wigner contraction of the conformal
algebra. We start with the light-cone form of the Poincar\'e algebra and extend
it to write down the conformal algebra in $d$ dimensions. To contract the
conformal algebra, we choose five dimensions for simplicity and compactify the
third transverse direction into a circle of radius $R$ following the Kaluza-Klein
dimensional reduction method. We identify the inverse radius, $1/R$, as the
contraction parameter. After the contraction, the resulting representation is
found to be the continuous spin representation in four dimensions. Although
the scaling symmetry survives the contraction, the special conformal
translation vector changes and behaves like the four-momentum vector. We also
discuss the generalization to $d$ dimensions.
|
Predicting (1) when the next hospital admission of a patient occurs and (2) what
will happen in that admission by mining electronic health record
(EHR) data can provide granular readmission predictions to assist clinical
decision making. Recurrent neural network (RNN) and point process models are
usually employed in modelling temporal sequential data. Simple RNN models
assume that sequences of hospital visits follow strict causal dependencies
between consecutive visits. However, in the real-world, a patient may have
multiple co-existing chronic medical conditions, i.e., multimorbidity, which
results in a cascade of visits where a non-immediate historical visit can be
most influential to the next visit. Although a point process (e.g., Hawkes
process) is able to model a cascade temporal relationship, it strongly relies
on a prior generative process assumption. We propose a novel model, MEDCAS, to
address these challenges. MEDCAS combines the strengths of RNN-based models and
point processes by integrating point processes in modelling visit types and
time gaps into an attention-based sequence-to-sequence learning model, which is
able to capture the temporal cascade relationships. To supplement the patients
with short visit sequences, a structural modelling technique with graph-based
methods is used to construct the markers of the point process in MEDCAS.
Extensive experiments on three real-world EHR datasets have been performed and
the results demonstrate that \texttt{MEDCAS} outperforms state-of-the-art
models in both tasks.
|
The use of non-differentiable priors in Bayesian statistics has become
increasingly popular, in particular in Bayesian imaging analysis. Current state
of the art methods are approximate in the sense that they replace the posterior
with a smooth approximation via Moreau-Yosida envelopes, and apply
gradient-based discretized diffusions to sample from the resulting
distribution. We characterize the error of the Moreau-Yosida approximation and
propose a novel implementation using underdamped Langevin dynamics. In
mission-critical cases, however, replacing the posterior with an approximation
may not be a viable option. Instead, we show that Piecewise-Deterministic
Markov Processes (PDMP) can be utilized for exact posterior inference from
distributions satisfying almost everywhere differentiability. Furthermore, in
contrast with diffusion-based methods, the suggested PDMP-based samplers place
no assumptions on the prior shape, nor require access to a computationally
cheap proximal operator, and consequently have a much broader scope of
application. Through detailed numerical examples, including a
non-differentiable circular distribution and a non-convex genomics model, we
elucidate the relative strengths of these sampling methods on problems of
moderate to high dimensions, underlining the benefits of PDMP-based methods
when accurate sampling is decisive.
|
Ultra sparse-view computed tomography (CT) algorithms can reduce radiation
exposure of patients, but those algorithms lack an explicit cycle consistency
loss minimization and an explicit log-likelihood maximization in testing. Here,
we propose X2CT-FLOW for the maximum a posteriori (MAP) reconstruction of a
three-dimensional (3D) chest CT image from a single or a few two-dimensional
(2D) projection images using a progressive flow-based deep generative model,
especially for ultra low-dose protocols. The MAP reconstruction can
simultaneously optimize the cycle consistency loss and the log-likelihood. The
proposed algorithm is built upon a newly developed progressive flow-based deep
generative model, which is featured with exact log-likelihood estimation,
efficient sampling, and progressive learning. We applied X2CT-FLOW to
reconstruction of 3D chest CT images from biplanar projection images without
noise contamination (assuming a standard-dose protocol) and with strong noise
contamination (assuming an ultra low-dose protocol). With the standard-dose
protocol, the images reconstructed from 2D projection images and the 3D ground-truth
CT images showed good agreement in terms of structural similarity (SSIM, 0.7675
on average), peak signal-to-noise ratio (PSNR, 25.89 dB on average), mean
absolute error (MAE, 0.02364 on average), and normalized root mean square error
(NRMSE, 0.05731 on average). Moreover, with the ultra low-dose protocol, our
images reconstructed from 2D projected images and the 3D ground-truth CT images
also showed good agreement in terms of SSIM (0.7008 on average), PSNR (23.58 dB
on average), MAE (0.02991 on average), and NRMSE (0.07349 on average).
|
In this paper, we have studied the performance and role of local optimizers
in quantum variational circuits. We studied the performance of the two most
popular optimizers and compared their results with some popular classical
machine learning algorithms. The classical algorithms we used in our study are
support vector machine (SVM), gradient boosting (GB), and random forest (RF).
These were compared with a variational quantum classifier (VQC) using two sets
of local optimizers, viz. AQGD and COBYLA. For experimenting with the VQC, IBM
Quantum Experience and IBM Qiskit were used, while for the classical machine
learning models, scikit-learn was used. The results show that machine learning
on noisy intermediate-scale quantum machines can produce results comparable to
those on classical machines. For our experiments, we have used a popular
restaurant sentiment analysis dataset. Features were extracted from this
dataset, and applying PCA then reduced the feature set to 5 features. Quantum
ML models were trained for 100 and 150 epochs using the EfficientSU2 variational
circuit. Overall, four quantum ML models were trained and three classical ML
models were trained. The performance of the trained models was evaluated using
standard evaluation measures, viz. accuracy, precision, recall, and F-score. In
all cases, the AQGD-optimizer-based model with 100 epochs performed better than all
other models. It produced an accuracy of 77% and an F-score of 0.785, which were
highest across all the trained models.
|
Classification algorithms have been widely adopted to detect anomalies for
various systems, e.g., IoT, cloud and face recognition, under the common
assumption that the data source is clean, i.e., features and labels are
correctly set. However, data collected from the wild can be unreliable due to
careless annotations or malicious data transformation for incorrect anomaly
detection. In this paper, we extend a two-layer on-line data selection
framework: Robust Anomaly Detector (RAD) with a newly designed ensemble
prediction where both layers contribute to the final anomaly detection
decision. To adapt to the on-line nature of anomaly detection, we consider
additional features of conflicting opinions of classifiers, repetitive
cleaning, and oracle knowledge. We on-line learn from incoming data streams and
continuously cleanse the data, so as to adapt to the increasing learning
capacity from the larger accumulated data set. Moreover, we explore the concept
of oracle learning that provides additional information of true labels for
difficult data points. We specifically focus on three use cases, (i) detecting
10 classes of IoT attacks, (ii) predicting 4 classes of task failures of big
data jobs, and (iii) recognising the faces of 100 celebrities. Our evaluation results
show that RAD can robustly improve the accuracy of anomaly detection, to reach
up to 98.95% for IoT device attacks (i.e., +7%), up to 85.03% for cloud task
failures (i.e., +14%) under 40% label noise, and for its extension, it can
reach up to 77.51% for face recognition (i.e., +39%) under 30% label noise. The
proposed RAD and its extensions are general and can be applied to different
anomaly detection algorithms.
|
So far the null results from axion searches have enforced a huge hierarchy
between the Peccei-Quinn and electroweak symmetry breaking scales. The
inevitable Higgs portal then imposes a large fine-tuning on the standard model
Higgs scalar. Now we find that if the Peccei-Quinn global symmetry has a set of
residual discrete symmetries, these global and discrete symmetries can achieve a chain
breaking at low scales such as the accessible TeV scale. This novel mechanism
can accommodate some new phenomena including a sizable coupling of the standard
model Higgs boson to the axion.
|
BaNi$_{2}$As$_{2}$ is a non-magnetic analogue of BaFe$_{2}$As$_{2}$, the
parent compound of a prototype ferro-pnictide high-temperature superconductor.
Recent diffraction studies on BaNi$_{2}$As$_{2}$ demonstrate the existence of
two types of periodic lattice distortions above and below the tetragonal to
triclinic phase transition, suggesting charge-density-wave (CDW) order to
compete with superconductivity. We apply time-resolved optical spectroscopy and
demonstrate the existence of collective CDW amplitude modes. The smooth
evolution of these modes through the structural phase transition implies the
CDW order in the triclinic phase smoothly evolves from the unidirectional CDW
in the tetragonal phase and suggests that the CDW order drives the structural
phase transition.
|
The solenoid scan is one of the most common methods for the in-situ
measurement of the thermal emittance of a photocathode in an rf photoinjector.
The fringe field of the solenoid overlaps with the gun rf field in quite a
number of photoinjectors, which makes accurate knowledge of the transfer matrix
challenging and thus increases the measurement uncertainty of the thermal
emittance. This paper summarizes two methods that have been used to solve the
overlap issue and explains their deficiencies. Furthermore, we provide a new
method to eliminate the measurement error due to the overlap issue in solenoid
scans. The new method is systematically demonstrated using theoretical
derivations, beam dynamics simulations, and experimental data based on the
photoinjector configurations from three different groups, proving that the
measurement error with the new method is very small and can be ignored in most
of the photoinjector configurations.
|
Reinforcement learning (RL) research focuses on general solutions that can be
applied across different domains. This results in methods that RL practitioners
can use in almost any domain. However, recent studies often lack the
engineering steps ("tricks") which may be needed to effectively use RL, such as
reward shaping, curriculum learning, and splitting a large task into smaller
chunks. Such tricks are common, if not necessary, to achieve state-of-the-art
results and win RL competitions. To ease the engineering efforts, we distill
descriptions of tricks from state-of-the-art results and study how well these
tricks can improve a standard deep Q-learning agent. The long-term goal of this
work is to enable combining proven RL methods with domain-specific tricks by
providing a unified software framework and accompanying insights in multiple
domains.
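As a concrete example of one commonly cited trick, the sketch below shows potential-based reward shaping (Ng et al., 1999), which preserves the optimal policy. The potential function `phi`, the environment, and the agent interface are hypothetical placeholders, not part of the framework described above.

```python
# Potential-based reward shaping: r + gamma * phi(s') - phi(s) preserves the optimal policy.
def shape_reward(reward, state, next_state, phi, gamma=0.99):
    """Augment the environment reward with a potential-based shaping term."""
    return reward + gamma * phi(next_state) - phi(state)

# Example usage inside a standard RL loop (env, agent, and phi are assumed to exist):
# next_state, reward, done, info = env.step(action)
# shaped = shape_reward(reward, state, next_state, phi)
# agent.update(state, action, shaped, next_state, done)
```

A typical choice of `phi` is a domain-specific heuristic such as the negative distance to the goal.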
|
We study private synthetic data generation for query release, where the goal
is to construct a sanitized version of a sensitive dataset, subject to
differential privacy, that approximately preserves the answers to a large
collection of statistical queries. We first present an algorithmic framework
that unifies a long line of iterative algorithms in the literature. Under this
framework, we propose two new methods. The first method, private entropy
projection (PEP), can be viewed as an advanced variant of MWEM that adaptively
reuses past query measurements to boost accuracy. Our second method, generative
networks with the exponential mechanism (GEM), circumvents computational
bottlenecks in algorithms such as MWEM and PEP by optimizing over generative
models parameterized by neural networks, which capture a rich family of
distributions while enabling fast gradient-based optimization. We demonstrate
that PEP and GEM empirically outperform existing algorithms. Furthermore, we
show that GEM nicely incorporates prior information from public data while
overcoming limitations of PMW^Pub, the existing state-of-the-art method that
also leverages public data.
|
We show that a one-dimensional regular continuous Markov process \(X\) with
scale function \(s\) is a Feller--Dynkin process precisely if the space
transformed process \(s (X)\) is a martingale when stopped at the boundaries of
its state space. As a consequence, the Feller--Dynkin and the martingale
property are equivalent for regular diffusions on natural scale with open state
space. By means of a counterexample, we also show that this equivalence fails
for multi-dimensional diffusions. Moreover, for It\^o diffusions we discuss
relations to Cauchy problems.
|
Recent facial image synthesis methods have been mainly based on conditional
generative models. Sketch-based conditions can effectively describe the
geometry of faces, including the contours of facial components, hair
structures, as well as salient edges (e.g., wrinkles) on face surfaces but lack
effective control of appearance, which is influenced by color, material,
lighting condition, etc. To have more control of generated results, one
possible approach is to apply existing disentangling works to disentangle face
images into geometry and appearance representations. However, existing
disentangling methods are not optimized for human face editing, and cannot
achieve fine control of facial details such as wrinkles. To address this issue,
we propose DeepFaceEditing, a structured disentanglement framework specifically
designed for face images to support face generation and editing with
disentangled control of geometry and appearance. We adopt a local-to-global
approach to incorporate the face domain knowledge: local component images are
decomposed into geometry and appearance representations, which are fused
consistently using a global fusion module to improve generation quality. We
exploit sketches to assist in extracting a better geometry representation,
which also supports intuitive geometry editing via sketching. The resulting
method can either extract the geometry and appearance representations from face
images, or directly extract the geometry representation from face sketches.
Such representations allow users to easily edit and synthesize face images,
with decoupled control of their geometry and appearance. Both qualitative and
quantitative evaluations show the superior detail and appearance control
abilities of our method compared to state-of-the-art methods.
|
We provide comprehensive regularity results and optimal conditions for a
general class of functionals involving Orlicz multi-phase of the type
\begin{align}
\label{abst:1}
v\mapsto \int_{\Omega} F(x,v,Dv)\,dx,
\end{align}
exhibiting non-standard growth conditions and non-uniformly elliptic properties.
The model functional under consideration is given by the Orlicz multi-phase
integral
\begin{align}
\label{abst:2}
v\mapsto \int_{\Omega} f(x,v)\left[ G(|Dv|) +
\sum\limits_{k=1}^{N}a_k(x)H_{k}(|Dv|) \right]\,dx,\quad N\geqslant 1,
\end{align}
where $G,H_{k}$ are $N$-functions and $ 0\leqslant a_{k}(\cdot)\in
L^{\infty}(\Omega) $ with $0 < \nu \leqslant f(\cdot) \leqslant L$. Its
ellipticity ratio varies according to the geometry of the level sets
$\{a_{k}(x)=0\}$ of the modulating coefficient functions $a_{k}(\cdot)$ for
every $k\in \{1,\ldots,N\}$.
We give a unified treatment to show various regularity results for such
multi-phase problems with the coefficient functions
$\{a_{k}(\cdot)\}_{k=1}^{N}$ not necessarily H\"older continuous, even at a
lower level of regularity. Moreover, assuming that minima of the functional
above belong to better spaces such as $C^{0,\gamma}(\Omega)$ or
$L^{\kappa}(\Omega)$ for some $\gamma\in (0,1)$ and $\kappa\in (1,\infty]$, we
address optimal conditions on nonlinearity for each variant under which we
build comprehensive regularity results.
On the other hand, since the nonlinearity lacks homogeneity properties, we
consider an appropriate scaling that preserves the structure of the problems,
under which we apply a harmonic-type approximation in a setting that varies
with the a priori assumption on the minima. We believe that the methods and
proofs developed in this paper are suitable to build regularity theorems for a
larger class of non-autonomous functionals.
|
We study thermodynamic processes in contact with a heat bath that may have an
arbitrary time-varying periodic temperature profile. Within the framework of
stochastic thermodynamics, and for models of thermodynamic engines in the
idealized case of underdamped particles in the low-friction regime, we derive
explicit bounds as well as optimal control protocols that draw maximum power
and achieve maximum efficiency at any specified level of power.
|
We experimentally investigate the effect of electron temperature on transport
in the two-dimensional Dirac surface states of the three-dimensional
topological insulator HgTe. We find that around the minimal conductivity point,
where both electrons and holes are present, heating the carriers with a DC
current results in a non-monotonic differential resistance of narrow channels.
We show that the observed initial increase in resistance can be attributed to
electron-hole scattering, while the decrease follows naturally from the change
in Fermi energy of the charge carriers. Both effects are governed dominantly by
a van Hove singularity in the bulk valence band. The results demonstrate the
importance of interband electron-hole scattering in the transport properties of
topological insulators.
|
In this work, we present a program in the computational environment GeoGebra
that enables a graphical study of Newton's Method. Using this tool, we analyze
the convergence of Newton's Method applied to various examples of real
functions. We then provide a guide to the construction of the program in
GeoGebra.
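
For concreteness, a minimal Python sketch of the iteration the program visualizes (the test function below is an illustrative choice, not one taken from the paper):

# Minimal Newton's Method iteration: x_{k+1} = x_k - f(x_k)/f'(x_k).
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)
    return x

# Example: the real root of f(x) = x^3 - 2x - 5 (a classical test function).
root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2.0)
print(root)  # approximately 2.0945514815423265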
|
In a recent paper, Fakkousy et al. show that the 3D H\'{e}non-Heiles system with
Hamiltonian $ H = \frac{1}{2} (p_1 ^2 + p_2 ^2 + p_3 ^2) +\frac{1}{2} (A q_1 ^2
+ C q_2 ^2 + B q_3 ^2) + (\alpha q_1 ^2 + \gamma q_2 ^2)q_3 +
\frac{\beta}{3}q_3 ^3 $ is integrable in the sense of Liouville when $\alpha =
\gamma, \frac{\alpha}{\beta} = 1, A = B = C$; or $\alpha = \gamma,
\frac{\alpha}{\beta} = \frac{1}{6}, A = C$, $B$-arbitrary; or $\alpha = \gamma,
\frac{\alpha}{\beta} = \frac{1}{16}, A = C, \frac{A}{B} = \frac{1}{16}$ (and of
course, when $\alpha=\gamma=0$, in which case the Hamiltonian is separable). It
is known that the second case remains integrable for $A, C, B$ arbitrary. Using
Morales-Ramis theory, we prove that there are no other cases of integrability
for this system.
|
We derive novel explicit formulas for the inverses of truncated block
Toeplitz matrices that correspond to a multivariate minimal stationary process.
The main ingredients of the formulas are the Fourier coefficients of the phase
function attached to the spectral density of the process. The derivation of the
formulas is based on a recently developed finite prediction theory applied to
the dual process of the stationary process. We illustrate the usefulness of the
formulas by two applications. The first one is a strong convergence result for
solutions of general block Toeplitz systems for a multivariate short-memory
process. The second application is closed-form formulas for the inverses of
truncated block Toeplitz matrices corresponding to a multivariate ARMA process.
The significance of the latter is that they provide us with a linear-time
algorithm to compute the solutions of corresponding block Toeplitz systems.
|
In this article, we consider mixed local and nonlocal Sobolev
$(q,p)$-inequalities with extremal in the case $0<q<1<p<\infty$. We prove that
the extremal of such inequalities is unique up to a multiplicative constant
that is associated with a singular elliptic problem involving the mixed local
and nonlocal $p$-Laplace operator. Moreover, it is proved that the mixed
Sobolev inequalities are a necessary and sufficient condition for the existence
of weak solutions of such singular problems. As a consequence, a relation
between the singular $p$-Laplace and mixed local and nonlocal $p$-Laplace
equation is established. Finally, we investigate the existence, uniqueness,
regularity and symmetry properties of weak solutions for such problems.
|
Let $A$ be a Noetherian local ring with the maximal ideal $\mathfrak{m}$ and
$I$ be an $\mathfrak{m}$-primary ideal in $A$. In this paper, we study a
boundary condition of an inequality on Hilbert coefficients of an
$I$-admissible filtration $\mathcal{I}$. When $A$ is a Buchsbaum local ring,
equality in the above inequality forces Buchsbaumness on the associated graded
ring of the filtration. Our result provides a positive resolution of a question
of Corso in a general setup of filtrations.
|
Three dimensional (3D) resource reuse is an important design requirement for
the prospective 6G wireless communication systems. Hence, we propose a
cooperative 3D beamformer for use in 3D space. Explicitly, we harness multiple
base station antennas for joint zero forcing transmit pre-coding for beaming
the transmit signals in specific 3D directions. The technique advocated is
judiciously configured for use in both cell-based and cell-free wireless
architectures. We evaluate the performance of the proposed scheme using the
novel metric of Volumetric Spectral Efficiency (VSE). We also characterize the
performance of the scheme in terms of its spectral efficiency (SE) and Bit
Error Rate (BER) through extensive simulation studies.
|
Recent information extraction approaches have relied on training deep neural
models. However, such models can easily overfit noisy labels and suffer from
performance degradation. While it is very costly to filter noisy labels in
large learning resources, recent studies show that such labels take more
training steps to be memorized and are more frequently forgotten than clean
labels, therefore are identifiable in training. Motivated by such properties,
we propose a simple co-regularization framework for entity-centric information
extraction, which consists of several neural models with identical structures
but different parameter initialization. These models are jointly optimized with
the task-specific losses and are regularized to generate similar predictions
based on an agreement loss, which prevents overfitting on noisy labels.
Extensive experiments on two widely used but noisy benchmarks for information
extraction, TACRED and CoNLL03, demonstrate the effectiveness of our framework.
We release our code to the community for future research.
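
A minimal sketch of one plausible form of the agreement regularizer described above; the abstract does not specify the exact loss, so the KL-to-ensemble-mean form and the names below are assumptions:

import torch
import torch.nn.functional as F

def agreement_loss(logits_list):
    # logits_list: one logits tensor per co-trained model, shape (batch, num_classes).
    probs = [F.softmax(l, dim=-1) for l in logits_list]
    mean_prob = torch.stack(probs, dim=0).mean(dim=0)
    # Regularize each model's prediction towards the ensemble mean (assumed form).
    loss = 0.0
    for l in logits_list:
        loss = loss + F.kl_div(F.log_softmax(l, dim=-1), mean_prob,
                               reduction="batchmean")
    return loss / len(logits_list)

# Assumed usage: total loss = sum of task-specific losses + lambda * agreement_loss(...)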
|
Gene genealogies are frequently studied by measuring properties such as their
height ($H$), length ($L$), sum of external branches ($E$), sum of internal
branches ($I$), and mean of their two basal branches ($B$), and the coalescence
times that contribute to the other genealogical features ($T$). These tree
properties and their relationships can provide insight into the effects of
population-genetic processes on genealogies and genetic sequences. Here, under
the coalescent model, we study the 15 correlations among pairs of features of
genealogical trees: $H_n$, $L_n$, $E_n$, $I_n$, $B_n$, and $T_k$ for a sample
of size $n$, with $2 \leq k \leq n$. We report high correlations among $H_n$,
$L_n$, $I_n,$ and $B_n$, with all pairwise correlations of these quantities
having values greater than or equal to $\sqrt{6} [6 \zeta(3) + 6 - \pi^2] / (
\pi \sqrt{18 + 9\pi^2 - \pi^4}) \approx 0.84930$ in the limit as $n \rightarrow
\infty$. Although $E_n$ has an expectation of 2 for all $n$ and $H_n$ has
expectation 2 in the limit as $n \rightarrow \infty$, their limiting
correlation is 0. The results contribute toward understanding features of the
shapes of coalescent trees.
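
The quoted limiting lower bound can be checked numerically; a short sketch in Python:

import math
from scipy.special import zeta

num = math.sqrt(6) * (6 * zeta(3) + 6 - math.pi**2)
den = math.pi * math.sqrt(18 + 9 * math.pi**2 - math.pi**4)
print(num / den)  # ~0.84930, the limiting pairwise correlation bound quoted above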
|
The discrete phase space and continuous time representation of relativistic
quantum mechanics is further investigated here as a continuation of paper I
[1]. The main mathematical construct used here will be that of an area-filling
Peano curve. We show that the limit of a sequence of a class of Peano curves is
a Peano circle denoted as $\bar{S}^{1}_{n}$, a circle of radius $\sqrt{2n+1}$
where $n \in \{0,1,\cdots\}$. We interpret this two-dimensional Peano circle in
our framework as a phase cell inside our two-dimensional discrete phase plane.
We postulate that a first-quantized Planck oscillator, being very light and
small beyond current experimental detection, occupies this phase cell
$\bar{S}^{1}_{n}$. The time evolution of this Peano circle sweeps out a
two-dimensional vertical cylinder analogous to the world-sheet of string
theory. Extending this to three-dimensional space, we introduce a
$(2+2+2)$-dimensional phase-space hyper-torus $\bar{S}^{1}_{n^1} \times
\bar{S}^{1}_{n^2} \times \bar{S}^{1}_{n^3}$ as the appropriate phase cell in
the physical $(2+2+2)$-dimensional discrete phase space. A geometric interpretation of
this structure in state space is given in terms of product fibre bundles. We
also study free scalar Bosons in the background $[(2+2+2)+1]$-dimensional
discrete phase space and continuous time state space using the relativistic
partial difference-differential Klein-Gordon equation. The second quantized
field quanta of this system can cohabit with the tiny Planck oscillators
inside the $\bar{S}^{1}_{n^1} \times \bar{S}^{1}_{n^2} \times
\bar{S}^{1}_{n^3}$ phase cells for eternity. Finally, a generalized free second
quantized Klein-Gordon equation in a higher $[(2+2+2)N+1]$-dimensional discrete
state space is explored. The resulting discrete phase space dimension is
compared to the significant spatial dimensions of some of the popular models of
string theory.
|
Fluctuation-dissipation relations (FDRs) and time-reversal symmetry (TRS),
two pillars of statistical mechanics, are both broken in generic
driven-dissipative systems. These systems rather lead to non-equilibrium steady
states far from thermal equilibrium. Driven-dissipative Ising-type models,
however, are widely believed to exhibit effective thermal critical behavior
near their phase transitions. Contrary to this picture, we show that both the
FDR and TRS are broken even macroscopically at, or near, criticality. This is
shown by inspecting different observables, both even and odd operators under
time-reversal transformation, that overlap with the order parameter.
Remarkably, however, a modified form of the FDR as well as TRS still holds, but
with drastic consequences for the correlation and response functions as well as
the Onsager reciprocity relations. Finally, we find that, at criticality, TRS
remains broken even in the weakly-dissipative limit.
|
Batch policy optimization considers leveraging existing data for policy
construction before interacting with an environment. Although interest in this
problem has grown significantly in recent years, its theoretical foundations
remain under-developed. To advance the understanding of this problem, we
provide three results that characterize the limits and possibilities of batch
policy optimization in the finite-armed stochastic bandit setting. First, we
introduce a class of confidence-adjusted index algorithms that unifies
optimistic and pessimistic principles in a common framework, which enables a
general analysis. For this family, we show that any confidence-adjusted index
algorithm is minimax optimal, whether it be optimistic, pessimistic or neutral.
Our analysis reveals that instance-dependent optimality, commonly used to
establish optimality of on-line stochastic bandit algorithms, cannot be
achieved by any algorithm in the batch setting. In particular, for any
algorithm that performs optimally in some environment, there exists another
environment where the same algorithm suffers arbitrarily larger regret.
Therefore, to establish a framework for distinguishing algorithms, we introduce
a new weighted-minimax criterion that considers the inherent difficulty of
optimal value prediction. We demonstrate how this criterion can be used to
justify commonly used pessimistic principles for batch policy optimization.
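
A minimal sketch of a confidence-adjusted index of the kind described above; the index form, the confidence width, and the sign convention below are illustrative assumptions rather than the paper's definition:

import numpy as np

def confidence_adjusted_index(rewards_per_arm, beta):
    # beta > 0 behaves optimistically, beta < 0 pessimistically, beta = 0 is neutral.
    indices = []
    for r in rewards_per_arm:
        n = len(r)
        mean = np.mean(r) if n > 0 else 0.0
        width = np.sqrt(1.0 / max(n, 1))   # assumed confidence width
        indices.append(mean + beta * width)
    return int(np.argmax(indices))

# Example: pick an arm from logged (batch) data only, without further interaction.
data = [np.array([0.1, 0.3, 0.2]), np.array([0.9]), np.array([0.5, 0.4])]
print(confidence_adjusted_index(data, beta=-1.0))  # pessimistic choice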
|
We prove that with high probability maximum sizes of induced forests in dense
binomial random graphs are concentrated in two consecutive values.
|
We compute the variance asymptotics for the number of real zeros of
trigonometric polynomials with random dependent Gaussian coefficients and show
that under mild conditions, the asymptotic behavior is the same as in the
independent framework. In fact our proof goes beyond this framework and makes
explicit the variance asymptotics of various models of random Gaussian
polynomials. Though we use the Kac--Rice formula, we do not use the explicit
closed formula for the second moment of the number of zeros, but we rather rely
on intrinsic properties of the Kac--Rice density.
|
Automatic Speech Recognition (ASR) systems generalize poorly on accented
speech. The phonetic and linguistic variability of accents presents hard
challenges for ASR systems today in both data collection and modeling
strategies. The resulting bias in ASR performance across accents comes at a
cost to both users and providers of ASR.
We present a survey of current promising approaches to accented speech
recognition and highlight the key challenges in the space. Approaches mostly
focus on single model generalization and accent feature engineering. Among the
challenges, the lack of a standard benchmark makes research and comparison
especially difficult.
|
In recent years, many different approaches have been proposed to quantify the
performances of soccer players. Since player performances are challenging to
quantify directly due to the low-scoring nature of soccer, most approaches
estimate the expected impact of the players' on-the-ball actions on the
scoreline. While effective, these approaches are yet to be widely embraced by
soccer practitioners. The soccer analytics community has primarily focused on
improving the accuracy of the models, while the explainability of the produced
metrics is often much more important to practitioners.
To help bridge the gap between scientists and practitioners, we introduce an
explainable Generalized Additive Model that estimates the expected value for
shots. Unlike existing models, our model leverages features corresponding to
widespread soccer concepts. To this end, we represent the locations of shots by
fuzzily assigning the shots to designated zones on the pitch that practitioners
are familiar with. Our experimental evaluation shows that our model is as
accurate as existing models, while being easier to explain to soccer
practitioners.
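
A small sketch of fuzzily assigning a shot location to pitch zones; the zone centres and the Gaussian membership function are illustrative assumptions, not the model's actual parameters:

import numpy as np

def fuzzy_zone_memberships(shot_xy, zone_centres, bandwidth=5.0):
    # Gaussian membership of a shot to each zone, normalized to sum to 1.
    d2 = np.sum((zone_centres - np.asarray(shot_xy))**2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth**2))
    return w / w.sum()

# Hypothetical zone centres (pitch coordinates in metres from the goal line).
zones = np.array([[5.0, 0.0], [11.0, 0.0], [11.0, 8.0], [20.0, 0.0]])
print(fuzzy_zone_memberships((10.0, 2.0), zones))  # soft weights over the four zones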
|
Alongside the rapid development of deep learning techniques for generative
models, combining machine intelligence with human intelligence to solve
practical applications is becoming an urgent issue. Motivated by this
methodology, this work aims to adjust machine-generated character fonts with
the help of human workers in a perception study. Although numerous fonts are
available online for public use, it remains difficult and challenging to
generate and explore a font that meets the preferences of common users. To
address this issue, we propose the perceptual manifold of fonts to visualize
perceptual adjustments in the latent space of a generative model of fonts. In
our framework, we adopt a variational autoencoder network for font generation.
We then conduct a perceptual study on fonts generated from the
multi-dimensional latent space of the generative model. After obtaining the
distribution data of specific preferences, we utilize a manifold learning
approach to visualize the font distribution. In contrast to the conventional
user interface used in our user study, the proposed font-exploration user
interface is efficient and helpful for reaching the designated user preferences.
|
Arcades of flare loops form as a consequence of magnetic reconnection
powering solar flares and eruptions. We analyse the morphology and evolution of
flare arcades that formed during five well-known eruptive flares. We show that
the arcades have a common saddle-like shape. The saddles occur despite the fact
that the flares were of different classes (C to X), occurred in different
magnetic environments, and were observed in various projections. The saddles
are related to the presence of longer, relatively-higher, and inclined flare
loops, consistently observed at the ends of the arcades, which we term
`cantles'. Our observations indicate that cantles typically join straight
portions of flare ribbons with hooked extensions of the conjugate ribbons. The
origin of the cantles is investigated in stereoscopic observations of the 2011
May 9 eruptive flare carried out by the Atmospheric Imaging Assembly (AIA) and
Extreme Ultraviolet Imager (EUVI). The mutual separation of the instruments led
to ideal observational conditions allowing for simultaneous analysis of the
evolving cantle and the underlying ribbon hook. Based on our analysis we
suggest that the formation of one of the cantles can be explained by magnetic
reconnection between the erupting structure and its overlying arcades. We
propose that the morphology of flare arcades can provide information about the
reconnection geometries in which the individual flare loops originate.
|
Our understanding of strong gravity near supermassive compact objects has
recently improved thanks to the measurements made by the Event Horizon
Telescope (EHT). We use here the M87* shadow size to infer constraints on the
physical charges of a large variety of nonrotating or rotating black holes. For
example, we show that the quality of the measurements is already sufficient to
rule out that M87* is a highly charged dilaton black hole. Similarly, when
considering black holes with two physical and independent charges, we are able
to exclude considerable regions of the space of parameters for the
doubly-charged dilaton and the Sen black holes.
|
Medical imaging deep learning models are often large and complex, requiring
specialized hardware to train and evaluate these models. To address such
issues, we propose the PocketNet paradigm to reduce the size of deep learning
models by throttling the growth of the number of channels in convolutional
neural networks. We demonstrate that, for a range of segmentation and
classification tasks, PocketNet architectures produce results comparable to
those of conventional neural networks while reducing the number of parameters by
multiple orders of magnitude, using up to 90% less GPU memory, and speeding up
training times by up to 40%, thereby allowing such models to be trained and
deployed in resource-constrained settings.
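
A rough illustration of why throttling channel growth shrinks models, counting parameters of a plain chain of 3x3 convolutions; the specific channel schedules below are assumptions for illustration, not the PocketNet rule:

def conv_params(channels, kernel=3):
    # Parameters of a plain chain of 2D convolutions with the given channel widths.
    return sum(kernel * kernel * c_in * c_out
               for c_in, c_out in zip(channels[:-1], channels[1:]))

doubling  = [32, 64, 128, 256, 512]   # conventional encoder widths (assumed)
throttled = [32, 32, 32, 32, 32]      # throttled, constant widths (assumed)
print(conv_params(doubling), conv_params(throttled))  # orders-of-magnitude gap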
|
We study the behavior of the tail of a measure $\mu^{\boxtimes t}$, where
$\boxtimes t$ is the $t$-fold free multiplicative convolution power for $t\geq
1$. We focus on the case where $\mu$ is a probability measure on the positive
half-line with a regularly varying tail, i.e. of the form $x^{-\alpha} L(x)$,
where $L$ is slowly varying. We obtain a phase transition in the behavior of
the tail of $\mu^{\boxtimes t}$ between the regimes $\alpha<1$ and $\alpha>1$. Our
main tool is a description of the regularly varying tails of $\mu$ in terms of
the behavior of the corresponding $S$-transform at $0^-$. We also describe the
tails of $\boxtimes$ infinitely divisible measures in terms of the tails of
corresponding L\'evy measure, treat symmetric measures with regularly varying
tails and prove the free analog of the Breiman lemma.
|
Motivated by the need of {\em social distancing} during a pandemic, we
consider an approach to schedule the visitors of a facility (e.g., a general
store). Our algorithms take input from the citizens and schedule the store's
discrete time-slots based on their importance to visit the facility. Naturally,
the formulation applies to several similar problems. We consider {\em
indivisible} job requests that take single or multiple slots to complete. The
salient properties of our approach are: it (a)~ensures social distancing by
limiting the population present in any given time-slot at the facility, (b)~aims to
prioritize individuals based on the importance of the jobs, (c)~maintains
truthfulness of the reported importance by adding a {\em cooling-off} period
after their allocated time-slot, during which the individual cannot re-access
the same facility, (d)~guarantees voluntary participation of the citizens, and
yet (e)~is computationally tractable. The mechanisms we propose are prior-free.
We show that the problem becomes NP-complete for indivisible multi-slot
demands, and provide a polynomial-time mechanism that is truthful, individually
rational, and approximately optimal. Experiments with data collected from a
store show that visitors with more important (single-slot) jobs are allocated
more preferred slots, which comes at the cost of a longer cooling-off period
and significantly reduces social congestion. For the multi-slot jobs, our
mechanism yields reasonable approximation while reducing the computation time
significantly.
|
Customization is a general trend in software engineering, demanding systems
that support variable stakeholder requirements. Two opposing strategies are
commonly used to create variants: software clone & own and software
configuration with an integrated platform. Organizations often start with the
former, which is cheap, agile, and supports quick innovation, but does not
scale. The latter scales by establishing an integrated platform that shares
software assets between variants, but requires high up-front investments or
risky migration processes. So, could we have a method that allows an easy
transition or even combine the benefits of both strategies? We propose a method
and tool that supports a truly incremental development of variant-rich systems,
exploiting a spectrum between both opposing strategies. We design, formalize,
and prototype the variability-management framework virtual platform. It bridges
clone & own and platform-oriented development. Relying on
programming-language-independent conceptual structures representing software
assets, it offers operators for engineering and evolving a system, comprising:
traditional, asset-oriented operators and novel, feature-oriented operators for
incrementally adopting concepts of an integrated platform. The operators record
meta-data that is exploited by other operators to support the transition. Among
others, they eliminate expensive feature-location effort or the need to trace
clones. Our evaluation simulates the evolution of a real-world, clone-based
system, measuring its costs and benefits.
|
The Kronecker product-based algorithm for context-free path querying (CFPQ)
was proposed by Orachev et al. (2020). We reduce this algorithm to operations
over Boolean matrices and extend it with the mechanism to extract all paths of
interest. We also prove $O(n^3/\log{n})$ time complexity of the proposed
algorithm, where $n$ is the number of vertices of the input graph. Thus, we provide
an alternative way to construct a slightly subcubic algorithm for CFPQ which
is based on linear algebra and incremental transitive closure (a classic
graph-theoretic problem), as opposed to the algorithm with the same complexity
proposed by Chaudhuri (2008). Our evaluation shows that our algorithm is a good
candidate to be the universal algorithm for both regular and context-free path
querying.
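
A minimal sketch of the core Boolean-matrix operation used by the algorithm; it shows only the Kronecker product over Boolean matrices, not the full CFPQ procedure, and the example matrices are arbitrary:

import numpy as np

A = np.array([[1, 0], [1, 1]], dtype=bool)   # e.g. adjacency for one grammar symbol (assumed)
B = np.array([[0, 1], [1, 0]], dtype=bool)   # e.g. input graph adjacency (assumed)

K = np.kron(A, B)            # Boolean Kronecker product: block pattern of A gated by B
print(K.astype(int))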
|
We estimate the black hole spin parameter in GRS 1915+105 using the
continuum-fitting method with revised mass and inclination constraints based on
the very long baseline interferometric parallax measurement of the distance to
this source. We fit Rossi X-ray Timing Explorer observations selected to be
accretion disk-dominated spectral states as described in McClintock et al.
(2006) and Middleton et al. (2006), which previously gave discrepant spin
estimates with this method. We find that, using the new system parameters, the
spin in both datasets increased, providing a best-fit spin of $a_*=0.86$ for
the Middleton et al. data and a poor fit for the McClintock et al. dataset,
which becomes pegged at the BHSPEC model limit of $a_*=0.99$. We explore the
impact of the uncertainties in the system parameters, showing that the best-fit
spin ranges from $a_*= 0.4$ to 0.99 for the Middleton et al. dataset and allows
reasonable fits to the McClintock et al. dataset with near maximal spin for
system distances greater than $\sim 10$ kpc. We discuss the uncertainties and
implications of these estimates.
|
The CHIME/FRB Project has recently released its first catalog of fast radio
bursts (FRBs), containing 492 unique sources. We present results from angular
cross-correlations of CHIME/FRB sources with galaxy catalogs. We find a
statistically significant ($p$-value $\sim 10^{-4}$, accounting for
look-elsewhere factors) cross-correlation between CHIME FRBs and galaxies in
the redshift range $0.3 \lesssim z \lesssim 0.5$, in three photometric galaxy
surveys: WISE$\times$SCOS, DESI-BGS, and DESI-LRG. The level of
cross-correlation is consistent with an order-one fraction of the CHIME FRBs
being in the same dark matter halos as survey galaxies in this redshift range.
We find statistical evidence for a population of FRBs with large host
dispersion measure ($\sim 400$ pc cm$^{-3}$), and show that this can plausibly
arise from gas in large halos ($M \sim 10^{14} M_\odot$), for FRBs near the
halo center ($r \lesssim 100$ kpc). These results will improve in future
CHIME/FRB catalogs, with more FRBs and better angular resolution.
|
3D object detection with a single image is an essential and challenging task
for autonomous driving. Recently, keypoint-based monocular 3D object detection
has made tremendous progress and achieved great speed-accuracy trade-off.
However, there still exists a huge gap with LIDAR-based methods in terms of
accuracy. To improve their performance without sacrificing efficiency, we
propose a lightweight feature pyramid network called Lite-FPN to
achieve multi-scale feature fusion in an effective and efficient way, which can
boost the multi-scale detection capability of keypoint-based detectors.
Besides, the misalignment between classification score and localization
precision is further relieved by introducing a novel regression loss named
attention loss. With the proposed loss, predictions with high confidence but
poor localization are treated with more attention during the training phase.
Comparative experiments based on several state-of-the-art keypoint-based
detectors on the KITTI dataset show that our proposed methods manage to achieve
significant improvements in both accuracy and frame rate. The code and
pretrained models will be released at
\url{https://github.com/yanglei18/Lite-FPN}.
|
Extracting information from documents usually relies on natural language
processing methods working on one-dimensional sequences of text. In some cases,
for example, for the extraction of key information from semi-structured
documents, such as invoice-documents, spatial and formatting information of
text are crucial to understand the contextual meaning. Convolutional neural
networks are already common in computer vision models to process and extract
relationships in multidimensional data. Therefore, natural language processing
models have already been combined with computer vision models in the past, to
benefit from e.g. positional information and to improve performance of these
key information extraction models. Existing models were either trained on
unpublished data sets or on an annotated collection of receipts, which did not
focus on PDF-like documents. Hence, in this research project a template-based
document generator was created to compare state-of-the-art models for
information extraction. An existing information extraction model "Chargrid"
(Katti et al., 2019) was reconstructed and the impact of a bounding box
regression decoder, as well as the impact of an NLP pre-processing step was
evaluated for information extraction from documents. The results have shown
that NLP based pre-processing is beneficial for model performance. However, the
use of a bounding box regression decoder increases the model performance only
for fields that do not follow a rectangular shape.
|
In this paper, we provide (i) a rigorous general theory to elicit conditions
on (tail-dependent) heavy-tailed cyber-risk distributions under which a risk
management firm might find it (non)sustainable to provide aggregate cyber-risk
coverage services for smart societies, and (ii) a real-data-driven numerical
study to validate claims made in theory, assuming boundedly rational cyber-risk
managers, alongside providing ideas to boost markets that aggregate dependent
cyber-risks with heavy tails. To the best of our knowledge, this is the only
complete general theory to date on the feasibility of aggregate cyber-risk
management.
|
The discovery of pulsars is of great significance in the field of physics and
astronomy. As the astronomical equipment produces a large amount of pulsar
data, an algorithm for automatically identifying pulsars becomes urgent. We
propose a deep learning framework for pulsar recognition. In response to the
extreme imbalance between positive and negative examples and the hard negative
sample issue presented in the HTRU Medlat Training Data, there are two coping
strategies in our framework: the smart under-sampling and the improved loss
function. We also apply the early-fusion strategy to integrate features
obtained from different attributes before classification to improve the
performance. To the best of our knowledge, this is the first study that integrates
these strategies and techniques together in pulsar recognition. The experiment
results show that our framework outperforms previous works with respect to
either training time or F1 score. We can not only speed up the training
time by 10X compared with the state-of-the-art work, but also get a competitive
result in terms of F1 score.
|
Quantum coherence and quantum correlations are studied in the strongly
interacting system composed of two qubits and an oscillator with the presence
of a parametric medium. To analytically solve the system, we employ the
adiabatic approximation approach. It assumes each qubit's characteristic
frequency is substantially lower than the oscillator frequency. To validate our
approximation, good agreement between the calculated energy spectrum of the
Hamiltonian and its numerical result is presented. The time evolution of the
reduced density matrices of the two-qubit and the oscillator subsystems is
computed from the tripartite initial state. Starting with a factorized
two-qubit initial state, the quasi-periodicity in the revival and collapse
phenomenon that occurs in the two-qubit population inversion is studied. Based
on the measure of relative entropy of coherence, we investigate the quantum
coherence and its explicit dependence on the parametric term both for the
two-qubit and the individual qubit subsystems by adopting different choices of
the initial states. Similarly, the existence of quantum correlations is
demonstrated by studying the geometric discord and concurrence. Besides, by
numerically minimizing the Hilbert-Schmidt distance, the dynamically produced
near maximally entangled states are reconstructed. The reconstructed states are
observed to be nearly pure generalized Bell states. Furthermore, utilizing the
oscillator density matrix, the quadrature variance and the phase-space
distribution of the associated Husimi $Q$-function are computed in the
minimum-entropy regime, and we conclude that the obtained nearly pure evolved
state is a squeezed coherent state.
|
It is an open question to give a combinatorial interpretation of the Falk
invariant of a hyperplane arrangement, i.e. the third rank of successive
quotients in the lower central series of the fundamental group of the
arrangement. In this article, we give a combinatorial formula for this
invariant in the case of hyperplane arrangements that are complete lift
representations of certain gain graphs. As a corollary, we compute the Falk
invariant for the cone of the braid, Shi, Linial and semiorder arrangements.
|
We discuss compatibility between various quantum aspects of bosonic fields,
relevant for quantum optics and quantum thermodynamics, and the mesoscopic
formalism of reduced state of the field (RSF). In particular, we derive exact
conditions under which Gaussian and Bogoliubov-type evolutions can be cast into
the RSF framework. In that regard, special emphasis is put on Gaussian thermal
operations. To strengthen the link between the RSF formalism and the notion of
classicality for bosonic quantum fields, we prove that RSF contains no
information about entanglement in two-mode Gaussian states. For the same
purpose, we show that the entropic characterisation of RSF by means of the von
Neumann entropy is qualitatively the same as its description based on the Wehrl
entropy. Our findings help bridge the conceptual gap between quantum and
classical mechanics.
|
Many machine learning techniques incorporate identity-preserving
transformations into their models to generalize their performance to previously
unseen data. These transformations are typically selected from a set of
functions that are known to maintain the identity of an input when applied
(e.g., rotation, translation, flipping, and scaling). However, there are many
natural variations that cannot be labeled for supervision or defined through
examination of the data. As suggested by the manifold hypothesis, many of these
natural variations live on or near a low-dimensional, nonlinear manifold.
Several techniques represent manifold variations through a set of learned Lie
group operators that define directions of motion on the manifold. However,
these approaches are limited because they require transformation labels when
training their models and they lack a method for determining which regions of
the manifold are appropriate for applying each specific operator. We address
these limitations by introducing a learning strategy that does not require
transformation labels and developing a method that learns the local regions
where each operator is likely to be used while preserving the identity of
inputs. Experiments on MNIST and Fashion MNIST highlight our model's ability to
learn identity-preserving transformations on multi-class datasets.
Additionally, we train on CelebA to showcase our model's ability to learn
semantically meaningful transformations on complex datasets in an unsupervised
manner.
|
We study the fundamental design automation problem of equivalence checking in
the NISQ (Noisy Intermediate-Scale Quantum) computing realm where quantum noise
is present inevitably. The notion of approximate equivalence of (possibly
noisy) quantum circuits is defined based on the Jamiolkowski fidelity which
measures the average distance between output states of two super-operators when
the input is chosen at random. By employing tensor network contraction, we
present two algorithms, aiming at different situations where the amount of
noise varies, for computing the fidelity between an ideal quantum circuit and
its noisy implementation. The effectiveness of our algorithms is demonstrated
by experimenting on benchmarks of real NISQ circuits. When compared with the
state-of-the-art implementation incorporated in Qiskit, experimental results
show that the proposed algorithms outperform it in both efficiency and
scalability.
|
In this article we consider the length functional defined on the space of
immersed planar curves. The $L^2(ds)$ Riemannian metric gives rise to the curve
shortening flow as the gradient flow of the length functional. Motivated by the
triviality of the metric topology in this space, we consider the gradient flow
of the length functional with respect to the $H^1(ds)$-metric. Circles with
radius $r_0$ shrink with $r(t) = \sqrt{W(e^{c-2t})}$ under the flow, where $W$
is the Lambert $W$ function and $c = r_0^2 + \log r_0^2$. We conduct a thorough
study of this flow, giving existence of eternal solutions and convergence for
general initial data, preservation of regularity in various spaces, qualitative
properties of the flow after an appropriate rescaling, and numerical
simulations.
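
The quoted circle solution can be evaluated directly; a short check, using SciPy's Lambert W, that $r(0)=r_0$ and that the radius shrinks:

import numpy as np
from scipy.special import lambertw

def radius(t, r0):
    # r(t) = sqrt(W(e^{c-2t})) with c = r0^2 + log(r0^2), as stated above.
    c = r0**2 + np.log(r0**2)
    return np.sqrt(lambertw(np.exp(c - 2 * t)).real)

print(radius(0.0, 1.5))                          # returns r0, since W(x e^x) = x
print(radius(np.array([0.0, 0.5, 1.0]), 1.5))    # shrinking radii under the flow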
|
Relational knowledge bases (KBs) are commonly used to represent world
knowledge in machines. However, while advantageous for their high degree of
precision and interpretability, KBs are usually organized according to
manually-defined schemas, which limit their expressiveness and require
significant human efforts to engineer and maintain. In this review, we take a
natural language processing perspective to these limitations, examining how
they may be addressed in part by training deep contextual language models (LMs)
to internalize and express relational knowledge in more flexible forms. We
propose to organize knowledge representation strategies in LMs by the level of
KB supervision provided, from no KB supervision at all to entity- and
relation-level supervision. Our contributions are threefold: (1) We provide a
high-level, extensible taxonomy for knowledge representation in LMs; (2) Within
our taxonomy, we highlight notable models, evaluation tasks, and findings, in
order to provide an up-to-date review of current knowledge representation
capabilities in LMs; and (3) We suggest future research directions that build
upon the complementary aspects of LMs and KBs as knowledge representations.
|
In this paper, we show that the (admissible) character stack, which is a
stack version of the character variety, is an open substack of the
Teichm\"uller stack of homogeneous spaces of SL(2,C). We show that the
tautological family over the representation variety, given by deforming the
holonomy, is always a complete family. This is a generalisation of the work of
E. Ghys on deformations of complex structures of these homogeneous spaces.
|
In this manuscript we demonstrate a method to reconstruct the wavefront of
focused beams from a measured diffraction pattern behind a diffracting mask in
real-time. The phase problem is solved by means of a neural network, which is
trained with simulated data and verified with experimental data. The neural
network allows live reconstructions within a few milliseconds, which previously
with iterative phase retrieval took several seconds, thus allowing the
adjustment of complex systems and correction by adaptive optics in real time.
The neural network additionally outperforms iterative phase retrieval for
diffraction patterns with high noise.
|
To realize high-accuracy classification of high spatial resolution (HSR)
images, this letter proposes a new multi-feature fusion-based scene
classification framework (MF2SCF) by fusing local, global, and color features
of HSR images. Specifically, we first extract the local features with the help
of image slicing and densely connected convolutional networks (DenseNet), where
the outputs of dense blocks in the fine-tuned DenseNet-121 model are jointly
averaged and concatenated to describe local features. Second, from the
perspective of complex networks (CN), we model an HSR image as an undirected
graph based on pixel distance, intensity, and gradient, and obtain a gray-scale
image (GSI), a gradient of image (GoI), and three CN-based feature images to
delineate global features. To make the global feature descriptor resistant to the
impact of rotation and illumination, we apply uniform local binary patterns
(LBP) on GSI, GoI, and feature images, respectively, and generate the final
global feature representation by concatenating spatial histograms. Third, the
color features are determined based on the normalized HSV histogram, where HSV
stands for hue, saturation, and value, respectively. Finally, three feature
vectors are jointly concatenated for scene classification. Experiment results
show that MF2SCF significantly improves the classification accuracy compared
with state-of-the-art LBP-based methods and deep learning-based methods.
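
A small sketch of the colour-feature step as a normalized per-channel HSV histogram; the bin counts and the use of matplotlib's RGB-to-HSV conversion are illustrative assumptions:

import numpy as np
from matplotlib.colors import rgb_to_hsv

def hsv_color_feature(rgb_image, bins=(16, 16, 16)):
    # rgb_image: float array in [0, 1] with shape (H, W, 3).
    hsv = rgb_to_hsv(rgb_image)
    feats = []
    for ch, b in enumerate(bins):
        hist, _ = np.histogram(hsv[..., ch], bins=b, range=(0.0, 1.0))
        feats.append(hist / hist.sum())   # normalize each channel's histogram
    return np.concatenate(feats)

print(hsv_color_feature(np.random.rand(64, 64, 3)).shape)  # (48,) with the assumed bins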
|
In this paper we study quasilinear elliptic equations driven by the double
phase operator and a right-hand side which has the combined effect of a
singular and of a parametric term. Based on the fibering method using the
Nehari manifold, we prove the existence of at least two weak solutions for such
problems when the parameter is sufficiently small.
|
Deep learning has made significant impacts on multi-view stereo systems.
State-of-the-art approaches typically involve building a cost volume, followed
by multiple 3D convolution operations to recover the input image's pixel-wise
depth. While such end-to-end learning of plane-sweeping stereo advances public
benchmarks' accuracy, they are typically very slow to compute. We present
\ouralg, a highly efficient multi-view stereo algorithm that seamlessly
integrates multi-view constraints into single-view networks via an attention
mechanism. Since \ouralg only builds on 2D convolutions, it is at least
$2\times$ faster than all the notable counterparts. Moreover, our algorithm
produces precise depth estimations and 3D reconstructions, achieving
state-of-the-art results on challenging benchmarks ScanNet, SUN3D, RGBD, and
the classical DTU dataset. Our algorithm also outperforms all other algorithms
in the setting of inexact camera poses. Our code is released at
\url{https://github.com/zhenpeiyang/MVS2D}
|
We are interested in solutions of the nonlinear Klein-Gordon equation (NLKG)
in $\mathbb{R}^{1+d}$, $d\ge1$, which behave as a soliton or a sum of solitons
in large time. In the spirit of other articles focusing on the supercritical
generalized Korteweg-de Vries equations and on the nonlinear Schr{\"o}dinger
equations, we obtain an $N$-parameter family of solutions of (NLKG) which
converges exponentially fast to a sum of given (unstable) solitons. For $N =
1$, this family completely describes the set of solutions converging to the
soliton considered; for $N\ge 2$, we prove uniqueness in a class with explicit
algebraic rate of convergence.
|
In this paper we completely solve the family of parametrised Thue equations
\[
X(X-F_n Y)(X-2^n Y)-Y^3=\pm 1, \] where $F_n$ is the $n$-th Fibonacci number.
In particular, for any integer $n\geq 3$ the Thue equation has only the trivial
solutions $(\pm 1,0), (0,\mp 1), \mp(F_n,1), \mp(2^n,1)$.
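
The listed trivial solutions are easy to verify numerically for a fixed $n$; a short sketch:

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def thue(x, y, n):
    # Left-hand side of the Thue equation X(X - F_n Y)(X - 2^n Y) - Y^3.
    return x * (x - fib(n) * y) * (x - 2**n * y) - y**3

n = 7
for (x, y) in [(1, 0), (-1, 0), (0, 1), (0, -1),
               (fib(n), 1), (-fib(n), -1), (2**n, 1), (-2**n, -1)]:
    assert thue(x, y, n) in (1, -1)
print("all listed solutions give +1 or -1")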
|
Indexing intervals is a fundamental problem, finding a wide range of
applications. Recent work on managing large collections of intervals in main
memory focused on overlap joins and temporal aggregation problems. In this
paper, we propose novel and efficient in-memory indexing techniques for
intervals, with a focus on interval range queries, which are a basic component
of many search and analysis tasks. First, we propose an optimized version of a
single-level (flat) domain-partitioning approach, which may have large space
requirements due to excessive replication. Then, we propose a hierarchical
partitioning approach, which assigns each interval to at most two partitions
per level and has controlled space requirements. Novel elements of our
techniques include the division of the intervals at each partition into groups
based on whether they begin inside or before the partition boundaries, reducing
the information stored at each partition to the absolutely necessary, and the
effective handling of data sparsity and skew. Experimental results on real and
synthetic interval sets of different characteristics show that our approaches
are typically one order of magnitude faster than the state-of-the-art.
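
A minimal sketch of the basic flat domain-partitioning scheme with replication, to illustrate the starting point that the paper optimizes; the partition width and the overlap test are standard choices, and everything else is simplified:

from collections import defaultdict

class FlatIntervalIndex:
    def __init__(self, domain_end, num_partitions):
        self.width = domain_end / num_partitions
        self.parts = defaultdict(list)

    def insert(self, iv):                       # iv = (start, end), inclusive endpoints
        s, e = iv
        for p in range(int(s // self.width), int(e // self.width) + 1):
            self.parts[p].append(iv)            # interval replicated in every overlapped partition

    def range_query(self, qs, qe):
        out = set()
        for p in range(int(qs // self.width), int(qe // self.width) + 1):
            for s, e in self.parts[p]:
                if s <= qe and e >= qs:          # interval overlaps the query range
                    out.add((s, e))
        return out

idx = FlatIntervalIndex(domain_end=100.0, num_partitions=10)
idx.insert((3.0, 27.0)); idx.insert((40.0, 45.0))
print(idx.range_query(20.0, 42.0))              # both intervals overlap the query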
|
We propose a novel storage scheme for three-nucleon (3N) interaction matrix
elements relevant for the normal-ordered two-body approximation used
extensively in ab initio calculations of atomic nuclei. This scheme reduces the
required memory by approximately two orders of magnitude, which allows the
generation of 3N interaction matrix elements with the standard truncation of
$E_{\rm 3max}=28$, well beyond the previous limit of 18. We demonstrate that
this is sufficient to obtain the ground-state energy of $^{132}$Sn converged to
within a few MeV with respect to the $E_{\rm 3max}$ truncation. In addition, we
study the asymptotic convergence behavior and perform extrapolations to the
untruncated limit. Finally, we investigate the impact of truncations made when
evolving free-space 3N interactions with the similarity renormalization group.
We find that the contribution of blocks with angular momentum $J_{\rm rel}>9/2$
to the ground-state energy is dominated by a basis-truncation artifact which
vanishes in the large-space limit, so these computationally expensive
components can be neglected. For the two sets of nuclear interactions employed
in this work, the resulting binding energy of $^{132}$Sn agrees with the
experimental value within theoretical uncertainties. This work enables
converged ab initio calculations of heavy nuclei.
|
We show that the standard notion of entanglement is not defined for
gravitationally anomalous two-dimensional theories because they do not admit a
local tensor factorization of the Hilbert space into local Hilbert spaces.
Qualitatively, the modular flow cannot act consistently and unitarily in a
finite region, if there are different numbers of states with a given energy
traveling in the two opposite directions. We make this precise by decomposing
it into two observations: First, a two-dimensional CFT admits a consistent
quantization on a space with boundary only if it is not anomalous. Second, a
local tensor factorization always leads to a definition of a consistent, unitary,
energy-preserving boundary condition. As a corollary, we establish a
generalization of the Nielsen-Ninomiya theorem to all two-dimensional unitary
local QFTs: No continuum quantum field theory in two dimensions can admit a
lattice regulator unless its gravitational anomaly vanishes. We also show that
the conclusion can be generalized to six dimensions by dimensional reduction on
a four-manifold of nonvanishing signature. We advocate that these points be
used to reinterpret the gravitational anomaly
quantum-information-theoretically, as a fundamental obstruction to the
localization of quantum information.
|
Thermal jitter (phase noise) from a free-running ring oscillator is a common,
easily implementable physical randomness source in True Random Number
Generators (TRNGs). We show how to evaluate entropy, autocorrelation, and bit
pattern distributions of ring oscillator noise sources, even with low jitter
levels or some bias. Entropy justification is required in NIST 800-90B and
AIS-31 testing and for applications such as the RISC-V entropy source
extension. Our numerical evaluation algorithms outperform Monte Carlo
simulations in speed and accuracy. We also propose a new lower bound estimation
formula for the entropy of ring oscillator sources which applies more generally
than previous ones.
|
This paper applies t-SNE, a visualisation technique familiar from deep neural
network research, to argumentation graphs by applying it to the output of graph
embeddings generated using several different methods. It shows that such a
visualisation approach can work for argumentation and reveal interesting
structural properties of argumentation graphs, opening up paths for further
research in the area.
|
The fracture stress of materials typically depends on the sample size and is
traditionally explained in terms of extreme value statistics. A recent work
reported results on the carrying capacity of long polyamide and polyester wires
and interprets the results in terms of a probabilistic argument known as the St.
Petersburg paradox. Here, we show that the same results can be better explained
in terms of extreme value statistics. We also discuss the relevance of
rate-dependent effects.
|
This paper proposes a model to explain the potential role of inter-group
conflicts in determining the rise and fall of signaling norms. In one
population, assortative matching according to types is sustained by signaling.
In the other population, individuals do not signal and they are randomly
matched. Types evolve within each population. At the same time, the two
populations may engage in conflicts. Due to assortative matching, high types
grow faster in the population with signaling, yet they bear the cost of
signaling, which lowers their population's fitness in the long run. We show
that the survival of the signaling population depends crucially on the timing
and the intensity of inter-group conflicts.
|
For predicting the kinetics of nucleic acid reactions, continuous-time Markov
chains (CTMCs) are widely used. The rate of a reaction can be obtained through
the mean first passage time (MFPT) of its CTMC. However, a typical issue in
CTMCs is that the number of states could be large, making MFPT estimation
challenging, particularly for events that happen on a long time scale (rare
events). We propose the pathway elaboration method, a time-efficient
probabilistic truncation-based approach for detailed-balance CTMCs. It can be
used for estimating the MFPT for rare events in addition to rapidly evaluating
perturbed parameters without expensive recomputations. We demonstrate that
pathway elaboration is suitable for predicting nucleic acid kinetics by
conducting computational experiments on 267 measurements that cover a wide
range of rates for different types of reactions. We utilize pathway elaboration
to gain insight on the kinetics of two contrasting reactions, one being a rare
event. We then compare the performance of pathway elaboration with the
stochastic simulation algorithm (SSA) for MFPT estimation on 237 of the
reactions for which SSA is feasible. We further build truncated CTMCs with SSA
and transition path sampling (TPS) to compare with pathway elaboration.
Finally, we use pathway elaboration to rapidly evaluate perturbed model
parameters during optimization with respect to experimentally measured rates
for these 237 reactions. The testing error on the remaining 30 reactions, which
involved rare events and were not feasible to simulate with SSA, improved
comparably with the training error. Our framework and dataset are available at
https://github.com/DNA-and-Natural-Algorithms-Group/PathwayElaboration.
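
For a small detailed-balance CTMC, the MFPT that pathway elaboration estimates on truncated chains can be obtained by a direct linear solve; the toy rate matrix below is an illustrative assumption, not data from the paper:

import numpy as np

def mfpt(Q, target):
    # Q: generator (rate) matrix with rows summing to zero; target: index of the target state.
    # Mean first passage times tau from non-target states satisfy Q_restricted @ tau = -1.
    keep = [i for i in range(Q.shape[0]) if i != target]
    Qr = Q[np.ix_(keep, keep)]
    tau = np.linalg.solve(Qr, -np.ones(len(keep)))
    return dict(zip(keep, tau))   # MFPT from each non-target state to the target

# Toy 3-state chain 0 <-> 1 <-> 2, target state 2.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  2.0, -2.0]])
print(mfpt(Q, target=2))   # e.g. tau(0) = 2.5, tau(1) = 1.5 for this toy chain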
|
We study an optimal control problem for a simple transportation model on a
path graph. We give a closed form solution for the optimal controller, which
can also account for planned disturbances using feed-forward. The optimal
controller is highly structured, which allows the controller to be implemented
using only local communication, conducted through two sweeps through the graph.
|