We study the problem of simulating a two-user multiple access channel over a
multiple access network of noiseless links. Two encoders observe independent
and identically distributed (i.i.d.) copies of a source random variable each,
while a decoder observes i.i.d. copies of a side-information random variable.
There are rate-limited noiseless communication links and independent pairwise
shared randomness resources between each encoder and the decoder. The decoder
has to output approximately i.i.d. copies of another random variable jointly
distributed with the two sources and the side information. We are interested in
the rate tuples which permit this simulation. This setting can be thought of as
a multi-terminal generalization of the point-to-point channel simulation
problem studied by Bennett et al. (2002) and Cuff (2013). General inner and
outer bounds on the rate region are derived. For the specific case where the
sources at the encoders are conditionally independent given the
side-information at the decoder, we completely characterize the rate region.
Our bounds recover the existing results on function computation over such
multi-terminal networks. We then show through an example that an additional
independent source of shared randomness between the encoders strictly improves
the communication rate requirements, even if the additional randomness is not
available to the decoder. Furthermore, we provide inner and outer bounds for
this more general setting with independent pairwise shared randomness resources
between all three possible node pairs.
|
A cluster algebra is a commutative algebra whose structure is decided by a
skew-symmetrizable matrix or a quiver. When a skew-symmetrizable matrix is
invariant under an action of a finite group and this action is admissible, the
folded cluster algebra is obtained from the original one. Any cluster algebra
of non-simply-laced affine type can be obtained by folding a cluster algebra of
simply-laced affine type with a specific $G$-action. In this paper, we study
the combinatorial properties of quivers in the cluster algebra of affine type.
We prove that for any quiver of simply-laced affine type, $G$-invariance and
$G$-admissibility are equivalent. This leads us to prove that the set of
$G$-invariant seeds forms the folded cluster pattern.
|
We consider the decay $B\to\ell\ell\ell^{\prime}\nu$, taking into account the
leading $1/m_b$ and $q^2$ corrections calculated in the QCD factorization
framework as well as the soft corrections calculated employing dispersion
relations and quark-hadron duality. We extend the existing results for the
radiative decay $B\to\gamma\ell\nu$ to the case of non-zero (but small) $q^2$,
the invariant mass squared of the dilepton pair $\ell^+\ell^-$. This restricts
us to the case $\ell\neq\ell'$, since otherwise the same-sign leptons $\ell$ and $\ell'$
cannot be distinguished. We further study the sensitivity of the results to the
leading moment of the $B$-meson distribution amplitude and discuss the
potential to extract this quantity at LHCb and the Belle II experiment.
|
We develop a resource theory of symmetric distinguishability, the fundamental
objects of which are elementary quantum information sources, i.e., sources that
emit one of two possible quantum states with given prior probabilities. Such a
source can be represented by a classical-quantum state of a composite system
$XA$, corresponding to an ensemble of two quantum states, with $X$ being
classical and $A$ being quantum. We study the resource theory for two different
classes of free operations: $(i)$ ${\rm{CPTP}}_A$, which consists of quantum
channels acting only on $A$, and $(ii)$ conditional doubly stochastic (CDS)
maps acting on $XA$. We introduce the notion of symmetric distinguishability of
an elementary source and prove that it is a monotone under both these classes
of free operations. We study the tasks of distillation and dilution of
symmetric distinguishability, both in the one-shot and asymptotic regimes. We
prove that in the asymptotic regime, the optimal rate of converting one
elementary source to another is equal to the ratio of their quantum Chernoff
divergences, under both these classes of free operations. This imparts a new
operational interpretation to the quantum Chernoff divergence. We also obtain
interesting operational interpretations of the Thompson metric, in the context
of the dilution of symmetric distinguishability.
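For reference, the central quantity here is the quantum Chernoff divergence; in the standard convention (which may differ slightly from the paper's), for states $\rho$ and $\sigma$ it reads
$$\xi(\rho,\sigma) \;=\; -\log \min_{0\le s\le 1} \mathrm{Tr}\!\left(\rho^{s}\sigma^{1-s}\right),$$
so the asymptotic conversion rate between two elementary sources is the ratio of their respective $\xi$ values.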
|
In this study, we investigated the employment status of recent University of
Ottawa physics MSc and PhD graduates, finding that 94% of graduates are either
employed or pursuing further physics education one year post-graduation. Our
database was populated from the public online repository of MSc and PhD theses
submitted between the academic years 2011 and 2019, with employment
information collected in 2020 from the professional social media platform
LinkedIn. Our results highlight that graduates generally find employment
quickly and in their field of study, with most graduates employed in either
academia or physics-related industries. We also found that a significant
portion of employed graduates, 20%, find employment in non-traditional physics
careers, such as business management and healthcare. Graduates with careers in
academia tend to have lower online connectivity than graduates with careers in
industry or non-traditional fields, suggesting that online networking is more
important for students interested in non-academic careers.
|
Bound-systems of $\Xi^-$--$^{14}_{}{\rm N}$ are studied via $\Xi^-$ capture
at rest followed by emission of a twin single-$\Lambda$ hypernucleus in the
emulsion detectors. Two events forming extremely deep $\Xi^-$ bound states were
obtained, one by a hybrid-method analysis in the E07 experiment at J-PARC and
one by reanalysis of the E373 experiment at KEK-PS. The decay mode of one event
was assigned as $\Xi^- + {}^{14}{\rm N} \to {}^{5}_{\Lambda}{\rm He} + {}^{5}_{\Lambda}{\rm He} + {}^{4}{\rm He} + n$.
Since the daughter particles have no excited states, the binding energy of the
$\Xi^-$ hyperon, $B_{\Xi^-}$, in the $^{14}{\rm N}$ nucleus was uniquely determined
to be 6.27 $\pm$ 0.27 MeV. Another $\Xi^-$--$^{14}{\rm N}$ system, decaying via
$\Xi^- + {}^{14}{\rm N} \to {}^{9}_{\Lambda}{\rm Be} + {}^{5}_{\Lambda}{\rm He} + n$,
yields a $B_{\Xi^-}$ value of 8.00 $\pm$ 0.77 MeV or 4.96 $\pm$ 0.77 MeV, where the
two possible values of $B_{\Xi^-}$ correspond to the ground and the excited states
of the daughter $^{9}_{\Lambda}{\rm Be}$ nucleus, respectively. Because the
$B_{\Xi^-}$ values are larger than those of the previously reported events
(KISO and IBUKI), which are both interpreted as the nuclear $1p$ state of the
$\Xi^-$--$^{14}_{}{\rm N}$ system, these new events give the first indication
of the nuclear $1s$ state of the $\Xi$ hypernucleus, $^{15}_{\Xi}{\rm C}$.
|
Disorder effects on topological phases in quasicrystalline systems have
recently received much attention. In this work, by numerically computing the
(spin) Bott index and the thermal conductance, we reveal the effects of
disorder on a class D chiral topological superconductor and a class DIII
time-reversal-invariant topological superconductor in a two-dimensional
Ammann-Beenker tiling quasicrystalline lattice. We demonstrate that both the
topologically protected chiral and helical Majorana edge modes are robust
against weak disorder in the quasicrystalline lattice. More fascinating is the
discovery of disorder-induced topologically nontrivial phases exhibiting chiral
and helical Majorana edge modes in class D and DIII topological superconductor
systems, respectively. Our findings open the door to research on
disorder-induced Majorana edge modes in quasicrystalline systems.
|
Inverse patchy colloids are nano- to micro-scale particles with a surface
divided into differently charged regions. This class of colloids combines
directional, selective bonding with a relatively simple particle design: owing
to the competitive interplay between the orientation-dependent attraction and
repulsion -- induced by the interactions between like/oppositely charged areas
-- experimentally accessible surface patterns are complex enough to favor the
stabilization of specific structures of interest. Most importantly, the behavior
of heterogeneously charged units can be ideally controlled by means of external
parameters, such as the pH and the salt concentration. We present a concise
review of this class of systems, spanning the range from the synthesis of
model inverse patchy particles to their self-assembly, covering their
coarse-grained modeling and the related numerical/analytical treatments.
|
With the development of earth observation technology, massive amounts of
remote sensing (RS) images are acquired. To find useful information from these
images, cross-modal RS image-voice retrieval provides a new insight. This paper
aims to study the task of RS image-voice retrieval so as to search effective
information from massive amounts of RS data. Existing methods for RS
image-voice retrieval rely primarily on the pairwise relationship to narrow the
heterogeneous semantic gap between images and voices. However, apart from the
pairwise relationship included in the datasets, the intra-modality and
non-paired inter-modality relationships should also be taken into account
simultaneously, since the semantic consistency among non-paired representations
plays an important role in the RS image-voice retrieval task. Inspired by this,
a semantics-consistent representation learning (SCRL) method is proposed for RS
image-voice retrieval. The main novelty is that the proposed method takes the
pairwise, intra-modality, and non-paired inter-modality relationships into
account simultaneously, thereby improving the semantic consistency of the
learned representations for the RS image-voice retrieval. The proposed SCRL
method consists of two main steps: 1) semantics encoding and 2)
semantics-consistent representation learning. Firstly, an image encoding
network is adopted to extract high-level image features with a transfer
learning strategy, and a voice encoding network with dilated convolution is
devised to obtain high-level voice features. Secondly, a consistent
representation space is constructed by modeling the three kinds of relationships
to narrow the heterogeneous semantic gap and learn semantics-consistent
representations across two modalities. Extensive experimental results on three
challenging RS image-voice datasets show the effectiveness of the proposed
method.
|
We have analysed the Ca-K images obtained at Kodaikanal Observatory as a
function of latitude and time for the period 1913 - 2004, covering Solar
Cycles 15 to 23. We have classified the chromospheric activity into plage,
Enhanced Network (EN), Active Network (AN), and Quiet Network (QN) areas to
differentiate between large, strong active regions and small, weak ones. The
strong active regions represent the toroidal component of the magnetic field,
and the weak active regions the poloidal component. We find that plage areas,
mostly within the 50 deg latitude belt, vary with the roughly 11-year solar
cycle. We also find that the weak activity represented by EN, AN and QN varies
with a roughly 11-year period with significant amplitude up to about 50 deg
latitude in both hemispheres. The amplitude of variation is minimum around
50 deg latitude and increases again by a small amount in the polar region. In
addition, the plots of plages, EN, AN and QN as a function of time indicate
that the maxima of activity at different latitudes occur at different epochs.
To determine the phase difference between the different latitude belts, we have
computed the cross-correlation coefficients of the other latitude belts with
the 35 deg latitude belt. We find that activity shifts from mid-latitude belts
towards equatorial belts at a fast speed at the beginning of the solar cycle
and at a slower speed as the cycle progresses. The speed of this shift varies
between approximately 19 and 3 m/s considering all the data for the observed
period. This speed can be linked with the speed of the meridional flows
believed to occur between the convection zone and the surface of the Sun.
|
This study discusses the Review Bomb, a phenomenon consisting of a massive
attack by groups of Internet users on a website that displays user reviews of
products. It has gained attention especially on websites that aggregate
numerical ratings. Although this phenomenon can be considered an example of
online misinformation, it differs from conventional review spam, which unfolds
over longer time spans. In particular, the Bomb occurs suddenly and for a short
time, because in this way it leverages the notorious cold-start problem: if
reviews are submitted by a lot of fresh new accounts, it becomes hard to justify
preventative measures. The present research work focuses on the case of The
Last of Us Part II, a video game published by Sony, which was the target of the
largest Review Bomb to date, occurring in June 2020. By performing an
observational analysis of a linguistic corpus of English reviews and the
features of their users, this study confirms that the Bomb was an ideological
attack aimed at breaking down the rating system of the platform Metacritic.
Evidence supports that the bombing had the unintended consequence of inducing a
reaction from users, ending in a marked polarisation of ratings towards extreme
values. The results not only illustrate the theory of polarity in online
reviews, but they also provide insights for research on the problem of
cold-start detection of review spam. In particular, they highlight the
relevance of detecting users who discuss contextual elements instead of the
product and users with anomalous features.
|
Arrays of nanoparticles exploited in light scattering applications commonly
feature either a periodic or a rather random arrangement of their
constituents. For the periodic case, light scattering is mostly governed by the
strong spatial correlations of the arrangement, expressed by the structure
factor. For the random case, structural correlations cancel each other out and
light scattering is mostly governed by the scattering properties of the
individual scatterer, expressed by the form factor. In contrast to these
extreme cases, we show here, for the first time, that hyperuniform disorder in
self-organized large-area arrays of high refractive index nanodisks enables
both structure and form factor to impact the resulting scattering pattern,
offering novel means to tailor light scattering. The scattering response from
our nearly hyperuniform interfaces can be exploited in a large variety of
applications and constitutes a novel class of advanced optical materials.
|
We study the clustering task under anisotropic Gaussian Mixture Models where
the covariance matrices of different clusters are unknown and not necessarily
identical. We characterize the dependence of the signal-to-noise ratio on the
cluster centers and covariance matrices and obtain the minimax lower bound for
the clustering problem. In addition, we propose a computationally feasible
procedure and prove that it achieves the optimal rate within a few iterations.
The proposed procedure is a hard EM type algorithm, and it can also be seen as
a variant of Lloyd's algorithm adjusted to anisotropic covariance matrices.
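As a hedged illustration (not the authors' code), one iteration of such a hard-EM / anisotropic-Lloyd procedure assigns each point to the cluster minimizing a Mahalanobis distance plus log-determinant term, then re-estimates the cluster means and covariances:

```python
import numpy as np

def hard_em_step(X, means, covs):
    """One hard-EM (anisotropic Lloyd) iteration: assign, then re-estimate.

    Assumes every cluster receives at least one point after assignment.
    """
    k = len(means)
    n, d = X.shape
    scores = np.empty((n, k))
    for j in range(k):
        inv = np.linalg.inv(covs[j])
        diff = X - means[j]
        # Negative log-likelihood up to constants: Mahalanobis term + log-det.
        scores[:, j] = np.einsum('ni,ij,nj->n', diff, inv, diff) \
            + np.linalg.slogdet(covs[j])[1]
    labels = scores.argmin(axis=1)
    new_means, new_covs = [], []
    for j in range(k):
        Xj = X[labels == j]
        new_means.append(Xj.mean(axis=0))
        new_covs.append(np.cov(Xj.T, bias=True) + 1e-6 * np.eye(d))  # ridge for stability
    return np.array(new_means), new_covs, labels
```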
|
Deep learning based generative adversarial networks (GAN) can effectively
perform image reconstruction with under-sampled MR data. In general, a large
number of training samples are required to improve the reconstruction
performance of a certain model. However, in real clinical applications, it is
difficult to obtain tens of thousands of raw patient datasets to train the
model, since saving k-space data is not part of the routine clinical workflow.
Therefore, enhancing the generalizability of a network trained on small samples is urgently
needed. In this study, three novel applications were explored based on parallel
imaging combined with the GAN model (PI-GAN) and transfer learning. The model
was pre-trained with public Calgary brain images and then fine-tuned for use in
(1) patients with tumors in our center; (2) different anatomies, including knee
and liver; (3) different k-space sampling masks with acceleration factors (AFs)
of 2 and 6. As for the brain tumor dataset, the transfer learning results could
remove the artifacts found in PI-GAN and yield smoother brain edges. The
transfer learning results for the knee and liver were superior to those of the
PI-GAN model trained with its own dataset using a smaller number of training
cases. However, the learning procedure converged more slowly in the knee
datasets compared to the learning in the brain tumor datasets. The
reconstruction performance was improved by transfer learning both in the models
with AFs of 2 and 6. Of these two models, the one with AF=2 showed better
results. The results also showed that transfer learning with the pre-trained
model could solve the problem of inconsistency between the training and test
datasets and facilitate generalization to unseen data.
|
Deep learning algorithms are a key component of many state-of-the-art vision
systems, especially as Convolutional Neural Networks (CNNs) outperform most
solutions in terms of accuracy. To apply such algorithms in real-time
applications, one has to address the challenges of memory and computational
complexity. To deal with the first issue, we use networks with reduced
precision, specifically a binary neural network (also known as XNOR). To
satisfy the computational requirements, we propose to use highly parallel and
low-power FPGA devices. In this work, we explore the possibility of
accelerating XNOR networks for traffic sign classification. The trained binary
networks are implemented on the ZCU 104 development board, equipped with a Zynq
UltraScale+ MPSoC device, using two different approaches. Firstly, we propose a
custom HDL accelerator for XNOR networks, which enables inference at almost
450 fps. Even better results are obtained with the second method, the Xilinx
FINN accelerator, which enables processing of input images at a frame rate of
around 550 fps. Both approaches provide over 96% accuracy on the test set.
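For background (not from the paper), the arithmetic that makes XNOR networks hardware-friendly replaces multiply-accumulate over $\{-1,+1\}$ values with bitwise XNOR followed by a popcount; a minimal sketch:

```python
import numpy as np

def xnor_popcount_dot(a_bits, w_bits):
    """Dot product of two {-1,+1} vectors encoded as {0,1} bit arrays.

    For x, w in {-1,+1}: sum(x*w) = 2*popcount(XNOR(a_bits, w_bits)) - n,
    where a_bits = (x+1)/2 and w_bits = (w+1)/2.
    """
    n = len(a_bits)
    matches = np.sum(~(a_bits ^ w_bits) & 1)  # popcount of XNOR
    return 2 * int(matches) - n

# Tiny check against the ordinary dot product.
x = np.array([1, -1, -1, 1, 1])
w = np.array([1, 1, -1, -1, 1])
assert xnor_popcount_dot((x + 1) // 2, (w + 1) // 2) == int(np.dot(x, w))
```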
|
This paper presents a novel Sliding Mode Control (SMC) algorithm to handle
mismatched uncertainties in systems via a novel Self-Learning Disturbance
Observer (SLDO). A computationally efficient SLDO is developed within a
framework of feedback-error learning scheme in which a conventional estimation
law and a Neuro-Fuzzy Structure (NFS) work in parallel. In this framework, the
NFS estimates the mismatched disturbances and becomes the leading disturbance
estimator while the former feeds the learning error to the NFS to learn system
behavior. The simulation results demonstrate that the proposed SMC based on
SLDO (SMC-SLDO) ensures the robust control performance in the presence of
mismatched time-varying uncertainties when compared to SMC, integral SMC (ISMC)
and SMC based on a Basic Nonlinear Disturbance Observer (SMC-BNDO), and also
remains the nominal control performance in the absence of mismatched
uncertainties. Additionally, the SMC-SLDO not only counteracts mismatched
time-varying uncertainties but also improve the transient response performance
in the presence of mismatched time-invariant uncertainties. Moreover, the
controller gain of the SMC-SLDO is required to be selected larger than the
upper bound of the disturbance estimation error rather than the upper bound of
the actual disturbance to guarantee the system stability which results in
eliminating the chattering effects on the control signal.
|
Graph Neural Networks (GNNs) require a relatively large number of labeled
nodes and a reliable/uncorrupted graph connectivity structure in order to
obtain good performance on the semi-supervised node classification task. The
performance of GNNs can degrade significantly as the number of labeled nodes
decreases or the graph connectivity structure is corrupted by adversarial
attacks or by noise in data measurement/collection. Therefore, it is
important to develop GNN models that are able to achieve good performance when
there is limited supervision knowledge -- a few labeled nodes and noisy graph
structures. In this paper, we propose a novel Dual GNN learning framework to
address this challenging task. The proposed framework has two GNN based node
prediction modules. The primary module uses the input graph structure to induce
regular node embeddings and predictions with a regular GNN baseline, while the
auxiliary module constructs a new graph structure through fine-grained spectral
clustering and learns new node embeddings and predictions. By integrating the
two modules in a dual GNN learning framework, we perform joint learning in an
end-to-end fashion. This general framework can be applied on many GNN baseline
models. The experimental results validate that the proposed dual GNN framework
can greatly outperform the GNN baseline methods when the labeled nodes are
scarce and the graph connectivity structure is noisy.
|
This study examines habits and perceptions related to pay to publish and open
access practices in fields that have attracted little research to date:
philosophy and ethics. The study is undertaken in the Spanish context, where
the culture of publication and the book and journal publishing industry have
some specific characteristics with regard to paying to publish, such as not
offering open access distribution of books published for a fee. The study draws
on data from a survey of 201 researchers, a public debate with 26 researchers,
and 14 in-depth interviews. The results reveal some interesting insights on the
criteria researchers apply when selecting publishers and journals for their
work, the extent of paying to publish (widespread in the case of books and
modest for journals) and the debates that arise over the effects it has on
manuscript review and unequal access to resources to cover publication fees.
Data on the extent of open access and the researchers' views on the dissemination
of publicly funded research are also presented.
|
Despite evidence for the existence of dark matter (DM) from very high and low
redshifts, a moderate amount of DM particle decay remains a valid possibility.
This includes both models with very long-lived yet unstable particles and mixed
scenarios where only a small fraction of dark matter is allowed to decay. In
this paper, we investigate how DM particles decaying into radiation affect
non-linear structure formation. We look at the power spectrum and its redshift
evolution, varying both the decay lifetime ($\tau$) and the fraction of
decaying-to-total dark matter ($f$), and we propose a fitting function that
reaches sub-percent precision below $k\sim10$ h/Mpc. Based on this fit, we
perform a forecast analysis for a Euclid-like weak lensing (WL) survey,
including both massive neutrino and baryonic feedback parameters. We find that
with WL observations alone, it is possible to rule out decay lifetimes smaller
than $\tau=75$ Gyr (at 95 percent CL) for the case that all DM is unstable.
This constraint improves to $\tau=182$ Gyr if the WL data is combined with CMB
priors from the Planck satellite and to $\tau=275$ Gyr if we further assume
baryonic feedback to be fully constrained by upcoming Sunyaev-Zeldovich or
X-ray data. The latter shows a factor of 3.2 improvement compared to
constraints from CMB data alone. Regarding the scenario of a strongly decaying
sub-component of dark matter with $\tau\sim 30$ Gyr or lower, it will be
possible to rule out a decaying-to-total fraction of $f>0.49$, $f>0.21$, and
$f>0.13$ (at the 95 percent CL) for the same three scenarios. We conclude that
the upcoming stage-IV WL surveys will allow us to significantly improve current
constraints on the stability of the dark matter sector.
|
The inscribed angle theorem, a famous result about the angle subtended by a
chord within a circle, is well known and commonly taught in school curricula.
In this paper, we present a generalisation of this result (and other related
circle theorems) to the rectangular hyperbola. The notion of angle is replaced
by pseudo-angle, defined via the Minkowski inner product. Indeed, in Minkowski
space, the unit hyperbola is the set of points a unit metric distance from the
origin, analogous to the Euclidean unit circle. While this is a result of pure
geometrical interest, the connection to Minkowski space allows an
interpretation in terms of special relativity where, in the limit $c\to\infty$,
it leads to a familiar result from non-relativistic dynamics. This
non-relativistic result can be interpreted as an inscribed angle theorem for
the parabola, which we show can also be obtained from the Euclidean inscribed
angle theorem by taking the limit of a family of ellipses analogous to the
non-relativistic limit $c\to\infty$. This simple result could be used as a
pedagogical example to consolidate understanding of pseudo-angles in
non-Euclidean spaces or to demonstrate the power of analytic continuation.
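As a hedged sketch of the key definition (standard Minkowski conventions, which may differ from the paper's): with the inner product $\langle u, v\rangle = u_t v_t - u_x v_x$, the pseudo-angle (rapidity) $\eta$ between two future-pointing timelike unit vectors is defined by
$$\cosh\eta \;=\; \langle u, v\rangle,$$
in analogy with $\cos\theta = u\cdot v$ for Euclidean unit vectors, and the unit hyperbola $\langle x, x\rangle = 1$ plays the role of the Euclidean unit circle.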
|
Human pose estimation is a major computer vision problem with applications
ranging from augmented reality and video capture to surveillance and movement
tracking. In the medical context, the latter may be an important biomarker for
neurological impairments in infants. Whilst many methods exist, their
application has been limited by the need for well annotated large datasets and
the inability to generalize to humans of different shapes and body
compositions, e.g. children and infants. In this paper we present a novel
method for learning pose estimators for human adults and infants in an
unsupervised fashion. We approach this as a learnable template matching problem
facilitated by deep feature extractors. Human-interpretable landmarks are
estimated by transforming a template consisting of predefined body parts that
are characterized by 2D Gaussian distributions. Enforcing a connectivity prior
guides our model to meaningful human shape representations. We demonstrate the
effectiveness of our approach on two different datasets including adults and
infants.
|
The Rindler spacetime describing a family of accelerating observers is Ricci
flat, but it still exhibits novel optical effects. Within the WKB approximation,
we derive the light geodesics in the Rindler frame based on the covariant wave
equation and the geodesic equations. We then use the ABCD matrix method of
optics to explore the propagation characteristics in the Rindler frame, thus
linking three different optical transformation settings (geometry, gravity and
vacuum refractive index) together. Moreover, the propagation characteristics of
a hollow beam in Rindler spacetime are described analytically. These
characteristics are quite different from those in flat spacetime. Based on
these calculations, we demonstrate the position uncertainty relation between
the transverse beam size and the momentum, which surprisingly coincides with
the derivation from quantization. We hope to provide a simple method for
analyzing beam propagation in an accelerated frame.
|
We present a novel approach for disentangling the content of a text image
from all aspects of its appearance. The appearance representation we derive can
then be applied to new content, for one-shot transfer of the source style to
new content. We learn this disentanglement in a self-supervised manner. Our
method processes entire word boxes, without requiring segmentation of text from
background, per-character processing, or making assumptions on string lengths.
We show results in different text domains which were previously handled by
specialized methods, e.g., scene text, handwritten text. To these ends, we make
a number of technical contributions: (1) We disentangle the style and content
of a textual image into a non-parametric, fixed-dimensional vector. (2) We
propose a novel approach inspired by StyleGAN but conditioned on the example
style at different resolutions and on the content. (3) We present novel self-supervised
training criteria which preserve both source style and target content using a
pre-trained font classifier and text recognizer. Finally, (4) we also introduce
Imgur5K, a new challenging dataset for handwritten word images. We offer
numerous qualitative photo-realistic results of our method. We further show
that our method surpasses previous work in quantitative tests on scene text and
handwriting datasets, as well as in a user study.
|
In this paper we present our system for the detection and classification of
acoustic scenes and events (DCASE) 2020 Challenge Task 4: Sound event detection
and separation in domestic environments. We introduce two new models: the
forward-backward convolutional recurrent neural network (FBCRNN) and the
tag-conditioned convolutional neural network (CNN). The FBCRNN employs two
recurrent neural network (RNN) classifiers sharing the same CNN for
preprocessing. With one RNN processing a recording in forward direction and the
other in backward direction, the two networks are trained to jointly predict
audio tags, i.e., weak labels, at each time step within a recording, given that
at each time step they have jointly processed the whole recording. The proposed
training encourages the classifiers to tag events as soon as possible.
Therefore, after training, the networks can be applied to shorter audio
segments of, e.g., 200 ms, allowing sound event detection (SED). Further, we
propose a tag-conditioned CNN to complement SED. It is trained to predict
strong labels while using (predicted) tags, i.e., weak labels, as additional
input. For training, pseudo strong labels from an FBCRNN ensemble are used. The
presented system scored fourth and third place in the systems and teams
rankings, respectively. Subsequent improvements allow our system to even
outperform the challenge baseline and winner systems on average by 18.0% and
2.2% event-based F1-score, respectively, on the validation set. Source
code is publicly available at https://github.com/fgnt/pb_sed.
|
Block-sparse signal recovery without knowledge of block sizes and boundaries,
such as those encountered in multi-antenna mmWave channel models, is a hard
problem for compressed sensing (CS) algorithms. We propose a novel Sparse
Bayesian Learning (SBL) method for block-sparse recovery based on popular CS
regularizers whose input variable is related to the total variation
(TV). Contrary to conventional approaches that impose the regularization on the
signal components, we regularize the SBL hyperparameters. This iterative
TV-regularized SBL algorithm employs a majorization-minimization approach and
reduces each iteration to a convex optimization problem, enabling a flexible
choice of numerical solvers. The numerical results illustrate that the
TV-regularized SBL algorithm is robust to the nature of the block structure and
able to recover signals with both block-patterned and isolated components,
proving useful for various signal recovery systems.
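To illustrate the idea in symbols of our own choosing (the paper's exact objective may differ): rather than penalizing the signal $x$, a TV-type penalty is placed on the SBL hyperparameters $\gamma_i$, encouraging neighbouring variances to be similar and hence block-like supports,
$$\min_{\gamma \,\ge\, 0}\; \mathcal{L}_{\mathrm{SBL}}(\gamma) \;+\; \lambda \sum_{i} \bigl|\gamma_{i+1} - \gamma_{i}\bigr|,$$
where $\mathcal{L}_{\mathrm{SBL}}$ denotes the SBL evidence-based cost and $\lambda$ the regularization weight; each majorization-minimization step then replaces this by a convex surrogate.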
|
Population synthesis studies of binary black-hole mergers often lack robust
black-hole spin estimates as they cannot accurately follow tidal spin-up during
the late black-hole-Wolf-Rayet evolutionary phase. We provide an analytical
approximation of the dimensionless second-born black-hole spin given the binary
orbital period and Wolf-Rayet stellar mass at helium depletion or carbon
depletion. These approximations are obtained from fitting a sample of around
$10^5$ detailed MESA simulations that follow the evolution and spin up of close
black-hole--Wolf-Rayet systems with metallicities in the range
$[10^{-4},1.5Z_\odot]$. Following the potential spin up of the Wolf-Rayet
progenitor, the second-born black-hole spin is calculated using up-to-date core
collapse prescriptions that account for any potential disk formation in the
collapsing Wolf-Rayet star. The fits for second-born black hole spin provided
in this work can be readily applied to any astrophysical modeling that relies
on rapid population synthesis, and will be useful for the interpretation of
gravitational-wave sources using such models.
|
Vector representations have become a central element in semantic language
modelling, leading to mathematical overlaps with many fields including quantum
theory. Compositionality is a core goal for such representations: given
representations for 'wet' and 'fish', how should the concept 'wet fish' be
represented?
This position paper surveys this question from two points of view. The first
considers the question of whether an explicit mathematical representation can
be successful using only tools from within linear algebra, or whether other
mathematical tools are needed. The second considers whether semantic vector
composition should be explicitly described mathematically, or whether it can be
a model-internal side-effect of training a neural network.
A third and newer question is whether a compositional model can be
implemented on a quantum computer. Given the fundamentally linear nature of
quantum mechanics, we propose that these questions are related, and that this
survey may help to highlight candidate operations for future quantum
implementation.
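To make the 'wet fish' question concrete, a hedged toy illustration (not taken from the paper) of three compositions available within linear algebra:

```python
import numpy as np

rng = np.random.default_rng(0)
wet, fish = rng.normal(size=4), rng.normal(size=4)  # toy word vectors

additive = wet + fish           # same dimension, order-insensitive
hadamard = wet * fish           # same dimension, feature-wise intersection
tensor   = np.outer(wet, fish)  # d*d entries, keeps relational structure

print(additive.shape, hadamard.shape, tensor.shape)  # (4,) (4,) (4, 4)
```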
|
Combination and aggregation techniques can significantly improve forecast
accuracy. This also holds for probabilistic forecasting methods where
predictive distributions are combined. There are several time-varying and
adaptive weighting schemes such as Bayesian model averaging (BMA). However, the
quality of different forecasts may vary not only over time but also within the
distribution. For example, some distribution forecasts may be more accurate in
the center of the distributions, while others are better at predicting the
tails. Therefore, we introduce a new weighting method that considers the
differences in performance over time and within the distribution. We discuss
pointwise combination algorithms based on aggregation across quantiles that
optimize with respect to the continuous ranked probability score (CRPS). After
analyzing the theoretical properties of pointwise CRPS learning, we discuss B-
and P-Spline-based estimation techniques for batch and online learning, based
on quantile regression and prediction with expert advice. We prove that the
proposed fully adaptive Bernstein online aggregation (BOA) method for pointwise
CRPS online learning has optimal convergence properties. They are confirmed in
simulations and a probabilistic forecasting study for European emission
allowance (EUA) prices.
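For background, pointwise CRPS learning rests on a standard identity (stated here in our notation): the CRPS of a predictive distribution $F$ at an observation $y$ decomposes into pinball (quantile) losses across all probability levels,
$$\mathrm{CRPS}(F, y) \;=\; 2\int_{0}^{1} \mathrm{QL}_{\tau}\!\bigl(F^{-1}(\tau),\, y\bigr)\, d\tau, \qquad \mathrm{QL}_{\tau}(q, y) = \bigl(\mathbf{1}\{y \le q\} - \tau\bigr)(q - y),$$
so combining forecasts quantile-by-quantile, with weights that may vary across $\tau$, optimizes the CRPS pointwise across the distribution.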
|
This paper presents the definition and implementation of a quantum computer
architecture to enable creating a new computational device - a quantum computer
as an accelerator. In this paper, we present explicitly the idea of a quantum
accelerator which contains the full stack of the layers of an accelerator. Such
a stack starts at the highest level, describing the target application of the
accelerator. It is important to realise that the qubits are defined as perfect
qubits, implying that they do not decohere and perform quantum gate operations perfectly. The
next layer abstracts the quantum logic outlining the algorithm that is to be
executed on the quantum accelerator. In our case, the logic is expressed in the
universal quantum-classical hybrid computation language developed in the group,
called OpenQL. We also have to start thinking about how to verify, validate and
test the quantum software such that the compiler generates a correct version of
the quantum circuit. The OpenQL compiler translates the program to a common
assembly language, called cQASM. We need to develop a quantum operating system
that manages all the hardware of the micro-architecture. The layer below the
micro-architecture is responsible for the mapping and routing of the qubits on
the topology such that the nearest-neighbour constraint can be respected. At
any moment in the future when we are capable of generating multiple good
qubits, the compiler can convert the cQASM to generate the eQASM, which is
executable on a particular experimental device incorporating the
platform-specific parameters. This way, we are able to distinguish clearly the
experimental research towards better qubits, and the industrial and societal
applications that need to be developed and executed on a quantum device.
|
Correspondence-based rotation search and point cloud registration are two
fundamental problems in robotics and computer vision. However, the presence of
outliers, sometimes even occupying the great majority of the putative
correspondences, can make many existing algorithms either fail or have very
high computational cost. In this paper, we present RANSIC (RANdom Sampling with
Invariant Compatibility), a fast and highly robust method applicable to both
problems based on a new paradigm combining random sampling with invariance and
compatibility. Generally, RANSIC starts with randomly selecting small subsets
from the correspondence set, then seeks potential inliers as graph vertices
from the random subsets through the compatibility tests of invariants
established in each problem, and eventually returns the eligible inliers when
there exists at least one K-degree vertex (K is automatically updated depending
on the problem) and the residual errors satisfy a certain termination condition
at the same time. In multiple synthetic and real experiments, we demonstrate
that RANSIC is fast in practice, robust against over 95% outliers, and able to
recall approximately 100% of the inliers, outperforming other state-of-the-art
solvers for both the rotation search and the point cloud registration problems.
|
Computational models of biological processes provide one of the most powerful
methods for a detailed analysis of the mechanisms that drive the behavior of
complex systems. Logic-based modeling has enhanced our understanding and
interpretation of those systems. Defining rules that determine how the output
activity of biological entities is regulated by their respective inputs has
proven to be challenging, due to increasingly larger models and the presence of
noise in data, allowing multiple model parameterizations to fit the
experimental observations.
We present several Boolean function metrics that provide modelers with the
appropriate framework to analyze the impact of a particular model
parameterization. We demonstrate the link between a semantic characterization
of a Boolean function and its consistency with the model's underlying
regulatory structure. We further define the properties that outline such
consistency and show that several of the Boolean functions under study violate
them, questioning their biological plausibility and subsequent use. We also
illustrate that regulatory functions can have major differences with regard to
their asymptotic output behavior, with some of them being biased towards
specific Boolean outcomes while others depend on the ratio between
activating and inhibitory regulators.
Application results show that in a specific signaling cancer network, the
function bias can be used to guide the choice of logical operators for a model
that matches data observations. Moreover, graph analysis indicates that the
standardized Boolean function bias becomes more prominent with increasing
numbers of regulators, confirming the fact that rule specification can
effectively determine regulatory outcome despite the complex dynamics of
biological networks.
|
The application of machine learning (ML) and genetic programming (GP) to the
image compression domain has produced promising results in many cases. The need
for compression arises due to the exorbitant size of data shared on the
internet. Compression is required for text, videos, and images, which are used
almost everywhere on the web, be it in news articles, social media posts,
blogs, educational platforms, the medical domain, government services, or many
other websites; all of this content must be packetized for transmission, and
compression is therefore necessary to avoid overwhelming the network. This
paper discusses some implementations of image compression algorithms that use
techniques such as Artificial Neural Networks, Residual Learning, Fuzzy Neural
Networks, Convolutional Neural Nets, Deep Learning, and Genetic Algorithms. The
paper also describes an implementation of Vector Quantization using a GA to
generate a codebook that is used for lossy image compression. All these
approaches contrast strongly with the standard approaches to processing images,
owing to the highly parallel and computationally intensive nature of machine
learning algorithms. Such non-linear abilities of ML and GP make them widely
popular for use in multiple domains. Traditional approaches are also combined
with artificially intelligent systems, leading to hybrid systems, to achieve
better results.
|
The thermodynamic properties of Bi-Sn were studied at 600 and 900 K using a
quasi-lattice theory. After successful fitting of the Gibbs free energies of
mixing and the thermodynamic activities, the fitting parameters were used to
investigate the enthalpy of mixing, the entropy of mixing, concentration
fluctuations, the Warren-Cowley short range order parameter, surface
concentrations and surface tensions of the binary systems. Positive and
symmetrically shaped enthalpies of mixing were observed over the whole
composition range, while negative excess entropies of mixing were observed.
Bi-Sn showed a slight preference for like atoms as nearest neighbours over the
whole composition range. The nature of atomic order in Bi-Sn at 600 and 900 K
appeared similar. The highest tendency for homocoordination exists at the
composition where the mole fraction of Bi is about 40%.
It was also observed that Bi (whose surface tension is lower than that of Sn)
has the highest surface enrichment in the Bi-Sn systems. Unlike many previous
applications of the quasi-lattice theory where constant values were used to
approximate coordination numbers, temperature and composition-dependent
coordination numbers were applied in this work.
|
We have performed ab-initio molecular dynamics simulations to elucidate the
mechanism of the phase transition at high pressure from hexagonal graphite (HG)
to hexagonal diamond (HD) or to cubic diamond (CD). The transition from HG to
HD is found to occur swiftly in very small time of 0.2 ps, with large
cooperative displacements of all the atoms. We observe that alternate layers of
atoms in HG slide in opposite directions by (1/3, 1/6, 0) and (-1/3, -1/6, 0),
respectively, which is about 0.7 {\AA} along the pm[2, 1, 0] direction, while
simultaneously puckering by about pm0.25 {\AA} perpendicular to the a-b plane.
The transition from HG to CD occurred with more complex cooperative
displacements. In this case, six successive HG layers slide in pairs by 1/3
along [0, 1, 0], [-1, -1, 0] and [1, 0, 0], respectively along with the
puckering as above. We have also performed calculations of the phonon spectrum
in HG at high pressure, which reveal soft phonon modes that may facilitate the
phase transition involving the sliding and puckering of the HG layers. The
zero-point vibrational energy and the vibrational entropy are found to play an
important role in stabilizing HG up to higher pressures (>10 GPa) and
temperatures than those estimated (<6 GPa) from previous enthalpy calculations.
|
Recent medical imaging studies have given rise to distinct but inter-related
datasets corresponding to multiple experimental tasks or longitudinal visits.
Standard scalar-on-image regression models that fit each dataset separately are
not equipped to leverage information across inter-related images, and existing
multi-task learning approaches are compromised by the inability to account for
the noise that is often observed in images. We propose a novel joint
scalar-on-image regression framework involving wavelet-based image
representations with grouped penalties that are designed to pool information
across inter-related images for joint learning, and which explicitly accounts
for noise in high-dimensional images via a projection-based approach. In the
presence of non-convexity arising due to noisy images, we derive non-asymptotic
error bounds under non-convex as well as convex grouped penalties, even when
the number of voxels increases exponentially with sample size. A projected
gradient descent algorithm is used for computation, which is shown to
approximate the optimal solution via well-defined non-asymptotic optimization
error bounds under noisy images. Extensive simulations and application to a
motivating longitudinal Alzheimer's disease study illustrate significantly
improved predictive ability and greater power to detect true signals, which are
simply missed by existing methods without noise correction due to the
attenuation-to-null phenomenon.
|
The availability of multi-omics data has revolutionized the life sciences by
creating avenues for integrated system-level approaches. Data integration links
the information across datasets to better understand the underlying biological
processes. However, high-dimensionality, correlations and heterogeneity pose
statistical and computational challenges. We propose a general framework,
probabilistic two-way partial least squares (PO2PLS), which addresses these
challenges. PO2PLS models the relationship between two datasets using joint and
data-specific latent variables. For maximum likelihood estimation of the
parameters, we implement a fast EM algorithm and show that the estimator is
asymptotically normally distributed. A global test for testing the relationship
between two datasets is proposed, and its asymptotic distribution is derived.
Notably, several existing omics integration methods are special cases of
PO2PLS. Via extensive simulations, we show that PO2PLS performs better than
alternatives in feature selection and prediction performance. In addition, the
asymptotic distribution appears to hold when the sample size is sufficiently
large. We illustrate PO2PLS with two examples from commonly used study designs:
a large population cohort and a small case-control study. Besides recovering
known relationships, PO2PLS also identified novel findings. The methods are
implemented in our R-package PO2PLS. Supplementary materials for this article
are available online.
|
Hard sphere systems are often used to model simple fluids. The configuration
spaces of hard spheres in a three-dimensional torus modulo various symmetry
groups are comparatively simple, and could provide valuable information about
the nature of phase transitions. Specifically, the topological changes in the
configuration space as a function of packing fraction have been conjectured to
be related to the onset of first-order phase transitions. The critical
configurations for one to twelve spheres are sampled using a Morse-theoretic
approach, and are available in an online, interactive database. Explicit
triangulations are constructed for the configuration spaces of the two sphere
system, and their topological and geometric properties are studied. The
critical configurations are found to be associated with geometric changes to
the configuration space that connect previously distant regions and reduce the
configuration space diameter as measured by the commute time and diffusion
distances. The number of such critical configurations around the packing
fraction of the solid-liquid phase transition increases exponentially with the
number of spheres, suggesting that the onset of the first-order phase
transition in the thermodynamic limit is associated with a discontinuity in the
configuration space diameter.
|
Answering a long standing question, we give an example of a Hilbert module
and a nonzero bounded right linear map having a kernel with trivial orthogonal
complement. In particular, this kernel is different from its own double
orthogonal complement.
|
Neural networks often require large amounts of data to generalize and can be
ill-suited for modeling small and noisy experimental datasets. Standard network
architectures trained on scarce and noisy data will return predictions that
violate the underlying physics. In this paper, we present methods for embedding
even--odd symmetries and conservation laws in neural networks and propose novel
extensions and use cases for physical constraint embedded neural networks. We
design an even--odd decomposition architecture for disentangling a neural
network parameterized function into its even and odd components and demonstrate
that it can accurately infer symmetries without prior knowledge. We highlight
the noise resilient properties of physical constraint embedded neural networks
and demonstrate their utility as physics-informed noise regulators. Here we
employed a conservation of energy constraint embedded network as a
physics-informed noise regulator for a symbolic regression task. We showed that
our approach returns a symbolic representation of the neural network
parameterized function that aligns well with the underlying physics while
outperforming a baseline symbolic regression approach.
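A minimal sketch of the even--odd decomposition idea in general (our illustration, not the authors' architecture): any function represented by a network can be split into even and odd parts by symmetrizing over the sign of the input.

```python
import numpy as np

def even_odd_parts(f, x):
    """Split f into its even and odd components: f = f_even + f_odd."""
    f_even = 0.5 * (f(x) + f(-x))   # invariant under x -> -x
    f_odd  = 0.5 * (f(x) - f(-x))   # changes sign under x -> -x
    return f_even, f_odd

# Toy "network": a mix of an even term (x^2) and an odd term (sin x).
f = lambda x: x**2 + np.sin(x)
x = np.linspace(-2, 2, 5)
fe, fo = even_odd_parts(f, x)
assert np.allclose(fe, x**2) and np.allclose(fo, np.sin(x))
```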
|
Small form-factor, narrowband, and highly directive antennas are of critical
importance in a variety of applications spanning wireless communications,
remote sensing, Raman spectroscopy, and single photon emission enhancement.
Surprisingly, we show that the classical directivity limit can be appreciably
surpassed for electrically small multilayer spherical antennas excited by a
point electric dipole even if limiting ourselves to purely dielectric
materials. Experimentally feasible designs of superdirective antennas are
established by using a stochastic optimization algorithm combined with a
rigorous analytic solution.
|
Recently, [JHEP 20 131 (2020)] obtained (a similar, scaled version of) the
($a,b$)-phase diagram derived from the Kazakov--Zinn-Justin solution of the
Hermitian two-matrix model with interactions \[\mathrm{Tr\,}\Big\{\frac{a}{4}
(A^4+B^4)+\frac{b}{2} ABAB\Big\}\,,\] starting from Functional Renormalization.
We comment on something unexpected: the phase diagram of [JHEP 20 131 (2020)]
is based on a $\beta_b$-function that does not have the one-loop structure of
the Wetterich-Morris Equation. This raises the question of how to reproduce the
phase diagram from a set of $\beta$-functions that is, in its totality,
consistent with Functional Renormalization. A non-minimalist, yet simple
truncation that could lead to the phase diagram is provided. Additionally, we
identify the ensemble for which the result of op. cit. would be entirely
correct.
|
Climate change, which is now considered one of the biggest threats to
humanity, is also the reason behind various other environmental concerns.
Continued negligence might lead us to an irreparably damaged environment. After
the partial failure of the Paris Agreement, it is quite evident that we as
individuals need to come together to bring about a change on a large scale to
have a significant impact. This paper discusses our approach to obtaining
a realistic measure of the carbon footprint index consumed by a user
through day-to-day activities, tracked via a smartphone app, and to offering
incentives through weekly and monthly leaderboard rankings along with a reward
system. The app helps ease decision making on tasks like travel, shopping, and
electricity consumption, and offers a different, more quantitative perspective
on daily choices.
|
In this article we present recent advances on interval methods for rigorous
computation of Poincar\'e maps. We also discuss the impact of the choice of
Poincar\'e section and coordinate system on the bounds obtained when computing
Poincar\'e maps near fixed points.
|
Foraminifera are single-celled marine organisms that construct shells that
remain as fossils in the marine sediments. Classifying and counting these
fossils are important in e.g. paleo-oceanographic and -climatological research.
However, the identification and counting process has been performed manually
since the 1800s and is laborious and time-consuming. In this work, we present a
deep learning-based instance segmentation model for classifying, detecting, and
segmenting microscopic foraminifera. Our model is based on the Mask R-CNN
architecture, using model weight parameters that were learned on the COCO
detection dataset. We use a fine-tuning approach to adapt the parameters on a
novel object detection dataset of more than 7000 microscopic foraminifera and
sediment grains. The model achieves a (COCO-style) average precision of $0.78
\pm 0.00$ on the classification and detection task, and $0.80 \pm 0.00$ on the
segmentation task. When the model is evaluated without challenging sediment
grain images, the average precision for both tasks increases to $0.84 \pm 0.00$
and $0.86 \pm 0.00$, respectively. Prediction results are analyzed both
quantitatively and qualitatively and discussed. Based on our findings we
propose several directions for future work, and conclude that our proposed
model is an important step towards automating the identification and counting
of microscopic foraminifera.
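A hedged sketch of the fine-tuning pattern described here, using the standard torchvision recipe rather than the authors' actual code (the class count and hidden-layer size below are placeholders): load COCO-pretrained Mask R-CNN weights and swap the box and mask heads for the new classes.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_finetune_model(num_classes):
    """Mask R-CNN with COCO-pretrained weights and new heads for num_classes."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)

    # Replace the box-classification head.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask-prediction head.
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
    return model

# Hypothetical label set: background plus foraminifera classes and sediment grains.
model = build_finetune_model(num_classes=5)
```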
|
For the observational modeling of horizontal abundance distributions and of
magnetic geometries in chemically peculiar (CP) stars, Zeeman Doppler mapping
(ZDM) has become the method of choice. Comparisons between abundance maps
obtained for CP stars and predictions from numerical simulations of atomic
diffusion have always proved unsatisfactory, with the blame routinely put on
theory. Expanding a previous study aimed at clarifying the question of the
uniqueness of ZDM maps, this paper inverts the roles between observational
modeling and time-dependent diffusion results, casting a cold eye on essential
assumptions and algorithms underlying ZDM, in particular the Tikhonov-style
regularization functionals, from 1D to 3D. We show that these have been
established solely for mathematical convenience, but that they in no way
reflect the physical reality in the atmospheres of magnetic CP stars.
Recognizing that the observed strong magnetic fields in most well-mapped stars
require the field geometry to be force-free, we demonstrate that many published
maps do not meet this condition. There follows a discussion of the frequent
changes in magnetic and abundance maps of well observed stars and a caveat
concerning the use of least squares deconvolution in ZDM analyses. It emerges
that because of the complexity and non-linearity of the field-dependent
chemical stratifications, Tikhonov based ZDM inversions cannot recover the true
abundance and magnetic geometries. As our findings additionally show, there is
no way to define a physically meaningful 3D regularization functional instead.
ZDM remains dysfunctional and does not provide any observational constraints
for the modeling of atomic diffusion.
|
In this paper, we propose different algorithms for the solution of a tensor
linear discrete ill-posed problem arising in the application of the meshless
method for solving PDEs in three-dimensional space using multiquadric radial
basis functions. It is well known that the truncated singular value
decomposition (TSVD) is the most common effective solver for ill-conditioned
systems, but unfortunately the operation count for solving a linear system with
the TSVD is computationally expensive for large-scale matrices. In the present
work, we propose algorithms based on the use of the well known Einstein product
of two tensors to define the tensor global Arnoldi and the tensor Golub-Kahan
bidiagonalization algorithms. Using the so-called Tikhonov regularization
technique, we will be able to provide computable approximate regularized
solutions in a few iterations.
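As background, in our notation (which may differ from the paper's), Tikhonov regularization replaces the ill-posed tensor system by a nearby well-posed least-squares problem; with $*_N$ denoting the Einstein product, one solves
$$\min_{\mathcal{X}} \;\bigl\| \mathcal{A} *_N \mathcal{X} - \mathcal{B} \bigr\|_F^{2} \;+\; \mu \bigl\| \mathcal{X} \bigr\|_F^{2},$$
where $\mu > 0$ balances data fidelity against the size of the solution, and the tensor Arnoldi or Golub-Kahan iterations build low-dimensional subspaces in which this problem can be solved cheaply.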
|
Fairness is an important property in data-mining applications, including
recommender systems. In this work, we investigate a case where users of a
recommender system need (or want) to be fair to a protected group of items. For
example, in a job market, the user is the recruiter, an item is the job seeker,
and the protected attribute is gender or race. Even if recruiters want to use a
fair talent recommender system, the platform may not provide a fair recommender
system, or recruiters may not be able to ascertain whether the recommender
system's algorithm is fair. In this case, recruiters cannot utilize the
recommender system, or they may become unfair to job seekers. In this work, we
propose methods to enable the users to build their own fair recommender
systems. Our methods can generate fair recommendations even when the platform
does not (or cannot) provide fair recommender systems. The key challenge is
that a user does not have access to the log data of other users or the latent
representations of items. This restriction prohibits us from adopting existing
methods, which are designed for platforms. The main idea is that a user has
access to unfair recommendations provided by the platform. Our methods leverage
the outputs of an unfair recommender system to construct a new fair recommender
system. We empirically validate that our proposed method substantially improves
fairness while sacrificing little of the original unfair system's performance.
|
We provide a categorical interpretation for _escrows_, i.e., trading protocols
in a trustless environment, where the exchange between two agents is mediated
by a third party and the buyer locks the money until they receive the goods
they want from the seller. A simplified escrow system can be modeled as a
certain kind of _optic_ in a monoidal category $\mathcal M$ (e.g., the category
of sets with cartesian product); escrows can be regarded as morphisms of a
category $\mathcal E(\mathcal M)$, with the same objects as $\mathcal M$, and
where the
hom-objects are $\langle X , Y \rangle = \mathsf{Opt}_{\mathcal M}(\left[
\begin{smallmatrix} Y \\ X \end{smallmatrix} \right], \left[
\begin{smallmatrix} X \\ Y \end{smallmatrix} \right])$. When $X$ is a comonoid
and $Y$ is a monoid in $\mathcal M$, $\mathcal E(\mathcal M)(X,Y)$ is a monoid
in $\mathsf{Set}$ (or in the base of enrichment chosen to model one's specific
problem), acting on the set of optics $\left[ \begin{smallmatrix} B \\ B
\end{smallmatrix} \right] \to \left[ \begin{smallmatrix} X \\ Y
\end{smallmatrix} \right]$. Moreover, we define a map $$\lhd : \langle Y , X
\rangle \times \mathsf{Opt}(\left[ \begin{smallmatrix} Y \\ X \end{smallmatrix}
\right], \left[ \begin{smallmatrix} B \\ B \end{smallmatrix} \right]) \to
\mathsf{Opt}(\left[ \begin{smallmatrix} Y \\ X \end{smallmatrix} \right],
\left[ \begin{smallmatrix}{X\otimes B}\\ {Y\otimes B} \end{smallmatrix}
\right])$$ having action-like properties. This has the following
interpretation: the object $B$ acts as an intermediary in a transaction between
$X$ and $Y$, modeled by an escrow in $\langle Y , X \rangle$.
|
We study the mean properties of a large representative sample of 217 galaxies
showing CIII] emission at $2<z<4$, selected from a parent sample of $\sim$750
main-sequence star-forming galaxies in the VANDELS survey. These CIII] emitters
have a broad range of UV luminosities, thus allowing a detailed stacking
analysis to characterize their stellar mass, star formation rate (SFR) and
stellar metallicity, as a function of the UV emission line ratios, EWs, and the
carbon-to-oxygen (C/O) abundance ratio. Reliable CIII] detections represent
$\sim$30% of the parent sample. Extreme CIII] emitters
(EW(CIII])$\gtrsim$8\r{A}) are exceedingly rare ($\sim$3%) in VANDELS. The UV
line ratios of the sample suggest no ionization source other than massive
stars. Stacks with larger EW(CIII]) show larger EW(Ly$\alpha$) and lower
metallicity, but not all CIII] emitters are Ly$\alpha$ emitters. The stellar
metallicities of CIII] emitters are not significantly different from those of
the parent sample, increasing from $\sim$10% to $\sim$40% solar for stellar
masses $\log$(M$_{\star}$/M$_{\odot})\sim$9-10.5. The stellar mass-metallicity
relation of the CIII] emitters is consistent with previous works showing strong
evolution from $z=0$ to $z\sim3$. The C/O abundances of the sample range from
35% to 150% solar, with a noticeable increase with FUV luminosity and a smooth
decrease with the CIII] EW. We discuss the CIII] emitters in the C/O-Fe/H and
the C/O-O/H planes and find they follow stellar and nebular abundance trends
consistent with those of Milky Way halo and thick disc stars and local HII
galaxies, respectively. A qualitative agreement is also found with chemical
evolution models, which suggests that CIII] emitters at $z\sim$3 are
experiencing an active phase of chemical enrichment.
|
We describe models focused on the understudied problem of translating between
monolingual and code-mixed language pairs. More specifically, we offer a wide
range of models that convert monolingual English text into Hinglish (code-mixed
Hindi and English). Given the recent success of pretrained language models, we
also test the utility of two recent Transformer-based encoder-decoder models
(i.e., mT5 and mBART) on the task, finding both to work well. Given the paucity
of training data for code-mixing, we also propose a dependency-free method for
generating code-mixed texts from bilingual distributed representations that we
exploit for improving language model performance. In particular, armed with
this additional data, we adopt a curriculum learning approach where we first
finetune the language models on synthetic data and then on gold code-mixed data. We
find that, although simple, our synthetic code-mixing method is competitive
with (and in some cases is even superior to) several standard methods
(backtranslation, a method based on equivalence constraint theory) under a
diverse set of conditions. Our work shows that the mT5 model, finetuned
following the curriculum learning procedure, achieves the best translation
performance (12.67 BLEU). Our models place first in the overall ranking of the
English-Hinglish official shared task.
|
Open conjectures state that, for every $x\in[0,1]$, the orbit
$\left(x_n\right)_{n=1}^\infty$ of the mean-median recursion
$$x_{n+1}=(n+1)\cdot\mathrm{median}\left(x_1,\ldots,x_{n}\right)-\left(x_1+\cdots+x_n\right),\quad
n\geqslant 3,$$ with initial data $\left(x_1,x_2,x_3\right)=(0,x,1)$, is
eventually constant, and that its transit time and limit functions (of $x$) are
unbounded and continuous, respectively. In this paper we prove that, for the
slightly modified recursion
$$x_{n+1}=n\cdot\mathrm{median}\left(x_1,\ldots,x_{n}\right)-\left(x_1+\cdots+x_n\right),\quad
n\geqslant 3,$$ first suggested by Akiyama, the transit time function is
unbounded but the limit function is discontinuous.
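For illustration, a minimal Python sketch (not from the paper) that iterates Akiyama's modified recursion for initial data $(0,x,1)$ in exact rational arithmetic; the stopping rule is a heuristic, not a proof of eventual constancy.

```python
from fractions import Fraction
from statistics import median

def modified_orbit(x, max_terms=200):
    # x_{n+1} = n * median(x_1..x_n) - (x_1 + ... + x_n), starting from (0, x, 1)
    xs = [Fraction(0), Fraction(x), Fraction(1)]
    for n in range(3, max_terms):
        xs.append(n * median(xs) - sum(xs))
        # heuristic stopping rule: the last three terms agree
        if xs[-1] == xs[-2] == xs[-3]:
            break
    return xs

print(modified_orbit(Fraction(2, 5))[-5:])
```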
|
Research in Environmental Sound Classification (ESC) has been growing steadily
with the emergence of deep learning algorithms. However, data scarcity poses a
major hurdle to any substantial advance in this domain. Data
augmentation offers an excellent solution to this problem. While Generative
Adversarial Networks (GANs) have been successful in generating synthetic speech
and sounds of musical instruments, they have hardly been applied to the
generation of environmental sounds. This paper presents EnvGAN, the first ever
application of GANs for the adversarial generation of environmental sounds. Our
experiments on three standard ESC datasets illustrate that EnvGAN can
synthesize audio similar to that in the datasets. The suggested augmentation
method outperforms most recent techniques for audio augmentation.
|
We study a weakly-interacting one-dimensional Bose gas with two impurities
coupled locally to the boson density. We derive analytical results for the
induced interaction between the impurities at arbitrary coupling and separation
$r$. At $r\lesssim \xi$, where $\xi$ denotes the healing length of the Bose
gas, the interaction is well described by the mean-field contribution. Its form
changes as the coupling is increased, approaching a linear function of $r$ at
short distances in the regime of strong coupling. The mean-field contribution
decays exponentially at arbitrary coupling for $r\gg\xi$. At such long
distances, however, the effect of quantum fluctuations becomes important,
giving rise to a long-ranged quantum contribution to the induced interaction.
At the longest distances it behaves as $1/r^3$, while at strong coupling we find an
intermediate distance regime with a slower decay, $1/r$. The quantum
contribution in the crossover regime is also calculated. The induced
interaction between impurities (i.e., polarons) is attractive and leads to the
formation of their bound state, known as bipolaron. We discuss its binding
energy.
|
We derive symmetric and antisymmetric kernels by symmetrizing and
antisymmetrizing conventional kernels and analyze their properties. In
particular, we compute the feature space dimensions of the resulting polynomial
kernels, prove that the reproducing kernel Hilbert spaces induced by symmetric
and antisymmetric Gaussian kernels are dense in the space of symmetric and
antisymmetric functions, and propose a Slater determinant representation of the
antisymmetric Gaussian kernel, which allows for an efficient evaluation even if
the state space is high-dimensional. Furthermore, we show that by exploiting
symmetries or antisymmetries the size of the training data set can be
significantly reduced. The results are illustrated with guiding examples and
simple quantum physics and chemistry applications.
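A minimal sketch of the construction described above, assuming d particles in one dimension and a Gaussian base kernel; the symmetrized (or antisymmetrized) kernel sums over permutations of the coordinates of one argument.

```python
import itertools
import math
import numpy as np

def gaussian(x, y, sigma=1.0):
    return math.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def permutation_kernel(x, y, sigma=1.0, antisymmetric=False):
    d = len(y)
    total = 0.0
    for perm in itertools.permutations(range(d)):
        sign = 1.0
        if antisymmetric:
            # parity of the permutation via an inversion count
            inv = sum(perm[i] > perm[j] for i in range(d) for j in range(i + 1, d))
            sign = -1.0 if inv % 2 else 1.0
        total += sign * gaussian(x, np.asarray(y)[list(perm)], sigma)
    return total / math.factorial(d)

x, y = np.array([0.1, 0.5, 0.9]), np.array([0.2, 0.4, 0.8])
print(permutation_kernel(x, y), permutation_kernel(x, y, antisymmetric=True))
```

The explicit sum over permutations scales factorially and is only meant to make the definition concrete; the Slater determinant representation mentioned above avoids this cost.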
|
Combinatorial optimization problems (COPs) over graphs are a fundamental
challenge in optimization. Reinforcement learning (RL) has recently emerged as
a new framework to tackle these problems and has demonstrated promising
results. However, most RL solutions construct the solution incrementally in a
greedy manner, which inevitably imposes an unnecessary dependency on action
sequences and requires many problem-specific designs. We propose a general RL
framework that not only exhibits state-of-the-art empirical performance but
also generalizes to a wide variety of COPs. Specifically, we define state as a
solution to a problem instance and action as a perturbation to this solution.
We utilize graph neural networks (GNN) to extract latent representations for
given problem instances for state-action encoding, and then apply deep
Q-learning to obtain a policy that gradually refines the solution by flipping
or swapping vertex labels. Experiments are conducted on Maximum $k$-Cut and
Traveling Salesman Problem and performance improvement is achieved against a
set of learning-based and heuristic baselines.
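To make the state/action convention concrete, a hedged Python sketch (the GNN encoder and Q-learning are omitted) of a Max-$k$-Cut environment step, in which the action relabels one vertex and the reward is the change in cut value.

```python
def cut_value(edges, labels):
    # edges: list of (u, v, weight); labels: dict vertex -> label in {0..k-1}
    return sum(w for u, v, w in edges if labels[u] != labels[v])

def step(edges, labels, vertex, new_label):
    before = cut_value(edges, labels)
    new_labels = dict(labels)
    new_labels[vertex] = new_label          # the perturbation (action)
    return new_labels, cut_value(edges, new_labels) - before  # next state, reward

edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
print(step(edges, {0: 0, 1: 0, 2: 1}, 1, 1))
```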
|
A new connection between structure and dynamics in glass-forming liquids is
presented. We show how the origin of spatially localized excitations, as
defined by dynamical facilitation (DF) theory, can be understood from a
structure-based framework. This framework is constructed by associating
excitation events in DF theory to hopping events between energy minima in the
potential energy landscape (PEL). By reducing the PEL to an equal energy well
picture and applying a harmonic approximation, we develop a field theory to
describe elastic fluctuations about inherent states, which are energy
minimizing configurations of the PEL. We model an excitation as a shear
transformation zone (STZ) inducing a localized pure shear deformation onto an
inherent state. We connect STZs to T1 transition events that break the elastic
bonds holding the local structure of an inherent state. A formula for the
excitation energy barrier, denoted as $J_\sigma$, is obtained as a function of
inherent-state elastic moduli and radial distribution function. The energy
barrier from the current theory is compared to one predicted by the DF theory
where good agreement is found in various two-dimensional continuous
polydisperse atomistic models of glass formers. These results strengthen the
role of structure and elasticity in driving glassy dynamics through the
creation and relaxation of localized excitations.
|
Interfaces impede heat flow in micro/nanostructured systems. Conventional
theories for interfacial thermal transport were derived based on bulk phonon
properties of the materials making up the interface without explicitly
considering the atomistic interfacial details, which are found critical to
correctly describing thermal boundary conductance (TBC). Recent theoretical
studies predicted the existence of localized phonon modes at the interface
which can play an important role in understanding interfacial thermal
transport. However, experimental validation is still lacking. Through a
combination of Raman spectroscopy and high-energy resolution electron
energy-loss spectroscopy (EELS) in a scanning transmission electron microscope,
we report the first experimental observation of localized interfacial phonon
modes at ~12 THz at a high-quality epitaxial Si-Ge interface. These modes are
further confirmed using molecular dynamics simulations with a high-fidelity
neural network interatomic potential, which also yield TBC agreeing well with
that measured from time-domain thermoreflectance (TDTR) experiments.
Simulations find that the interfacial phonon modes make an appreciable
contribution to the total TBC. Our findings may significantly advance the
understanding of interfacial thermal transport physics and have an impact on
engineering TBC at interfaces in applications such as electronics thermal
management and
thermoelectric energy conversion.
|
The diffusion, "explosion" and "evaporation" of dimers and the subsequent
coalescence are treated in a formal way by identifying and solving the
differential equations deduced from the respective behaviors of dimers in the
different cases. This study leads to analytic formulas that allow one to
calculate, in a simple and fast way, the size statistics obtained after the
coalescence of the dimers or of their constituents once the dimers have
completely disappeared. These formulas are of central interest for
characterizing systems in which the dimers initially present disappear.
|
Quantum synchronizable codes are a kind of quantum error-correcting code that
can correct not only the effects of quantum noise on qubits but also
misalignment in block synchronization. This paper contributes to constructing
two classes of quantum synchronizable codes by the cyclotomic classes of order
two over $\mathbb{Z}_{2q}$, whose synchronization capabilities can reach the
upper bound under certain conditions. Moreover, the quantum synchronizable
codes possess good error-correcting capability towards bit errors and phase
errors.
|
In this work, we are mainly concerned with the limiting behavior of the
electromagnetic field of the two-species Vlasov-Maxwell-Boltzmann system in
diffusive limits. As the Knudsen number goes to zero, the electric and magnetic
fields may persist or vanish. We rigorously verify the Navier-Stokes,
Navier-Stokes-Poisson and Navier-Stokes-Maxwell limits of the two-species
Vlasov-Maxwell-Boltzmann system on the three-dimensional torus. The
justification is based on unified and uniform estimates of solutions to the
dimensionless Vlasov-Maxwell-Boltzmann system. The uniform estimates are
obtained by employing the hypocoercivity of the linear Boltzmann operator and
constructing an equation containing a damping term for the electric field.
|
We consider repair tasks: given a critic (e.g., compiler) that assesses the
quality of an input, the goal is to train a fixer that converts a bad example
(e.g., code with syntax errors) into a good one (e.g., code with no syntax
errors). Existing works create training data consisting of (bad, good) pairs by
corrupting good examples using heuristics (e.g., dropping tokens). However,
fixers trained on this synthetically-generated data do not extrapolate well to
the real distribution of bad inputs. To bridge this gap, we propose a new
training approach, Break-It-Fix-It (BIFI), which has two key ideas: (i) we use
the critic to check a fixer's output on real bad inputs and add good (fixed)
outputs to the training data, and (ii) we train a breaker to generate realistic
bad code from good code. Based on these ideas, we iteratively update the
breaker and the fixer while using them in conjunction to generate more paired
data. We evaluate BIFI on two code repair datasets: GitHub-Python, a new
dataset we introduce where the goal is to repair Python code with AST parse
errors; and DeepFix, where the goal is to repair C code with compiler errors.
BIFI outperforms existing methods, obtaining 90.5% repair accuracy on
GitHub-Python (+28.5%) and 71.7% on DeepFix (+5.6%). Notably, BIFI does not
require any labeled data; we hope it will be a strong starting point for
unsupervised learning of various repair tasks.
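A hedged sketch of one BIFI round under assumed interfaces: critic(code) returns True for good code, and fixer/breaker expose generate() and train(); none of these names are taken from the released code.

```python
def bifi_round(critic, fixer, breaker, real_bad, real_good):
    # (i) run the fixer on real bad inputs; keep (bad, fixed) pairs the critic accepts
    fixer_data = [(bad, out) for bad in real_bad
                  for out in [fixer.generate(bad)] if critic(out)]
    # (ii) run the breaker on good inputs; keep (good, broken) pairs the critic rejects
    breaker_data = [(good, out) for good in real_good
                    for out in [breaker.generate(good)] if not critic(out)]
    # retrain both models on the newly verified paired data
    fixer.train(fixer_data + [(bad, good) for good, bad in breaker_data])
    breaker.train(breaker_data + [(good, bad) for bad, good in fixer_data])
```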
|
The main memory access latency has not improved much for more than two
decades, while CPU performance had been increasing exponentially until
recently. Approximate memory is a technique to reduce the DRAM access latency
in return for losing data integrity. It is beneficial for applications that are
robust to noisy input and intermediate data such as artificial intelligence,
multimedia processing, and graph processing. To obtain reasonable outputs from
applications on approximate memory, it is crucial to protect critical data
while accelerating accesses to non-critical data. We refer to the minimum size
of a contiguous memory region to which the same error rate is applied in
approximate memory as the approximation granularity. A fundamental limitation
of approximate memory is that the approximation granularity is as large as a
few kilobytes. However, applications may have critical and non-critical data
interleaved with smaller granularity. For example, a data structure for graph
nodes can have pointers (critical) to neighboring nodes and its score
(non-critical, depending on the use-case). This data structure cannot be
directly mapped to approximate memory due to the gap between the approximation
granularity and the granularity of data criticality. We refer to this issue as
the granularity gap problem. In this paper, we first show that many
applications potentially suffer from this problem. Then we propose a framework
to quantitatively evaluate the performance overhead of a possible method to
avoid this problem using known techniques. The evaluation results show that the
performance overhead is non-negligible compared to expected benefit from
approximate memory, suggesting that the granularity gap problem is a
significant concern.
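For illustration only, a small record that interleaves critical and non-critical fields at a much finer granularity than the kilobyte-scale approximation granularity discussed above; the field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GraphNode:
    neighbors: List[int] = field(default_factory=list)  # critical: must stay exact
    score: float = 0.0  # non-critical: could tolerate approximate-memory errors
```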
|
While data sharing is crucial for knowledge development, privacy concerns and
strict regulation (e.g., European General Data Protection Regulation (GDPR))
unfortunately limit its full effectiveness. Synthetic tabular data emerges as
an alternative to enable data sharing while fulfilling regulatory and privacy
constraints. State-of-the-art tabular data synthesizers draw methodologies
from Generative Adversarial Networks (GANs) and address two main data types in
the industry, i.e., continuous and categorical. In this paper, we develop
CTAB-GAN, a novel conditional table GAN architecture that can effectively model
diverse data types, including a mix of continuous and categorical variables.
Moreover, we address data imbalance and long-tail issues, i.e., certain
variables have drastic frequency differences across large values. To achieve
those aims, we first introduce the information loss and classification loss to
the conditional GAN. Secondly, we design a novel conditional vector, which
efficiently encodes the mixed data types and skewed distributions of the data
variables. We extensively evaluate CTAB-GAN against state-of-the-art GANs that
generate synthetic tables, in terms of data similarity and analysis utility.
The results on five datasets show that the synthetic data of CTAB-GAN
remarkably resembles the real data for all three types of variables and yields
up to 17% higher accuracy for five machine learning algorithms.
|
This work deals with the topological classification of singular foliation
germs on $(\mathbb C^{2},0)$. Working in a suitable class of foliations we fix
the topological invariants given by the separatrix set, the Camacho-Sad indices
and the projective holonomy representations and we construct a minimal family
of foliation germs containing all the topological classes and which is
topologically complete. We prove factorization properties of equisingular
families through this family up to topological conjugacy.
|
For monolayers of chemically active particles at a fluid interface,
collective dynamics are predicted to arise owing to activity-induced Marangoni
flow even if the particles are not self-propelled. Here we test this prediction
by employing a monolayer of spherically symmetric active TiO_2 particles
located at an oil-water interface with or without addition of a non-ionic
surfactant. Due to the spherical symmetry, an individual particle does not
self-propel. However, the gradients produced by the photochemical fuel
degradation give rise to long-ranged Marangoni flows. For the case in which
surfactant is added to the system, we indeed observe the emergence of
collective motion, with dynamics dependent on the particle coverage of the
monolayer. The experimental observations are discussed within the framework of
a simple theoretical mean field model.
|
We consider the problem of making expressive static analyzers interactive.
Formal static analysis is seeing increasingly widespread adoption as a tool for
verification and bug-finding, but even with powerful cloud infrastructure it
can take minutes or hours to get batch analysis results after a code change.
While existing techniques offer some demand-driven or incremental aspects for
certain classes of analysis, the fundamental challenge we tackle is doing both
for arbitrary abstract interpreters.
Our technique, demanded abstract interpretation, lifts program syntax and
analysis state to a dynamically evolving graph structure, in which program
edits, client-issued queries, and evaluation of abstract semantics are all
treated uniformly. The key difficulty addressed by our approach is the
application of general incremental computation techniques to the complex,
cyclic dependency structure induced by abstract interpretation of loops with
widening operators. We prove that desirable abstract interpretation
meta-properties, including soundness and termination, are preserved in our
approach, and that demanded analysis results are equal to those computed by a
batch abstract interpretation. Experimental results suggest promise for a
prototype demanded abstract interpretation framework: by combining incremental
and demand-driven techniques, our framework consistently delivers analysis
results at interactive speeds, answering 95% of queries within 1.2 seconds.
|
In this paper, we introduce the notion of completely non-trivial module of a
Lie conformal algebra. By this notion, we classify all finite irreducible
modules of a class of $\mathbb{Z}^+$-graded Lie conformal algebras
$\mathcal{L}=\bigoplus_{i=0}^{\infty} \mathbb{C}[\partial]L_i$ satisfying $
[{L_0}_\lambda L_0]=(\partial+2\lambda)L_0,$ and $[{L_1}_\lambda L_i]\neq 0$
for any $i\in \mathbb{Z}^+$. These Lie conformal algebras include Block type
Lie conformal algebra $\mathcal{B}(p)$ and map Virasoro Lie conformal algebra
$\mathcal{V}(\mathbb{C}[T])=Vir\otimes \mathbb{C}[T]$. As a result, we show
that all non-trivial finite irreducible modules of these algebras are free of
rank one as a $\mathbb{C}[\partial]$-module.
|
We view disentanglement learning as discovering an underlying structure that
equivariantly reflects the factorized variations shown in data. Traditionally,
such a structure is fixed to be a vector space with data variations represented
by translations along individual latent dimensions. We argue this simple
structure is suboptimal since it requires the model to learn to discard the
properties (e.g. different scales of changes, different levels of abstractness)
of data variations, which is extra work beyond equivariance learning. Instead,
we propose to encode the data variations with groups, a structure that not only
can equivariantly represent variations, but can also be adaptively optimized to
preserve the properties of data variations. Considering that it is hard to
train directly on group structures, we focus on Lie groups and adopt a
parameterization using Lie algebra. Based on the parameterization, some
disentanglement learning constraints are naturally derived. A simple model
named Commutative Lie Group VAE is introduced to realize the group-based
disentanglement learning. Experiments show that our model can effectively learn
disentangled representations without supervision, and can achieve
state-of-the-art performance without extra constraints.
|
Large area van der Waals (vdW) thin films are assembled materials consisting
of a network of randomly stacked nanosheets. The multi-scale structure and the
two-dimensional nature of the building block mean that interfaces naturally
play a crucial role in the charge transport of such thin films. While single or
few stacked nanosheets (i.e. vdW heterostructures) have been the subject of
intensive work, little is known about how charges travel through multilayered,
more disordered networks. Here we report a comprehensive study of a
prototypical system given by networks of randomly stacked reduced graphene
oxide 2D nanosheets, whose chemical and geometrical properties can be
controlled independently, permitting the exploration of percolated networks
ranging from a single nanosheet to some billions, with room-temperature
resistivity spanning from $10^{-5}$ to $10^{-1}$ ohm m. We systematically
observe a clear transition
between two different regimes at a critical temperature T*: Efros-Shklovskii
variable range hopping (ESVRH) below T* and power law (PL) behavior above.
Firstly, we demonstrate that the two regimes are strongly correlated with each
other, both depending on the charge localization length xi, calculated by the
ES-VRH model, which corresponds to the characteristic size of overlapping sp2
domains belonging to different nanosheets. Thus, we propose a microscopic model
describing the charge transport as a geometrical phase transition, given by the
metal-insulator transition associated with the percolation of quasi-1D
nanofillers with length xi, showing that the charge transport behavior of the
networks is valid for all geometries and defects of the nanosheets, ultimately
suggesting a generalized description of vdW and disordered thin films.
|
We provide a construction for categorical representation learning and
introduce the foundations of "$\textit{categorifier}$". The central theme in
representation learning is the idea of $\textbf{everything to vector}$. Every
object in a dataset $\mathcal{S}$ can be represented as a vector in
$\mathbb{R}^n$ by an $\textit{encoding map}$ $E:
\mathcal{O}bj(\mathcal{S})\to\mathbb{R}^n$. More importantly, every morphism
can be represented as a matrix $E:
\mathcal{H}om(\mathcal{S})\to\mathbb{R}^{n}_{n}$. The encoding map $E$ is
generally modeled by a $\textit{deep neural network}$. The goal of
representation learning is to design appropriate tasks on the dataset to train
the encoding map (assuming that an encoding is optimal if it universally
optimizes the performance on various tasks). However, the latter is still a
$\textit{set-theoretic}$ approach. The goal of the current article is to
promote the representation learning to a new level via a
$\textit{category-theoretic}$ approach. As a proof of concept, we provide an
example of a text translator equipped with our technology, showing that our
categorical learning model outperforms the current deep learning models by 17
times. The content of the current article is part of the recent US patent
proposal (patent application number: 63110906).
|
We study the number of points in the family of plane curves defined by a
trinomial \[
\mathcal{C}(\alpha,\beta)=
\{(x,y)\in\mathbb{F}_q^2\,:\,\alpha x^{a_{11}}y^{a_{12}}+\beta
x^{a_{21}}y^{a_{22}}=x^{a_{31}}y^{a_{32}}\} \] with fixed exponents (not
collinear) and varying coefficients over finite fields. We prove that each of
these curves has an almost predictable number of points, given by a closed
formula that depends on the coefficients, exponents, and the field, with a
small error term $N(\alpha,\beta)$ that is bounded in absolute value by
$2\tilde{g}q^{1/2}$, where $\tilde{g}$ is a constant that depends only on the
exponents and the field. A formula for $\tilde{g}$ is provided, as well as a
comparison of $\tilde{g}$ with the genus $g$ of the projective closure of the
curve over $\overline{\mathbb{F}_q}$. We also give several linear and quadratic
identities for the numbers $N(\alpha,\beta)$ that are strong enough to prove
the estimate above, and in some cases, to characterize them completely.
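As a sanity check of the setting (an illustrative brute force, not the paper's method), the affine points of $\mathcal{C}(\alpha,\beta)$ can be counted directly over a prime field:

```python
def trinomial_points(p, alpha, beta, exps):
    # exps = [(a11, a12), (a21, a22), (a31, a32)], assumed non-collinear
    (a11, a12), (a21, a22), (a31, a32) = exps
    count = 0
    for x in range(p):
        for y in range(p):
            lhs = (alpha * pow(x, a11, p) * pow(y, a12, p)
                   + beta * pow(x, a21, p) * pow(y, a22, p)) % p
            rhs = (pow(x, a31, p) * pow(y, a32, p)) % p
            count += lhs == rhs
    return count

print(trinomial_points(101, 3, 7, [(2, 1), (1, 3), (0, 0)]))
```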
|
Because of the increasing demand for computation in DNNs, researchers develop
both hardware and software mechanisms to reduce the compute and memory burden.
A widely adopted approach is to use mixed precision data types. However, it is
hard to leverage mixed precision without hardware support because of the
overhead of data casting. Hardware vendors offer tensorized instructions for
mixed-precision tensor operations, like Intel VNNI, Tensor Core, and ARM-DOT.
These instructions involve a computing idiom that reduces multiple low
precision elements into one high precision element. The lack of compilation
techniques for this makes it hard to utilize these instructions: Using
vendor-provided libraries for computationally-intensive kernels is inflexible
and prevents further optimizations, and manually writing hardware intrinsics is
error-prone and difficult for programmers. Some prior works address this
problem by creating compilers for each instruction. This requires excessive
effort when it comes to many tensorized instructions. In this work, we develop
a compiler framework to unify the compilation for these instructions -- a
unified semantics abstraction eases the integration of new instructions, and
reuses the analysis and transformations. Tensorized instructions from different
platforms can be compiled via UNIT with moderate effort for favorable
performance. Given a tensorized instruction and a tensor operation, UNIT
automatically detects the applicability, transforms the loop organization of
the operation, and rewrites the loop body to leverage the tensorized
instruction. According to our evaluation, UNIT can target various mainstream
hardware platforms. The generated end-to-end inference model achieves 1.3x
speedup over Intel oneDNN on an x86 CPU, 1.75x speedup over Nvidia cuDNN on an
Nvidia GPU, and 1.13x speedup over a carefully tuned TVM solution for ARM DOT on
an ARM CPU.
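A NumPy illustration of the shared computing idiom these instructions expose (multiply low-precision elements and reduce them into a higher-precision accumulator); this only models the semantics and is not generated code.

```python
import numpy as np

a = np.random.randint(-128, 127, size=16, dtype=np.int8)
b = np.random.randint(-128, 127, size=16, dtype=np.int8)
# groups of four int8 products are reduced into one int32 lane (VNNI-style)
acc = (a.astype(np.int32) * b.astype(np.int32)).reshape(4, 4).sum(axis=1, dtype=np.int32)
print(acc.dtype, acc)
```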
|
Federated Learning (FL) is a collaborative machine learning technique to
train a global model without obtaining clients' private data. The main
challenges in FL are statistical diversity among clients, limited computing
capability of clients' equipment, and the excessive communication overhead
between servers and clients. To address these challenges, we propose a novel
sparse personalized federated learning scheme via maximizing correlation
(FedMac). By incorporating an approximated L1-norm and the correlation between
client models and the global model into the standard FL loss function, the
performance
on statistical diversity data is improved and the communicational and
computational loads required in the network are reduced compared with
non-sparse FL. Convergence analysis shows that the sparse constraints in FedMac
do not affect the convergence rate of the global model, and theoretical results
show that FedMac can achieve good sparse personalization, which is better than
the personalized methods based on L2-norm. Experimentally, we demonstrate the
benefits of this sparse personalization architecture compared with the
state-of-the-art personalization methods (e.g. FedMac respectively achieves
98.95%, 99.37%, 90.90% and 89.06% accuracy on the MNIST, FMNIST, CIFAR-100 and
Synthetic datasets under non-i.i.d. variants).
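One plausible reading of the objective, sketched in PyTorch; the smooth L1 surrogate, the correlation term, and the weights lam_sparse / lam_corr are assumptions, not the paper's exact formulation.

```python
import torch

def fedmac_loss(task_loss, client_params, global_params,
                lam_sparse=1e-4, lam_corr=1e-3, eps=1e-8):
    w = torch.cat([p.flatten() for p in client_params])
    g = torch.cat([p.flatten() for p in global_params]).detach()
    approx_l1 = torch.sqrt(w ** 2 + eps).sum()   # smooth surrogate of ||w||_1
    correlation = torch.dot(w, g)                # encouraged to be large
    return task_loss + lam_sparse * approx_l1 - lam_corr * correlation
```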
|
We present the analytical framework for converting projected light
distributions with a S\'ersic profile into three-dimensional light
distributions for stellar systems of arbitrary triaxial shape. The main
practical result is the definition of a simple yet robust measure of intrinsic
galaxy size: the median radius $r_\mathrm{med}$, defined as the radius of a
sphere that contains 50% of the total luminosity or mass, that is, the median
distance of a star to the galaxy center. We examine how $r_\mathrm{med}$
depends on projected size measurements as a function of S\'ersic index and
intrinsic axis ratios, and demonstrate its relative independence of these
parameters. As an application we show that the projected semi-major axis length
of the ellipse enclosing 50% of the light is an unbiased proxy for
$r_\mathrm{med}$, with small galaxy-to-galaxy scatter of $\sim$10% (1$\sigma$),
under the condition that the variation in triaxiality within the population is
small. For galaxy populations with unknown or a large range in triaxiality an
unbiased proxy for $r_\mathrm{med}$ is $1.3\times R_{e}$, where $R_{e}$ is the
circularized half-light radius, with galaxy-to-galaxy scatter of 20-30%
(1$\sigma$). We also describe how inclinations can be estimated for individual
galaxies based on the measured projected shape and prior knowledge of the
intrinsic shape distribution of the corresponding galaxy population. We make
the numerical implementation of our calculations available.
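A minimal helper for the two proxies quoted above, assuming the circularized half-light radius is defined as $R_e = a_e\sqrt{q}$, with $q$ the projected axis ratio:

```python
import math

def r_med_from_semi_major(a_e):
    # unbiased when the spread in triaxiality within the population is small
    return a_e

def r_med_from_circularized(a_e, q):
    # proxy for populations with unknown or a large range in triaxiality
    return 1.3 * a_e * math.sqrt(q)
```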
|
We investigate the onset of chaos in a periodically kicked Dicke model (KDM),
using the out-of-time-order correlator (OTOC) as a diagnostic tool, in both the
oscillator and the spin subspaces. In the large spin limit, the classical
Hamiltonian map is constructed, which allows us to investigate the
corresponding phase space dynamics and to compute the Lyapunov exponent. We
show that the growth rate of the OTOC for the canonically conjugate coordinates
of the oscillator is able to capture the Lyapunov exponent in the chaotic
regime. The onset of chaos is further investigated using the saturation value
of the OTOC, that can serve as an alternate indicator of chaos in a generic
interacting quantum system. This is also supported by a system independent
effective random matrix model. We further identify the quantum scars in KDM and
detect their dynamical signature by using the OTOC dynamics. The relevance of
the present study in the context of ongoing cold atom experiments is also
discussed.
|
Identification of parameters in ordinary differential equations (ODEs) is an
important and challenging task when modeling dynamic systems in biomedical
research and other scientific areas, especially with the presence of
time-varying parameters. This article proposes a fast and accurate method,
TVMAGI (Time-Varying MAnifold-constrained Gaussian process Inference), to
estimate both time-constant and time-varying parameters in the ODE using noisy
and sparse observation data. TVMAGI imposes a Gaussian process model over the
time series of system components as well as time-varying parameters, and
restricts the derivative process to satisfy ODE conditions. Consequently,
TVMAGI completely bypasses numerical integration and achieves substantial
savings in computation time. By incorporating the ODE structures through
manifold constraints, TVMAGI enjoys a principled statistical construction under
the Bayesian paradigm, which further enables it to handle systems with missing
data or unobserved components. The Gaussian process prior also alleviates the
identifiability issue often associated with the time-varying parameters in ODE.
Unlike existing approaches, TVMAGI assumes no specific linearity of the ODE
structure, and can be applied to general nonlinear systems. We demonstrate the
robustness and efficiency of our method through three simulation examples,
including an infectious disease compartmental model.
|
We compute topological Hochschild homology of sufficiently structured forms
of truncated Brown--Peterson spectra with coefficients. In particular, we
compute $\operatorname{THH}_*(\operatorname{taf}^D;M)$ for $M\in \{
H\mathbb{Z}_{(3)},k(1),k(2)\}$ where $\operatorname{taf}^D$ is the $E_{\infty}$
form of $BP\langle 2\rangle$ constructed by Hill--Lawson. We compute
$\operatorname{THH}_*(\operatorname{tmf}_1(3);M)$ when $M\in \{
H\mathbb{Z}_{(2)},k(2)\}$ where $\operatorname{tmf}_1(3)$ is the $E_{\infty}$
form of $BP\langle 2\rangle$ constructed by Lawson--Naumann. We also compute
$\operatorname{THH}_*(B\langle n\rangle;M)$ for $M=H\mathbb{Z}_{(p)}$ and
certain $E_3$ forms $B\langle n\rangle$ of $BP\langle n\rangle$. For example at
$p=2$, this result applies to the $E_3$ forms of $BP\langle n\rangle$
constructed by Hahn--Wilson.
|
This paper proposes an approach that generates multiple 3D human meshes from
text. The human shapes are represented by 3D meshes based on the SMPL model.
The model's performance is evaluated on the COCO dataset, which contains
challenging human shapes and intricate interactions between individuals. The
model is able to capture the dynamics of the scene and the interactions between
individuals based on text. We further show how using such a shape as input to
image synthesis frameworks helps to constrain the network to synthesize humans
with realistic human shapes.
|
Here, a physical formalism is proposed for an unconditional microwave quantum
teleportation of Gaussian states via two-mode squeezed states in lossy
environments. The proposed formalism can be used both inside the fridge and in
free space, provided that the entanglement between the two parties survives.
Some possible experimental parameters are estimated for the teleportation of
microwave signals with a frequency of 5 GHz based on the proposed physical
framework. This would be helpful for superconducting inter- and intra-fridge
quantum communication as well as open-air quantum microwave communication,
which can be applied to quantum local area networks (QLANs) and distributed
quantum computing protocols.
|
We study quotients of the Toeplitz C*-algebra of a random walk, similar to
those studied by the author and Markiewicz for finite stochastic matrices. We
introduce a new Cuntz-type quotient C*-algebra for random walks that have
convergent ratios of transition probabilities. These C*-algebras give rise to
new notions of ratio limit space and boundary for such random walks, which are
computed by appealing to a companion paper by Woess. Our combined results are
leveraged to identify a unique smallest symmetry-equivariant quotient
C*-algebra for any symmetric random walk on a hyperbolic group, shedding light
on a question of Viselter on C*-algebras of subproduct systems.
|
We build the bosonic $\eta$-deformed $AdS_4\times\mathbb{CP}^3$ background
generated by an $r$-matrix that satisfies the modified classical Yang-Baxter
equation. In a special limit we find that it is the gravity dual of the
noncommutative ABJM theory.
|
We study pure exploration in bandits, where the dimension of the feature
representation can be much larger than the number of arms. To overcome the
curse of dimensionality, we propose to adaptively embed the feature
representation of each arm into a lower-dimensional space and carefully deal
with the induced model misspecifications. Our approach is conceptually very
different from existing works that can either only handle low-dimensional
linear bandits or passively deal with model misspecifications. We showcase the
application of our approach to two pure exploration settings that were
previously under-studied: (1) the reward function belongs to a possibly
infinite-dimensional Reproducing Kernel Hilbert Space, and (2) the reward
function is nonlinear and can be approximated by neural networks. Our main
results provide sample complexity guarantees that only depend on the effective
dimension of the feature spaces in the kernel or neural representations.
Extensive experiments conducted on both synthetic and real-world datasets
demonstrate the efficacy of our methods.
|
Oumuamua, the first known object of extrasolar origin seen to enter our Solar
System, has multiple unusual characteristics that, taken together, are very
difficult to explain with conventional astronomical entities like asteroids and
comets. Consequently, it has been hypothesized that Oumuamua is an interstellar
probe that was constructed by an alien civilization. We demonstrate that the
accomplishments that can be achieved with large space
telescopes/interferometers in the alien's planetary system will completely
quench any motivation for construction and launch of an Oumuamua-like probe.
The absence of any such motivation proves that Oumuamua is not an alien
creation.
|
Despite remarkable progress achieved, most neural architecture search (NAS)
methods focus on searching for one single accurate and robust architecture. To
further build models with better generalization capability and performance,
model ensemble is usually adopted and performs better than stand-alone models.
Inspired by the merits of model ensemble, we propose to search for multiple
diverse models simultaneously as an alternative way to find powerful models.
Searching for ensembles is non-trivial and has two key challenges: enlarged
search space and potentially more complexity for the searched model. In this
paper, we propose a one-shot neural ensemble architecture search (NEAS)
solution that addresses the two challenges. For the first challenge, we
introduce a novel diversity-based metric to guide search space shrinking,
considering both the potentiality and diversity of candidate operators. For the
second challenge, we enable a new search dimension to learn layer sharing among
different models for efficiency purposes. The experiments on ImageNet clearly
demonstrate that our solution can improve the supernet's capacity of ranking
ensemble architectures, and further lead to better search results. The
discovered architectures achieve superior performance compared with
state-of-the-art models such as the MobileNetV3 and EfficientNet families under
aligned
settings. Moreover, we evaluate the generalization ability and robustness of
our searched architecture on the COCO detection benchmark and achieve a 3.1%
improvement on AP compared with MobileNetV3. Codes and models are available at
https://github.com/researchmm/NEAS.
|
Optical wave-based computing has enabled the realization of real-time
information processing in both space and time domains. In the past few years,
analog computing has experienced rapid development but mostly for a single
function. Motivated by parallel space-time computing and miniaturization, we
show that reconfigurable graphene-based metasurfaces offer a promising path
towards spatiotemporal computing with integrated functionalities by properly
engineering both spatial- and temporal-frequency responses. This paper employs
a tunable graphene-based metasurface to enable analog signal and image
processing in both space and time by tuning the electrostatic bias. In the
first part of the paper, we propose a switchable analog computing paradigm in
which the proposed metasurface can switch among defined performances by
selecting a proper external voltage for graphene monolayers. Spatial isotropic
differentiation and edge detection in the spatial channel and first-order
temporal differentiation and metasurface-based phaser with linear group-delay
response in the temporal channel are demonstrated. In the second section of the
paper, simultaneous and parallel spatiotemporal analog computing is
demonstrated. The proposed metasurface processor has almost no static power
consumption due to its floating-gate configuration. The spatial- and
temporal-frequency transfer functions (TFs) are engineered by using a
transmission line (TL) model, and the obtained results are validated with
full-wave simulations. Our proposal will enable real-time parallel
spatiotemporal analog signal and image processing.
|
In this paper, we propose a novel SpatioTemporal convolutional Dense Network
(STDNet) to address the video-based crowd counting problem, which contains the
decomposition of 3D convolution and the 3D spatiotemporal dilated dense
convolution to alleviate the rapid growth of the model size caused by the
Conv3D layer. Moreover, since the dilated convolution extracts the multiscale
features, we combine the dilated convolution with the channel attention block
to enhance the feature representations. Due to errors arising from the
difficulty of labeling crowds, especially in videos, imprecise or inconsistent
labels may lead to poor convergence of the model. To
address this issue, we further propose a new patch-wise regression loss (PRL)
to improve the original pixel-wise loss. Experimental results on three
video-based benchmarks, i.e., the UCSD, Mall and WorldExpo'10 datasets, show
that STDNet outperforms both image- and video-based state-of-the-art methods.
The source codes are released at \url{https://github.com/STDNet/STDNet}.
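A heavily hedged sketch of a patch-wise regression loss of the kind described above: per-patch counts of the predicted and ground-truth density maps are compared instead of individual pixels; the patch size and the choice of MSE are assumptions, not the paper's exact PRL.

```python
import torch.nn.functional as F

def patch_wise_regression_loss(pred, gt, patch=8):
    # per-patch people counts via average pooling times the patch area
    pred_counts = F.avg_pool2d(pred, patch) * patch * patch
    gt_counts = F.avg_pool2d(gt, patch) * patch * patch
    return F.mse_loss(pred_counts, gt_counts)
```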
|
Multi-level exciton-polariton systems offer an attractive platform for
studies of non-linear optical phenomena. However, studies of such consequential
non-linear phenomena as polariton condensation and lasing in planar
microcavities have so far been limited to two-level systems, where the
condensation takes place in the lowest attainable state. Here, we report
non-equilibrium Bose-Einstein condensation of exciton-polaritons and low
threshold, dual-wavelength polariton lasing in vertically coupled, double
planar microcavities. Moreover, we find that the presence of the non-resonantly
driven condensate triggers interbranch exciton-polariton transfer in the form
of energy-degenerate parametric scattering. Such an effect has so far been
observed only under excitation that is strictly resonant in terms of the energy
and incidence angle. We describe theoretically our time-integrated and
time-resolved photoluminescence investigations by a set of rate equations
involving an open-dissipative Gross-Pitaevskii equation. Our platform's
inherent tunability is promising for construction of planar lattices, enabling
three-dimensional polariton hopping and realization of photonic devices, such
as two-qubit polariton-based logic gates.
|
The HAWC Collaboration has observed gamma rays at energies above 56 TeV from
a collection of nine sources. It has been suggested that this emission could be
hadronic in nature, requiring that these systems accelerate cosmic-ray protons
or nuclei up to PeV-scale energies. In this paper, we instead show that the
spectra of these objects favor a leptonic (inverse Compton) origin for their
emission. More specifically, the gamma-ray emission from these objects can be
straightforwardly accommodated within a model in which $\sim \mathcal{O}(10\%)$
of the host pulsar's spindown power is transferred into the acceleration of
electrons and positrons with a power-law spectrum that extends to several
hundred TeV or higher. The spectral break that is observed among these sources
is naturally explained within the context of this simple model, and occurs at
the energy where the timescale for energy losses matches the age of the pulsar.
In contrast, this spectral feature cannot be straightforwardly accommodated in
hadronic scenarios. Furthermore, hadronic models predict that these sources
should produce more emission at GeV-scale energies than is observed. In light
of these considerations, we conclude that HAWC's highest energy sources should
be interpreted as TeV halos or pulsar wind nebulae, which produce their
emission through inverse Compton scattering, and are powered by the rotational
kinetic energy of their host pulsar.
|
Regional facial image synthesis conditioned on semantic mask has achieved
great success using generative adversarial networks. However, the appearance of
different regions may be inconsistent with each other when conducting regional
image editing. In this paper, we focus on the problem of harmonized regional
style transfer and manipulation for facial images. The proposed approach
supports regional style transfer and manipulation at the same time. A
multi-scale encoder and style mapping networks are proposed in our work. The
encoder is responsible for extracting regional styles of real faces. Style
mapping networks generate styles from random samples for all facial regions. As
the key part of our work, we propose a multi-region style attention module to
adapt the multiple regional style embeddings from a reference image to a target
image for generating harmonious and plausible results. Furthermore, we propose
a new metric "harmony score" and conduct experiments in a challenging setting:
three widely used face datasets are involved and we test the model by
transferring the regional facial appearance between datasets. Images in
different datasets are usually quite different, which makes the inconsistency
between target and reference regions more obvious. Results show that our model
can generate reliable style transfer and multi-modal manipulation results
compared with state-of-the-art methods. Furthermore, we show two face editing
applications using
the proposed approach.
|
We present MoonLight, a tool for monitoring temporal and spatio-temporal
properties of mobile and spatially distributed cyber-physical systems (CPS). In
the proposed framework, space is represented as a weighted graph, describing
the topological configurations in which the single CPS entities (nodes of the
graph) are arranged. Both nodes and edges have attributes modelling physical
and logical quantities that can change in time. MoonLight is implemented in
Java and supports the monitoring of Spatio-Temporal Reach and Escape Logic
(STREL). MoonLight can be used as a standalone command line tool, as a Java
API, or via a Matlab interface. We provide here some examples using the Matlab
interface and evaluate the tool's performance, also by comparing it with other
tools specialized in monitoring only temporal properties.
|
Numerous improvements for feedback mechanisms have contributed to the great
progress in object detection. In this paper, we first present an
evaluation-feedback module, which is proposed to consist of evaluation system
and feedback mechanism. Then we analyze and summarize the disadvantages and
improvements of traditional evaluation-feedback module. Finally, we focus on
both the evaluation system and the feedback mechanism, and propose Control
Distance IoU and Control Distance IoU loss function (or CDIoU and CDIoU loss
for short) without increasing parameters or FLOPs in models, which show
different significant enhancements on several classical and emerging models.
Experiments and comparative tests show that a coordinated evaluation-feedback
module can effectively improve model performance. CDIoU and CDIoU loss achieve
excellent performance in several models such as Faster R-CNN, YOLOv4, RetinaNet
and ATSS, with a maximum AP improvement of 1.9% and an average AP improvement
of 0.8% on the MS COCO dataset, compared to traditional evaluation-feedback
modules.
|
Following a tidal disruption event (TDE), the accretion rate can evolve from
quiescent to near-Eddington levels and back over months - years timescales.
This provides a unique opportunity to study the formation and evolution of the
accretion flow around supermassive black holes (SMBHs). We present two years of
multi-wavelength monitoring observations of the TDE AT2018fyk at X-ray, UV,
optical and radio wavelengths. We identify three distinct accretion states and
two state transitions between them. These appear remarkably similar to the
behaviour of stellar-mass black holes in outburst. The X-ray spectral
properties show a transition from a soft (thermal-dominated) to a hard
(power-law dominated) spectral state around L$_{\rm bol} \sim $few $ \times
10^{-2}$ L$_{\rm Edd}$, and the strengthening of the corona over time
$\sim$100--200 days after the UV/optical peak. Contemporaneously, the spectral
energy distribution (in particular, the UV-to-X-ray spectral slope
$\alpha_{ox}$) shows a pronounced softening as the outburst progresses. The
X-ray timing properties also show a marked change, initially dominated by
variability at long ($>$day) timescales while a high frequency ($\sim$10$^{-3}$
Hz) component emerges after the transition into the hard state. At late times
($\sim$500 days after peak), a second accretion state transition occurs, from
the hard into the quiescent state, as identified by the sudden collapse of the
bolometric (X-ray+UV) emission to levels below 10$^{-3.4}$ L$_{\rm Edd}$. Our
findings illustrate that TDEs can be used to study the scale (in)variance of
accretion processes in individual SMBHs. Consequently, they provide a new
avenue to study accretion states over seven orders of magnitude in black hole
mass, removing limitations inherent to commonly used ensemble studies.
|
In this short note we classify the Cartan subalgebras in all von Neumann
algebras associated with graph product groups and their free ergodic measure
preserving actions on probability spaces.
|
In this paper, we consider the problem of reducing the semitotal domination
number of a given graph by contracting $k$ edges, for some fixed $k \geq 1$. We
show that this can always be done with at most 3 edge contractions and further
characterise those graphs requiring 1, 2 or 3 edge contractions, respectively,
to decrease their semitotal domination number. We then study the complexity of
the problem for $k=1$ and obtain in particular a complete complexity dichotomy
for monogenic classes.
|
Bayesian optimization has emerged as a powerful strategy to accelerate
scientific discovery by means of autonomous experimentation. However, expensive
measurements are required to accurately estimate materials properties, and can
quickly become a hindrance to exhaustive materials discovery campaigns. Here,
we introduce Gemini: a data-driven model capable of using inexpensive
measurements as proxies for expensive measurements by correcting systematic
biases between property evaluation methods. We recommend using Gemini for
regression tasks with sparse data and in an autonomous workflow setting where
its predictions of expensive-to-evaluate objectives can be used to construct a
more informative acquisition function, thus reducing the number of expensive
evaluations an optimizer needs to achieve desired target values. In a
regression setting, we showcase the ability of our method to make accurate
predictions of DFT calculated bandgaps of hybrid organic-inorganic perovskite
materials. We further demonstrate the benefits that Gemini provides to
autonomous workflows by augmenting the Bayesian optimizer Phoenics to yield a
scalable optimization framework leveraging multiple sources of measurement.
Finally, we simulate an autonomous materials discovery platform for optimizing
the activity of electrocatalysts for the oxygen evolution reaction. Realizing
autonomous workflows with Gemini, we show that the number of measurements of a
composition space comprising expensive and rare metals needed to achieve a
target overpotential is significantly reduced when measurements from a proxy
composition system with less expensive metals are available.
|
Virtual clusters are widely used computing platforms that can be deployed in
multiple cloud platforms. The ability to dynamically grow and shrink the number
of nodes has paved the way for customised elastic computing both for High
Performance Computing and High Throughput Computing workloads. However,
elasticity is typically restricted to a single cloud site, thus hindering the
ability to provision computational resources from multiple geographically
distributed cloud sites. To this aim, this paper introduces an architecture of
open-source components that coherently deploy a virtual elastic cluster across
multiple cloud sites to perform large-scale computing. These hybrid virtual
elastic clusters are automatically deployed and configured using an
Infrastructure as Code (IaC) approach on a distributed hybrid testbed that
spans different organizations, including on-premises and public clouds,
supporting automated tunneling of communications across the cluster nodes with
advanced VPN topologies. The results indicate that cluster-based computing of
embarrassingly parallel jobs can benefit from hybrid virtual clusters that
aggregate computing resources from multiple cloud back-ends and bring them
together into a dedicated, albeit virtual network.
The work presented in this article has been partially funded by the European
Union's (EU) Horizon 2020 research project DEEP Hybrid-DataCloud (grant
agreement No 777435).
|
As an integral part of our culture and way of life, language is intricately
related to migrations of people. To understand whether and how migration shapes
language formation processes we examine the dynamics of the naming game with
migrating agents. (i) When all agents may migrate, the dynamics generates an
effective surface tension, which drives the coarsening. Such a behaviour is
very robust and appears for a wide range of densities of agents and their
migration rates. (ii) However, when only multilingual agents are allowed to
migrate, monolingual islands are typically formed. In such a case, when the
migration rate is sufficiently large, the majority of agents acquire a common
language, which spontaneously emerges with no indication of the surface-tension
driven coarsening. A relatively slow coarsening that takes place in a dense
static population is very fragile, and most likely, an arbitrarily small
migration rate can divert the system toward quick formation of monolingual
islands. Our work shows that migration influences language formation processes,
but additional details, such as the density or mobility of agents, are needed to
specify this influence more precisely.
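A toy sketch of the model studied here, the naming game on a lattice in which only
multilingual agents migrate; the lattice size, density, and exact interaction and
migration protocol are illustrative assumptions, not necessarily those of the paper.

    import random

    L, density, steps = 20, 0.7, 5000
    grid = {(x, y): [] for x in range(L) for y in range(L)}       # word inventory per site
    agents = random.sample(list(grid), int(density * L * L))      # occupied sites

    def neighbours(x, y):
        return [((x + dx) % L, (y + dy) % L)
                for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]]

    for _ in range(steps):
        occupied = set(agents)
        speaker = random.choice(agents)
        hearers = [n for n in neighbours(*speaker) if n in occupied]
        if hearers:                                                # naming-game interaction
            hearer = random.choice(hearers)
            word = random.choice(grid[speaker]) if grid[speaker] else f"w{random.random():.6f}"
            if word in grid[hearer]:                               # success: both collapse to the word
                grid[speaker], grid[hearer] = [word], [word]
            else:                                                  # failure: both keep/learn the word
                if word not in grid[speaker]:
                    grid[speaker].append(word)
                grid[hearer].append(word)
        mover = random.choice(agents)                              # migration of a multilingual agent
        if len(grid[mover]) > 1:
            empty = [n for n in neighbours(*mover) if n not in occupied]
            if empty:
                target = random.choice(empty)
                grid[target], grid[mover] = grid[mover], []
                agents[agents.index(mover)] = target

    monolingual = sum(len(grid[a]) == 1 for a in agents)
    print(f"{monolingual}/{len(agents)} agents are monolingual after {steps} steps")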
|
Continued fractions are used to give an alternate proof that $e^{x/y}$ is
irrational.
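One classical continued-fraction route to this statement (whether the paper follows
exactly this expansion cannot be told from the abstract) uses Lambert's expansion of
the hyperbolic tangent,
$$\tanh\!\left(\frac{x}{y}\right)=\cfrac{x}{y+\cfrac{x^{2}}{3y+\cfrac{x^{2}}{5y+\cfrac{x^{2}}{7y+\ddots}}}}.$$
Since the integer partial denominators $(2n-1)y$ eventually exceed the partial
numerators $x^{2}$, a classical irrationality criterion for such continued fractions
(due to Legendre) shows that $\tanh(x/y)$ is irrational for nonzero integers $x,y$;
hence $e^{2x/y}=\frac{1+\tanh(x/y)}{1-\tanh(x/y)}$ is irrational, and replacing $y$
by $2y$ gives the irrationality of $e^{x/y}$.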
|
We investigate a general formulation for clustering and transductive few-shot
learning, which integrates prototype-based objectives, Laplacian regularization
and supervision constraints from a few labeled data points. We propose a
concave-convex relaxation of the problem, and derive a computationally
efficient block-coordinate bound optimizer, with convergence guarantee. At each
iteration, our optimizer computes independent (parallel) updates for each
point-to-cluster assignment. Therefore, it could be trivially distributed for
large-scale clustering and few-shot tasks. Furthermore, we provide a thorough
convergence analysis based on point-to-set maps. We report comprehensive
clustering and few-shot learning experiments over various data sets, showing
that our method yields competitive performance, in terms of accuracy and
optimization quality, while scaling up to large problems. Using standard
training on the base classes, without resorting to complex meta-learning and
episodic-training strategies, our approach outperforms state-of-the-art
few-shot methods by significant margins, across various models, settings and
data sets. Surprisingly, we found that even standard clustering procedures
(e.g., K-means), which correspond to particular, non-regularized cases of our
general model, already achieve competitive performance in comparison to the
state-of-the-art in few-shot learning. These surprising results point to the
limitations of the current few-shot benchmarks, and question the viability of a
large body of convoluted few-shot learning techniques in the recent literature.
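A minimal numpy sketch of one plausible instantiation of such an objective, with
prototype (k-means-like) assignments, a Laplacian smoothing term, and parallel
per-point updates; the affinity matrix, the handling of supervision constraints from
labeled points, and the paper's exact bound updates are not reproduced here.

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def laplacian_kmeans(X, K, W, lam=1.0, iters=50, seed=0):
        """Soft prototype clustering with a Laplacian smoothing term."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), K, replace=False)]          # prototype initialization
        A = softmax(rng.normal(size=(len(X), K)))                  # soft point-to-cluster assignments
        for _ in range(iters):
            dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            # parallel per-point update: each row depends only on the previous assignments
            A = softmax(-dist + lam * (W @ A))
            centers = (A.T @ X) / A.sum(axis=0)[:, None]           # prototype (mean) update
        return A.argmax(axis=1), centers

    # toy usage: two blobs; W should be a k-NN affinity matrix (identity used here for brevity)
    X = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 4.0])
    labels, _ = laplacian_kmeans(X, K=2, W=np.eye(len(X)))
    print(labels)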
|
This paper deals with Hensel minimal, non-trivially valued fields $K$ of
equicharacteristic zero, whose axiomatic theory was introduced in a recent
article by Cluckers-Halupczok-Rideau. We additionally require that the standard
algebraic language be induced (up to interdefinability) for the imaginary sort
$RV$. This condition is satisfied by the majority of classical tame structures
on Henselian fields, including Henselian fields with analytic structure. The
main purpose is to carry over many results of our previous papers to the above
general axiomatic settings, including, among others, the theorem on existence of
the limit, curve selection, the closedness theorem, several non-Archimedean
versions of the Lojasiewicz inequalities as well as the theorems on extending
continuous definable functions and on existence of definable retractions. We
establish an embedding theorem for regular definable spaces and the definable
ultranormality of definable Hausdorff LC-spaces. Also given are examples showing
that curve selection and the closedness theorem, key results for numerous
applications, may no longer hold after expanding the language for the
leading term structure $RV$. In the case of Henselian fields with analytic
structure, a more precise version of the theorem on existence of the limit (a
version of Puiseux's theorem) is provided. Further, we establish definable
versions of resolution of singularities (hypersurface case) and transformation
to normal crossings by blowing up, on arbitrary strong analytic manifolds in
Hensel minimal expansions of analytic structures. Also introduced are
meromorphous functions, i.e. continuous quotients of strong analytic functions
on strong analytic manifolds. Finally, we prove a finitary meromorphous version
of the Nullstellensatz.
|