This paper presents a novel mechatronic setup intended for providing
respiratory support to patients suffering from pulmonary failure. The setup
relies upon the circulation of an oxygenated perfluorocarbon (PFC) through the
abdominal cavity. Such circulation provides a potential pathway for the
transport of oxygen to the bloodstream. However, the viability of this
technology for $CO_2$ clearance has not been established. Moreover, there is a
lack of experimental data enabling the modeling and identification of the
underlying dynamics of this technology. To address these gaps, we develop a
flexible experimental perfusion setup capable of monitoring and controlling key
variables such as perfusate flowrate, temperature, pressure, and oxygenation.
The paper (i) briefly summarizes the design of this setup; (ii) highlights the
degree to which its data acquisition system enables the collection and
cross-correlation of both perfusion-related and physiological variables; and
(iii) discusses the development of flow, pressure, and temperature control
algorithms for the setup. Experiments with large animals (swine) show that the
setup is capable of successfully controlling the perfusion process, as well as
gathering extensive data to support subsequent modeling and identification
studies.
|
Yb-substituted Ni-Zn ferrites have been synthesized using the sol-gel auto-combustion method. The structural characterization of the compositions has been performed by X-ray diffraction analysis, field emission scanning electron microscopy (FESEM), and a Quantum Design physical properties measurement system (PPMS), which confirmed the formation of a single-phase cubic spinel structure. The crystallite and average grain sizes are calculated and found to decrease with increasing Yb3+ content. The saturation magnetization and the magnetic moment in Bohr magnetons decrease while the coercivity increases with increasing Yb3+ content, which is successfully explained by Néel's collinear two-sublattice model and the critical size effect, respectively. The critical particle size has been estimated at 6.4 nm, the transition point between the single-domain regime (below the critical size) and the multi-domain regime (beyond the critical size). The Curie temperature decreases due to the weakening of the A-O-B superexchange interaction and the redistribution of cations, as confirmed by the M-T graph. The compositions retain a ferromagnetically ordered structure below the Curie temperature and become paramagnetic above it, making them plausible candidates for high-temperature magnetic device applications. The relative quality factor peak is obtained at a very high frequency, indicating that the compositions could also be suitable for high-frequency magnetic device applications.
|
Quasi-periodic plasmoid formation at the tip of magnetic streamer structures
is observed to occur in experiments on the Big Red Ball as well as in
simulations of these experiments performed with the extended-MHD code, NIMROD.
This plasmoid formation is found to occur on a characteristic timescale
dependent on pressure gradients and magnetic curvature in both experiment and
simulation. Single mode, or laminar, plasmoids exist when the pressure gradient
is modest, but give way to turbulent plasmoid ejection when the system drive is
higher, producing plasmoids of many sizes. However, a critical pressure
gradient is also observed, below which plasmoids are never formed. A simple
heuristic model of this plasmoid formation process is presented and suggested
to be a consequence of a dynamic loss of equilibrium in the high-$\beta$ region
of the helmet streamer. This model is capable of explaining the periodicity of
plasmoids observed in the experiment and simulations and produces plasmoid
periods of 90 minutes when applied to 2D models of solar streamers with a
height of $3R_\odot$. This is consistent with the location and frequency at
which periodic plasma blobs have been observed to form by LASCO and SECCHI
instruments.
|
Maximum Entropy (MaxEnt) reinforcement learning is a powerful learning
paradigm which seeks to maximize return under entropy regularization. However,
action entropy does not necessarily coincide with state entropy, e.g., when
multiple actions produce the same transition. Instead, we propose to maximize
the transition entropy, i.e., the entropy of next states. We show that the transition entropy can be decomposed into two terms, namely model-dependent transition entropy and action redundancy. In particular, we explore the latter
in both deterministic and stochastic settings and develop tractable
approximation methods in a near model-free setup. We construct algorithms to
minimize action redundancy and demonstrate their effectiveness on a synthetic
environment with multiple redundant actions as well as contemporary benchmarks
in Atari and Mujoco. Our results suggest that action redundancy is a
fundamental problem in reinforcement learning.
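For concreteness, one way to make such a decomposition explicit (in our own notation, offered as an illustrative sketch rather than the paper's exact definitions) is via the chain rule of entropy:

```latex
% For a state s, policy \pi(a|s), and transition kernel P(s'|s,a), the chain rule of entropy gives
% H(A, S' | s) = H(A | s) + H(S' | s, A) = H(S' | s) + H(A | s, S'), and therefore
\begin{equation}
  H(S' \mid s)
  = \underbrace{\mathbb{E}_{a \sim \pi(\cdot \mid s)}\!\big[ H\big(P(\cdot \mid s, a)\big) \big]}_{\text{model-dependent transition entropy}}
  + \underbrace{H(A \mid s) - H(A \mid s, S')}_{= \, I(A; S' \mid s)} .
\end{equation}
```

The second term, the mutual information between the action and the next state, shrinks precisely when several actions produce the same transition, which is one way to make the notion of action redundancy quantitative.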
|
Enhanced room-temperature electromechanical coupling in the lead-free
ferroelectric system $(1-x)$BaZr$_{0.2}$Ti$_{0.8}$O$_{3}$ -
$x$Ba$_{0.7}$Ca$_{0.3}$TiO$_{3}$ (abbreviated as BZCT) at $x=0.5$ is attributed
to the existence of a morphotropic phase region (MPR) containing an
intermediate orthorhombic ($O$) phase between terminal rhombohedral ($R$) BZT
and tetragonal ($T$) BCT phases. However, there is ambiguity regarding the
morphotropic phase transition in BZCT at room temperature - while some
experiments suggest a single $O$ phase within the MPR, others indicate
coexistence of three polar phases ($T+R+O$). Therefore, to understand the
thermodynamic stability of polar phases and its relation to electromechanical
switching during morphotropic phase transition in BZCT, we develop a Landau
potential based on the theory of polar anisotropy. Since intrinsic
electrostrictive anisotropy changes as a function of electromechanical
processing, we establish a correlation between the parameters of our potential
and the coefficients of electrostriction. We also conducted phase-field
simulations based on this potential to demonstrate changes in domain
configuration from single-phase $O$ to three-phase $T+R+O$ at the equimolar
composition with the increase in electrostrictive anisotropy. Diffusionless
phase diagrams and the corresponding piezoelectric coefficients obtained from
our model compare well with the experimental findings. An increase in electrostrictive anisotropy increases the degeneracy of the free energy at ambient temperature and pressure, leading to decreasing polar anisotropy,
although there is an accompanying increase in the electromechanical anisotropy
manifested by an increase in the difference between effective longitudinal and
transverse piezo-coefficients, $d_{33}$ and $d_{31}$.
|
Clinical event sequences consist of thousands of clinical events that
represent records of patient care in time. Developing accurate prediction
models for such sequences is of great importance for defining representations
of a patient state and for improving patient care. One important challenge of
learning a good predictive model of clinical sequences is patient-specific
variability. Based on underlying clinical complications, each patient's
sequence may consist of different sets of clinical events. However,
population-based models learned from such sequences may not accurately predict
patient-specific dynamics of event sequences. To address the problem, we
develop a new adaptive event sequence prediction framework that learns to
adjust its prediction for individual patients through an online model update.
|
Fitting network models to neural activity is an important tool in
neuroscience. A popular approach is to model a brain area with a probabilistic
recurrent spiking network whose parameters maximize the likelihood of the
recorded activity. Although this is widely used, we show that the resulting
model does not produce realistic neural activity. To correct for this, we suggest augmenting the log-likelihood with terms that measure the dissimilarity
between simulated and recorded activity. This dissimilarity is defined via
summary statistics commonly used in neuroscience and the optimization is
efficient because it relies on back-propagation through the stochastically
simulated spike trains. We analyze this method theoretically and show
empirically that it generates more realistic activity statistics. We find that
it improves upon other fitting algorithms for spiking network models like GLMs
(Generalized Linear Models) which do not usually rely on back-propagation. This
new fitting algorithm also enables the consideration of hidden neurons which is
otherwise notoriously hard, and we show that it can be crucial when trying to
infer the network connectivity from spike recordings.
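As a minimal illustration of the general idea of augmenting a likelihood with summary-statistic terms (our own toy sketch, not the authors' method or code), the snippet below fits a one-step Bernoulli GLM and adds a penalty on mismatched per-neuron firing rates; for simplicity the simulated statistic is computed from the differentiable firing probabilities rather than by back-propagating through sampled spike trains, and the weight `lambda_stat` is an arbitrary choice.

```python
import torch

torch.manual_seed(0)

# Toy "recorded" spike trains: n_neurons x n_steps binary matrix.
n_neurons, n_steps = 8, 500
recorded = (torch.rand(n_neurons, n_steps) < 0.1).float()

# One-step Bernoulli GLM: spike probability at time t depends on activity at t-1.
W = torch.zeros(n_neurons, n_neurons, requires_grad=True)
b = torch.full((n_neurons, 1), -2.0, requires_grad=True)

def spike_probs(spikes):
    drive = W @ spikes[:, :-1] + b                   # (n_neurons, n_steps - 1)
    return torch.sigmoid(drive)

lambda_stat = 10.0                                   # weight of the dissimilarity term (arbitrary choice)
opt = torch.optim.Adam([W, b], lr=0.05)

for step in range(200):
    p = spike_probs(recorded)
    target = recorded[:, 1:]

    # Negative log-likelihood of the recorded spikes under the model.
    nll = torch.nn.functional.binary_cross_entropy(p, target)

    # Dissimilarity between model and data summary statistics (per-neuron firing rates);
    # here the model rates are taken from the differentiable probabilities for simplicity.
    dissim = ((p.mean(dim=1) - target.mean(dim=1)) ** 2).mean()

    loss = nll + lambda_stat * dissim
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final augmented loss: {loss.item():.4f}")
```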
|
It is known that phonons have angular momentum, and when the time-reversal
symmetry (TRS) is broken, the total phonon angular momentum in the whole system
becomes nonzero. In this paper, we propose that, for a crystal without TRS, the appropriate angular momentum of phonons to consider is the canonical angular momentum, as opposed to the kinetic angular momentum used in previous works. Next,
we show that the angular momentum of phonons without TRS exhibits universal
behaviors near the $\Gamma$ point. We focus on in-plane oscillations in
two-dimensional crystals as an example. By breaking the TRS, one of the
acoustic phonon branches at the $\Gamma$ point acquires a gap. We show that the
angular momentum of its acoustic phonon with a gap has a peak with the height
$\pm \hbar$ regardless of the details of the system. From this, we find that
this peak height changes discontinuously by changing the sign of the
TRS-breaking parameter.
|
This paper describes a novel lossless compression method for point cloud
geometry, building on a recent lossy compression method that aimed at
reconstructing only the bounding volume of a point cloud. The proposed scheme
starts by partially reconstructing the geometry from the two depthmaps
associated with a single projection direction. The partial reconstruction
obtained from the depthmaps is completed to a full reconstruction of the point
cloud by sweeping section by section along one direction and encoding the
points which were not contained in the two depthmaps. The main ingredient is a
list-based encoding of the inner points (situated inside the feasible regions)
by a novel arithmetic three dimensional context coding procedure that
efficiently utilizes rotational invariances present in the input data.
State-of-the-art bits-per-voxel results are obtained on benchmark datasets.
|
If Z is an open subscheme of Spec ZZ, X is a sufficiently nice Z-model of a
smooth curve over QQ, and p is a closed point of Z, the Chabauty-Kim method
leads to the construction of locally analytic functions on X(ZZ_p) which vanish
on X(Z); we call such functions "Kim functions". At least in broad outline, the
method generalizes readily to higher dimensions. In fact, in some sense, the
surface M_{0,5} should be easier than the previously studied curve M_{0,4}
since its points are closely related to those of M_{0,4}, yet they satisfy a further integrality condition. This is mirrored by a certain "weight advantage" we encounter, because of which M_{0,5} possesses new Kim functions
not coming from M_{0,4}. Here we focus on the case "ZZ[1/6] in half-weight 4",
where we provide a first nontrivial example of a Kim function on a surface.
Central to our approach to Chabauty-Kim theory (as developed in works by S.
Wewers, D. Corwin, and the first author) is the possibility of separating the
geometric part of the computation from its arithmetic context. However, we find
that in this case the geometric step grows beyond the bounds of standard
algorithms running on current computers. Therefore, some ingenuity is needed to
solve this seemingly straightforward problem, and our new Kim function is huge.
|
X-ray pulse profile modeling of PSR J0740+6620, the most massive known
pulsar, with data from the NICER and XMM-Newton observatories recently led to a
measurement of its radius. We investigate this measurement's implications for
the neutron star equation of state (EoS), employing a nonparametric EoS model
based on Gaussian processes and combining information from other x-ray, radio
and gravitational-wave observations of neutron stars. Our analysis mildly
disfavors EoSs that support a disconnected hybrid star branch in the
mass-radius relation, a proxy for strong phase transitions, with a Bayes factor
of $6.9$. For such EoSs, the transition mass from the hadronic to the hybrid
branch is constrained to lie outside ($1,2$) $M_{\odot}$. We also find that the
conformal sound-speed bound is violated inside neutron star cores, which
implies that the core matter is strongly interacting. The squared sound speed
reaches a maximum of $0.75^{+0.25}_{-0.24}\, c^2$ at $3.60^{+2.25}_{-1.89}$
times nuclear saturation density at 90% credibility. Since all but the
gravitational-wave observations prefer a relatively stiff EoS, PSR J0740+6620's
central density is only $3.57^{+1.3}_{-1.3}$ times nuclear saturation, limiting
the density range probed by observations of cold, nonrotating neutron stars in
$\beta$-equilibrium.
|
Matched filters are routinely used in cosmology in order to detect galaxy
clusters from mm observations through their thermal Sunyaev-Zeldovich (tSZ)
signature. In addition, they naturally provide an observable, the detection
signal-to-noise or significance, which can be used as a mass proxy in number
counts analyses of tSZ-selected cluster samples. In this work, we show that
this observable is, in general, non-Gaussian, and that it suffers from a
positive bias, which we refer to as optimisation bias. Both aspects arise from
the fact that the signal-to-noise is constructed through an optimisation
operation on noisy data, and hold even if the cluster signal is modelled
perfectly well, no foregrounds are present, and the noise is Gaussian. After
reviewing the general mathematical formalism underlying matched filters, we
study the statistics of the signal-to-noise with a set of Monte Carlo mock
observations, finding it to be well-described by a unit-variance Gaussian for
signal-to-noise values of 6 and above, and quantify the magnitude of the
optimisation bias, for which we give an approximate expression that may be used
in practice. We also consider the impact of the bias on the cluster number
counts of Planck and the Simons Observatory (SO), finding it to be negligible
for the former and potentially significant for the latter.
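The optimisation bias described above can be reproduced with a few lines of toy Monte Carlo (a sketch under simplified assumptions: a 1D map, white unit-variance noise, and maximisation only over the candidate position):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D matched filter: Gaussian template, white unit-variance noise, no signal injected.
n_pix, n_maps = 256, 2000
x = np.arange(n_pix)
template = np.exp(-0.5 * ((x - n_pix // 2) / 5.0) ** 2)
template /= np.sqrt(np.sum(template ** 2))              # filtered noise then has unit variance
shifted = np.array([np.roll(template, s - n_pix // 2) for s in range(n_pix)])

snr_fixed, snr_max = [], []
for _ in range(n_maps):
    noise = rng.standard_normal(n_pix)
    filtered = shifted @ noise                          # SNR at every candidate position
    snr_fixed.append(filtered[n_pix // 2])              # SNR at a fixed, known position: unbiased
    snr_max.append(filtered.max())                      # SNR maximised over position: positively biased

print(f"mean SNR at fixed position : {np.mean(snr_fixed):+.3f}")
print(f"mean SNR after maximisation: {np.mean(snr_max):+.3f}")
```

Even with no signal present, the maximised statistic has a clearly positive mean, whereas the fixed-position SNR averages to zero.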
|
Self-propelled droplets are composed of droplets driven by the waves they
emit when bouncing on a vertically vibrated bath. Their dynamics is based on an
interplay between the waves and their source. The existence of self-spinning
modes is still controversial. Here, we show experimentally that these modes are
stable for a class of droplets and emerge spontaneously from noise
fluctuations. We perform a discrete stability analysis to confirm experimental
observations. In addition, we show that these self-spinning modes provide a
unique opportunity for a direct experimental measurement of parameters used in
the wave-driven droplet models found in the literature to enable comparison and
calibration.
|
Artificial Neural Networks (ANNs) became popular due to their successful application to difficult problems such as image and speech recognition. However, when practitioners want to design an ANN, they need to undergo a laborious process of selecting a set of parameters and a topology. Currently, there are several state-of-the-art methods that allow for the automatic selection of some of these aspects. Learning rate optimizers are a set of such techniques that search for good values of learning rates. Whilst these techniques are effective and have yielded good results over the years, they are general solutions, i.e., they do not consider the characteristics of a specific network.
We propose a framework called AutoLR to automatically design learning rate optimizers. Two versions of the system are detailed. The first one, Dynamic AutoLR, evolves static and dynamic learning rate optimizers based on the current epoch and the previous learning rate. The second version, Adaptive AutoLR, evolves adaptive optimizers that can fine-tune the learning rate for each network weight, which makes them generally more effective. The results are competitive with the best state-of-the-art methods, even outperforming them in some scenarios. Furthermore, the system evolved an optimizer, ADES, that appears to be novel and innovative since, to the best of our knowledge, it has a structure that differs from state-of-the-art methods.
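To make the distinction concrete, a dynamic learning rate optimizer in the sense used above is simply a rule that maps the current epoch and the previous learning rate to a new learning rate. The toy rule below is a hand-written illustration of that interface, not an optimizer evolved by AutoLR, and `train_one_epoch` is a hypothetical placeholder.

```python
# A dynamic learning-rate rule in the Dynamic AutoLR sense: new_lr = f(epoch, previous_lr).
# The expression below is a hand-written illustration, not a rule evolved by AutoLR.
def dynamic_lr(epoch: int, prev_lr: float) -> float:
    warmup_epochs = 5
    if epoch < warmup_epochs:
        return prev_lr * 1.5        # grow the rate during a short warm-up
    return prev_lr * 0.97           # then decay it geometrically

lr = 1e-4
for epoch in range(30):
    lr = dynamic_lr(epoch, lr)
    # train_one_epoch(model, optimizer, lr)   # hypothetical training step
    print(f"epoch {epoch:2d}: lr = {lr:.6f}")
```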
|
We apply the Ionization Region Model (IRM) and the Orsay Boltzmann equation
for ELectrons coupled with Ionization and eXcited states kinetics (OBELIX)
model to study the electron kinetics of a high power impulse magnetron
sputtering (HiPIMS) discharge. In the IRM the bulk (cold) electrons are assumed
to exhibit a Maxwellian energy distribution and the secondary (hot) electrons,
emitted from the target surface upon ion bombardment, are treated as a high
energy tail, while in the OBELIX the electron energy distribution is calculated
self-consistently using an isotropic Boltzmann equation. The two models are
merged in the sense that the output from the IRM is used as an input for
OBELIX. The temporal evolutions of the particle densities are found to agree
very well between the two models. Furthermore, a very good agreement is
demonstrated between the bi-Maxwellian electron energy distribution assumed by
the IRM and the electron energy distribution calculated by the OBELIX model. It
can therefore be concluded that assuming a bi-Maxwellian electron energy
distribution, constituting a cold bulk electron group and a hot secondary
electron group, is a good approximation for modeling the HiPIMS discharge.
|
We study the outflow dynamics and clogging phenomena of mixtures of soft,
elastic low-friction spherical grains and hard frictional spheres of similar
size in a quasi-two-dimensional (2D) silo with narrow orifice at the bottom.
Previous work has demonstrated the crucial influence of elasticity and friction
on silo discharge. We show that the addition of small amounts, even as low as
5\%, of hard grains to an ensemble of soft, low-friction grains already has
significant consequences. The mixtures allow a direct comparison of the
probabilities of the different types of particles to clog the orifice. We
analyze these probabilities for the hard, frictional and the soft, slippery
grains on the basis of their participation in the blocking arches, and compare
outflow velocities and durations of non-permanent clogs for different
compositions of the mixtures. Experimental results are compared with numerical
simulations. The latter strongly suggest a significant influence of the
inter-species particle friction.
|
Isogeometric approach applied to Boundary Element Methods is an emerging
research area. In this context, the aim of the present contribution is that of
investigating, from a numerical point of view, the Symmetric Galerkin Boundary
Element Method (SGBEM) devoted to the solution of 2D boundary value problems
for the Laplace equation, where the boundary and the unknowns on it are both
represented by B-splines. We mainly compare this approach, which we call
IGA-SGBEM, with a curvilinear SGBEM, which operates on any boundary given by an explicit parametric representation and where the approximate solution is obtained using a Lagrangian basis. Both techniques are further compared with a
standard (conventional) SGBEM approach, where the boundary of the assigned
problem is approximated by linear elements and the numerical solution is
expressed in terms of a Lagrangian basis. Several examples will be presented and discussed, underlining the benefits and drawbacks of all the above-mentioned
approaches.
|
The modular decomposition of a symmetric map $\delta\colon X\times X \to
\Upsilon$ (or, equivalently, a set of symmetric binary relations, a
2-structure, or an edge-colored undirected graph) is a natural construction to
capture key features of $\delta$ in labeled trees. A map $\delta$ is explained
by a vertex-labeled rooted tree $(T,t)$ if the label $\delta(x,y)$ coincides
with the label of the last common ancestor of $x$ and $y$ in $T$, i.e., if
$\delta(x,y)=t(\mathrm{lca}(x,y))$. Only maps whose modular decomposition does
not contain prime nodes, i.e., the symbolic ultrametrics, can be explained in this manner. Here we consider rooted median graphs as a generalization of (modular decomposition) trees to explain symmetric maps. We first show that every symmetric map can be explained by "extended" hypercubes and half-grids. We then derive a linear-time algorithm that stepwise resolves prime vertices in the modular decomposition tree to obtain a rooted and labeled
median graph that explains a given symmetric map $\delta$. We argue that the
resulting "tree-like" median graphs may be of use in phylogenetics as a model
of evolutionary relationships.
|
Deep Metric Learning (DML) learns a non-linear semantic embedding from input data that brings similar pairs together while keeping dissimilar data away from each other. To this end, many different methods have been proposed over the last decade, with promising results in various applications. The success of a DML algorithm
greatly depends on its loss function. However, no loss function is perfect, and
it deals only with some aspects of an optimal similarity embedding. Besides,
the generalizability of the DML on unseen categories during the test stage is
an important matter that is not considered by existing loss functions. To
address these challenges, we propose novel approaches to combine different
losses built on top of a shared deep feature extractor. The proposed ensemble
of losses enforces the deep model to extract features that are consistent with
all losses. Since the selected losses are diverse and each emphasizes different
aspects of an optimal semantic embedding, our effective combining methods yield
a considerable improvement over any individual loss and generalize well on
unseen categories. Here, there is no limitation in choosing loss functions, and
our methods can work with any set of existing ones. Besides, they can optimize
each loss function as well as its weight in an end-to-end paradigm with no need
to adjust any hyper-parameter. We evaluate our methods on some popular datasets
from the machine vision domain in conventional Zero-Shot-Learning (ZSL)
settings. The results are very encouraging and show that our methods outperform
all baseline losses by a large margin in all datasets.
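A minimal sketch of the general recipe (our own illustration, not the authors' implementation): several metric-learning losses are computed on the embeddings of a shared feature extractor, and their combination weights are learned end-to-end. The softmax over the weights is one simple way to keep them positive and normalised, and all layer sizes and the toy data are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class Embedder(nn.Module):
    """Shared feature extractor producing an L2-normalised embedding."""
    def __init__(self, in_dim=32, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def contrastive(anchor, positive, negative, margin=0.5):
    # Classic contrastive loss on positive/negative pairs derived from the triplets.
    pos = ((anchor - positive) ** 2).sum(dim=1)
    neg_dist = torch.sqrt(((anchor - negative) ** 2).sum(dim=1) + 1e-12)
    return (pos + F.relu(margin - neg_dist) ** 2).mean()

embedder = Embedder()
triplet = nn.TripletMarginLoss(margin=0.2)
loss_weights = nn.Parameter(torch.zeros(2))      # combination weights, learned end-to-end (a naive scheme)
opt = torch.optim.Adam(list(embedder.parameters()) + [loss_weights], lr=1e-3)

for step in range(100):
    # Toy triplets: anchor/positive from one cluster, negative from a shifted cluster.
    a = torch.randn(64, 32)
    p = a + 0.1 * torch.randn_like(a)
    n = torch.randn(64, 32) + 3.0
    za, zp, zn = embedder(a), embedder(p), embedder(n)

    losses = torch.stack([triplet(za, zp, zn), contrastive(za, zp, zn)])
    weights = torch.softmax(loss_weights, dim=0)  # keep the weights positive and normalised
    loss = (weights * losses).sum()

    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned loss weights:", torch.softmax(loss_weights, dim=0).detach().tolist())
```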
|
We use FIRE-2 simulations to examine 3-D variations of gas-phase elemental
abundances of [O/H], [Fe/H], and [N/H] in 11 Milky Way (MW) and M31-mass
galaxies across their formation histories at $z \leq 1.5$ ($t_{\rm lookback}
\leq 9.4$ Gyr), motivated by characterizing the initial conditions of stars for
chemical tagging. Gas within $1$ kpc of the disk midplane is vertically
homogeneous to $\lesssim 0.008$ dex at all $z \leq 1.5$. We find negative
radial gradients (metallicity decreases with galactocentric radius) at all
times, which steepen over time from $\approx -0.01$ dex kpc$^{-1}$ at $z = 1$
($t_{\rm lookback} = 7.8$ Gyr) to $\approx -0.03$ dex kpc$^{-1}$ at $z = 0$,
and which broadly agree with observations of the MW, M31, and nearby
MW/M31-mass galaxies. Azimuthal variations at fixed radius are typically $0.14$
dex at $z = 1$, reducing to $0.05$ dex at $z = 0$. Thus, over time radial
gradients become steeper while azimuthal variations become weaker (more
homogeneous). As a result, azimuthal variations were larger than radial
variations at $z \gtrsim 0.8$ ($t_{\rm lookback} \gtrsim 6.9$ Gyr).
Furthermore, elemental abundances are measurably homogeneous (to $\lesssim
0.05$ dex) across a radial range of $\Delta R \approx 3.5$ kpc at $z \gtrsim 1$
and $\Delta R \approx 1.7$ kpc at $z = 0$. We also measure full distributions
of elemental abundances, finding typically negatively skewed normal
distributions at $z \gtrsim 1$ that evolve to typically Gaussian distributions
by $z = 0$. Our results on gas abundances inform the initial conditions for
stars, including the spatial and temporal scales for applying chemical tagging
to understand stellar birth in the MW.
|
We characterize mass, momentum, energy and metal outflow rates of multi-phase
galactic winds in a suite of FIRE-2 cosmological "zoom-in" simulations from the
Feedback in Realistic Environments (FIRE) project. We analyze simulations of
low-mass dwarfs, intermediate-mass dwarfs, Milky Way-mass halos, and
high-redshift massive halos. Consistent with previous work, we find that dwarfs
eject about 100 times more gas from their interstellar medium (ISM) than they
form in stars, while this mass "loading factor" drops below one in massive
galaxies. Most of the mass is carried by the hot phase ($>10^5$ K) in massive
halos and the warm phase ($10^3-10^5$ K) in dwarfs; cold outflows ($<10^3$ K)
are negligible except in high-redshift dwarfs. Energy, momentum and metal
loading factors from the ISM are of order unity in dwarfs and significantly
lower in more massive halos. Hot outflows have $2-5\times$ higher specific
energy than needed to escape from the gravitational potential of dwarf halos;
indeed, in dwarfs, the mass, momentum, and metal outflow rates increase with
radius whereas energy is roughly conserved, indicating swept up halo gas.
Burst-averaged mass loading factors tend to be larger during more powerful star
formation episodes and when the inner halo is not virialized, but we see
effectively no trend with the dense ISM gas fraction. We discuss how our
results can guide future controlled numerical experiments that aim to elucidate
the key parameters governing galactic winds and the resulting associated
preventative feedback.
|
Controlling bias in training datasets is vital for ensuring equal treatment,
or parity, between different groups in downstream applications. A naive
solution is to transform the data so that it is statistically independent of
group membership, but this may throw away too much information when a
reasonable compromise between fairness and accuracy is desired. Another common
approach is to limit the ability of a particular adversary who seeks to
maximize parity. Unfortunately, representations produced by adversarial
approaches may still retain biases as their efficacy is tied to the complexity
of the adversary used during training. To this end, we theoretically establish
that by limiting the mutual information between representations and protected
attributes, we can assuredly control the parity of any downstream classifier.
We demonstrate an effective method for controlling parity through mutual
information based on contrastive information estimators and show that they
outperform approaches that rely on variational bounds based on complex
generative models. We test our approach on UCI Adult and Heritage Health
datasets and demonstrate that our approach provides more informative
representations across a range of desired parity thresholds while providing
strong theoretical guarantees on the parity of any downstream algorithm.
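As an illustrative sketch of the kind of contrastive information estimator mentioned above (our own minimal version with a bilinear critic, not the authors' estimator or training procedure), the function below estimates a lower bound on I(z; a) and can be used to check how much protected-attribute information a representation retains:

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

def infonce_mi_estimate(z, a, epochs=300, lr=1e-2):
    """Train a bilinear critic to maximise an InfoNCE lower bound on I(z; a)."""
    critic = nn.Bilinear(z.shape[1], a.shape[1], 1)
    opt = torch.optim.Adam(critic.parameters(), lr=lr)
    n = z.shape[0]
    for _ in range(epochs):
        # Score every (z_i, a_j) pair; joint pairs sit on the diagonal.
        scores = critic(z.repeat_interleave(n, 0), a.repeat(n, 1)).reshape(n, n)
        bound = torch.diagonal(torch.log_softmax(scores, dim=1)).mean() + math.log(n)
        (-bound).backward()
        opt.step()
        opt.zero_grad()
    return bound.item()

# Toy data: a binary protected attribute, one representation that leaks it and one that does not.
n = 256
a = (torch.rand(n, 1) > 0.5).float()
z_leaky = torch.randn(n, 4) + 2.0 * a        # depends on the protected attribute
z_fair = torch.randn(n, 4)                   # independent of the protected attribute

print(f"estimated I(z_leaky; a) >= {infonce_mi_estimate(z_leaky, a):.3f} nats")
print(f"estimated I(z_fair;  a) >= {infonce_mi_estimate(z_fair, a):.3f} nats")
```

In the fairness setting sketched above, such an estimate can then enter the training objective as a penalty on the encoder, so that representations with low estimated I(z; a) yield controllable parity downstream.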
|
In programming education, it makes a difference whether you are dealing with
beginners or advanced students. As our future students will become even more
tech-savvy, it is necessary to assess programming skills appropriately and
quickly to protect them from boredom and optimally support the learning
process. In this work, we advocate for the use of slice-based cohesion metrics
to assess the process of program construction in a learning analytics setting.
We argue that semantically related parts during program construction are an
essential part of programming skills. Therefore, we propose using cohesion
metrics on the level of variables to identify programmers' trains of thought
based on the cohesion of semantically related parts during program
construction.
|
A logic is said to admit an equational completeness theorem when it can be
interpreted into the equational consequence relative to some class of algebras.
We characterize logics admitting an equational completeness theorem that are
either locally tabular or have some tautology. In particular, it is shown that
a protoalgebraic logic admits an equational completeness theorem precisely when
it has two distinct logically equivalent formulas. While the problem of
determining whether a logic admits an equational completeness theorem is shown
to be decidable both for logics presented by a finite set of finite matrices
and for locally tabular logics presented by a finite Hilbert calculus, it
becomes undecidable for arbitrary logics presented by finite Hilbert calculi.
|
Efficient link configuration in millimeter wave (mmWave) communication
systems is a crucial yet challenging task due to the overhead imposed by beam
selection. For vehicle-to-infrastructure (V2I) networks, side information from
LIDAR sensors mounted on the vehicles has been leveraged to reduce the beam
search overhead. In this letter, we propose a federated LIDAR aided beam
selection method for V2I mmWave communication systems. In the proposed scheme,
connected vehicles collaborate to train a shared neural network (NN) on their
locally available LIDAR data during normal operation of the system. We also
propose a reduced-complexity convolutional NN (CNN) classifier architecture and
LIDAR preprocessing, which significantly outperforms previous works in terms of
both the performance and the complexity.
|
Large-scale multi-agent cooperative control problems have materially enjoyed
the scalability, adaptivity, and flexibility of decentralized optimization.
However, due to the mandatory iterative communications between the agents and
the system operator, the decentralized architecture is vulnerable to malicious
attacks and privacy breaches. Current research on addressing privacy preservation
of both agents and the system operator in cooperative decentralized
optimization with strongly coupled objective functions and constraints is still
primitive. To fill in the gaps, this paper proposes a novel privacy-preserving
decentralized optimization paradigm based on Paillier cryptosystem. The
proposed paradigm achieves ideal correctness and security, and resists attacks from a range of adversaries. The efficacy and efficiency of the
proposed approach are verified via numerical simulations and a real-world
physical platform.
|
A heterodimensional cycle is an invariant set of a dynamical system
consisting of two hyperbolic periodic orbits with different dimensions of their
unstable manifolds and a pair of orbits that connect them. For systems which
are at least $C^2$, we show that bifurcations of a coindex-1 heterodimensional
cycle within a generic 2-parameter family always create robust
heterodimensional dynamics, i.e., chain-transitive sets which contain
coexisting orbits with different numbers of positive Lyapunov exponents and
persist for an open set of parameter values. In particular, we solve the
so-called $C^r$-stabilization problem for the coindex-1 heterodimensional
cycles in any regularity class $r=2,\ldots,\infty,\omega$. The results are
based on the observation that arithmetic properties of moduli of topological
conjugacy of systems with heterodimensional cycles determine the emergence of
Bonatti-Diaz blenders.
|
In this paper we introduce a discrete fractional resolvent family
$\{S_{\alpha,\beta}^n\}_{n\in\mathbb{N}_0}$ generated by a closed linear
operator in a Banach space $X$ for a given $\alpha,\beta>0.$ Moreover, we study
its main properties and, as a consequence, we obtain a method to study the
existence and uniqueness of the solutions to discrete fractional difference
equations in a Banach space.
|
Virtual meetings are critical for remote work because of the need for
synchronous collaboration in the absence of in-person interactions. In-meeting
multitasking is closely linked to people's productivity and wellbeing. However,
we currently have limited understanding of multitasking in remote meetings and
its potential impact. In this paper, we present what we believe is the most
comprehensive study of remote meeting multitasking behavior through an analysis
of a large-scale telemetry dataset collected from February to May 2020 of U.S.
Microsoft employees and a 715-person diary study. Our results demonstrate that
intrinsic meeting characteristics such as size, length, time, and type,
significantly correlate with the extent to which people multitask, and
multitasking can lead to both positive and negative outcomes. Our findings
suggest important best-practice guidelines for remote meetings (e.g., avoid
important meetings in the morning) and design implications for productivity
tools (e.g., support positive remote multitasking).
|
We present a comprehensive analysis of the potential sensitivity of the
Electron-Ion Collider (EIC) to charged lepton flavor violation (CLFV) in the
channel $ep\to \tau X$, within the model-independent framework of the Standard
Model Effective Field Theory (SMEFT). We compute the relevant cross sections to
leading order in QCD and electroweak corrections and perform simulations of
signal and SM background events in various $\tau$ decay channels, suggesting
simple cuts to enhance the associated estimated efficiencies. To assess the
discovery potential of the EIC in $\tau$-$e$ transitions, we study the
sensitivity of other probes of this physics across a broad range of energy
scales, from $pp \to e \tau X$ at the Large Hadron Collider to decays of $B$
mesons and $\tau$ leptons, such as $\tau \to e \gamma$, $\tau \to e \ell^+
\ell^-$, and crucially the hadronic modes $\tau \to e Y$ with $Y \in \{ \pi, K,
\pi \pi, K \pi, ...\}$. We find that electroweak dipole and four-fermion
semi-leptonic operators involving light quarks are already strongly constrained
by $\tau$ decays, while operators involving the $c$ and $b$ quarks present more
promising discovery potential for the EIC. An analysis of three models of
leptoquarks confirms the expectations based on the SMEFT results. We also
identify future directions needed to maximize the reach of the EIC in CLFV
searches: these include an optimization of the $\tau$ tagger in hadronic
channels, an exploration of background suppression through tagging $b$ and $c$
jets in the final state, and a global fit by turning on all SMEFT couplings,
which will likely reveal new discovery windows for the EIC.
|
Word vector representations enable machines to encode human language for
spoken language understanding and processing. Confusion2vec, motivated from
human speech production and perception, is a word vector representation which
encodes ambiguities present in human spoken language in addition to semantics
and syntactic information. Confusion2vec provides a robust spoken language
representation by considering inherent human language ambiguities. In this
paper, we propose a novel word vector space estimation by unsupervised learning
on lattices output by an automatic speech recognition (ASR) system. We encode
each word in confusion2vec vector space by its constituent subword character
n-grams. We show the subword encoding helps better represent the acoustic
perceptual ambiguities in human spoken language via information modeled on
lattice structured ASR output. The usefulness of the proposed Confusion2vec
representation is evaluated using semantic, syntactic and acoustic analogy and
word similarity tasks. We also show the benefits of subword modeling for
acoustic ambiguity representation on the task of spoken language intent
detection. The results significantly outperform existing word vector
representations when evaluated on erroneous ASR outputs. We demonstrate that
Confusion2vec subword modeling eliminates the need for retraining/adapting the
natural language understanding models on ASR transcripts.
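For illustration, constituent character n-grams of the kind referred to above can be produced as in the following sketch; the boundary markers and n-gram range are our illustrative choices, in the style of fastText, not necessarily the configuration used for Confusion2vec.

```python
def char_ngrams(word: str, n_min: int = 3, n_max: int = 6):
    """Constituent character n-grams of a word, fastText-style, with '<' and '>'
    as word-boundary markers (illustrative choices)."""
    padded = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        grams.extend(padded[i:i + n] for i in range(len(padded) - n + 1))
    return grams

# A word vector can then be formed by summing the vectors of its n-grams, letting
# acoustically confusable words (e.g. "flour" vs "flower") share subword components.
print(char_ngrams("flower"))
```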
|
This work proposes to use evolutionary computation as a pathway to allow a
new perspective on the modeling of energy expenditure and recovery of an
individual athlete during exercise. We revisit a theoretical concept called the
"three component hydraulic model" which is designed to simulate metabolic
systems during exercise and which is able to address recently highlighted
shortcomings of currently applied performance models. This hydraulic model has
not been entirely validated on individual athletes because it depends on
physiological measures that cannot be acquired in the required precision or
quantity. This paper introduces a generalized interpretation and formalization
of the three component hydraulic model that removes its ties to concrete
metabolic measures and makes it possible to use evolutionary computation to fit its parameters to an athlete.
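As a minimal sketch of the fitting step described above (our own toy illustration, not the paper's method): an evolutionary optimizer searches the parameter space of a performance model so that its predictions match an athlete's exercise data. Here the "model" is a deliberately simple critical-power-style placeholder rather than the generalized hydraulic model, and SciPy's differential evolution stands in for whatever evolutionary algorithm is actually used.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Observed (toy) data: times to exhaustion at several constant power outputs.
powers = np.array([250.0, 300.0, 350.0, 400.0])          # W
observed_tte = np.array([900.0, 300.0, 150.0, 90.0])      # s

def predicted_tte(params, power):
    # Deliberately simple critical-power-style placeholder, NOT the generalized hydraulic model:
    # time to exhaustion = w_prime / (power - critical_power).
    critical_power, w_prime = params
    return w_prime / np.maximum(power - critical_power, 1e-6)

def fitness(params):
    return np.mean((predicted_tte(params, powers) - observed_tte) ** 2)

result = differential_evolution(fitness, bounds=[(100.0, 249.0), (5_000.0, 50_000.0)], seed=0)
print("fitted parameters:", result.x, "mse:", result.fun)
```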
|
The Epoch of Reionisation (EoR) is the period within which the neutral
universe transitioned to an ionised one. This period remains unobserved using
low-frequency radio interferometers which target the 21 cm signal of neutral
hydrogen emitted in this era. The Murchison Widefield Array (MWA) radio
telescope was built with the detection of this signal as one of its major
science goals. One of the most significant challenges towards a successful
detection is that of calibration, especially in the presence of the Earth's
ionosphere. By introducing refractive source shifts, distorting source shapes
and scintillating flux densities, the ionosphere is a major nuisance in
low-frequency radio astronomy. We introduce SIVIO, a software tool developed
for simulating observations of the MWA through different ionospheric conditions
estimated using thin screen approximation models and propagated into the
visibilities. This enables us to directly assess the impact of the ionosphere
on observed EoR data and the resulting power spectra. We show that the
simulated data captures the dispersive behaviour of ionospheric effects. We
show that the spatial structure of the simulated ionospheric media is
accurately reconstructed either from the resultant source positional offsets or
from parameters evaluated during the data calibration procedure. In turn, this
will inform on the best strategies of identifying and efficiently eliminating
ionospheric contamination in EoR data moving into the Square Kilometre Array
era.
|
This article presents a new hand architecture with three under-actuated
fingers. Each finger performs spatial movements to achieve more complex and
varied grasping than the existing planar-movement fingers. The purpose of this
hand is to grasp complex-shaped workpieces as they leave the machining centres.
Among the taxonomy of grips, cylindrical and spherical grips are often used to
grasp heavy objects. A combination of these two modes makes it possible to
capture most of the workpieces machined with 5-axis machines. However, the
change in grasping mode requires the fingers to reconfigure themselves to
perform spatial movements. This solution requires the addition of two or three
actuators to change the position of the fingers and requires sensors to
recognize the shape of the workpiece and determine the type of grasp to be
used. This article proposes to extend the notion of under-actuated fingers to
spatial movements. After a presentation of the kinematics of the fingers, the
problem of stability is discussed as well as the transmission of forces in this
mechanism. The complete approach for calculating the stability conditions is
presented from the study of Jacobian force transmission matrices. CAD
representations of the hand and its behavior in spherical and cylindrical grips
are presented.
|
The main contribution of this manuscript is a local normal form for
Hamiltonian actions of Poisson-Lie groups $K$ on a symplectic manifold equipped
with an $AN$-valued moment map, where $AN$ is the dual Poisson-Lie group of
$K$. Our proof uses the delinearization theorem of Alekseev which relates a
classical Hamiltonian action of $K$ with $\mathfrak{k}^*$-valued moment map to
a Hamiltonian action with an $AN$-valued moment map, via a deformation of
symplectic structures. We obtain our main result by proving a ``delinearization
commutes with symplectic quotients'' theorem which is also of independent
interest, and then putting this together with the local normal form theorem for
classical Hamiltonian actions with $\mathfrak{k}^*$-valued moment maps. A key
ingredient for our main result is the delinearization
$\mathcal{D}(\omega_{can})$ of the canonical symplectic structure on $T^*K$, so
we additionally take some steps toward explicit computations of
$\mathcal{D}(\omega_{can})$. In particular, in the case $K=SU(2)$, we obtain
explicit formulas for the matrix coefficients of $\mathcal{D}(\omega_{can})$
with respect to a natural choice of coordinates on $T^*SU(2)$.
|
We propose a three-dimensional (3D) multimodal medical imaging system that
combines freehand ultrasound and structured light 3D reconstruction in a single
coordinate system without requiring registration. To the best of our knowledge,
these techniques have not been combined before as a multimodal imaging
technique. The system complements the internal 3D information acquired with
ultrasound, with the external surface measured with the structured light
technique. Moreover, the ultrasound probe's optical tracking for pose
estimation was implemented based on a convolutional neural network.
Experimental results show the system's high accuracy and reproducibility, as
well as its potential for preoperative and intraoperative applications. The
experimental multimodal error, or the distance between two surfaces obtained with
different modalities, was 0.12 mm. The code is available as a Github
repository.
|
Solving planning and scheduling problems for multiple tasks with highly
coupled state and temporal constraints is notoriously challenging. An appealing
approach to effectively decouple the problem is to judiciously order the events
such that decisions can be made over sequences of tasks. As many problems
encountered in practice are over-constrained, we must instead find relaxed
solutions in which certain requirements are dropped. This motivates a
formulation of optimality with respect to the costs of relaxing constraints and
the problem of finding an optimal ordering under which this relaxing cost is
minimum. In this paper, we present Generalized Conflict-directed Ordering
(GCDO), a branch-and-bound ordering method that generates an optimal total
order of events by leveraging the generalized conflicts of both inconsistency
and suboptimality from sub-solvers for cost estimation and solution space
pruning. Due to its ability to reason over generalized conflicts, GCDO is much
more efficient in finding high-quality total orders than the previous
conflict-directed approach CDITO. We demonstrate this by benchmarking against CDITO and Mixed Integer-Linear Programming (MILP) on temporal network configuration problems, which involve managing networks over time and making necessary tradeoffs between network flows. Within a runtime limit, our algorithm is able to solve two orders of magnitude more benchmark problems to optimality than CDITO and twice as many as MILP.
|
Cryo-electron microscopy (cryo-EM) has become a major experimental technique
to determine the structures of large protein complexes and molecular
assemblies, as evidenced by the 2017 Nobel Prize. Although cryo-EM has been
drastically improved to generate high-resolution three-dimensional (3D) maps
that contain detailed structural information about macromolecules, the
computational methods for using the data to automatically build structure
models are lagging far behind. The traditional cryo-EM model building approach
is template-based homology modeling. Manual de novo modeling is very
time-consuming when no template model is found in the database. In recent
years, de novo cryo-EM modeling using machine learning (ML) and deep learning
(DL) has ranked among the top-performing methods in macromolecular structure
modeling. Deep-learning-based de novo cryo-EM modeling is an important
application of artificial intelligence, with impressive results and great
potential for the next generation of molecular biomedicine. Accordingly, we
systematically review the representative ML/DL-based de novo cryo-EM modeling
methods, and their significance is discussed from both practical and methodological viewpoints. We also briefly describe the background of the cryo-EM data processing workflow. Overall, this review provides an introductory guide
to modern research on artificial intelligence (AI) for de novo molecular
structure modeling and future directions in this emerging field.
|
2020 has been a year marked by the COVID-19 pandemic. This event has caused
disruptions to many aspects of normal life. An important aspect in reducing the
impact of the pandemic is to control its spread. Studies have shown that one
effective method in reducing the transmission of COVID-19 is to wear masks.
Strict mask-wearing policies have been met with not only public sensation but
also practical difficulty. We cannot hope to manually check if everyone on a
street is wearing a mask properly. Existing technology to help automate mask
checking uses deep learning models on real-time surveillance camera footage. The current dominant method to perform real-time mask detection uses Mask R-CNN with ResNet as the backbone. While giving good detection results, this method
is computationally intensive and its efficiency in real-time face mask
detection is not ideal. Our research proposes a new approach to mask detection by replacing Mask R-CNN with a more efficient model, YOLO, to increase the processing speed of real-time mask detection without compromising on accuracy. Besides, given the small volume as well as the extreme imbalance of the mask detection datasets, we adopt a recent advance in few-shot visual classification, Simple CNAPS, to improve the classification performance.
|
In computed tomography, data consist of measurements of the attenuation of
X-rays passing through an object. The goal is to reconstruct the linear
attenuation coefficient of the object's interior. For each position of the
X-ray source, characterized by its angle with respect to a fixed coordinate
system, one measures a set of data referred to as a view. A common assumption
is that these view angles are known, but in some applications they are known
with imprecision. We propose a framework to solve a Bayesian inverse problem
that jointly estimates the view angles and an image of the object's attenuation
coefficient. We also include a few hyperparameters that characterize the
likelihood and the priors. Our approach is based on a Gibbs sampler where the
associated conditional densities are simulated using different sampling schemes
- hence the term hybrid. In particular, the conditional distribution associated
with the reconstruction is nonlinear in the image pixels, non-Gaussian and
high-dimensional. We approach this distribution by constructing a Laplace
approximation that represents the target conditional locally at each Gibbs
iteration. This enables sampling of the attenuation coefficients in an
efficient manner using iterative reconstruction algorithms. The numerical
results show that our algorithm is able to jointly identify the image and the
view angles, while also providing uncertainty estimates of both. We demonstrate
our method with 2D X-ray computed tomography problems using fan beam
configurations.
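The structure of such a hybrid Gibbs sampler can be illustrated on a deliberately simple toy problem (our own sketch: a sinusoidal forward model instead of a CT projector, an exact Gaussian draw for the linear coefficients, and a plain Metropolis step for the angle in place of the Laplace-approximation scheme described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint problem: unknown coefficients x = (offset, amplitude) and an unknown view angle theta.
# Measurements: y_k = x1 + x2 * sin(theta + d_k) + noise, at known angular offsets d_k.
offsets = np.linspace(0.0, 2 * np.pi, 16, endpoint=False)
def design(theta):
    return np.column_stack([np.ones_like(offsets), np.sin(theta + offsets)])

x_true, theta_true, sigma = np.array([1.0, 2.0]), 0.30, 0.05
y = design(theta_true) @ x_true + sigma * rng.normal(size=offsets.size)

prior_prec = 1e-2 * np.eye(2)             # weak Gaussian prior on x
x, theta = np.zeros(2), 0.0
theta_samples = []

def log_lik(theta, x):
    r = y - design(theta) @ x
    return -0.5 * np.sum(r ** 2) / sigma ** 2

for it in range(5000):
    # (1) x | theta, y is linear-Gaussian: draw from its Gaussian conditional exactly.
    A = design(theta)
    post_prec = prior_prec + A.T @ A / sigma ** 2
    post_cov = np.linalg.inv(post_prec)
    post_mean = post_cov @ (A.T @ y / sigma ** 2)
    x = rng.multivariate_normal(post_mean, post_cov)

    # (2) theta | x, y is non-Gaussian: a single Metropolis random-walk step (hence "hybrid").
    proposal = theta + 0.05 * rng.normal()
    if np.log(rng.uniform()) < log_lik(proposal, x) - log_lik(theta, x):
        theta = proposal
    theta_samples.append(theta)

burned = np.array(theta_samples[1000:])
print(f"true angle {theta_true:.3f}, posterior mean {burned.mean():.3f} +/- {burned.std():.3f}")
```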
|
FPGAs are now used in public clouds to accelerate a wide range of
applications, including many that operate on sensitive data such as financial
and medical records. We present ShEF, a trusted execution environment (TEE) for
cloud-based reconfigurable accelerators. ShEF is independent from CPU-based
TEEs and allows secure execution under a threat model where the adversary can
control all software running on the CPU connected to the FPGA, has physical
access to the FPGA, and can compromise the FPGA interface logic of the cloud
provider. ShEF provides a secure boot and remote attestation process that
relies solely on existing FPGA mechanisms for root of trust. It also includes a
Shield component that provides secure access to data while the accelerator is
in use. The Shield is highly customizable and extensible, allowing users to
craft a bespoke security solution that fits their accelerator's memory access
patterns, bandwidth, and security requirements at minimum performance and area
overheads. We describe a prototype implementation of ShEF for existing cloud
FPGAs and measure the performance benefits of customizable security using five
accelerator designs.
|
The dual-path RNN (DPRNN) was proposed to more effectively model extremely
long sequences for speech separation in the time domain. By splitting long
sequences into smaller chunks and applying intra-chunk and inter-chunk RNNs, the
DPRNN reached promising performance in speech separation with a limited model
size. In this paper, we combine the DPRNN module with Convolution Recurrent
Network (CRN) and design a model called Dual-Path Convolution Recurrent Network
(DPCRN) for speech enhancement in the time-frequency domain. We replace the
RNNs in the CRN with DPRNN modules, where the intra-chunk RNNs are used to
model the spectrum pattern in a single frame and the inter-chunk RNNs are used
to model the dependence between consecutive frames. With only 0.8M parameters,
the submitted DPCRN model achieves an overall mean opinion score (MOS) of 3.57
in the wide band scenario track of the Interspeech 2021 Deep Noise Suppression
(DNS) challenge. Evaluations on some other test sets also show the efficacy of
our model.
|
Quantitative lung measures derived from computed tomography (CT) have been
demonstrated to improve prognostication in coronavirus disease (COVID-19)
patients, but are not part of the clinical routine since required manual
segmentation of lung lesions is prohibitively time-consuming. We propose a new
fully automated deep learning framework for rapid quantification and
differentiation between lung lesions in COVID-19 pneumonia from both contrast
and non-contrast CT images using convolutional Long Short-Term Memory
(ConvLSTM) networks. Utilizing the expert annotations, model training was
performed 5 times with separate hold-out sets using 5-fold cross-validation to
segment ground-glass opacity and high opacity (including consolidation and
pleural effusion). The performance of the method was evaluated on CT data sets
from 197 patients with positive reverse transcription polymerase chain reaction
test result for SARS-CoV-2. Strong agreement between expert manual and automatic segmentation was obtained for lung lesions, with a Dice score coefficient of 0.876 $\pm$ 0.005 and excellent correlations of 0.978 and 0.981 for ground-glass opacity and high opacity volumes. In the external validation set of 67 patients, the Dice score coefficient was 0.767 $\pm$ 0.009, with excellent correlations of 0.989 and 0.996 for ground-glass opacity and high opacity volumes. Computations for a CT scan comprising 120 slices were
performed under 2 seconds on a personal computer equipped with NVIDIA Titan RTX
graphics processing unit. Therefore, our deep learning-based method allows rapid, fully automated quantitative measurement of pneumonia burden from CT and may generate results with an accuracy similar to that of expert readers.
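For reference, the Dice score coefficient reported above is computed from a pair of binary masks as in the short sketch below (a standard definition, not code from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice score between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Example: two overlapping toy lesion masks.
a = np.zeros((64, 64)); a[10:30, 10:30] = 1
b = np.zeros((64, 64)); b[15:35, 15:35] = 1
print(f"Dice = {dice_coefficient(a, b):.3f}")
```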
|
DNA methylation is a well-studied genetic modification that regulates gene
transcription in eukaryotes. Its alterations have been recognized as a
significant component of cancer development. In this study, we use the DNA
methylation 450k data from The Cancer Genome Atlas to evaluate the efficacy of
DNA methylation data on cancer classification for 30 cancer types. We propose a
new method for gene selection in high-dimensional data (over 450 thousand features). Variance filtering is first introduced for dimension reduction, and recursive feature elimination (RFE) is then used for feature selection. We address the problem of selecting a small subset of genes from a large number of methylated sites, and our parsimonious model is demonstrated to be efficient, achieving an accuracy over 91%, outperforming other studies which use DNA micro-arrays and RNA-seq data. The performance of 20 models, which are based on 4 estimators
(Random Forest, Decision Tree, Extra Tree and Support Vector Machine) and 5
classifiers (k-Nearest Neighbours, Support Vector Machine, XGBoost, LightGBM and Multi-Layer Perceptron), is compared, and the robustness of the RFE algorithm is examined. Results suggest that the combined model of an Extra Tree estimator plus a CatBoost classifier offers the best performance in cancer identification, with an overall validation accuracy of 91%, 92.3%, 93.3% and 93.5% for 20, 30, 40 and
50 features, respectively. The biological functions in cancer development of the 50 selected genes are also explored through enrichment analysis, and the results show that 12 out of 16 of our top features have already been identified as cancer-specific; we also propose some more genes to be tested in future studies. Therefore, our method may be utilized as an auxiliary diagnostic
|
We investigate a one-dimensional quantum emitter chain where transport of
excitations and correlations takes place via nearest neighbor, dipole-dipole
interactions. In the presence of collective radiative emission, we show that a
phase imprinting wavepacket initialization procedure can lead to subradiant
transport and can preserve quantum correlations. In the context of cavity
mediated transport, where emitters are coupled to a common delocalized optical
mode, we analyze the effect of frequency disorder and nonidentical
photon-emitter couplings on excitation transport.
|
We associate all small subgraph counting problems with a systematic graph
encoding/representation system which makes coherent use of graphlet
structures. The system can serve as a unified foundation for studying and
connecting many important graph problems in theory and practice. We describe
topological relations among graphlets (graph elements) in rigorous mathematics
language and from the perspective of graph encoding. We uncover, characterize
and utilize algebraic and numerical relations in graphlet counts/frequencies.
We present a novel algorithm for efficiently counting small subgraphs as a
practical product of our theoretical findings.
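As a concrete instance of a numerical relation exploited when counting small subgraphs, the number of triangles in a simple undirected graph follows from powers of its adjacency matrix; the sketch below illustrates this standard identity (an example of the general idea, not the paper's algorithm):

```python
import numpy as np

def triangle_count(adj: np.ndarray) -> int:
    """Number of triangles in a simple undirected graph: trace(A^3) / 6,
    since each triangle contributes six closed walks of length three."""
    a = adj.astype(np.int64)
    return int(np.trace(a @ a @ a) // 6)

# Example: a 4-clique contains C(4, 3) = 4 triangles.
k4 = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)
print(triangle_count(k4))  # -> 4
```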
|
We propose a framework to mine API usage scenarios from Stack Overflow. Each usage scenario consists of a code example, the task description, and the reactions of
developers towards the code example. First, we present an algorithm to
automatically link a code example in a forum post to an API mentioned in the
textual contents of the forum post. Second, we generate a natural language
description of the task by summarizing the discussions around the code example.
Third, we automatically associate developers' reactions (i.e., positive and
negative opinions) towards the code example to offer information about code
quality. We evaluate the algorithms using three benchmarks.
|
We study the problem of user-scheduling and resource allocation in
distributed multi-user, multiple-input multiple-output (MIMO) networks
implementing user-centric clustering and non-coherent transmission. We
formulate a weighted sum-rate maximization problem which can provide user
proportional fairness. Since, in this setup, users can be served by many transmitters, user scheduling is particularly difficult. To solve this issue,
we use block coordinate descent, fractional programming, and compressive
sensing to construct an algorithm that performs user-scheduling and
beamforming. Our results show that the proposed framework provides an 8- to
10-fold gain in the long-term user spectral efficiency compared to benchmark
schemes such as round-robin scheduling. Furthermore, we quantify the
performance loss due to imperfect channel state information and pilot training
overhead using a defined area-based pilot-reuse factor.
|
In this paper, we study the graph of homothety classes of stable free
lattices in a two-dimensional representation over a local UFD. This generalizes
a classical result of the case where the base ring is a discrete valuation ring
due to Serre. As applications, we consider the case when the representation
comes from a residually reducible Hida family and we study the control theorem
of Selmer groups. These results enable us to state precisely the main conjecture in the residually reducible case, as we will remark in Section 4.
|
We show that assuming the standard conjectures, for any smooth projective
variety $X$ of dimension $n$ over an algebraically closed field, there is a
constant $C>0$ such that for any positive rational number $r$ and for any
polarized endomorphism $f$ of $X$, we have \[ \| G_r \circ f \| \le C \,
\mathrm{deg}(G_r \circ f), \] where $G_r$ is a correspondence of $X$ so that
for each $0\le i\le 2n$ its pullback action on the $i$-th Weil cohomology group
is the multiplication-by-$r^i$ map. This inequality has been conjectured by the
authors to hold in a more general setting, which - in the special case of
polarized endomorphisms - confirms the validity of the analog of a well-known
result by Serre in the K\"ahler setting.
|
As we gain access to a greater depth and range of health-related information
about individuals, three questions arise: (1) Can we build better models to
predict individual-level risk of ill health? (2) How much data do we need to
effectively predict ill health? (3) Are new methods required to process the
added complexity that new forms of data bring? The aim of the study is to apply
a machine learning approach to identify the relative contribution of personal,
social, health-related, biomarker and genetic data as predictors of future
health in individuals. Using longitudinal data from 6830 individuals in the UK
from Understanding Society (2010-12 to 2015-17), the study compares the
predictive performance of five types of measures: personal (e.g. age, sex),
social (e.g. occupation, education), health-related (e.g. body weight, grip
strength), biomarker (e.g. cholesterol, hormones) and genetic single nucleotide
polymorphisms (SNPs). The predicted outcome variable was limiting long-term
illness one and five years from baseline. Two machine learning approaches were
used to build predictive models: deep learning via neural networks and XGBoost
(gradient boosting decision trees). Model fit was compared to traditional
logistic regression models. Results found that health-related measures had the
strongest prediction of future health status, with genetic data performing
poorly. Machine learning models offered only marginal improvements in model
accuracy when compared to logistic regression models, but performed well
on other metrics (e.g., neural networks were best on AUC and XGBoost on
precision). The study suggests that increasing the complexity of data and methods
does not necessarily translate to improved understanding of the determinants of
health or performance of predictive models of ill health.
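
As a hedged illustration of the kind of comparison described above (not the study's cohort, features, or tuned models), the sketch below fits a logistic regression, a gradient-boosted tree model (a stand-in for XGBoost), and a small neural network on synthetic tabular data and reports AUC and precision; all data and settings are assumptions.

```python
# Illustrative sketch only: synthetic data, not the Understanding Society cohort,
# and a sklearn gradient-boosting model as a stand-in for XGBoost.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, precision_score

X, y = make_classification(n_samples=5000, n_features=40, n_informative=10,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "boosted_trees": GradientBoostingClassifier(random_state=0),
    "neural_net": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}
for name, model in models.items():
    proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(name,
          "AUC=%.3f" % roc_auc_score(y_te, proba),
          "precision=%.3f" % precision_score(y_te, proba > 0.5, zero_division=0))
```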
|
Despite extensive research efforts in recent years, computational
modeling of argumentation remains one of the most challenging areas of natural
language processing (NLP). This is primarily due to the inherent complexity of the
cognitive processes behind human argumentation, which commonly combine and
integrate a plethora of different types of knowledge, requiring capabilities from
computational models that are far beyond what is needed for most
other (i.e., simpler) natural language understanding tasks. The existing large
body of work on mining, assessing, generating, and reasoning over arguments
largely acknowledges that much more common sense and world knowledge needs to
be integrated into computational models that would accurately model
argumentation. A systematic overview and organization of the types of knowledge
introduced in existing models of computational argumentation (CA) is, however,
missing and this hinders targeted progress in the field. In this survey paper,
we fill this gap by (1) proposing a pyramid of types of knowledge required in
CA tasks, (2) analysing the state of the art with respect to the reliance and
exploitation of these types of knowledge, for each of the four main research
areas in CA, and (3) outlining and discussing directions for future research
efforts in CA.
|
Point clouds, being the simple and compact representation of surface geometry
of 3D objects, have gained increasing popularity with the evolution of deep
learning networks for classification and segmentation tasks. Unlike for humans,
teaching a machine to analyze the segments of an object is a challenging task,
yet one that is essential in various machine vision applications. In this paper, we
address the problem of segmentation and labelling of 3D point clouds by
proposing an inception-based deep network architecture called PIG-Net that
effectively characterizes the local and global geometric details of the point
clouds. In PIG-Net, the local features are extracted from the transformed input
points using the proposed inception layers and then aligned by feature
transform. These local features are aggregated using the global average pooling
layer to obtain the global features. Finally, the concatenated local and
global features are fed to convolution layers for segmenting the 3D point clouds.
We perform an exhaustive experimental analysis of the PIG-Net architecture on
two state-of-the-art datasets, namely, ShapeNet [1] and PartNet [2]. We
evaluate the effectiveness of our network by performing an ablation study.
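
A minimal, hedged sketch of the pipeline described above (per-point local features from a simplified inception-style block, global average pooling, and concatenation for per-point segmentation); the layer sizes and block structure are illustrative assumptions, not the published PIG-Net architecture.

```python
# Minimal sketch, not the published PIG-Net: a simplified inception-style block,
# global average pooling, and concatenation for per-point segmentation.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Parallel 1x1 "branches" over per-point features (sizes are illustrative).
        self.b1 = nn.Conv1d(in_ch, out_ch // 2, 1)
        self.b2 = nn.Conv1d(in_ch, out_ch // 2, 1)
        self.act = nn.ReLU()

    def forward(self, x):                            # x: (B, C, N) points
        return self.act(torch.cat([self.b1(x), self.b2(x)], dim=1))

class ToySegNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.local = InceptionBlock(3, 128)          # local per-point features
        self.head = nn.Conv1d(128 + 128, num_classes, 1)

    def forward(self, pts):                          # pts: (B, 3, N)
        local = self.local(pts)                      # (B, 128, N)
        glob = local.mean(dim=2, keepdim=True)       # global average pooling
        glob = glob.expand(-1, -1, pts.shape[2])     # broadcast to every point
        return self.head(torch.cat([local, glob], dim=1))  # per-point logits

logits = ToySegNet()(torch.randn(2, 3, 1024))        # shape: (2, num_classes, 1024)
```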
|
A formula of the $D$-$D$ correlation function is derived. The deuterons are
treated either as elementary particles or as neutron-proton bound states. In
the first case the deuterons are directly emitted from a source and in the
second one the deuteron formation is a final-state process simultaneous with a
generation of the $D$-$D$ correlation. The source radius of deuterons formed due to
final-state interactions is bigger by a factor of $\sqrt{2}$ than that of
directly emitted deuterons. To check how sizable the effect is, we compute the
$D$-$D$ correlation function taking into account the Bose-Einstein statistics of
deuterons, the $s$-wave scattering due to the strong interaction and the Coulomb
repulsion. The correlation function is shown to be sensitive to the source
radius for sources which are sufficiently small with RMS radii smaller than 3.5
fm. Otherwise the correlation function is dominated by the Coulomb repulsion
and weakly depends on the source radius. Measurements which can make use of our
finding are discussed.
|
Recently developed deep neural networks achieved state-of-the-art results in
the subject of 6D object pose estimation for robot manipulation. However, those
supervised deep learning methods require expensive annotated training data.
Current methods for reducing those costs frequently use synthetic data from
simulations, but rely on expert knowledge and suffer from the "domain gap" when
shifting to the real world. Here, we present a proof of concept for a novel
approach of autonomously generating annotated training data for 6D object pose
estimation. This approach is designed for learning new objects in operational
environments while requiring little interaction and no expertise on the part of
the user. We evaluate our autonomous data generation approach in two grasping
experiments, where we achieve a grasping success rate similar to that of related
work using a non-autonomously generated data set.
|
We describe the implementation of a three-dimensional Paul ion trap
fabricated from a stack of precision-machined silica glass wafers, which
incorporates a pair of junctions for 2-dimensional ion transport. The trap has
142 dedicated electrodes which can be used to define multiple potential wells
in which strings of ions can be held. By supplying time-varying potentials,
this also allows for transport and re-configuration of ion strings. We describe
the design, simulation, fabrication and packaging of the trap, including
explorations of different parameter regimes and possible optimizations and
design choices. We give results of initial testing of the trap, including
measurements of heating rates and junction transport.
|
Laterally large (~3 micrometers), atomically-thin two-dimensional (2D)
Bi2O2CO3 nanosheets (2D bismuth oxycarbonate, 2D bismutite) are fabricated via
sonochemically-assisted template-free synthesis. Key to the synthesis of the
freestanding, laterally large 2D Bi2O2CO3 nanosheets from bulk Bi powder is the
choice of suspension medium, controlled reaction temperatures and several hours
of processing time. Lateral sizes of 2D Bi2O2CO3 can be controlled between
micrometer-sized nanosheets and tens of nm sized nanoflakes solely based on the
choice of suspension medium. The 2D Bi2O2CO3 nanosheets/-flakes introduced here
are then hybridized with TiO2 nanoparticles by a simple mix-and-match approach
for testing in suspension-type photocatalytic hydrogen production via water
splitting. This introduces the 2D Bi2O2CO3 with TiO2 as a promising
noble-metal-free co-catalyst for photocatalytic hydrogen evolution. Our results
enrich the fabrication toolbox of emerging 2D pnictogen oxycarbonates towards
large 2D nanosheets and demonstrate the promising potential of 2D Bi2O2CO3 as
an advantageous (co-)catalyst for hydrogen evolution in photocatalytic water
splitting.
|
A blocking set in a graph $G$ is a subset of vertices that intersects every
maximum independent set of $G$. Let ${\sf mmbs}(G)$ be the size of a maximum
(inclusion-wise) minimal blocking set of $G$. This parameter has recently
played an important role in the kernelization of Vertex Cover parameterized by
the distance to a graph class ${\cal F}$. Indeed, it turns out that the
existence of a polynomial kernel for this problem is closely related to the
property that ${\sf mmbs}({\cal F})=\sup_{G \in {\cal F}}{\sf mmbs}(G)$ is
bounded by a constant, and thus several recent results focused on determining
${\sf mmbs}({\cal F})$ for different classes ${\cal F}$. We consider the
parameterized complexity of computing ${\sf mmbs}$ under various
parameterizations, such as the size of a maximum independent set of the input
graph and the natural parameter. We provide a panorama of the complexity of
computing both ${\sf mmbs}$ and ${\sf mmhs}$, which is the size of a maximum
minimal hitting set of a hypergraph, a closely related parameter. Finally, we
consider the problem of computing ${\sf mmbs}$ parameterized by treewidth,
especially relevant in the context of kernelization. Given the "counting"
nature of ${\sf mmbs}$, it does not seem to be expressible in monadic
second-order logic, hence its tractability does not follow from Courcelle's
theorem. Our main technical contribution is a fixed-parameter tractable
algorithm for this problem.
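
To make the definitions concrete, here is a brute-force sketch (exponential time, tiny graphs only) that enumerates maximum independent sets and then searches for a maximum inclusion-wise minimal blocking set; it is purely illustrative and unrelated to the fixed-parameter algorithm contributed by the paper.

```python
# Brute-force illustration of mmbs on tiny graphs (exponential time; not the paper's algorithm).
from itertools import combinations

def is_independent(S, edges):
    return all(not (u in S and v in S) for u, v in edges)

def maximum_independent_sets(n, edges):
    # Largest k for which an independent set exists; return all sets of that size.
    for k in range(n, -1, -1):
        sets = [set(S) for S in combinations(range(n), k) if is_independent(S, edges)]
        if sets:
            return sets

def is_blocking(B, mis_list):
    return all(B & I for I in mis_list)   # intersects every maximum independent set

def mmbs(n, edges):
    mis_list = maximum_independent_sets(n, edges)
    best = 0
    for k in range(n + 1):
        for B in map(set, combinations(range(n), k)):
            minimal = all(not is_blocking(B - {v}, mis_list) for v in B)
            if is_blocking(B, mis_list) and minimal:
                best = max(best, len(B))
    return best

# The 4-cycle 0-1-2-3-0 has maximum independent sets {0,2} and {1,3}; mmbs = 2.
print(mmbs(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```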
|
In this work we propose a batch Bayesian optimization method for
combinatorial problems on permutations, which is well suited for expensive cost
functions on permutations. We introduce LAW, a new efficient batch acquisition
method based on the determinantal point process, using an acquisition weighted
kernel. Relying on multiple parallel evaluations, LAW accelerates the search
for the optimal permutation. We provide a regret analysis for our method to
gain insight in its theoretical properties. We then apply the framework to
permutation problems, which have so far received little attention in the
Bayesian Optimization literature, despite their practical importance. We call
this method LAW2ORDER. We evaluate the method on several standard combinatorial
problems involving permutations such as quadratic assignment, flowshop
scheduling and the traveling salesman, as well as on a structure learning task.
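
As a rough, hedged sketch of the core idea of an acquisition-weighted kernel used for diverse batch selection, the code below greedily selects a batch by maximizing the log-determinant of the weighted kernel; the kernel, acquisition values, and greedy heuristic are generic placeholders, not the LAW2ORDER implementation.

```python
# Generic sketch: greedy log-det selection under an acquisition-weighted kernel.
# Not the LAW2ORDER implementation; kernel and acquisition values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))                    # candidate points (feature vectors here)
acq = rng.uniform(0.5, 2.0, size=50)            # acquisition values a(x)

# Weighted DPP kernel: L_ij = a(x_i) * k(x_i, x_j) * a(x_j)
sqdist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sqdist)
L = acq[:, None] * K * acq[None, :]

def greedy_batch(L, batch_size):
    selected = []
    for _ in range(batch_size):
        best_i, best_val = None, -np.inf
        for i in range(L.shape[0]):
            if i in selected:
                continue
            idx = selected + [i]
            val = np.linalg.slogdet(L[np.ix_(idx, idx)])[1]   # log det of the submatrix
            if val > best_val:
                best_i, best_val = i, val
        selected.append(best_i)
    return selected

print(greedy_batch(L, batch_size=4))            # indices of a diverse, high-acquisition batch
```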
|
We investigate the symplectic geometric and differential geometric aspects of
the moduli space of connections on a compact Riemann surface $X$. Fix a theta
characteristic $K^{1/2}_X$ on $X$; it defines a theta divisor on the moduli
space ${\mathcal M}$ of stable vector bundles on $X$ of rank $r$ degree zero.
Given a vector bundle $E \in {\mathcal M}$ lying outside the theta divisor, we
construct a natural holomorphic connection on $E$ that depends holomorphically
on $E$. Using this holomorphic connection, we construct a canonical holomorphic
isomorphism between the following two: \begin{enumerate} \item the moduli space
$\mathcal C$ of pairs $(E, D)$, where $E\in {\mathcal M}$ and $D$ is a
holomorphic connection on $E$, and
\item the space ${\rm Conn}(\Theta)$ given by the sheaf of holomorphic
connections on the line bundle on $\mathcal M$ associated to the theta divisor.
\end{enumerate} The above isomorphism between $\mathcal C$ and ${\rm
Conn}(\Theta)$ is symplectic structure preserving, and it moves holomorphically
as $X$ runs over a holomorphic family of Riemann surfaces.
|
Despite inextricable ties between race and language, little work has
considered race in NLP research and development. In this work, we survey 79
papers from the ACL anthology that mention race. These papers reveal various
types of race-related bias in all stages of NLP model development, highlighting
the need for proactive consideration of how NLP systems can uphold racial
hierarchies. However, persistent gaps in research on race and NLP remain: race
has been siloed as a niche topic and remains ignored in many NLP tasks; most
work operationalizes race as a fixed single-dimensional variable with a
ground-truth label, which risks reinforcing differences produced by historical
racism; and the voices of historically marginalized people are nearly absent in
NLP literature. By identifying where and how NLP literature has and has not
considered race, especially in comparison to related fields, our work calls for
inclusion and racial justice in NLP research practices.
|
The Collatz dynamic is known to generate a complex quiver of sequences over
natural numbers whose inflation propensity remains so unpredictable that it could be
used to generate reliable proof-of-work algorithms for the cryptocurrency
industry. Here we establish an ad hoc equivalent of modular arithmetic for
Collatz sequences to automatically demonstrate the convergence of infinite
quivers of numbers, based on five arithmetic rules that we prove apply to the entire
Collatz dynamic and which we further simulate to gain insight into their graph
geometry and computational properties. We then formally demonstrate that these rules
define an automaton playing a Hydra game on the graph of undecided
numbers, which we also prove is embedded in 24N-7, proving that in ZFC the Collatz
conjecture is true, before giving a promising direction to also prove it in
Peano arithmetic.
|
We analyze the problem of active covering, where the learner is given an
unlabeled dataset and can sequentially query the labels of examples. The objective is
to query the labels of all of the positive examples using the fewest total label
queries. We show under standard non-parametric assumptions that a classical
support estimator can be repurposed as an offline algorithm attaining an excess
query cost of $\widetilde{\Theta}(n^{D/(D+1)})$ compared to the optimal
learner, where $n$ is the number of datapoints and $D$ is the dimension. We
then provide a simple active learning method that attains an improved excess
query cost of $\widetilde{O}(n^{(D-1)/D})$. Furthermore, the proposed
algorithms only require access to the positive labeled examples, which in
certain settings provides additional computational and privacy benefits.
Finally, we show that the active learning method consistently outperforms
offline methods as well as a variety of baselines on a wide range of benchmark
image-based datasets.
|
We consider a network of autonomous agents whose outputs are actions in a
game with coupled constraints. In such network scenarios, agents seek to
minimize coupled cost functions using distributed information while satisfying
the coupled constraints. Current methods consider the small class of
multi-integrator agents using primal-dual methods. These methods can only
ensure constraint satisfaction in steady-state. In contrast, we propose an
inexact penalty method using a barrier function for nonlinear agents with
equilibrium-independent passive dynamics. We show that these dynamics converge
to an epsilon-GNE while satisfying the constraints for all time, not only in
steady-state. We develop these dynamics in both the full-information and
partial-information settings. In the partial-information setting, dynamic
estimates of the others' actions are used to make decisions and are updated
through local communication. Applications to optical networks and velocity
synchronization of flexible robots are provided.
|
Two dynamical systems are topologically equivalent when their phase-portraits
can be morphed into each other by a homeomorphic coordinate transformation on
the state space. The induced equivalence classes capture qualitative properties
such as stability or the oscillatory nature of the state trajectories, for
example. In this paper we develop a method to learn the topological class of an
unknown stable system from a single trajectory of finitely many state
observations. Using a moderate deviations principle for the least squares
estimator of the unknown system matrix $\theta$, we prove that the probability
of misclassification decays exponentially with the number of observations at a
rate that is proportional to the square of the smallest singular value of
$\theta$.
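
For concreteness, a hedged numpy sketch of the least squares estimator of the system matrix from a single trajectory, assuming a discrete-time linear system $x_{t+1}=\theta x_t + w_t$; the dimensions, noise level, and the use of eigenvalues to read off qualitative behaviour are illustrative assumptions, not the paper's procedure.

```python
# Hedged sketch: ordinary least squares estimate of theta from one trajectory of a
# (discrete-time, linear) system x_{t+1} = theta @ x_t + noise. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
theta = np.array([[0.8, -0.3],
                  [0.2,  0.7]])           # unknown stable system matrix (assumed)
T, d = 500, 2
X = np.zeros((T + 1, d))
X[0] = rng.normal(size=d)
for t in range(T):
    X[t + 1] = theta @ X[t] + 0.1 * rng.normal(size=d)

# OLS: theta_hat = (sum x_{t+1} x_t^T) (sum x_t x_t^T)^{-1}
A = X[1:].T @ X[:-1]
B = X[:-1].T @ X[:-1]
theta_hat = A @ np.linalg.inv(B)

print("estimation error:", np.linalg.norm(theta_hat - theta))
print("eigenvalues (used to read off qualitative behaviour):", np.linalg.eigvals(theta_hat))
```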
|
A recent case study from AWS by Chong et al. proposes an effective
methodology for Bounded Model Checking in industry. In this paper, we report on
a follow up case study that explores the methodology from the perspective of
three research questions: (a) can proof artifacts be used across verification
tools; (b) are there bugs in verified code; and (c) can specifications be
improved. To study these questions, we port the verification tasks for
the $\texttt{aws-c-common}$ library to SEAHORN and KLEE. We show the benefits of
using compiler semantics and cross-checking specifications with different
verification techniques, and call for standardizing proof library extensions to
increase specification reuse. The verification tasks discussed are publicly
available online.
|
Despite the significant advances in deep learning over the past decade, a
major challenge that limits the wide-spread adoption of deep learning has been
their fragility to adversarial attacks. This sensitivity to making erroneous
predictions in the presence of adversarially perturbed data makes deep neural
networks difficult to adopt for certain real-world, mission-critical
applications. While much of the research focus has revolved around adversarial
example creation and adversarial hardening, the area of performance measures
for assessing adversarial robustness is not well explored. Motivated by this,
this study presents the concept of residual error, a new performance measure
that not only assesses the adversarial robustness of a deep neural network at
the individual sample level, but can also be used to differentiate between
adversarial and non-adversarial examples to facilitate adversarial example
detection. Furthermore, we introduce a hybrid model for approximating the
residual error in a tractable manner. Experimental results using the case of
image classification demonstrate the effectiveness and efficacy of the
proposed residual error metric for assessing several well-known deep neural
network architectures. These results thus illustrate that the proposed measure
could be a useful tool for not only assessing the robustness of deep neural
networks used in mission-critical scenarios, but also in the design of
adversarially robust models.
|
We present QuantumSync, the first quantum algorithm for solving a
synchronization problem in the context of computer vision. In particular, we
focus on permutation synchronization which involves solving a non-convex
optimization problem in discrete variables. We start by formulating
synchronization into a quadratic unconstrained binary optimization problem
(QUBO). While such formulation respects the binary nature of the problem,
ensuring that the result is a set of permutations requires extra care. Hence,
we: (i) show how to insert permutation constraints into a QUBO problem and (ii)
solve the constrained QUBO problem on the current generation of the adiabatic
quantum computers D-Wave. Thanks to the quantum annealing, we guarantee global
optimality with high probability while sampling the energy landscape to yield
confidence estimates. Our proof-of-concepts realization on the adiabatic D-Wave
computer demonstrates that quantum machines offer a promising way to solve the
prevalent yet difficult synchronization problems.
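
As a hedged sketch of step (i), the code below encodes an $n\times n$ assignment matrix into binary variables and adds quadratic penalty terms forcing each row and column to sum to one, so that feasible minimizers are permutation matrices; the penalty weight, cost terms, and the brute-force solver standing in for the D-Wave annealer are all assumptions.

```python
# Illustrative QUBO for permutation constraints (not the QuantumSync code).
# Variables x[i, j] = 1 iff element i is mapped to position j; penalties enforce that
# every row and every column of the n x n assignment matrix sums to one.
import itertools
import numpy as np

n = 3
lam = 5.0                                                 # penalty weight (assumed large enough)
cost = np.random.default_rng(2).uniform(size=(n, n))      # placeholder linear costs

N = n * n
Q = np.zeros((N, N))
idx = lambda i, j: i * n + j

# Linear objective on the diagonal of the QUBO matrix (x^2 = x for binary x).
for i in range(n):
    for j in range(n):
        Q[idx(i, j), idx(i, j)] += cost[i, j]

# Penalty lam * (sum_j x_ij - 1)^2 for each row i, and the analogue for each column.
def add_one_hot_penalty(variables):
    for a in variables:
        Q[a, a] -= lam                                    # from expanding the square
    for a, b in itertools.combinations(variables, 2):
        Q[a, b] += lam
        Q[b, a] += lam

for i in range(n):
    add_one_hot_penalty([idx(i, j) for j in range(n)])
for j in range(n):
    add_one_hot_penalty([idx(i, j) for i in range(n)])

# Brute force over all binary vectors (stand-in for the quantum annealer).
best_energy, best_x = np.inf, None
for bits in itertools.product([0, 1], repeat=N):
    x = np.array(bits)
    e = x @ Q @ x
    if e < best_energy:
        best_energy, best_x = e, x
print(best_x.reshape(n, n))                               # a minimum-cost permutation matrix
```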
|
The present paper gives new elements for the light variations of the EA
variable star AY Peg, on the basis of new times of minimum obtained visually
and with CCD by members of GEOS between 1985 and 2018, and the available ASAS-SN
data set. On the one hand, we can establish a new ephemeris with a possible
quadratic term; on the other hand, the primary minimum
appears much deeper than the one given in the GCVS. AY Peg varies between 13.1 and
15.6 magnitude at its primary eclipse.
|
Diagnostic classification models (DCMs) offer statistical tools to inspect
the fine-grained attributes of respondents' strengths and weaknesses. However,
the diagnosis accuracy deteriorates when misspecification occurs in the
predefined item-attribute relationship, which is encoded into a Q-matrix. To
prevent such misspecification, methodologists have recently developed several
Bayesian Q-matrix estimation methods for greater estimation flexibility.
However, these methods become infeasible in the case of large-scale assessments
with a large number of attributes and items. In this study, we focused on the
deterministic inputs, noisy ``and'' gate (DINA) model and proposed a new
framework for the Q-matrix estimation to find the Q-matrix with the maximum
marginal likelihood. Based on this framework, we developed a scalable
estimation algorithm for the DINA Q-matrix by constructing an iteration
algorithm that utilizes stochastic optimization and variational inference. The
simulation and empirical studies reveal that the proposed method achieves
high-speed computation, good accuracy, and robustness to potential
misspecifications, such as the choice of initial values and hyperparameter settings.
Thus, the proposed method can be a useful tool for estimating a Q-matrix in
large-scale settings.
|
Novae are some of the most commonly detected optical transients and have the
potential to provide valuable information about binary evolution. Binary
population synthesis codes have emerged as the most effective tool for
modelling populations of binary systems, but such codes have traditionally
employed greatly simplified nova physics, precluding detailed study. In this
work, we implement a model treating H and He novae as individual events into
the binary population synthesis code \binaryc. This treatment of novae
represents a significant improvement on the `averaging' treatment currently
employed in modern population synthesis codes. We discuss the evolutionary
pathways leading to these phenomena and present nova event rates and
distributions of several important physical parameters. Most novae are produced
on massive white dwarfs, with approximately 70 and 55 per cent of nova events
occurring on O/Ne white dwarfs for H and He novae respectively. Only 15 per
cent of H-nova systems undergo a common-envelope phase, but these systems are
responsible for the majority of H nova events. All He-accreting He-nova systems
are considered post-common-envelope systems, and almost all will merge with
their donor star in a gravitational-wave driven inspiral. We estimate the
current annual rate of novae in M31 (Andromeda) to be approximately $41 \pm 4$
for H novae, underpredicting the current observational estimate of
$65^{+15}_{-16}$, and $0.14\pm0.015$ for He novae. When varying common-envelope
parameters, the H nova rate varies between 20 and 80 events per year.
|
Here we introduce a new reconstruction technique for two-dimensional Bragg
Scattering Tomography (BST), based on the Radon transform models of [arXiv
preprint, arXiv:2004.10961 (2020)]. Our method uses a combination of ideas from
multibang control and microlocal analysis to construct an objective function
which can regularize the BST artifacts; specifically the boundary artifacts due
to sharp cutoff in sinogram space (as observed in [arXiv preprint,
arXiv:2007.00208 (2020)]), and artifacts arising from approximations made in
constructing the model used for inversion. We then test our algorithm in a
variety of Monte Carlo (MC) simulated examples of practical interest in airport
baggage screening and threat detection. The data used in our studies is
generated with a novel Monte-Carlo code presented here. The model, which is
available from the authors upon request, captures both the Bragg scatter
effects described by BST as well as beam attenuation and Compton scatter.
|
The automated analysis of medical images is currently limited by technical
and biological noise and bias. The same source tissue can be represented by
vastly different images if the image acquisition or processing protocols vary.
For an image analysis pipeline, it is crucial to compensate for such biases to
avoid misinterpretations. Here, we evaluate, compare, and improve existing
generative model architectures to overcome domain shifts for immunofluorescence
(IF) and Hematoxylin and Eosin (H&E) stained microscopy images. To determine
the performance of the generative models, the original and transformed images
were segmented or classified by deep neural networks that were trained only on
images of the target bias. In the scope of our analysis, U-Net cycleGANs
trained with an additional identity and an MS-SSIM-based loss and Fixed-Point
GANs trained with an additional structure loss led to the best results for the
IF and H&E stained samples, respectively. Adapting the bias of the samples
significantly improved the pixel-level segmentation for human kidney glomeruli
and podocytes and improved the classification accuracy for human prostate
biopsies by up to 14%.
|
We prove a quantitative equidistribution theorem for the eigenfunctions of a
Schr\"odinger operator $-\Delta+V$ on a rectangular torus $T$ for $V\in L^2(T)$. A
key application of our theorem is a quantitative equidistribution theorem for
the eigenfunctions of a Schr\"odinger operator whose potential models
disordered systems with $N$ obstacles. We prove the validity of this
equidistribution theorem in the thermodynamic limit, as $N \to\infty$, under the
assumption that a weak disorder hypothesis is satisfied.
In particular, we show that this scale-invariant equidistribution theorem
holds for the eigenfunctions of random displacement models almost surely with
respect to the joint density of the random positions of the potentials. In the
case of a general random Schr\"odinger operator, where disorder may be strong,
we deduce an equidistribution theorem on certain length scales, which
establishes a lower bound for the Anderson localization length as a function of
the energy, coupling parameter, density of scatterers and the $L^2$ norm of the
potential.
|
The heavy fermion superconductor URu$_2$Si$_2$ is a candidate for chiral,
time-reversal symmetry-breaking superconductivity with a nodal gap structure.
Here, we microscopically visualized superconductivity and spatially
inhomogeneous ferromagnetism in URu$_2$Si$_2$. We observed linear-$T$
superfluid density, consistent with d-wave pairing symmetries including chiral
d-wave, but did not observe the spontaneous magnetization expected for chiral
d-wave. Local vortex pinning potentials had either four- or two-fold rotational
symmetries with various orientations at different locations. Taken together,
these data support a nodal gap structure in URu$_2$Si$_2$ and suggest that
chirality either is not present or does not lead to detectable spontaneous
magnetization.
|
The realization of practical intelligent reflecting surface (IRS)-assisted
multi-user communication (IRS-MUC) systems critically depends on the proper
beamforming design exploiting accurate channel state information (CSI).
However, channel estimation (CE) in IRS-MUC systems requires a significantly
large training overhead due to the numerous reflection elements involved in
the IRS. In this paper, we adopt a deep learning approach to implicitly learn the
historical channel features and directly predict the IRS phase shifts for the
next time slot to maximize the average achievable sum-rate of an IRS-MUC system
taking into account the user mobility. By doing this, only a low-dimension
multiple-input single-output (MISO) CE is needed for transmit beamforming
design, thus significantly reducing the CE overhead. To this end, a
location-aware convolutional long short-term memory network (LA-CLNet) is first
developed to facilitate predictive beamforming at IRS, where the convolutional
and recurrent units are jointly adopted to exploit both the spatial and
temporal features of channels simultaneously. Given the predictive IRS phase
shift beamforming, an instantaneous CSI (ICSI)-aware fully-connected neural
network (IA-FNN) is then proposed to optimize the transmit beamforming matrix
at the access point. Simulation results demonstrate that the sum-rate
performance achieved by the proposed method approaches that of the genie-aided
scheme with the full perfect ICSI.
|
We consider linear network error correction (LNEC) coding when errors may
occur on edges of a communication network of which the topology is known. In
this paper, we first revisit and explore the framework of LNEC coding, and then
unify two well-known LNEC coding approaches. Furthermore, by developing a
graph-theoretic approach to the framework of LNEC coding, we obtain a
significantly enhanced characterization of the error correction capability of
LNEC codes in terms of the minimum distances at the sink nodes. In LNEC coding,
the minimum required field size for the existence of LNEC codes, in particular
LNEC maximum distance separable (MDS) codes which are a type of most important
optimal codes, is an open problem not only of theoretical interest but also of
practical importance, because it is closely related to the implementation of
the coding scheme in terms of computational complexity and storage requirement.
By applying the graph-theoretic approach, we obtain an improved upper bound on
the minimum required field size. The improvement over the existing results is
in general significant. The improved upper bound, which is graph-theoretic,
depends only on the network topology and requirement of the error correction
capability but not on a specific code construction. However, this bound is not
given in an explicit form. We thus develop an efficient algorithm that can
compute the bound in linear time. In developing the upper bound and the
efficient algorithm for computing this bound, various graph-theoretic concepts
are introduced. These concepts appear to be of fundamental interest in graph
theory and they may have further applications in graph theory and beyond.
|
In ground-based astronomy, starlight distorted by the atmosphere couples
poorly into single-mode waveguides but a correction by adaptive optics, even if
only partial, can boost coupling into the few-mode regime allowing the use of
photonic lanterns to convert into multiple single-mode beams. Corrected
wavefronts result in focal patterns that couple mostly with the circularly
symmetric waveguide modes. A mode-selective photonic lantern is hence proposed
to convert the multimode light into a subset of the single-mode waveguides of
the standard photonic lantern, thereby reducing the required number of outputs.
We ran simulations to show that only two out of the six waveguides of a 1x6
photonic lantern carry >95% of the coupled light to the outputs at $D/r_0 < 10$
if the wavefront is partially corrected and the photonic lantern is made
mode-selective.
|
In Statistics, log-concave density estimation is a central problem within the
field of nonparametric inference under shape constraints. Despite great
progress in recent years on the statistical theory of the canonical estimator,
namely the log-concave maximum likelihood estimator, adoption of this method
has been hampered by the complexities of the non-smooth convex optimization
problem that underpins its computation. We provide enhanced understanding of
the structural properties of this optimization problem, which motivates the
proposal of new algorithms, based on both randomized and Nesterov smoothing,
combined with an appropriate integral discretization of increasing accuracy. We
prove that these methods enjoy, both with high probability and in expectation,
a convergence rate of order $1/T$ up to logarithmic factors on the objective
function scale, where $T$ denotes the number of iterations. The benefits of our
new computational framework are demonstrated on both synthetic and real data,
and our implementation is available in a github repository \texttt{LogConcComp}
(Log-Concave Computation).
|
We establish existence, uniqueness as well as quantitative estimates for
solutions to the fractional nonlinear diffusion equation, $\partial_t u
+{\mathcal L}_{s,p} (u)=0$, where ${\mathcal L}_{s,p}=(-\Delta)_p^s$ is the
standard fractional $p$-Laplacian operator. We work in the range of exponents
$0<s<1$ and $1<p<2$, and in some sections $sp<1$. The equation is posed in the
whole space $x\in {\mathbb R}^N$. We first obtain weighted global integral
estimates that allow establishing the existence of solutions for a class of
large data that is proved to be roughly optimal. We study the class of
self-similar solutions of forward type, that we describe in detail when they
exist. We also explain what happens when possible self-similar solutions do not
exist. We establish the dichotomy positivity versus extinction for nonnegative
solutions at any given time. We analyze the conditions for extinction in finite
time.
|
An important problem in deep learning is the privacy and security of neural
networks (NNs). Both aspects have long been considered separately. To date, it
is still poorly understood how privacy enhancing training affects the
robustness of NNs. This paper experimentally evaluates the impact of training
with Differential Privacy (DP), a standard method for privacy preservation, on
model vulnerability against a broad range of adversarial attacks. The results
suggest that private models are less robust than their non-private
counterparts, and that adversarial examples transfer better among DP models
than between non-private and private ones. Furthermore, detailed analyses of DP
and non-DP models suggest significant differences between their gradients.
Additionally, this work is the first to observe that an unfavorable choice of
parameters in DP training can lead to gradient masking, and, thereby, results
in a false sense of security.
|
Biological, physical, medical, and numerical applications involving membrane
problems on different scales are numerous. We propose an extension of the
standard Turing theory to the case of two domains separated by a permeable
membrane. To this aim, we study a reaction-diffusion system with zero-flux
boundary conditions on the external boundary and Kedem-Katchalsky membrane
conditions on the inner membrane. We use the same approach as in the classical
Turing analysis but applied to membrane operators. The introduction of a
diagonalization theory for compact and self-adjoint membrane operators is
needed. Here, Turing instability is proven with the addition of new
constraints, due to the presence of membrane permeability coefficients. We
perform an explicit one-dimensional analysis of the eigenvalue problem,
combined with numerical simulations, to validate the theoretical results.
Finally, we observe the formation of discontinuous patterns in a system which
combines diffusion and dissipative membrane conditions, varying both diffusion
and membrane permeability coefficients. The case of a fast reaction-diffusion
system is also considered.
|
We use analytic calculations and time-dependent spherically-symmetric
simulations to study the properties of isothermal galactic winds driven by
cosmic-rays (CRs) streaming at the Alfv\'en velocity. The simulations produce
time-dependent flows permeated by strong shocks; we identify a new linear
instability of sound waves that sources these shocks. The shocks substantially
modify the wind dynamics, invalidating previous steady state models: the CR
pressure $p_c$ has a staircase-like structure with $dp_c/dr \simeq 0$ in most
of the volume, and the time-averaged CR energetics are in many cases better
approximated by $p_c \propto \rho^{1/2}$, rather than the canonical $p_c
\propto \rho^{2/3}$. Accounting for this change in CR energetics, we
analytically derive new expressions for the mass-loss rate, momentum flux, wind
speed, and wind kinetic power in galactic winds driven by CR streaming. We show
that streaming CRs are ineffective at directly driving cold gas out of
galaxies, though CR-driven winds in hotter ISM phases may entrain cool gas. For
the same physical conditions, diffusive CR transport (Paper I) yields mass-loss
rates that are a few-100 times larger than streaming transport, and asymptotic
wind powers that are a factor of $\simeq 4$ larger. We discuss the implications
of our results for galactic wind theory and observations; strong shocks driven
by CR-streaming-induced instabilities produce gas with a wide range of
densities and temperatures, consistent with the multiphase nature of observed
winds. We also quantify the applicability of the isothermal gas approximation
for modeling streaming CRs and highlight the need for calculations with more
realistic thermodynamics.
|
The past year has seen numerous publications underlining the importance of a
space mission to the ice giants in the upcoming decade. Proposed mission plans
involve a $\sim$10 year cruise time to the ice giants. This cruise time can be
utilized to search for low-frequency gravitational waves (GWs) by observing the
Doppler shift caused by them in the Earth-spacecraft radio link. We calculate
the sensitivity of prospective ice giant missions to GWs. Then, adopting a
steady-state black hole binary population, we derive a conservative estimate
for the detection rate of extreme mass ratio inspirals (EMRIs), supermassive
black hole (SMBH) and stellar-mass binary black hole (sBBH) mergers. We link the SMBH
population to the fraction of quasars $f_{\rm bin}$ resulting from galaxy
mergers that pair SMBHs into a binary. For a total of ten 40-day observations
during the cruise of a single spacecraft, $\mathcal{O}(f_{\rm bin})\sim0.5$
detections of SMBH mergers are likely, if the Allan deviation of Cassini-era noise
is improved by $\sim 10^2$ in the $10^{-5}-10^{-3}$ Hz range. For EMRIs the
number of detections lies between $\mathcal{O}(0.1) - \mathcal{O}(100)$.
Furthermore, ice giant missions combined with the Laser Interferometer Space
Antenna (LISA) would improve the localisation by an order of magnitude compared
to LISA by itself.
|
We report the occurrence of a self-emerging frequency chimera state in
spatially extended systems of coupled oscillators, where the coherence and
incoherence are defined with respect to the emergent frequency of the
oscillations. This is generated by the local coupling among nonlinear
oscillators evolving under differing dynamical timescales starting from random
initial conditions. We show how they self-organize to structured patterns with
spatial domains of coherence that are in frequency synchronization, coexisting
with domains that are incoherent in frequencies. Our study has relevance in
understanding such patterns observed in real-world systems like neuronal
systems, power grids, social and ecological networks, where differing dynamical
timescales are natural and realistic among the interacting systems.
|
We prove that perfect $3$-hash linear codes in $\mathbb{F}_{3}^{n}$ must have
dimension at most $ \left(\frac{1}{4}-\epsilon\right)n$ for some absolute
constant $\epsilon > 0$.
|
Meta-learning is a branch of machine learning which aims to quickly adapt
models, such as neural networks, to perform new tasks by learning an underlying
structure across related tasks. In essence, models are being trained to learn
new tasks effectively rather than master a single task. Meta-learning is
appealing for process control applications because the perturbations to a
process required to train an AI controller can be costly and unsafe.
Additionally, the dynamics and control objectives are similar across many
different processes, so it is feasible to create a generalizable controller
through meta-learning capable of quickly adapting to different systems. In this
work, we construct a deep reinforcement learning (DRL) based controller and
meta-train the controller using a latent context variable through a separate
embedding neural network. We test our meta-algorithm on its ability to adapt to
new process dynamics as well as different control objectives on the same
process. In both cases, our meta-learning algorithm adapts very quickly to new
tasks, outperforming a regular DRL controller trained from scratch.
Meta-learning appears to be a promising approach for constructing more
intelligent and sample-efficient controllers.
|
Electricity production currently generates approximately 25% of greenhouse
gas emissions in the USA. Thus, increasing the amount of renewable energy is a
key step to carbon neutrality. However, integrating a large amount of
fluctuating renewable generation is a significant challenge for power grid
operating and planning. Grid reliability, i.e., an ability to meet operational
constraints under power fluctuations, is probably the most important of them.
In this paper, we propose computationally efficient and accurate methods to
estimate the probability of failure, i.e. reliability constraints violation,
under a known distribution of renewable energy generation. To this end, we
investigate an importance sampling approach, a flexible extension of
Monte-Carlo methods, which adaptively changes the sampling distribution to
generate more samples near the reliability boundary. The approach allows failure
probability to be estimated in real time based on only a few dozen random
samples, compared to the thousands required by plain Monte-Carlo. Our study
focuses on high voltage direct current power transmission grids with linear
reliability constraints on power injections and line currents. We propose a
novel theoretically justified physics-informed adaptive importance sampling
algorithm and compare its performance to state-of-the-art methods on multiple
IEEE power grid test cases.
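
As a generic, hedged illustration of the importance sampling idea (not the paper's physics-informed algorithm), the sketch below estimates a small failure probability for a linear constraint on Gaussian fluctuations by sampling from a proposal shifted toward the reliability boundary and reweighting.

```python
# Generic importance-sampling illustration (not the paper's physics-informed algorithm):
# estimate P(c^T x > b) for Gaussian fluctuations x, with a proposal shifted toward failure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
d = 10
c = rng.uniform(0.5, 1.5, size=d)               # placeholder "line loading" directions
b = 12.0                                        # reliability threshold (illustrative)

nominal = stats.multivariate_normal(mean=np.zeros(d), cov=np.eye(d))
shift = 0.9 * b * c / (c @ c)                   # shift the proposal toward the boundary
proposal = stats.multivariate_normal(mean=shift, cov=np.eye(d))

n = 50                                          # a few dozen samples, as in the abstract
x = proposal.rvs(size=n, random_state=3)
weights = np.exp(nominal.logpdf(x) - proposal.logpdf(x))
p_fail_is = np.mean((x @ c > b) * weights)

# Exact value for comparison: c^T x ~ N(0, ||c||^2).
p_exact = stats.norm.sf(b / np.linalg.norm(c))
print("IS estimate from %d samples: %.2e   exact: %.2e" % (n, p_fail_is, p_exact))
```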
|
Expectation-maximization (EM) is a popular and well-established method for
image reconstruction in positron emission tomography (PET) but it often suffers
from slow convergence. Ordered subset EM (OSEM) is an effective reconstruction
algorithm that provides significant acceleration during initial iterations, but
it has been observed to enter a limit cycle. In this work, we investigate two
classes of algorithms for accelerating OSEM based on variance reduction for
penalised PET reconstructions. The first is a stochastic variance-reduced EM
algorithm, termed SVREM, which extends classical EM to the stochastic
context by combining classical OSEM with insights from variance reduction
techniques for gradient descent. The second views OSEM as a preconditioned
stochastic gradient ascent, and applies variance reduction techniques, i.e.,
SAGA and SVRG, to estimate the update direction. We present several numerical
experiments to illustrate the efficiency and accuracy of the approaches. The
numerical results show that these approaches significantly outperform existing
OSEM type methods for penalised PET reconstructions, and hold great potential.
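
To illustrate the variance-reduction ingredient in generic form, here is a plain SVRG-style gradient estimator on a toy least-squares problem; it is a sketch of the general technique, not the PET-specific SVREM, SAGA, or SVRG updates studied in the paper.

```python
# Generic SVRG sketch on a toy least-squares problem (not the PET-specific algorithms):
# the direction grad_i(x) - grad_i(x_ref) + full_grad(x_ref) has reduced variance.
import numpy as np

rng = np.random.default_rng(4)
n, d = 200, 5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

def grad_i(x, i):                       # gradient of the i-th squared residual
    return 2 * A[i] * (A[i] @ x - b[i])

x = np.zeros(d)
step = 0.01
for epoch in range(20):
    x_ref = x.copy()
    full_grad = 2 * A.T @ (A @ x_ref - b) / n       # full gradient at the reference point
    for _ in range(n):
        i = rng.integers(n)
        direction = grad_i(x, i) - grad_i(x_ref, i) + full_grad
        x = x - step * direction
print("residual norm:", np.linalg.norm(A @ x - b))
```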
|
Topological phenomena are commonly studied in phases of matter which are
separated from a trivial phase by an unavoidable quantum phase transition. This
can be overly restrictive, leaving out scenarios of practical relevance --
similar to the distinction between liquid water and vapor. Indeed, we show that
topological phenomena can be stable over a large part of parameter space even
when the bulk is strictly speaking in a trivial phase of matter. In particular,
we focus on symmetry-protected topological phases which can be trivialized by
extending the symmetry group. The topological Haldane phase in spin chains
serves as a paradigmatic example where the $SO(3)$ symmetry is extended to
$SU(2)$ by tuning away from the Mott limit. Although the Haldane phase is then
adiabatically connected to a product state, we show that characteristic
phenomena -- edge modes, entanglement degeneracies and bulk phase transitions
-- remain parametrically stable. This stability is due to a separation of
energy scales, characterized by quantized invariants which are well-defined
when a subgroup of the symmetry only acts on high-energy degrees of freedom.
The low-energy symmetry group is a quotient group whose emergent anomalies
stabilize edge modes and unnecessary criticality, which can occur in any
dimension.
|
A Thickened Flame (TF) modeling approach is combined with a Large Eddy
Simulation (LES) methodology to model premixed combustion and the accuracy of
these model predictions is evaluated by comparing with the piloted premixed
stoichiometric methane-air flame data of Chen et al. [Combust. Flame 107 (1996)
233-244] at a Reynolds number Re = 24,000. In the TF model, the flame front is
artificially thickened to resolve it on the computational LES grid and the
reaction rates are specified using reduced chemistry. The response of the
thickened flame to turbulence is taken care of by incorporating an efficiency
function in the governing equations. The efficiency function depends on the
characteristics of the local turbulence and on the characteristics of the
premixed flame such as laminar flame speed and thickness. Three variants of the
TF model are examined: the original Thickened Flame model, the Power-law flame
wrinkling model, and the dynamically modified TF model. Reasonable agreement is
found when comparing predictions with the experimental data and with
computations reported using a probability distribution function (PDF) modeling
approach. The results of the TF model are in better agreement with data when
compared with the predictions of the G-equation approach.
|
Surrender poses one of the major risks to life insurance and a sound modeling
of its true probability has direct implication on the risk capital demanded by
the Solvency II directive. We add to the existing literature by performing
extensive experiments that present highly practical results for various
modeling approaches, including XGBoost, random forest, GLM and neural networks.
Further, we detect shortcomings of prevalent model assessments, which are in
essence based on a confusion matrix. Our results indicate that accurate label
predictions and a sound modeling of the true probability can be opposing
objectives. We illustrate this with the example of resampling. While resampling
is capable of improving label prediction in rare event settings, such as
surrender, and thus is commonly applied, we show theoretically and numerically
that models trained on resampled data predict significantly biased event
probabilities. Following a probabilistic perspective on surrender, we further
propose time-dependent confidence bands on predicted mean surrender rates as a
complementary assessment and demonstrate its benefit. This evaluation takes a
very practical, going concern perspective, which respects that the composition
of a portfolio, as well as the nature of underlying risk drivers might change
over time.
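
A hedged, self-contained sketch of the resampling effect described above: a logistic model fitted to oversampled synthetic data systematically overstates the mean event probability; the data-generating process is an assumption and this is not one of the paper's surrender models.

```python
# Illustration of the resampling bias discussed above (synthetic data, not the paper's models):
# oversampling the rare "surrender" class inflates the predicted event probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 20000
x = rng.normal(size=(n, 3))
true_p = 1 / (1 + np.exp(-(-3.0 + x @ np.array([0.8, -0.5, 0.3]))))   # rare event, ~5%
y = rng.binomial(1, true_p)

# Oversample the minority class to a balanced training set.
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
pos_over = rng.choice(pos, size=len(neg), replace=True)
idx = np.concatenate([pos_over, neg])

plain = LogisticRegression(max_iter=1000).fit(x, y)
resampled = LogisticRegression(max_iter=1000).fit(x[idx], y[idx])

print("observed event rate:             %.3f" % y.mean())
print("mean predicted rate (plain):     %.3f" % plain.predict_proba(x)[:, 1].mean())
print("mean predicted rate (resampled): %.3f" % resampled.predict_proba(x)[:, 1].mean())
```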
|
We address a challenging problem of identifying main sources of hate speech
on Twitter. On one hand, we carefully annotate a large set of tweets for hate
speech, and deploy advanced deep learning to produce high quality hate speech
classification models. On the other hand, we create retweet networks, detect
communities and monitor their evolution through time. This combined approach is
applied to three years of Slovenian Twitter data. We report a number of
interesting results. Hate speech is dominated by offensive tweets, related to
political and ideological issues. The share of unacceptable tweets is
moderately increasing with time, from the initial 20% to 30% by the end of
2020. Unacceptable tweets are retweeted significantly more often than
acceptable tweets. About 60% of unacceptable tweets are produced by a single
right-wing community of only moderate size. Institutional Twitter accounts and
media accounts post significantly less unacceptable tweets than individual
accounts. However, the main sources of unacceptable tweets are anonymous
accounts, and accounts that were suspended or closed during the last three
years.
|
Successful quantitative investment usually relies on precise predictions of
the future movement of the stock price. Recently, machine learning based
solutions have shown their capacity to give more accurate stock prediction and
become indispensable components in modern quantitative investment systems.
However, the i.i.d. assumption behind existing methods is inconsistent with the
existence of diverse trading patterns in the stock market, which inevitably
limits their ability to achieve better stock prediction performance. In this
paper, we propose a novel architecture, Temporal Routing Adaptor (TRA), to
empower existing stock prediction models with the ability to model multiple
stock trading patterns. Essentially, TRA is a lightweight module that consists
of a set of independent predictors for learning multiple patterns as well as a
router to dispatch samples to different predictors. Nevertheless, the lack of
explicit pattern identifiers makes it quite challenging to train an effective
TRA-based model. To tackle this challenge, we further design a learning
algorithm based on Optimal Transport (OT) to obtain the optimal sample to
predictor assignment and effectively optimize the router with such assignment
through an auxiliary loss term. Experiments on the real-world stock ranking
task show that compared to the state-of-the-art baselines, e.g., Attention LSTM
and Transformer, the proposed method can improve information coefficient (IC)
from 0.053 to 0.059 and 0.051 to 0.056 respectively. Our dataset and code used
in this work are publicly available:
https://github.com/microsoft/qlib/tree/main/examples/benchmarks/TRA.
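
A minimal, hedged sketch of the dispatch idea only (several independent predictors plus a router that assigns each sample to one of them); the sizes, the hard argmax routing, and the omission of the Optimal-Transport-based training are all simplifying assumptions relative to the released TRA code.

```python
# Minimal sketch of "multiple predictors + router" dispatching (not the released TRA code);
# the OT-based assignment used during training is omitted here.
import torch
import torch.nn as nn

class ToyTRA(nn.Module):
    def __init__(self, in_dim=16, num_predictors=3):
        super().__init__()
        self.predictors = nn.ModuleList(nn.Linear(in_dim, 1) for _ in range(num_predictors))
        self.router = nn.Linear(in_dim, num_predictors)       # scores one pattern per sample

    def forward(self, x):                                     # x: (batch, in_dim)
        preds = torch.stack([p(x).squeeze(-1) for p in self.predictors], dim=1)  # (B, K)
        choice = self.router(x).argmax(dim=1)                 # hard routing (illustrative)
        return preds.gather(1, choice.unsqueeze(1)).squeeze(1)                   # (B,)

out = ToyTRA()(torch.randn(8, 16))
print(out.shape)        # torch.Size([8])
```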
|
We continue studying $6D, {\cal N}=(1,1)$ supersymmetric Yang-Mills (SYM)
theory in the ${\cal N}=(1,0)$ harmonic superspace formulation. Using the
superfield background field method we explore the two-loop divergencies of the
effective action in the gauge multiplet sector. It is explicitly demonstrated
that among four two-loop background-field dependent supergraphs contributing to
the effective action, only one diverges off shell. It is also shown that the
divergences are proportional to the superfield classical equations of motion
and hence vanish on shell. Besides, we have analyzed a possible structure of
the two-loop divergences on general gauge and hypermultiplet background.
|
This article reviews a class of adaptive group testing procedures that
operate under a probabilistic model assumption as follows. Consider a set of
$N$ items, where item $i$ has probability $p$ ($p_i$ in generalized group
testing) of being defective, and probability $1-p$ of being non-defective,
independently of the other items. A group test applied to any subset of size
$n$ is a binary test with two possible outcomes, positive or negative. The
outcome is negative if all $n$ items are non-defective, whereas the outcome is
positive if at least one item among the $n$ items is defective. The goal is
complete identification of all $N$ items with the minimum expected number of
tests.
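
As a concrete, hedged illustration of the model (using classical Dorfman two-stage testing as a baseline, not necessarily one of the procedures reviewed here), the sketch below simulates the expected number of tests per item.

```python
# Hedged illustration of the group-testing model via classical Dorfman two-stage testing
# (a baseline procedure, not necessarily one of those reviewed in the article).
import numpy as np

rng = np.random.default_rng(6)
p, group_size, n_groups = 0.02, 10, 100_000

defective = rng.random((n_groups, group_size)) < p
group_positive = defective.any(axis=1)            # group test: positive iff any item is defective
tests = 1 + group_size * group_positive           # retest items individually if the group is positive

print("simulated tests per item: %.3f" % (tests.mean() / group_size))
print("theory:                   %.3f" % ((1 + group_size * (1 - (1 - p) ** group_size)) / group_size))
```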
|
Liquid-liquid phase separation (LLPS) is important to control a wide range of
reactions from gene expression to protein degradation in a cell-sized space. To
bring a better understanding of the compatibility of such phase-separated
structures with protein synthesis, we study emergent LLPS in a cell-free
transcription-translation (TXTL) reaction. When the TXTL reaction composed of
many proteins is concentrated, the uniformly mixed state becomes unstable, and
membrane-less phases form spontaneously. This LLPS droplet formation is induced
when the TXTL reaction is enclosed in water-in-oil emulsion droplets, in which
water evaporates from the surface. As the emulsion droplets shrink, smaller
LLPS droplets appear inside the emulsion droplets and coalesce into large
phase-separated domains that partition the localization of synthesized reporter
proteins. The presence of PEG in the TXTL reaction is important not only for
versatile cell-free protein synthesis but also for the formation of two large
domains capable of protein partitioning. Our results may shed light on the
dynamic interplay of LLPS formation and cell-free protein synthesis toward the
construction of synthetic organelles.
|
Mobile and embedded platforms are increasingly required to efficiently
execute computationally demanding DNNs across heterogeneous processing
elements. At runtime, the available hardware resources to DNNs can vary
considerably due to other concurrently running applications. The performance
requirements of the applications could also change under different scenarios.
To achieve the desired performance, dynamic DNNs have been proposed in which
the number of channels/layers can be scaled in real time to meet different
requirements under varying resource constraints. However, the training process
of such dynamic DNNs can be costly, since platform-aware models of different
deployment scenarios must be retrained to become dynamic. This paper proposes
Dynamic-OFA, a novel dynamic DNN approach for state-of-the-art platform-aware
NAS models (i.e. Once-for-all network (OFA)). Dynamic-OFA pre-samples a family
of sub-networks from a static OFA backbone model, and contains a runtime
manager to choose different sub-networks under different runtime environments.
As such, Dynamic-OFA does not need the traditional dynamic DNN training
pipeline. Compared to the state-of-the-art, our experimental results using
ImageNet on a Jetson Xavier NX show that the approach is up to 3.5x (CPU), 2.4x
(GPU) faster for similar ImageNet Top-1 accuracy, or 3.8% (CPU), 5.1% (GPU)
higher accuracy at similar latency.
|
NP (search) problems allow easy correctness tests for solutions. Climbing
algorithms also allow easy assessment of how close the configuration at any
stage of their run is to yielding the correct answer. This offers great
flexibility, since any deviation from the standard procedures can be instantly
assessed for how sensible it is.
An example is the Dual Matrix Algorithm (DMA) for linear programming,
variations of which were considered by A.Y. Levin in 1965 and by Yamnitsky and
myself in 1982. It has little sensitivity to numerical errors and to the number
of inequalities. It offers substantial flexibility and, thus, potential for
further developments.
|
In high-energy leptonic collisions, such as at a multi-TeV muon collider, the
collinear splittings of the electroweak (EW) gauge bosons and leptons are the
dominant phenomena, and the scattering processes should thus be formulated in
terms of the EW parton distribution functions (EW PDFs). We complete this
formalism in the Standard Model to include the QCD sector and evaluate the
quark and gluon PDFs inside a lepton at the double-log accuracy. The splittings
of the photon and subsequently the quarks and gluons control the quark/gluon
PDFs below the EW scale. The massive gauge bosons lead to substantial
contributions at high scales. The jet production cross section can reach the
order of a few nb (50 pb) in $e^+e^-$ ($\mu^+\mu^-$) collisions, at the TeV
c.m. energies with a moderate acceptance cut, that governs the overall event
shape up to about $p_T^j \sim 60$ GeV.
|