Zebrafish is a powerful and widely-used model system for a host of biological
investigations including cardiovascular studies and genetic screening.
Zebrafish are readily accessible during developmental stages; however, current
methods for quantifying and monitoring cardiac function mostly involve tedious
manual work and inconsistent estimation. In this paper, we
developed and validated a Zebrafish Automatic Cardiovascular Assessment
Framework (ZACAF) based on a U-net deep learning model for automated assessment
of cardiovascular indices, such as ejection fraction (EF) and fractional
shortening (FS) from microscopic videos of wildtype and cardiomyopathy mutant
zebrafish embryos. Our approach yielded favorable performance with accuracy
above 90% compared with manual processing. We used only regular black-and-white
microscopic recordings with frame rates of 5-20 frames per second (fps); thus,
the framework can be widely applied with standard laboratory resources and
infrastructure. Most importantly, the automation holds promise to enable
efficient, consistent and reliable processing and analysis of large numbers of
videos, which can be generated by diverse collaborating teams.
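As a concrete illustration of the indices involved, here is a minimal sketch of how EF and FS might be computed once a per-frame ventricle segmentation is available; the area-based EF proxy and all helper names are hypothetical, not ZACAF's actual implementation.

```python
import numpy as np

def cardiac_indices(areas, diameters):
    """Estimate EF and FS from per-frame ventricle measurements
    (hypothetical helper; assumes one segmented ventricle area and one
    short-axis diameter per video frame)."""
    eda, esa = areas.max(), areas.min()        # end-diastolic / end-systolic area
    dd, ds = diameters.max(), diameters.min()  # diastolic / systolic diameter
    ef = (eda - esa) / eda   # area-based ejection-fraction proxy
    fs = (dd - ds) / dd      # fractional shortening
    return ef, fs

areas = np.array([102.0, 95.0, 80.0, 78.0, 88.0, 100.0])
diams = np.array([12.0, 11.2, 9.1, 9.0, 10.3, 11.8])
print(cardiac_indices(areas, diams))
```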
|
We use direct $N$-body simulations to explore some possible scenarios for the
future evolution of two massive clusters observed toward the center of
NGC\,4654, a spiral galaxy with mass similar to that of the Milky Way. Using
archival HST data, we obtain the photometric masses of the two clusters,
$M=3\times 10^5$ M$_\odot$ and $M=1.7\times 10^6$ M$_\odot$, their half-light
radii, $R_{\rm eff}\sim4$ pc and $R_{\rm eff} \sim 6$ pc, and their projected
distances from the photometric center of the galaxy (both $<22$ pc). The
knowledge of the structure and separation of these two clusters ($\sim 24$ pc)
provides a unique view for studying the dynamics of a galactic central zone
hosting massive clusters. Varying some of the unknown clusters' orbital
parameters, we carry out several $N$-body simulations showing that the future
evolution of these clusters will inevitably result in their merger. We find
that, mainly depending on the shape of their relative orbit, they will merge
into the galactic center in less than 82 Myr. In addition to the tidal
interaction, proper consideration of dynamical-friction braking would shorten
the merging times to as little as a few Myr. We also investigate the
possibility of forming a massive nuclear star cluster (NSC) in the center of
the galaxy by this process. Our analysis suggests that for low-eccentricity
orbits, and relatively long merger times, the final merged cluster is spherical
in shape, with an effective radius of a few parsecs and a mass within the
effective radius of the order of $10^5\,\mathrm{M_{\odot}}$. Because the
central density of such a cluster is higher than that of the host galaxy, this
merger remnant is a likely embryo of a future NSC.
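For context on the dynamical-friction braking mentioned above, the conventional order-of-magnitude estimate is Chandrasekhar's timescale (quoted here as general background, not as the paper's own derivation):
$$t_{\rm df} \simeq \frac{1.17}{\ln\Lambda}\,\frac{r_i^{2}\,v_c}{G\,M},$$
where $r_i$ is the initial orbital radius of the cluster, $v_c$ the local circular speed, $M$ the cluster mass, and $\ln\Lambda$ the Coulomb logarithm; a more massive cluster sinks faster, consistent with the shortened merger times reported above.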
|
Due to the unavailability of nationally representative data on time use, a
systematic analysis of the gender gap in unpaid household and care work has not
been undertaken in the context of India. The present paper, using the recent
Time Use Survey (2019) data, examines the socioeconomic and demographic factors
associated with variation in time spent on unpaid household and care work among
men and women. It analyses how much of the gender gap in the time allocated to
unpaid work can be explained by differences in these factors. The findings show
that women spend far more time than men on unpaid household and care
work. The decomposition results reveal that differences in socioeconomic and
demographic factors between men and women do not explain most of the gender gap
in unpaid household work. Our results indicate that unobserved gender norms and
practices most crucially govern the allocation of unpaid work within Indian
households.
|
We consider the problem of online classification under a privacy constraint.
In this setting a learner observes sequentially a stream of labelled examples
$(x_t, y_t)$, for $1 \leq t \leq T$, and returns at each iteration $t$ a
hypothesis $h_t$ which is used to predict the label of each new example $x_t$.
The learner's performance is measured by her regret against a known hypothesis
class $\mathcal{H}$. We require that the algorithm satisfies the following
privacy constraint: the sequence $h_1, \ldots, h_T$ of hypotheses output by the
algorithm needs to be an $(\epsilon, \delta)$-differentially private function
of the whole input sequence $(x_1, y_1), \ldots, (x_T, y_T)$. We provide the
first non-trivial regret bound for the realizable setting. Specifically, we
show that if the class $\mathcal{H}$ has constant Littlestone dimension then,
given an oblivious sequence of labelled examples, there is a private learner
that makes in expectation at most $O(\log T)$ mistakes -- comparable to the
optimal mistake bound in the non-private case, up to a logarithmic factor.
Moreover, for general values of the Littlestone dimension $d$, the same mistake
bound holds but with a doubly-exponential in $d$ factor. A recent line of work
has demonstrated a strong connection between classes that are online learnable
and those that are differentially-private learnable. Our results strengthen
this connection and show that an online learning algorithm can in fact be
directly privatized (in the realizable setting). We also discuss an adaptive
setting and provide a sublinear regret bound of $O(\sqrt{T})$.
|
The intertwined processes of learning and evolution in complex environmental
niches have resulted in a remarkable diversity of morphological forms.
Moreover, many aspects of animal intelligence are deeply embodied in these
evolved morphologies. However, the principles governing relations between
environmental complexity, evolved morphology, and the learnability of
intelligent control, remain elusive, partially due to the substantial challenge
of performing large-scale in silico experiments on evolution and learning. We
introduce Deep Evolutionary Reinforcement Learning (DERL): a novel
computational framework which can evolve diverse agent morphologies to learn
challenging locomotion and manipulation tasks in complex environments using
only low level egocentric sensory information. Leveraging DERL we demonstrate
several relations between environmental complexity, morphological intelligence
and the learnability of control. First, environmental complexity fosters the
evolution of morphological intelligence as quantified by the ability of a
morphology to facilitate the learning of novel tasks. Second, evolution rapidly
selects morphologies that learn faster, thereby enabling behaviors learned late
in the lifetime of early ancestors to be expressed early in the lifetime of
their descendants. In agents that learn and evolve in complex environments,
this result constitutes the first demonstration of a long-conjectured
morphological Baldwin effect. Third, our experiments suggest a mechanistic
basis for both the Baldwin effect and the emergence of morphological
intelligence through the evolution of morphologies that are more physically
stable and energy efficient, and can therefore facilitate learning and control.
|
In this work we consider Bayesian inference problems with intractable
likelihood functions. We present a method to compute an approximation of the
posterior with a limited number of model simulations. The method features an
inverse Gaussian process regression (IGPR), i.e., a regression from the output
of the simulation model to its input. Within the method, we provide an adaptive
algorithm with a tempering procedure to construct the approximations of the
marginal posterior distributions. With examples we demonstrate that IGPR has a
competitive performance compared to some commonly used algorithms, especially
in terms of statistical stability and computational efficiency, while the price
to pay is that it can only compute a weighted Gaussian approximation of the
marginal posteriors.
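The core idea, regressing from model output back to model input, can be sketched in a few lines; the toy forward model, kernel choice, and one-dimensional setup below are illustrative assumptions, not the paper's adaptive tempered algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def forward_model(theta, rng):
    # Placeholder simulator: observable as a noisy function of the input.
    return np.sin(theta) + 0.1 * rng.standard_normal(theta.shape)

rng = np.random.default_rng(0)
theta = rng.uniform(-2.0, 2.0, size=(200, 1))    # prior draws of the input
y = forward_model(theta, rng)                    # simulated outputs

# "Inverse" GP: treat simulated outputs as features and the inputs as targets.
gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-3).fit(y, theta.ravel())

y_obs = np.array([[0.3]])                        # the observed data
mean, std = gp.predict(y_obs, return_std=True)   # weighted Gaussian approximation
print(mean, std)                                 # of the marginal posterior
```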
|
Recently it has become apparent that the Galactic center excess (GCE) is
spatially correlated with the stellar distribution in the Galactic bulge. This
has given extra motivation for the unresolved population of millisecond pulsars
(MSPs) explanation for the GCE. However, in the "recycling" channel the neutron
star forms in a core-collapse supernova that undergoes a random "kick" due
to the asymmetry of the explosion. This would imply a smoothing out of the
spatial distribution of the MSPs. We use N-body simulations to model how the
MSP spatial distribution changes. We estimate the probability distribution of
natal kick velocities using the resolved gamma-ray MSP proper motions, where
MSPs have random velocities relative to the circular motion with a scale
parameter of 77+/-6 km/s. We find that, due to the natal kicks, there is an
approximately 10% increase in each dimension of the bulge MSP spatial
distribution, and that the distribution becomes less boxy, though it is still
far from spherical.
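As a toy illustration of the kick model described above, the following sketch draws isotropic natal kicks with the quoted 77 km/s scale parameter and adds them to an assumed circular motion; the 200 km/s circular speed is a placeholder, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_msp, sigma_kick = 10_000, 77.0   # km/s scale from resolved MSP proper motions

v_kick = rng.normal(0.0, sigma_kick, size=(n_msp, 3))  # isotropic Gaussian kicks
v_circ = np.array([200.0, 0.0, 0.0])                   # placeholder circular motion, km/s
v_total = v_circ + v_kick

print(f"median kick speed: {np.median(np.linalg.norm(v_kick, axis=1)):.0f} km/s")
```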
|
The paper considers the problem of controlling Connected and Automated
Vehicles (CAVs) traveling through a three-entry roundabout so as to jointly
minimize both the travel time and the energy consumption while providing
speed-dependent safety guarantees, as well as satisfying velocity and
acceleration constraints. We first design a systematic approach to dynamically
determine the safety constraints and derive the unconstrained optimal control
solution. A joint optimal control and barrier function (OCBF) method is then
applied to efficiently obtain a controller that optimally tracks the
unconstrained optimal solution while guaranteeing all the constraints.
Simulation experiments are performed to compare the optimal controller to a
baseline of human-driven vehicles, showing effectiveness under symmetric and
asymmetric roundabout configurations, balanced and imbalanced traffic rates, and
different sequencing rules for CAVs.
|
The reversible implementation of classical functions accounts for the bulk of
most known quantum algorithms. As a result, a number of reversible circuit
constructions over the Clifford+$T$ gate set have been developed in recent
years which use both the state and phase spaces, or $X$ and $Z$ bases, to
reduce circuit costs beyond what is possible at the strictly classical level.
We study and generalize two particular classes of these constructions: relative
phase circuits, including Giles and Selinger's multiply-controlled $iX$ gates
and Maslov's $4$ qubit Toffoli gate, and measurement-assisted circuits,
including Jones' Toffoli gate and Gidney's temporary logical-AND. In doing so,
we introduce general methods for implementing classical functions up to phase
and for measurement-assisted termination of temporary values. We then apply
these techniques to find novel $T$-count efficient constructions of some
classical functions in space-constrained regimes, notably multiply-controlled
Toffoli gates and temporary products.
|
A new family of operators, coined hierarchical measurement operators, is
introduced and discussed within the well-known hierarchical sparse recovery
framework. Such an operator is a composition of block and mixing operations and
notably contains the Kronecker product as a special case. Results on their
hierarchical restricted isometry property (HiRIP) are derived, generalizing
prior work on recovery of hierarchically sparse signals from
Kronecker-structured linear measurements. Specifically, these results show
that, very surprisingly, sparsity properties of the block and mixing part can
be traded against each other. The measurement structure is well-motivated by a
massive random access channel design in communication engineering. Numerical
evaluation of user detection rates demonstrates the huge benefit of the
theoretical framework.
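A minimal sketch of the Kronecker special case mentioned above: a hierarchically $(s,\sigma)$-sparse signal measured through $B \otimes M$, where $B$ plays the mixing role and $M$ the block role. The dimensions and sparsity levels are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 8, 16            # N blocks, each of length n
s, sigma = 2, 3         # s active blocks, sigma active entries per active block

# Hierarchically (s, sigma)-sparse signal, stored block-wise.
x = np.zeros((N, n))
for b in rng.choice(N, size=s, replace=False):
    x[b, rng.choice(n, size=sigma, replace=False)] = rng.standard_normal(sigma)

B = rng.standard_normal((6, N)) / np.sqrt(6)    # mixing part
M = rng.standard_normal((10, n)) / np.sqrt(10)  # block part
A = np.kron(B, M)                               # Kronecker-structured operator
y = A @ x.ravel()                               # hierarchical measurements
print(y.shape)                                  # (60,)
```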
|
We consider fractional operators of the form $$\mathcal{H}^s=(\partial_t
-\mathrm{div}_{x} ( A(x,t)\nabla_{x}))^s,\ (x,t)\in\mathbb R^n\times\mathbb
R,$$ where $s\in (0,1)$ and $A=A(x,t)=\{A_{i,j}(x,t)\}_{i,j=1}^{n}$ is an
accretive, bounded, complex, measurable, $n\times n$-dimensional matrix valued
function. We study the fractional operators ${\mathcal{H}}^s$ and their
relation to the initial value problem $$(\lambda^{1-2s}\mathrm{u}')'(\lambda)
=\lambda^{1-2s}\mathcal{H} \mathrm{u}(\lambda), \quad \lambda\in (0, \infty),$$
$$\mathrm{u}(0) = u,$$ in $\mathbb R_+\times \mathbb R^n\times\mathbb R$.
Exploring this type of relation, and making the additional assumption that
$A=A(x,t)=\{A_{i,j}(x,t)\}_{i,j=1}^{n}$ is real, we derive some local
properties of solutions to the non-local Dirichlet problem
$$\mathcal{H}^su=(\partial_t -\mathrm{div}_{x} ( A(x,t)\nabla_{x}))^s u=0\
\mbox{ for $(x,t)\in \Omega \times J$},$$ $$ u=f\ \mbox{ for $(x,t)\in \mathbb
R^{n+1}\setminus (\Omega \times J)$}. $$ Our contribution is that we allow for
non-symmetric and time-dependent coefficients.
|
In recent years, most of the accuracy gains for video action recognition have
come from the newly designed CNN architectures (e.g., 3D-CNNs). These models
are trained by applying a deep CNN on a single clip of fixed temporal length.
Since each video segment is processed by the 3D-CNN module separately, the
corresponding clip descriptor is local and the inter-clip relationships are
inherently implicit. The common method of directly averaging the clip-level
outputs as a video-level prediction is prone to failure due to the lack of a
mechanism that can extract and integrate relevant information to represent the
video.
In this paper, we introduce the Gated Clip Fusion Network (GCF-Net) that can
greatly boost existing video action classifiers at the cost of a tiny
computational overhead. The GCF-Net explicitly models the inter-dependencies
between video clips to strengthen the receptive field of local clip
descriptors. Furthermore, the importance of each clip to an action event is
calculated and a relevant subset of clips is selected accordingly for a
video-level analysis. On a large benchmark dataset (Kinetics-600), the proposed
GCF-Net elevates the accuracy of existing action classifiers by 11.49% (based
on the central clip) and 3.67% (based on densely sampled clips), respectively.
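To make the gating idea concrete, here is a toy re-weighting module: per-clip descriptors receive a learned importance gate before being fused into a video-level descriptor. This is a simplified sketch, not the GCF-Net architecture, which additionally models inter-clip dependencies.

```python
import torch

class GatedClipFusion(torch.nn.Module):
    """Toy gated fusion of clip descriptors (illustrative only)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = torch.nn.Linear(dim, 1)

    def forward(self, clips):                 # clips: (batch, n_clips, dim)
        g = torch.sigmoid(self.gate(clips))   # per-clip importance in (0, 1)
        return (g * clips).sum(1) / g.sum(1)  # gated average over clips

video_desc = GatedClipFusion(256)(torch.randn(2, 8, 256))  # (2, 256)
```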
|
We have seen a surge in research aimed at adversarial attacks and defenses
in AI/ML systems. While it is crucial to formulate new attack methods and
devise novel defense strategies for robustness, it is also imperative to
recognize who is responsible for implementing, validating, and justifying the
necessity of these defenses: which components of the system are vulnerable to
what type of adversarial attacks, what expertise is needed to gauge the
severity of adversarial attacks, and how to evaluate and address the
adversarial challenges in order to recommend defense strategies for different
applications. This paper opens a discussion on who should examine and
implement adversarial defenses and the reasons behind such efforts.
|
The coarse similarity class $[A]$ of $A$ is the set of all $B$ whose
symmetric difference with $A$ has asymptotic density 0. There is a natural
metric $\delta$ on the space $\mathcal{S}$ of coarse similarity classes defined
by letting $\delta([A],[B])$ be the upper density of the symmetric difference
of $A$ and $B$. We study the resulting metric space, showing in particular that
between any two distinct points there are continuum many geodesic paths. We
also study subspaces of the form $\{[A] : A \in \mathcal U\}$ where $\mathcal
U$ is closed under Turing equivalence, and show that there is a tight
connection between topological properties of such a space and
computability-theoretic properties of $\mathcal U$.
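For intuition, a crude finite-prefix proxy for $\delta$ can be computed directly from the definition; the true upper density is a limsup, so the running maximum below only illustrates the quantity on an initial segment.

```python
from fractions import Fraction

def delta_prefix(A, B, n_max):
    """Finite-prefix proxy for delta([A],[B]): the largest running ratio
    |(A symmetric-difference B) ∩ {1,...,n}| / n over n <= n_max."""
    best, count = Fraction(0), 0
    for n in range(1, n_max + 1):
        count += (n in A) != (n in B)   # n lies in the symmetric difference
        best = max(best, Fraction(count, n))
    return best

evens = set(range(2, 10_000, 2))
print(delta_prefix(evens, set(), 9_999))  # 1/2, the density of the evens
```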
We then define a distance between Turing degrees based on Hausdorff distance
in this metric space. We adapt a proof of Monin to show that the distances
between degrees that occur are exactly 0, 1/2, and 1, and study which of these
values occur most frequently in the senses of measure and category. We define a
degree to be attractive if the class of all degrees at distance 1/2 from it has
measure 1, and dispersive otherwise. We study the distribution of attractive
and dispersive degrees. We also study some properties of the metric space of
Turing degrees under this Hausdorff distance, in particular the question of
which countable metric spaces are isometrically embeddable in it, giving a
graph-theoretic sufficient condition.
We also study the computability-theoretic and reverse-mathematical aspects of
a Ramsey-theoretic theorem due to Mycielski, which in particular implies that
there is a perfect set whose elements are mutually 1-random, as well as a
perfect set whose elements are mutually 1-generic.
Finally, we study the completeness of $(\mathcal S,\delta)$ from the
perspectives of computability theory and reverse mathematics.
|
Recent experiments on the antiferromagnetic intercalated transition metal
dichalcogenide $\mathrm{Fe_{1/3}NbS_2}$ have demonstrated reversible
resistivity switching by application of orthogonal current pulses below its
magnetic ordering temperature, making $\mathrm{Fe_{1/3}NbS_2}$ promising for
spintronics applications. Here, we perform density functional theory
calculations with Hubbard U corrections of the magnetic order, electronic
structure, and transport properties of crystalline $\mathrm{Fe_{1/3}NbS_2}$,
clarifying the origin of the different resistance states. The two
experimentally proposed antiferromagnetic ground states, corresponding to
in-plane stripe and zigzag ordering, are computed to be nearly degenerate.
In-plane cross sections of the calculated Fermi surfaces are anisotropic for
both magnetic orderings, with the degree of anisotropy sensitive to the Hubbard
U value. The in-plane resistance, computed within the Kubo linear response
formalism using a constant relaxation time approximation, is also anisotropic,
supporting the hypothesis that the current-induced resistance changes are due
to a repopulation of antiferromagnetic (AFM) domains. Our calculations indicate that the transport
anisotropy of $\mathrm{Fe_{1/3}NbS_2}$ in the zigzag phase is reduced relative
to stripe, consistent with the relative magnitudes of resistivity changes in
experiment. Finally, our calculations reveal the likely directionality of the
current-domain response, specifically, which domains are energetically
stabilized for a given current direction.
|
We present the analysis of the microlensing event OGLE-2018-BLG-1428, which
has a short-duration ($\sim 1$ day) caustic-crossing anomaly. The event was
caused by a planetary lens system with planet/host mass ratio
$q=1.7\times10^{-3}$. Thanks to the detection of the caustic-crossing anomaly,
the finite source effect was well measured, but the microlens parallax was not
constrained due to the relatively short timescale ($t_{\rm E}=24$ days). From a
Bayesian analysis, we find that the host star is a dwarf star $M_{\rm
host}=0.43^{+0.33}_{-0.22} \ M_{\odot}$ at a distance $D_{\rm
L}=6.22^{+1.03}_{-1.51}\ {\rm kpc}$ and the planet is a Jovian-mass planet
$M_{\rm p}=0.77^{+0.77}_{-0.53} \ M_{\rm J}$ with a projected separation
$a_{\perp}=3.30^{+0.59}_{-0.83}\ {\rm au}$. The planet orbits beyond the snow
line of the host star. Considering the relative lens-source proper motion of
$\mu_{\rm rel} = 5.58 \pm 0.38\ \rm mas\ yr^{-1}$, the lens can be resolved by
adaptive optics with a 30m telescope in the future.
|
Spatial constraints such as rigid barriers affect the dynamics of cell
populations, potentially altering the course of natural evolution. In this
paper, we study the population genetics of Escherichia coli proliferating in
microchannels with open ends. Our experiments reveal that competition among two
fluorescently labeled E. coli strains growing in a microchannel generates a
self-organized stripe pattern aligned with the axial direction of the channel.
To account for this observation, we employ a lattice population model in which
reproducing cells push entire lanes of cells towards the open ends of the
channel. By combining mathematical theory, numerical simulations, and
experiments, we find that the fixation dynamics is extremely fast along the
axial direction, with a logarithmic dependence on the number of cells per lane.
In contrast, competition among lanes is a much slower process. We also
demonstrate that random mutations appearing in the middle of the channel and
close to its walls are much more likely to reach fixation than mutations
occurring elsewhere.
|
Waste heat recovery for trucks via organic Rankine cycle is a promising
technology to reduce fuel consumption and emissions. As the vehicles are
operated in street traffic, the heat source is subject to strong fluctuations.
Consequently, such disturbances have to be considered to enable safe and
efficient operation. Herein, we find optimal operating policies for several
representative scenarios by means of dynamic optimization and discuss the
implications on control strategy design. First, we optimize operation of a
typical driving cycle with data from a test rig. Results indicate that
operating the cycle at minimal superheat is an appropriate operating policy.
Second, we consider a scenario where the permissible expander power is
temporarily limited, which is realistic in street traffic. In this case, an
operating policy with flexible superheat can reduce the losses associated with
operation at minimal superheat by up to 53 % in the considered scenario. As the
duration of the power limitation increases, other constraints might become
active, which results in part of the exhaust gas being bypassed and hence reduced savings.
|
We investigate a theoretical model for a dynamic Moir\'e grating which is
capable of producing slow and stopped light with improved performance when
compared with a static Moir\'e grating. A Moir\'e grating superimposes two
grating periods which creates a narrow slow light resonance between two band
gaps. A Moir\'e grating can be made dynamic by varying its coupling strength in
time. By increasing the coupling strength the reduction in group velocity in
the slow light resonance can be improved by many orders of magnitude while
still maintaining the wide bandwidth of the initial, weak grating. We show that
for a pulse propagating through the grating this is a consequence of altering
the pulse spectrum and therefore the grating can also perform bandwidth
modulation. Finally we present a possible realization of the system via an
electro-optic grating by applying a quasi-static electric field to a poled
$\chi^{(2)}$ nonlinear medium.
|
We studied the physical behavior of PdO nanoparticles at low temperatures,
which presents an unusual behavior clearly related to macroscopic quantum
tunneling. The samples show a tetragonal single phase with P42/mmc space group.
Most importantly, the particle size was estimated at about 5.07 nm. Appropriate
techniques were used to determine the characteristics of these nanoparticles.
The most important aspect of this study is the magnetic characterization
performed at low temperatures. It shows a peak at 50 K in zero-field-cooling
(ZFC) mode that corresponds to the blocking temperature (T$_{B}$). The
measurements in ZFC and field-cooling (FC) modes indicate that the peak
behavior is due to different relaxation times of the asymmetrical barriers when
the electron changes from one metastable state to another. Below T$_{B}$ in FC mode,
the magnetization decreases with temperature until 36 K; this temperature is
the crossover temperature (T$_{Cr}$) related to the anisotropy of the barriers,
indicative of macroscopic quantum tunneling.
|
This paper is concerned with the problem of nonlinear filter stability of
ergodic Markov processes. The main contribution is the conditional Poincar\'e
inequality (PI), which is shown to yield filter stability. The proof is based
upon a recently discovered duality which is used to transform the nonlinear
filtering problem into a stochastic optimal control problem for a backward
stochastic differential equation (BSDE). Based on these dual formalisms, a
comparison is drawn between the stochastic stability of a Markov process and
the filter stability. The latter relies on the conditional PI described in this
paper, whereas the former relies on the standard form of PI.
|
Unexpected hypersurfaces are a brand name for some special linear systems.
They were introduced around 2017 and have been a field of intensive study since
then. They have attracted a lot of attention because of their close ties to
various other areas of mathematics, including vector bundles, arrangements of
hyperplanes, and the geometry of projective varieties. Our research is
motivated by what is now known as the BMSS duality, which is a new way of
deriving projective varieties out of already constructed ones. The last author coined the
concept of companion surfaces in the setting of unexpected curves admitted by
the $B_3$ root system. Here we extend this construction in various directions.
We revisit the configurations of points associated to either root systems or to
Fermat arrangements and we study the geometry of the associated varieties and
their companions.
|
We present AURA-net, a convolutional neural network (CNN) for the
segmentation of phase-contrast microscopy images. AURA-net uses transfer
learning to accelerate training and Attention mechanisms to help the network
focus on relevant image features. In this way, it can be trained efficiently
with a very limited amount of annotations. Our network can thus be used to
automate the segmentation of datasets that are generally considered too small
for deep learning techniques. AURA-net also uses a loss inspired by active
contours that is well-adapted to the specificity of phase-contrast images,
further improving performance. We show that AURA-net outperforms
state-of-the-art alternatives in several small (fewer than 100 images) datasets.
|
Multi-domain learning (MDL) refers to learning a set of models
simultaneously, with each one specialized to perform a task in a certain
domain. Generally, high labeling effort is required in MDL, as data needs to be
labeled by human experts for every domain. Active learning (AL), which reduces
labeling effort by only using the most informative data, can be utilized to
address the above issue. The resultant paradigm is termed multi-domain active
learning (MDAL). However, currently little research has been done in MDAL, not
to mention any off-the-shelf solution. To fill this gap, we present a
comprehensive comparative study of 20 different MDAL algorithms, which are
established by combining five representative MDL models under different
information-sharing schemes and four well-used AL strategies under different
categories. We evaluate the algorithms on five datasets, involving textual and
visual classification tasks. We find that the models which capture both
domain-independent and domain-specific information are more likely to perform
well throughout the AL loop. Besides, the simplest informativeness-based
uncertainty strategy surprisingly performs well on most datasets. As our off-the-shelf
recommendation, the combination of Multinomial Adversarial Networks (MAN) with
the best vs second best (BvSB) uncertainty strategy shows its superiority in
most cases, and this combination is also robust across datasets and domains.
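The best-vs-second-best strategy mentioned above is simple to state: query the samples whose top two class probabilities are closest. A minimal sketch (not the benchmarked implementation) follows.

```python
import numpy as np

def bvsb_query(probs, k):
    """Select the k most ambiguous samples: smallest gap between the
    best and second-best predicted class probabilities."""
    ordered = np.sort(probs, axis=1)         # ascending per-row sort
    margin = ordered[:, -1] - ordered[:, -2] # best minus second best
    return np.argsort(margin)[:k]            # smallest margins queried first

probs = np.array([[0.60, 0.30, 0.10],
                  [0.40, 0.39, 0.21],
                  [0.90, 0.05, 0.05]])
print(bvsb_query(probs, 1))   # [1]: the row with a near-tie is queried first
```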
|
This paper develops and investigates the impacts of multi-objective Nash
optimum (user equilibrium) traffic assignment on a large-scale network for
battery electric vehicles (BEVs) and internal combustion engine vehicles
(ICEVs) in a microscopic traffic simulation environment. Eco-routing is a
technique that finds the most energy efficient route. ICEV and BEV energy
consumption patterns are significantly different with regard to their
sensitivity to driving cycles. Unlike ICEVs, BEVs are more energy efficient on
low-speed arterial trips compared to highway trips. Different energy
consumption patterns require different eco-routing strategies for ICEVs and
BEVs. This study found that eco-routing could reduce energy consumption for
BEVs but also significantly increase their average travel time. The simulation
study found that multi-objective routing could reduce the energy consumption of
BEVs by 13.5, 14.2, 12.9, and 10.7 percent, as well as the fuel consumption of
ICEVs by 0.1, 4.3, 3.4, and 10.6 percent for "not congested", "slightly
congested", "moderately congested", and "highly congested" conditions,
respectively. The study also found that multi-objective user equilibrium
routing reduced the average vehicle travel time by up to 10.1% compared to the
standard user equilibrium traffic assignment for the highly congested
conditions, producing a solution closer to the system optimum traffic
assignment. The results indicate that the multi-objective eco-routing can
effectively reduce fuel/energy consumption with minimum impacts on travel times
for both BEVs and ICEVs.
|
Using the dynamical diquark model, we calculate the electric-dipole radiative
decay widths to $X(3872)$ of the lightest negative-parity exotic candidates,
including the four $I=0$, $J^{PC} \! = \! 1^{--}$ ("$Y$") states. The
$O$(100--1000 keV) values obtained test the hypothesis of a common substructure
shared by all of these states. We also calculate the magnetic-dipole radiative
decay width for $Z_c(4020)^0 \! \to \! \gamma X(3872)$, and find it to be
rather smaller ($<$~10 keV) than its predicted value in molecular models.
|
Unmanned Aerial Vehicle (UAV) swarm adoption shows steady growth among
operators due to the time and cost benefits arising from their use. However,
this kind of system faces an important problem, which is the calculation of
many optimal paths for each UAV. Solving this problem would make it possible to
control many UAVs without human intervention at the same time, while saving
battery between recharges and performing several tasks simultaneously. The main aim is to
develop a system capable of calculating the optimal flight path for a UAV
swarm. The aim of these paths is to achieve full coverage of a flight area for
tasks such as field prospection. All this, regardless of the size of maps and
the number of UAVs in the swarm. It is not necessary to establish targets or
any other previous knowledge other than the given map. Experiments have been
conducted to determine whether it is optimal to establish a single control for
all UAVs in the swarm or a control for each UAV. The results show that it is
better to use one control for all UAVs because of the shorter flight time. In
addition, the flight time is greatly affected by the size of the map. The
results give starting points for future research such as finding the optimal
map size for each situation.
|
We present a neutron spectroscopy based method to study quantitatively the
partial miscibility and phase behaviour of an organic photovoltaic active layer
made of conjugated polymer:small molecule blends, presently illustrated with
the regio-random poly(3-hexylthiophene-2,5-diyl) and fullerene [6,6]-Phenyl
C$_{61}$ butyric acid methyl ester (RRa-P3HT:PCBM) system. We perform both
inelastic neutron scattering and quasi-elastic neutron scattering measurements
to study the structural dynamics of blends of different compositions enabling
us to resolve the phase behaviour. The difference of neutron cross sections
between RRa-P3HT and PCBM, and the use of the deuteration technique, offer a unique
opportunity to probe the miscibility limit of fullerene in the amorphous
polymer-rich phase and to tune the contrast between the polymer and the
fullerene phases, respectively. Therefore, the proposed approach should be
universal and relevant to study new non-fullerene acceptors that are closely
related - in terms of chemical structures - to the polymer, where other
conventional imaging and spectroscopic techniques present a poor contrast
between the blend components.
|
Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$. The Sombor and
reduced Sombor indices of $G$ are defined as $SO(G)=\sum_{uv\in
E(G)}\sqrt{deg_G(u)^2+deg_G(v)^2}$ and $SO_{red}(G)=\sum_{uv\in
E(G)}\sqrt{(deg_G(u)-1)^2+(deg_G(v)-1)^2}$, respectively. We denote by
$H_{n,\nu}$ the graph constructed from the star $S_n$ by adding $\nu$ edge(s)
$(0\leq \nu\leq n-2)$, between a fixed pendent vertex and $\nu$ other pendent
vertices. R\'eti et al. [T. R\'eti, T. Do\v{s}li\'c and A. Ali, On the Sombor
index of graphs, $\textit{Contrib. Math. }$ $\textbf{3}$ (2021) 11-18] proposed
a conjecture that the graph $H_{n,\nu}$ has the maximum Sombor index among all
connected $\nu$-cyclic graphs of order $n$, where $5\leq \nu \leq n-2$. In this
paper we confirm that this conjecture is true. It is also shown that the
conjecture is valid for the reduced Sombor index. The relationship between the
Sombor, reduced Sombor, and first Zagreb indices of graphs is also investigated.
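Both indices are straightforward to compute from their definitions; the sketch below (using networkx, with an assumed labeling for $H_{n,\nu}$) is for illustration only.

```python
import math
import networkx as nx

def sombor(G):
    return sum(math.sqrt(G.degree(u) ** 2 + G.degree(v) ** 2)
               for u, v in G.edges())

def sombor_reduced(G):
    return sum(math.sqrt((G.degree(u) - 1) ** 2 + (G.degree(v) - 1) ** 2)
               for u, v in G.edges())

def H(n, nu):
    """Star S_n (center 0, pendent vertices 1..n-1) plus nu edges between
    the fixed pendent vertex 1 and pendent vertices 2..nu+1 (nu <= n-2)."""
    G = nx.star_graph(n - 1)
    G.add_edges_from((1, w) for w in range(2, 2 + nu))
    return G

G = H(8, 3)
print(sombor(G), sombor_reduced(G))
```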
|
Meta-learning synthesizes and leverages the knowledge from a given set of
tasks to rapidly learn new tasks using very little data. Meta-learning of
linear regression tasks, where the regressors lie in a low-dimensional
subspace, is an extensively-studied fundamental problem in this domain.
However, existing results either guarantee highly suboptimal estimation errors,
or require $\Omega(d)$ samples per task (where $d$ is the data dimensionality)
thus providing little gain over separately learning each task. In this work, we
study a simple alternating minimization method (MLLAM), which alternately
learns the low-dimensional subspace and the regressors. We show that, for a
constant subspace dimension, MLLAM obtains a nearly-optimal estimation error,
despite requiring only $\Omega(\log d)$ samples per task. However, the number
of samples required per task grows logarithmically with the number of tasks. To
remedy this in the low-noise regime, we propose a novel task subset selection
scheme that ensures the same strong statistical guarantee as MLLAM, even with a
bounded number of samples per task for an arbitrarily large number of tasks.
|
In the presence of spacetime torsion, the momentum components do not commute;
therefore, in quantum field theory, summation over the momentum eigenvalues
will replace integration over the momentum. In the Einstein--Cartan theory of
gravity, in which torsion is coupled to spin, the separation between the
eigenvalues increases with the magnitude of the momentum. Consequently, this
replacement regularizes divergent integrals in Feynman diagrams with loops by
turning them into convergent sums. In this article, we apply torsional
regularization to the self-energy of a charged lepton in quantum
electrodynamics. We show that this procedure eliminates the ultraviolet
divergence. We also show that torsion gives a photon a small nonzero mass,
which regularizes the infrared divergence. In the end, we calculate the finite
bare masses of the electron, muon, and tau lepton: $0.4329\,\mbox{MeV}$,
$90.95\,\mbox{MeV}$, and $1543\,\mbox{MeV}$, respectively. These values
constitute about $85\%$ of the observed, renormalized masses.
|
Wikipedia is an online encyclopedia available in 285 languages. It constitutes
an extremely relevant Knowledge Base (KB), which could be leveraged by
automatic systems for several purposes. However, the structure and organisation
of such information are not amenable to automatic parsing and understanding,
and it is, therefore, necessary to structure this knowledge. The goal of the current
SHINRA2020-ML task is to leverage Wikipedia pages in order to categorise their
corresponding entities across 268 hierarchical categories, belonging to the
Extended Named Entity (ENE) ontology. In this work, we propose three distinct
models based on the contextualised embeddings yielded by Multilingual BERT. We
explore the performances of a linear layer with and without explicit usage of
the ontology's hierarchy, and a Gated Recurrent Units (GRU) layer. We also test
several pooling strategies to leverage BERT's embeddings and selection criteria
based on the labels' scores. We were able to achieve good performance across a
large variety of languages, including those not seen during the fine-tuning
process (zero-shot languages).
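One of the explored ingredients, pooling Multilingual BERT embeddings and feeding them to a linear layer, can be sketched as follows; the mean-pooling choice, the multi-label sigmoid head, and the untrained weights are illustrative assumptions rather than the submitted models.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")
head = torch.nn.Linear(bert.config.hidden_size, 268)  # one logit per ENE category

def classify(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = bert(**batch).last_hidden_state         # (B, T, hidden)
    mask = batch["attention_mask"].unsqueeze(-1)     # exclude padding tokens
    pooled = (hidden * mask).sum(1) / mask.sum(1)    # mean pooling
    return torch.sigmoid(head(pooled))               # multi-label category scores

scores = classify(["Chopin was a Polish composer and pianist."])
```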
|
In this work we present (and encourage the use of) the Williamson theorem and
its consequences in several contexts in physics. We demonstrate this theorem
using only basic concepts of linear algebra and symplectic matrices. As an
immediate application in the context of small oscillations, we show that
applying this theorem reveals the normal-mode coordinates and frequencies of
the system in the Hamiltonian scenario. A modest introduction to the symplectic
formalism in quantum mechanics is presented, using the theorem to study
quantum normal modes and canonical distributions of thermodynamically stable
systems described by quadratic Hamiltonians. As a last example, a more advanced
topic concerning uncertainty relations is developed to show once more its
utility in a distinct and modern perspective.
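A compact numerical companion to the theorem: the symplectic eigenvalues (the normal-mode frequencies) of a positive-definite Hamiltonian matrix $M$ are the moduli of the eigenvalues of $JM$. The two-oscillator example is an assumption chosen for checkability.

```python
import numpy as np
from scipy.linalg import eigvals

def symplectic_eigenvalues(M):
    """Williamson symplectic eigenvalues of a positive-definite 2n x 2n
    matrix M: the eigenvalues of J @ M come in pairs +/- i*d_j."""
    n = M.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    d = np.sort(np.abs(eigvals(J @ M)))
    return d[::2]                        # each d_j appears twice

# H = (p1^2 + p2^2 + w1^2 q1^2 + w2^2 q2^2)/2 with w1 = 1, w2 = 2,
# in the ordering (q1, q2, p1, p2).
M = np.diag([1.0, 4.0, 1.0, 1.0])
print(symplectic_eigenvalues(M))         # ~ [1., 2.]
```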
|
This paper presents the Multilingual COVID-19 Analysis Method (CMTA) for
detecting and observing the spread of misinformation about this disease within
texts. CMTA proposes a data science (DS) pipeline that applies machine learning
models for processing, classifying (Dense-CNN) and analyzing (MBERT)
multilingual (micro)-texts. DS pipeline data preparation tasks extract features
from multilingual textual data and categorize it into specific information
classes (i.e., 'false', 'partly false', 'misleading'). The CMTA pipeline has
been tested on multilingual micro-texts (tweets), showing
misinformation spread across different languages. To assess the performance of
CMTA and put it in perspective, we performed a comparative analysis of CMTA
with eight monolingual models used for detecting misinformation. The comparison
shows that CMTA has surpassed various monolingual models and suggests that it
can be used as a general method for detecting misinformation in multilingual
micro-texts. CMTA experimental results show misinformation trends about
COVID-19 in different languages during the first pandemic months.
|
We theoretically report the emergence of $Z_4$ parafermion edge modes in a
periodically driven spinful superconducting chain with modest fermionic Hubbard
interaction. These parafermion edge modes represent $\pm \pi/(2T)$ quasienergy
excitations ($T$ being the driving period), which have no static counterpart
and arise from the interplay between interaction effect and periodic driving.
At special parameter values, these exotic quasiparticles can be analytically
and exactly derived. Strong numerical evidence of their robustness against
variations in parameter values and spatial disorder is further presented. Our
proposal offers a route toward realizing parafermions without fractional
quantum Hall systems or complicated interactions.
|
In this rejoinder, we aim to address two broad issues that cover most
comments made in the discussion. First, we discuss some theoretical aspects of
our work and comment on how this work might impact the theoretical foundation
of privacy-preserving data analysis. Taking a practical viewpoint, we next
discuss how f-differential privacy (f-DP) and Gaussian differential privacy
(GDP) can make a difference in a range of applications.
|
We present the first spectroscopically resolved H$\alpha$ emission map of the
Large Magellanic Cloud's (LMC) galactic wind. By combining new Wisconsin
H-alpha Mapper (WHAM) observations ($I_{\rm H\alpha}\gtrsim10~{\rm mR}$) with
existing H\,I 21-cm emission observations, we have (1) mapped the LMC's near-side
galactic wind over a local standard of rest (LSR) velocity range of $+50\le\rm
v_{LSR}\le+250~{\rm km}~{\rm s}^{-1}$, (2) determined its morphology and
extent, and (3) estimated its mass, outflow rate, and mass-loading factor. We
observe \ha\ emission from this wind to typically 1-degree off the LMC's \hi\
disk. Kinematically, we find that the diffuse gas in the warm-ionized phase of
this wind persists at both low ($\lesssim100~{\rm km}~{\rm s}^{-1}$) and high
($\gtrsim100~{\rm km}~{\rm s}^{-1}$) velocities, relative to the LMC's H\,I
disk. Furthermore, we find that the high-velocity component spatially aligns
with the most intense star-forming region, 30~Doradus. We, therefore, conclude
that this high-velocity material traces an active outflow. We estimate the mass
of the warm ($T_e\approx10^4~\rm K$) ionized phase of the near-side LMC outflow
to be $\log{\left(M_{\rm ionized}/M_\odot\right)=7.51\pm0.15}$ for the combined
low and high velocity components. Assuming an ionization fraction of 75\% and
that the wind is symmetrical about the LMC disk, we estimate that its total
(neutral and ionized) mass is $\log{\left(M_{\rm total}/M_\odot\right)=7.93}$,
its mass-flow rate is $\dot{M}_{\rm outflow}\approx1.43~M_\odot~\rm yr^{-1}$,
and its mass-loading factor is $\eta\approx4.54$. Our average mass-loading
factor results are roughly a factor of 2.5 larger than previous H$\alpha$ imaging
and UV~absorption line studies, suggesting that those studies are missing
nearly half the gas in the outflows.
|
While 2D occupancy maps commonly used in mobile robotics enable safe
navigation in indoor environments, robots must represent 3D geometry and
semantic environment information in order to understand their environment at
the level required for more advanced tasks. We
propose a pipeline that can generate a multi-layer representation of indoor
environments for robotic applications. The proposed representation includes 3D
metric-semantic layers, a 2D occupancy layer, and an object instance layer
where known objects are replaced with an approximate model obtained through a
novel model-matching approach. The metric-semantic layer and the object
instance layer are combined to form an augmented representation of the
environment. Experiments show that the proposed shape matching method
outperforms a state-of-the-art deep learning method when tasked to complete
unseen parts of objects in the scene. The pipeline performance translates well
from simulation to real world as shown by F1-score analysis, with semantic
segmentation accuracy using Mask R-CNN acting as the major bottleneck. Finally,
we also demonstrate on a real robotic platform how the multi-layer map can be
used to improve navigation safety.
|
Analyzing human affect is vital for human-computer interaction systems. Most
methods are developed in restricted scenarios which are not practical for
in-the-wild settings. The Affective Behavior Analysis in-the-wild (ABAW) 2021
Contest provides a benchmark for this in-the-wild problem. In this paper, we
introduce a multi-modal and multi-task learning method by using both visual and
audio information. We use both AU and expression annotations to train the model
and apply a sequence model to further extract associations between video
frames. We achieve an AU score of 0.712 and an expression score of 0.477 on the
validation set. These results demonstrate the effectiveness of our approach in
improving model performance.
|
It is well known that compressed sensing (CS) can boost massive random access
protocols. Usually, the protocols operate in some overloaded regime where the
sparsity can be exploited. In this paper, we consider a different approach: we
take an orthogonal FFT base, subdivide its image into appropriate sub-channels,
and let each sub-channel take only a fraction of the load. To show that this
approach can actually achieve the full capacity, we i) provide new
concentration inequalities, and ii) devise a sparsity capture effect, i.e.,
where the sub-division can be driven such that the activity in each sub-channel
is sparse by design. We show by simulations that the system is scalable,
resulting in a roughly 30-fold capacity increase.
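A toy numerical picture of the sub-division idea: the rows of a unitary FFT matrix are split into disjoint sub-channels, users are pre-assigned to sub-channels, and each sub-channel then carries only a fraction of the (sparse) activity. All sizes and the activity probability are illustrative.

```python
import numpy as np

n, n_sub = 256, 8
F = np.fft.fft(np.eye(n)) / np.sqrt(n)        # orthogonal (unitary) FFT base
subchannels = np.split(F, n_sub, axis=0)      # disjoint row blocks

rng = np.random.default_rng(0)
n_users, p_active = 512, 0.02
active = rng.random(n_users) < p_active       # sporadic user activity
assignment = rng.integers(0, n_sub, n_users)  # each user owns one sub-channel

# Per-sub-channel load is ~ p_active * n_users / n_sub: sparse by design.
print(np.bincount(assignment[active], minlength=n_sub))
```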
|
Matrices are often built and designed by applying procedures from lower order
matrices. Matrix tensor products, direct sums and multiplication of matrices
retain certain properties of the lower order matrices; matrices produced by
these procedures are said to be {\em separable}. {\em Entangled} matrices is
the term used for matrices which are not separable. Here design methods for
entangled matrices are derived. These can retain properties of lower order
matrices or acquire new required properties.
Entangled matrices are often required in practice and a number of
applications of the designs are given. Methods with which to construct
multidimensional entangled paraunitary matrices are derived; these have
applications for wavelet and filter bank design. New entangled unitary matrices
are designed; these are used in quantum information theory. Efficient methods
for designing new full diversity constellations of unitary matrices with
excellent {\em quality} (a defined term) for space time applications are given.
|
We investigate theoretically coherent detection implemented simultaneously on
a set of mutually orthogonal spatial modes in the image plane as a method to
characterize properties of a composite thermal source below the Rayleigh limit.
A general relation between the intensity distribution in the source plane and
the covariance matrix for the complex field amplitudes measured in the image
plane is derived. An algorithm to estimate parameters of a two-dimensional
symmetric binary source is devised and verified using Monte Carlo simulations
to provide super-resolving capability at a high ratio of signal to detection
noise (SNR). Specifically, the separation between two point sources can be
meaningfully determined down to $\textrm{SNR}^{-1/2}$ in the units determined
by the spatial spread of the transfer function of the imaging system. The
presented algorithm is shown to make a nearly optimal use of the measured data
in the sub-Rayleigh region.
|
Pool block withholding attack is performed among mining pools in digital
cryptocurrencies, such as Bitcoin. Instead of mining honestly, pools can be
incentivized to infiltrate their own miners into other pools. These
infiltrators report partial solutions but withhold full solutions, share block
rewards but make no contribution to block mining. The block withholding attack
among mining pools can be modeled as a non-cooperative game called "the miner's
dilemm", which reduces effective mining power in the system and leads to
potential systemic instability in the blockchain. However, existing literature
on the game-theoretic properties of this attack only gives a preliminary
analysis, e.g., an upper bound of 3 for the pure price of anarchy (PPoA) in
this game, with two pools involved and no miner betraying. Pure price of
anarchy is a measurement of how much mining power is wasted in the miner's
dilemma game. Further tightening its upper bound will bring us more insight
into the structure of this game, so as to design mechanisms to reduce the
systemic loss caused by mutual attacks. In this paper, we give a tight bound of
(1, 2] for the pure price of anarchy. Moreover, we show the tight bound holds
in a more general setting in which infiltrators may betray. We also prove the
existence and uniqueness of pure Nash equilibrium in this setting. Inspired by
experiments on the game among three mining pools, we conjecture that similar
results hold in the $N$-player miner's dilemma game ($N \ge 2$).
|
Computing the distribution of permanents of random matrices has been an
outstanding open problem for several decades. In quantum computing,
"anti-concentration" of this distribution is an unproven input for the proof of
hardness of the task of boson-sampling. We study permanents of random i.i.d.
complex Gaussian matrices, and more broadly, submatrices of random unitary
matrices. Using a hybrid representation-theoretic and combinatorial approach,
we prove strong lower bounds for all moments of the permanent distribution. We
provide substantial evidence that our bounds are close to being tight and
constitute accurate estimates for the moments. Let $U(d)^{k\times k}$ be the
distribution of $k\times k$ submatrices of $d\times d$ random unitary matrices,
and $G^{k\times k}$ be the distribution of $k\times k$ complex Gaussian
matrices. (1) Using the Schur-Weyl duality (or the Howe duality), we prove an
expansion formula for the $2t$-th moment of $|Perm(M)|$ when $M$ is drawn from
$U(d)^{k\times k}$ or $G^{k\times k}$. (2) We prove a surprising size-moment
duality: the $2t$-th moment of the permanent of random $k\times k$ matrices is
equal to the $2k$-th moment of the permanent of $t\times t$ matrices. (3) We
design an algorithm to exactly compute high moments of the permanent of small
matrices. (4) We prove lower bounds for arbitrary moments of permanents of
matrices drawn from $G^{ k\times k}$ or $U(k)$, and conjecture that our lower
bounds are close to saturation up to a small multiplicative error. (5) Assuming
our conjectures, we use the large deviation theory to compute the tail of the
distribution of log-permanent of Gaussian matrices for the first time. (6) We
argue that it is unlikely that the permanent distribution can be uniquely
determined from the integer moments and one may need to supplement the moment
calculations with extra assumptions to prove the anti-concentration conjecture.
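For small $k$ the exact permanent is cheap via Ryser's formula, so moments of $|\mathrm{Perm}(M)|$ for Gaussian $M$ can at least be sanity-checked by Monte Carlo; this generic sketch is not the exact-moment algorithm of item (3), and the estimate is noisy because the distribution is heavy-tailed.

```python
import numpy as np

def permanent(M):
    """Ryser's formula: exact permanent of a k x k matrix in O(2^k * k^2)."""
    k = M.shape[0]
    total = 0.0 + 0.0j
    for subset in range(1, 1 << k):
        cols = [j for j in range(k) if subset >> j & 1]
        total += (-1) ** len(cols) * np.prod(M[:, cols].sum(axis=1))
    return (-1) ** k * total

rng = np.random.default_rng(0)
k, t, n_samples = 3, 2, 20_000
moment = np.mean([
    abs(permanent((rng.standard_normal((k, k)) +
                   1j * rng.standard_normal((k, k))) / np.sqrt(2))) ** (2 * t)
    for _ in range(n_samples)
])
print(moment)   # Monte Carlo estimate of the 2t-th moment for G^{k x k}
```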
|
In this paper we attempt to develop a general $p$-Bergman theory on bounded
domains in $\mathbb C^n$. To indicate the basic difference between the $L^p$
and $L^2$ cases, we show that the $p$-Bergman kernel $K_p(z)$ is not
real-analytic on some bounded complete Reinhardt domains when $p\ge 4$ is an
even number. By the calculus of variations we get a fundamental reproducing
formula. This together with certain techniques from nonlinear analysis of the
$p$-Laplacian yields a number of results, e.g., the off-diagonal $p$-Bergman
kernel $K_p(z,\cdot)$ is H\"older continuous of order $\frac12$ for $p>1$ and
of order $\frac1{2(n+2)}$ for $p=1$. We also show that the $p$-Bergman metric
$B_p(z;X)$ tends to the Carath\'eodory metric $C(z;X)$ as $p\rightarrow \infty$
and that the generalized Levi form $i\partial\bar{\partial}\log K_p(z;X)$ is no
less than $B_p(z;X)^2$ for $p\ge 2$ and $C(z;X)^2$ for $p\le 2$. Stability of
$K_p(z,w)$ or $B_p(z;X)$ as $p$ varies, boundary behavior of $K_p(z)$, as well
as basic facts on the $p$-Bergman projection, are also investigated.
|
Quantitative evaluation has increased dramatically among recent video
inpainting work, but the video and mask content used to gauge performance has
received relatively little attention. Although attributes such as camera and
background scene motion inherently change the difficulty of the task and affect
methods differently, existing evaluation schemes fail to control for them,
thereby providing minimal insight into inpainting failure modes. To address
this gap, we propose the Diagnostic Evaluation of Video Inpainting on
Landscapes (DEVIL) benchmark, which consists of two contributions: (i) a novel
dataset of videos and masks labeled according to several key inpainting failure
modes, and (ii) an evaluation scheme that samples slices of the dataset
characterized by a fixed content attribute, and scores performance on each
slice according to reconstruction, realism, and temporal consistency quality.
By revealing systematic changes in performance induced by particular
characteristics of the input content, our challenging benchmark enables more
insightful analysis into video inpainting methods and serves as an invaluable
diagnostic tool for the field. Our code is available at
https://github.com/MichiganCOG/devil .
|
Quantum dots (QDs) made from semiconductors are among the most promising
platforms for the development of quantum computing and simulation chips, and
have advantages over other platforms in high-density integration and in
compatibility with standard semiconductor chip fabrication technology.
However, the development of a highly tunable semiconductor multiple-QD system
still remains a major challenge. Here, we demonstrate the realization of a
highly tunable linear quadruple QD (QQD) in a narrow-bandgap semiconductor InAs
nanowire using a fine finger-gate technique. The QQD is studied by electron
transport measurements in the linear response regime. Characteristic
two-dimensional charge stability diagrams containing four groups of resonant
current lines of different slopes are found for the QQD. It is shown that these
current lines can be individually assigned as arising from resonant electron
transport through the energy levels of different QDs. Benefiting from the
excellent gate tunability, we also demonstrate tuning of the QQD to regimes
where the energy levels of two QDs, three QDs, and all four QDs are
energetically on resonance, respectively, with the Fermi level of the source and
drain contacts. A capacitance network model is developed for the linear QQD and
the simulated charge stability diagrams based on the model show good agreements
with the experiments. Our work presents solid experimental evidence that
narrow-bandgap semiconductor nanowire multiple QDs could be used as a
versatile platform to achieve integrated qubits for quantum computing and to
perform quantum simulations for complex many-body systems.
|
Predictive energy management of Connected and Automated Vehicles (CAVs), in
particular those with multiple power sources, has the potential to
significantly improve energy savings in real-world driving conditions. In
particular, the eco-driving problem seeks to design optimal speed and power
usage profiles based upon available information from connectivity and advanced
mapping features to minimize the fuel consumption between two designated
locations.
In this work, the eco-driving problem is formulated as a three-state receding
horizon optimal control problem and solved via Dynamic Programming (DP). The
optimal solution, in terms of vehicle speed and battery State of Charge (SoC)
trajectories, allows a connected and automated hybrid electric vehicle to
intelligently pass the signalized intersections and minimize fuel consumption
over a prescribed route. To enable real-time implementation, a parallel
architecture of DP is proposed for an NVIDIA GPU with CUDA programming.
Simulation results indicate that the proposed optimal controller delivers more
than 15% fuel economy benefits compared to a baseline control strategy and that
the solver time can be reduced by more than 90% by the parallel implementation
when compared to a serial implementation.
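The parallelism exploited on the GPU comes from the Bellman backup being independent across grid points. The toy backward recursion below vectorizes one such backup over a one-dimensional SoC grid; the dynamics, costs, and grid sizes are placeholders, not the paper's three-state formulation.

```python
import numpy as np

n_steps, n_soc, n_u = 50, 101, 21
soc = np.linspace(0.3, 0.8, n_soc)    # SoC grid
u = np.linspace(-5.0, 5.0, n_u)       # battery power grid, kW (signed)
fuel = np.maximum(0.0, 8.0 - u)       # engine covers an 8 kW power demand

V = np.zeros(n_soc)                   # terminal cost-to-go
for _ in range(n_steps):
    soc_next = soc[:, None] - 0.002 * u[None, :]             # toy SoC dynamics
    idx = np.clip(np.searchsorted(soc, soc_next), 0, n_soc - 1)
    Q = fuel[None, :] + V[idx]        # stage cost + cost-to-go at nearest grid point
    V = Q.min(axis=1)                 # Bellman backup over all grid points at once
print(V[n_soc // 2])                  # optimal cost starting from mid-grid SoC
```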
|
We investigate how sentence-level transformers can be modified into effective
sequence labelers at the token level without any direct supervision. Existing
approaches to zero-shot sequence labeling do not perform well when applied on
transformer-based architectures. As transformers contain multiple layers of
multi-head self-attention, information in the sentence gets distributed between
many tokens, negatively affecting zero-shot token-level performance. We find
that a soft attention module which explicitly encourages sharpness of attention
weights can significantly outperform existing methods.
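One simple way to encourage sharp attention weights is a low softmax temperature, sketched below; the module in the paper may differ, and the temperature trick is shown only to illustrate why sharpness matters for reading token-level labels off sentence-level attention.

```python
import numpy as np

def sharp_attention(scores, temperature=0.1):
    """Softmax over token scores; small temperatures concentrate the
    weight on few tokens, which is what token-level labeling needs."""
    z = scores / temperature
    z -= z.max()                 # numerical stability
    w = np.exp(z)
    return w / w.sum()

scores = np.array([0.2, 1.5, 0.3, 0.1])
print(sharp_attention(scores, 1.0))   # diffuse weights over all tokens
print(sharp_attention(scores, 0.1))   # nearly all mass on token 1
```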
|
We calculate the possible interaction between a superconductor and the static
Earth's gravitational fields, making use of the gravito-Maxwell formalism
combined with the time-dependent Ginzburg-Landau theory. We try to estimate
the most favourable conditions for enhancing the effect, optimizing the
superconductor parameters characterizing the chosen sample. We also give a
qualitative comparison of the behaviour of high-$T_\text{c}$ and classical
low-$T_\text{c}$ superconductors with respect to the gravity/superfluid
interplay.
|
Implications of the Raychaudhuri equation in focusing of geodesic congruence
are studied in the framework of scalar-tensor theory of gravity. Brans-Dicke
theory and Bekenstein's scalar field theory are picked up for investigation. In
both the theories, a static spherically symmetric distribution and a spatially
homogeneous and isotropic cosmological model are dealt with, as specific
examples. It is found that with reasonable physical conditions, there are
possibilities for a violation of the convergence condition. This fact leads to
a possibility of avoiding a singularity.
|
We show that, for vector spaces in which distance measurement is performed
using a gauge, the existence of best coapproximations in $1$-codimensional
closed linear subspaces implies in dimensions $\geq 2$ that the gauge is a
norm, and in dimensions $\geq 3$ that the gauge is even a Hilbert space norm.
We also show that coproximinality of all closed subspaces of a fixed dimension
implies coproximinality of all subspaces of all lower finite dimensions.
|
We construct high-order semi-discrete-in-time and fully discrete (with
Fourier-Galerkin in space) schemes for the incompressible Navier-Stokes
equations with periodic boundary conditions, and carry out corresponding error
analysis. The schemes are of implicit-explicit type based on a scalar auxiliary
variable (SAV) approach. It is shown that numerical solutions of these schemes
are uniformly bounded without any restriction on time step size. These uniform
bounds enable us to carry out a rigorous error analysis for the schemes up to
fifth-order in a unified form, and derive global error estimates in
$l^\infty(0,T;H^1)\cap l^2(0,T;H^2)$ in the two dimensional case as well as
local error estimates in $l^\infty(0,T;H^1)\cap l^2(0,T;H^2)$ in the three
dimensional case. We also present numerical results confirming our theoretical
convergence rates and demonstrating advantages of higher-order schemes for
flows with complex structures in the double shear layer problem.
|
Stochastically switching force terms appear frequently in models of
biological systems under the action of active agents such as proteins. The
interaction of switching force and Brownian motion can create an "effective
thermal equilibrium" even though the system does not obey a potential function.
In order to extend the field of energy landscape analysis to understand
stability and transitions in switching systems, we derive the quasipotential
that defines this effective equilibrium for a general overdamped Langevin
system with a force switching according to a continuous-time Markov chain
process. Combined with the string method for computing most-probable transition
paths, we apply our method to an idealized system and show the appearance of
previously unreported numerical challenges. We present modifications to the
algorithms to overcome these challenges, and show validity by demonstrating
agreement between our computed quasipotential barrier and asymptotic Monte
Carlo transition times in the system.
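For concreteness, a minimal sketch of an overdamped Langevin system whose force switches according to a two-state continuous-time Markov chain; the force fields, switching rates and noise level below are placeholder assumptions, not the paper's idealized system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two placeholder force fields (wells at +1 and -1) and switching rates.
forces = [lambda x: -(x - 1.0), lambda x: -(x + 1.0)]
rates = np.array([[0.0, 0.5],    # rate state 0 -> 1
                  [0.5, 0.0]])   # rate state 1 -> 0

def simulate(x0=0.0, T=100.0, dt=1e-3, noise=0.5):
    n = int(T / dt)
    x, s = x0, 0
    traj = np.empty(n)
    for k in range(n):
        # Euler-Maruyama step with the currently active force.
        x += forces[s](x) * dt + np.sqrt(2 * noise * dt) * rng.standard_normal()
        # First-order approximation of the CTMC switching event.
        if rng.random() < rates[s, 1 - s] * dt:
            s = 1 - s
        traj[k] = x
    return traj
```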
|
Convolutional Neural Networks (CNNs) have achieved great success due to the
powerful feature learning ability of convolution layers. Specifically, the
standard convolution traverses the input images/features using a sliding window
scheme to extract features. However, not all the windows contribute equally to
the prediction results of CNNs. In practice, the convolutional operation on
some of the windows (e.g., smooth windows that contain very similar pixels) can
be very redundant and may introduce noise into the computation. Such
redundancy may not only deteriorate performance but also incur unnecessary
computational cost. Thus, it is important to reduce the
computational redundancy of convolution to improve the performance. To this
end, we propose a Content-aware Convolution (CAC) that automatically detects
the smooth windows and applies a 1x1 convolutional kernel to replace the
original large kernel. In this sense, we are able to effectively avoid the
redundant computation on similar pixels. By replacing the standard convolution
in CNNs with our CAC, the resultant models yield significantly better
performance and lower computational cost than the baseline models with the
standard convolution. More critically, we are able to dynamically allocate
suitable computation resources according to the data smoothness of different
images, making it possible for content-aware computation. Extensive experiments
on various computer vision tasks demonstrate the superiority of our method over
existing methods.
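A minimal sketch of the routing idea in PyTorch; the smoothness criterion, threshold, and dense two-branch evaluation are our assumptions (the actual CAC gathers smooth windows and computes only the cheap branch on them):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentAwareConv2d(nn.Module):
    """Sketch of the routing idea only: windows whose local variance
    falls below a threshold are served by a cheap 1x1 kernel, the rest
    by the full kxk kernel."""
    def __init__(self, in_ch, out_ch, k=3, threshold=1e-3):
        super().__init__()
        self.full = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.cheap = nn.Conv2d(in_ch, out_ch, 1)
        self.k, self.threshold = k, threshold

    def forward(self, x):
        # Per-window variance of the input, averaged over channels.
        mean = F.avg_pool2d(x, self.k, stride=1, padding=self.k // 2)
        var = F.avg_pool2d(x * x, self.k, stride=1, padding=self.k // 2) - mean**2
        smooth = (var.mean(dim=1, keepdim=True) < self.threshold).float()
        # Note: evaluating both branches densely, as here, saves no
        # work; a real implementation runs the 1x1 kernel only on the
        # gathered smooth windows.
        return smooth * self.cheap(x) + (1 - smooth) * self.full(x)
```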
|
Let $G$ be an $n$-vertex graph and let $L:V(G)\rightarrow P(\{1,2,3\})$ be a
list assignment over the vertices of $G$, where each vertex with list of size 3
and of degree at most 5 has at least three neighbors with lists of size 2. We
can determine $L$-choosability of $G$ in $O(1.3196^{n_3+.5n_2})$ time, where
$n_i$ is the number of vertices in $G$ with list of size $i$ for $i\in
\{2,3\}$. As a corollary, we conclude that the 3-colorability of any graph $G$
with minimum degree at least 6 can be determined in $O(1.3196^{n-.5\Delta(G)})$
time.
|
Multi-instance learning is a type of weakly supervised learning. It deals
with tasks where the data is a set of bags and each bag is a set of instances.
Only the bag labels are observed whereas the labels for the instances are
unknown. An important advantage of multi-instance learning is that by
representing objects as a bag of instances, it is able to preserve the inherent
dependencies among parts of the objects. Unfortunately, most existing
algorithms assume all instances to be \textit{independently and identically
distributed}, an assumption violated in real-world scenarios since the
instances within a bag are rarely independent. In this work, we propose the
Multi-Instance
Variational Auto-Encoder (MIVAE) algorithm which explicitly models the
dependencies among the instances for predicting both bag labels and instance
labels. Experimental results on several multi-instance benchmarks and
end-to-end medical imaging datasets demonstrate that MIVAE performs better than
state-of-the-art algorithms for both instance label and bag label prediction
tasks.
|
The global pandemic caused by the COVID-19 virus led universities to change
the way they teach classes, moving to a distance mode. The subject "Modelos y
Sistemas de Costos" of the CPA degree program of the Faculty of Economic
Sciences and Administration of the Universidad de la Rep\'ublica (Uruguay)
incorporated audiovisual material as a pedagogical resource, consisting of
videos recorded by a group of highly experienced, top-ranked teachers. The
objective of this research is to analyze the efficiency of the audiovisual
resources used in the course, seeking to answer whether the viewings of said
materials follow certain patterns of behavior. 13 videos were analyzed, which
had 16,340 views coming from at least 1,486 viewers. We find that viewings
depend on proximity to the test dates and that, although viewing time follows
a curve that tracks the duration of the videos, it is limited: the average
viewing time is 10 minutes and 4 seconds. It is also concluded that the
efficiency in viewing time increases for short videos.
|
The $H$-join of a family of graphs $\mathcal{G}=\{G_1, \dots, G_p\}$, also
called the generalized composition, $H[G_1, \dots, G_p]$, where all graphs are
undirected, simple and finite, is the graph obtained by replacing each vertex
$i$ of $H$ by $G_i$ and adding to the edges of all graphs in $\mathcal{G}$ the
edges of the join $G_i \vee G_j$, for every edge $ij$ of $H$. Some well known
graph operations are particular cases of the $H$-join of a family of graphs
$\mathcal{G}$ as it is the case of the lexicographic product (also called
composition) of two graphs $H$ and $G$, $H[G]$. For a long time, the known
expressions for determining the entire spectrum of the $H$-join in terms of
the spectra of its components and an associated matrix were limited to
families of regular graphs. In this work, we extend such a determination, as
well as the determination of the characteristic polynomial, to families of
arbitrary graphs. From the obtained results, the eigenvectors of the adjacency
matrix of the $H$-join can also be determined in terms of the adjacency
matrices of the components and an associated matrix.
|
A class of network codes has been proposed in the literature in which the
symbols transmitted on network edges are binary vectors and the coding
operation performed in network nodes consists of the application of (possibly
several) permutations on each incoming vector and XOR-ing the results to obtain
the outgoing vector. These network codes, which we will refer to as
permute-and-add network codes, involve simpler operations and are known to
provide lower complexity solutions than scalar linear codes. The complexity of
these codes is determined by their degree which is the number of permutations
applied on each incoming vector to compute an outgoing vector. Constructions of
permute-and-add network codes for multicast networks are known. In this paper,
we provide a new framework based on group algebras to design permute-and-add
network codes for arbitrary (not necessarily multicast) networks. Our framework
allows the use of any finite group of permutations (including circular shifts,
proposed in prior work) and admits a trade-off between coding rate and the
degree of the code. Further, our technique permits elegant recovery and
generalizations of the key results on permute-and-add network codes known in
the literature.
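To make the node operation concrete, a toy sketch; the vector length, the permutations, and the degree are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
VEC_LEN = 8   # length of the binary edge vectors (illustrative choice)

def node_encode(incoming, perms_per_edge):
    """Toy permute-and-add node operation: apply each permutation to
    its incoming binary vector and XOR all results into the outgoing
    vector. The degree of the code is len(perms) for each edge."""
    out = np.zeros(VEC_LEN, dtype=np.uint8)
    for v, perms in zip(incoming, perms_per_edge):
        for perm in perms:
            out ^= v[perm]          # v[perm] applies the permutation
    return out

# Example with two incoming edges and two permutations per edge
# (degree 2); circular shifts, as in prior work, correspond to perms
# of the form np.roll(np.arange(VEC_LEN), s).
v1 = rng.integers(0, 2, VEC_LEN, dtype=np.uint8)
v2 = rng.integers(0, 2, VEC_LEN, dtype=np.uint8)
perms = [[rng.permutation(VEC_LEN) for _ in range(2)] for _ in range(2)]
out = node_encode([v1, v2], perms)
```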
|
The main purpose of this paper is to construct high-girth regular expander
graphs with localized eigenvectors for general degrees, inspired by a recent
work of Alon, Ganguly and Srivastava (to appear in Israel J. Math.).
|
Video streaming has become an integral part of the Internet. To efficiently
utilize the limited network bandwidth, it is essential to encode the video
content. However, encoding is a computationally intensive task, involving
high-performance resources provided by private infrastructures or public
clouds. Public clouds, such as Amazon EC2, provide a large portfolio of
services and instances optimized for specific purposes and budgets. The
majority of Amazon instances use x86 processors, such as Intel Xeon or AMD
EPYC. However, following the recent trends in computer architecture, Amazon
introduced Arm-based instances that promise up to 40% better cost-performance
ratio than comparable x86 instances for specific workloads. We evaluate in this
paper the video encoding performance of x86 and Arm instances of four instance
families using the latest FFmpeg version and two video codecs. We examine the
impact of the encoding parameters, such as different presets and bitrates, on
the time and cost for encoding. Our experiments reveal that Arm instances show
high time and cost-saving potential of up to 33.63% for specific bitrates and
presets, especially for the x264 codec. However, the x86 instances are more
general and achieve low encoding times, regardless of the codec.
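A minimal sketch of such a measurement loop, assuming ffmpeg is installed, a POSIX system (for /dev/null), and a placeholder input file:

```python
import subprocess
import time

# File names and the preset/bitrate lists are placeholders; instance
# pricing is not modeled here.
PRESETS = ["ultrafast", "fast", "medium", "slow"]
BITRATES = ["1M", "2M", "4M"]

def encode_time(src, preset, bitrate):
    # Encode with libx264 and discard the output, timing the run.
    cmd = ["ffmpeg", "-y", "-i", src,
           "-c:v", "libx264", "-preset", preset, "-b:v", bitrate,
           "-f", "null", "/dev/null"]
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

for p in PRESETS:
    for b in BITRATES:
        t = encode_time("input.mp4", p, b)
        print(f"preset={p:9s} bitrate={b}: {t:.1f}s")
```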
|
We have implemented training of neural networks in secure multi-party
computation (MPC) using the quantization commonly employed in that setting. To the
best of our knowledge, we are the first to present an MNIST classifier purely
trained in MPC that comes within 0.2 percent of the accuracy of the same
convolutional neural network trained via plaintext computation. More
concretely, we have trained a network with two convolution and two dense layers
to 99.2% accuracy in 25 epochs. This took 3.5 hours in our MPC implementation
(under one hour for 99% accuracy).
|
Interpretation of machine learning models has become one of the most
important research topics due to the necessity of maintaining control and
avoiding bias in these algorithms. Since many machine learning algorithms are
published every day, there is a need for novel model-agnostic interpretation
approaches that could be used to interpret a great variety of algorithms. One
advantageous way to interpret machine learning models is to feed them
different input data and observe the changes in the prediction. Using such an
approach, practitioners can identify relations between data patterns and a
model's decisions.
This work proposes a model-agnostic interpretation approach that uses
visualization of feature perturbations induced by the PSO algorithm. We
validate our approach on publicly available datasets, showing the capability to
enhance the interpretation of different classifiers while yielding very stable
results compared with state-of-the-art algorithms.
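To illustrate the perturb-and-observe idea, here is a sketch that substitutes plain random perturbations for the PSO-driven search described above; the sklearn-style predict_proba interface is an assumption:

```python
import numpy as np

def perturbation_sensitivity(model, x, n_trials=200, scale=0.1, rng=None):
    """Sketch: measure how much the model's prediction moves when each
    feature is nudged. Plain random perturbations stand in for the
    PSO-driven search described in the paper."""
    rng = rng or np.random.default_rng(0)
    base = model.predict_proba(x.reshape(1, -1))[0]
    sensitivity = np.zeros(x.size)
    for j in range(x.size):
        for d in rng.normal(0.0, scale, n_trials):
            xp = x.copy()
            xp[j] += d
            p = model.predict_proba(xp.reshape(1, -1))[0]
            sensitivity[j] += np.abs(p - base).max()
        sensitivity[j] /= n_trials
    return sensitivity  # larger = prediction more sensitive to feature j
```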
|
Using the integral field unit (IFU) data from Mapping Nearby Galaxies at
Apache Point Observatory (MaNGA) survey, we collect a sample of 36 star-forming
galaxies that host galactic-scale outflows in the ionized gas phase. The control
sample is matched in the three dimensional parameter space of stellar mass,
star formation rate and inclination angle. Concerning the global properties,
the outflow-host galaxies tend to have smaller sizes, more asymmetric gas
disks, more active star formation in the center and older stellar populations
than the control galaxies. Comparing the stellar population properties along
the axes, we conclude that the star formation in the outflow-host galaxies can
be divided into two branches. One branch evolves following the inside-out
formation scenario. The other, located in the galactic center, is triggered by
gas accretion or galaxy interaction, and further drives the galactic-scale
outflows. In addition, the enhanced star formation and metallicity along the
minor axis of the outflow-host galaxies reveal positive feedback and metal
entrainment in the galactic-scale outflows. Observational data in different
phases with higher spatial resolution are needed to reveal in detail the
influence of galactic-scale outflows on star formation.
|
We consider in parallel pointed homotopy automorphisms of iterated wedge sums
of topological spaces and boundary relative homotopy automorphisms of iterated
connected sums of manifolds minus a disk. Under certain conditions on the
spaces and manifolds, we prove that the rational homotopy groups of these
homotopy automorphisms form finitely generated FI-modules, and thus satisfy
representation stability for symmetric groups, in the sense of Church and Farb.
|
In this manuscript, we report a strong linear correlation between the shifted
velocity and the line width of the broad blue-shifted [OIII] components in
SDSS quasars. Broad blue-shifted [OIII] components are commonly treated as
indicators of outflows related to the central engine; however, it is still an
open question whether the outflows are related to central accretion properties
or to local physical properties of NLRs (narrow emission line regions). Here,
the reported strong linear correlation, with a Spearman rank correlation
coefficient of 0.75, can be expected under the assumption of AGN (active
galactic nuclei) feedback driven outflows, based on a large sample of 535 SDSS
quasars with reliable blue-shifted broad [OIII] components. Moreover, the
detection rates of broad blue-shifted and broad red-shifted [OIII] components
in quasars differ markedly, and no positive correlation can be found between
the shifted velocity and the line width of the broad red-shifted [OIII]
components, which provides further strong evidence against the possibility
that local outflows in NLRs lead to the broad blue-shifted [OIII] components
in quasars. Thus, the strong linear correlation can be treated as strong
evidence that the broad blue-shifted [OIII] components are better indicators
of outflows related to the central engine in AGN. Furthermore, rather than
central BH masses, Eddington ratios and continuum luminosities play key roles
in determining the properties of the broad blue-shifted [OIII] components in
quasars.
|
This paper presents a detailed description of the system, and its results,
developed as part of our participation in the CONSTRAINT shared task at
AAAI-2021. The shared task comprises two tasks: a) COVID-19 fake news detection
in English, and b) hostile post detection in Hindi. Task-A is a binary
classification problem with fake and real class, while task-B is a multi-label
multi-class classification task with five hostile classes (i.e. defame, fake,
hate, offense, non-hostile). Various techniques are used to perform the
classification task, including SVM, CNN, BiLSTM, and CNN+BiLSTM with tf-idf and
Word2Vec embedding techniques. Results indicate that SVM with tf-idf features
achieved the highest 94.39% weighted $f_1$ score on the test set in task-A.
Label powerset SVM with n-gram features obtained the maximum coarse-grained and
fine-grained $f_1$ score of 86.03% and 50.98% on the task-B test set
respectively.
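A minimal sketch of the strongest Task-A configuration, SVM with tf-idf features, using scikit-learn; the placeholder data and hyperparameters are assumptions, and the shared-task corpora would be loaded separately:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder data standing in for the shared-task corpora.
train_texts = ["a real news example", "a fake news example"]
train_labels = [0, 1]
test_texts = ["another example"]
test_labels = [0]

# tf-idf features feeding a linear SVM classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LinearSVC(),
)
clf.fit(train_texts, train_labels)
pred = clf.predict(test_texts)
print("weighted f1:", f1_score(test_labels, pred, average="weighted"))
```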
|
Doping is considered to be the main method for improving the thermoelectric
performance of layered sodium cobaltate (Na$_{1-x}$CoO$_2$). However, in the
vast majority of past reports, the equilibrium location of the dopant in the
Na$_{1-x}$CoO$_2$'s complex layered lattice has not been confidently
identified. Consequently, a universal strategy for choosing a suitable dopant
for enhancing Na$_{1-x}$CoO$_2$'s figure of merit is yet to be established.
Here, by examining the formation energy of Gd and Yb dopants in
Na$_{0.75}$CoO$_2$ and Na$_{0.50}$CoO$_2$, we demonstrate that in an
oxygen-poor environment, Gd and Yb dopants reside in the Na layer, while in an
oxygen-rich environment these dopants replace a Co in the CoO$_2$ layer. When
in the Na layer, Gd and Yb dopants reduce the carrier concentration via
electron-hole recombination, simultaneously increasing the Seebeck coefficient
($S$) and reducing the electrical conductivity ($\sigma$). Na-site doping,
however, improves
the thermoelectric power factor (PF) only in Na$_{0.50}$CoO$_2$. When replacing
a Co, these dopants reduce $S$ and the PF. The results demonstrate how
thermoelectric performance critically depends on the synthesis environment,
which must be fine-tuned to achieve any thermoelectric enhancement.
|
Equation learning aims to infer differential equation models from data. While
a number of studies have shown that differential equation models can be
successfully identified when the data are sufficiently detailed and corrupted
with relatively small amounts of noise, the relationship between observation
noise and uncertainty in the learned differential equation models remains
unexplored. We demonstrate that for noisy data sets there exists great
variation in both the structure of the learned differential equation models
and the parameter values. We explore how to combine data sets to quantify
uncertainty in the learned models, and at the same time draw mechanistic
conclusions about the target differential equations. We generate noisy data
using a stochastic agent-based model and combine equation learning methods with
approximate Bayesian computation (ABC) to show that the correct differential
equation model can be successfully learned from data, while a quantification of
uncertainty is given by a posterior distribution in parameter space.
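A generic sketch of the ABC rejection step; the names and the distance function are placeholders, and in the equation-learning setting `simulate` would run a candidate differential equation model with parameters `theta`:

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, n_draws=10_000, eps=0.5):
    """Generic ABC rejection sketch: keep parameter draws whose
    simulated output lies within eps of the observed data (Euclidean
    distance on summary statistics)."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()          # draw parameters from the prior
        sim = simulate(theta)           # simulate the candidate model
        if np.linalg.norm(sim - observed) < eps:
            accepted.append(theta)
    return np.array(accepted)  # samples from the approximate posterior
```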
|
We perform binary neutron star merger simulations using a newly derived set
of finite-temperature equations of state in the Brueckner-Hartree-Fock
approach. We point out the important and opposite roles of finite temperature
and rotation for stellar stability and systematically investigate the
gravitational-wave properties, matter distribution, and ejecta properties in
the postmerger phase for the different cases. The validity of several universal
relations is also examined and the most suitable EOSs are identified.
|
Granger causality has been employed to investigate causality relations
between components of stationary multiple time series. We generalize this
concept by developing statistical inference for local Granger causality for
multivariate locally stationary processes. Our proposed local Granger causality
approach captures time-evolving causality relationships in nonstationary
processes. The proposed local Granger causality is well represented in the
frequency domain and estimated based on the parametric time-varying spectral
density matrix using the local Whittle likelihood. Under regularity conditions,
we demonstrate that the estimators converge in distribution to a multivariate
normal. Additionally, the test statistic for the local Granger causality
is shown to be asymptotically distributed as a quadratic form of a multivariate
normal distribution. The finite sample performance is confirmed with several
simulation studies for multivariate time-varying autoregressive models. For
practical demonstration, the proposed local Granger causality method uncovered
new functional connectivity relationships between channels in brain signals.
Moreover, the method was able to identify structural changes in financial data.
|
The Fermi surface properties of the nontrivial system YSi are investigated by de
Haas-van Alphen (dHvA) oscillation measurements combined with the
first-principle calculations. Three main frequencies ($\alpha$, $\beta$,
$\gamma$) are probed up to $14$~T magnetic field in dHvA oscillations. The
$\alpha$-branch, corresponding to the $21$~T frequency, possesses non-trivial
topological character with a $\pi$ Berry phase, a linear dispersion along the
$\Gamma$ to $Z$ direction, and a small effective mass of $0.069~m_e$, reaching
the second-lowest Landau level at $14$~T. For the $B~\parallel$~[010]
direction, the
295~T frequency exhibits non-trivial $2D$ character with $1.24\pi$ Berry phase
and a high Fermi velocity of $6.7 \times 10^5$~ms$^{-1}$. The band structure
calculations reveal multiple nodal crossings in the vicinity of Fermi energy
$E_f$ without spin-orbit coupling (SOC). Inclusion of SOC opens a small gap in
the nodal crossings and results in nonsymmorphic symmetry enforced Dirac points
at some high symmetry points, suggesting YSi to be a symmetry enforced
topological metal.
|
Let $\alpha=(A_g,\alpha_g)_{g\in G}$ be a group-type partial action of a
connected groupoid $G$ on a ring $A=\bigoplus_{z\in G_0}A_z$ and
$B=A\star_{\alpha}G$ the corresponding partial skew groupoid ring. In the first
part of this paper we investigate the relation of several ring theoretic
properties between $A$ and $B$. For the second part, using that every Leavitt
path algebra is isomorphic to a partial skew groupoid ring obtained from a
partial groupoid action $\lambda$, we characterize when $\lambda$ is
group-type. In such a case, we obtain ring theoretic properties of Leavitt path
algebras from the results on general partial skew groupoid rings. Several
examples that illustrate the results on Leavitt path algebras are presented.
|
Keyword spotting and in particular Wake-Up-Word (WUW) detection is a very
important task for voice assistants. A very common issue of voice assistants is
that they get easily activated by background noise like music, TV or background
speech that accidentally triggers the device. In this paper, we propose a
Speech Enhancement (SE) model adapted to the task of WUW detection that aims at
increasing the recognition rate and reducing the false alarms in the presence
of these types of noises. The SE model is a fully-convolutional denoising
auto-encoder at waveform level and is trained using a log-Mel Spectrogram and
waveform reconstruction losses together with the BCE loss of a simple WUW
classification network. A new database has been purposely prepared for the task
of recognizing the WUW in challenging conditions containing negative samples
that are very phonetically similar to the keyword. The database is extended
with public databases and an exhaustive data augmentation to simulate different
noises and environments. The results obtained by concatenating the SE with
both simple and state-of-the-art WUW detectors show that the SE does not have a
negative impact on the recognition rate in quiet environments while increasing
the performance in the presence of noise, especially when the SE and WUW
detector are trained jointly end-to-end.
|
Transfer reinforcement learning aims to improve the sample efficiency of
solving unseen new tasks by leveraging experiences obtained from previous
tasks. We consider the setting where all tasks (MDPs) share the same
environment dynamic except the reward function. In this setting, the MDP
dynamic is useful knowledge to transfer, and it can be inferred with a
uniformly random policy. However, trajectories generated by a uniform random
policy are not useful for policy improvement, which severely impairs sample
efficiency. Instead, we observe that the binary MDP dynamic can be inferred
from trajectories of any policy, which avoids the need for a uniform random
policy. As the binary MDP dynamic contains the state structure shared over all
tasks, we believe it is suitable for transfer. Built on this observation, we
introduce a method to infer the binary MDP dynamic online and, at the same
time, utilize it to guide state embedding learning, which is then transferred
to new tasks. We keep state embedding learning and policy learning separate.
As a result, the learned state embedding is task- and policy-agnostic, which
makes it ideal for transfer learning. In addition, to facilitate exploration
over the state space, we propose a novel intrinsic reward based on the
inferred binary MDP dynamic. Our method can be used out of the box in
combination with model-free RL algorithms. We show two instances on the basis
of DQN and A2C. Empirical results
of intensive experiments show the advantage of our proposed method in various
transfer learning tasks.
|
Traditionally, the efficiency and effectiveness of search systems have both
been of great interest to the information retrieval community. However, an
in-depth analysis of the interaction between the response latency and users'
subjective search experience in the mobile setting has been missing so far. To
address this gap, we conduct a controlled study that aims to reveal how
response latency affects mobile web search. Our preliminary results indicate
that mobile web search users are four times more tolerant to response latency
than that reported for desktop web search users. However, when delays exceed a
certain threshold of 7-10 sec, they have a sizeable impact, and users report
feeling significantly more tense, tired, terrible, frustrated and sluggish,
all of which contribute to a worse subjective user experience.
|
Industrial plants suffer from a high degree of complexity and incompatibility
in their communication infrastructure, caused by a wild mix of proprietary
technologies. This prevents transformation towards Industry 4.0 and the
Industrial Internet of Things. Open Platform Communications Unified
Architecture (OPC UA) is a standardized protocol that addresses these problems
with uniform and semantic communication across all levels of the hierarchy.
However, its adoption in embedded field devices, such as sensors and actuators, is
still lacking due to prohibitive memory and power requirements of software
implementations. We have developed a dedicated hardware engine that offloads
processing of the OPC UA protocol and enables realization of compact and
low-power field devices with OPC UA support. As part of a proof-of-concept
embedded system we have implemented this engine in a 22 nm FDSOI technology. We
measured performance, power consumption, and memory footprint of our test chip
and compared it with a software implementation based on open62541 and a
Raspberry Pi 2B. Our OPC UA hardware engine is 50 times more energy efficient
and only requires 36 KiB of memory. The complete chip consumes only 24 mW under
full load, making it suitable for low-power embedded applications.
|
The Bogomolov multiplier $B_0(G)$ of a finite group $G$ is the subgroup of
the Schur multiplier $H^2(G,\mathbb Q/\mathbb Z)$ consisting of the cohomology
classes which vanish after restricting to any abelian subgroup of $G$. We give
a proof of a Hopf-type formula for $B_0(G)$ and derive an exact sequence for
the cohomological version of the Bogomolov multiplier. Using this exact
sequence, we provide necessary and sufficient conditions for the corresponding
inflation homomorphism to be an epimorphism and to be the zero map. We provide
some
conditions for the triviality of $B_0(G)$ for central product of groups $G$ and
show that the Bogomolov multiplier of generalized discrete Heisenberg groups is
trivial. We also give a complete characterization of groups of order $p^6, p>3$
having trivial Bogomolov multiplier.
|
Assuming the Generalized Continuum Hypothesis, this paper answers the
question: when is the tensor product of two ultrafilters equal to their
Cartesian product? It is necessary and sufficient that their Cartesian product
is an ultrafilter; that the two ultrafilters commute in the tensor product;
that for all cardinals $\lambda$, one of the ultrafilters is both
$\lambda$-indecomposable and $\lambda^+$-indecomposable; that the ultrapower
embedding associated to each ultrafilter restricts to a definable embedding of
the ultrapower of the universe associated to the other.
|
Let $G$ be a finite permutation group on $\Omega$. An ordered sequence of
elements of $\Omega$, $(\omega_1,\dots, \omega_t)$, is an irredundant base for
$G$ if the pointwise stabilizer $G_{(\omega_1,\dots, \omega_t)}$ is trivial and
no point is fixed by the stabilizer of its predecessors. If all irredundant
bases of $G$ have the same size we say that $G$ is an IBIS group. In this paper
we show that if a primitive permutation group is IBIS, then it must be almost
simple, of affine type, or of diagonal type. Moreover, we prove that a
diagonal-type primitive permutation group is IBIS if and only if it is
isomorphic to $PSL(2,2^f)\times PSL(2,2^f)$ for some $f\geq 2,$ in its diagonal
action of degree $2^f(2^{2f}-1).$
|
Gliomas are among the most aggressive and deadly brain tumors. This paper
details the proposed Deep Neural Network architecture for brain tumor
segmentation from Magnetic Resonance Images. The architecture consists of a
cascade of three Deep Layer Aggregation neural networks, where each stage
elaborates the response using the feature maps and the probabilities of the
previous stage, and the MRI channels as inputs. The neuroimaging data are part
of the publicly available Brain Tumor Segmentation (BraTS) 2020 challenge
dataset, where we evaluated our proposal in the BraTS 2020 Validation and Test
sets. In the Test set, the experimental results achieved Dice scores of
0.8858, 0.8297 and 0.7900, with Hausdorff distances of 5.32 mm, 22.32 mm and
20.44 mm for the whole tumor, tumor core and enhancing tumor, respectively.
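For reference, the Dice score used above is the standard overlap measure between binary masks, $2|A \cap B| / (|A| + |B|)$; a minimal implementation:

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|). Standard definition, shown for reference."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: conventionally a perfect match
    return 2.0 * np.logical_and(pred, target).sum() / denom
```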
|
We study a model of a thermoelectric nanojunction driven by
vibrationally-assisted tunneling. We apply the reaction coordinate formalism to
derive a master equation governing its thermoelectric performance beyond the
weak electron-vibrational coupling limit. Employing full counting statistics we
calculate the current flow, thermopower, associated noise, and efficiency
without resorting to the weak vibrational coupling approximation. We
demonstrate intricacies of the power-efficiency-precision trade-off at strong
coupling, showing that the three cannot be maximised simultaneously in our
model. Finally, we emphasise the importance of capturing non-additivity when
considering strong coupling and multiple environments, demonstrating that an
additive treatment of the environments can violate the upper bound on
thermoelectric efficiency imposed by Carnot.
|
The nuclear matrix element (NME) of the neutrinoless double-$\beta$
($0\nu\beta\beta$) decay is an essential input for determining the neutrino
effective mass, if the half-life of this decay is measured. The reliable
calculation of this NME has been a long-standing problem because of the
diversity of the predicted values of the NME depending on the calculation
method. In this paper, we focus on the shell model and the QRPA. The shell
model includes a rich set of many-particle many-hole correlations, and the
QRPA can achieve convergence of its results with respect to the extension of
the single-particle space. It is difficult for the shell model to obtain
convergence of the $0\nu\beta\beta$ NME with respect to the valence
single-particle space, while the many-body correlations of the QRPA are
insufficient for some nuclei. We propose a new method to phenomenologically
modify the results of the shell model and the QRPA, compensating for the
shortcomings of each method by complementarily using information from the
other. Extrapolations of the components of the $0\nu\beta\beta$ NME
of the shell model are made toward a very large valence single-particle space.
We introduce a modification factor to the components of the $0\nu\beta\beta$
NME of the QRPA. Our modification method gives similar values of the
$0\nu\beta\beta$ NME of the two methods for $^{48}$Ca. The NME of the
two-neutrino double-$\beta$ decay is also modified in a similar but simpler
manner, and the consistency of the two methods is improved.
|
Network softwarization has revolutionized the architecture of cellular
wireless networks. State-of-the-art container based virtual radio access
networks (vRAN) provide enormous flexibility and reduced life cycle management
costs, but they also come with prohibitive energy consumption. We argue that
for future AI-native wireless networks to be flexible and energy efficient,
there is a need for a new abstraction in network softwarization that caters for
neural network type of workloads and allows a large degree of service
composability. In this paper we present the NeuroRAN architecture, which
leverages stateful functions as a user-facing execution model, and is
complemented with virtualized resources and decentralized resource management.
We show that neural network based implementations of common transceiver
functional blocks fit the proposed architecture, and we discuss key research
challenges related to compilation and code generation, resource management,
reliability and security.
|
Semi-supervised learning through deep generative models and multi-lingual
pretraining techniques have achieved tremendous success across different
areas of NLP. Nonetheless, their development has happened in isolation, while
the combination of both could potentially be effective for tackling the
shortage of task-specific labelled data. To bridge this gap, we combine
semi-supervised deep generative models and multi-lingual pretraining to form a
pipeline for the document classification task. Compared to strong supervised
learning baselines, our semi-supervised classification framework is highly
competitive and outperforms the state-of-the-art counterparts in low-resource
settings across several languages.
|
We present Hubble Space Telescope Cosmic Origins Spectrograph (COS) UV line
spectroscopy and integral-field unit (IFU) observations of the intra-group
medium in Stephan's Quintet (SQ). SQ hosts a 30 kpc long shocked ridge
triggered by a galaxy collision at a relative velocity of 1000 km/s, where
large amounts of molecular gas coexist with a hot, X-ray emitting, plasma. COS
spectroscopy at five positions sampling the diverse environments of the SQ
intra-group medium reveals very broad (2000 km/s) Ly$\alpha$ line emission with
complex line shapes. The Ly$\alpha$ line profiles are similar to or much
broader than those of H$\beta$, [CII]$\lambda157.7\mu$m and CO~(1-0) emission.
The extreme breadth of the Ly$\alpha$ emission, compared with H$\beta$, implies
resonance scattering within the observed structure. Scattering indicates that
the neutral gas of the intra-group medium is clumpy, with a significant surface
covering factor. We observe significant variations in the Ly$\alpha$/H$\beta$
flux ratio between positions and velocity components. From the mean line ratio
averaged over positions and velocities, we estimate the effective escape
fraction of Ly$\alpha$ photons to be 10-30%. Remarkably, over more than four
orders of magnitude in temperature, the powers radiated by X-rays, Ly$\alpha$,
H$_2$, [CII] are comparable within a factor of a few, assuming that the ratio
of the Ly$\alpha$ to H$_2$ fluxes over the whole shocked intra-group medium
stays in line with that observed at those five positions. Both shocks and
mixing layers could contribute to the energy dissipation associated with a
turbulent energy cascade. Our results may be relevant for the cooling of gas at
high redshifts, where the metal content is lower than in this local system, and
a high amplitude of turbulence is more common.
|
In this paper, we study the Nisnevich sheafification
$\mathcal{H}^1_{\acute{e}t}(G)$ of the presheaf associating to a smooth scheme
the set of isomorphism classes of $G$-torsors, for a reductive group $G$. We
show that if $G$-torsors on affine lines are extended, then
$\mathcal{H}^1_{\acute{e}t}(G)$ is homotopy invariant and show that the sheaf
is unramified if and only if Nisnevich-local purity holds for $G$-torsors. We
also identify the sheaf $\mathcal{H}^1_{\acute{e}t}(G)$ with the sheaf of
$\mathbb{A}^1$-connected components of the classifying space ${\rm
B}_{\acute{e}t}G$. This establishes the homotopy invariance of the sheaves of
components as conjectured by Morel. It moreover provides a computation of the
sheaf of $\mathbb{A}^1$-connected components in terms of unramified $G$-torsors
over function fields whenever Nisnevich-local purity holds for $G$-torsors.
|
In this paper, we consider structural change in a class of discrete valued
time series, in which the true conditional distribution of the observations is
assumed to be unknown.
The conditional mean of the process depends on a parameter $\theta^*$ which
may change over time.
We provide sufficient conditions for the consistency and the asymptotic
normality of the Poisson quasi-maximum likelihood estimator (QMLE) of the
model.
We consider an epidemic change-point detection and propose a test statistic
based on the QMLE of the parameter. Under the null hypothesis of a constant
parameter (no change), the test statistic converges to a distribution obtained
from a difference of two Brownian bridges. The test statistic diverges to
infinity under the epidemic alternative, which establishes that the proposed
procedure is consistent in power. The effectiveness of the proposed procedure
is illustrated by simulated and real data examples.
|
We classify rank two vector bundles on a del Pezzo threefold $X$ of Picard
rank one whose projectivizations are weak Fano. We also investigate the moduli
spaces of such vector bundles when $X$ is of degree five, especially whether
they are smooth, irreducible, or fine.
|
In this paper we study elimination of imaginaries in some classes of pure
ordered abelian groups. For the class of ordered abelian groups with bounded
regular rank (equivalently with finite spines) we obtain weak elimination of
imaginaries once we add sorts for the quotient groups $\Gamma/ \Delta$ for each
definable convex subgroup $\Delta$, and sorts for the quotient groups $\Gamma/
\Delta+ l\Gamma$ where $\Delta$ is a definable convex subgroup and $l \in
\mathbb{N}_{\geq 2}$. We refer to these sorts as the \emph{quotient sorts}. For
the dp-minimal case we obtain a complete elimination of imaginaries, if we also
add constants to distinguish the cosets of $\Delta+n\Gamma$ in $\Gamma$, where
$\Delta$ is a definable convex subgroup and $n \in \mathbb{N}_{\geq 2}$.
|
The main theme of the paper is the detailed discussion of the renormalization
of the quantum field theory comprising two interacting scalar fields. The
potential of the model is the fourth-order homogeneous polynomial of the
fields, symmetric with respect to the transformation
$\phi_{i}\rightarrow{-\phi_{i}}$. We determine the Feynman rules for the model
and then we present a detailed discussion of the renormalization of the theory
at one loop. Next, we derive the one-loop renormalization group equations for
the running masses and coupling constants. At the two-loop level, we use the
FeynArts package of Mathematica to generate the two-loop Feynman diagrams and
calculate the setting-sun diagram in detail.
|
This paper provides a theoretical and numerical comparison of classical first
order splitting methods for solving smooth convex optimization problems and
cocoercive equations. From a theoretical point of view, we compare convergence
rates of gradient descent, forward-backward, Peaceman-Rachford, and
Douglas-Rachford algorithms for minimizing the sum of two smooth convex
functions when one of them is strongly convex. A similar comparison is given in
the more general cocoercive setting under the presence of strong monotonicity
and we observe that the convergence rates in optimization are strictly better
than the corresponding rates for cocoercive equations for some algorithms. We
obtain improved rates with respect to the literature in several instances
exploiting the structure of our problems. From a numerical point of view, we
verify our theoretical results by implementing and comparing previous
algorithms in well established signal and image inverse problems involving
sparsity. We replace the widely used $\ell_1$ norm by the Huber loss and we
observe that fully proximal-based strategies have numerical and theoretical
advantages with respect to methods using gradient steps. In particular,
Peaceman-Rachford is the best-performing algorithm in our examples.
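As an illustration of the fully proximal-based strategy with the Huber loss, a sketch of forward-backward iterations for $\min_x \frac{1}{2}\|Ax-b\|^2 + \lambda\, h(x)$ with $h$ the Huber function; the problem instance and parameters are placeholders:

```python
import numpy as np

def huber_prox(v, delta, step):
    """Proximal operator (with parameter `step`) of the Huber function
    h(t) = t^2/2 for |t| <= delta, and delta*|t| - delta^2/2 otherwise,
    applied elementwise."""
    small = np.abs(v) <= delta * (1.0 + step)
    return np.where(small, v / (1.0 + step), v - step * delta * np.sign(v))

def forward_backward(A, b, delta=0.1, lam=1.0, n_iter=500):
    """Forward-backward iterations for min 0.5*||Ax - b||^2 + lam*h(x):
    a gradient step on the smooth quadratic followed by a prox step on
    the Huber term. Problem sizes and parameters are illustrative."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 1/L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = huber_prox(x - gamma * grad, delta, lam * gamma)
    return x
```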
|
In a binary classification problem where the goal is to fit an accurate
predictor, the presence of corrupted labels in the training data set may create
an additional challenge. However, in settings where likelihood maximization is
poorly behaved (for example, if positive and negative labels are perfectly
separable), a small fraction of corrupted labels can improve performance by
ensuring robustness. In this work, we establish that in such settings,
corruption acts as a form of regularization, and we compute precise upper
bounds on estimation error in the presence of corruptions. Our results suggest
that the presence of corrupted data points is beneficial only up to a small
fraction of the total sample, scaling with the square root of the sample size.
|
We systematically investigate the complexity of counting subgraph patterns
modulo fixed integers. For example, it is known that the parity of the number
of $k$-matchings can be determined in polynomial time by a simple reduction to
the determinant. We generalize this to an $n^{f(t,s)}$-time algorithm to
compute modulo $2^t$ the number of subgraph occurrences of patterns that are
$s$ vertices away from being matchings. This shows that the known
polynomial-time cases of subgraph detection (Jansen and Marx, SODA 2015) carry
over into the setting of counting modulo $2^t$.
Complementing our algorithm, we also give a simple and self-contained proof
that counting $k$-matchings modulo odd integers $q$ is Mod_q-W[1]-complete and
prove that counting $k$-paths modulo $2$ is Parity-W[1]-complete, answering an
open question by Bj\"orklund, Dell, and Husfeldt (ICALP 2015).
|
We study the ground state for many interacting bosons in a double-well
potential, in a joint limit where the particle number and the distance between
the potential wells both go to infinity. Two single-particle orbitals (one for
each well) are macroscopically occupied, and we are concerned with deriving the
corresponding effective Bose-Hubbard Hamiltonian. We prove (i) an energy
expansion, including the two-mode Bose-Hubbard energy and two independent
Bogoliubov corrections (one for each potential well), (ii) a variance bound for
the number of particles falling inside each potential well. The latter is a
signature of a correlated ground state in that it violates the central limit
theorem.
|
In this paper, we propose a method for ensembling the outputs of multiple
object detectors for improving detection performance and precision of bounding
boxes on image data. We further extend it to video data by proposing a
two-stage tracking-based scheme for detection refinement. The proposed method
can be used as a standalone approach for improving object detection
performance, or as a part of a framework for faster bounding box annotation in
unseen datasets, assuming that the objects of interest are those present in
some common public datasets.
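A toy sketch of one generic box-fusion rule (greedy IoU grouping with score-weighted averaging); this illustrates the general ensembling idea, not the paper's exact scheme:

```python
import numpy as np

def iou(a, b):
    # Boxes as [x1, y1, x2, y2].
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def ensemble_boxes(detections, iou_thr=0.5):
    """Greedily group boxes from all detectors by IoU overlap and
    replace each group by its score-weighted average box."""
    boxes = sorted(detections, key=lambda d: -d["score"])
    fused, used = [], [False] * len(boxes)
    for i, d in enumerate(boxes):
        if used[i]:
            continue
        group, used[i] = [d], True
        for j in range(i + 1, len(boxes)):
            if not used[j] and iou(d["box"], boxes[j]["box"]) >= iou_thr:
                group.append(boxes[j])
                used[j] = True
        w = np.array([g["score"] for g in group])
        stacked = np.array([g["box"] for g in group])
        fused.append({"box": (w[:, None] * stacked).sum(0) / w.sum(),
                      "score": w.mean()})
    return fused
```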
|
A major issue with the increasingly popular robust optimization is its
tendency to produce overly conservative solutions. This paper addresses this
issue by proposing a new parameterized robust criterion that is flexible
enough to offer
fine-tuned control of conservatism. The proposed criterion also leads to a new
approach for competitive ratio analysis, which can reduce the complexity of
analysis to the level for the minimax regret analysis. The properties of this
new criterion are studied, which facilitates its applications, and validates
the new approach for competitive ratio analysis. Finally, the criterion is
applied to the well-studied robust one-way trading problem to demonstrate its
potential in controlling conservatism and reducing the complexity of
competitive ratio analysis.
|
The classic string indexing problem is to preprocess a string S into a
compact data structure that supports efficient pattern matching queries.
Typical queries include existential queries (decide if the pattern occurs in
S), reporting queries (return all positions where the pattern occurs), and
counting queries (return the number of occurrences of the pattern). In this
paper we consider a variant of string indexing, where the goal is to compactly
represent the string such that given two patterns P1 and P2 and a gap range
[\alpha,\beta] we can quickly find the consecutive occurrences of P1 and P2
with distance in [\alpha,\beta], i.e., pairs of occurrences immediately
following each other and with distance within the range. We present data
structures that use $\tilde{O}(n)$ space and query time
$\tilde{O}(|P1|+|P2|+n^{2/3})$ for existence and counting and
$\tilde{O}(|P1|+|P2|+n^{2/3}\,occ^{1/3})$ for reporting. We complement this
with a conditional lower bound based on the set intersection problem, showing
that any solution using $\tilde{O}(n)$ space must use
$\tilde{\Omega}(|P1|+|P2|+\sqrt{n})$ query time. To obtain our results we
develop new techniques and ideas of independent interest, including a new
suffix tree decomposition and hardness of a variant of the set intersection
problem.
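To make the query concrete, a naive baseline under one natural reading of consecutive occurrences (each occurrence of P1 paired with the next occurrence of P2, with no later P1 occurrence in between); the paper's data structures are, of course, far more efficient:

```python
from bisect import bisect_right

def consecutive_occurrences(occ1, occ2, alpha, beta):
    """Naive reporting baseline, included only to make the problem
    statement concrete. occ1/occ2 are sorted occurrence positions of
    P1/P2. Pair each P1 occurrence with the next P2 occurrence, keep
    pairs with no later P1 occurrence in between, then filter by the
    gap range [alpha, beta]. (One reading of the definition.)"""
    out = []
    for idx, i in enumerate(occ1):
        k = bisect_right(occ2, i)
        if k == len(occ2):
            break
        j = occ2[k]
        nxt = occ1[idx + 1] if idx + 1 < len(occ1) else None
        if nxt is not None and nxt < j:
            continue          # a later P1 occurrence is closer to j
        if alpha <= j - i <= beta:
            out.append((i, j))
    return out
```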
|
We study asymptotic behavior of solutions of the first-order linear consensus
model with delay and anticipation, which is a system of neutral delay
differential equations. We consider both the transmission-type and
reaction-type delay that are motivated by modeling inputs. Studying the
simplified case of two agents, we show that, depending on the parameter regime,
anticipation may have both a stabilizing and destabilizing effect on the
solutions. In particular, we demonstrate numerically that moderate level of
anticipation generically promotes convergence towards consensus, while too high
level disturbs it. Motivated by this observation, we derive sufficient
conditions for asymptotic consensus in multi-agent systems, which are
explicit in the delay length and anticipation level, and independent of the
number of agents. The proofs are based on the construction of
suitable Lyapunov-type functionals.
|