We explicitly construct the Dirichlet series
$$L_{\mathrm{Tam}}(s):=\sum_{m=1}^{\infty}\frac{P_{\mathrm{Tam}}(m)}{m^s},$$
where $P_{\mathrm{Tam}}(m)$ is the proportion of elliptic curves $E/\mathbb{Q}$
in short Weierstrass form with Tamagawa product $m.$ Although there are no
$E/\mathbb{Q}$ with everywhere good reduction, we prove that the proportion
with trivial Tamagawa product is $P_{\mathrm{Tam}}(1)=0.5053\dots.$ As a
corollary, we find that $L_{\mathrm{Tam}}(-1)=1.8193\dots$ is the average
Tamagawa product for elliptic curves over $\mathbb{Q}.$ We give an application
of these results to canonical and Weil heights.
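Since $L_{\mathrm{Tam}}(s)=\sum_{m\ge 1}P_{\mathrm{Tam}}(m)\,m^{-s}$, the evaluation at $s=-1$ unwinds to
$$L_{\mathrm{Tam}}(-1)=\sum_{m=1}^{\infty} m\,P_{\mathrm{Tam}}(m),$$
which is precisely the expected value of the Tamagawa product with respect to the proportions $P_{\mathrm{Tam}}(m)$.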
|
This article focuses on a preaveraging description of polymer nonequilibrium
stretching, where a single polymer undergoes a transient process from
equilibrium to nonequilibrium steady state by pulling one chain end. The
preaveraging method combined with mode analysis reduces the original Langevin
equation to a simplified form for both a stretched steady state and an
equilibrium state, even in the presence of self-avoiding repulsive interactions
spanning a long range. However, the transient stretching process exhibits the
evolution of a hierarchical regime structure, that is, a qualitative temporal
change in the probability distributions assumed in preaveraging. We investigate
the preaveraging method for evolution of the regime structure with
consideration of the nonequilibrium work relations and deviations from the
fluctuation-dissipation relation.
|
We consider an extension of the classical capital accumulation model, and
present an example in which the Hamilton-Jacobi-Bellman (HJB) equation is
neither necessary nor sufficient for a function to be the value function. Next,
we present assumptions under which the HJB equation becomes a necessary and
sufficient condition for a function to be the value function, and building on
this result, we propose a new method for solving the original problem via the
solution of the HJB equation. Our assumptions are mild enough that many
macroeconomic growth models satisfy them. Therefore, our results ensure that
the solution of the HJB equation is rigorously the value function in many
macroeconomic models, and they provide a new solution method for these models.
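For concreteness, a canonical instance (our generic notation, not this paper's specific assumptions) is the stationary HJB equation of the one-sector growth model with utility $u$, discount rate $\rho$, and production function $f$:
$$\rho V(k)=\max_{c\ge 0}\Bigl\{u(c)+V'(k)\bigl(f(k)-c\bigr)\Bigr\},$$
where the results above concern when a function $V$ solving this equation is in fact the value function.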
|
We present, here, advanced DFT-NEGF techniques that we have implemented in
our ATOmistic MOdelling Solver, ATOMOS, to explore transport in novel materials
and devices, in particular in van der Waals heterojunction transistors. We
describe our methodologies using plane-wave DFT, followed by a Wannierization
step, and linear-combination-of-atomic-orbitals DFT, which lead to an orthogonal
and a non-orthogonal NEGF model, respectively. We then describe in detail our
non-orthogonal NEGF implementation, including the Sancho-Rubio algorithm and
electron-phonon scattering within a non-orthogonal framework. We also present
our methodology to extract electron-phonon couplings from first principles and
include them in our transport simulations. Finally, we apply our methods
towards the exploration of novel 2D materials and devices. This includes 2D
material selection and the Dynamically-Doped FET for ultimately scaled MOSFETs,
the exploration of vdW TFETs, in particular the HfS2/WSe2 TFET that could
achieve high on-current levels, and the study of Schottky-barrier height and
transport through a metal-semiconductor WTe2/WS2 vdW junction transistor.
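As an illustration of the lead-calculation step, here is a minimal sketch of the Sancho-Rubio decimation iteration generalized to a non-orthogonal basis (a hypothetical rendition, not the ATOMOS code; the function name, tolerances, and block conventions are our own):

```python
import numpy as np

def surface_greens_function(E, H00, H01, S00, S01, eta=1e-6, tol=1e-10):
    """Sancho-Rubio decimation for a semi-infinite lead in a non-orthogonal
    basis: returns the surface block of [(E + i*eta) S - H]^{-1}.
    A hypothetical sketch; H00/S00 and H01/S01 are the intra- and
    inter-principal-layer Hamiltonian and overlap blocks."""
    z = E + 1j * eta
    eps_s = z * S00 - H00              # surface block (accumulates self-energy)
    eps = eps_s.copy()                 # bulk block
    alpha = z * S01 - H01              # coupling to the next layer
    beta = z * S01.conj().T - H01.conj().T
    while np.linalg.norm(alpha) > tol:
        g = np.linalg.inv(eps)
        agb, bga = alpha @ g @ beta, beta @ g @ alpha
        eps_s = eps_s - agb
        eps = eps - agb - bga
        alpha = alpha @ g @ alpha      # effective couplings decay quadratically
        beta = beta @ g @ beta
    return np.linalg.inv(eps_s)
```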
|
Typicality arguments attempt to use the Copernican Principle to draw
conclusions about the cosmos and presently unknown conscious beings within it.
The most notorious is the Doomsday Argument, which purports to constrain
humanity's future from its current lifespan alone. These arguments rest on a
likelihood calculation that penalizes models in proportion to the number of
distinguishable observers. I argue that such reasoning leads to solipsism, the
belief that one is the only being in the world, and is therefore unacceptable.
Using variants of the "Sleeping Beauty" thought experiment as a guide, I
present a framework for evaluating observations in a large cosmos: Fine
Graining with Auxiliary Indexicals (FGAI). FGAI requires the construction of
specific models of physical outcomes and observations. Valid typicality
arguments then emerge from the combinatorial properties of third-person
physical microhypotheses. Indexical (observer-relative) facts do not directly
constrain physical theories. Instead they serve to weight different provisional
evaluations of credence. These weights define a probabilistic reference class
of locations. As indexical knowledge changes, the weights shift. I show that
the self-applied Doomsday Argument fails in FGAI, even though it can work for
an external observer. I also discuss how FGAI could handle observations in
large universes with Boltzmann brains.
|
We propose a novel keypoint voting scheme based on intersecting spheres, which
is more accurate than existing schemes and allows for a smaller set of more
disperse keypoints. The scheme is based upon the distance between points, which
as a 1D quantity can be regressed more accurately than the 2D and 3D vector and
offset quantities regressed in previous work, yielding more accurate keypoint
localization. The scheme forms the basis of the proposed RCVPose method for 6
DoF pose estimation of 3D objects in RGB-D data, which is particularly
effective at handling occlusions. A CNN is trained to estimate the distance
between the 3D point corresponding to the depth mode of each RGB pixel, and a
set of 3 disperse keypoints defined in the object frame. At inference, a sphere
centered at each 3D point is generated, of radius equal to this estimated
distance. The surfaces of these spheres vote to increment a 3D accumulator
space, the peaks of which indicate keypoint locations. The proposed radial
voting scheme is more accurate than previous vector or offset schemes, and is
robust to disperse keypoints. Experiments demonstrate RCVPose to be highly
accurate and competitive, achieving state-of-the-art results on the LINEMOD
(99.7%) and YCB-Video (97.2%) datasets, and notably scoring +7.9% higher (71.1%)
than previous methods on the challenging Occlusion LINEMOD dataset.
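To make the radial voting concrete, here is a minimal sketch of the sphere-surface accumulator vote (a simplified illustration with our own voxelization choices, not the authors' implementation):

```python
import numpy as np

def radial_vote(points, radii, grid_min, voxel, dims, shell=1.0):
    """Sphere-surface voting into a 3D accumulator (a sketch).
    points: (N, 3) scene points; radii: (N,) regressed distances;
    peaks of the returned accumulator indicate keypoint locations."""
    acc = np.zeros(dims, dtype=np.int32)
    idx = np.indices(dims).reshape(3, -1).T              # all voxel indices
    centers = np.asarray(grid_min) + (idx + 0.5) * voxel # voxel centers
    flat = acc.reshape(-1)
    for p, r in zip(points, radii):
        d = np.linalg.norm(centers - p, axis=1)
        flat[np.abs(d - r) < shell * voxel] += 1         # thin spherical shell
    return acc   # keypoint estimate: np.unravel_index(acc.argmax(), dims)
```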
|
The ability to label and track physical objects that are assets in digital
representations of the world is foundational to many complex systems. Simple,
yet powerful methods such as bar- and QR-codes have been highly successful,
e.g. in the retail space, but the lack of security, limited information content
and impossibility of seamless integration with the environment have prevented a
large-scale linking of physical objects to their digital twins. This paper
proposes to link digital assets created through BIM with their physical
counterparts using fiducial markers with patterns defined by Cholesteric
Spherical Reflectors (CSRs), selective retroreflectors produced using liquid
crystal self-assembly. The markers leverage the ability of CSRs to encode
information that is easily detected and read with computer vision while
remaining practically invisible to the human eye. We analyze the potential of a
CSR-based infrastructure from the perspective of BIM, critically reviewing the
outstanding challenges in applying this new class of functional materials, and
we discuss extended opportunities arising in assisting autonomous mobile robots
to reliably navigate human-populated environments, as well as in augmented
reality.
|
During the preparatory phase of the International Linear Collider (ILC)
project, all technical development and engineering design needed for the start
of ILC construction must be completed, in parallel with intergovernmental
discussion of governance and sharing of responsibilities and cost. The ILC
Preparatory Laboratory (Pre-lab) is conceived to execute the technical and
engineering work and to assist the intergovernmental discussion by providing
relevant information upon request. It will be based on a worldwide partnership
among laboratories with a headquarters hosted in Japan. This proposal, prepared
by the ILC International Development Team and endorsed by the International
Committee for Future Accelerators, describes an organisational framework and
work plan for the Pre-lab. Elaboration, modification and adjustment should be
introduced for its implementation, in order to incorporate requirements arising
from the physics community, laboratories, and governmental authorities
interested in the ILC.
|
Music source separation (MSS) is the task of separating a music piece into
individual sources, such as vocals and accompaniment. Recently, neural network
based methods have been applied to address the MSS problem, and can be
categorized into spectrogram-based and time-domain-based methods. However,
there has been little research on using the complementary information of
spectrogram and time-domain inputs for MSS. In this article, we propose a
CatNet framework that concatenates a UNet separation branch using the
spectrogram as input and a WavUNet separation branch using the time-domain
waveform as input for MSS. We propose an end-to-end and fully differentiable
system that incorporates the spectrogram calculation into CatNet. In addition,
we propose a novel mix-audio data augmentation method that randomly mixes audio
segments from the same source to form augmented audio segments for training.
Our proposed CatNet MSS system achieves a state-of-the-art vocals separation
source-to-distortion ratio (SDR) of 7.54 dB, outperforming MMDenseNet (6.57 dB)
when evaluated on the MUSDB18 dataset.
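As a sketch of the mix-audio idea (our own minimal rendition under the stated assumption that both segments come from the same source type, e.g. two vocal segments; the random-gain scheme is our choice):

```python
import numpy as np

def mix_audio_augment(seg_a, seg_b, rng):
    """Mix two waveform segments drawn from the same source type into one
    augmented training segment (a hypothetical sketch)."""
    g = rng.uniform(0.0, 1.0)
    return g * seg_a + (1.0 - g) * seg_b   # still a valid 'vocals' target

# usage: the corresponding mixture input is built by adding the equally
# mixed accompaniment segments, keeping input/target pairs consistent
rng = np.random.default_rng(0)
augmented = mix_audio_augment(np.zeros(44100), np.zeros(44100), rng)
```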
|
Laser Doppler holography was introduced as a full-field imaging technique to
measure blood flow in the retina and choroid with an as yet unrivaled temporal
resolution. We here investigate separating the different contributions to the
power Doppler signal in order to isolate the flow waveforms of vessels in the
posterior pole of the human eye. Distinct flow behaviors are found in retinal
arteries and veins with seemingly interrelated waveforms. We demonstrate a full
field mapping of the local resistivity index, and the possibility to perform
unambiguous identification of retinal arteries and veins on the basis of their
systolodiastolic variations. Finally we investigate the arterial flow waveforms
in the retina and choroid and find synchronous and similar waveforms, although
with a lower pulsatility in choroidal vessels. This work demonstrates the
potential held by laser Doppler holography to study ocular hemodynamics in
healthy and diseased eyes.
|
Recent worldwide events shed light on the need of human-centered systems
engineering in the healthcare domain. These systems must be prepared to evolve
quickly but safely, according to unpredicted environments and ever-changing
pathogens that spread ruthlessly. Such scenarios suffocate hospitals'
infrastructure and disable healthcare systems that are not prepared to deal
with unpredicted environments without costly re-engineering. In the face of
these challenges, we offer the SA-BSN -- Self-Adaptive Body Sensor Network --
prototype to explore the highly dynamic monitoring of a patient's health
status. The exemplar is focused on self-adaptation and comes with scenarios
that challenge the interplay between system reliability and battery
consumption, both of which are reported after each execution. Also, we
provide: (i) a noise injection mechanism, (ii) file-based configuration of
patient profiles, (iii) six healthcare sensor simulations, and (iv) an
extensible/reusable controller implementation for self-adaptation. The
artifact is implemented in ROS (Robot Operating System), which embraces
principles such as ease of use and relies on the support of an active
open-source community.
|
This paper investigates the theory of robustness against adversarial attacks.
We focus on randomized classifiers (\emph{i.e.} classifiers that output random
variables) and provide a thorough analysis of their behavior through the lens
of statistical learning theory and information theory. To this aim, we
introduce a new notion of robustness for randomized classifiers, enforcing
local Lipschitzness using probability metrics. Equipped with this definition,
we make two new contributions. The first one consists in devising a new upper
bound on the adversarial generalization gap of randomized classifiers. More
precisely, we devise bounds on the generalization gap and the adversarial gap
(\emph{i.e.} the gap between the risk and the worst-case risk under attack) of
randomized classifiers. The second contribution presents a simple yet
efficient noise injection method to design robust randomized classifiers. We
show that our results are applicable to a wide range of machine learning models
under mild hypotheses. We further corroborate our findings with experimental
results using deep neural networks on standard image datasets, namely CIFAR-10
and CIFAR-100. All the robust models we trained can simultaneously achieve
state-of-the-art accuracy (over $0.82$ clean accuracy on CIFAR-10) and enjoy
\emph{guaranteed} robust accuracy bounds ($0.45$ against $\ell_2$ adversaries
with magnitude $0.5$ on CIFAR-10).
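A minimal sketch of the noise injection idea (ours, not the paper's exact construction): a deterministic classifier becomes a randomized one by perturbing its input with Gaussian noise, so its output is a random variable.

```python
import numpy as np

def randomized_predict(base_logits_fn, x, sigma=0.25, n_samples=100, rng=None):
    """Randomized classifier by Gaussian noise injection (a sketch).
    base_logits_fn: any deterministic classifier returning class logits.
    Returns the empirical class-probability vector of the random output."""
    rng = rng or np.random.default_rng()
    votes = np.zeros(base_logits_fn(x).shape[-1])
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        votes[np.argmax(base_logits_fn(noisy))] += 1
    return votes / n_samples
```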
|
We study the performance of quantum annealing for two sets of problems,
namely, 2-satisfiability (2-SAT) problems represented by Ising-type
Hamiltonians, and nonstoquastic problems which are obtained by adding extra
couplings to the 2-SAT problem Hamiltonians. In addition, we add to the
transverse Ising-type Hamiltonian used for quantum annealing a third term, the
trigger Hamiltonian with ferromagnetic or antiferromagnetic couplings, which
vanishes at the beginning and end of the annealing process. We also analyze
some problem instances using the energy spectrum, the average energy, or the
overlap of the evolving state with the instantaneous low-lying eigenstates of
the Hamiltonian, and we identify some non-adiabatic mechanisms which can
enhance the performance of quantum annealing.
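Schematically (our notation, with total annealing time $T$; the exact schedules are an assumption here), the three-term Hamiltonian has the form
$$H(t)=\Bigl(1-\tfrac{t}{T}\Bigr)H_{\mathrm{transverse}}+\tfrac{t}{T}\,H_{\mathrm{2\text{-}SAT}}+\tfrac{t}{T}\Bigl(1-\tfrac{t}{T}\Bigr)H_{\mathrm{trigger}},$$
where the envelope of the trigger term vanishes at $t=0$ and $t=T$ by construction.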
|
We analyze the citation time-series of manuscripts in three different fields
of science: physics, social science, and technology. The time-series of the
yearly number of citations, namely the citation trajectories, diffuse
anomalously: their variance scales with time as $\propto t^{2H}$, where $H\neq
1/2$. We provide a detailed analysis of the various factors that lead to the
anomalous behavior: non-stationarity, long-ranged correlations, and a
fat-tailed increment distribution. The papers exhibit a high degree of
heterogeneity across the various fields, as the statistics of the most highly
cited papers are fundamentally different from those of the less cited ones.
The citation data are shown to be highly correlated and non-stationary, and
all papers, except the small percentage with a high number of citations, die
out in time.
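A minimal sketch of how such a scaling exponent can be estimated (our own illustration, assuming an ensemble of trajectories with $\mathrm{var}(t)\propto t^{2H}$; the paper's actual estimator may differ):

```python
import numpy as np

def estimate_H(trajectories):
    """Estimate the anomalous-diffusion exponent H from the ensemble
    variance of citation trajectories (a sketch; rows are papers,
    columns are years since publication)."""
    var_t = trajectories.var(axis=0)
    t = np.arange(1, trajectories.shape[1] + 1)
    slope, _ = np.polyfit(np.log(t), np.log(var_t), 1)
    return slope / 2.0        # var ~ t^{2H}  =>  H = slope / 2
```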
|
The Ensemble Kalman inversion (EKI), proposed by Iglesias et al. for the
solution of Bayesian inverse problems of type $y=A u^\dagger +\varepsilon$,
with $u^\dagger$ being an unknown parameter and $y$ a given datum, is a
powerful tool usually derived from a sequential Monte Carlo point of view. It
describes the dynamics of an ensemble of particles $\{u^j(t)\}_{j=1}^J$, whose
initial empirical measure is sampled from the prior, evolving over an
artificial time $t$ towards an approximate solution of the inverse problem.
Using spectral techniques, we provide a complete description of the
deterministic dynamics of EKI and their asymptotic behavior in parameter space.
In particular, we analyze the dynamics of deterministic EKI and mean-field EKI.
We demonstrate that the Bayesian posterior can only be recovered with the
mean-field limit and not with finite sample sizes or deterministic EKI.
Furthermore, we show that -- even in the deterministic case -- residuals in
parameter space do not decrease monotonically in the Euclidean norm, and we
suggest a problem-adapted norm in which monotonicity can be proved. Finally, we
derive a
system of ordinary differential equations governing the spectrum and
eigenvectors of the covariance matrix.
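For orientation, a minimal sketch of the deterministic (noise-free) EKI particle dynamics for the linear model $y=Au$ (our own explicit Euler discretization, not the paper's exact scheme; $\Gamma$ denotes the noise covariance):

```python
import numpy as np

def eki_step(U, A, y, Gamma_inv, dt=0.01):
    """One explicit Euler step of the deterministic EKI dynamics.
    U: (d, J) particle ensemble; Gamma_inv: inverse noise covariance."""
    mean = U.mean(axis=1, keepdims=True)
    D = U - mean                                  # deviations from the mean
    C = D @ D.T / U.shape[1]                      # empirical covariance C(u)
    misfit = A.T @ Gamma_inv @ (A @ U - y[:, None])
    return U - dt * C @ misfit                    # preconditioned gradient flow

# usage sketch: iterate eki_step until the ensemble collapses toward a
# minimizer of the least-squares functional
```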
|
The LIGO and Virgo gravitational-wave detectors carried out the first half of
their third observing run from April through October 2019. During this period,
they detected 39 new signals from the coalescence of black holes or neutron
stars, more than quadrupling the total number of detected events. These
detections included some unprecedented sources, like a pair of black holes with
unequal masses (GW190412), a massive pair of neutron stars (GW190425), a black
hole potentially in the supernova pair-instability mass gap (GW190521), and
either the lightest black hole or the heaviest neutron star known to date
(GW190814). Collectively, the full set of signals provided astrophysically
valuable information about the distributions of compact objects and their
evolution throughout cosmic history. It also enabled more constraining and
diverse tests of general relativity, including new probes of the fundamental
nature of black holes. This review summarizes the highlights of these results
and their implications.
|
Spin sum rules depend on the choice of a pivot, i.e. the point about which
the angular momentum is defined, usually identified with the center of the
nucleon. The latter is, however, not unique in a relativistic theory, which has
led to apparently contradictory results in the literature. Using the recently
developed phase-space approach, we compute for the first time the contribution
associated with the motion of the center of the nucleon, and we derive a
general spin sum rule which reduces to established results after appropriate
choices for the pivot and the spin component.
|
Recently developed large pre-trained language models, e.g., BERT, have
achieved remarkable performance in many downstream natural language processing
applications. These pre-trained language models often contain hundreds of
millions of parameters and suffer from high computation and latency in
real-world applications. It is desirable to reduce the computation overhead of
the models for fast training and inference while keeping the model performance
in downstream applications. Several lines of work utilize knowledge
distillation to compress the teacher model into a smaller student model.
However, they usually discard the teacher's knowledge at inference time. In
contrast, in
this paper, we propose RefBERT to leverage the knowledge learned from the
teacher, i.e., facilitating the pre-computed BERT representation on the
reference sample and compressing BERT into a smaller student model. To
support our proposal, we provide theoretical justification for the loss
function and the usage of reference samples. Significantly, the theoretical
result shows that including the pre-computed teacher's representations on the
reference samples indeed increases the mutual information in learning the
student model. Finally, we conduct the empirical evaluation and show that our
RefBERT can beat the vanilla TinyBERT by over 8.1\% and achieves more than
94\% of the performance of BERT$_{\rm BASE}$ on the GLUE benchmark. Meanwhile,
RefBERT is
7.4x smaller and 9.5x faster on inference than BERT$_{\rm BASE}$.
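A schematic of the reference-based objective (our own simplification; the real RefBERT loss and architecture differ in detail):

```python
import numpy as np

def refbert_style_loss(student_repr, teacher_ref_repr, task_loss, alpha=0.5):
    """Hypothetical sketch: the usual task loss plus a term aligning the
    student with the teacher's *pre-computed* representation of a reference
    sample, so no teacher forward pass is needed at inference time."""
    align = float(np.mean((student_repr - teacher_ref_repr) ** 2))
    return task_loss + alpha * align
```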
|
As machine learning (ML) models become more widely deployed in high-stakes
applications, counterfactual explanations have emerged as key tools for
providing actionable model explanations in practice. Despite the growing
popularity of counterfactual explanations, a deeper understanding of these
explanations is still lacking. In this work, we systematically analyze
counterfactual explanations through the lens of adversarial examples. We do so
by formalizing the similarities between popular counterfactual explanation and
adversarial example generation methods, identifying conditions under which they
are equivalent. We then derive upper bounds on the distances between the
solutions output by counterfactual explanation and adversarial example
generation methods, which we validate on several real-world data sets. By
establishing these theoretical and empirical similarities between
counterfactual explanations and adversarial examples, our work raises
fundamental questions about the design and development of existing
counterfactual explanation algorithms.
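Schematically (our notation, not the paper's exact formalization), both families of methods solve a perturbation problem of the form
$$\min_{\delta}\;\|\delta\|\quad\text{s.t.}\quad f(x+\delta)=y_{\mathrm{target}},$$
interpreted as a counterfactual explanation when $x+\delta$ is returned as actionable recourse, and as an adversarial example when the same perturbation is used to fool the model; the bounds above quantify how far the two solutions can drift apart.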
|
Mahalanobis distance between treatment group and control group covariate
means is often adopted as a balance criterion when implementing a
rerandomization strategy. However, this criterion may not work well for
high-dimensional cases because it balances all orthogonalized covariates
equally. Here, we propose leveraging principal component analysis (PCA) to
identify proper subspaces in which Mahalanobis distance should be calculated.
Not only can PCA effectively reduce the dimensionality for high-dimensional
cases while capturing most of the information in the covariates, but it also
provides computational simplicity by focusing on the top orthogonal components.
We show that our PCA rerandomization scheme has desirable theoretical
properties on balancing covariates and thereby on improving the estimation of
average treatment effects. We also show that this conclusion is supported by
numerical studies using both simulated and real examples.
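A minimal sketch of the scheme as we read it (our own code; the acceptance threshold and number of components are design parameters, not the paper's values): project the covariates onto the top principal components, then rerandomize until the Mahalanobis distance between group means in that subspace falls below a threshold.

```python
import numpy as np

def pca_rerandomize(X, k=5, threshold=1.0, max_iter=100000, rng=None):
    """Rerandomization with the Mahalanobis balance criterion computed in
    the top-k PCA subspace (a sketch). X: (n, p) covariates; returns a
    boolean treatment assignment with n//2 treated units."""
    rng = rng or np.random.default_rng()
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T                            # scores on top-k components
    Sinv = np.linalg.inv(np.cov(Z, rowvar=False))
    n = len(Z)
    for _ in range(max_iter):
        w = rng.permutation(n) < n // 2          # half treated, half control
        diff = Z[w].mean(axis=0) - Z[~w].mean(axis=0)
        if (n / 4) * diff @ Sinv @ diff < threshold:  # Mahalanobis criterion
            return w
    raise RuntimeError("no acceptable assignment found")
```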
|
We prove that if a family of compact connected sets in the plane has the
property that every three members of it are intersected by a line, then there
are three lines intersecting all the sets in the family. This answers a
question of Eckhoff from 1993, who proved that, under the same condition, there
are four lines intersecting all the sets. In fact, we prove a colorful version
of this result, under weakened conditions on the sets.
A triple of sets $A,B,C$ in the plane is said to be {\em tight} if
$\textrm{conv}(A\cup B)\cap \textrm{conv}(A\cup C)\cap \textrm{conv}(B\cup
C)\neq \emptyset.$ This notion was first introduced by Holmsen, who showed
that if $\mathcal{F}$ is a family of compact convex sets in the plane in which
every three sets form a tight triple, then there is a line intersecting at
least $\frac{1}{8}|\mathcal{F}|$ members of $\mathcal{F}$. Here we prove that
if $\mathcal{F}_1,\dots,\mathcal{F}_6$ are families of compact connected sets
in the plane such that every three sets, chosen from three distinct families
$\mathcal{F}_i$, form a tight triple, then there exists $1\le j\le 6$ and three
lines intersecting every member of $\mathcal{F}_j$. In particular, this
improves $\frac{1}{8}$ to $\frac{1}{3}$ in Holmsen's result.
|
This paper describes a machine learning approach for annotating and analyzing
data curation work logs at ICPSR, a large social sciences data archive. The
systems we studied track curation work and coordinate team decision-making at
ICPSR. Repository staff use these systems to organize, prioritize, and document
curation work done on datasets, making them promising resources for studying
curation work and its impact on data reuse, especially in combination with data
usage analytics. A key challenge, however, is classifying similar activities so
that they can be measured and associated with impact metrics. This paper
contributes: 1) a schema of data curation activities; 2) a computational model
for identifying curation actions in work log descriptions; and 3) an analysis
of frequent data curation activities at ICPSR over time. We first propose a
schema of data curation actions to help us analyze the impact of curation work.
We then use this schema to annotate a set of data curation logs, which contain
records of data transformations and project management decisions completed by
repository staff. Finally, we train a text classifier to detect the frequency
of curation actions in a large set of work logs. Our approach supports the
analysis of curation work documented in work log systems as an important step
toward studying the relationship between research data curation and data reuse.
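As an illustration of the classification step (our own minimal sketch, not ICPSR's pipeline; the log snippets and label set below are hypothetical), a standard TF-IDF plus linear-classifier baseline suffices to detect curation actions in log descriptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# hypothetical annotated work-log snippets and curation-action labels
logs = ["recoded missing values in wave 2", "updated codebook metadata",
        "converted SPSS files to CSV", "reviewed disclosure risk"]
labels = ["data-transformation", "documentation",
          "data-transformation", "quality-review"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(logs, labels)
print(clf.predict(["converted Stata files to CSV"]))
```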
|
In a recent article we presented a model for hadronic rescattering, and some
results were shown for pp collisions at LHC energies. In order to extend the
studies to pA and AA collisions, the Angantyr model for heavy-ion collisions is
taken as the starting point. Both these models are implemented within the
general-purpose Monte Carlo event generator Pythia, which makes the matching
reasonably straightforward, and allows for detailed studies of the full
space--time evolution. The rescattering rate is significantly higher than in
pp, especially for central AA collisions, where the typical primary hadron
rescatters several times. We study the impact of rescattering on a number of
distributions, such as pT and eta spectra, and the space--time evolution of the
whole collision process. Notably, rescattering is shown to give a significant
contribution to elliptic flow in XeXe and PbPb, and to give a nontrivial impact
on charm production.
|
In passive linear systems, complete combining of powers carried by waves from
several input channels into a single output channel is forbidden by the energy
conservation law. Here, we demonstrate that complete combination of both
coherent and incoherent plane waves can be achieved using metasurfaces with
properties varying in space and time. The proposed structure reflects waves of
the same frequency but incident at different angles towards a single direction.
The frequencies of the output waves are shifted by the metasurface, ensuring
perfect incoherent power combining. The proposed concept of power combining is
general and can be applied for electromagnetic waves from the microwave to
optical domains, as well as for waves of other physical nature.
|
In recent years, there has been significant growth of distributed energy
resources (DERs) penetration in the power grid. The stochastic and intermittent
features of variable DERs such as roof top photovoltaic (PV) bring substantial
uncertainties to the grid on the consumer end and weaken the grid reliability.
In addition, the fact that numerous DERs are widespread in the grid makes it
hard to monitor and manage DERs. To address this challenge, this paper proposes
a novel real-time grid-supporting energy management (GSEM) strategy for a
grid-supporting microgrid (MG). This strategy can not only properly manage
DERs in an MG but also enable DERs to provide grid services, which makes the
MG grid-supporting via flexible trading power. The proposed GSEM strategy is
based on a two-step optimization which includes a routine economic dispatch
(ED) step and an acceptable-trading-power-range determination step. Numerical
simulations demonstrate the performance of the proposed GSEM strategy, which
gives the grid operator a dispatch choice of trading power with the MG and
enhances the reliability and resilience of the main grid.
|
The approximation of solutions to second order Hamilton--Jacobi--Bellman
(HJB) equations by deep neural networks is investigated. It is shown that for
HJB equations that arise in the context of the optimal control of certain
Markov processes the solution can be approximated by deep neural networks
without incurring the curse of dimension. The dynamics is assumed to depend
affinely on the controls and the cost depends quadratically on the controls.
The admissible controls take values in a bounded set.
|
Creating and destroying threads on modern Linux systems incurs high latency
even in the absence of concurrency, and fails to scale as concurrency
increases. To address
this concern we introduce a process-local cache of idle threads. Specifically,
instead of destroying a thread when it terminates, we cache and then recycle
that thread in the context of subsequent thread creation requests. This
approach shows significant promise in various applications and benchmarks that
create and destroy threads rapidly and illustrates the need for and potential
benefits of improved concurrency infrastructure. With caching, the cost of
creating a new thread drops by almost an order of magnitude. As our experiments
demonstrate, this results in significant performance improvements for multiple
applications that aggressively create and destroy numerous threads.
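A minimal sketch of the caching idea (in Python here for illustration, not the paper's Linux-level implementation): park a finished thread on an idle list and hand it the next task instead of creating a new thread.

```python
import queue
import threading
import time

class ThreadCache:
    """Process-local cache of idle threads (a sketch of the idea)."""
    def __init__(self):
        self._idle = queue.SimpleQueue()        # mailboxes of parked workers

    def spawn(self, fn, *args):
        try:
            mailbox = self._idle.get_nowait()   # cache hit: recycle a thread
        except queue.Empty:
            mailbox = queue.SimpleQueue()       # cache miss: real creation
            threading.Thread(target=self._loop, args=(mailbox,),
                             daemon=True).start()
        mailbox.put((fn, args))

    def _loop(self, mailbox):
        while True:
            fn, args = mailbox.get()            # park until recycled
            fn(*args)
            self._idle.put(mailbox)             # return this thread to cache

cache = ThreadCache()
cache.spawn(print, "runs on a freshly created thread")
time.sleep(0.1)
cache.spawn(print, "recycles the now-idle cached thread")
time.sleep(0.1)
```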
|
An intense activity is nowadays devoted to the definition of models capturing
the properties of complex networks. Among the most promising approaches, it has
been proposed to model these graphs via their clique incidence bipartite
graphs. However, this approach has, until now, severe limitations resulting
from its incapacity to reproduce a key property of this object: the overlapping
nature of cliques in complex networks. To overcome these limitations, we
propose to encode the structure of clique overlaps in a network via a process
that iteratively factorises the maximal bicliques between the upper level and
the other levels of a multipartite graph. We show that the most
natural definition of this factorising process leads to infinite series for
some instances. Our main result is to design a restriction of this process that
terminates for any arbitrary graph. Moreover, we show that the resulting
multipartite graph has remarkable combinatorial properties and is closely
related to another fundamental combinatorial object. Finally, we show that, in
practice, this multipartite graph is computationally tractable and has a size
that makes it suitable for complex network modelling.
|
In a conventional domain adaptation person Re-identification (Re-ID) task,
both the training and test images in the target domain are collected in sunny
weather. However, in reality, the pedestrians to be retrieved may be captured
under severe weather conditions such as haze, dust, snow, etc. This paper
proposes a novel Interference Suppression Model (ISM) to deal with the
interference caused by hazy weather in domain adaptation person Re-ID. A
teacher-student model is used in the ISM to distill the interference
information at the feature level by reducing the discrepancy between the clear
and the hazy intrinsic similarity matrices. Furthermore, at the distribution
level, an extra discriminator is introduced to help the student model make the
interference feature distribution clearer. The experimental results show that
the proposed method achieves superior performance over the state-of-the-art
methods on two synthetic datasets. The related code will be released online at
https://github.com/pangjian123/ISM-ReID.
|
The mobile data traffic has been exponentially growing during the last
decades, which has been enabled by the densification of the network
infrastructure in terms of increased cell density (i.e., ultra-dense network
(UDN)) and/or increased number of active antennas per access point (AP) (i.e.,
massive multiple-input multiple-output (mMIMO)). However, neither UDN nor mMIMO
will meet the increasing data rate demands of the sixth generation (6G)
wireless communications due to the inter-cell interference and large
quality-of-service variations, respectively. Cell-free (CF) mMIMO, which
combines the best aspects of UDN and mMIMO, is viewed as a key solution to this
issue. In such systems, each user equipment (UE) is served by a preferred set
of surrounding APs cooperatively. In this paper, we provide a survey of the
state-of-the-art literature on CF mMIMO. As a starting point, the significance
and the basic properties of CF mMIMO are highlighted. We then present the
canonical framework, where the essential details (i.e., transmission procedure
and mathematical system model) are discussed. Next, we provide a deep look at
the resource allocation and signal processing problems related to CF mMIMO and
survey the up-to-date schemes and algorithms. After that, we discuss the
practical issues when implementing CF mMIMO. Potential future directions are
then pointed out. Finally, we conclude this paper with a summary of the key
lessons learned in this field. This paper aims to provide a starting point for
anyone who wants to conduct research on CF mMIMO for future wireless networks.
|
Autonomous exploration is an application of growing importance in robotics. A
promising strategy is ergodic trajectory planning, whereby an agent spends in
each area a fraction of time proportional to the area's probability information
density function. In this paper, a decentralized ergodic
multi-agent trajectory planning algorithm featuring limited communication
constraints is proposed. The agents' trajectories are designed by optimizing a
weighted cost encompassing ergodicity, control energy and close-distance
operation objectives. To solve the underlying optimal control problem, a
second-order descent iterative method coupled with a projection operator in the
form of an optimal feedback controller is used. Exhaustive numerical analyses
show that the multi-agent solution allows a much more efficient exploration in
terms of completion task time and control energy distribution by leveraging
collaboration among agents.
|
Federated averaging (FedAvg) is a communication-efficient algorithm for
distributed training with an enormous number of clients. In FedAvg, clients
keep their data locally for privacy protection; a central parameter server is
used to communicate between clients. This central server distributes the
parameters to each client and collects the updated parameters from clients.
FedAvg is mostly studied in a centralized fashion, which requires massive
communication between the server and clients in each communication round.
Moreover, attacking the central server can break the whole system's privacy.
paper, we study the decentralized FedAvg with momentum (DFedAvgM), which is
implemented on clients that are connected by an undirected graph. In DFedAvgM,
all clients perform stochastic gradient descent with momentum and communicate
with their neighbors only. To further reduce the communication cost, we also
consider the quantized DFedAvgM. We prove convergence of the (quantized)
DFedAvgM under trivial assumptions; the convergence rate can be improved when
the loss function satisfies the P{\L} property. Finally, we numerically verify
the efficacy of DFedAvgM.
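A minimal sketch of one DFedAvgM round as we understand it (our own simplification; the mixing matrix, step size, and momentum parameters are placeholders):

```python
import numpy as np

def dfedavgm_round(X, V, grad_fn, W, lr=0.1, beta=0.9, local_steps=5):
    """One round of decentralized FedAvg with momentum (a sketch).
    X, V: (n_clients, d) parameters and momentum buffers; W: doubly
    stochastic mixing matrix of the undirected communication graph;
    grad_fn(i, x) returns a stochastic gradient of client i's loss."""
    for i in range(X.shape[0]):
        for _ in range(local_steps):        # local SGD with momentum
            V[i] = beta * V[i] + grad_fn(i, X[i])
            X[i] = X[i] - lr * V[i]
    return W @ X, V                         # communicate with neighbors only
```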
|
Since the advent of digital computers, data has always been stored as
discrete bits taking values of either 1 or 0. Hence, all computer programs
(such as MATLAB) convert any continuous input signal into a discrete dataset.
Applying this to oscillating signals, such as audio, opens a domain for
processing as well as editing. For signal processing, the Fourier transform,
classically an integral over infinite limits, is replaced by its discrete
counterpart. The essential feature of the Fourier transform is to decompose
any signal into a combination of sinusoidal waves that are easy to deal with.
The discrete Fourier transform (DFT) can be represented as a matrix whose rows
are mutually orthogonal, allowing one to perform complicated transformations
on individual frequencies. Due to this formulation, all the concepts of linear
algebra and linear transforms prove extremely useful here. In this paper, we
first explain the theoretical basis of audio processing using linear algebra,
and then focus on a simulation coded in MATLAB to process and edit various
audio samples. The code is open-ended and easily extendable by defining new
matrices that transform the original audio signal. Finally, this paper
attempts to highlight and briefly explain the results that emerge from the
simulation.
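As a minimal illustration of this matrix view (a numpy sketch; the paper's own simulation is in MATLAB), the unitary DFT matrix $F_{jk}=e^{-2\pi i jk/N}/\sqrt{N}$ turns frequency-domain edits into matrix products:

```python
import numpy as np

N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N))
F = np.exp(-2j * np.pi * j * k / N) / np.sqrt(N)    # unitary DFT matrix

assert np.allclose(F @ F.conj().T, np.eye(N))       # rows are orthonormal

signal = np.cos(2 * np.pi * 2 * np.arange(N) / N)   # pure tone in bin 2
spectrum = F @ signal                               # forward transform
spectrum[[1, N - 1]] = 0                            # edit: mute bin 1
edited = (F.conj().T @ spectrum).real               # inverse transform
```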
|
Resistive Random-Access-Memory (ReRAM) crossbar is a promising technique for
deep neural network (DNN) accelerators, thanks to its in-memory and in-situ
analog computing abilities for Vector-Matrix Multiplication-and-Accumulations
(VMMs). However, it is challenging for crossbar architecture to exploit the
sparsity in the DNN. It inevitably causes complex and costly control to exploit
fine-grained sparsity due to the limitation of tightly-coupled crossbar
structure. As the countermeasure, we developed a novel ReRAM-based DNN
accelerator, named Sparse-Multiplication-Engine (SME), based on a hardware and
software co-design framework. First, we orchestrate the bit-sparse pattern to
increase the density of bit-sparsity based on existing quantization methods.
Second, we propose a novel weight mapping mechanism to slice the bits of a
weight across the crossbars and splice the activation results in peripheral
circuits. This mechanism can decouple the tightly-coupled crossbar structure
and cumulate the sparsity in the crossbar. Finally, a superior squeeze-out
scheme empties the crossbars mapped with highly-sparse non-zeros from the
previous two steps. We design the SME architecture and discuss its use for
other quantization methods and different ReRAM cell technologies. Compared with
prior state-of-the-art designs, the SME shrinks the use of crossbars up to 8.7x
and 2.1x using ResNet-50 and MobileNet-v2, respectively, with less than 0.3%
accuracy drop on ImageNet.
|
The current world challenges include issues such as infectious disease
pandemics, environmental health risks, food safety, and crime prevention.
In this article, special emphasis is given to one of the main challenges
in the healthcare sector during the COVID-19 pandemic: the cyber risk. Since
the beginning of the COVID-19 pandemic, the World Health Organization has
detected a dramatic increase in the number of cyber-attacks. For instance, in
Italy the COVID-19 emergency has heavily affected cybersecurity; from January
to April 2020, the total number of attacks, accidents, and violations of privacy to
the detriment of companies and individuals has doubled. Using a systematic and
rigorous approach, this paper aims to analyze the literature on the cyber risk
in the healthcare sector to understand the real knowledge on this topic. The
findings highlight the poor attention of the scientific community on this
topic, except in the United States. The literature lacks research contributions
to support cyber risk management in subject areas such as Business, Management
and Accounting; Social Science; and Mathematics. This research outlines the
need to empirically investigate the cyber risk, giving a practical solution to
health facilities. Keywords: cyber risk; cyber-attack; cybersecurity; computer
security; COVID-19; coronavirus; information technology risk; risk management;
risk assessment; health facilities; healthcare sector; systematic literature
review; insurance
|
The paper presents a solution to the problem of choosing a method for
analytically determining the weight factors of a genetic algorithm's additive
fitness function. This algorithm is the basis for an evolutionary process,
which forms a stable and effective query population in a search engine to
obtain highly relevant results. The paper gives a formal description of the
algorithm's fitness function, which is a weighted sum of three heterogeneous
criteria. The selected methods for analytically determining the weight factors
are described in detail. It is noted that expert assessment methods cannot be
used. The authors present a research methodology that uses experimental
results obtained earlier in the project "Data Warehouse Support on the
Base Intellectual Web Crawler and Evolutionary Model for Target Information
Selection". There is a description of an initial dataset with data ranges for
calculating weights. The calculation order is illustrated by examples. The
research results in graphical form demonstrate the fitness function behavior
during the genetic algorithm operation using various weighting options.
|
Saccadic eye movements allow animals to bring different parts of an image
into high resolution. During free viewing, inhibition of return incentivizes
exploration by discouraging previously visited locations. Despite this
inhibition, here we show that subjects make frequent return fixations. We
systematically studied a total of 44,328 return fixations out of 217,440
fixations across different tasks, in monkeys and humans, and in static images
or egocentric videos. The ubiquitous return fixations were consistent across
subjects, tended to occur within short offsets, and were characterized by
longer duration than non-return fixations. The locations of return fixations
corresponded to image areas of higher saliency and higher similarity to the
sought target during visual search tasks. We propose a biologically-inspired
computational model that capitalizes on a deep convolutional neural network for
object recognition to predict a sequence of fixations. Given an input image,
the model computes four maps that constrain the location of the next saccade: a
saliency map, a target similarity map, a saccade size map, and a memory map.
The model exhibits frequent return fixations and approximates the properties of
return fixations across tasks and species. The model provides initial steps
towards capturing the trade-off between exploitation of informative image
locations combined with exploration of novel image locations during scene
viewing.
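A minimal sketch of the four-map combination for selecting the next fixation (our own rendition; the actual model's map construction and combination rule are more elaborate):

```python
import numpy as np

def next_fixation(saliency, target_sim, saccade_size, memory):
    """Choose the next fixation from the product of the four constraint
    maps (a simplified sketch; all maps share one shape)."""
    priority = saliency * target_sim * saccade_size * memory
    return np.unravel_index(priority.argmax(), priority.shape)

def update_memory(memory, fix, sigma=10.0, recovery=0.05):
    """Inhibition-of-return bookkeeping: suppress the fixated region, then
    let past suppression fade so return fixations remain possible."""
    yy, xx = np.indices(memory.shape)
    ior = np.exp(-((yy - fix[0])**2 + (xx - fix[1])**2) / (2 * sigma**2))
    return np.minimum(memory * (1.0 - ior) + recovery, 1.0)
```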
|
This note relies mainly on a refined version of the main results of the paper
by F. Catrina and D. Costa (J. Differential Equations 2009). We provide very
short and self-contained proofs. Our results are sharp and minimizers are
obtained in suitable functional spaces. As main tools, we use the so-called
\textit{expansion of squares} method to establish sharp weighted
$L^{2}$-Caffarelli-Kohn-Nirenberg (CKN) inequalities, together with density
arguments.
|
In this paper we study quantum group deformations of the infinite dimensional
symmetry algebra of asymptotically AdS spacetimes in three dimensions. Building
on previous results in the finite dimensional subalgebras we classify all
possible Lie bialgebra structures and for selected examples, we explicitly
construct the related Hopf algebras. Using cohomological arguments we show that
this construction can always be performed by a so-called twist deformation. The
resulting structures can be compared to the well-known $\kappa$-Poincar\'e Hopf
algebras constructed on the finite dimensional Poincar\'e or (anti) de Sitter
algebra. The dual $\kappa$ Minkowski spacetime is supposed to describe a
specific non-commutative geometry. Importantly, we find that some incarnations
of the $\kappa$-Poincar\'e algebra cannot be extended consistently to the
infinite dimensional algebras. Furthermore, certain deformations can have
potential physical applications if subalgebras are considered. Since the
conserved charges associated with asymptotic symmetries in three dimensions
form a centrally extended algebra, we also briefly discuss deformations of
such algebras. The presence of the full symmetry algebra might have observable
consequences that could be used to rule out these deformations.
|
The properties of modified Hayward black hole space-time can be investigated
through analyzing the particle geodesics. By means of a detailed analysis of
the corresponding effective potentials for a massive particle, we find all
possible orbits which are allowed by the energy levels. The trajectories of
orbits are plotted by solving the equation of orbital motion numerically. We
conclude that whether there is an escape orbit is associated with $b$ (angular
momentum). The properties of orbital motion are related to $b$, $\alpha$
($\alpha$ is associated with the time delay) and $\beta$ ($\beta$ is related to
1-loop quantum corrections). There are no escape orbits when $b$ $<$ $4.016M$,
$\alpha$ = 0.50 and $\beta$ = 1.00. For fixed $\alpha$ = 0.50 and $\beta$ =
1.00, if $b$ $<$ $3.493M$, only unstable orbits exist. Comparing with the
regular Hayward black hole, the existing numerical results lead us to the
reasonable conjecture that the introduction of the modified term enlarges the
radius of the innermost stable circular orbit (ISCO) and the corresponding
angular momentum.
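As an illustration of the analysis (a generic sketch; the lapse function f below is a user-supplied placeholder standing in for the modified Hayward one), the orbit types follow from the effective potential of a massive particle in a static spherically symmetric metric:

```python
import numpy as np

def v_eff_squared(r, b, f):
    """Squared effective potential for a massive particle with angular
    momentum b in a static metric with lapse f(r): the radial equation is
    (dr/dtau)^2 = E^2 - V_eff^2; circular orbits sit at extrema of V_eff."""
    return f(r) * (1.0 + b**2 / r**2)

# demonstration with a Schwarzschild placeholder lapse (M = 1), only to
# show the orbit-classification workflow, not the modified Hayward result
f = lambda r: 1.0 - 2.0 / r
r = np.linspace(2.2, 60.0, 2000)
dV2 = np.gradient(v_eff_squared(r, b=4.0, f=f), r)
extrema = r[np.where(np.sign(dV2[:-1]) != np.sign(dV2[1:]))]
```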
|
We present Hubble Space Telescope imaging of a pre-explosion counterpart to
SN 2019yvr obtained 2.6 years before its explosion as a type Ib supernova (SN
Ib). Aligning to a post-explosion Gemini-S/GSAOI image, we demonstrate that
there is a single source consistent with being the SN 2019yvr progenitor
system, the second SN Ib progenitor candidate after iPTF13bvn. We also analyzed
pre-explosion Spitzer/IRAC imaging, but we do not detect any counterparts at
the SN location. SN 2019yvr was highly reddened, and comparing its spectra and
photometry to those of other, less extinguished SNe Ib we derive
$E(B-V)=0.51\substack{+0.27\\-0.16}$ mag for SN 2019yvr. Correcting photometry
of the pre-explosion source for dust reddening, we determine that this source
is consistent with a $\log(L/L_{\odot}) = 5.3 \pm 0.2$ and $T_{\mathrm{eff}} =
6800\substack{+400\\-200}$ K star. This relatively cool photospheric
temperature implies a radius of 320$\substack{+30\\-50} R_{\odot}$, much larger
than expectations for SN Ib progenitor stars with trace amounts of hydrogen but
in agreement with previously identified SN IIb progenitor systems. The
photometry of the system is also consistent with binary star models that
undergo common envelope evolution, leading to a primary star hydrogen envelope
mass that is mostly depleted but seemingly in conflict with the SN Ib
classification of SN 2019yvr. SN 2019yvr had signatures of strong circumstellar
interaction in late-time ($>$150 day) spectra and imaging, and so we consider
eruptive mass loss and common envelope evolution scenarios that explain the SN
Ib spectroscopic class, pre-explosion counterpart, and dense circumstellar
material. We also hypothesize that the apparent inflation could be caused by a
quasi-photosphere formed in an extended, low-density envelope or circumstellar
matter around the primary star.
|
Precise localization of polyp is crucial for early cancer screening in
gastrointestinal endoscopy. Videos given by endoscopy bring both richer
contextual information as well as more challenges than still images. The
camera-moving situation, instead of the common camera-fixed-object-moving one,
leads to significant background variation between frames. Severe internal
artifacts (e.g. water flow in the human body, specular reflection by tissues)
can make the quality of adjacent frames vary considerably. These factors
hinder a video-based model from effectively aggregating features from
neighboring frames and giving better predictions. In this paper, we present
Spatial-Temporal
Feature Transformation (STFT), a multi-frame collaborative framework to address
these issues. Spatially, STFT mitigates inter-frame variations in the
camera-moving situation with feature alignment by proposal-guided deformable
convolutions. Temporally, STFT proposes a channel-aware attention module to
simultaneously estimate the quality and correlation of adjacent frames for
adaptive feature aggregation. Empirical studies and superior results
demonstrate the effectiveness and stability of our method. For example, STFT
improves the still image baseline FCOS by 10.6% and 20.6% on the comprehensive
F1-score of the polyp localization task in CVC-Clinic and ASUMayo datasets,
respectively, and outperforms the state-of-the-art video-based method by 3.6%
and 8.0%, respectively. Code is available at
\url{https://github.com/lingyunwu14/STFT}.
|
Recently, a careful canonical quantisation of the theory of closed bosonic
tensionless strings has resulted in the discovery of three separate vacua and
hence three different quantum theories that emerge from this single classical
tensionless theory. In this note, we perform lightcone quantisation with the
aim of determination of the critical dimension of these three inequivalent
quantum theories. The satisfying conclusion of a rather long and tedious
calculation is that one of the vacua does not lead to any constraint on the
number of dimensions, while the other two give $D=26$. This implies that all
three
quantum tensionless theories can be thought of as consistent sub-sectors of
quantum tensile bosonic closed string theory.
|
The Poisson gauge algebra is a semi-classical limit of complete
non-commutative gauge algebra. In the present work we formulate the Poisson
gauge theory which is a dynamical field theoretical model having the Poisson
gauge algebra as a corresponding algebra of gauge symmetries. The proposed
model is designed to investigate the semi-classical features of the full
non-commutative gauge theory with coordinate-dependent non-commutativity
$\Theta^{ab}(x)$, especially those with a non-constant rank. We derive the
expression for the covariant derivative of matter field. The commutator
relation for the covariant derivatives defines the Poisson field strength which
is covariant under the Poisson gauge transformations and reproduces the
standard $U(1)$ field strength in the commutative limit. We derive the
corresponding Bianchi identities. The field equations for the gauge and the
matter fields are obtained from the gauge-invariant action. We consider
different examples of Poisson structures $\Theta^{ab}(x)$ that are linear in
the coordinates, as well as non-linear ones, and obtain explicit expressions
for all proposed constructions. Our model is unique up to invertible field
redefinitions and coordinate transformations.
|
In stochastic dynamic environments, team stochastic games have emerged as a
versatile paradigm for studying sequential decision-making problems of fully
cooperative multi-agent systems. However, the optimality of the derived
policies is usually sensitive to the model parameters, which are typically
unknown and required to be estimated from noisy data in practice. To mitigate
the sensitivity of the optimal policy to these uncertain parameters, in this
paper, we propose a model of "robust" team stochastic games, where players
utilize a robust optimization approach to make decisions. This model extends
team stochastic games to the scenario of incomplete information and meanwhile
provides an alternative solution concept of robust team optimality. To seek
such a solution, we develop a learning algorithm in the form of a Gauss-Seidel
modified policy iteration and prove its convergence. This algorithm, compared
with robust dynamic programming, not only possesses a faster convergence rate,
but also allows for using approximation calculations to alleviate the curse of
dimensionality. Moreover, some numerical simulations are presented to
demonstrate the effectiveness of the algorithm by generalizing the game model
of social dilemmas to sequential robust scenarios.
|
Traditional toxicity detection models have focused on the single utterance
level without deeper understanding of context. We introduce CONDA, a new
dataset for in-game toxic language detection enabling joint intent
classification and slot filling analysis, which is the core task of Natural
Language Understanding (NLU). The dataset consists of 45K utterances from 12K
conversations from the chat logs of 1.9K completed Dota 2 matches. We propose a
robust dual semantic-level toxicity framework, which handles utterance and
token-level patterns, and rich contextual chatting history. Accompanying the
dataset is a thorough in-game toxicity analysis, which provides comprehensive
understanding of context at utterance, token, and dual levels. Inspired by NLU,
we also apply its metrics to the toxicity detection tasks for assessing
toxicity and game-specific aspects. We evaluate strong NLU models on CONDA,
providing fine-grained results for different intent classes and slot classes.
Furthermore, we examine the coverage of toxicity nature in our dataset by
comparing it with other toxicity datasets.
|
A novel formulation of the hyperspectral broadband phase retrieval is
developed for the scenario where both object and modulation phase masks are
spectrally varying. The proposed algorithm is based on a complex domain version
of the alternating direction method of multipliers (ADMM) and Spectral
Proximity Operators (SPO) derived for Gaussian and Poissonian observations.
Computations for these operators are reduced to the solution of sets of cubic
(for Gaussian) and quadratic (for Poissonian) algebraic equations. These
proximity operators resolve two problems. Firstly, the complex domain spectral
components of signals are extracted from the total intensity observations
calculated as sums of the signal spectral intensities. In this way, the
spectral analysis of the total intensities is achieved. Secondly, the noisy
observations are filtered, balancing the noisy intensity observations against
their predicted counterparts. The ability to resolve the hyperspectral
broadband phase retrieval problem and to find the spectrally varying object is
essentially
defined by the spectral properties of object and image formation operators. The
simulation tests demonstrate that the phase retrieval in this formulation can
be successfully resolved.
|
We investigate the interaction between compactness principles and guessing
principles in the Radin forcing extensions \cite{MR670992}. In particular, we
show that in any Radin forcing extension with respect to a measure sequence on
$\kappa$, if $\kappa$ is weakly compact, then $\diamondsuit(\kappa)$ holds,
answering a question raised in \cite{MR3960897}. This provides a contrast with
a well-known theorem of Woodin \cite{CummingsWoodin}, who showed that, in a
certain Radin extension over a suitably prepared ground model relative to the
existence of large cardinals, the diamond principle fails at a strongly
inaccessible Mahlo cardinal. Refining the analysis of the Radin extensions, we
consistently
demonstrate a scenario where a compactness principle, stronger than the
diagonal stationary reflection principle, holds yet the diamond principle fails
at a strongly inaccessible cardinal, improving a result from \cite{MR3960897}.
|
In general-purpose particle detectors, the particle-flow algorithm may be
used to reconstruct a comprehensive particle-level view of the event by
combining information from the calorimeters and the trackers, significantly
improving the detector resolution for jets and the missing transverse momentum.
In view of the planned high-luminosity upgrade of the CERN Large Hadron
Collider (LHC), it is necessary to revisit existing reconstruction algorithms
and ensure that both the physics and computational performance are sufficient
in an environment with many simultaneous proton-proton interactions (pileup).
Machine learning may offer a prospect for computationally efficient event
reconstruction that is well-suited to heterogeneous computing platforms, while
significantly improving the reconstruction quality over rule-based algorithms
for granular detectors. We introduce MLPF, a novel, end-to-end trainable,
machine-learned particle-flow algorithm based on parallelizable,
computationally efficient, and scalable graph neural networks optimized using a
multi-task objective on simulated events. We report the physics and
computational performance of the MLPF algorithm on a Monte Carlo dataset of top
quark-antiquark pairs produced in proton-proton collisions in conditions
similar to those expected for the high-luminosity LHC. The MLPF algorithm
improves the physics response with respect to a rule-based benchmark algorithm
and demonstrates computationally scalable particle-flow reconstruction in a
high-pileup environment.
|
A multilayer network depicts different types of interactions among the same
set of nodes. For example, protease networks consist of five to seven layers,
where different layers represent distinct types of experimentally confirmed
molecule interactions among proteins. In a multilayer protease network, the
co-expression layer is obtained through the meta-analysis of transcriptomic
data from various sources and platforms. While in some studies the
co-expression layer is in turn represented as a multilayered network, a
fundamental problem is how to obtain a single-layer network from the
corresponding multilayered network. This process is called multilayer network
aggregation. In this work, we propose a maximum a posteriori estimation-based
algorithm for multilayer network aggregation. The method makes it possible to
aggregate a weighted multilayer network while conserving the core information
of the
layers. We evaluate the method through an unweighted friendship network and a
multilayer gene co-expression network. We compare the aggregated gene
co-expression network with a network obtained from conflated datasets and a
network obtained from averaged weights. The von Neumann entropy is adopted to
compare the mixedness of the three networks and, together with other network
measurements, shows the effectiveness of the proposed method.
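For reference, a minimal sketch of the von Neumann entropy of a graph under one standard convention (the paper may normalize differently): treat the scaled Laplacian as a density matrix and take its spectral entropy.

```python
import numpy as np

def von_neumann_entropy(A):
    """Von Neumann entropy of a weighted graph with adjacency matrix A
    (a sketch, using the common convention rho = L / trace(L))."""
    L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
    rho = L / np.trace(L)                 # density-matrix normalization
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]                # drop zero modes; 0 log 0 := 0
    return float(-(lam * np.log(lam)).sum())
```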
|
Over the past two decades machine learning has permeated almost every realm
of technology. At the same time, many researchers have begun using category
theory as a unifying language, facilitating communication between different
scientific disciplines. It is therefore unsurprising that there is a burgeoning
interest in applying category theory to machine learning. We aim to document
the motivations, goals and common themes across these applications. We touch on
gradient-based learning, probability, and equivariant learning.
|
In this paper, we propose CHOLAN, a modular approach to target end-to-end
entity linking (EL) over knowledge bases. CHOLAN consists of a pipeline of two
transformer-based models integrated sequentially to accomplish the EL task. The
first transformer model identifies surface forms (entity mentions) in a given
text. For each mention, a second transformer model is employed to classify the
target entity from a predefined candidate list. The latter transformer is fed
with an enriched context captured from the sentence (i.e., the local context)
and an entity description obtained from Wikipedia. Such external contexts have
not been used in state-of-the-art EL approaches. Our empirical study was conducted
on two well-known knowledge bases (i.e., Wikidata and Wikipedia). The empirical
results suggest that CHOLAN outperforms state-of-the-art approaches on standard
datasets such as CoNLL-AIDA, MSNBC, AQUAINT, ACE2004, and T-REx.
|
In this paper we prove that the Cauchy problem of the Muskat equation is
wellposed locally in time for any initial data in $\dot C^1(\mathbb{R}^d)\cap
L^2(\mathbb{R}^d)$.
|
For measurements in condensed matter physics, and especially of the Bernoulli
effect in superconductors, we have developed an active resonator with dual
operational amplifiers. A tunable high-Q resonator is realized using the
schematic of the General Impedance Converter (GIC). In
the framework of frequency dependent open-loop gain of operational amplifiers,
a general formula for the frequency dependence of the impedance of the GIC is
derived. The explicit formulas for the resonance frequency and Q-factor include
as immanent parameter the crossover frequency of the operational amplifier.
Voltage measurements of GIC with a lock-in amplifier perfectly agree with the
derived formulas. A table reveals that electrometer operational amplifiers are
the best choice to build the described resonator.
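For orientation, the textbook input impedance of a GIC with branch impedances $Z_1,\dots,Z_5$ and ideal operational amplifiers reads
$$Z_{\mathrm{GIC}}=\frac{Z_1 Z_3 Z_5}{Z_2 Z_4},$$
so that choosing, e.g., $Z_4=1/(sC)$ and resistors elsewhere simulates an inductance $L=CR_1R_3R_5/R_2$ that can resonate against a parallel capacitor; this standard relation is quoted here for context only, while the formulas of the paper additionally carry the finite, frequency-dependent open-loop gain.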
|
Clinical SPECT-MPI images of 345 patients acquired from a dedicated cardiac
SPECT in list-mode format were retrospectively employed to predict normal-dose
images from low-dose data at the half, quarter, and one-eighth-dose levels. A
generative adversarial network was implemented to predict non-gated normal-dose
images in the projection space at the different reduced dose levels.
Established metrics including the peak signal-to-noise ratio (PSNR), root mean
squared error (RMSE), and structural similarity index metrics (SSIM) in
addition to Pearson correlation coefficient analysis and derived parameters
from Cedars-Sinai software were used to quantitatively assess the quality of
the predicted normal-dose images. For clinical evaluation, the quality of the
predicted normal-dose images was evaluated by a nuclear medicine specialist
using a seven-point (-3 to +3) grading scheme. By considering PSNR, SSIM, and
RMSE quantitative parameters among the different reduced dose levels, the
highest PSNR (42.49) and SSIM (0.99), and the lowest RMSE (1.99) were obtained
at the half-dose level in the reconstructed images. Pearson correlation
coefficients were measured as 0.997, 0.994, and 0.987 for the predicted
normal-dose images at the half, quarter, and one-eighth-dose levels,
respectively. Regarding the normal-dose images as the reference, the
Bland-Altman plots sketched for the Cedars-Sinai selected parameters exhibited
remarkably less bias and variance in the predicted normal-dose images compared
with the low-dose data at all reduced dose levels. Overall, considering
the clinical assessment performed by a nuclear medicine specialist, 100%, 80%,
and 11% of the predicted normal-dose images were clinically acceptable at the
half, quarter, and one-eighth-dose levels, respectively.
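For reference, the reported image-quality metrics follow standard definitions; a minimal numpy sketch (the global single-window SSIM below is a simplification of the usual windowed variant, and data are assumed scaled to [0, 1]):

```python
import numpy as np

def rmse(ref, pred):
    return float(np.sqrt(np.mean((ref - pred) ** 2)))

def psnr(ref, pred):
    """Peak signal-to-noise ratio in dB, peak taken from the reference image."""
    return float(20 * np.log10(ref.max() / rmse(ref, pred)))

def ssim_global(ref, pred, c1=1e-4, c2=9e-4):
    """Single-window SSIM with the usual constants for unit dynamic range."""
    mu_r, mu_p = ref.mean(), pred.mean()
    cov = ((ref - mu_r) * (pred - mu_p)).mean()
    return float((2 * mu_r * mu_p + c1) * (2 * cov + c2) /
                 ((mu_r ** 2 + mu_p ** 2 + c1) * (ref.var() + pred.var() + c2)))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
pred = ref + 0.01 * rng.normal(size=(64, 64))
print(psnr(ref, pred), rmse(ref, pred), ssim_global(ref, pred))
```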
|
Classically, transmission conditions between subdomains are optimized for a
simplified two subdomain decomposition to obtain optimized Schwarz methods for
many subdomains. We investigate here if such a simplified optimization suffices
for the magnetotelluric approximation of Maxwell's equation which leads to a
complex diffusion problem. We start with a direct analysis for 2 and 3
subdomains, and present asymptotically optimized transmission conditions in
each case. We then optimize transmission conditions numerically for 4, 5 and 6
subdomains and observe the same asymptotic behavior of optimized transmission
conditions. We finally use the technique of limiting spectra to optimize for a
very large number of subdomains in a strip decomposition. Our analysis shows
that the asymptotically best choice of transmission conditions is the same in
all these situations, only the constants differ slightly. It is therefore
enough for such diffusive type approximations of Maxwell's equations, which
include the special case of the Laplace and screened Laplace equation, to
optimize transmission parameters in the simplified two subdomain decomposition
setting to obtain good transmission conditions for optimized Schwarz methods
for more general decompositions.
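The flavour of such an optimization can be seen in the simplest model setting: for a screened Laplace equation split into two half-planes, the Robin (zeroth-order) transmission parameter is chosen to minimize the worst contraction factor over the resolvable frequencies. The sketch below uses this textbook setting; the paper's complex diffusion coefficient and its asymptotic analysis are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar

eta = 1.0                                  # screening parameter of -Lap(u) + eta*u = f
k = np.linspace(np.pi, 100.0, 2000)        # frequency range resolvable on the grid

def worst_rate(p):
    """Max over frequencies of the two-subdomain Robin convergence factor."""
    s = np.sqrt(k ** 2 + eta)
    return np.max(((s - p) / (s + p)) ** 2)

best = minimize_scalar(worst_rate, bounds=(0.1, 100.0), method="bounded")
print(f"optimized Robin parameter p* = {best.x:.3f}, contraction = {best.fun:.4f}")
```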
|
Given a prediction task, understanding when one can and cannot design a
consistent convex surrogate loss, particularly a low-dimensional one, is an
important and active area of machine learning research. The prediction task may
be given as a target loss, as in classification and structured prediction, or
simply as a (conditional) statistic of the data, as in risk measure estimation.
These two scenarios typically involve different techniques for designing and
analyzing surrogate losses. We unify these settings using tools from property
elicitation, and give a general lower bound on prediction dimension. Our lower
bound tightens existing results in the case of discrete predictions, showing
that previous calibration-based bounds can largely be recovered via property
elicitation. For continuous estimation, our lower bound resolves an open
problem on estimating measures of risk and uncertainty.
|
Serverless computing is the latest paradigm in cloud computing, offering a
framework for the development of event-driven, pay-as-you-go functions in a
highly scalable environment. While these traits offer a powerful new
development paradigm, they have also given rise to a new form of cyber-attack
known as Denial of Wallet (forced financial exhaustion). In this work, we
define and identify the threat of Denial of Wallet and its potential attack
patterns. Also, we demonstrate how this new form of attack can potentially
circumvent existing mitigation systems developed for a similar style of attack,
Denial of Service. Our goal is twofold. Firstly, we will provide a concise and
informative overview of this emerging attack paradigm. Secondly, we propose
this paper as a starting point to enable researchers and service providers to
create effective mitigation strategies. We include some simulated experiments
to highlight the potential financial damage that such attacks can cause and the
creation of an isolated test bed for continued safe research on these attacks.
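The economics of the attack are easy to sketch: with pay-per-use billing, cost scales linearly with the number of junk invocations. The rates below mirror typical public FaaS pricing but are assumptions for illustration, not figures from the paper.

```python
def monthly_attack_cost(requests_per_s, mean_ms, mem_gb,
                        gb_s_rate=0.0000166667, req_rate=0.20e-6):
    """Rough monthly bill inflicted by a request flood on a pay-per-use function."""
    n = requests_per_s * 30 * 24 * 3600          # requests per month
    compute = n * (mean_ms / 1000.0) * mem_gb * gb_s_rate
    return compute + n * req_rate                # compute charge + per-request charge

# 1000 junk requests/s against a 200 ms, 512 MB function:
print(f"${monthly_attack_cost(1000, 200, 0.5):,.0f} per month")
```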
|
Motivated by the immutable nature of Ethereum smart contracts and of their
transactions, many approaches have been proposed to detect defects and
security problems before smart contracts become persistent in the blockchain
and they are granted control on substantial financial value.
Because smart contract source code might not be available, static analysis
approaches mostly face the challenge of analysing compiled Ethereum bytecode,
that is available directly from the official blockchain. However, due to the
intrinsic complexity of Ethereum bytecode (especially in jump resolution),
static analysis encounters significant obstacles that reduce the accuracy of
existing automated tools.
This paper presents a novel static analysis algorithm based on the symbolic
execution of the Ethereum operand stack that allows us to resolve jumps in
Ethereum bytecode and to construct an accurate control-flow graph (CFG) of the
compiled smart contracts. EtherSolve is a prototype implementation of our
approach. Experimental results on a significant set of real world Ethereum
smart contracts show that EtherSolve improves the accuracy of the extracted
CFGs with respect to the available state-of-the-art approaches.
Many static analysis techniques are based on the CFG representation of the
code and would therefore benefit from the accurate extraction of the CFG. For
example, we implemented a simple extension of EtherSolve that allows us to detect
instances of the re-entrancy vulnerability.
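The core idea of stack-based jump resolution can be conveyed in a few lines: symbolically tracking pushes makes an otherwise indirect jump target concrete, yielding a CFG edge. The sketch below handles only PUSH1/JUMP on a toy bytecode; real EVM analysis (all opcodes, orphan jumps, stack merging at block boundaries) is far more involved.

```python
# EVM opcodes used in the sketch
PUSH1, JUMP, JUMPDEST, STOP = 0x60, 0x56, 0x5B, 0x00

def resolve_jumps(code):
    edges, stack, pc = [], [], 0
    while pc < len(code):
        op = code[pc]
        if op == PUSH1:                       # push a 1-byte immediate
            stack.append(code[pc + 1]); pc += 2
        elif op == JUMP:                      # target is whatever sits on the stack
            edges.append((pc, stack.pop())); pc += 1
        else:
            pc += 1
    return edges

bytecode = bytes([PUSH1, 0x04, JUMP, STOP, JUMPDEST, STOP])
print(resolve_jumps(bytecode))                # [(2, 4)]: the jump at pc=2 targets pc=4
```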
|
Significant galaxy mergers throughout cosmic time play a fundamental role in
theories of galaxy evolution. The widespread usage of human classifiers to
visually assess whether galaxies are in merging systems remains a fundamental
component of many morphology studies. Studies that employ human classifiers
usually construct a control sample, and rely on the assumption that the bias
introduced by using humans will be evenly applied to all samples. In this work,
we test this assumption and develop methods to correct for it. Using the
standard binomial statistical methods employed in many morphology studies, we
find that the merger fraction, error, and the significance of the difference
between two samples are dependent on the intrinsic merger fraction of any given
sample. We propose a method of quantifying merger biases of individual human
classifiers and incorporate these biases into a full probabilistic model to
determine the merger fraction and the probability of an individual galaxy being
in a merger. Using 14 simulated human responses and accuracies, we are able to
correctly label a galaxy as ``merger'' or ``isolated'' to within 1\% of the
truth. Using 14 real human responses on a set of realistic mock galaxy
simulation snapshots, our model is able to recover the pre-coalesced merger
fraction to within 10\%. Our method can not only increase the accuracy of
studies probing the merger state of galaxies at cosmic noon, but also can be
used to construct more accurate training sets in machine learning studies that
use human classified data-sets.
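A stripped-down version of the bias-aware labelling step is a Bayes update that weighs each classifier's vote by that classifier's measured accuracy on mergers and on isolated galaxies; all numbers below are illustrative, not the paper's calibration.

```python
import numpy as np

def p_merger(votes, acc_m, acc_i, prior=0.1):
    """Posterior merger probability given binary votes (1 = 'merger') from
    independent classifiers with known per-class accuracies."""
    votes, acc_m, acc_i = map(np.asarray, (votes, acc_m, acc_i))
    like_m = np.prod(np.where(votes == 1, acc_m, 1 - acc_m))   # P(votes | merger)
    like_i = np.prod(np.where(votes == 0, acc_i, 1 - acc_i))   # P(votes | isolated)
    return prior * like_m / (prior * like_m + (1 - prior) * like_i)

votes = [1, 1, 0, 1]                     # 3 of 4 classifiers said "merger"
acc_m = [0.8, 0.7, 0.9, 0.75]            # how often each spots a true merger
acc_i = [0.85, 0.9, 0.8, 0.7]            # how often each clears a true isolated galaxy
print(f"P(merger | votes) = {p_merger(votes, acc_m, acc_i):.3f}")
```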
|
We present predictions for the gluon-fusion Higgs $p_T$ spectrum at third
resummed and fixed order (N$^3$LL$'+$N$^3$LO) including fiducial cuts as
required by experimental measurements at the Large Hadron Collider. Integrating
the spectrum, we predict for the first time the total fiducial cross section to
third order (N$^3$LO) and improved by resummation. The N$^3$LO correction is
enhanced by cut-induced logarithmic effects and is not reproduced by the
inclusive N$^3$LO correction times a lower-order acceptance. These are the
highest-order predictions of their kind achieved so far at a hadron collider.
|
We introduce a random differential operator, that we call the
$\mathtt{CS}_\tau$ operator, whose spectrum is given by the $\mbox{Sch}_\tau$
point process introduced by Kritchevski, Valk\'o and Vir\'ag (2012) and whose
eigenvectors match with the description provided by Rifkind and Vir\'ag (2018).
This operator acts on $\mathbf{R}^2$-valued functions from the interval $[0,1]$
and takes the form: $$ 2 \begin{pmatrix} 0 & -\partial_t \\ \partial_t & 0
\end{pmatrix} + \sqrt{\tau} \begin{pmatrix} d\mathcal{B} + \frac1{\sqrt 2}
d\mathcal{W}_1 & \frac1{\sqrt 2} d\mathcal{W}_2\\ \frac1{\sqrt 2}
d\mathcal{W}_2 & d\mathcal{B} - \frac1{\sqrt 2} d\mathcal{W}_1\end{pmatrix}\,,
$$ where $d\mathcal{B}$, $d\mathcal{W}_1$ and $d\mathcal{W}_2$ are independent
white noises. Then, we investigate the high part of the spectrum of the
Anderson Hamiltonian $\mathcal{H}_L := -\partial_t^2 + dB$ on the segment
$[0,L]$ with white noise potential $dB$, when $L\to\infty$. We show that the
operator $\mathcal{H}_L$, recentred around energy levels $E \sim L/\tau$ and
unitarily transformed, converges in law as $L\to\infty$ to $\mathtt{CS}_\tau$
in an appropriate sense. This allows us to answer a conjecture of Rifkind and
Vir\'ag (2018) on the behavior of the eigenvectors of $\mathcal{H}_L$. Our
approach also explains how such an operator arises in the limit of
$\mathcal{H}_L$. Finally we show that at higher energy levels, the Anderson
Hamiltonian matches (asymptotically in $L$) with the unperturbed Laplacian
$-\partial_t^2$. In a companion paper, it is shown that at energy levels much
smaller than $L$, the spectrum is localized with Poisson statistics: the
present paper therefore identifies the delocalized phase of the Anderson
Hamiltonian.
|
We study the problem of controlling the free surface, by fluid jets on the
boundary, for a two dimensional solid container in the context of the gravity
waves and the sloshing problem. By using conformal maps and the
Dirichlet--Neumann operator, the problem is formulated as a second order
evolutionary equation on the free surface involving a self-adjoint operator. We
then identify the appropriate Sobolev spaces in which the system admits
solutions, and study exact controllability through an observability inequality
for the adjoint problem.
|
While state-of-the-art NLP models have been achieving excellent
performance on a wide range of tasks in recent years, important questions are
being raised about their robustness and their underlying sensitivity to
systematic biases that may exist in their training and test data. Such issues
come to be manifest in performance problems when faced with out-of-distribution
data in the field. One recent solution has been to use counterfactually
augmented datasets in order to reduce any reliance on spurious patterns that
may exist in the original data. Producing high-quality augmented data can be
costly and time-consuming as it usually needs to involve human feedback and
crowdsourcing efforts. In this work, we propose an alternative by describing
and evaluating an approach to automatically generating counterfactual data for
data augmentation and explanation. A comprehensive evaluation on several
different datasets and using a variety of state-of-the-art benchmarks
demonstrates how our approach can achieve significant improvements in model
performance when compared to models trained on the original data and even when
compared to models trained with the benefit of human-generated augmented data.
|
The family of graphynes, novel two-dimensional semiconductors with various
and fascinating chemical and physical properties, has attracted great interest
from both science and industry. Currently, the focus of graphynes is on
graphdiyne, or graphyne-2. In this work, we systematically study the effect of
acetylene, i.e., carbon-carbon triple bond, links on the electronic and optical
properties of a series of graphynes (graphyne-n, where n = 1-5, the number of
acetylene bonds) using ab initio calculations. We find an even-odd pattern,
i.e., n = 1, 3, 5 and n = 2, 4 having different features, which has not been
discovered in studies of graphyne or graphdiyne alone. It is found that as the
number of acetylene bonds increases, the electron effective mass increases
continuously in the low energy range because of the flatter conduction band
induced by the longer acetylene links. Meanwhile, longer acetylene links result
in larger redshift of the imaginary part of the dielectric function, loss
function, and extinction coefficient. In this work, we propose an effective
method to tune and manipulate both the electronic and optical properties of
graphynes for applications in optoelectronic devices and photo-chemical
catalysis.
|
Most deep learning models are data-driven and their excellent performance is
highly dependent on abundant and diverse datasets. However, it is very hard
to obtain and label the datasets of some specific scenes or applications. If we
train the detector using the data from one domain, it cannot perform well on
the data from another domain due to domain shift, which is one of the big
challenges of most object detection models. To address this issue, some
image-to-image translation techniques have been employed to generate synthetic
data for specific scenes to train the models. With the advent of Generative
Adversarial Networks (GANs), we could realize unsupervised image-to-image
translation in both directions from a source to a target domain and from the
target to the source domain. In this study, we report a new approach to making
use of the generated images. We propose to concatenate the original 3-channel
images and their corresponding GAN-generated fake images to form 6-channel
representations of the dataset, hoping to address the domain shift problem
while exploiting the success of available detection models. The idea of
augmented data representation may inspire further study on object detection and
other applications.
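The data representation itself is a one-liner; a minimal sketch (array layout assumed channels-last):

```python
import numpy as np

def six_channel(real_img, fake_img):
    """Stack an RGB image with its GAN-translated counterpart along the
    channel axis, producing the (H, W, 6) detector input described above."""
    assert real_img.shape == fake_img.shape and real_img.shape[-1] == 3
    return np.concatenate([real_img, fake_img], axis=-1)

real = np.zeros((480, 640, 3), dtype=np.float32)   # source-domain image
fake = np.zeros((480, 640, 3), dtype=np.float32)   # its target-domain translation
print(six_channel(real, fake).shape)               # (480, 640, 6)
```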
|
Probing optical excitations with high resolution is important for
understanding their dynamics and controlling their interaction with other
photonic elements. This can be done using state-of-the-art electron
microscopes, which provide the means to sample optical excitations with
combined meV--sub-nm energy--space resolution. For reciprocal photonic systems,
electrons traveling in opposite directions produce identical signals, while
this symmetry is broken in nonreciprocal structures. Here, we theoretically
investigate this phenomenon by analyzing electron energy-loss spectroscopy
(EELS) and cathodoluminescence (CL) in structures consisting of magnetically
biased InAs as an instance of gyrotropic nonreciprocal material. We find that
the spectral features associated with excitations of InAs films depend on the
electron propagation direction in both EELS and CL, and can be tuned by varying
the applied magnetic field within a relatively modest sub-tesla regime. The
magnetic field modifies the optical field distribution of the sampled
resonances, and this in turn produces a direction-dependent coupling to the
electron. The present results pave the way to the use of electron microscope
spectroscopies to explore the near-field characteristics of nonreciprocal
systems with high spatial resolution.
|
In this work, we consider the target detection problem in a sensing
architecture where the radar is aided by a reconfigurable intelligent surface
(RIS), that can be modeled as an array of sub-wavelength small reflective
elements capable of imposing a tunable phase shift to the impinging waves and,
ultimately, of providing the radar with an additional echo of the target. A
theoretical analysis is carried out for closely- and widely-spaced (with
respect to the target) radar and RIS and for different beampattern
configurations, and some examples are provided to show that large gains can be
achieved by the considered detection architecture.
|
In this note we study analytically and numerically the existence and
stability of standing waves for one dimensional nonlinear Schr\"odinger
equations whose nonlinearities are the sum of three powers. Special attention
is paid to the curves of non-existence and curves of stability change on the
parameter planes.
|
Quantum reservoir computing (QRC) and quantum extreme learning machines
(QELM) are two emerging approaches that have demonstrated their potential both
in classical and quantum machine learning tasks. They exploit the quantumness
of physical systems combined with an easy training strategy, achieving
excellent performance. The increasing interest in these unconventional
computing approaches is fueled by the availability of diverse quantum platforms
suitable for implementation and by theoretical progress in the study of
complex quantum systems. In this review article, recent proposals and first
experiments displaying a broad range of possibilities are reviewed when quantum
inputs, quantum physical substrates and quantum tasks are considered. The main
focus is on the performance of these approaches, their advantages with
respect to classical counterparts, and the opportunities they open.
|
We consider the problem of scheduling maintenance for a collection of
machines under partial observations when the state of each machine deteriorates
stochastically in a Markovian manner. We consider two observational models:
first, the state of each machine is not observable at all, and second, the
state of each machine is observable only if a service-person visits them. The
agent takes a maintenance action, e.g., machine replacement, if he is chosen
for the task. We model both problems as restless multi-armed bandit problems and
propose the Whittle index policy for scheduling the visits. We show that both
models are indexable. For the first model, we derive a closed-form expression
for the Whittle index. For the second model, we propose an efficient algorithm
to compute the Whittle index by exploiting the qualitative properties of the
optimal policy. We present detailed numerical experiments which show that for
multiple instances of the model, the Whittle index policy outperforms the
myopic policy and can be close to optimal in different setups.
|
By using a novel technique that establishes a correspondence between general
relativity and metric-affine theories based on the Ricci tensor, we are able to
set stringent constraints on the free parameter of Born-Infeld gravity from the
ones recently obtained for Born-Infeld electrodynamics by using light-by-light
scattering data from ATLAS. We also discuss how these gravity theories plus
matter fit within an effective field theory framework.
|
We apply a quantum teleportation protocol based on the Hayden-Preskill
thought experiment to quantify how strongly a given quantum evolution scrambles information. It
has an advantage over the direct measurement of out-of-time ordered correlators
when used to diagnose the information scrambling in the presence of decoherence
effects stemming from a noisy quantum device. We demonstrate the protocol by
applying it to two physical systems: Ising spin chain and SU(2) lattice
Yang-Mills theory. To this end, we numerically simulate the time evolution of
the two theories in the Hamiltonian formalism. The lattice Yang-Mills theory is
implemented with a suitable truncation of Hilbert space on the basis of the
Kogut-Susskind formalism. On a two-leg ladder geometry and with the lowest
nontrivial spin representations, it can be mapped to a spin chain, which we
call the Yang-Mills-Ising model and which is directly applicable to future
digital quantum simulations. We find that the Yang-Mills-Ising model shows the
signal of information scrambling at late times.
|
Zernike polynomials are one of the most widely used mathematical descriptors
of optical aberrations in the fields of imaging and adaptive optics. Their
mathematical orthogonality as well as isomorphisms with experimentally
observable aberrations make them a very powerful tool in solving numerous
problems in beam optics. However, Zernike aberrations show cross-coupling
between individual modes when used in combination with Gaussian beams, an
effect that has not been extensively studied. Here we propose a novel framework
that is capable of explaining the fundamental cross-compensation of Zernike
type aberrations, both in low-aberration and high-aberration regimes. Our
approach is based on analysing the coupling between Zernike modes and different
classes of Laguerre-Gauss modes which allows investigating aberrated beams not
only on a single plane but also during their 3D propagation.
|
Earth's modern atmosphere is highly oxygenated and is a remotely detectable
signal of its surface biosphere. However, the lifespan of oxygen-based
biosignatures in Earth's atmosphere remains uncertain, particularly for the
distant future. Here we use a combined biogeochemistry and climate model to
examine the likely timescale of oxygen-rich atmospheric conditions on Earth.
Using a stochastic approach, we find that the mean future lifespan of Earth's
atmosphere with oxygen levels more than 1% of the present atmospheric level is
1.08$\pm$0.14 billion years. The model projects that a deoxygenation of the
atmosphere, with atmospheric oxygen dropping sharply to levels reminiscent of
the Archaean Earth, will most probably be triggered before the inception of
moist greenhouse conditions in Earth's climate system and before the extensive
loss of surface water from the atmosphere. We find that future deoxygenation is
an inevitable consequence of increasing solar fluxes, whereas its precise
timing is modulated by the exchange flux of reducing power between the mantle
and the ocean-atmosphere-crust system. Our results suggest that the planetary
carbonate-silicate cycle will tend to lead to terminally CO2-limited biospheres
and rapid atmospheric deoxygenation, emphasizing the need for robust
atmospheric biosignatures applicable to weakly oxygenated and anoxic exoplanet
atmospheres and highlighting the potential importance of atmospheric organic
haze during the terminal stages of planetary habitability.
|
We prove two theorems about the Malcev Lie algebra associated to the Torelli
group of a surface of genus $g$: stably, it is Koszul and the kernel of the
Johnson homomorphism consists only of trivial $Sp_{2g}(Z)$-representations
lying in the centre.
|
A wide variety of use case templates supports different variants to link a
use case with its associated requirements. Regardless of the linking, a reader
must process the related information simultaneously to understand them. Linking
variants are intended to cause a specific reading behavior in which a reader
interrelates a use case and its associated requirements. Due to the effort to
create and maintain links, we investigated the impact of different linking
variants on the reading behavior in terms of visual effort and the intended way
of interrelating both artifacts. We designed an eye tracking study about
reading a use case and requirements. We conducted the study twice, each time
with 15 subjects: once as a baseline experiment and once as a repetition. The results of the
baseline experiment, its repetition, and their joint analysis are consistent.
All investigated linking variants cause comparable visual effort. In all cases,
reading the single artifacts one after the other is the most frequently
occurring behavior. Only links embedded in the fields of a use case description
significantly increase the readers' efforts to interrelate both artifacts. None
of the investigated linking variants impedes reading a use case and
requirements. However, only the most detailed linking variant causes readers to
process related information simultaneously.
|
Nowadays new technologies, and especially artificial intelligence, are more
and more established in our society. Big data analysis and machine learning,
two sub-fields of artificial intelligence, are at the core of many recent
breakthroughs in many application fields (e.g., medicine, communication,
finance, ...), including some that are strongly related to our day-to-day life
(e.g., social networks, computers, smartphones, ...). In machine learning,
significant improvements are usually achieved at the price of an increasing
computational complexity and thanks to bigger datasets. Currently, cutting-edge
models built by the most advanced machine learning algorithms have typically become
simultaneously very efficient and profitable but also extremely complex. Their
complexity is to such an extent that these models are commonly seen as
black boxes providing a prediction or a decision which cannot be interpreted
or justified. Nevertheless, whether these models are used autonomously or as a
simple decision-making support tool, they are already being used in machine
learning applications where health and human life are at stake. Therefore, it
appears to be an obvious necessity not to blindly believe everything coming out
of those models without a detailed understanding of their predictions or
decisions. Accordingly, this thesis aims at improving the interpretability of
models built by a specific family of machine learning algorithms, the so-called
tree-based methods. Several mechanisms have been proposed to interpret these
models and we aim along this thesis to improve their understanding, study their
properties, and define their limitations.
|
Upon investigating whether the variation of the antineutron-nucleus
annihilation cross-sections at very low energies satisfies Bethe-Landau's power
law of $\sigma_{\rm ann} (p) \propto 1/p^{\alpha}$ behavior as a function of
the antineutron momentum $p$, we uncover unexpected regular oscillatory
structures in the low antineutron energy region from 0.001 to 10 MeV, with
small amplitudes and narrow periodicity in the logarithm of the antineutron
energies, for large-$A$ nuclei such as Pb and Ag. Subsequent semiclassical
analyses of the $S$ matrices reveal that these oscillations are pocket
resonances that arise from quasi-bound states inside the pocket and the
interference between the waves reflecting inside the optical potential pockets
with those from beyond the potential barriers, implicit in the nuclear Ramsauer
effect. They are the continuation of bound states in the continuum.
Experimental observations of these pocket resonances will provide vital
information on the properties of the optical model potentials and the nature of
the antineutron annihilation process.
|
We revisit Allendoerfer-Weil's formula for the Euler characteristic of
embedded hypersurfaces in constant sectional curvature manifolds, first taking
some time to re-prove it while demonstrating techniques of [2] and then
applying it to gain new understanding of isoparametric hypersurfaces.
|
Neural data compression has been shown to outperform classical methods in
terms of $RD$ performance, with results still improving rapidly. At a high
level, neural compression is based on an autoencoder that tries to reconstruct
the input instance from a (quantized) latent representation, coupled with a
prior that is used to losslessly compress these latents. Due to limitations on
model capacity and imperfect optimization and generalization, such models will
suboptimally compress test data in general. However, one of the great strengths
of learned compression is that if the test-time data distribution is known and
relatively low-entropy (e.g. a camera watching a static scene, a dash cam in an
autonomous car, etc.), the model can easily be finetuned or adapted to this
distribution, leading to improved $RD$ performance. In this paper we take this
concept to the extreme, adapting the full model to a single video, and sending
model updates (quantized and compressed using a parameter-space prior) along
with the latent representation. Unlike previous work, we finetune not only the
encoder/latents but the entire model, and - during finetuning - take into
account both the effect of model quantization and the additional costs incurred
by sending the model updates. We evaluate an image compression model on
I-frames (sampled at 2 fps) from videos of the Xiph dataset, and demonstrate
that full-model adaptation improves $RD$ performance by ~1 dB, with respect to
encoder-only finetuning.
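Schematically, the finetuning objective trades distortion against the total rate, with the bits for the quantized model updates amortized over the frames they accompany; the function below is an illustrative reading of that trade-off, not the authors' code.

```python
import torch

def adaptation_objective(x, x_hat, latent_bits, update_bits, beta=0.01, n_frames=1):
    """Distortion + beta * (latent rate + per-frame share of the model-update rate)."""
    distortion = torch.mean((x - x_hat) ** 2)
    rate = latent_bits + update_bits / n_frames
    return distortion + beta * rate

x = torch.rand(1, 3, 64, 64)
x_hat = x + 0.05 * torch.randn_like(x)
loss = adaptation_objective(x, x_hat, latent_bits=torch.tensor(8.0e4),
                            update_bits=torch.tensor(2.0e5), n_frames=120)
print(float(loss))
```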
|
We give the characterization of the embeddings between weighted Tandori and
Ces\`{a}ro function spaces using the combination of duality arguments for
weighted Lebesgue spaces and weighted Tandori spaces with estimates for the
iterated integral operators.
|
In the dynamical models of gamma-ray burst (GRB) afterglows, the uniform
assumption of the shocked region is known to provoke a total energy
conservation problem. In this work we consider shocks originating from
magnetized ejecta and extend the energy-conserving hydrodynamical model of Yan et
al. (2007) to the MHD limit by applying the magnetized jump conditions from
Zhang & Kobayashi (2005). Compared with the non-conservative models, our
Lorentz factor of the whole shocked region is larger by a factor
$\lesssim\sqrt{2}$. The total pressure of the forward shocked region is higher
than that of the reverse shocked region: in the relativistic regime by a factor
of about 3 in our interstellar medium (ISM) cases with ejecta magnetization
degree $\sigma<1$, and by a factor of about 2.4 in the wind cases. For $\sigma\le
1$, the non-conservative model loses $32-42$% of its total energy for ISM
cases, and for wind cases $25-38$%, which happens specifically in the forward
shocked region, making the shock synchrotron emission from the forward shock
less luminous than expected. Once the energy conservation problem is fixed, the
late time light curves from the forward shock become nearly independent of the
ejecta magnetization. The reverse shocked region does not suffer from the energy
conservation problem since the changes of the Lorentz factor are compensated by
the changes of the shocked particle number density. The early light curves from
the reverse shock are sensitive to the magnetization of the ejecta, thus are an
important probe of the magnetization degree.
|
We associate a deformation of Heisenberg algebra to the suitably normalized
Yang $R$-matrix and we investigate its properties. Moreover, we construct new
examples of quantum vertex algebras which possess the same representation
theory as the aforementioned deformed Heisenberg algebra.
|
Recently discovered intrinsic antiferromagnetic topological insulator
MnBi$_2$Te$_4$ presents an exciting platform for realization of the quantum
anomalous Hall effect and a number of related phenomena at elevated
temperatures. An important characteristic making this material attractive for
applications is its predicted large magnetic gap at the Dirac point (DP).
However, while the early experimental measurements reported on large DP gaps, a
number of recent studies claimed to observe a gapless dispersion of the
MnBi$_2$Te$_4$ Dirac cone. Here, using micro($\mu$)-laser angle-resolved
photoemission spectroscopy, we study the electronic structure of 15 different
MnBi$_2$Te$_4$ samples, grown by two different chemists groups. Based on the
careful energy distribution curves analysis, the DP gaps between 15 and 65 meV
are observed, as measured below the N\'eel temperature at about 10-16 K. At
that, roughly half of the studied samples show the DP gap of about 30 meV,
while for a quarter of the samples the gaps are in the 50 to 60 meV range.
Summarizing the results of both our and other groups, in the currently
available MnBi$_2$Te$_4$ samples the DP gap can acquire an arbitrary value
between a few and several tens of meV. Further, based on the density functional
theory, we discuss a possible factor that might contribute to the reduction of
the DP gap size, which is the excess surface charge that can appear due to
various defects in surface region. We demonstrate that the DP gap is influenced
by the applied surface charge and even can be closed, which can be taken
advantage of to tune the MnBi$_2$Te$_4$ DP gap size.
|
Society is changing, has always changed, and will keep changing. However,
changes are becoming faster and what used to happen between generations, now
happens in the same generation. Computing Science is one of the reasons for
this speed and permeates, basically, every other knowledge area. This paper
(written in Portugu\^es) describes, briefly, the worldwide initiatives to
introduce Computing Science teaching in schools. As the paper's main
conclusion, it is essential to introduce Computing Science and Computational
Thinking to children before they enter university.
|
Innovations in neural architectures have fostered significant breakthroughs
in language modeling and computer vision. Unfortunately, novel architectures
often result in challenging hyper-parameter choices and training instability if
the network parameters are not properly initialized. A number of
architecture-specific initialization schemes have been proposed, but these
schemes are not always portable to new architectures. This paper presents
GradInit, an automated and architecture agnostic method for initializing neural
networks. GradInit is based on a simple heuristic; the norm of each network
layer is adjusted so that a single step of SGD or Adam with prescribed
hyperparameters results in the smallest possible loss value. This adjustment is
done by introducing a scalar multiplier variable in front of each parameter
block, and then optimizing these variables using a simple numerical scheme.
GradInit accelerates the convergence and test performance of many convolutional
architectures, both with and without skip connections, and even without
normalization layers. It also improves the stability of the original
Transformer architecture for machine translation, enabling training it without
learning rate warmup using either Adam or SGD under a wide range of learning
rates and momentum coefficients. Code is available at
https://github.com/zhuchen03/gradinit.
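The heuristic fits in a dozen lines for a toy two-layer network: scale each parameter block by a learnable scalar, form the loss obtained after one virtual SGD step on the scaled parameters, and lower that post-step loss with respect to the scalars. Sizes, learning rates, and the omission of the method's norm constraints are simplifications, not the released algorithm.

```python
import torch

torch.manual_seed(0)
w1, w2 = torch.randn(32, 16) / 4, torch.randn(10, 32) / 6   # frozen random init
s1 = torch.ones([], requires_grad=True)                     # per-block scale factors
s2 = torch.ones([], requires_grad=True)
x, y = torch.randn(64, 16), torch.randint(0, 10, (64,))
lr, opt = 0.1, torch.optim.Adam([s1, s2], lr=1e-2)

def net_loss(v1, v2):
    return torch.nn.functional.cross_entropy(torch.relu(x @ v1.T) @ v2.T, y)

for _ in range(100):
    v1, v2 = s1 * w1, s2 * w2
    g1, g2 = torch.autograd.grad(net_loss(v1, v2), (v1, v2), create_graph=True)
    post_step = net_loss(v1 - lr * g1, v2 - lr * g2)  # loss after one virtual SGD step
    opt.zero_grad(); post_step.backward(); opt.step()
print(f"learned scales: {s1.item():.3f}, {s2.item():.3f}")
```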
|
High-level understanding of stories in video such as movies and TV shows from
raw data is extremely challenging. Modern video question answering (VideoQA)
systems often use additional human-made sources like plot synopses, scripts,
video descriptions or knowledge bases. In this work, we present a new approach
to understand the whole story without such external sources. The secret lies in
the dialog: unlike any prior work, we treat dialog as a noisy source to be
converted into text description via dialog summarization, much like recent
methods treat video. The input of each modality is encoded by transformers
independently, and a simple fusion method combines all modalities, using soft
temporal attention for localization over long inputs. Our model outperforms the
state of the art on the KnowIT VQA dataset by a large margin, without using
question-specific human annotation or human-made plot summaries. It even
outperforms human evaluators who have never watched any whole episode before.
Code is available at https://engindeniz.github.io/dialogsummary-videoqa
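The soft temporal attention used for localization reduces, in its simplest form, to scoring each time step and softmax-pooling over time; the module below is a minimal sketch with illustrative dimensions, not the released model.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Score each time step, softmax over time, return the weighted sum."""
    def __init__(self, d=512):
        super().__init__()
        self.score = nn.Linear(d, 1)

    def forward(self, h):                    # h: (batch, time, d)
        w = torch.softmax(self.score(h), dim=1)
        return (w * h).sum(dim=1), w.squeeze(-1)

pooled, weights = TemporalAttention()(torch.randn(2, 100, 512))
print(pooled.shape, weights.shape)           # (2, 512) (2, 100)
```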
|
The effect of radiative heat transfer on the entropy generation in a
two-phase non-isothermal fluid flow between two infinite horizontal parallel
plates under the influence of a constant pressure gradient and transverse
non-invasive magnetic field has been explored. Both the fluids are considered
to be viscous, incompressible, immiscible, Newtonian, and electrically
conducting. The governing equations in Cartesian coordinate are solved
analytically with the help of appropriate boundary conditions to obtain the
velocity and temperature profile inside the channel. Application of transverse
magnetic field is found to reduce the throughput and the temperature
distribution of the fluids in a pressure-driven flow. The temperature and fluid
flow inside the channel can also be non-invasively altered by tuning the
magnetic field intensity, the temperature difference between the channel walls
and the fluids, and several intrinsic fluid properties. The entropy generation
due to the heat transfer, magnetic field, and fluid flow irreversibilities can
be controlled by altering the Hartmann number, radiation parameter, Brinkmann
number, filling ratio, and the ratios of fluid viscosities, thermal and
electrical conductivities. The surfaces of the channel wall are found to act as
a strong source of entropy generation and heat transfer irreversibility. The
rate of heat transfer at the channel walls can also be tweaked by the magnetic
field intensity, temperature differences, and fluid properties. The proposed
strategies in the present study can be of significance in the design and
development of gen-next microscale reactors, micro heat exchangers, and energy
harvesting devices.
|
For any regular Courant algebroid $E$ over a smooth manifold $M$ with
characteristic distribution $F$ and ample Lie algebroid $A_E$, we prove that
there exists a canonical homological vector field on the graded manifold
$A_E[1] \oplus (TM/F)^\ast[2]$ such that the associated dg manifold
$\mathcal{M}_E$, which we call the minimal model of the Courant algebroid $E$,
encodes all cohomological data of $E$. Thereby, the standard cohomology
$H^\bullet_{\operatorname{st}}(E)$ of $E$ can be identified with the cohomology
$H^\bullet(\mathcal{M}_E)$ of the function space on $\mathcal{M}_E$. To compute
it, we find a natural transgression map $[d_T] \colon
H^{\bullet}_{\operatorname{CE}}\big(A_E; S^{\diamond}(TM/F[-2])\big) \to
H^{\bullet+3}_{\operatorname{CE}}\big(A_E; S^{\diamond-1}(TM/F[-2])\big)$ from which we
construct a spectral sequence which converges to
$H^\bullet_{\operatorname{st}}(E)$. Moreover, we give applications to
generalized exact Courant algebroids and those arising from regular Lie
algebroids.
|
For optimal power flow problems with chance constraints, a particularly
effective method is based on a fixed point iteration applied to a sequence of
deterministic power flow problems. However, a priori, the convergence of such
an approach is not necessarily guaranteed. This article analyses the
convergence conditions for this fixed point approach, and reports numerical
experiments, including for large IEEE networks.
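The backbone of the method is an ordinary fixed-point loop, sketched below with a stand-in contraction in place of the deterministic power flow solve:

```python
import numpy as np

def fixed_point(solve_deterministic, x0, tol=1e-8, max_iter=100):
    """Iterate x <- G(x) until the update stalls, where G would solve a
    deterministic power flow with uncertainty margins frozen at the iterate."""
    x = np.asarray(x0, dtype=float)
    for it in range(max_iter):
        x_new = solve_deterministic(x)
        if np.linalg.norm(x_new - x) <= tol * (1 + np.linalg.norm(x)):
            return x_new, it + 1
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

sol, iters = fixed_point(lambda x: 0.5 * x + 1.0, x0=[0.0])   # toy G, fixed point at 2
print(sol, iters)
```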
|
We consider QCD radiative corrections to the associated production of a
heavy-quark pair ($Q{\bar Q}$) with a generic colourless system $F$ at hadron
colliders. We discuss the resummation formalism for the production of the
$Q{\bar Q}F$ system at small values of its total transverse momentum $q_T$. The
perturbative expansion of the resummation formula leads to the explicit
ingredients that can be used to apply the $q_T$ subtraction formalism to
fixed-order calculations for this class of processes. We use the $q_T$
subtraction formalism to perform a fully differential perturbative computation
for the production of a top-antitop quark pair and a Higgs boson. At
next-to-leading order we compare our results with those obtained with
established subtraction methods and we find complete agreement. We present, for
the first time, the results for the flavour off-diagonal partonic channels at
the next-to-next-to-leading order.
|
A strong edge coloring of a graph $G$ is a proper edge coloring of $G$ such
that every color class is an induced matching. The minimum number of colors
required is termed the strong chromatic index. In this paper, we determine the
exact value of the strong chromatic index of all unitary Cayley graphs. Our
investigations reveal an underlying product structure from which the unitary
Cayley graphs emerge. We then go on to give tight bounds for the strong
chromatic index of the Cartesian product of two trees, including an exact
formula for the product in the case of stars. Further, we give bounds for the
strong chromatic index of the product of a tree with a cycle. For any tree,
these bounds differ from the actual value by at most a small additive
constant (at most 2 for even cycles and at most 5 for odd cycles); moreover,
they yield the exact value when the length of the cycle is divisible
by $4$.
|
The complex matrix representation for a quaternion matrix is used in this
paper to find necessary and sufficient conditions for the existence of an
$H$-selfadjoint $m$th root of a given $H$-selfadjoint quaternion matrix. In the
process, when such an $H$-selfadjoint $m$th root exists, its construction is
also given.
|
Decentralized network theories focus on achieving consensus and on speeding
up the rate of convergence to consensus. However, network cohesion (i.e.,
maintaining consensus) during transitions between consensus values is also
important when transporting flexible structures. Deviations in the robot
positions due to loss of cohesion when moving flexible structures from one
position to another, such as uncured composite aircraft wings, can cause large
deformations, which, in turn, can result in damage. The major
contribution of this work is to develop a decentralized approach to transport
flexible objects in a cohesive manner using local force measurements, without
the need for additional communication between the robots. Additionally,
stability conditions are developed for discrete-time implementation of the
proposed cohesive transition approach, and experimental results are presented,
which show that the proposed cohesive transportation approach can reduce the
relative deformations by 85% when compared to the case without it.
|
In this paper, we present a new mathematical model for pandemics called
SUTRA. The acronym stands for Susceptible, Undetected, Tested (positive), and
Removed Approach. A novel feature of our model is that it allows estimation of
parameters from reported infection data, unlike most other models that estimate
parameter values from other considerations. This gives the model the ability to
predict the future trajectory well, as long as parameters do not change. In
addition, it is possible to quantify how the model parameter values were
affected by various interventions to control the pandemic, and/or the arrival
of new mutants. We have applied our model to analyze and predict the
progression of the COVID-19 pandemic in several countries. We present our
predictions for two countries: India and US. In both cases, the model-computed
trajectory closely matches the actual one. Moreover, our predictions were used by
entities such as the Reserve Bank of India to formulate policy.
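As a hedged illustration of such a compartmental structure (our reading of the acronym, not the authors' published equations or parameter values), a forward integration might look as follows, with only undetected cases spreading infection:

```python
import numpy as np

def sutra_like(beta, gamma, eps, days, n=1.0, u0=1e-6):
    """Euler steps of a toy Susceptible/Undetected/Tested/Removed model."""
    s, u, t, r, traj = n - u0, u0, 0.0, 0.0, []
    for _ in range(days):
        new_inf = beta * s * u / n        # only undetected cases infect
        detected = eps * u                # a fraction of undetected get tested
        du = new_inf - detected - gamma * u
        dt = detected - gamma * t
        dr = gamma * (u + t)
        s, u, t, r = s - new_inf, u + du, t + dt, r + dr
        traj.append(t)
    return np.array(traj)

print(sutra_like(beta=0.25, gamma=0.1, eps=0.02, days=5))
```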
|
The classical Jordan curve theorem for digital curves asserts that the Jordan
curve theorem remains valid in the Khalimsky plane. Since the Khalimsky plane
is a quotient space of $\mathbb R^2$ induced by a tiling of squares, it is
natural to ask for which other tilings of the plane it is possible to obtain a
similar result. In this paper we prove a Jordan curve theorem which is valid
for every locally finite tiling of $\mathbb R^2$. As a corollary of our result,
we generalize some classical Jordan curve theorems for grids of points,
including Rosenfeld's theorem.
|
The use of statistical methods in sport analytics has gained a rapidly
growing interest over the last decade, and nowadays is common practice. In
particular, the interest in understanding and predicting an athlete's
performance throughout his/her career is motivated by the need to evaluate the
efficacy of training programs, anticipate fatigue to prevent injuries and
detect unexpected or disproportionate increases in performance that might be
indicative of doping. Moreover, fast evolving data gathering technologies
require up to date modelling techniques that adapt to the distinctive features
of sports data. In this work, we propose a hierarchical Bayesian model for
describing and predicting the evolution of performance over time for shot put
athletes. To account for seasonality and heterogeneity in recorded results, we
rely both on a smooth functional contribution and on a linear mixed effect
model with heteroskedastic errors to represent the athlete-specific
trajectories. The resulting model provides an accurate description of the
performance trajectories and helps specifying both the intra- and
inter-seasonal variability of measurements. Further, the model allows for the
prediction of athletes' performance in future seasons. We apply our model to an
extensive real world data set on performance data of professional shot put
athletes recorded at elite competitions.
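Schematically, the athlete-specific trajectories can be written as (a notational sketch of the model class, not the authors' exact specification)
$$y_{ij}=f(t_{ij})+\mathbf{x}_{ij}^{\top}\boldsymbol{\beta}+b_i+\varepsilon_{ij},\qquad \varepsilon_{ij}\sim\mathcal{N}(0,\sigma^2_{ij}),$$
where $y_{ij}$ is the $j$-th result of athlete $i$, $f$ the smooth seasonal component, $b_i$ an athlete-specific random effect, and the heteroskedastic variances $\sigma^2_{ij}$ absorb intra-seasonal variability.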
|
We analyze dispersion relations of magnons in ferromagnetic nanostructures
with uniaxial anisotropy taking into account inertial terms, i.e. magnetic
nutation. Inertial effects are parametrized by damping-independent parameter
$\beta$, which allows for an unambiguous discrimination of inertial effects
from the Gilbert damping parameter $\alpha$. Analysis of the magnon dispersion
relation shows that its two branches are modified by the inertial effect, albeit
in different ways. The upper nutation branch starts at $\omega=1/\beta$, the
lower branch coincides with FMR in the long-wavelength limit and deviates from
the zero-inertia parabolic dependence $\simeq\omega_{FMR}+Dk^2$ of the exchange
magnon. Taking a realistic experimental geometry of magnetic thin films,
nanowires and nanodiscs, magnon eigenfrequencies, eigenvectors and $Q$-factors
are found to depend on the shape anisotropy. The possibility of phase-matched
magneto-elastic excitation of nutation magnons is discussed and the condition
is found to depend on $\beta$, the exchange stiffness $D$, and the acoustic
velocity.
|
In this paper, we present a neat yet effective transformer-based framework
for visual grounding, namely TransVG, to address the task of grounding a
language query to the corresponding region onto an image. The state-of-the-art
methods, including two-stage or one-stage ones, rely on a complex module with
manually-designed mechanisms to perform the query reasoning and multi-modal
fusion. However, the involvement of certain mechanisms in fusion module design,
such as query decomposition and image scene graph, makes the models easily
overfit to datasets with specific scenarios, and limits the rich interaction
between the visual and linguistic contexts. To avoid this caveat, we
propose to establish the multi-modal correspondence by leveraging transformers,
and empirically show that the complex fusion modules (\eg, modular attention
network, dynamic graph, and multi-modal tree) can be replaced by a simple stack
of transformer encoder layers with higher performance. Moreover, we
re-formulate the visual grounding as a direct coordinates regression problem
and avoid making predictions out of a set of candidates (\emph{i.e.}, region
proposals or anchor boxes). Extensive experiments are conducted on five widely
used datasets, and a series of state-of-the-art records are set by our TransVG.
We build the benchmark of transformer-based visual grounding framework and make
the code available at \url{https://github.com/djiajunustc/TransVG}.
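The architectural core, plain transformer-encoder fusion plus direct box regression from a learnable [REG] token, fits in a few lines; dimensions and names below are illustrative, not the released code.

```python
import torch
import torch.nn as nn

class TinyGroundingHead(nn.Module):
    """Fuse visual and text tokens with a transformer encoder stack and
    regress normalized (cx, cy, w, h) box coordinates from a [REG] token."""
    def __init__(self, d=256, heads=8, layers=6):
        super().__init__()
        enc = nn.TransformerEncoderLayer(d_model=d, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.reg_token = nn.Parameter(torch.zeros(1, 1, d))
        self.head = nn.Linear(d, 4)

    def forward(self, visual_tokens, text_tokens):
        b = visual_tokens.size(0)
        tokens = torch.cat([self.reg_token.expand(b, -1, -1),
                            visual_tokens, text_tokens], dim=1)
        return self.head(self.encoder(tokens)[:, 0]).sigmoid()

boxes = TinyGroundingHead()(torch.randn(2, 196, 256), torch.randn(2, 20, 256))
print(boxes.shape)    # torch.Size([2, 4])
```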
|