This work introduces ParAMS -- a versatile Python package that aims to make
parameterization workflows in computational chemistry and physics more
accessible, transparent and reproducible. We demonstrate how ParAMS facilitates
the parameter optimization for potential energy surface (PES) models, which can
otherwise be a tedious specialist task. Because of the package's modular structure, its various functionalities can easily be combined to implement a diversity of parameter optimization protocols. For example, the PES model and the parameter optimization algorithm can be selected independently.
An illustration of ParAMS' strengths is provided in two case studies: i) a
density functional-based tight binding (DFTB) repulsive potential for the
inorganic ionic crystal ZnO, and ii) a ReaxFF force field for the simulation of
organic disulfides.
|
Even though Afaan Oromo is the most widely spoken language in the Cushitic
family by more than fifty million people in the Horn and East Africa, it is
surprisingly resource-scarce from a technological point of view. The increasing number of useful documents written in English motivates investigating machine translation that can render those documents into the local language and make them easily accessible. This paper deals with implementing translation from English to Afaan Oromo and vice versa using Neural Machine Translation. The task is not yet well explored, owing to the limited size and diversity of available corpora. However, using a bilingual corpus of just over 40k sentence pairs that we collected, this study shows promising results. About a quarter of this corpus was collected via a Community Engagement Platform (CEP) implemented to enrich the parallel corpus through crowdsourced translations.
|
We introduce the notion of a bicocycle double cross product (resp. sum) Lie
group (resp. Lie algebra), and a bicocycle double cross product bialgebra,
generalizing the unified products. On the level of Lie groups, the construction yields a Lie group on the product space of two pointed manifolds, neither of which is necessarily a subgroup. On the level of Lie algebras, similarly, a Lie algebra is obtained on the direct sum of two vector spaces, neither of which is required to be a subalgebra. Finally, on the quantum level the theory produces a bialgebra on the tensor product of two (co)algebras that are not necessarily sub-bialgebras, whose semidual is a cocycle bicrossproduct bialgebra.
|
The present study deals with a scientometric analysis of 8486 bibliometric publications retrieved from the Web of Science database for the period 2008 to 2017. Data were collected and analyzed using the Bibexcel software. The study focuses on various aspects of quantitative research, such as the growth of papers
(year wise), Collaborative Index (CI), Degree of Collaboration (DC),
Co-authorship Index (CAI), Collaborative Co-efficient (CC), Modified
Collaborative Co-Efficient (MCC), Lotka's Exponent value, Kolmogorov-Smirnov
test (K-S Test).
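For reference, these collaboration measures are commonly defined in the bibliometrics literature as follows (with $f_j$ the number of papers having $j$ authors, $N$ the total number of papers, and $N_s$, $N_m$ the numbers of single- and multi-authored papers; the paper may use minor variants):
$$\mathrm{CI}=\frac{\sum_{j\ge 1} j\,f_j}{N},\qquad \mathrm{DC}=\frac{N_m}{N_s+N_m},\qquad \mathrm{CC}=1-\frac{\sum_{j\ge 1} f_j/j}{N}.$$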
|
The spontaneous breaking of parity-time ($\mathcal{PT}$) symmetry, which
yields rich critical behavior in non-Hermitian systems, has stimulated much
interest. Whereas most previous studies were performed within the
single-particle or mean-field framework, exploring the interplay between
$\mathcal{PT}$ symmetry and quantum fluctuations in a many-body setting is a
burgeoning frontier. Here, by studying the collective excitations of a Fermi
superfluid under an imaginary spin-orbit coupling, we uncover an emergent
$\mathcal{PT}$-symmetry breaking in the Anderson-Bogoliubov (AB) modes, whose
quasiparticle spectra undergo a transition from being completely real to
completely imaginary, even though the superfluid ground state retains an
unbroken $\mathcal{PT}$ symmetry. The critical point of the transition is
marked by a non-analytic kink in the speed of sound, as the latter completely
vanishes at the critical point where the system is immune to low-frequency
perturbations. These critical phenomena derive from the presence of a spectral
point gap in the complex quasiparticle dispersion, and are therefore
topological in origin.
|
The application of machine learning (ML) algorithms to turbulence modeling has shown promise over the last few years, but their application has been restricted to eddy-viscosity-based closure approaches. In this article we discuss the rationale for the application of machine learning with high-fidelity turbulence data to develop models at the level of Reynolds stress transport modeling. Based on this rationale, we compare different machine learning algorithms to determine their efficacy and robustness at modeling the different transport processes in the Reynolds stress transport equations. These data-driven algorithms include random forests, gradient-boosted trees, and neural networks. Direct numerical simulation (DNS) data for channel flow are used for both training and testing of the ML models. The optimal hyper-parameters of the ML algorithms are determined using Bayesian optimization. The efficacy of the above-mentioned algorithms is assessed in the modeling and prediction of the terms in the Reynolds stress transport equations. It was observed that all three algorithms predict the turbulence parameters with an acceptable level of accuracy. These ML models are then applied to predict the pressure-strain correlation of flow cases different from the flows used for training, to assess their robustness and generalizability. This tests the assertion that ML-based data-driven turbulence models can overcome the modeling limitations of traditional turbulence models, and that ML models trained on large amounts of data from different classes of flows can predict the flow field with reasonable accuracy for unseen flows with similar flow physics. In addition to this verification, we validate the final ML models by assessing the importance of different input features for prediction.
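As a schematic illustration of the hyper-parameter search described above (synthetic stand-in data instead of the DNS channel-flow dataset; the random forest, objective, and search ranges are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from skopt import gp_minimize
from skopt.space import Integer

# Stand-in for DNS features (mean-flow gradients, etc.) and a target transport term.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=500)

def objective(params):
    n_estimators, max_depth = params
    model = RandomForestRegressor(n_estimators=n_estimators,
                                  max_depth=max_depth, random_state=0)
    # Negative mean R^2 over a 5-fold split: gp_minimize minimizes.
    return -cross_val_score(model, X, y, cv=5, scoring="r2").mean()

space = [Integer(50, 400, name="n_estimators"), Integer(2, 15, name="max_depth")]
result = gp_minimize(objective, space, n_calls=20, random_state=0)
print("best hyper-parameters:", result.x, "CV score:", -result.fun)
```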
|
We investigate the effects of field temperature $T^{(f)}$ on the entanglement
harvesting between two uniformly accelerated detectors. For their parallel
motion, the thermal nature of fields does not produce any entanglement, and
therefore, the outcome is the same as the non-thermal situation. On the
contrary, $T^{(f)}$ affects entanglement harvesting when the detectors are in
anti-parallel motion, i.e., when detectors $A$ and $B$ are in the right and
left Rindler wedges, respectively. While for $T^{(f)}=0$ entanglement
harvesting is possible for all values of $A$'s acceleration $a_A$, in the
presence of temperature, it is possible only within a narrow range of $a_A$. In
$(1+1)$ dimensions, the range starts from specific values and extends to
infinity, and as we increase $T^{(f)}$, the minimum required value of $a_A$ for
entanglement harvesting increases. Moreover, above a critical value $a_A=a_c$
harvesting increases as we increase $T^{(f)}$, which is the opposite of the behavior for accelerations below it. In $(1+3)$ dimensions, there are several critical values when the detectors have different accelerations. Contrary to the single range in
$(1+1)$ dimensions, here harvesting is possible within several discrete ranges
of $a_A$. Interestingly, for equal accelerations, one has a single critical
point, with nature quite similar to $(1+1)$ dimensional results. We also
discuss the dependence of mutual information among these detectors on $a_A$ and
$T^{(f)}$.
|
We study thermodynamics and critical behaviors of higher-dimensional Lovelock
black holes with non-maximally symmetric horizons in the canonical ensemble of
extended phase space. The effects of the non-constant curvature of the horizon, which enters the thermodynamic quantities of third-order Lovelock black holes through two chargelike parameters, are investigated. We find that Ricci flat black holes with a nonconstant curvature horizon show critical behavior. This is an interesting feature that has not been seen for any kind of black hole in Einstein or Lovelock gravity in the literature. We examine how various interesting thermodynamic phenomena, such as the standard first-order small-large black hole phase transition, a reentrant phase transition, or a zeroth-order phase transition, occur for Ricci flat, spherical, or hyperbolic black holes with a nonconstant curvature horizon, depending on the values of the Lovelock coefficient and the chargelike parameters. While for a spherical black hole of third-order Lovelock gravity with a constant curvature horizon a phase transition is observed only for $7\leq d \leq11$, for our solutions criticality and phase transitions exist in every dimension. With a proper choice of the free parameters, a
large-small-large black hole phase transition occurs. This process is
accompanied by a finite jump of the Gibbs free energy referred to as a
zeroth-order phase transition. For the case $\kappa=-1$ a novel behavior is
found for which three critical points could exist.
|
We study from the proof complexity perspective the (informal) proof search
problem:
Is there an optimal way to search for propositional proofs?
We note that for any fixed proof system there exists a time-optimal proof
search algorithm. Using classical proof complexity results about reflection
principles we prove that a time-optimal proof search algorithm exists without
restricting proof systems iff a p-optimal proof system exists.
To characterize precisely the time proof search algorithms need for
individual formulas we introduce a new proof complexity measure based on
algorithmic information concepts. In particular, to a proof system $P$ we attach an {\bf information-efficiency function} $i_P(\tau)$ assigning a natural number to each tautology, and we show that:
- $i_P(\tau)$ characterizes the time any $P$-proof search algorithm has to use on $\tau$, and that for a fixed $P$ there is such an information-optimal algorithm,
- a proof system is information-efficiency optimal iff it is p-optimal,
- for non-automatizable systems $P$ there are formulas $\tau$ with short
proofs but having large information measure $i_P(\tau)$.
We isolate and motivate the problem to establish unconditional
super-logarithmic lower bounds for $i_P(\tau)$ where no super-polynomial size
lower bounds are known. We also point out connections of the new measure with
some topics in proof complexity other than proof search.
|
Finely tuning MPI applications and understanding the influence of key parameters (number of processes, granularity, collective operation algorithms, virtual topology, and process placement) is critical to obtain good performance on supercomputers. With the high consumption of running applications at scale, doing so solely to optimize their performance is particularly costly. Having inexpensive but faithful predictions of expected performance could be a great help for researchers and system administrators. The methodology we propose decouples the complexity of the platform, which is captured through statistical models of the performance of its main components (MPI communications, BLAS operations), from the complexity of adaptive applications by emulating the application and skipping regular non-MPI parts of the code. We demonstrate the capability of our method with High-Performance Linpack (HPL), the benchmark used to rank supercomputers in the TOP500, which requires careful tuning. We briefly present (1) how the open-source version of HPL can be slightly modified to allow a fast emulation on a single commodity server at the scale of a supercomputer. Then we present (2) an extensive (in)validation study that compares simulation with real experiments and demonstrates our ability to predict the performance of HPL within a few percent consistently. This study allows us to identify the main modeling pitfalls (e.g., spatial and temporal node variability or network heterogeneity and irregular behavior) that need to be considered. Last, we show (3) how our ``surrogate'' allows studying several subtle HPL parameter optimization problems while accounting for uncertainty on the platform.
|
In 2010, the unified gas kinetic scheme (UGKS) was proposed by Xu et al. (A
unified gas-kinetic scheme for continuum and rarefied flows, Journal of
Computational Physics, 2010). In the past decade, many numerical techniques
have been developed to improve the capability of the UGKS in the aspects of
efficiency increment, memory reduction, and physical modeling. The direct-modeling methodology of the UGKS on the discretization scale provides a general framework for constructing multiscale methods for multiscale transport processes. This paper reviews the development and extension of the UGKS in its
first decade.
|
An accurate description of electron correlation is one of the most
challenging problems in quantum chemistry. The exact electron correlation can
be obtained by means of full configuration interaction (FCI). A simple strategy
for approximating FCI at a reduced computational cost is selected CI (SCI),
which diagonalizes the Hamiltonian within only the chosen configuration space.
Recovery of the contributions of the remaining configurations is possible with
second-order perturbation theory. Here, we apply adaptive sampling
configuration interaction (ASCI) combined with molecular orbital optimizations
(ASCI-SCF) corrected with second-order perturbation theory (ASCI-SCF-PT2) for
geometry optimization by implementing the analytical nuclear gradient algorithm
for ASCI-PT2 with the Z-vector (Lagrangian) formalism. We demonstrate that for
phenalenyl radicals and anthracene, optimized geometries and the number of
unpaired electrons can be obtained at nearly the CASSCF accuracy by
incorporating PT2 corrections and extrapolating them. We demonstrate the
current algorithm's utility for optimizing the equilibrium geometries and
electronic structures of 6-ring-fused polycyclic aromatic hydrocarbons and
4-periacene.
|
Deep neural networks (DNNs) are widely used in pattern-recognition tasks for
which a human comprehensible, quantitative description of the data-generating
process, e.g., in the form of equations, cannot be achieved. While doing so,
DNNs often produce an abstract (entangled and non-interpretable) representation
of the data-generating process. This is one of the reasons why DNNs are not
extensively used in physics-signal processing: physicists generally require
their analyses to yield quantitative information about the studied systems. In
this article we use DNNs to disentangle components of oscillating time series,
and recover meaningful information. We show that, because DNNs can find useful
abstract feature representations, they can be used when prior knowledge about
the signal-generating process exists but is incomplete, as is particularly the case in "new-physics" searches. To this aim, we train our DNN on synthetic oscillating time series to perform two tasks: a regression of the signal latent parameters and signal denoising by an Autoencoder-like architecture. We show that the regression and denoising performance is similar to that of least-squares curve fits (LS-fits) initialized with the true latent parameters, even though the DNN needs no initial guesses at all. We then
explore applications in which we believe our architecture could prove useful
for time-series processing in physics, when prior knowledge is incomplete. As
an example, we employ DNNs as a tool to inform LS-fits when initial guesses are
unknown. We show that the regression can be performed on some latent
parameters, while ignoring the existence of others. Because the Autoencoder
needs no prior information about the physical model, the remaining unknown
latent parameters can still be captured, thus making use of partial prior
knowledge, while leaving space for data exploration and discoveries.
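A minimal sketch of the two-task setup described above (layer sizes, latent dimension, and the number of latent parameters are illustrative assumptions, not the architecture used in the paper):

```python
import torch
import torch.nn as nn

class DenoiseAndRegress(nn.Module):
    """Autoencoder-like denoiser plus a head regressing signal latent parameters."""
    def __init__(self, n_samples=256, n_latent=16, n_params=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_samples, 64), nn.ReLU(),
                                     nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_samples))
        self.regressor = nn.Linear(n_latent, n_params)   # e.g. frequency, amplitude, phase

    def forward(self, noisy_series):
        z = self.encoder(noisy_series)
        return self.decoder(z), self.regressor(z)        # (denoised series, latent parameters)

model = DenoiseAndRegress()
denoised, params = model(torch.randn(8, 256))            # batch of 8 synthetic series
print(denoised.shape, params.shape)                      # (8, 256) and (8, 3)
```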
|
The reliability of cardiovascular computational models depends on the
accurate solution of the hemodynamics, the realistic characterization of the
hyperelastic and electric properties of the tissues along with the correct
description of their interaction. The resulting
fluid-structure-electrophysiology interaction (FSEI) thus requires immense computational power, usually available only in large supercomputing centers, and long times to obtain results even when many CPUs are used in parallel (MPI acceleration). In recent years, graphics processing units (GPUs) have emerged
acceleration). In recent years, graphics processing units (GPUs) have emerged
as a convenient platform for high performance computing, as they allow for
considerable reductions of the time-to-solution. This approach is particularly
appealing if the tool has to support medical decisions that require solutions
within reduced times and possibly obtained by local computational resources.
Accordingly, our multi-physics solver has been ported to GPU architectures
using CUDA Fortran to tackle fast and accurate hemodynamics simulations of the
human heart without resorting to large-scale supercomputers. This work
describes the use of CUDA to accelerate the FSEI on heterogeneous clusters,
where CPUs and GPUs are used synergistically with minor modifications of the original source code. The resulting GPU-accelerated code solves a single heartbeat within a few hours (from three to ten depending on the grid resolution) running on an on-premises computing facility made of a few GPU cards, which can be easily installed in a medical laboratory or a hospital, thus opening the way to systematic computational fluid dynamics (CFD) aided diagnostics.
|
The density matrix formalism is a fundamental tool in studying various
problems in quantum information processing. In the space of density matrices,
the most well-known and physically relevant measures are the Hilbert-Schmidt
ensemble and the Bures-Hall ensemble. In this work, we propose a generalized
ensemble of density matrices, termed quantum interpolating ensemble, which is
able to interpolate between these two seemingly unrelated ensembles. As a first
step to understand the proposed ensemble, we derive the exact mean formulas of
entanglement entropies over such an ensemble generalizing several recent
results in the literature. We also derive some key properties of the
corresponding orthogonal polynomials relevant to obtaining other statistical
information of the entropies. Numerical results demonstrate the usefulness of
the proposed ensemble in estimating the degree of entanglement of quantum
states.
|
We study tight projective 2-designs in three different settings. In the
complex setting, Zauner's conjecture predicts the existence of a tight
projective 2-design in every dimension. Pandey, Paulsen, Prakash, and Rahaman
recently proposed an approach to make quantitative progress on this conjecture
in terms of the entanglement breaking rank of a certain quantum channel. We
show that this quantity is equal to the size of the smallest weighted
projective 2-design. Next, in the finite field setting, we introduce a notion
of projective 2-designs, we characterize when such projective 2-designs are
tight, and we provide a construction of such objects. Finally, in the
quaternionic setting, we show that every tight projective 2-design for H^d
determines an equi-isoclinic tight fusion frame of d(2d-1) subspaces of
R^d(2d+1) of dimension 3.
|
Quantum key distribution (QKD) provides information-theoretically secure key exchange but requires authentication of the classical data-processing channel via pre-sharing of symmetric private keys. In previous studies, the lattice-based
post-quantum digital signature algorithm Aigis-Sig, combined with public-key
infrastructure (PKI) was used to achieve high-efficiency quantum security
authentication of QKD, and its advantages in simplifying the MAN network
structure and new user entry were demonstrated. This experiment further integrates the PQC algorithm into a commercial QKD system, the Jinan field metropolitan QKD network comprising 14 user nodes and 5 optical switching nodes. The feasibility, effectiveness, and stability of the post-quantum cryptography (PQC) algorithm, as well as the advantages of replacing trusted relays with optical switching enabled by PQC authentication in a large-scale metropolitan-area QKD network, were verified. QKD with PQC authentication has potential in quantum-secure communications, specifically in metropolitan QKD networks.
|
Neural network training and validation rely on the availability of large
high-quality datasets. However, in many cases only incomplete datasets are
available, particularly in health care applications, where each patient
typically undergoes different clinical procedures or can drop out of a study.
Since the data to train the neural networks need to be complete, most studies
discard the incomplete datapoints, which reduces the size of the training data,
or impute the missing features, which can lead to artefacts. Alas, both
approaches are inadequate when a large portion of the data is missing. Here, we
introduce GapNet, an alternative deep-learning training approach that can use
highly incomplete datasets. First, the dataset is split into subsets of samples
containing all values for a certain cluster of features. Then, these subsets
are used to train individual neural networks. Finally, this ensemble of neural
networks is combined into a single neural network whose training is fine-tuned
using all complete datapoints. Using two highly incomplete real-world medical
datasets, we show that GapNet improves the identification of patients with
underlying Alzheimer's disease pathology and of patients at risk of
hospitalization due to Covid-19. By distilling the information available in
incomplete datasets without having to reduce their size or to impute missing
values, GapNet makes it possible to extract valuable information from a wide range of
datasets, benefiting diverse fields from medicine to engineering.
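A condensed sketch of this training scheme (the feature clusters, layer sizes, missingness mask, and two-stage loop are illustrative assumptions, not the exact GapNet implementation):

```python
import torch
import torch.nn as nn

clusters = [[0, 1, 2], [3, 4], [5, 6, 7]]        # feature groups with shared availability

class SubNet(nn.Module):
    def __init__(self, n_in, n_hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU(),
                                  nn.Linear(n_hidden, n_hidden), nn.ReLU())
    def forward(self, x):
        return self.body(x)

class GapNetLike(nn.Module):
    def __init__(self, clusters, n_hidden=32):
        super().__init__()
        self.clusters = clusters
        self.subnets = nn.ModuleList(SubNet(len(c), n_hidden) for c in clusters)
        self.head = nn.Linear(n_hidden * len(clusters), 1)    # binary outcome logit
    def forward(self, x):
        parts = [net(x[:, c]) for net, c in zip(self.subnets, self.clusters)]
        return self.head(torch.cat(parts, dim=1))

def train(model, params, X, y, epochs=100):
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(-1), y)
        loss.backward()
        opt.step()

X = torch.randn(200, 8)
y = (X[:, 0] + X[:, 3] > 0).float()
mask = torch.rand(200, 8) > 0.3                   # observed-entry mask (toy missingness)

model = GapNetLike(clusters)
# Stage 1: train each sub-network on the rows complete for its own feature cluster.
for sub, c in zip(model.subnets, clusters):
    rows = mask[:, c].all(dim=1)
    probe = nn.Sequential(sub, nn.Linear(32, 1))  # temporary head for this cluster
    train(probe, probe.parameters(), X[rows][:, c], y[rows])
# Stage 2: fine-tune the combined network on fully complete rows only.
complete = mask.all(dim=1)
train(model, model.parameters(), X[complete], y[complete])
```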
|
Any-to-any voice conversion (VC) aims to convert the timbre of utterances
from and to any speakers seen or unseen during training. Various any-to-any VC
approaches have been proposed, such as AUTOVC, AdaINVC, and FragmentVC. AUTOVC and AdaINVC utilize source and target encoders to disentangle the content and speaker information of the features. FragmentVC utilizes two encoders to encode source and target information and adopts cross attention to align the source and target features with similar phonetic content. Moreover, pre-trained features are adopted: AUTOVC uses d-vectors to extract speaker information, and self-supervised learning (SSL) features such as wav2vec 2.0 are used in FragmentVC to extract phonetic content information. Different from previous works, we propose S2VC, which utilizes self-supervised features as both the source and target features for the VC model. Supervised phoneme posteriorgram (PPG), which is believed to be speaker-independent and is widely used in VC to extract content information, is chosen as a strong baseline against the SSL features. Objective and subjective evaluations both show that models taking the SSL feature CPC as both source and target features outperform the one taking PPG as the source feature, suggesting that SSL features have great potential for improving VC.
|
We consider a toy model for emergence of chaos in a quantum many-body
short-range-interacting system: two one-dimensional hard-core particles in a
box, with a small mass defect as a perturbation over an integrable system, the
latter represented by two equal mass particles. To that system, we apply a
quantum generalization of Chirikov's criterion for the onset of chaos, i.e. the
criterion of overlapping resonances. There, classical nonlinear resonances
translate almost verbatim to the quantum language. Quantum mechanics intervenes
at a later stage: the resonances occupying less than one Hamiltonian eigenstate
are excluded from the chaos criterion. Resonances appear as contiguous patches
of low purity unperturbed eigenstates, separated by the groups of undestroyed
states -- the quantum analogues of the classical KAM tori.
|
We review recent numerical studies of two-dimensional (2D) Dirac fermion
theories that exhibit an unusual mechanism of topological protection against
Anderson localization. These describe surface-state quasiparticles of
time-reversal invariant, three-dimensional (3D) topological superconductors
(TSCs), subject to the effects of quenched disorder. Numerics reveal a
surprising connection between 3D TSCs in classes AIII, CI, and DIII, and 2D
quantum Hall effects in classes A, C, and D. Conventional arguments derived
from the non-linear $\sigma$-model picture imply that most TSC surface states
should Anderson localize for arbitrarily weak disorder (CI, AIII), or exhibit
weak antilocalizing behavior (DIII). The numerical studies reviewed here
instead indicate spectrum-wide surface quantum criticality, characterized by
robust eigenstate multifractality throughout the surface-state energy spectrum.
In other words, there is an "energy stack" of critical wave functions. For
class AIII, multifractal eigenstate and conductance analysis reveals identical
statistics for states throughout the stack, consistent with the class A integer
quantum-Hall plateau transition (QHPT). Class CI TSCs exhibit surface stacks of
class C spin QHPT states. Critical stacking of a third kind, possibly
associated to the class D thermal QHPT, is identified for nematic velocity
disorder of a single Majorana cone in class DIII. The Dirac theories studied
here can be represented as perturbed 2D Wess-Zumino-Novikov-Witten sigma
models; the numerical results link these to Pruisken models with the
topological angle $\vartheta = \pi$. Beyond applications to TSCs, all three
stacked Dirac theories (CI, AIII, DIII) naturally arise in the effective
description of dirty $d$-wave quasiparticles, relevant to the high-$T_c$
cuprates.
|
We consider the effects of the heat balance on the structural stability of a
preflare current layer. The problem of small perturbations is solved in the
piecewise homogeneous MHD approximation taking into account the viscosity, the
electrical and thermal conductivity, and the radiative cooling. Solution of the
problem allows the formation of an instability of a thermal nature. There is no
external magnetic field inside the current layer in equilibrium state, but it
can penetrate inside when the current layer is disturbed. Formation of a
magnetic field perturbation inside the layer creates a dedicated frequency in a
broadband disturbance subject to thermal instability. In the linear phase, the
growth time of the instability is proportional to the characteristic time of
radiative cooling of plasma and depends on the logarithmic derivatives of the
radiative cooling function with respect to the plasma parameters. The
instability results in transverse fragmentation of the current layer with a
spatial period of 1-10 Mm along the layer in a wide range of coronal plasma
parameters. The role of this instability in triggering the primary energy release in solar flares is discussed.
|
In this article, we consider a class of functions on $\mathbb{R}^d$, called
positive homogeneous functions, which interact well with certain continuous
one-parameter groups of (generally anisotropic) dilations. Generalizing the
Euclidean norm, positive homogeneous functions appear naturally in the study of
convolution powers of complex-valued functions on $\mathbb{Z}^d$. As the
spherical measure is a Radon measure on the unit sphere which is invariant
under the symmetry group of the Euclidean norm, to each positive homogeneous
function $P$, we construct a Radon measure $\sigma_P$ on $S=\{\eta \in
\mathbb{R}^d:P(\eta)=1\}$ which is invariant under the symmetry group of $P$.
With this measure, we prove a generalization of the classical polar-coordinate
integration formula and deduce a number of corollaries in this setting. We then
turn to the study of convolution powers of complex functions on $\mathbb{Z}^d$
and certain oscillatory integrals which arise naturally in that context. Armed
with our integration formula and the Van der Corput lemma, we establish sup
norm-type estimates for convolution powers; this result is new and partially
extends results of [20] and [21].
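For orientation, the classical formula being generalized here is the standard polar-coordinate decomposition on $\mathbb{R}^d$ (with $\sigma$ the surface measure on $S^{d-1}$); in the result above, $S^{d-1}$, $\sigma$, and the isotropic dilation $x\mapsto rx$ are replaced by $S=\{P=1\}$, $\sigma_P$, and the one-parameter dilation group attached to $P$:
$$\int_{\mathbb{R}^d} f(x)\,dx=\int_0^\infty\!\!\int_{S^{d-1}} f(r\omega)\,r^{d-1}\,d\sigma(\omega)\,dr.$$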
|
Cyber attacks pose critical threats to computer system security and put digital assets at excessive risk. This leads to an urgent call for an
effective intrusion detection system that can identify the intrusion attacks
with high accuracy. It is challenging to classify the intrusion events due to
the wide variety of attacks. Furthermore, in a normal network environment, a
majority of the connections are initiated by benign behaviors. The class
imbalance issue in intrusion detection forces the classifier to be biased
toward the majority/benign class, thus leaving many attack incidents undetected.
Spurred by the success of deep neural networks in computer vision and natural
language processing, in this paper, we design a new system named DeepIDEA that
takes full advantage of deep learning to enable intrusion detection and
classification. To achieve high detection accuracy on imbalanced data, we
design a novel attack-sharing loss function that can effectively move the
decision boundary towards the attack classes and eliminate the bias towards the majority/benign class. By using this loss function, DeepIDEA reflects the fact that mis-classifying an intrusion as benign should receive a higher penalty than mis-classifying one attack type as another. Extensive experimental results on three
benchmark datasets demonstrate the high detection accuracy of DeepIDEA. In
particular, compared with eight state-of-the-art approaches, DeepIDEA always
provides the best class-balanced accuracy.
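An illustrative, simplified stand-in for such a class-weighted objective (not the exact attack-sharing loss of DeepIDEA): a cross-entropy in which samples from attack classes carry a larger weight, so errors on attacks cost more than errors on benign traffic.

```python
import torch
import torch.nn.functional as F

def attack_weighted_loss(logits, labels, benign_class=0, attack_weight=5.0):
    """Cross-entropy with a heavier penalty on samples whose true class is an attack."""
    weights = torch.where(labels == benign_class,
                          torch.ones_like(labels, dtype=torch.float32),
                          torch.full_like(labels, attack_weight, dtype=torch.float32))
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_sample).mean()

# Toy batch: 4 classes (0 = benign, 1-3 = attacks), 6 samples.
logits = torch.randn(6, 4)
labels = torch.tensor([0, 1, 2, 0, 3, 1])
print(attack_weighted_loss(logits, labels))
```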
|
We establish a convergence theorem for a certain type of stochastic gradient
descent, which leads to a convergent variant of the back-propagation algorithm.
|
Many applications require the robustness, or ideally the invariance, of a
neural network to certain transformations of input data. Most commonly, this
requirement is addressed by either augmenting the training data, using
adversarial training, or defining network architectures that include the
desired invariance automatically. Unfortunately, the latter often relies on the ability to enumerate all possible transformations, which makes such approaches largely infeasible for infinite sets of transformations, such as arbitrary
rotations or scaling. In this work, we propose a method for provably invariant
network architectures with respect to group actions by choosing one element
from a (possibly continuous) orbit based on a fixed criterion. In a nutshell,
we intend to 'undo' any possible transformation before feeding the data into
the actual network. We analyze properties of such approaches, extend them to
equivariant networks, and demonstrate their advantages in terms of robustness
as well as computational efficiency in several numerical examples. In
particular, we investigate the robustness with respect to rotations of images
(which can possibly hold up to discretization artifacts only) as well as the
provable rotational and scaling invariance of 3D point cloud classification.
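A toy sketch of the 'undo the transformation' idea for planar rotations (PCA-based canonicalization of a point cloud; the sign-fixing rule and the data are illustrative, and degenerate clouds would need extra care):

```python
import numpy as np

def canonicalize(points):
    """Map a (N, 2) point cloud to a canonical representative of its rotation orbit."""
    centered = points - points.mean(axis=0)
    _, eigvecs = np.linalg.eigh(centered.T @ centered)   # eigenvalues ascending
    canon = centered @ eigvecs[:, ::-1]                  # principal axis first
    signs = np.sign((canon ** 3).sum(axis=0))            # fix the +/- axis ambiguity
    signs[signs == 0] = 1.0
    return canon * signs

rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 2)) @ np.diag([3.0, 1.0])  # anisotropic, generic cloud
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# Any downstream network sees the same input for the cloud and its rotated copy.
print(np.allclose(canonicalize(cloud), canonicalize(cloud @ R.T), atol=1e-8))
```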
|
Reranking is attracting increasing attention in recommender systems; it rearranges the input ranking list into the final ranking list to better meet user demands. Most existing methods greedily rerank candidates using the rating scores from point-wise or list-wise models. Despite their effectiveness, neglecting the mutual influence between each item and its contexts in the final ranking list often makes such greedy-strategy-based reranking methods sub-optimal. In this work, we propose a new context-wise reranking framework
named Generative Rerank Network (GRN). Specifically, we first design the
evaluator, which applies Bi-LSTM and self-attention mechanism to model the
contextual information in the labeled final ranking list and predict the
interaction probability of each item more precisely. Afterwards, we elaborate
on the generator, equipped with GRU, attention mechanism and pointer network to
select the item from the input ranking list step by step. Finally, we apply
cross-entropy loss to train the evaluator and, subsequently, policy gradient to
optimize the generator under the guidance of the evaluator. Empirical results
show that GRN consistently and significantly outperforms state-of-the-art
point-wise and list-wise methods. Moreover, GRN has achieved a performance
improvement of 5.2% on PV and 6.1% on IPV metric after the successful
deployment in one popular recommendation scenario of Taobao application.
|
The formation of the solar system's giant planets predated the ultimate epoch
of massive impacts that concluded the process of terrestrial planet formation.
Following their formation, the giant planets' orbits evolved through an episode
of dynamical instability. Several qualities of the solar system have recently
been interpreted as evidence of this event transpiring within the first ~100
Myr after the Sun's birth; around the same time as the final assembly of the
inner planets. In a series of recent papers we argued that such an early
instability could resolve several problems revealed in classic numerical
studies of terrestrial planet formation; namely the small masses of Mars and
the asteroid belt. In this paper, we revisit the early instability scenario
with a large suite of simulations specifically designed to understand the
degree to which Earth and Mars' formation are sensitive to the specific
evolution of Jupiter and Saturn's orbits. By deriving our initial terrestrial
disks directly from recent high-resolution simulations of planetesimal
accretion, our results largely confirm our previous findings regarding the
instability's efficiency of truncating the terrestrial disk outside of the
Earth-forming region in simulations that best replicate the outer solar system.
Moreover, our work validates the primordial 2:1 Jupiter-Saturn resonance within
the early instability framework as a viable evolutionary path for the solar
system. While our simulations elucidate the fragility of the terrestrial system
during the epoch of giant planet migration, many realizations yield outstanding
solar system analogs when scrutinized against a number of observational
constraints. Finally, we highlight the inability of models to form adequate
Mercury-analogs and the low eccentricities of Earth and Venus as the most
significant outstanding problems for future numerical studies to resolve.
|
5G cellular networks are being deployed all over the world and this
architecture supports ultra-dense network (UDN) deployment. Small cells have a
very important role in providing 5G connectivity to the end users. Exponential
increases in devices, data and network demands make it mandatory for the
service providers to manage handovers better, to cater to the services that a
user desires. In contrast to any traditional handover improvement scheme, we
develop a 'Deep-Mobility' model by implementing a deep learning neural network
(DLNN) to manage network mobility, utilizing in-network deep learning and
prediction. We use network key performance indicators (KPIs) to train our model
to analyze network traffic and handover requirements. In this method, RF signal
conditions are continuously observed and tracked using deep learning neural
networks such as the Recurrent neural network (RNN) or Long Short-Term Memory
network (LSTM) and system level inputs are also considered in conjunction, to
take a collective decision for a handover. We can study multiple parameters and
interactions between system events along with the user mobility, which would
then trigger a handoff in any given scenario. Here, we show the fundamental
modeling approach and demonstrate usefulness of our model while investigating
impacts and sensitivities of certain KPIs from the user equipment (UE) and
network side.
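A minimal sketch of such an RNN-based decision module (the KPI set, window length, and layer sizes are placeholders, not the Deep-Mobility model itself):

```python
import torch
import torch.nn as nn

class HandoverLSTM(nn.Module):
    """Maps a window of network KPIs (e.g. RSRP, RSRQ, SINR, load) to a handover decision."""
    def __init__(self, n_kpis=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_kpis, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)              # {stay on serving cell, hand over}
    def forward(self, kpi_window):                    # (batch, time, n_kpis)
        out, _ = self.lstm(kpi_window)
        return self.head(out[:, -1, :])               # decide from the last time step

model = HandoverLSTM()
logits = model(torch.randn(32, 50, 8))                # 32 windows of 50 KPI samples
print(logits.shape)                                   # torch.Size([32, 2])
```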
|
As machine learning systems become more powerful they also become
increasingly unpredictable and opaque. Yet, finding human-understandable
explanations of how they work is essential for their safe deployment. This
technical report illustrates a methodology for investigating the causal
mechanisms that drive the behaviour of artificial agents. Six use cases are
covered, each addressing a typical question an analyst might ask about an
agent. In particular, we show that each question cannot be addressed by pure
observation alone, but instead requires conducting experiments with
systematically chosen manipulations so as to generate the correct causal
evidence.
|
In this paper, we investigate the model reference adaptive control approach
for uncertain piecewise affine systems with performance guarantees. The
proposed approach ensures the error metric, defined as the weighted Euclidean
norm of the state tracking error, to be confined within a user-defined
time-varying performance bound. We introduce an auxiliary performance function
to construct a barrier Lyapunov function. This auxiliary performance signal is
reset at each switching instant, which prevents the transgression of the
barriers caused by the jumps of the error metric at switching instants. The
dwell time constraints are derived based on the parameters of the user-defined
performance bound and the auxiliary performance function. We also prove that
the Lyapunov function is non-increasing even at the switching instants and thus
does not impose extra dwell time constraints. Furthermore, we propose the
robust modification of the adaptive controller for the uncertain piecewise
affine systems subject to unmatched disturbances. A numerical example validates
the correctness of the proposed approach.
|
For a commutative Noetherian ring $R$ and a module-finite $R$-algebra
$\Lambda$, we study the set $\mathsf{tors} \Lambda$ (respectively,
$\mathsf{torf}\Lambda$) of torsion (respectively, torsionfree) classes of the
category of finitely generated $\Lambda$-modules. We construct a bijection from
$\mathsf{torf}\Lambda$ to $\prod_{\mathfrak{p}} \mathsf{torf}(\Lambda\otimes_R
\kappa(\mathfrak{p}))$, and an embedding $\Phi_{\rm t}$ from $\mathsf{tors}
\Lambda$ to $\mathbb{T}_R(\Lambda):=\prod_{\mathfrak{p}}
\mathsf{tors}(\Lambda\otimes_R \kappa(\mathfrak{p}))$, where $\mathfrak{p}$
runs over all prime ideals of $R$. When $\Lambda=R$, these give classifications of
torsionfree classes, torsion classes and Serre subcategories of $\mathsf{mod}
R$ due to Takahashi, Stanley-Wang and Gabriel. To give a description of
$\mathrm{Im} \Phi_{\rm t}$, we introduce the notion of compatible elements in
$\mathbb{T}_R(\Lambda)$, and prove that all elements in $\mathrm{Im} \Phi_{\rm
t}$ are compatible. We give a sufficient condition on $(R, \Lambda)$ such that
all compatible elements belong to $\mathrm{Im} \Phi_{\rm t}$ (we call $(R,
\Lambda)$ compatible in this case). For example, if $R$ is semi-local and $\dim
R \leq 1$, then $(R, \Lambda)$ is compatible. We also give a sufficient
condition in terms of silting $\Lambda$-modules. As an application, for a
Dynkin quiver $Q$, $(R, RQ)$ is compatible and we have a poset isomorphism
$\mathsf{tors} RQ \simeq \mathsf{Hom}_{\rm poset}(\mathsf{Spec} R,
\mathfrak{C}_Q)$ for the Cambrian lattice $\mathfrak{C}_Q$ of $Q$.
|
Outflows driven by active galactic nuclei (AGN) are an important channel for
accreting supermassive black holes (SMBHs) to interact with their host galaxies
and clusters. Properties of the outflows are however poorly constrained due to
the lack of kinetically resolved data of the hot plasma that permeates the
circumgalactic and intracluster space. In this work, we use a single parameter,
outflow-to-accretion mass-loading factor $m=\dot{M}_{\rm out}/\dot{M}_{\rm
BH}$, to characterize the outflows that mediate the interaction between SMBHs
and their hosts. By modeling both M87 and Perseus, and comparing the simulated
thermal profiles with the X-ray observations of these two systems, we
demonstrate that $m$ can be constrained between $200-500$. This parameter
corresponds to a bulk flow speed between $4,000-7,000\,{\rm km\,s}^{-1}$ at
around 1 kpc, and a thermalized outflow temperature between
$10^{8.7}-10^{9}\,{\rm K}$. Our results indicate that the dominant outflow
speeds in giant elliptical galaxies and clusters are much lower than in the
close vicinity of the SMBH, signaling an efficient coupling with and
deceleration by the surrounding medium on length scales below 1 kpc.
Consequently, AGNs may be efficient at launching outflows $\sim10$ times more
massive than previously uncovered by measurements of cold, obscuring material.
We also examine the mass and velocity distribution of the cold gas, which
ultimately forms a rotationally supported disk in simulated clusters. The
rarity of such disks in observations indicates that further investigations are
needed to understand the evolution of the cold gas after it forms.
|
Adversarial attacks have threatened the application of deep neural networks
in security-sensitive scenarios. Most existing black-box attacks fool the
target model by interacting with it many times and producing global
perturbations. However, global perturbations change the smooth and
insignificant background, which not only makes the perturbation more easily
perceived but also increases the query overhead. In this paper, we propose a
novel framework to perturb the discriminative areas of clean examples only
within limited queries in black-box attacks. Our framework is constructed based
on two types of transferability. The first one is the transferability of model
interpretations. Based on this property, we identify the discriminative areas
of a given clean example easily for local perturbations. The second is the
transferability of adversarial examples. It helps us to produce a local
pre-perturbation for improving query efficiency. After identifying the
discriminative areas and pre-perturbing, we generate the final adversarial
examples from the pre-perturbed example by querying the targeted model with two
kinds of black-box attack techniques, i.e., gradient estimation and random
search. We conduct extensive experiments to show that our framework can
significantly improve the query efficiency during black-box perturbing with a
high attack success rate. Experimental results show that our attacks outperform
state-of-the-art black-box attacks under various system settings.
|
Detection of visual anomalies refers to the problem of finding patterns in
different imaging data that do not conform to the expected visual appearance
and is a widely studied problem in different domains. Due to the nature of
anomaly occurrences and underlying generating processes, it is hard to
characterize them and obtain labeled data. Obtaining labeled data is especially
difficult in biomedical applications, where only trained domain experts can
provide labels, which often come in large diversity and complexity. Recently
presented approaches for unsupervised detection of visual anomalies omit the need for labeled data and demonstrate promising results in domains where anomalous samples significantly deviate from the normal appearance.
Despite promising results, the performance of such approaches still lags behind
supervised approaches and does not provide a one-fits-all solution. In this
work, we present an image-to-image translation-based framework that
significantly surpasses the performance of existing unsupervised methods and
approaches the performance of supervised methods in a challenging domain of
cancerous region detection in histology imagery.
|
In this paper, we present two new families of spatially homogeneous black hole solutions of the $z=4$ Ho\v{r}ava-Lifshitz gravity equations in $(4+1)$ dimensions with general coupling constant $\lambda$ and the special case $\lambda=1$, considering $\beta=-1/3$. The three-dimensional horizons are
considered to have Bianchi types $II$ and $III$ symmetries, and hence the
horizons are modeled on two types of Thurston $3$-geometries, namely the Nil
geometry and $H^2\times R$. Being foliated by compact 3-manifolds, the horizons
are neither spherical, hyperbolic, nor toroidal, and therefore are not of the
previously studied topological black hole solutions in Ho\v{r}ava-Lifshitz
gravity. Using the Hamiltonian formalism, we establish the conventional
thermodynamics of the solutions defining the mass and entropy of the black hole
solutions for several classes of solutions. It turns out that for both horizon geometries the area term in the entropy receives two non-logarithmic negative corrections proportional to the Ho\v{r}ava-Lifshitz parameters. Also, we show that for a proper choice of parameters the solutions can exhibit locally stable or unstable behavior.
|
In this paper, we obtain the $H^{p_1}\times H^{p_2}\times H^{p_3}\to H^p$
boundedness for trilinear Fourier multiplier operators, which is a trilinear
analogue of the multiplier theorem of Calder\'on and Torchinsky (Adv. Math. 24
: 101-171, 1977). Our result improves the trilinear estimate in the very recent
work of the authors, Lee, Heo, Hong, Park, and Yang (Math. Ann., to appear) by
additionally assuming an appropriate vanishing moment condition, which is
natural in the boundedness into the Hardy space $H^p$ for $0<p\le 1$.
|
In this article, we develop an arithmetic analogue of Fourier--Jacobi period
integrals for a pair of unitary groups of equal rank. We construct the
so-called Fourier--Jacobi cycles, which are algebraic cycles on the product of
unitary Shimura varieties and abelian varieties. We propose the arithmetic
Gan--Gross--Prasad conjecture for these cycles, which is related to central
derivatives of certain Rankin--Selberg $L$-functions, and develop a relative
trace formula approach toward this conjecture. As a necessary ingredient, we
propose the conjecture of the corresponding arithmetic fundamental lemma, and
confirm it for unitary groups of rank at most two and for the minuscule case.
|
We present two related Stata modules, r_ml_stata and c_ml_stata, for fitting
popular Machine Learning (ML) methods both in regression and classification
settings. Using the recent Stata/Python integration platform (sfi) of Stata 16,
these commands provide optimal tuning of hyper-parameters via K-fold cross-validation using grid search. More specifically, they make use of the
Python Scikit-learn API to carry out both cross-validation and outcome/label
prediction.
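A minimal Python sketch of the Scikit-learn workflow the two commands wrap (the learner, parameter grid, and data here are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
grid = {"n_estimators": [100, 300], "learning_rate": [0.03, 0.1], "max_depth": [2, 3]}

# K-fold cross-validated grid search over the hyper-parameters.
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid=grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)
labels = search.best_estimator_.predict(X)            # outcome/label prediction
```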
|
As wearable devices move toward the face (i.e. smart earbuds, glasses), there
is an increasing need to facilitate intuitive interactions with these devices.
Current sensing techniques can already detect many mouth-based gestures;
however, users' preferences of these gestures are not fully understood. In this
paper, we investigate the design space and usability of mouth-based
microgestures. We first conducted brainstorming sessions (N=16) and compiled an
extensive set of 86 user-defined gestures. Then, with an online survey (N=50),
we assessed the physical and mental demand of our gesture set and identified a
subset of 14 gestures that can be performed easily and naturally. Finally, we
conducted a remote Wizard-of-Oz usability study (N=11) mapping gestures to
various daily smartphone operations under a sitting and walking context. From
these studies, we develop a taxonomy for mouth gestures, finalize a practical
gesture set for common applications, and provide design guidelines for future
mouth-based gesture interactions.
|
We study the Hawking radiation from the five-dimensional charged static
squashed Kaluza-Klein black hole by the tunneling of charged fermions and
charged scalar particles, including the phenomenological quantum gravity
effects predicted by the generalized uncertainty principle with the minimal
measurable length. We derive corrections of the Hawking temperature to general
relativity, which are related to the energy of the emitted particle, the size
of the compact extra dimension, the charge of the black hole and the existence
of the minimal length in the squashed Kaluza-Klein geometry. We show that the
quantum gravity effect may slow down the increase of the Hawking temperature
due to the radiation, which may lead to the thermodynamic stable remnant of the
order of the Planck mass after the evaporation of the squashed Kaluza-Klein
black hole. We also find that the sparsity of the Hawking radiation may become
infinite when the mass of the squashed Kaluza-Klein black hole approaches its
remnant mass.
|
We study the existence of nontrivial solutions for a nonlinear fractional
elliptic equation in presence of logarithmic and critical exponential
nonlinearities. This problem extends [5] to fractional $N/s$-Laplacian
equations with logarithmic nonlinearity. We overcome the lack of compactness
due to the critical exponential nonlinearity by using the fractional
Trudinger-Moser inequality. The existence result is established via critical
point theory.
|
The non-equilibrium dynamics of stochastic light in a coherently-driven
nonlinear cavity resembles the equilibrium dynamics of a Brownian particle in a
scalar potential. This resemblance has been known for decades, but the
correspondence between the two systems has never been properly assessed. Here
we demonstrate that this correspondence can be exact, approximate, or break
down, depending on the cavity nonlinear response and driving frequency. For
weak on-resonance driving, the nonlinearity vanishes and the correspondence is
exact: The cavity dissipation and driving amplitude define a scalar potential,
the noise variance defines an effective temperature, and the intra-cavity field
satisfies Boltzmann statistics. For moderately strong non-resonant driving, the
correspondence is approximate: We introduce a potential that approximately
captures the nonlinear dynamics of the intra-cavity field, and we quantify the
accuracy of this approximation via deviations from Boltzmann statistics. For
very strong non-resonant driving, the correspondence breaks down: The
intra-cavity field dynamics is governed by non-conservative forces which
preclude a description based on a scalar potential only. We furthermore show
that this breakdown is accompanied by a phase transition for the intra-cavity
field fluctuations, reminiscent of a non-Hermitian phase transition. Our work
establishes clear connections between optical and stochastic thermodynamic
systems, and suggests that many fundamental results for overdamped Langevin
oscillators may be used to understand and improve resonant optical
technologies.
|
We consider the problem of consistently estimating the conditional
distribution $P(Y \in A |X)$ of a functional data object $Y=(Y(t): t\in[0,1])$
given covariates $X$ in a general space, assuming that $Y$ and $X$ are related
by a functional linear regression model. Two natural estimation methods are
proposed, based on either bootstrapping the estimated model residuals, or
fitting functional parametric models to the model residuals and estimating $P(Y
\in A |X)$ via simulation. Whether either of these methods leads to consistent
estimation depends on the consistency properties of the regression operator
estimator, and the space within which $Y$ is viewed. We show that under general
consistency conditions on the regression operator estimator, which hold for
certain functional principal component based estimators, consistent estimation
of the conditional distribution can be achieved, both when $Y$ is an element of
a separable Hilbert space, and when $Y$ is an element of the Banach space of
continuous functions. The latter results imply that sets $A$ that specify path
properties of $Y$, which are of interest in applications, can be considered.
The proposed methods are studied in several simulation experiments and in data analyses of electricity price and pollution curves.
|
The performance of face recognition systems degrades when the variability of
the acquired faces increases. Prior work alleviates this issue by either
monitoring the face quality in pre-processing or predicting the data
uncertainty along with the face feature. This paper proposes MagFace, a
category of losses that learn a universal feature embedding whose magnitude can
measure the quality of the given face. Under the new loss, it can be proven
that the magnitude of the feature embedding monotonically increases if the
subject is more likely to be recognized. In addition, MagFace introduces an
adaptive mechanism to learn well-structured within-class feature distributions
by pulling easy samples to class centers while pushing hard samples away. This
prevents models from overfitting on noisy low-quality samples and improves face
recognition in the wild. Extensive experiments conducted on face recognition,
quality assessments as well as clustering demonstrate its superiority over
state-of-the-art methods. The code is available at
https://github.com/IrvingMeng/MagFace.
|
Using a fast and accurate neural network potential we are able to
systematically explore the energy landscape of large unit cells of bulk
magnesium oxide with the minima hopping method. The potential is trained with a
focus on the near-stoichiometric compositions, in particular on suboxides,
i.e., Mg$_x$O$_{1-x}$ with $0.50<x<0.60$. Our extensive exploration
demonstrates that for bulk stoichiometric compounds, there are several new
low-energy rocksalt-like structures in which Mg atoms are octahedrally
six--coordinated and form trigonal prismatic motifs with different stacking
sequences. Furthermore, we find a dense spectrum of novel non-stoichiometric
crystal phases of Mg$_x$O$_{1-x}$ for each composition of $x$. These structures
are mostly similar to the rock salt structure with octahedral coordination and
five--coordinated Mg atoms. Due to the removal of one oxygen atom, the energy
landscape becomes more glass-like with oxygen-vacancy type structures that all
lie very close to each other energetically. For the same number of magnesium and oxygen atoms, our oxygen-deficient structures in which the vacancies are aligned along lines or planes are lower in energy than rock salt structures with randomly distributed oxygen vacancies. We also found the putative global minima
configurations for each composition of the non-stoichiometric suboxide
structures. These structures are predominantly composed of (111) slabs of the
rock salt structure which are terminated with Mg atoms at the top and bottom,
and are stacked in different sequences along the $z$-direction. Like other
Magn\'eli-type phases, these structures have properties that differ
considerably from their stoichiometric counterparts such as low lattice thermal
conductivity and high electrical conductivity.
|
The macroscopic dynamics of a droplet impacting a solid is crucially
determined by the intricate air dynamics occurring at the vanishingly small
length scale between droplet and substrate prior to direct contact. Here we
investigate the inverse problem, namely the role of air for the impact of a
horizontal flat disk onto a liquid surface, and find an equally significant
effect. Using an in-house experimental technique, we measure the free surface
deflections just before impact, with a precision of a few micrometers. Whereas
stagnation pressure pushes down the surface in the center, we observe a lift-up
under the edge of the disk, which sets in at a later stage, and which we show
to be consistent with a Kelvin-Helmholtz instability of the water-air
interface.
|
Reinforcement learning (RL) agents in human-computer interactions
applications require repeated user interactions before they can perform well.
To address this "cold start" problem, we propose a novel approach of using
cognitive models to pre-train RL agents before they are applied to real users.
After briefly reviewing relevant cognitive models, we present our general
methodological approach, followed by two case studies from our previous and
ongoing projects. We hope this position paper stimulates conversations between
RL, HCI, and cognitive science researchers in order to explore the full
potential of the approach.
|
In this paper we investigate how gradient-based algorithms such as gradient
descent, (multi-pass) stochastic gradient descent, its persistent variant, and
the Langevin algorithm navigate non-convex loss-landscapes and which of them is
able to reach the best generalization error at limited sample complexity. We
consider the loss landscape of the high-dimensional phase retrieval problem as
a prototypical highly non-convex example. We observe that for phase retrieval
the stochastic variants of gradient descent are able to reach perfect
generalization for regions of control parameters where the gradient descent
algorithm is not. We apply dynamical mean-field theory from statistical physics
to characterize analytically the full trajectories of these algorithms in their
continuous-time limit, with a warm start, and for large system sizes. We
further unveil several intriguing properties of the landscape and the
algorithms such as that the gradient descent can obtain better generalization
properties from less informed initializations.
|
Magnetic field source localization and imaging happen at different scales.
The sensing baseline ranges from the meter scale (e.g., magnetic anomaly detection) and the centimeter scale (e.g., brain field imaging) down to the nanometer scale (e.g., imaging of magnetic skyrmions and single cells). Here we show how an atomic vapor cell can be used to realize a baseline of 109.6 $\mu$m with a magnetic sensitivity of 10 pT/$\sqrt{\mathrm{Hz}}$ over 0.6-100 Hz and a dynamic range of 2062-4124 nT. We use a free induction decay (FID) scheme to suppress low-frequency noise and avoid scale-factor variation between domains due to light non-uniformity.
measurement domains are scanned by digital micro-mirror device (DMD). The
currents of 22mA, 30mA, 38mA and 44mA are applied in the coils to generate
different fields along the pumping axis which are measured respectively by
fitting the FID signals of the probe light. The residual fields of every domain
are obtained from the intercept of linearly-fitting of the measurement data
corresponding to these four currents. The coil-generated fields are calculated
by deducting the residual fields from the total fields. The results demonstrate that the hole of the shield affects both the residual and the coil-generated field distributions. The potential impact of field-distribution measurement that combines outstanding spatial resolution, sensitivity, and dynamic range is far-reaching. It could enable 3D magnetography of small objects and/or organs at the millimeter or even smaller scale.
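To illustrate the intercept-based separation described above, a minimal Python sketch follows; the currents match those quoted in the abstract, but the field values are invented placeholders and the variable names are ours.

```python
import numpy as np

# Illustrative values only; not the measured data from the experiment.
currents_mA = np.array([22.0, 30.0, 38.0, 44.0])             # applied coil currents
total_field_nT = np.array([2110.0, 2880.0, 3650.0, 4230.0])  # total field fitted from FID signals

# Linear fit: total field = (coil coefficient) * current + residual field,
# so the intercept estimates the residual field of the measurement domain.
slope_nT_per_mA, residual_nT = np.polyfit(currents_mA, total_field_nT, 1)

coil_field_nT = slope_nT_per_mA * currents_mA                 # coil-generated part per current
print(f"residual field ~ {residual_nT:.1f} nT")
print("coil-generated fields:", np.round(coil_field_nT, 1))
```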
|
This paper presents libtxsize, a library to estimate the size requirements of
arbitrary Bitcoin transactions. To account for different use cases, the library
provides estimates in bytes, virtual bytes, and weight units. In addition to
all currently existing input, output, and witness types, the library also
supports estimates for the anticipated Pay-to-Taproot transaction type, so that
estimates can be used as input for models attempting to quantify the impact of
Taproot on Bitcoin's scalability. libtxsize is based on analytic models, whose
credibility is established through first-principle analysis of transaction
types as well as exhaustive empirical validation. Consequently, the paper can
also serve as reference for different Bitcoin data and transaction types, their
semantics, and their size requirements (both from an analytic and empirical
point of view).
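For readers unfamiliar with the three size units mentioned above, the sketch below applies the standard SegWit conversions between raw bytes, weight units, and virtual bytes; it illustrates the relationships only and is not libtxsize's actual API.

```python
import math

def tx_size_units(stripped_size_bytes: int, witness_size_bytes: int) -> dict:
    """Convert a transaction's component sizes into the three common units.

    stripped_size_bytes: serialized size without witness data
    witness_size_bytes:  size of the witness data (0 for pre-SegWit transactions)
    """
    total_bytes = stripped_size_bytes + witness_size_bytes
    weight = 4 * stripped_size_bytes + witness_size_bytes   # BIP 141 weight units
    vbytes = math.ceil(weight / 4)                          # virtual bytes
    return {"bytes": total_bytes, "weight": weight, "vbytes": vbytes}

# Example with hypothetical component sizes for a small SegWit transaction.
print(tx_size_units(stripped_size_bytes=154, witness_size_bytes=107))
```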
|
We present a new approach for fast calculation of gravitational lensing
properties, including the lens potential, deflection angles, convergence, and
shear, of elliptical Navarro-Frenk-White (NFW) and Hernquist density profiles,
by approximating them by superpositions of elliptical density profiles for
which simple analytic expressions of gravitational lensing properties are
available. This model achieves fractional accuracy better than $10^{-4}$ over the range $10^{-4}$-$10^{3}$ of the radius normalized by the scale radius.
These new approximations are $\sim 300$ times faster in solving the lens
equation for a point source compared with the traditional approach resorting to
expensive numerical integrations, and are implemented in {\tt glafic} software.
|
There are situations where data relevant to a machine learning problem are
distributed among multiple locations that cannot share the data due to
regulatory, competitiveness, or privacy reasons. For example, data present in
users' cellphones, manufacturing data of companies in a given industrial
sector, or medical records located at different hospitals. Federated Learning
(FL) provides an approach to learn a joint model over all the available data
across silos. In many cases, participating sites have different data
distributions and computational capabilities. In these heterogeneous
environments, previous approaches exhibit poor performance: synchronous FL
protocols are communication efficient, but have slow learning convergence;
conversely, asynchronous FL protocols have faster convergence, but at a higher
communication cost. Here we introduce a novel Semi-Synchronous Federated
Learning protocol that mixes local models periodically with minimal idle time
and fast convergence. We show through extensive experiments that our approach
significantly outperforms previous work in data and computationally
heterogeneous environments.
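A minimal, hypothetical sketch of the semi-synchronous idea (clients train at their own pace for a fixed wall-clock interval, and the server mixes whatever local models exist at each interval boundary) is given below; it is an illustration under our own simplifying assumptions, not the authors' protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, ROUNDS = 10, 3
global_model = np.zeros(DIM)
# Heterogeneous speeds: local steps each client completes per mixing interval.
steps_per_interval = [1, 2, 4, 8, 16]

def local_update(model, steps):
    # Placeholder for local SGD: a few noisy gradient-like steps.
    for _ in range(steps):
        model = model - 0.1 * (model - rng.normal(size=DIM))
    return model

for _ in range(ROUNDS):
    # Every client trains for the same wall-clock interval (no idle time),
    # completing a different number of local steps.
    local_models = [local_update(global_model.copy(), s) for s in steps_per_interval]
    # Periodic mixing: average whatever each client has at the interval boundary.
    global_model = np.mean(local_models, axis=0)

print(global_model.round(3))
```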
|
Measures of algorithmic fairness often do not account for human perceptions
of fairness that can substantially vary between different sociodemographics and
stakeholders. The FairCeptron framework is an approach for studying perceptions
of fairness in algorithmic decision making such as in ranking or
classification. It supports (i) studying human perceptions of fairness and (ii)
comparing these human perceptions with measures of algorithmic fairness. The
framework includes fairness scenario generation, fairness perception
elicitation and fairness perception analysis. We demonstrate the FairCeptron
framework by applying it to a hypothetical university admission context where
we collect human perceptions of fairness in the presence of minorities. An
implementation of the FairCeptron framework is openly available, and it can
easily be adapted to study perceptions of algorithmic fairness in other
application contexts. We hope our work paves the way towards elevating the role
of studies of human fairness perceptions in the process of designing
algorithmic decision making systems.
|
For a fixed graph $H$ and for arbitrarily large host graphs $G$, the number
of homomorphisms from $H$ to $G$ and the number of subgraphs isomorphic to $H$
contained in $G$ have been extensively studied in extremal graph theory and
graph limits theory when the host graphs are allowed to be dense. This paper
addresses the case when the host graphs are robustly sparse and proves a
general theorem that solves a number of open questions proposed since the 1990s and
strengthens a number of results in the literature.
We prove that for any graph $H$ and any set ${\mathcal H}$ of homomorphisms
from $H$ to members of a hereditary class ${\mathcal G}$ of graphs, if
${\mathcal H}$ satisfies a natural and mild condition, and contracting disjoint
subgraphs of radius $O(\lvert V(H) \rvert)$ in members of ${\mathcal G}$ cannot
create a graph with large edge-density, then an obvious lower bound for the
size of ${\mathcal H}$ gives a good estimation for the size of ${\mathcal H}$.
This result determines the maximum number of $H$-homomorphisms, the maximum number of $H$-subgraphs, and the maximum number of $H$-induced subgraphs in graphs
in any hereditary class with bounded expansion up to a constant factor; it also
determines the exact value of the asymptotic logarithmic density for
$H$-homomorphisms, $H$-subgraphs and $H$-induced subgraphs in graphs in any
hereditary nowhere dense class. Hereditary classes with bounded expansion
include (topological) minor-closed families and many classes of graphs with
certain geometric properties; nowhere dense classes are the most general sparse
classes in sparsity theory. Our machinery also allows us to determine the
maximum number of $H$-subgraphs in the class of all $d$-degenerate graphs with
any fixed $d$.
|
The multiplicative and additive compounds of a matrix play an important role
in several fields of mathematics including geometry, multi-linear algebra,
combinatorics, and the analysis of nonlinear time-varying dynamical systems.
There is a growing interest in applications of these compounds, and their
generalizations, in systems and control theory. This tutorial paper provides a
gentle introduction to these topics with an emphasis on the geometric
interpretation of the compounds, and surveys some of their recent applications.
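As a small worked example of the multiplicative compound discussed above: the $k$-th multiplicative compound collects all $k \times k$ minors of a matrix and satisfies the Cauchy-Binet property $(AB)^{(k)} = A^{(k)} B^{(k)}$. The sketch below, which is ours rather than the paper's, checks this numerically.

```python
import numpy as np
from itertools import combinations

def mult_compound(A: np.ndarray, k: int) -> np.ndarray:
    """k-th multiplicative compound: matrix of all k-by-k minors of A,
    with rows/columns indexed by k-subsets in lexicographic order."""
    n = A.shape[0]
    idx = list(combinations(range(n), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx] for r in idx])

rng = np.random.default_rng(1)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

# Cauchy-Binet: the compound of a product is the product of the compounds.
lhs = mult_compound(A @ B, 2)
rhs = mult_compound(A, 2) @ mult_compound(B, 2)
print(np.allclose(lhs, rhs))  # True
```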
|
Gaussian distributions can be generalized from Euclidean space to a wide
class of Riemannian manifolds. Gaussian distributions on manifolds are harder
to make use of in applications since the normalisation factors, which we will
refer to as partition functions, are complicated, intractable integrals in
general that depend in a highly non-linear way on the mean of the given
distribution. Nonetheless, on Riemannian symmetric spaces, the partition
functions are independent of the mean and reduce to integrals over finite
dimensional vector spaces. These are generally still hard to compute
numerically when the dimension (more precisely the rank $N$) of the underlying
symmetric space gets large. On the space of positive definite Hermitian
matrices, it is possible to compute these integrals exactly using methods from
random matrix theory and the so-called Stieltjes-Wigert polynomials. In other
cases of interest to applications, such as the space of symmetric positive
definite (SPD) matrices or the Siegel domain (related to block-Toeplitz
covariance matrices), these methods seem not to work quite as well.
Nonetheless, it remains possible to compute leading order terms in a large $N$
limit, which provide increasingly accurate approximations as $N$ grows. This
limit is inspired by realizing a given partition function as the partition
function of a zero-dimensional quantum field theory or even Chern-Simons
theory. From this point of view the large $N$ limit arises naturally and
saddle-point methods, Feynman diagrams, and certain universalities that relate
different spaces emerge.
|
A neoclassically optimized compact stellarator with simple coils has been
designed. The magnetic field of the new stellarator is generated by only four
planar coils including two interlocking coils of elliptical shape and two
circular poloidal field coils. The interlocking coil topology is the same as
that of the Columbia Non-neutral Torus (CNT). The new configuration was
obtained by minimizing the effective helical ripple directly via the shape of
the two interlocking coils. The optimized compact stellarator has very low
effective ripple in the plasma core implying excellent neoclassical
confinement. This is confirmed by the results of the drift-kinetic code SFINCS
showing that the particle diffusion coefficient of the new configuration is one
order of magnitude lower than CNT's.
|
Given a compact Riemann surface $\Sigma$ of genus $g_\Sigma\, \geq\, 2$, and
an effective divisor $D\, =\, \sum_i n_i x_i$ on $\Sigma$ with
$\text{degree}(D)\, <\, 2(g_\Sigma -1)$, there is a unique cone metric on
$\Sigma$ of constant negative curvature $-4$ such that the cone angle at each
$x_i$ is $2\pi n_i$ (see McOwen and Troyanov [McO,Tr]). We describe the Higgs
bundle corresponding to this uniformization associated to the above conical
metric. We also give a family of Higgs bundles on $\Sigma$ parametrized by a
nonempty open subset of $H^0(\Sigma,\,K_\Sigma^{\otimes 2}\otimes{\mathcal
O}_\Sigma(-2D))$ that correspond to conical metrics of the above type on moving
Riemann surfaces. These are inspired by Hitchin's results in [Hi1], for the
case $D\,=\, 0$.
|
Package-to-group recommender systems recommend a set of unified items to a
group of people. Different from conventional settings, it is not easy to
measure the utility of group recommendations because it involves more than one
user. In particular, fairness is crucial in group recommendations. Even if some
members in a group are substantially satisfied with a recommendation, it is
undesirable if other members are ignored to increase the total utility. Many
methods for evaluating and applying the fairness of group recommendations have
been proposed in the literature. However, all these methods maximize the score
and output only one package. This is in contrast to conventional recommender
systems, which output several (e.g., top-$K$) candidates. This can be
problematic because a group can be dissatisfied with the recommended package
owing to some unobserved reasons, even if the score is high. To address this
issue, we propose a method to enumerate fair packages efficiently. Our method
furthermore supports filtering queries, such as top-$K$ and intersection, to
select favorite packages when the list is long. We confirm that our algorithm
scales to large datasets and can balance several aspects of the utility of the
packages.
|
Milwaukee's 53206 ZIP code, located on the city's near North Side, has drawn
considerable attention for its poverty and incarceration rates, as well as for
its large proportion of vacant properties. As a result, it has benefited from
targeted policies at the city level. Keeping in mind that ZIP codes are often
not the most effective unit of geographic analysis, this study investigates
Milwaukee's socioeconomic conditions at the block group level. These smaller
areas' statistics are then compared with those of their corresponding ZIP
codes. The 53206 ZIP code is compared against others in Milwaukee for eight
socioeconomic variables and is found to be near the extreme end of most
rankings. This ZIP code would also be among Chicago's most extreme areas, but
would lie near the middle of the rankings if located in Detroit. Parts of other
ZIP codes, which are often adjacent, are statistically similar to 53206,
however--suggesting that a focus solely on ZIP codes, while a convenient
shorthand, might overlook neighborhoods that have similar need for investment.
A multivariate index created for this study performs similarly to a standard
multivariate index of economic deprivation if spatial correlation is taken into
account, confirming that poverty and other socioeconomic stresses are
clustered, both in the 53206 ZIP code and across Milwaukee.
|
The Coulomb fission mechanism may take place if the maximum Coulomb-excitation
energy transfer in a reaction exceeds the fission barrier of either the
projectile or target. This condition is satisfied by all the reactions used for
the earlier blocking measurements except one, the $^{208}$Pb + natural Ge crystal reaction, where the measured timescale was below the measuring limit of the blocking technique (< 1 as). Hence, inclusion of Coulomb fission in the data analysis of the blocking experiments leads us to interpret that measured times longer than a few attoseconds (about 2-2.5 as) belong to the Coulomb fission timescale, whereas those shorter than 1 as are due to quasifission. Consequently, this finding resolves the critical discrepancies
between the fission timescale measurements using the nuclear and blocking
techniques. This, in turn, validates the fact that the quasifission timescale
is indeed of the order of zeptoseconds in accordance with the nuclear
experiments and theories. It thus provides a radical input in understanding the
reaction mechanism for heavy element formation via fusion-evaporation processes.
|
Consider the family of bounded degree graphs in any minor-closed family (such
as planar graphs). Let d be the degree bound and n be the number of vertices of
such a graph. Graphs in these classes have hyperfinite decompositions, where,
for a sufficiently small $\epsilon > 0$, one removes $\epsilon d n$ edges to get connected components of size independent of n. An important tool for sublinear algorithms
and property testing for such classes is the partition oracle, introduced by
the seminal work of Hassidim-Kelner-Nguyen-Onak (FOCS 2009). A partition oracle
is a local procedure that gives consistent access to a hyperfinite
decomposition, without any preprocessing. Given a query vertex v, the partition
oracle outputs the component containing v in time independent of n. All the
answers are consistent with a single hyperfinite decomposition. The partition
oracle of Hassidim et al. runs in time $d^{\mathrm{poly}(d/\epsilon)}$ per query. They pose the open problem of whether $\mathrm{poly}(d/\epsilon)$-time partition oracles exist. Levi-Ron (ICALP 2013) give a refinement of the previous approach, to get a partition oracle that runs in time $d^{\log(d/\epsilon)}$ per query. In this paper, we resolve this open problem and give $\mathrm{poly}(d/\epsilon)$-time partition oracles for bounded
degree graphs in any minor-closed family. Unlike the previous line of work
based on combinatorial methods, we employ techniques from spectral graph
theory. We build on a recent spectral graph theoretical toolkit for
minor-closed graph families, introduced by the authors to develop efficient
property testers. A consequence of our result is a $\mathrm{poly}(d/\epsilon)$-query tester for any monotone and additive property of minor-closed families (such as bipartite planar graphs). Our result also gives $\mathrm{poly}(d/\epsilon)$-query algorithms for additive $\epsilon n$-approximations for problems such as maximum matching, minimum vertex
cover, maximum independent set, and minimum dominating set for these graph
families.
|
This arXiv report provides a short introduction to the information-theoretic
measure proposed by Chen and Golan in 2016 for analyzing machine- and
human-centric processes in data intelligence workflows. This introduction was
compiled based on several appendices written to accompany a few research papers
on topics of data visualization and visual analytics. Although the original
2016 paper and the follow-on papers were mostly published in the field of
visualization and visual analytics, the cost-benefit measure can help explain
the informative trade-off in a wide range of data intelligence phenomena
including machine learning, human cognition, language development, and so on.
Meanwhile, there is an ongoing effort to improve its mathematical properties in
order to make it more intuitive and usable in practical applications as a
measurement tool.
|
Face recognition (FR) using deep convolutional neural networks (DCNNs) has
seen remarkable success in recent years. One key ingredient of DCNN-based FR is
the appropriate design of a loss function that ensures discrimination between
various identities. The state-of-the-art (SOTA) solutions utilise normalised
Softmax loss with additive and/or multiplicative margins. Despite being
popular, these Softmax+margin based losses are not theoretically motivated and
the effectiveness of a margin is justified only intuitively. In this work, we
utilise an alternative framework that offers a more direct mechanism of
achieving discrimination among the features of various identities. We propose a
novel loss that is equivalent to a triplet loss with proxies and an implicit
mechanism of hard-negative mining. We give theoretical justification that
minimising the proposed loss ensures a minimum separability between all
identities. The proposed loss is simple to implement and does not require heavy
hyper-parameter tuning as in the SOTA solutions. We give empirical evidence
that despite its simplicity, the proposed loss consistently achieves SOTA
performance in various benchmarks for both high-resolution and low-resolution
FR tasks.
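Since the proposed loss is described as equivalent to a triplet loss with proxies and implicit hard-negative mining, the following PyTorch sketch illustrates that general family on normalized features; it is not the authors' exact loss, and the margin value is a placeholder.

```python
import torch
import torch.nn.functional as F

def proxy_triplet_loss(features, labels, proxies, margin=0.3):
    """Triplet-style loss with class proxies and hard-negative mining (illustrative sketch).

    features: (B, D) embeddings, labels: (B,) class ids, proxies: (C, D) learnable class proxies.
    """
    f = F.normalize(features, dim=1)
    p = F.normalize(proxies, dim=1)
    dist = torch.cdist(f, p)                                   # (B, C) feature-to-proxy distances
    mask = F.one_hot(labels, num_classes=dist.size(1)).bool()  # one positive proxy per row
    pos = dist[mask]                                           # distance to own-class proxy
    neg = dist.masked_fill(mask, float("inf")).min(dim=1).values  # hardest (closest) negative proxy
    return F.relu(pos - neg + margin).mean()

# Toy usage with random tensors (shapes only; no training loop shown).
feats = torch.randn(8, 128)
labels = torch.randint(0, 10, (8,))
proxies = torch.randn(10, 128, requires_grad=True)
print(proxy_triplet_loss(feats, labels, proxies))
```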
|
Bimetallic nanoparticles (BNPs) exhibit diverse morphologies such as
core-shell, Janus, onion-like, quasi-Janus, and homogeneous structures.
Although extensive effort has been directed towards understanding the
equilibrium configurations of BNPs, kinetic mechanisms involved in their
development have not been explored systematically. Since these systems often
contain a miscibility gap, experimental studies have alluded to spinodal
decomposition (SD) as a likely mechanism for the formation of such structures.
We present a novel phase-field model for confined (embedded) systems to study
SD-induced morphological evolution within a BNP. It initiates with the
formation of compositionally modulated rings as a result of surface-directed SD
and eventually develops into core-shell or Janus structures due to
coarsening/breakdown of the rings. The final configuration depends crucially on
contact angle and particle size: Janus is favored at smaller sizes and higher
contact angles. Our simulations also illustrate the formation of metastable,
kinetically trapped structures as a result of competition between capillarity
and diffusion.
|
We consider the vector space $E_{\rho,p}$ of entire functions of finite
order, whose types are not more than $p>0$, endowed with Frechet topology,
which is generated by a sequence of weighted norms. We call a function $f\in
E_{\rho,p}$ {\it typical} if it is surjective and has an infinite number
critical points such that each of them is non-degenerate and all the values of
$f$ at these points are pairwise different. We prove that the set of all
typical functions contains a set which is $G_\delta$ and dense in $E_{\rho,p}$.
Furthermore, we show that the inverse of any typical function has a Riemann surface whose monodromy group coincides with the finitary symmetric group of permutations of the naturals, which is unsolvable in the following strong sense: it does not have a normal tower of subgroups whose factor groups are either abelian or finite. As a consequence of these facts and Topological Galois Theory, we obtain that generically (in the above sense) for $f\in E_{\rho,p}$ the solution of the equation $f(w)=z$ cannot be represented via $z$ and complex constants by a finite number
of the following actions: algebraic operations (i.e., rational ones and
solutions of polynomial equations) and quadratures (in particular,
superpositions with elementary functions).
|
The standard diffusive spreading, characterized by a Gaussian distribution
with mean square displacement that grows linearly with time, can break down,
for instance, under the presence of correlations and heterogeneity. In this
work, we consider the spread of a population of fractional (long-time
correlated) Brownian walkers, with time-dependent and heterogeneous
diffusivity. We aim to obtain the possible scenarios related to these
individual-level features from the observation of the temporal evolution of the
population spatial distribution. We develop and discuss the possibility and
limitations of this connection for the broad class of self-similar diffusion
processes. Our results are presented in terms of a general framework, which is
then used to address well-known processes, such as Laplace diffusion, nonlinear
diffusion, and their extensions.
|
The existence of massive compact stars $(M\gtrsim 2.1 M_{\odot})$ implies
that the conformal limit of the speed of sound $c_s^2=1/3$ is violated if those
stars have a crust of ordinary nuclear matter. Here we show that, if the most
massive objects are strange quark stars, i.e. stars entirely composed of
quarks, the conformal limit can be respected while observational limits on
those objects are also satisfied. By using astrophysical data associated with
those massive stars, derived from electromagnetic and gravitational wave
signals, we show, within a Bayesian analysis framework and by adopting a
constant speed of sound equation of state, that the posterior distribution of
$c_s^2$ is peaked around 0.3, and the maximum mass of the most probable
equation of state is $\sim 2.13 M_{\odot}$. We discuss which new data would
require a violation of the conformal limit even when considering strange quark
stars, in particular we analyze the possibility that the maximum mass of
compact stars is larger than $2.5M_{\odot}$, as it would be if the secondary
component of GW190814 is a compact star and not a black hole. Finally, we
discuss how the new data for PSR J0740+6620 obtained by the NICER collaboration
compare with our analysis (not based on them) and with other possible
interpretations.
|
From Swift monitoring of a sample of active galactic nuclei (AGN) we found a
transient X-ray obscuration event in Seyfert-1 galaxy NGC 3227, and thus
triggered our joint XMM-Newton, NuSTAR, and Hubble Space Telescope (HST)
observations to study this event. Here in the first paper of our series we
present the broadband continuum modelling of the spectral energy distribution
(SED) for NGC 3227, extending from near infrared (NIR) to hard X-rays. We use
our new spectra taken with XMM-Newton, NuSTAR, and HST/COS in 2019, together
with archival unobscured XMM-Newton, NuSTAR, and HST/STIS data, in order to
disentangle various spectral components of NGC 3227 and recover the underlying
continuum. We find the observed NIR-optical-UV continuum is explained well by
an accretion disk blackbody component (Tmax = 10 eV), which is internally
reddened by E(B-V) = 0.45 with a Small Magellanic Cloud (SMC) extinction law.
We derive the inner radius (12 Rg) and the accretion rate (0.1 solar mass per
year) of the disk by modelling the thermal disk emission. The internal
reddening in NGC 3227 is most likely associated with outflows from the dusty
AGN torus. In addition, an unreddened continuum component is also evident,
which likely arises from scattered radiation, associated with the extended
narrow-line region (NLR) of NGC 3227. The extreme ultraviolet (EUV) continuum,
and the 'soft X-ray excess', can be explained with a 'warm Comptonisation'
component. The hard X-rays are consistent with a power-law and a neutral
reflection component. The intrinsic bolometric luminosity of the AGN in NGC
3227 is about 2.2e+43 erg/s in 2019, corresponding to 3% Eddington luminosity.
Our continuum modelling of the new triggered data of NGC 3227 requires the
presence of a new obscuring gas with column density NH = 5e+22 cm^-2, partially
covering the X-ray source (Cf = 0.6).
|
No quantum circuit can turn a completely unknown unitary gate into its
coherently controlled version. Yet, coherent control of unknown gates has been
realised in experiments, making use of a different type of initial resources.
Here, we formalise the task achieved by these experiments, extending it to the
control of arbitrary noisy channels, and to more general types of control
involving higher dimensional control systems. For the standard notion of
coherent control, we identify the information-theoretic resource for
controlling an arbitrary quantum channel on a $d$-dimensional system:
specifically, the resource is an extended quantum channel acting as the
original channel on a $d$-dimensional sector of a $(d+1)$-dimensional system.
Using this resource, arbitrary controlled channels can be built with a
universal circuit architecture. We then extend the standard notion of control
to more general notions, including control of multiple channels with possibly
different input and output systems. Finally, we develop a theoretical
framework, called supermaps on routed channels, which provides a compact
representation of coherent control as an operation performed on the extended
channels, and highlights the way the operation acts on different sectors.
|
This paper presents a taxonomy that allows defining the fault tolerance
regimes fail-operational, fail-degraded, and fail-safe in the context of
automotive systems. Fault tolerance regimes such as these are widely used in
recent publications related to automated driving, yet without definitions. This
largely holds true for automotive safety standards, too. We show that fault
tolerance regimes defined in scientific publications related to the automotive
domain are partially ambiguous as well as taxonomically unrelated. The
presented taxonomy is based on terminology stemming from ISO 26262 as well as
from systems engineering. It uses four criteria to distinguish fault tolerance
regimes. In addition to fail-operational, fail-degraded, and fail-safe, the
core terminology consists of operational and fail-unsafe. These terms are
supported by definitions of available performance, nominal performance,
functionality, and a concise definition of the safe state. For verification, we
show by means of two examples from the automotive domain that the taxonomy can
be applied to hierarchical systems of different complexity.
|
We propose Preferential MoE, a novel human-ML mixture-of-experts model that
augments human expertise in decision making with a data-based classifier only
when necessary for predictive performance. Our model exhibits an interpretable
gating function that provides information on when human rules should be
followed or avoided. The gating function is trained to maximize the use of human-based rules while minimizing classification errors. We propose solving a coupled
multi-objective problem with convex subproblems. We develop approximate
algorithms and study their performance and convergence. Finally, we demonstrate
the utility of Preferential MoE on two clinical applications for the treatment
of Human Immunodeficiency Virus (HIV) and management of Major Depressive
Disorder (MDD).
|
The rapid expansion of distributed energy resources (DERs) is one of the most
significant changes to electricity systems around the world. Examples of DERs
include solar panels, small natural gas-fueled generators, combined heat and
power plants, etc. Due to the small supply capacities of these DERs, it is
impractical for them to participate directly in the wholesale electricity
market. We study in this paper an efficient aggregation model where a
profit-maximizing aggregator procures electricity from DERs and sells it in the wholesale market. The interaction between the aggregator and the DER owners
is modeled as a Stackelberg game: the aggregator adopts two-part pricing by
announcing a participation fee and a per-unit price of procurement for each DER
owner, and the DER owner responds by choosing her payoff-maximizing energy
supplies. We show that our proposed model preserves full market efficiency,
i.e., the social welfare achieved by the aggregation model is the same as that
when DERs participate directly in the wholesale market. We also note that
two-part pricing is critical for market efficiency, and illustrate via an
example that with one-part pricing, there will be an efficiency loss from DER
aggregation, due to the profit-seeking behavior of the aggregator.
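A purely illustrative numeric sketch of the two-part pricing logic follows, assuming a DER owner with a hypothetical quadratic generation cost; the numbers and the cost model are ours, not the paper's.

```python
# Illustrative two-part pricing example with a quadratic-cost DER owner.
a = 0.5          # owner's cost coefficient: cost(q) = a * q**2
price = 40.0     # per-unit procurement price announced by the aggregator ($/MWh)
fee = 300.0      # participation fee charged by the aggregator ($)

# Owner's best response: maximize price*q - a*q**2, giving q* = price / (2a).
q_star = price / (2 * a)
gross_surplus = price * q_star - a * q_star**2
net_payoff = gross_surplus - fee   # the fee shifts surplus without changing q*

print(f"supply q* = {q_star} MWh")
print(f"owner surplus before fee = {gross_surplus}, after fee = {net_payoff}")
print("participates" if net_payoff >= 0 else "opts out")
```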
|
Learning and reasoning over graphs is increasingly done by means of
probabilistic models, e.g. exponential random graph models, graph embedding
models, and graph neural networks. When graphs are modeling relations between
people, however, they will inevitably reflect biases, prejudices, and other
forms of inequity and inequality. An important challenge is thus to design
accurate graph modeling approaches while guaranteeing fairness according to the
specific notion of fairness that the problem requires. Yet, past work on the
topic remains scarce, is limited to debiasing specific graph modeling methods,
and often aims to ensure fairness in an indirect manner.
We propose a generic approach applicable to most probabilistic graph modeling
approaches. Specifically, we first define the class of fair graph models
corresponding to a chosen set of fairness criteria. Given this, we propose a
fairness regularizer defined as the KL-divergence between the graph model and
its I-projection onto the set of fair models. We demonstrate that using this
fairness regularizer in combination with existing graph modeling approaches
efficiently trades-off fairness with accuracy, whereas the state-of-the-art
models can only make this trade-off for the fairness criterion that they were
specifically designed for.
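In symbols, and using the standard convention that the I-projection of a distribution onto a set minimizes the KL-divergence from within that set, the regularized objective sketched above might be written as follows (notation ours; the argument order of the regularizing divergence follows the wording of this abstract rather than the paper's equations).

```latex
% Hedged notational sketch; \mathcal{F} denotes the chosen set of fair graph models.
\Pi_{\mathcal{F}}(P_\theta) = \arg\min_{Q \in \mathcal{F}} D_{\mathrm{KL}}\!\left(Q \,\middle\|\, P_\theta\right)
\quad \text{(I-projection of the model onto the fair set)},
\qquad
\min_{\theta} \; \mathcal{L}_{\mathrm{data}}(\theta)
 + \lambda \, D_{\mathrm{KL}}\!\left(P_\theta \,\middle\|\, \Pi_{\mathcal{F}}(P_\theta)\right).
```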
|
This paper reports a comprehensive study on the applicability of ultra-scaled
ferroelectric FinFETs with 6 nm thick hafnium zirconium oxide layer for
neuromorphic computing in the presence of process variation, flicker noise, and
device aging. An intricate study has been conducted about the impact of such
variations on the inference accuracy of pre-trained neural networks consisting
of analog, quaternary (2-bit/cell) and binary synapse. A pre-trained neural
network with 97.5% inference accuracy on the MNIST dataset has been adopted as
the baseline. Process variation, flicker noise, and device aging
characterization have been performed and a statistical model has been developed
to capture all these effects during neural network simulation. Extrapolated retention above 10 years has been achieved for the binary read-out procedure. We
have demonstrated that the impact of (1) retention degradation due to the oxide
thickness scaling, (2) process variation, and (3) flicker noise can be abated
in ferroelectric FinFET-based binary neural networks, which exhibit superior performance over quaternary and analog neural networks, amidst all variations.
The performance of a neural network is the result of coalesced performance of
device, architecture and algorithm. This research corroborates the
applicability of deeply scaled ferroelectric FinFETs for non-von Neumann
computing with proper combination of architecture and algorithm.
|
The holographic light-front QCD framework provides a unified nonperturbative
description of the hadron mass spectrum, form factors and quark distributions.
In this article we extend holographic QCD in order to describe the gluonic
distribution in both the proton and pion from the coupling of the metric
fluctuations induced by the spin-two Pomeron with the energy momentum tensor in
anti--de Sitter space, together with constraints imposed by the Veneziano
model, without additional free parameters. The gluonic and quark
distributions are shown to have significantly different effective QCD scales.
|
Filinski constructed a symmetric lambda-calculus consisting of expressions
and continuations which are symmetric, and functions which have duality. In his
calculus, functions can be encoded to expressions and continuations using
primitive operators. That is, the duality of functions is not derived in the
calculus but adopted as a principle of the calculus. In this paper, we propose
a simple symmetric lambda-calculus corresponding to the negation-free natural
deduction based bilateralism in proof-theoretic semantics. In our calculus,
continuation types are represented as not negations of formulae but formulae
with negative polarity. Function types are represented as the implication and
but-not connectives in intuitionistic and paraconsistent logics, respectively.
Our calculus is not only simple but also powerful, as it includes a call-by-value calculus corresponding to the call-by-value dual calculus invented by Wadler.
We show that mutual transformations between expressions and continuations are
definable in our calculus to justify the duality of functions. We also show
that every typable function has dual types. Thus, the duality of functions is derived from bilateralism.
|
For a pair of polynomials with real or complex coefficients, given in any
particular basis, the problem of finding their GCD is known to be ill-posed. An
answer is still desired for many applications, however. Hence, looking for a
GCD of so-called approximate polynomials where this term explicitly denotes
small uncertainties in the coefficients has received significant attention in
the field of hybrid symbolic-numeric computation. In this paper we give an
algorithm, based on one of Victor Ya. Pan, to find an approximate GCD for a
pair of approximate polynomials given in a Lagrange basis. More precisely, we
suppose that these polynomials are given by their approximate values at
distinct known points. We first find each of their roots by using a Lagrange
basis companion matrix for each polynomial, cluster the roots of each
polynomial to identify multiple roots, and then "marry" the two polynomials to
find their GCD. At no point do we change to the monomial basis, thus preserving
the good conditioning properties of the original Lagrange basis. We discuss
advantages and drawbacks of this method. The computational cost is dominated by
the rootfinding step; unless special-purpose eigenvalue algorithms are used,
the cost is cubic in the degrees of the polynomials. In principle, this cost
could be reduced but we do not do so here.
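To make the clustering and "marrying" steps concrete, here is a short sketch in which the roots of the two polynomials are assumed to be already available (in the paper they come from a Lagrange-basis companion matrix for each polynomial); the helper names and tolerance are ours.

```python
import numpy as np

def cluster_roots(roots, tol=1e-6):
    """Group nearly coincident roots and return (representative, multiplicity) pairs."""
    roots = sorted(roots, key=lambda z: (z.real, z.imag))
    clusters = []
    for z in roots:
        if clusters and abs(z - clusters[-1][0]) < tol:
            rep, m = clusters[-1]
            clusters[-1] = ((rep * m + z) / (m + 1), m + 1)  # running mean, bump multiplicity
        else:
            clusters.append((z, 1))
    return clusters

def approximate_gcd_roots(roots_p, roots_q, tol=1e-6):
    """'Marry' the two root sets: common clusters, with the smaller multiplicity."""
    gcd_roots = []
    for zp, mp in cluster_roots(roots_p, tol):
        for zq, mq in cluster_roots(roots_q, tol):
            if abs(zp - zq) < tol:
                gcd_roots.extend([(zp + zq) / 2] * min(mp, mq))
    return gcd_roots

# Toy example: p has roots {1, 1, 2}, q has roots {1, 3}, both slightly perturbed.
roots_p = [1.0 + 1e-8, 1.0 - 1e-8, 2.0]
roots_q = [1.0 + 2e-8, 3.0]
print(np.round(approximate_gcd_roots(roots_p, roots_q), 6))  # one common root near 1
```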
|
The diffusive behaviour of simple random-walk proposals of many Markov Chain
Monte Carlo (MCMC) algorithms results in slow exploration of the state space
making inefficient the convergence to a target distribution. Hamiltonian/Hybrid
Monte Carlo (HMC), by introducing fictitious momentum variables, adopts
Hamiltonian dynamics, rather than a probability distribution, to propose future
states in the Markov chain. Splitting schemes are numerical integrators for
Hamiltonian problems that may advantageously replace the St\"ormer-Verlet
method within HMC methodology. In this paper a family of stable methods for
univariate and multivariate Gaussian distributions, taken as guide-problems for
more realistic situations, is proposed. Differently from similar methods
proposed in the recent literature, the schemes considered here are characterized by a null expectation of the random variable representing the energy error. The
effectiveness of the novel procedures is shown for bivariate and multivariate
test cases taken from the literature.
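For orientation, the sketch below runs HMC on a univariate Gaussian with a generic palindromic two-stage splitting integrator in place of Störmer-Verlet; it illustrates the class of integrators discussed above, not the specific null-expected-energy-error schemes of the paper, and the stage coefficient b is a placeholder (any b in (0, 1/2) gives a valid palindromic scheme).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0                      # target distribution: N(0, sigma2)
U = lambda x: 0.5 * x**2 / sigma2  # potential
grad_U = lambda x: x / sigma2

def two_stage_step(x, p, eps, b=0.25):
    """One palindromic two-stage splitting step: B(b*e) A(e/2) B((1-2b)*e) A(e/2) B(b*e)."""
    p -= b * eps * grad_U(x)
    x += 0.5 * eps * p
    p -= (1 - 2 * b) * eps * grad_U(x)
    x += 0.5 * eps * p
    p -= b * eps * grad_U(x)
    return x, p

def hmc(n_samples=5000, eps=0.5, n_steps=10):
    x, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.normal()
        x_new, p_new = x, p
        for _ in range(n_steps):
            x_new, p_new = two_stage_step(x_new, p_new, eps)
        dH = (U(x_new) + 0.5 * p_new**2) - (U(x) + 0.5 * p**2)
        if rng.random() < np.exp(-dH):   # Metropolis accept/reject on the energy error
            x = x_new
        samples.append(x)
    return np.array(samples)

print(np.var(hmc()))   # should be close to sigma2 = 4.0
```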
|
This article is an introduction to machine learning for financial
forecasting, planning and analysis (FP\&A). Machine learning appears well
suited to support FP\&A with the highly automated extraction of information
from large amounts of data. However, because most traditional machine learning
techniques focus on forecasting (prediction), we discuss the particular care
that must be taken to avoid the pitfalls of using them for planning and
resource allocation (causal inference). While the naive application of machine
learning usually fails in this context, the recently developed double machine
learning framework can address causal questions of interest. We review the
current literature on machine learning in FP\&A and illustrate in a simulation
study how machine learning can be used for both forecasting and planning. We
also investigate how forecasting and planning improve as the number of data
points increases.
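As a brief illustration of the double machine learning idea mentioned above (residualize both the outcome and the planning variable on the controls with flexible ML, then regress residual on residual), the following sketch uses scikit-learn on synthetic data; it is our toy example, not the article's simulation study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                               # controls (e.g., market and cost drivers)
d = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)          # "treatment", e.g., planned spend
y = 1.5 * d + 2.0 * X[:, 0] + rng.normal(size=n)          # outcome; true causal effect is 1.5

# Stage 1: cross-fitted ML predictions of the outcome and the treatment from the controls.
y_hat = cross_val_predict(RandomForestRegressor(n_estimators=100), X, y, cv=5)
d_hat = cross_val_predict(RandomForestRegressor(n_estimators=100), X, d, cv=5)

# Stage 2: regress outcome residuals on treatment residuals (partialling out).
theta = LinearRegression().fit((d - d_hat).reshape(-1, 1), y - y_hat).coef_[0]
print(f"estimated causal effect ~ {theta:.2f}  (true value 1.5)")
```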
|
We show that the Weil representation associated with any discriminant form
admits a basis in which the action of the representation involves algebraic
integers. The action of a general element of
$\operatorname{SL}_{2}(\mathbb{Z})$ on many parts of these bases is simple and explicit, a fact that we use for determining the dimension of the space of
invariants for some families of discriminant forms.
|
We give an overview of the work done during the past ten years on the Casimir
interaction in electronic topological materials, our focus being solids which
possess surface or bulk electronic band structures with nontrivial topologies,
which can be evinced through optical properties that are characterizable in
terms of nonzero topological invariants. The examples we review are
three-dimensional magnetic topological insulators, two-dimensional Chern
insulators, graphene monolayers exhibiting the relativistic quantum Hall
effect, and time reversal symmetry-broken Weyl semimetals, which are
fascinating systems in the context of Casimir physics, firstly for the reason
that they possess electromagnetic properties characterizable by axial vectors
(because of time reversal symmetry breaking), and depending on the mutual
orientation of a pair of such axial vectors, two systems can experience a
repulsive Casimir-Lifshitz force even though they may be dielectrically
identical. Secondly, the repulsion thus generated is potentially robust against
weak disorder, as such repulsion is associated with a Hall conductivity which
is topologically protected in the zero-frequency limit. Finally, the far-field
low-temperature behavior of the Casimir force of such systems can provide
signatures of topological quantization.
|
In this paper we study dynamical systems generated by a gonosomal evolution
operator of a bisexual population. We explicitly find all fixed points of the operator (an uncountable set). It is shown that each fixed point has eigenvalues less than or equal to 1. Moreover, we show that each trajectory converges to a fixed point, i.e., the operator is regular. There is an uncountable family of invariant sets, each of which contains a unique fixed point. Thus there is a one-to-one correspondence between such invariant sets and the set of fixed points. Any
trajectory started at a point of the invariant set converges to the
corresponding fixed point.
|
Although routinely utilized in literature, orthogonal waveforms may lose
orthogonality in distributed multi-input multi-output (MIMO) radar with
spatially separated transmit (TX) and receive (RX) antennas, as the waveforms
may experience distinct delays and Doppler frequency offsets unique to
different TX-RX propagation paths. In such cases, the output of each
waveform-specific matched filter (MF), employed to unravel the waveforms at the
RXs, contains both an auto term and multiple cross terms, i.e., the filtered
response of the desired and, respectively, undesired waveforms. We consider the
impact of non-orthogonal waveforms and their cross terms on target detection
with or without timing, frequency, and phase errors. To this end, we present a
general signal model for distributed MIMO radar, examine target detection using
existing coherent/non-coherent detectors and two new detectors, including a
hybrid detector that requires phase coherence locally but not across
distributed antennas, and provide a statistical analysis leading to closed-form
expressions of false alarm and detection probabilities for all detectors. Our
results show that cross terms can behave like foes or allies, respectively, if
they and the auto term add destructively or constructively, depending on the
propagation delay, frequency, and phase offsets. Regarding sync errors, we show
that phase errors affect only coherent detectors, frequency errors degrade all
but the non-coherent detector, while all are impacted by timing errors, which
result in a loss in the signal-to-noise ratio (SNR).
|
We establish well-posedness conclusions for the Cauchy problem associated to
the dispersion generalized Zakharov-Kuznetsov equation in bi-periodic Sobolev
spaces $H^{s}\left(\mathbb{T}^{2}\right)$,
$s>(\frac{3}{2}-\frac{1}{2^{\alpha+2}})(\frac{3}{2}-\frac{\beta}{4})$.
|
In this comment we show untenability of key points of the recent article of
N. Biancacci, E. Metral and M. Migliorati [Phys. Rev. Accel. Beams 23, 124402
(2020)], hereafter the Article and the Authors. Specifically, the main Eqs.
(23), suggested to describe mode coupling, are shown to be unacceptable even as
an approximation. The Article claims the solution of this pair of equations to
be in "excellent agreement" with the pyHEADTAIL simulations for CERN PS, which
is purportedly demonstrated by Fig. 6. Were it really so, it would be a signal
of a mistake in the code. However, the key part of the simulation results is
not actually shown, and the demonstrated agreement has all the features of an
illusion.
|
Let $\mathcal{F}$ and $\mathcal{K}$ be commuting $C^\infty$ diffeomorphisms
of the cylinder $\mathbb{T}\times\mathbb{R}$ that are, respectively, close to
$\mathcal{F}_0 (x, y)=(x+\omega(y), y)$ and $T_\alpha (x, y)=(x+\alpha, y)$,
where $\omega(y)$ is non-degenerate and $\alpha$ is Diophantine. Using the KAM
iterative scheme for the group action we show that $\mathcal{F}$ and
$\mathcal{K}$ are simultaneously $C^\infty$-linearizable if $\mathcal{F}$ has
the intersection property (including the exact symplectic maps) and
$\mathcal{K}$ satisfies a semi-conjugacy condition. We also provide examples
showing necessity of these conditions. As a consequence, we get local rigidity
of certain elliptic $\mathbb{Z}^2$-actions on the cylinder.
|
Recently, Haynes, Hedetniemi and Henning published the book Topics in
Domination in Graphs, which comprises 16 contributions that present advanced
topics in graph domination, featuring open problems, modern techniques, and
recent results. One of these contributions is the chapter Multiple Domination,
by Hansberg and Volkmann, where they put into context all relevant research
results on multiple domination that have been found up to 2020. In this note,
we show how to improve some results on double domination that are included in
the book.
|
The recent discovery of a Galactic fast radio burst (FRB) occurring
simultaneously with an X-ray burst (XRB) from the Galactic magnetar SGR
J1935+2154 implies that at least some FRBs arise from magnetar activities. We
propose that FRBs are triggered by crust fracturing of magnetars, with the
burst event rate depending on the magnetic field strength in the crust. Since
the crust fracturing rate is relatively higher in polar regions, FRBs are preferentially triggered near the directions of the multipolar magnetic poles.
Crust fracturing produces Alfv\'en waves, forming a charge starved region in
the magnetosphere and leading to non-stationary pair plasma discharges. An FRB
is produced by coherent plasma radiation due to nonuniform pair production
across magnetic field lines. Meanwhile, the FRB-associated XRB is produced by
the rapid relaxation of the external magnetic field lines. In this picture, the
sharp-peak hard X-ray component in association with FRB 200428 is from a region
between adjacent trapped fireballs, and its spectrum with a high cutoff energy
is attributed to resonant Compton scattering. The persistent X-ray emission is
from a hot spot heated by the magnetospheric activities, and its temperature
evolution is dominated by magnetar surface cooling. Within this picture,
magnetars with stronger fields tend to produce brighter and more frequent
repeated bursts.
|
In the Priority $k$-Center problem, the input consists of a metric space
$(X,d)$, an integer $k$ and for each point $v \in X$ a priority radius $r(v)$.
The goal is to choose $k$-centers $S \subseteq X$ to minimize $\max_{v \in X}
\frac{1}{r(v)} d(v,S)$. If all $r(v)$'s were uniform, one obtains the classical
$k$-center problem. Plesn\'ik [Plesn\'ik, Disc. Appl. Math. 1987] introduced
this problem and gave a $2$-approximation algorithm matching the best possible
algorithm for vanilla $k$-center. We show how the problem is related to two
different notions of fair clustering [Harris et al., NeurIPS 2018; Jung et al.,
FORC 2020]. Motivated by these developments we revisit the problem and, in our
main technical contribution, develop a framework that yields constant factor
approximation algorithms for Priority $k$-Center with outliers. Our framework
extends to generalizations of Priority $k$-Center to matroid and knapsack
constraints, and as a corollary, also yields algorithms with fairness
guarantees in the lottery model of Harris et al.
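For clarity, the objective above is straightforward to evaluate for a candidate center set; the tiny sketch below (our notation) does so on a finite metric given as a distance matrix.

```python
import numpy as np

def priority_k_center_cost(dist, centers, radii):
    """max over points v of d(v, S) / r(v), where d(v, S) is the distance to the nearest chosen center.

    dist:    (n, n) symmetric distance matrix
    centers: indices of the chosen centers S
    radii:   (n,) priority radii r(v) > 0
    """
    d_to_S = dist[:, centers].min(axis=1)
    return np.max(d_to_S / radii)

# Toy instance: 4 points on a line; uniform radii reduce to the classical k-center objective.
points = [0.0, 1.0, 2.0, 5.0]
dist = np.abs(np.subtract.outer(points, points))
print(priority_k_center_cost(dist, centers=[1, 3], radii=np.ones(4)))   # 1.0
```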
|
In this study, a geometric version of an NP-hard problem (the "Almost 2-SAT" problem) is introduced, which has potential applications in clustering,
separation axis, binary sensor networks, shape separation, image processing,
etc. Furthermore, it has been illustrated that the new problem known as "Two
Disjoint Convex Hulls" can be solved in polynomial time due to some
combinatorial aspects and geometric properties. For this purpose, an $O(n^2)$
algorithm has also been presented which employs the Separating Axis Theorem
(SAT) and the duality of points/lines.
|
How does the chromatic number of a graph chosen uniformly at random from all
graphs on $n$ vertices behave? This quantity is a random variable, so one can
ask (i) for upper and lower bounds on its typical values, and (ii) for bounds
on how much it varies: what is the width (e.g., standard deviation) of its
distribution?
On (i) there has been considerable progress over the last 45 years; on (ii),
which is our focus here, remarkably little. One would like both upper and lower
bounds on the width of the distribution, and ideally a description of the
(appropriately scaled) limiting distribution. There is a well known upper bound
of Shamir and Spencer of order $\sqrt{n}$, improved slightly by Alon to
$\sqrt{n}/\log n$, but no non-trivial lower bound was known until 2019, when
the first author proved that the width is at least $n^{1/4-o(1)}$ for
infinitely many $n$, answering a longstanding question of Bollob\'as.
In this paper we have two main aims: first, we shall prove a much stronger
lower bound on the width. We shall show unconditionally that, for some values
of $n$, the width is at least $n^{1/2-o(1)}$, matching the upper bounds up to
the error term. Moreover, conditional on a recently announced sharper explicit
estimate for the chromatic number, we improve the lower bound to order
$\sqrt{n} \log \log n /\log^3 n$, within a logarithmic factor of the upper
bound.
Secondly, we will describe a number of conjectures as to what the true
behaviour of the variation in $\chi(G_{n,1/2})$ is, and why. The first form of
this conjecture arises from recent work of Bollob\'as, Heckel, Morris,
Panagiotou, Riordan and Smith. We will also give much more detailed
conjectures, suggesting that the true width, for the worst case $n$, matches
our lower bound up to a constant factor. These conjectures also predict a
Gaussian limiting distribution.
|
Over the last few years, ReS2 has generated a myriad of unanswered questions regarding its structure, the concomitant thickness-dependent electronic
properties and apparently contrasting experimental optical response. In this
work, with elaborate first-principles investigations, using density functional
theory (DFT) and time-dependent DFT (TDDFT), we identify the structure of ReS2,
which is capable of reproducing and analyzing the layer-dependent optical
response. The theoretical results are further validated by an in-depth
structural, chemical, optical and optoelectronic analysis of the large-area
ReS2 thin films, grown by chemical vapor deposition (CVD) process. Micro-Raman
(MR), X-ray photoelectron spectroscopy (XPS), cross-sectional transmission
electron microscopy (TEM) and energy-dispersive X-ray analysis (EDAX) have
enabled the optimization of the uniform growth of the CVD films. The
correlation between the layer-dependent optical and electronic properties of
the excited states was established by static photoluminescence (PL) and
transient absorption (TA) measurements. Sulfur vacancy-induced localized
mid-gap states render a significantly long life-time of the excitons in these
films. The ionic gel top-gated photo-detectors, fabricated from the as-prepared
CVD films, exhibit a large photo-response of ~5 A/W and a remarkable detectivity of ~$10^{11}$ Jones. The outcome of the present work will be useful to
promote the application of vertically grown large-area films in the field of
optics and opto-electronics.
|
The interest in offensive content identification in social media has grown
substantially in recent years. Previous work has dealt mostly with post level
annotations. However, identifying offensive spans is useful in many ways. To
help cope with this important challenge, we present MUDES, a multilingual
system to detect offensive spans in texts. MUDES features pre-trained models, a
Python API for developers, and a user-friendly web-based interface. A detailed
description of MUDES' components is presented in this paper.
|
Context: Backsourcing is the process of insourcing previously outsourced
activities. When companies experience environmental or strategic changes, or
challenges with outsourcing, backsourcing can be a viable alternative. While
outsourcing and related processes have been extensively studied in software
engineering, few studies report experiences with backsourcing. Objectives: We
intend to summarize the results of the research literature on the backsourcing
of IT, with a focus on software development. By identifying practical relevance
experience, we aim to present findings that may help companies considering
backsourcing. In addition, we aim to identify gaps in the current research
literature and point out areas for future work. Method: Our systematic
literature review (SLR) started with a search for empirical studies on the
backsourcing of software development. From each study we identified the
contexts in which backsourcing occurs, the factors leading to the decision to
backsource, the backsourcing process itself, and the outcomes of backsourcing.
We employed inductive coding to extract textual data from the papers identified
and qualitative cross-case analysis to synthesize the evidence from
backsourcing experiences. Results: We identified 17 papers that reported 26
cases of backsourcing, six of which were related to software development. The
cases came from a variety of contexts. The most common reasons for backsourcing
were improving quality, reducing costs, and regaining control of outsourced
activities. The backsourcing process can be described as containing five
sub-processes: change management, vendor relationship management, competence
building, organizational build-up, and transfer of ownership. Furthermore, ...
|
The unbound proton-rich nuclei $^{16}$F and $^{15}$F are investigated
experimentally and theoretically. Several experiments using the resonant
elastic scattering method were performed at GANIL with radioactive beams to
determine the properties of the low lying states of these nuclei. Strong
asymmetry between $^{16}$F-$^{16}$N and $^{15}$F-$^{15}$C mirror nuclei is
observed. The strength of the nucleon-nucleon effective interaction involving
the loosely bound proton in the $s_{1/2}$ orbit is significantly modified with
respect to their mirror nuclei $^{16}$N and $^{15}$C. The reduction of the
effective interaction is estimated by calculating the interaction energies with
a schematic zero-range force. It is found that, after correcting for the
effects due to changes in the radial distribution of the single-particle wave
functions, the mirror symmetry of the $n-p$ interaction is preserved between
$^{16}$F and $^{16}$N, while a difference of 63\% is measured between the $p-p$
versus $n-n$ interactions in the second excited state of $^{15}$F and $^{15}$C
nuclei. Several explanations are proposed.
|
We extend to Segal-Piatetski-Shapiro sequences previous results on the
Luca-Schinzel question over integral valued polynomial sequences. Namely, we
prove that for any real $c$ larger than $1$ the sequence $(\sum_{m\le n}
\varphi(\lfloor m^c \rfloor) /\lfloor m^c \rfloor)_n$ is dense modulo $1$,
where $\varphi$ denotes Euler's totient function. The main part of the proof
consists in showing that when $R$ is a large integer, the sequence of the
residues of $\lfloor m^c \rfloor$ modulo $R$ contains blocks of consecutive
values which are in an arithmetic progression.
|
A decentralized feedback controller for multi-agent systems, inspired by
vehicle platooning, is proposed. The closed-loop resulting from the
decentralized control action has three distinctive features: the generation of
collision-free trajectories, flocking of the system towards a consensus state
in velocity, and asymptotic convergence to a prescribed pattern of distances
between agents. For each feature, a rigorous dynamical analysis is provided,
yielding a characterization of the set of parameters and initial configurations
where collision avoidance, flocking, and pattern formation are guaranteed.
Numerical tests assess the theoretical results presented.
|
The extended state observer (ESO) is an inherent element of robust
observer-based control systems that allows estimating the impact of disturbance
on system dynamics. Proper tuning of ESO parameters is necessary to ensure a
good quality of estimated quantities and impacts the overall performance of the
robust control structure. In this paper, we propose a neural network (NN) based
tuning procedure that allows the user to prioritize between selected quality
criteria such as the control and observation errors and the specified features
of the control signal. The designed NN provides an accurate assessment of the
control system performance and returns a set of ESO parameters that delivers a
near-optimal solution to the user-defined cost function. The proposed tuning
procedure, using an estimated state from a single closed-loop experiment, produces near-optimal ESO gains within seconds.
|