I study a population model in which the reproduction rate $\lambda$ is inherited
with mutation, favoring fast reproducers in the short term, but conflicting
with a process that eliminates agglomerations of individuals. The model is a
variant of the triplet annihilation model introduced several decades ago [R.
Dickman, Phys. Rev. B~{\bf 40}, 7005 (1989)] in which organisms ("particles")
reproduce and diffuse on a lattice, subject to annihilation when (and only
when) occupying three consecutive sites. For diffusion rates below a certain
value, the population possesses two "survival strategies": (i) rare
reproduction ($0 < \lambda < \lambda_{c,1}$), in which a low density of diffusing
particles renders triplets exceedingly rare, and (ii) frequent reproduction
($\lambda > \lambda_{c,2}$). For $\lambda$ between $\lambda_{c,1}$ and
$\lambda_{c,2}$ there is no active steady state. In the rare-reproduction
regime, a mutating
$\lambda$ leads to stochastic boom-and-bust cycles in which the reproduction
rate fluctuates upward in certain regions, only to lead to extinction as the
local value of $\lambda$ becomes excessive. The global population can nevertheless
survive due to the presence of other regions, with reproduction rates that have
yet to drift upward.
|
We perform a Koopman spectral analysis of elementary cellular automata (ECA).
By lifting the system dynamics using a one-hot representation of the system
state, we derive a matrix representation of the Koopman operator as a transpose
of the adjacency matrix of the state-transition network. The Koopman
eigenvalues are either zero or on the unit circle in the complex plane, and the
associated Koopman eigenfunctions can be explicitly constructed. From the
Koopman eigenvalues, we can assess reversibility, determine the number of
connected components in the state-transition network, evaluate the periods of
asymptotic orbits, and derive the conserved quantities for each system. We
numerically calculate the Koopman eigenvalues of all rules of ECA on a
one-dimensional lattice of 13 cells with periodic boundary conditions. It is
shown that the spectral properties of the Koopman operator reflect Wolfram's
classification of ECA.
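As a concrete illustration of the one-hot lifting described above, the sketch below builds the Koopman matrix of an ECA rule as the transpose of the adjacency matrix of its state-transition network and inspects the spectrum. The rule number, lattice size, and function names are illustrative choices (the paper uses 13 cells, which works the same way but is slower):

```python
import numpy as np

def eca_step(state, rule, n):
    """One step of an elementary cellular automaton (ECA) on an n-cell
    state encoded as an integer, with periodic boundary conditions."""
    bits = [(state >> i) & 1 for i in range(n)]
    new_bits = []
    for i in range(n):
        # The (left, center, right) neighborhood indexes the 8-bit rule table.
        pattern = (bits[(i - 1) % n] << 2) | (bits[i] << 1) | bits[(i + 1) % n]
        new_bits.append((rule >> pattern) & 1)
    return sum(b << i for i, b in enumerate(new_bits))

def koopman_matrix(rule, n):
    """Koopman matrix in the one-hot lifting: the transpose of the
    adjacency matrix A of the state-transition network, A[s, T(s)] = 1."""
    m = 2 ** n
    A = np.zeros((m, m))
    for s in range(m):
        A[s, eca_step(s, rule, n)] = 1.0
    return A.T

K = koopman_matrix(rule=110, n=8)   # n=8 keeps the 2^n x 2^n matrix small
eigvals = np.linalg.eigvals(K)
# Each eigenvalue is either zero or lies on the unit circle.
print(np.sort(np.round(np.abs(eigvals), 6))[-5:])
```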
|
We use quantum kinetic theory to calculate the thermoelectric transport
properties of the 2D single band Fermi-Hubbard model in the weak coupling
limit. For generic filling, we find that the high-temperature limiting
behaviors of the electrical ($\sim T$) and thermal ($\sim T^2$) resistivities
persist down to temperatures of order the hopping matrix element $T\sim t$,
almost an order of magnitude below the bandwidth. At half filling, perfect
nesting leads to anomalous low temperature scattering and nearly $T$-linear
electrical resistivity at all temperatures. We hypothesize that the $T$-linear
resistivity observed in recent cold atom experiments is continuously connected
to this weak coupling physics and suggest avenues for experimental
verification. We find a number of other novel thermoelectric results, such as a
low-temperature Wiedemann-Franz law with Lorenz coefficient $5\pi^2/36$.
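For orientation, the low-temperature Wiedemann-Franz statement quoted above can be written as follows; the $(k_B/e)^2$ units and the comparison with the standard free-fermion Lorenz number are our addition for context, not part of the original abstract:

```latex
\frac{\kappa}{\sigma T} = \mathcal{L} = \frac{5\pi^2}{36}\left(\frac{k_B}{e}\right)^2
\qquad\text{compared with}\qquad
\mathcal{L}_0 = \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2 .
```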
|
An innovative approach for the synthesis of inexpensive holographic smart
electromagnetic (EM) skins with advanced beamforming features is proposed. The
complex multiscale smart skin design is formulated within the Generalized Sheet
Transition Condition (GSTC) framework as a combination of a mask-constrained
isophoric inverse source problem and a micro-scale susceptibility dyadic
optimization. The solution strategy integrates a local search procedure based
on the iterative projection technique (IPT) and a System-by-Design (SbD)-based
optimization loop for the identification of optimal metasurface descriptors
matching the desired surface currents. The performance and the efficiency of
the proposed approach are assessed in a set of representative test cases
concerned with different smart skin apertures and target pattern masks.
|
We report on the observation of bona fide stochastic resonance (SR) in a
non-Gaussian active bath without any periodic forcing. Particles hopping in a
nanoscale double-well potential under the influence of correlated Poisson noise
display a series of equally-spaced peaks in the residence time distribution.
Maximal peaks are measured when the mean residence time matches a double
condition on the interval and correlation timescales of the noise,
demonstrating a new type of SR. The experimental findings agree with a simple
model that explains the emergence of SR without periodicity. Correlated
non-Gaussian noise is common in living systems, suggesting that this type of SR
is widespread in this regime.
|
We generalize Birkhoff's Theorem in the following fashion. We find necessary
and sufficient conditions for any spherically symmetric space-time to be static
in terms of the eigenvalues of the stress-energy tensor. In particular, we
generalize the Tolman-Oppenheimer-Volkoff equation and prove that Birkhoff's
theorem holds under the weaker hypothesis of no pressure (with respect to an
appropriate frame). We provide equations that show how the coefficients of the
metric relate to the eigenvalues of the stress-energy tensor. These involve
integrals that are simple functions of those eigenvalues. We also determine
among all static spherically symmetric space-times those that are
asymptotically flat. A few examples are presented to illustrate these results.
The calculations are carried out by viewing the space-times as warped products,
with the computations performed using Cartan's moving-frames approach.
|
Deep Reinforcement Learning has emerged as an efficient dynamic obstacle
avoidance method in highly dynamic environments. It has the potential to
replace overly conservative or inefficient navigation approaches. However, the
integration of Deep Reinforcement Learning into existing navigation systems
remains an open frontier, hindered by the myopic nature of
Deep-Reinforcement-Learning-based navigation. In this paper, we propose the
concept of an intermediate planner to interconnect novel
Deep-Reinforcement-Learning-based obstacle avoidance with conventional global
planning methods using waypoint generation. To this end, we integrate different
waypoint generators into existing navigation systems and compare the joint
system against traditional ones. We find increased performance in terms of
safety, efficiency, and path smoothness, especially in highly dynamic
environments.
|
Correlation matrices are used in many domains of neuroscience, such as fMRI,
EEG, and MEG. However, statistical analyses often rely on embeddings into a
Euclidean space or into Symmetric Positive Definite matrices which do not
provide intrinsic tools. The quotient-affine metric was recently introduced as
the quotient of the affine-invariant metric on SPD matrices by the action of
diagonal matrices. In this work, we provide most of the fundamental Riemannian
operations of the quotient-affine metric: the expression of the metric itself,
the geodesics with a given initial tangent vector, the Levi-Civita connection, and the
curvature.
|
We present a systematic study of holographic correlators in a vast array of
SCFTs with non-maximal superconformal symmetry. These theories include 4d
$\mathcal{N}=2$ SCFTs from D3-branes near F-theory singularities, 5d Seiberg
exceptional theories and 6d E-string theory, as well as 3d and 4d
phenomenological models with probe flavor branes. We consider current
multiplets and their generalizations with higher weights, dual to massless and
massive super gluons in the bulk. At leading order in the inverse central
charge expansion, connected four-point functions of these operators correspond
to tree-level gluon scattering amplitudes in AdS. We show that all such
tree-level four-point amplitudes in all these theories are fully fixed by
symmetries and consistency conditions and explicitly construct them. Our
results encode a wealth of SCFT data and exhibit various interesting emergent
structures. These include Parisi-Sourlas-like dimensional reductions, hidden
conformal symmetry and an AdS version of the color-kinematic duality.
|
New MMT/Hectospec spectroscopy centered on the galaxy cluster A2626 and
covering a ${\sim} 1.8\,\text{deg}^2$ area out to $z \sim 0.46$ more than
doubles the number of galaxy redshifts in this region. The spectra confirm four
clusters previously identified photometrically. A2625, which was previously
thought to be a close neighbor of A2626, is in fact much more distant. The new
data show six substructures associated with A2626 and five more associated with
A2637. There is also a highly collimated collection of galaxies and galaxy
groups between A2626 and A2637 having at least three and probably four
substructures. At larger scales, the A2626--A2637 complex is not connected to
the Pegasus--Perseus filament.
|
In order to safely deploy Deep Neural Networks (DNNs) within the perception
pipelines of real-time decision making systems, there is a need for safeguards
that can detect out-of-training-distribution (OoD) inputs both efficiently and
accurately. Building on recent work leveraging the local curvature of DNNs to
reason about epistemic uncertainty, we propose Sketching Curvature for OoD
Detection (SCOD), an architecture-agnostic framework for equipping any trained
DNN with a task-relevant epistemic uncertainty estimate. Offline, given a
trained model and its training data, SCOD employs tools from matrix sketching
to tractably compute a low-rank approximation of the Fisher information matrix,
which characterizes which directions in the weight space are most influential
on the predictions over the training data. Online, we estimate uncertainty by
measuring how much perturbations orthogonal to these directions can alter
predictions at a new test input. We apply SCOD to pre-trained networks of
varying architectures on several tasks, ranging from regression to
classification. We demonstrate that SCOD achieves comparable or better OoD
detection performance with lower computational burden relative to existing
baselines.
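The following is a minimal sketch, not the authors' implementation, of the two phases described above: offline, a randomized-sketch low-rank eigenbasis of a gradient-based Fisher approximation; online, an uncertainty score from the gradient mass orthogonal to that basis. The per-example gradient matrix `G` and all names are hypothetical stand-ins:

```python
import numpy as np
from numpy.linalg import svd

def low_rank_basis(G, k, oversample=10):
    """G is (N x P): rows are per-example loss gradients in weight space,
    so the empirical Fisher is roughly G^T G / N. A Gaussian sketch keeps
    extraction of its top-k eigenvectors tractable for large P."""
    N, P = G.shape
    S = np.random.randn(P, k + oversample)   # sketching matrix
    Y = G @ S                                # (N x (k+p)) range sketch
    Q, _ = np.linalg.qr(G.T @ Y)             # basis for the dominant range of G^T
    B = G @ Q
    _, _, Vt = svd(B, full_matrices=False)
    return Q @ Vt.T[:, :k]                   # (P x k) influential directions

def ood_score(g_test, U):
    """Score a test point by the gradient mass orthogonal to the top-k
    subspace: large residual = directions the training data never pinned down."""
    residual = g_test - U @ (U.T @ g_test)
    return np.linalg.norm(residual)

rng = np.random.default_rng(0)
G = rng.normal(size=(500, 200)) @ np.diag(np.linspace(2, 0.01, 200))
U = low_rank_basis(G, k=20)
print(ood_score(rng.normal(size=200), U))
```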
|
For many of the 700 million illiterate people around the world, speech
recognition technology could provide a bridge to valuable information and
services. Yet, those most in need of this technology are often the most
underserved by it. In many countries, illiterate people tend to speak only
low-resource languages, for which the datasets necessary for speech technology
development are scarce. In this paper, we investigate the effectiveness of
unsupervised speech representation learning on noisy radio broadcasting
archives, which are abundant even in low-resource languages. We make three core
contributions. First, we release two datasets to the research community. The
first, West African Radio Corpus, contains 142 hours of audio in more than 10
languages with a labeled validation subset. The second, West African Virtual
Assistant Speech Recognition Corpus, consists of 10K labeled audio clips in
four languages. Next, we share West African wav2vec, a speech encoder trained
on the noisy radio corpus, and compare it with the baseline Facebook speech
encoder trained on six times more data of higher quality. We show that West
African wav2vec performs similarly to the baseline on a multilingual speech
recognition task, and significantly outperforms the baseline on a West African
language identification task. Finally, we share the first-ever speech
recognition models for Maninka, Pular and Susu, languages spoken by a combined
10 million people in over seven countries, including six where the majority of
the adult population is illiterate. Our contributions offer a path forward for
ethical AI research to serve the needs of those most disadvantaged by the
digital divide.
|
The many-access channel (MnAC) model allows the number of users in the system
and the number of active users to scale as a function of the blocklength and,
as such, is suited for dynamic communication systems with a massive number of
users, such as the Internet of Things. Existing MnAC models assume a priori
knowledge of the channel gains, which is impractical, since acquiring Channel
State Information (CSI) for a massive number of users can overwhelm the
available radio resources.
This paper incorporates Rayleigh fading effects into the MnAC model and derives
an upper bound on the symmetric message-length capacity of the Rayleigh-fading
Gaussian MnAC. Furthermore, a lower bound on the minimum number of channel uses
for discovering the active users is established. In addition, the performance
of Noisy Combinatorial Orthogonal Matching Pursuit (N-COMP) based group testing
(GT) is studied as a practical strategy for active device discovery.
Simulations show that, for a given SNR, as the number of users increases, the
required number of channel uses for N-COMP GT scales approximately the same way
as the lower bound on minimum user identification cost. Moreover, in the low
SNR regime, for sufficiently large population sizes, the number of channel uses
required by N-COMP GT was observed to be within a factor of two of the lower
bound when the expected number of active users scales sub-linearly with the
total population size.
|
A long march of fifty years of successive theoretical progress, and of new
physics discovered through observations of gamma-ray bursts, has finally led to
the formulation of an efficient mechanism able to extract the rotational energy
of a Kerr black hole to power these most energetic astrophysical sources and
active galactic nuclei. Here we present the salient features of this
long-sought mechanism, based on gravito-electrodynamics, which represents an
authentic paradigm shift: black holes as forever "alive" astrophysical
objects.
|
Graph-based subspace clustering methods have exhibited promising performance.
However, they still suffer from several drawbacks: expensive time overhead,
failure to uncover explicit clusters, and inability to generalize to unseen
data points. In this work, we propose a scalable graph learning
framework, seeking to address the above three challenges simultaneously.
Specifically, it is based on the ideas of anchor points and bipartite graph.
Rather than building a $n\times n$ graph, where $n$ is the number of samples,
we construct a bipartite graph to depict the relationship between samples and
anchor points. Meanwhile, a connectivity constraint is employed to ensure that
the connected components indicate clusters directly. We further establish the
connection between our method and K-means clustering. Moreover, we also propose
a model to process multi-view data, which scales linearly with respect
to $n$. Extensive experiments demonstrate the efficiency and effectiveness of
our approach with respect to many state-of-the-art clustering methods.
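A minimal sketch of the anchor-based bipartite graph construction described above, with random anchor selection standing in for whatever anchor-selection scheme is actually used; all parameter names are illustrative:

```python
import numpy as np

def anchor_bipartite_graph(X, m=50, k=5, seed=0):
    """Build an n x m bipartite affinity matrix Z between samples and
    anchors. Anchors are chosen here by simple random sampling (k-means
    centers are a common alternative); each sample connects to its k
    nearest anchors with Gaussian weights, rows normalized to sum to 1."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    anchors = X[rng.choice(n, size=m, replace=False)]
    # Sample-to-anchor squared distances: O(n*m) work instead of O(n^2).
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    Z = np.zeros((n, m))
    idx = np.argsort(d2, axis=1)[:, :k]          # k nearest anchors per sample
    for i in range(n):
        w = np.exp(-d2[i, idx[i]] / (d2[i, idx[i]].mean() + 1e-12))
        Z[i, idx[i]] = w / w.sum()
    return Z

X = np.random.rand(1000, 16)
Z = anchor_bipartite_graph(X)      # 1000 x 50 instead of a 1000 x 1000 graph
print(Z.shape, Z.sum(axis=1)[:3])  # rows sum to 1
```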
|
The paper considers the input-constrained binary erasure channel (BEC) with
causal, noiseless feedback. The channel input sequence respects the
$(d,\infty)$-runlength limited (RLL) constraint, i.e., any pair of successive
$1$s must be separated by at least $d$ $0$s. We derive upper and lower bounds
on the feedback capacity of this channel, for all $d\geq 1$, given by:
$\max\limits_{\delta \in [0,\frac{1}{d+1}]}R(\delta) \leq
C^{\text{fb}}_{(d,\infty)}(\epsilon) \leq \max\limits_{\delta \in
[0,\frac{1}{1+d\epsilon}]}R(\delta)$, where the function $R(\delta) =
\frac{h_b(\delta)}{d\delta + \frac{1}{1-\epsilon}}$, with $\epsilon\in [0,1]$
denoting the channel erasure probability, and $h_b(\cdot)$ being the binary
entropy function. We note that our bounds are tight for the case when $d=1$
(see Sabag et al. (2016)), and, in addition, we demonstrate that for the case
when $d=2$, the feedback capacity is equal to the capacity with non-causal
knowledge of erasures, for $\epsilon \in [0,1-\frac{1}{2\log(3/2)}]$. For
$d>1$, our bounds differ from the non-causal capacities (which serve as upper
bounds on the feedback capacity) derived in Peled et al. (2019) in only the
domains of maximization. The approach in this paper follows Sabag et al.
(2017), by deriving single-letter bounds on the feedback capacity, based on
output distributions supported on a finite $Q$-graph, which is a directed graph
with edges labelled by output symbols.
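The two bounds can be evaluated numerically by maximizing $R(\delta)$ over the two stated domains; a small sketch follows, where a grid search stands in for exact maximization:

```python
import numpy as np

def h_b(p):
    """Binary entropy in bits, with the convention 0*log(0) = 0."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def R(delta, d, eps):
    return h_b(delta) / (d * delta + 1.0 / (1.0 - eps))

def capacity_bounds(d, eps, grid=100000):
    """Lower/upper bounds on the (d,inf)-RLL BEC feedback capacity:
    max of R over [0, 1/(d+1)] and over [0, 1/(1+d*eps)] respectively."""
    lo_grid = np.linspace(0, 1.0 / (d + 1), grid)
    up_grid = np.linspace(0, 1.0 / (1 + d * eps), grid)
    return R(lo_grid, d, eps).max(), R(up_grid, d, eps).max()

for eps in (0.1, 0.5):
    lo, up = capacity_bounds(d=2, eps=eps)
    print(f"eps={eps}: {lo:.4f} <= C_fb <= {up:.4f}")
```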
|
The optical properties of the bulk ZrSiS nodal-line semimetal are theoretically
studied within a many-body formalism. The G0W0 bands are similar to those
calculated within the density functional theory, except near the {\Gamma}
point; in particular, no significant differences are found around the Fermi
energy. On the other hand, the solution of the Bethe-Salpeter equation reveals
a significant excitonic activity, mostly as dark excitons which appear in a
wide energy range. Bright excitons, on the contrary, are less numerous, but
their location and intensity depend greatly on the polarization of the incident
electric field, as does the absorption coefficient itself. The binding energies
of these excitons correlate well with their spatial distribution functions. In
any case, a good agreement with available experimental data for
absorption-reflection is achieved. Finally, the possible activation of plasma
oscillations at low energies is ruled out, because these are damped by the
production of electron-hole pairs, most notably for $q$ along the {\Gamma}-M
path.
|
We theoretically study the magnetic-field, temperature, and energy band-gap
dependence of the magnetization of Dirac fermions. We use the zeta function
regularization to obtain analytical expressions for the thermodynamic potential,
from which the magnetization of graphene in the strong-field/low-temperature
and weak-field/high-temperature limits is calculated. Further, we generalize the
result by considering the effects of impurities on the orbital susceptibility of
graphene. In particular, we show that in the presence of impurities, the
susceptibility follows a scaling law which can be approximated by the Faddeeva
function. In the case of massive Dirac fermions, we show that a large band-gap
makes the magnetization robust with respect to temperature and impurities.
For doped Dirac fermions, we discuss how the band-gap affects the period and
amplitude of the de Haas-van Alphen effect.
|
Data quality monitoring is critical to all experiments, as it impacts the
quality of any physics results. Traditionally, this is done through an alarm
system, which detects low-level faults, leaving higher-level monitoring to
human crews. Artificial Intelligence is beginning to find its way into
scientific applications, but comes with difficulties, as it relies on the
acquisition of new skill sets in data science, either through education or
hiring. This paper discusses the development and deployment of the Hydra
monitoring system in production at GlueX. It shows how "off-the-shelf"
technologies can be rapidly developed into a working system, and discusses the
sociological hurdles that must be overcome to successfully deploy such a
system. Early results from production running of Hydra are also shared, along
with an outlook for future development.
|
Motivated by the recent discovery that the interpretation maps of CNNs could
easily be manipulated by adversarial attacks against network interpretability,
we study the problem of interpretation robustness from the new perspective of
R\'enyi differential privacy (RDP). The advantages of our Renyi-Robust-Smooth
(an RDP-based interpretation method) are threefold. First, it can offer provable
and certifiable top-$k$ robustness. That is, the top-$k$ important attributions
of the interpretation map are provably robust under any input perturbation with
bounded $\ell_d$-norm (for any $d\geq 1$, including $d = \infty$). Second, our
proposed method offers $\sim10\%$ better experimental robustness than existing
approaches in terms of the top-$k$ attributions. Remarkably, the accuracy of
Renyi-Robust-Smooth also exceeds that of existing approaches. Third, our method
can provide a smooth tradeoff between robustness and computational efficiency.
Experimentally, its top-$k$ attributions are {\em twice} as robust as those of
existing approaches when computational resources are highly constrained.
|
Today, artificial neural networks are one of the major innovators pushing the
progress of machine learning. This has particularly affected the development of
neural network accelerating hardware. However, since most of these
architectures require specialized toolchains, there is a certain amount of
additional effort for developers each time they want to make use of a new deep
learning accelerator. Furthermore, the flexibility of the device is bound to the
architecture itself, as well as to the functionality of the runtime
environment.
In this paper we propose a toolflow using TensorFlow as the frontend, thus
offering developers the opportunity to work in a familiar environment. On the
backend we use an FPGA, which is addressable via an HSA runtime environment. In
this way we are able to hide the complexity of controlling new hardware from
the user, while at the same time maintaining a high amount of flexibility. This
can be achieved by our HSA toolflow, since the hardware is not statically
configured with the structure of the network. Instead, it can be dynamically
reconfigured at runtime with the respective kernels executed by the network,
and simultaneously from other sources, e.g., OpenCL/OpenMP.
|
Software of Unknown Provenance, SOUP, refers to a software component that is
already developed and widely available from a third party, and that has not
been developed to be integrated into a medical device. From a regulatory
perspective, SOUP software requires special considerations, as the developers'
obligations related to design and implementation do not apply to it. In this
paper, we consider the implications of extending the concept of SOUP to machine
learning (ML) models. As our contribution, we propose practical means to manage
the added complexity of third-party ML models in regulated development.
|
The set \[ \overline{\mathbb{E}}= \{ x \in {\mathbb{C}}^3: \quad 1-x_1 z -
x_2 w + x_3 zw \neq 0 \mbox{ whenever } |z| < 1, |w| < 1 \} \] is called the
tetrablock and has intriguing complex-geometric properties. It is polynomially
convex, nonconvex and starlike about $0$. It has a group of automorphisms
parametrised by ${\mathrm{Aut}~} {\mathbb{D}} \times {\mathrm{Aut}~}
{\mathbb{D}} \times {\mathbb{Z}}_2$ and its distinguished boundary
$b\overline{\mathbb{E}}$ is homeomorphic to the solid torus
$\overline{\mathbb{D}} \times {\mathbb{T}}$. It has a special subvariety
\[\mathcal{R}_{\overline{\mathbb{E}}} = \big\{ (x_{1}, x_{2}, x_{3}) \in
\overline{\mathbb{E}} : x_{1}x_{2}=x_{3} \big\}, \] called the royal variety of
$\overline{\mathbb{E}}$, which is a complex geodesic of ${\mathbb{E}}$ that is
invariant under all automorphisms of ${\mathbb{E}}$. We exploit this geometry
to develop an explicit and detailed structure theory for the rational maps from
the unit disc ${\mathbb{D}}$ to $\overline{\mathbb{E}}$ that map the unit
circle ${\mathbb{T}}$ to the distinguished boundary $b\overline{\mathbb{E}}$ of
$\overline{\mathbb{E}}$. Such maps are called rational
$\overline{\mathbb{E}}$-inner functions. We show that, for each nonconstant
rational $\overline{\mathbb{E}}$-inner function $x$, either
$x(\overline{\mathbb{D}}) \subseteq
\mathcal{R}_{\overline{\mathbb{E}}} \cap \overline{\mathbb{E}}$ or
$x(\overline{\mathbb{D}})$ meets $\mathcal{R}_{\overline{\mathbb{E}}}$ exactly
$\deg(x)$ times.
We study convex subsets of the set $\mathcal{J}$ of all rational
$\overline{\mathbb{E}}$-inner functions and extreme points of $\mathcal{J}$.
|
A strong backdoor in a formula $\phi$ of propositional logic to a tractable
class $\mathcal{C}$ of formulas is a set $B$ of variables of $\phi$ such that
every assignment of the variables in $B$ results in a formula from
$\mathcal{C}$. Strong backdoors of small size or with a good structure, e.g.
with small backdoor treewidth, lead to efficient solutions for the
propositional satisfiability problem SAT. In this paper we propose the new
notion of recursive backdoors, which is inspired by the observation that in
order to solve SAT we can independently recurse into the components that are
created by partial assignments of variables. The quality of a recursive
backdoor is measured by its recursive backdoor depth. Similar to the concept of
backdoor treewidth, recursive backdoors of bounded depth include backdoors of
unbounded size that have a certain treelike structure. However, the two
concepts are incomparable and our results yield new tractability results for
SAT.
|
We study the action of the multiplicative group generated by two prime
numbers in $\mathbf{Z}/Q\mathbf{Z}$. More specifically, we study returns to the
set $([-Q^\varepsilon,Q^\varepsilon]\cap \mathbf{Z})/Q\mathbf{Z}$. This is
intimately related to the problem of bounding the greatest common divisor of
$S$-unit differences, which we revisit. Our main tool is the $S$-adic subspace
theorem.
|
We study cardinality-constrained optimization problems (CCOP) in general
position, i.e., those optimization-related properties that are fulfilled for a
dense and open subset of their defining functions. We show that the well-known
cardinality-constrained linear independence constraint qualification (CC-LICQ)
is generic in this sense. For M-stationary points we define nondegeneracy and
show that it is a generic property too. In particular, the sparsity constraint
turns out to be active at all minimizers of a generic CCOP. Moreover, we
describe the global structure of CCOP in the sense of Morse theory, emphasizing
the strength of the generic approach. Here, we prove that multiple cells need
to be attached, each of dimension coinciding with the proposed M-index of
nondegenerate M-stationary points. Beyond this generic viewpoint, we study
singularities of CCOP. For that, the relation between nondegeneracy and strong
stability in the sense of Kojima (1980) is examined. We show that nondegeneracy
implies the latter, while the reverse implication is in general not true. To
fill the gap, we fully characterize the strong stability of M-stationary points
under CC-LICQ by first- and second-order information of CCOP defining
functions. Finally, we compare nondegeneracy and strong stability of
M-stationary points with second-order sufficient conditions recently introduced
in the literature.
|
The ultraconfinement of light in plasmonic modes puts their effective
wavelength close to the mean free path of electrons inside the metal electron
gas. The Drude model, which cannot take the repulsive interactions of electrons
into account, then clearly begins to show its limits. At an intermediate length
scale where a full quantum treatment is computationally prohibitive, the
semiclassical hydrodynamic model, intrinsically non-local, has proven
successful. Here we generalize the expression for the absorption volume density
and the reciprocity theorem in the framework of this hydrodynamic model. We
validate numerically these generalized theorems and show that using classical
expressions instead leads to large discrepancies.
|
We study multi-agent reinforcement learning (MARL) in infinite-horizon
discounted zero-sum Markov games. We focus on the practical but challenging
setting of decentralized MARL, where agents make decisions without coordination
by a centralized controller, but only based on their own payoffs and local
actions executed. The agents need not observe the opponent's actions or
payoffs, possibly being even oblivious to the presence of the opponent, nor be
aware of the zero-sum structure of the underlying game, a setting also referred
to as radically uncoupled in the literature of learning in games. In this
paper, we develop a radically uncoupled Q-learning dynamics that is both
rational and convergent: the learning dynamics converges to the best response
to the opponent's strategy when the opponent follows an asymptotically
stationary strategy; when both agents adopt the learning dynamics, they
converge to the Nash equilibrium of the game. The key challenge in this
decentralized setting is the non-stationarity of the environment from an
agent's perspective, since both her own payoffs and the system evolution depend
on the actions of other agents, and each agent adapts her policies
simultaneously and independently. To address this issue, we develop a
two-timescale learning dynamics where each agent updates her local Q-function
and value function estimates concurrently, with the latter happening at a
slower timescale.
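A minimal single-state sketch of the two-timescale idea described above, with a fast Q-update and a slow value-function update whose stepsize ratio vanishes; this illustrates the mechanism only and is not the paper's exact dynamics (the payoff values and temperature are arbitrary):

```python
import numpy as np

def softmax(q, tau=0.1):
    z = np.exp((q - q.max()) / tau)
    return z / z.sum()

n_actions, gamma = 3, 0.9
Q = np.zeros(n_actions)          # local Q-values (single-state illustration)
v = 0.0                          # slow value-function estimate
rng = np.random.default_rng(0)

for t in range(1, 5001):
    alpha = 1.0 / t ** 0.6       # fast stepsize
    beta = 1.0 / t ** 0.9        # slow stepsize: beta/alpha -> 0
    a = rng.choice(n_actions, p=softmax(Q))
    # The agent observes only its own payoff r, never the opponent's action.
    r = rng.normal(loc=[1.0, 0.5, 0.2][a])
    Q[a] += alpha * (r + gamma * v - Q[a])    # fast local Q update
    v += beta * (softmax(Q) @ Q - v)          # slow value update

print(np.round(Q, 3), round(v, 3))
```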
|
Recently, normalizing flows (NFs) have demonstrated state-of-the-art
performance on modeling 3D point clouds while allowing sampling with arbitrary
resolution at inference time. However, these flow-based models still require
long training times and large models for representing complicated geometries.
This work enhances their representational power by applying mixtures of NFs to
point clouds. We show that in this more general framework each component learns
to specialize in a particular subregion of an object in a completely
unsupervised fashion. By instantiating each mixture component with a
comparatively small NF we generate point clouds with improved details compared
to single-flow-based models while using fewer parameters and considerably
reducing the inference runtime. We further demonstrate that by adding data
augmentation, individual mixture components can learn to specialize in a
semantically meaningful manner. We evaluate mixtures of NFs on generation,
autoencoding and single-view reconstruction based on the ShapeNet dataset.
|
Over the past several decades, single-photon path interference in the
double-slit experiment has been well demonstrated, supporting the particle
nature of photons in accordance with complementarity theory, where the quantum
mechanical interpretation of the single-photon interference fringe is given by
the Born rule for complex amplitudes in measurements. Unlike most conventional
methods using entangled photon pairs, here a classical coherent light source is
directly applied to demonstrate the same self-interference phenomenon. Instead
of double slits, we use a Mach-Zehnder interferometer, where the
resulting self-interference fringe of coherent single photons is exactly the
same as that of the coherent photon ensemble of the laser, demonstrating that
the classical coherence based on the wave nature is rooted in the single photon
self-interference. This unexpected result seems to contradict our common
understanding of decoherence phenomena caused by multi-wave interference among
bandwidth-distributed photons in coherence optics.
|
Let G be a simple complex algebraic group and let K be a reductive subgroup
of G such that the coordinate ring of G/K is a multiplicity free G-module. We
consider the G-algebra structure of C[G/K], and study the decomposition into
irreducible summands of the product of irreducible G-submodules in C[G/K]. When
the spherical roots of G/K generate a root system of type A we propose a
conjectural decomposition rule, which relies on a conjecture of Stanley on the
multiplication of Jack symmetric functions. With the exception of one case, we
show that the rule holds true whenever the root system generated by the
spherical roots of G/K is a direct sum of rank-one subsystems.
|
We study the decay phase of solar flares in several spectral bands using a
method based on that successfully applied to white light flares observed on an
M4 dwarf. We selected and processed 102 events detected in the Sun-as-a-star
flux obtained with SDO/AIA images in the 1600~{\AA} and 304~{\AA} channels and
54 events detected in the 1700~{\AA} channel. The main criterion for the
selection of time profiles was a slow, continuous flux decay without
significant new bursts. The obtained averaged time profiles were fitted with
analytical templates, using different time intervals, that consisted of a
combination of two independent exponents or a broken power law. The average
flare profile observed in the 1700~{\AA} channel decayed more slowly than the
average flare profile observed on the M4 dwarf. As the 1700~{\AA} emission is
associated with a similar temperature to that usually ascribed to M dwarf
flares, this implies that the M dwarf flare emission comes from a denser
layer than the solar flare emission in the 1700~{\AA} band. The cooling processes
in solar flares were best described by the two-exponential model, fitted over
the intervals $t_1=[0, 0.5]\,t_{1/2}$ and $t_2=[3, 10]\,t_{1/2}$, where
$t_{1/2}$ is the time taken for the profile to decay to half the maximum value.
The broken power law
model provided a good fit to the first decay phase, as it was able to account
for the impact of chromospheric plasma evaporation, but it did not successfully
fit the second decay phase.
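For illustration, the two-exponential template fit over separate early and late windows can be set up as below; the synthetic time profile, noise level, and initial guesses are stand-ins, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a1, tau1, a2, tau2):
    """Sum of two independent exponentials, the decay template above."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 400)                   # time in units of t_half
flux = two_exp(t, 0.7, 0.6, 0.3, 5.0) + rng.normal(0, 0.01, t.size)

# Fit only on the two windows t1=[0, 0.5] and t2=[3, 10] (in t_half units).
mask = (t <= 0.5) | ((t >= 3) & (t <= 10))
popt, _ = curve_fit(two_exp, t[mask], flux[mask],
                    p0=(0.5, 1.0, 0.5, 5.0), maxfev=10000)
print("a1, tau1, a2, tau2 =", np.round(popt, 3))
```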
|
As various databases of facial expressions have been made accessible over the
last few decades, the Facial Expression Recognition (FER) task has attracted
considerable interest. The multiple sources of the available databases raise
several challenges for the facial expression recognition task. These challenges
are usually addressed by Convolutional Neural Network (CNN) architectures.
Different from CNN models, a
Transformer model based on attention mechanism has been presented recently to
address vision tasks. One of the major issues with Transformers is the need for
large amounts of training data, while most FER databases are limited compared
to other vision applications. Therefore, in this paper we propose to learn a
vision Transformer jointly with a Squeeze-and-Excitation (SE) block for the FER
task. The proposed method is evaluated on several publicly available FER
databases, including CK+, JAFFE, RAF-DB, and SFEW. Experiments demonstrate that
our model
outperforms state-of-the-art methods on CK+ and SFEW and achieves competitive
results on JAFFE and RAF-DB.
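A standard Squeeze-and-Excitation block, the SE component named above, can be sketched as follows; how it is wired into the vision Transformer is not specified here, so the placement on feature maps is our assumption:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block (Hu et al.): global average pooling
    ("squeeze") followed by a two-layer gating MLP ("excitation") that
    rescales channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (batch, channels, H, W)
        s = x.mean(dim=(2, 3))             # squeeze: (batch, channels)
        w = self.fc(s).unsqueeze(-1).unsqueeze(-1)
        return x * w                       # excitation: channel-wise rescaling

feat = torch.randn(8, 256, 14, 14)         # hypothetical feature maps
print(SEBlock(256)(feat).shape)            # torch.Size([8, 256, 14, 14])
```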
|
In this work, we provide the design and implementation of a switch-assisted
congestion control algorithm for data center networks (DCNs). In particular, we
provide a prototype of the switch-driven congestion control algorithm and
deploy it in a real data center. The prototype is based on a few simple
modifications to the switch software. The modifications imposed by the
algorithm on the switch are to enable the switch to modify the TCP
receive-window field in the packet headers. By doing so, the algorithm can
enforce a pre-calculated (or target) rate to limit the sending rate at the
sources. Therefore, the algorithm requires no modifications to the TCP source
or receiver code, which is considered out of the DCN operators' control (e.g.,
in the public cloud, where the VM is maintained by the tenant). This paper
describes in detail two implementations, one as a Linux kernel module and the
second as an added feature to the well-known software switch, Open vSwitch.
Then we present evaluation results based on experiments of the deployment of
both designs in a small testbed to demonstrate the effectiveness of the
proposed technique in achieving high throughput, good fairness, and short flow
completion times for delay-sensitive flows.
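The core per-packet operation, rewriting the TCP receive-window field and letting checksums be recomputed, can be illustrated with scapy; this is a user-space sketch of the idea, not the paper's switch-software implementation, and the target window value is hypothetical:

```python
from scapy.all import IP, TCP, raw

TARGET_WINDOW = 8192   # hypothetical value derived from a target rate

def clamp_rwnd(pkt):
    """Clamp the TCP receive-window field of a passing segment so the
    source throttles its sending rate to the pre-calculated target."""
    if pkt.haslayer(TCP) and pkt[TCP].window > TARGET_WINDOW:
        pkt[TCP].window = TARGET_WINDOW
        # Invalidate checksums so scapy recomputes them on rebuild.
        del pkt[TCP].chksum
        del pkt[IP].chksum
    return pkt.__class__(raw(pkt))

ack = IP(dst="10.0.0.2") / TCP(sport=5001, dport=34567, flags="A", window=65535)
print(clamp_rwnd(ack)[TCP].window)   # 8192
```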
|
We introduce a set of algorithms (Het-node2vec) that extend the original
node2vec node-neighborhood sampling method to heterogeneous multigraphs, i.e.
networks characterized by multiple types of nodes and edges. The resulting
random walk samples capture both the structural characteristics of the graph
and the semantics of the different types of nodes and edges. The proposed
algorithms can focus their attention on specific node or edge types, allowing
accurate representations also for underrepresented types of nodes/edges that
are of interest for the prediction problem under investigation. These rich and
well-focused representations can boost unsupervised and supervised learning on
heterogeneous graphs.
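A minimal sketch of a type-aware walk step in the spirit of Het-node2vec: neighbors whose node type belongs to a designated focus set are sampled with boosted probability, so underrepresented types appear more often in the walks used for embedding. The toy graph, the `boost` parameter, and the function names are illustrative, not the paper's actual algorithm or API:

```python
import random

def biased_step(graph, node_type, current, focus_types, boost=4.0):
    """One random-walk step that up-weights neighbors of the focus types."""
    nbrs = graph[current]
    weights = [boost if node_type[v] in focus_types else 1.0 for v in nbrs]
    return random.choices(nbrs, weights=weights, k=1)[0]

# Toy heterogeneous graph with two node types.
graph = {"a": ["b", "g1"], "b": ["a", "g2"], "g1": ["a", "g2"], "g2": ["b", "g1"]}
node_type = {"a": "protein", "b": "protein", "g1": "gene", "g2": "gene"}

random.seed(0)
walk, cur = ["a"], "a"
for _ in range(8):
    cur = biased_step(graph, node_type, cur, focus_types={"gene"})
    walk.append(cur)
print(walk)   # walk visits "gene" nodes preferentially
```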
|
Understanding the internals of Integrated Circuits (ICs), referred to as
Hardware Reverse Engineering (HRE), is of interest to both legitimate and
malicious parties. HRE is a complex process in which semi-automated steps are
interwoven with human sense-making processes. Currently, little is known about
the technical and cognitive processes which determine the success of HRE.
This paper performs an initial investigation on how reverse engineers solve
problems, how manual and automated analysis methods interact, and which
cognitive factors play a role. We present the results of an exploratory
behavioral study with eight participants that was conducted after they had
completed a 14-week training. We explored the validity of our findings by
comparing them with the behavior (strategies applied and solution time) of an
HRE expert. The participants were observed while solving a realistic HRE task.
We tested cognitive abilities of our participants and collected large sets of
behavioral data from log files. By comparing the least and most efficient
reverse engineers, we were able to observe successful strategies. Moreover, our
analyses suggest a phase model for reverse engineering, consisting of three
phases. Our descriptive results further indicate that the cognitive factor
Working Memory (WM) might play a role in efficiently solving HRE problems. Our
exploratory study lays the foundation for future research on this topic and
outlines ideas for designing cognitively difficult countermeasures ("cognitive
obfuscation") against HRE.
|
We present a systematic numerical modeling investigation of magnetization
dynamics and thermal magnetic moment fluctuations of single magnetic domain
nanoparticles in a configuration applicable to enhancing inductive magnetic
resonance detection signal-to-noise ratio (SNR). Previous proposals for
oriented anisotropic single-magnetic-domain nanoparticle amplification of
magnetic flux in an MRI coil focused only on the coil pick-up voltage signal
enhancement. Here we extend the analysis to the numerical evaluation of the SNR
by modeling the inherent thermal magnetic noise introduced into the detection
coil by the insertion of such anisotropic nanoparticle-filled coil core. We
utilize the Landau-Lifshitz-Gilbert equation under the Stoner-Wohlfarth single
magnetic domain (macrospin) assumption to simulate the magnetization dynamics
in such nanoparticles due to AC drive field as well as thermal noise. These
simulations are used to evaluate the nanoparticle configurations and shape
effects on enhancing SNR. Finally, we explore the effect of narrow band
filtering of the broadband magnetic moment thermal fluctuation noise on the
SNR. Our results provide the impetus for relatively simple modifications to
existing MRI systems for achieving enhanced detection SNR in scanners with
modest polarizing magnetic fields.
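For illustration, a macrospin LLG integration under the Stoner-Wohlfarth assumption mentioned above can be sketched as follows; the explicit Euler scheme, static field, and material parameters are simplified stand-ins (the study also includes AC drive and thermal noise terms omitted here):

```python
import numpy as np

gamma, alpha = 1.76e11, 0.05        # gyromagnetic ratio (rad/s/T), damping
dt, steps = 1e-13, 20000
H = np.array([0.0, 0.0, 0.1])       # effective field (T), static along z

def llg_rhs(m, H):
    # Explicit LLG form: dm/dt = -g/(1+a^2) [ m x H + a m x (m x H) ]
    mxH = np.cross(m, H)
    return -gamma / (1 + alpha**2) * (mxH + alpha * np.cross(m, mxH))

m = np.array([1.0, 0.0, 0.0])       # start perpendicular to the field
for _ in range(steps):
    m = m + dt * llg_rhs(m, H)
    m /= np.linalg.norm(m)          # keep |m| = 1 (macrospin constraint)

print(np.round(m, 4))               # precesses and relaxes toward +z
```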
|
We consider the problem of constrained Markov Decision Process (CMDP) where
an agent interacts with a unichain Markov Decision Process. At every
interaction, the agent obtains a reward. Further, there are $K$ cost functions.
The agent aims to maximize the long-term average reward while simultaneously
keeping the $K$ long-term average costs lower than a certain threshold. In this
paper, we propose CMDP-PSRL, a posterior-sampling-based algorithm with which
the agent can learn optimal policies to interact with the CMDP. Further, for
MDP with $S$ states, $A$ actions, and diameter $D$, we prove that following
CMDP-PSRL algorithm, the agent can bound the regret of not accumulating rewards
from the optimal policy by $\tilde{O}(\mathrm{poly}(DSA)\sqrt{T})$. Further, we
show that the violation of any of the $K$ constraints is also bounded by
$\tilde{O}(\mathrm{poly}(DSA)\sqrt{T})$. To the best of our knowledge, this is
the first work to obtain $\tilde{O}(\sqrt{T})$ regret bounds for ergodic MDPs
with long-term average constraints.
|
Network pruning is an effective approach to reduce network complexity with
acceptable performance compromise. Existing studies achieve the sparsity of
neural networks via time-consuming weight tuning or complex search on networks
with expanded width, which greatly limits the applications of network pruning.
In this paper, we show that high-performing and sparse sub-networks without the
involvement of weight tuning, termed "lottery jackpots", exist in pre-trained
models with unexpanded width. For example, we obtain a lottery jackpot that has
only 10% of the parameters and still reaches the performance of the original
dense VGGNet-19 on CIFAR-10, without any modifications to the pre-trained
weights.
Furthermore, we observe that the sparse masks derived from many existing
pruning criteria have a high overlap with the searched mask of our lottery
jackpot, among which, the magnitude-based pruning results in the most similar
mask with ours. Based on this insight, we initialize our sparse mask using the
magnitude-based pruning, resulting in at least 3x cost reduction on the lottery
jackpot search while achieving comparable or even better performance.
Specifically, our magnitude-based lottery jackpot removes 90% weights in
ResNet-50, while it easily obtains more than 70% top-1 accuracy using only 10
searching epochs on ImageNet. Our code is available at
https://github.com/zyxxmu/lottery-jackpots.
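A minimal sketch of the magnitude-based mask initialization described above: rank all pre-trained weights globally by magnitude and keep the largest fraction. The layer shapes here are hypothetical stand-ins:

```python
import torch

def magnitude_mask(weights, sparsity=0.9):
    """Global magnitude pruning: zero out the smallest `sparsity` fraction
    of all weights, returning one binary mask per layer."""
    flat = torch.cat([w.abs().flatten() for w in weights])
    k = int(sparsity * flat.numel())
    threshold = torch.kthvalue(flat, k).values       # k-th smallest magnitude
    return [(w.abs() > threshold).float() for w in weights]

# Stand-ins for pre-trained layer weights (hypothetical shapes).
weights = [torch.randn(64, 3, 3, 3), torch.randn(128, 64, 3, 3)]
masks = magnitude_mask(weights, sparsity=0.9)
kept = sum(m.sum().item() for m in masks) / sum(m.numel() for m in masks)
print(f"kept fraction: {kept:.3f}")                   # ~0.10
```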
|
Density inhomogeneities are ubiquitous in space and astrophysical plasmas, in
particular at contact boundaries between different media. They often correspond
to regions that exhibit strong dynamics over a wide range of spatial and
temporal scales. Indeed, density inhomogeneities are a source of free energy
that can drive various instabilities, such as the lower-hybrid-drift
instability, which in turn transfers energy to the particles through
wave-particle interactions and eventually heats the plasma. We aim at
quantifying the efficiency of the lower-hybrid-drift instability to accelerate
and/or heat electrons parallel to the ambient magnetic field. We combine two
complementary methods: full-kinetic and quasilinear models. We report
self-consistent evidence of electron acceleration driven by the development of
the lower-hybrid-drift instability using 3D-3V full-kinetic numerical
simulations. The efficiency of the observed acceleration cannot be explained by
standard quasilinear theory. For this reason, we develop an extended
quasilinear model able to quantitatively predict the interaction between
lower-hybrid fluctuations and electrons on long time scales, now in agreement
with full-kinetic simulations results. Finally, we apply this new, extended
quasilinear model to a specific inhomogeneous space plasma boundary: the
magnetopause of Mercury, and we discuss our quantitative predictions of
electron acceleration in support of future BepiColombo observations.
|
Deep learning and data-driven approaches have shown great potential in
scientific domains. The promise of data-driven techniques relies on the
availability of a large volume of high-quality training datasets. Due to the
high cost of obtaining data through expensive physical experiments,
instruments, and simulations, data augmentation techniques for scientific
applications have emerged as a new direction for obtaining scientific data
recently. However, existing data augmentation techniques, originating from
computer vision, yield physically unacceptable data samples that are not
helpful for the domain problems we are interested in. In this paper, we
develop new physics-informed data augmentation techniques based on
convolutional neural networks. Specifically, our generative models leverage
different physics knowledge (such as governing equations, observable
perception, and physics phenomena) to improve the quality of the synthetic
data. To validate the effectiveness of our data augmentation techniques, we
apply them to solve a subsurface seismic full-waveform inversion using
simulated CO$_2$ leakage data. Our interest is to invert for subsurface
velocity models associated with very small CO$_2$ leakage. We validate the
performance of our methods using comprehensive numerical tests. Via comparison
and analysis, we show that data-driven seismic imaging can be significantly
enhanced by using our physics-informed data augmentation techniques.
Particularly, the imaging quality has been improved by 15% in test scenarios of
general-sized leakage and 17% in small-sized leakage when using an augmented
training set obtained with our techniques.
|
The bimodal behavior of the order parameter is studied in the framework of the
Boltzmann-Uehling-Uhlenbeck (BUU) transport model. To this end, a simplified
yet accurate method within the BUU model is used, which allows the calculation
of fluctuations in systems much larger than previously considered feasible in a
well-known existing model. It is observed that, depending on the
projectile energy and centrality of the reaction, both entrance channel and
exit channel effects can be at the origin of the experimentally observed
bimodal behavior. Both dynamical and statistical bimodality mechanisms are
associated in the theoretical model to different time scales of the reaction,
and to different energy regimes.
|
We provide a generic algorithm for constructing formulae that distinguish
behaviourally inequivalent states in systems of various transition types such
as nondeterministic, probabilistic or weighted; genericity over the transition
type is achieved by working with coalgebras for a set functor in the paradigm
of universal coalgebra. For every behavioural equivalence class in a given
system, we construct a formula which holds precisely at the states in that
class. The algorithm instantiates to deterministic finite automata, transition
systems, labelled Markov chains, and systems of many other types. The ambient
logic is a modal logic featuring modalities that are generically extracted from
the functor; these modalities can be systematically translated into custom sets
of modalities in a postprocessing step. The new algorithm builds on an existing
coalgebraic partition refinement algorithm. It runs in time $\mathcal{O}((m+n)
\log n)$ on systems with $n$ states and $m$ transitions, and the same
asymptotic bound applies to the dag size of the formulae it constructs. This
improves the bounds on run time and formula size compared to previous
algorithms even for previously known specific instances, viz. transition
systems and Markov chains; in particular, the best previous bound for
transition systems was $\mathcal{O}(m n)$.
|
We present a new lower bound on the spectral gap of the Glauber dynamics for
the Gibbs distribution of a spectrally independent $q$-spin system on a graph
$G = (V,E)$ with maximum degree $\Delta$. Notably, for several interesting
examples, our bound covers the entire regime of $\Delta$ excluded by arguments
based on coupling with the stationary distribution. As concrete applications,
by combining our new lower bound with known spectral independence computations
and known coupling arguments:
(1) We show that for a triangle-free graph $G = (V,E)$ with maximum degree
$\Delta \geq 3$, the Glauber dynamics for the uniform distribution on proper
$k$-colorings with $k \geq (1.763\dots + \delta)\Delta$ colors has spectral gap
$\tilde{\Omega}_{\delta}(|V|^{-1})$. Previously, such a result was known either
if the girth of $G$ is at least $5$ [Dyer et~al., FOCS 2004], or under
restrictions on $\Delta$ [Chen et~al., STOC 2021; Hayes-Vigoda, FOCS 2003].
(2) We show that for a regular graph $G = (V,E)$ with degree $\Delta \geq 3$
and girth at least $6$, and for any $\varepsilon, \delta > 0$, the partition
function of the hardcore model with fugacity $\lambda \leq
(1-\delta)\lambda_{c}(\Delta)$ may be approximated within a
$(1+\varepsilon)$-multiplicative factor in time
$\tilde{O}_{\delta}(n^{2}\varepsilon^{-2})$. Previously, such a result was
known if the girth is at least $7$ [Efthymiou et~al., SICOMP 2019].
(3) We show for the binomial random graph $G(n,d/n)$ with $d = O(1)$, with
high probability, an approximately uniformly random matching may be sampled in
time $O_{d}(n^{2+o(1)})$. This improves the corresponding running time of
$\tilde{O}_{d}(n^{3})$ due to [Jerrum-Sinclair, SICOMP 1989; Jerrum, 2003].
|
Myntra, an online fashion e-commerce company based in India, is a market leader
in this space. At Myntra, customer experience is paramount, and a significant
portion of our resources is dedicated to it. Here we
describe an algorithm that identifies eligible customers to enable preferential
product return processing for them by Myntra. We declare the group of
aforementioned eligible customers on the platform as elite customers. Our
algorithm to identify eligible/elite customers is based on sound principles of
game theory. It is simple, easy to implement and scalable.
|
We present a new end-to-end learning framework to obtain detailed and
spatially coherent reconstructions of multiple people from a single image.
Existing multi-person methods suffer from two main drawbacks: they are often
model-based and therefore cannot capture accurate 3D models of people with
loose clothing and hair; or they require manual intervention to resolve
occlusions or interactions. Our method addresses both limitations by
introducing the first end-to-end learning approach to perform model-free
implicit reconstruction for realistic 3D capture of multiple clothed people in
arbitrary poses (with occlusions) from a single image. Our network
simultaneously estimates the 3D geometry of each person and their 6DOF spatial
locations, to obtain a coherent multi-human reconstruction. In addition, we
introduce a new synthetic dataset that depicts images with a varying number of
inter-occluded humans and a variety of clothing and hair styles. We demonstrate
robust, high-resolution reconstructions on images of multiple humans with
complex occlusions, loose clothing and a large variety of poses and scenes. Our
quantitative evaluation on both synthetic and real-world datasets demonstrates
state-of-the-art performance with significant improvements in the accuracy and
completeness of the reconstructions over competing approaches.
|
In this work, we investigate the question of how knowledge about the
expectations $\mathbb{E}(f_i(X))$ of a random vector $X$ translates into
inequalities for
$\mathbb{E}(g(X))$ for given functions $f_i$, $g$ and a random vector $X$ whose
support is contained in some set $S\subseteq \mathbb{R}^n$. We show that there
is a connection between the problem of obtaining tight expectation inequalities
in this context and properties of convex hulls, allowing us to rewrite it as an
optimization problem. The results of these optimization problems not only
arrive at sharp bounds for $\mathbb{E}(g(X))$ but in some cases also yield
discrete probability measures where equality holds.
We develop an analytical approach that is particularly suited for studying
the Jensen gap problem when the known information consists of the mean and
variance, as well as a numerical approach for the general case that reduces the
problem to a convex optimization; in a sense, this extends known results about
the moment problem.
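A classical one-dimensional instance of this setup, included here for orientation: if only the mean $\mu=\mathbb{E}(X)$ is known (i.e. $f_1(x)=x$) and the support is $S=[a,b]$, then for convex $g$ the tight bounds and the discrete measure attaining the upper one are

```latex
g(\mu)\;\le\;\mathbb{E}\bigl(g(X)\bigr)\;\le\;
\frac{b-\mu}{b-a}\,g(a)+\frac{\mu-a}{b-a}\,g(b),
% the upper bound is attained by the two-point measure
% P(X=a) = (b-\mu)/(b-a),  P(X=b) = (\mu-a)/(b-a),
% matching the statement that the optimizers are discrete distributions.
```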
|
Linear time-varying (LTV) systems are widely used for modeling real-world
dynamical systems due to their generality and simplicity. Providing stability
guarantees for LTV systems is one of the central problems in control theory.
However, existing approaches that guarantee stability typically lead to
significantly sub-optimal cumulative control cost in online settings where only
current or short-term system information is available. In this work, we propose
an efficient online control algorithm, COvariance Constrained Online Linear
Quadratic (COCO-LQ) control, that guarantees input-to-state stability for a
large class of LTV systems while also minimizing the control cost. The proposed
method incorporates a state covariance constraint into the semi-definite
programming (SDP) formulation of the LQ optimal controller. We empirically
demonstrate the performance of COCO-LQ in both synthetic experiments and a
power system frequency control example.
|
The circular restricted three body problem, which considers the dynamics of
an infinitesimal particle in the presence of the gravitational interaction with
two massive bodies moving on circular orbits about their common center of mass,
is a very useful model for investigating the behavior of real astronomical
objects in the Solar System. In such a system, there are five Lagrangian
equilibrium points, and one important characteristic of the motion is the
existence of linearly stable equilibria at the two equilibrium points that form
equilateral triangles with the primaries, in the plane of the primaries' orbit.
We analyze the stability of motion in the restricted three body problem by
using the concept of Jacobi stability, as introduced and developed in the
Kosambi-Cartan-Chern (KCC) theory. The KCC theory is a differential geometric
approach to the variational equations describing the deviation of the whole
trajectory of a dynamical system with respect to the nearby ones. We obtain the
general result that, from the point of view of the KCC theory and of Jacobi
stability, all five Lagrangian equilibrium points of the restricted three body
problem are unstable.
|
Grain boundaries (GBs), an important constituent of polycrystalline
materials, have a wide range of manifestations and significantly affect the
properties of materials. Fully understanding the effects of GBs has been
stalled by the lack of complete knowledge of their structures and energetics.
Here, for
the first time, by taking graphene as an example, we propose an analytical
energy functional of GBs in angle space. We find that an arbitrary GB can be
characterized by a geometric combination of symmetric GBs that follow the
principle of uniform distribution of their dislocation cores in straight lines.
Furthermore, we determine the elusive kinetic effects on GBs from the
difference between experimental statistics and energy-dependent thermodynamic
effects. This study not only presents an analytical energy functional of GBs
which could also be extended to other two-dimensional materials, but also sheds
light on understanding the kinetic effects of GBs in material synthesizing
processes.
|
The subclass of magnetic Cataclysmic Variables (CVs) known as asynchronous
polars is still relatively poorly understood. An asynchronous polar is a
polar in which the spin period of the white dwarf is either shorter or longer
than the binary orbital period (typically within a few percent). The
asynchronous polars have been disproportionately detected in soft gamma-ray
observations, leading us to consider the possibility that they have
intrinsically harder X-ray spectra. We compared standard and asynchronous
polars in order to examine the relationship between a CV's synchronization
status and its spectral shape. Using the entire sample of asynchronous polars,
we find that the asynchronous polars may, indeed, have harder spectra, but that
the result is not statistically significant.
|
Borrowing ideas from elliptic complex geometry, we approach M-theory
compactifications on real toric fibrations. More precisely, we explore real toric
equations rather than complex ones exploited in F-theory and related dual
models. These geometries have been built by moving real circles over real
bases. Using topological changing behaviors, we unveil certain data associated
with gauge sectors relying on affine Lie symmetries.
|
Machine Learning (ML)-based network intrusion detection systems bring many
benefits for enhancing the cybersecurity posture of an organisation. Many
systems have been designed and developed in the research community, often
achieving a close to perfect detection rate when evaluated using synthetic
datasets. However, the large body of academic research has not often
translated into practical deployments. There are several causes contributing
to the wide gap between research and production, such as the limited ability
to comprehensively evaluate ML models and a lack of understanding of
internal ML operations. This paper narrows the gap by evaluating the
generalisability of a common feature set to different network environments and
attack scenarios. Therefore, two feature sets (NetFlow and CICFlowMeter) have
been evaluated in terms of detection accuracy across three key datasets, i.e.,
CSE-CIC-IDS2018, BoT-IoT, and ToN-IoT. The results show the superiority of the
NetFlow feature set in enhancing the ML models' detection accuracy for various
network attacks. In addition, due to the complexity of the learning models,
SHapley Additive exPlanations (SHAP), an explainable AI methodology, has been
adopted to explain and interpret the classification decisions of ML models. The
Shapley values of two common feature sets have been analysed across multiple
datasets to determine the influence contributed by each feature towards the
final ML prediction.
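
As an illustration of the SHAP step, the sketch below fits a toy flow classifier and ranks NetFlow-style features by mean absolute Shapley value; the feature names and data are invented for illustration, not drawn from the evaluated datasets.

```python
# Illustrative sketch (not the paper's pipeline): rank hypothetical
# NetFlow-style features of a toy flow classifier by mean |SHAP| value.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["IN_BYTES", "OUT_BYTES", "FLOW_DURATION", "TCP_FLAGS"]  # invented
X = rng.random((1000, len(features)))
y = (X[:, 0] + 0.5 * X[:, 2] > 1.0).astype(int)   # synthetic "attack" label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # exact Shapley values for trees
sv = explainer.shap_values(X)
if isinstance(sv, list):     # some shap versions return one array per class
    sv = sv[1]
sv = np.asarray(sv)
if sv.ndim == 3:             # (samples, features, classes) in newer versions
    sv = sv[..., 1]

importance = np.abs(sv).mean(axis=0)    # average influence of each feature
for name, imp in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.4f}")
```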
|
This paper analyzes Floquet topological insulators resulting from the
time-harmonic irradiation of electromagnetic waves on two dimensional materials
such as graphene. We analyze the bulk and edge topologies of approximations to
the evolution of the light-matter interaction. Topologically protected
interface states are created by spatial modulations of the drive polarization
across an interface. In the high-frequency modulation regime, we obtain a
sequence of topologies that apply to different time scales. Bulk-difference
invariants are computed in detail and a bulk-interface correspondence is shown
to apply. We also analyze a high-frequency, high-amplitude modulation
resulting in a large-gap effective topology that remains valid only for
moderately long times.
|
Africa has amazing potential due to natural (such as dark sky) and human
resources for scientific research in astronomy and space science. At the same
time, the continent is still facing many difficulties, and its countries are
now recognising the importance of astronomy, space science and satellite
technologies for improving some of their principal socio-economic challenges.
The development of astronomy in Africa (including Ethiopia) has grown
significantly over the past few years, and never before has it been more
possible to use astronomy for education, outreach, and development. However,
much still remains to be done. This paper will summarise the recent
developments in astronomy research and education in Africa and Ethiopia and
will focus on how, by working together on the development of science and
education, we can fight poverty in the long term and increase our chances of
attaining the United Nations Sustainable Development Goals, for the benefit of
all.
|
Let $D$ be a compact K\"ahler manifold with trivial canonical bundle and
$\Gamma$ be a finite cyclic group of order $m$ acting on $\mathbb{C} \times
D$ by biholomorphisms, where the action on the first factor is generated by
rotation of angle $2\pi /m$. Furthermore, suppose that $\Omega_D$ is a
trivialisation of the canonical bundle such that $\Gamma$ preserves the
holomorphic form $dz \wedge \Omega_D$ on $\mathbb C \times D$, with $z$
denoting the coordinate on $\mathbb{C}$. The main result of this article is the
construction of new examples of gradient steady K\"ahler-Ricci solitons on
certain crepant resolutions of the orbifolds $\left( \mathbb{C}\times D \right)
/ \Gamma$. These new solitons converge exponentially to a Ricci-flat cylinder
$\mathbb{R} \times(\mathbb{S}^1 \times D) / \Gamma$.
|
We consider the difference-of-convex (DC) programming problems whose
objective function is level-bounded. The classical DC algorithm (DCA) is
well-known for solving this kind of problem and returns a critical point.
Recently, de Oliveira and Tcheo incorporated the inertial-force procedure into
DCA (InDCA) for potential acceleration and preventing the algorithm from
converging to a critical point which is not d(directional)-stationary. In this
paper, based on InDCA, we propose two refined inertial DCA schemes (RInDCA)
with enlarged inertial step-sizes for better acceleration. We demonstrate the
subsequential convergence of our refined versions to a critical point. In
addition, by assuming the Kurdyka-Lojasiewicz (KL) property of the objective
function, we establish the sequential convergence of RInDCA. Numerical
simulations on an image restoration problem show the benefit of the enlarged
step-sizes.
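
A minimal sketch of the inertial-DCA idea on a one-dimensional DC toy problem follows; the inertial term below is one simple variant, and the published InDCA/RInDCA step-size rules differ in detail.

```python
# Toy sketch of an inertial DCA iteration on f(x) = 0.5*x**2 - |x|,
# written as g - h with g(x) = 0.5*x**2 and h(x) = |x| (minimizers: +/-1).
import numpy as np

def inertial_dca(x0, gamma=0.3, iters=20):
    x_prev = x = x0
    for _ in range(iters):
        v = np.sign(x) + gamma * (x - x_prev)  # subgradient of h + inertia
        x_prev, x = x, v  # argmin_x g(x) - v*x = v, since g(x) = 0.5*x**2
    return x

print(inertial_dca(0.4))   # converges to a critical point of f (here x ~ 1)
```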
|
Since the Covid-19 pandemic is a global threat to health that few can fully
escape, it has given a unique opportunity to study international reactions to a
common problem. Such reactions can be partly obtained from public posts to
Twitter, allowing investigations of changes in interest over time. This study
analysed English-language Covid-19 tweets mentioning cures, treatments, or
vaccines from 1 January 2020 to 8 April 2021, seeking trends and international
differences. The results have methodological limitations but show a tendency
for countries with a lower human development index score to tweet more about
cures, although they were a minor topic for all countries. Vaccines were
discussed about as much as treatments until July 2020, when they generated more
interest because of developments in Russia. The November 2020 Pfizer-BioNTech
preliminary Phase 3 trial results generated an immediate and sustained sharp
increase, followed by a continuing, roughly linear growth in interest in
vaccines until at least April 2021. Against this background, national
deviations from the average were triggered by country-specific news about
cures, treatments or vaccines. Nevertheless, interest in vaccines in all
countries increased in parallel to some extent, despite substantial
international differences in national regulatory approval and availability. The
results also highlight that unsubstantiated claims about alternative medicine
remedies gained traction in several countries, apparently posing a threat to
public health.
|
In this paper, we study multiple-input multiple-output (MIMO) wireless power
transfer (WPT) systems, where the energy harvester (EH) node is equipped with
multiple nonlinear rectennas. We characterize the optimal transmit strategy by
the optimal distribution of the transmit symbol vector that maximizes the
average harvested power at the EH subject to a constraint on the power budget
of the transmitter. We show that the optimal transmit strategy employs scalar
unit-norm input symbols with arbitrary phase and two beamforming vectors, which
are determined as solutions of a non-convex optimization problem. To solve this
problem, we propose an iterative algorithm based on a two-dimensional grid
search, semidefinite relaxation, and successive convex approximation. Our
simulation results reveal that the proposed MIMO WPT design significantly
outperforms two baseline schemes based on a linear EH model and a single
beamforming vector, respectively. Finally, we show that the average harvested
power grows linearly with the number of rectennas at the EH node and saturates
for a large number of TX antennas.
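
The semidefinite-relaxation step can be illustrated on a toy beamforming problem; the sketch below uses an invented channel matrix and omits the paper's grid search and successive convex approximation, relaxing the rank-one beamformer to a PSD matrix and recovering an approximate beamformer from its dominant eigenvector.

```python
# Toy illustration of the semidefinite-relaxation (SDR) step for beamforming.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(6)
G = rng.normal(size=(4, 4))
H = G @ G.T                                   # toy channel Gram matrix
P = 1.0                                       # transmit power budget

X = cp.Variable((4, 4), PSD=True)             # relaxation of X = w w^T
prob = cp.Problem(cp.Maximize(cp.trace(H @ X)), [cp.trace(X) <= P])
prob.solve()

# Rank-one extraction: the dominant eigenvector approximates the beamformer.
vals, vecs = np.linalg.eigh(X.value)
w = np.sqrt(max(vals[-1], 0.0)) * vecs[:, -1]
print(prob.value, w)
```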
|
Many real-world decision-making tasks require learning causal relationships
between a set of variables. Typical causal discovery methods, however, require
that all variables are observed, which might not be realistic in practice.
Unfortunately, in the presence of latent confounding, recovering causal
relationships from observational data without making additional assumptions is
an ill-posed problem. Fortunately, in practice, additional structure among the
confounders can be expected, one such example being pervasive confounding,
which has been exploited for consistent causal estimation in the special case
of linear causal models. In this paper, we provide a proof and method to
estimate causal relationships in the non-linear, pervasive confounding setting.
The heart of our procedure relies on the ability to estimate the pervasive
confounding variation through a simple spectral decomposition of the observed
data matrix. We derive a DAG score function based on this insight, and
empirically compare our method to existing procedures. We show improved
performance on both simulated and real datasets by explicitly accounting for
both confounders and non-linear effects.
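
The following toy sketch illustrates the spectral idea under the assumption that the pervasive confounder loads on the top singular directions of the data matrix; it is only an illustration, not the paper's full DAG-scoring procedure.

```python
# Minimal sketch: estimate pervasive confounding variation via a spectral
# decomposition and remove it before any downstream causal analysis.
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 500, 10, 1           # samples, variables, latent confounders
H = rng.normal(size=(n, k))    # pervasive confounder affecting all variables
X = H @ rng.normal(size=(k, p)) + 0.3 * rng.normal(size=(n, p))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_deconf = X - (U[:, :k] * s[:k]) @ Vt[:k]   # residual after top-k removal

print("variance explained by top component:", s[0]**2 / np.sum(s**2))
```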
|
Within the mean-field approach, we study the topological Mott transition
in a two band model of spinless fermions on a square lattice at half filling.
We consider the combined effect of the on-site Coulomb repulsion and the
spin-orbit Rashba coupling. The ground state phase diagram is calculated as a
function of the strength of the spin-orbit Rashba coupling and the Coulomb
repulsion. The spin-orbit Rashba coupling leads to a distinct phase of matter,
the topological semimetal. We study a new type of phase transition between the
non-topological insulator and topological semimetal states. The topological
phase is characterized by zero-energy Majorana states, which exist in a
well-defined region of wave vectors and are localized at the boundaries of the
sample. This region of existence shrinks to zero at the Mott transition point.
The zero-energy Majorana states are dispersionless (they can be regarded as
flat bands), and the Chern number and Hall conductance are equal to zero.
|
Polar ice cores are natural archives that play a central role in studies of
the Earth's climate system. A pressing issue is the analysis of the oldest,
highly thinned ice core sections, where the identification of paleoclimate
signals is particularly challenging. For this, state-of-the-art imaging by
laser-ablation inductively-coupled plasma mass spectrometry (LA-ICP-MS) has the
potential to be revolutionary due to its combination of micron-scale 2D
chemical information with visual features. However, the quantitative study of
record preservation in chemical images raises new questions that call for the
expertise of the computer vision community. To illustrate this new
inter-disciplinary frontier, we describe a selected set of key questions. One
critical task is to assess the paleoclimate significance of single line
profiles along the main core axis, which we show is a scale-dependent problem
for which advanced image analysis methods are critical. Another important issue
is the evaluation of post-depositional layer changes, for which the chemical
images provide rich information. Accordingly, the time is ripe to begin an
intensified exchange between the two scientific communities of computer vision
and ice core science. The collaborative building of a new framework for
investigating high-resolution chemical images with automated image analysis
techniques will also benefit the already widespread application of LA-ICP-MS
chemical imaging in the geosciences.
|
A number of recent, low-redshift, lensing measurements hint at a universe in
which the amplitude of lensing is lower than that predicted from the
$\Lambda$CDM model fit to the data of the Planck CMB mission. Here we use the
auto- and cross-correlation signal of unWISE galaxies and Planck CMB lensing
maps to infer cosmological parameters at low redshift. In particular, we
consider three unWISE samples (denoted as "blue", "green" and "red") at median
redshifts $z \sim 0.6$, $1.1$, and $1.5$, which fully cover the
dark-energy-dominated era. Our cross-correlation measurements, with combined significance
$S/N \sim 80$, are used to infer the amplitude of low-redshift fluctuations,
$\sigma_8$; the fraction of matter in the Universe, $\Omega_m$; and the
combination $S_8 \equiv \sigma_8 (\Omega_m / 0.3)^{0.5}$ to which these
low-redshift lensing measurements are most sensitive. The combination of blue,
green and red samples gives a value $S_8=0.784\pm 0.015$, which is fully
consistent with other low-redshift lensing measurements and in 2.4$\sigma$
tension with the CMB predictions from Planck. This is noteworthy, because CMB
lensing probes the same physics as previous galaxy lensing measurements, but
with very different systematics, thus providing an excellent complement to
previous measurements.
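
For a back-of-envelope check of the quoted tension, the sketch below combines the reported $S_8$ with an assumed Planck value (roughly $0.834 \pm 0.016$); the exact figure depends on the full posteriors.

```python
# Back-of-envelope S8 tension (Planck numbers are assumed for illustration).
import math

def S8(sigma8, Omega_m):
    return sigma8 * (Omega_m / 0.3) ** 0.5

s8_unwise, err_unwise = 0.784, 0.015   # value reported in the abstract
s8_planck, err_planck = 0.834, 0.016   # assumed Planck CMB value

tension = abs(s8_planck - s8_unwise) / math.hypot(err_unwise, err_planck)
print(f"tension ~ {tension:.1f} sigma")  # ~2.3, close to the quoted 2.4
```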
|
We study the secure stochastic convex optimization problem. A learner aims to
learn the optimal point of a convex function through sequentially querying a
(stochastic) gradient oracle. In the meantime, there exists an adversary who
aims to free-ride and infer the learning outcome of the learner from observing
the learner's queries. The adversary observes only the points of the queries
but not the feedback from the oracle. The goal of the learner is to optimize
the accuracy, i.e., obtaining an accurate estimate of the optimal point, while
securing her privacy, i.e., making it difficult for the adversary to infer the
optimal point. We formally quantify this tradeoff between learner's accuracy
and privacy and characterize the lower and upper bounds on the learner's query
complexity as a function of desired levels of accuracy and privacy. For the
analysis of lower bounds, we provide a general template based on information
theoretical analysis and then tailor the template to several families of
problems, including stochastic convex optimization and (noisy) binary search.
We also present a generic secure learning protocol that achieves the matching
upper bound up to logarithmic factors.
|
Augmenting the body with artificial limbs controlled concurrently to the
natural limbs has long appeared in science fiction, but recent technological
and neuroscientific advances have begun to make this vision possible. By
allowing individuals to achieve otherwise impossible actions, this movement
augmentation could revolutionize medical and industrial applications and
profoundly change the way humans interact with their environment. Here, we
construct a movement augmentation taxonomy through what is augmented and how it
is achieved. With this framework, we analyze augmentation that extends the
number of degrees-of-freedom, discuss critical features of effective
augmentation such as physiological control signals, sensory feedback and
learning, and propose a vision for the field.
|
Recent works on click-based interactive segmentation have demonstrated
state-of-the-art results by using various inference-time optimization schemes.
These methods are considerably more computationally expensive compared to
feedforward approaches, as they require performing backward passes through a
network during inference and are hard to deploy on mobile frameworks that
usually support only forward passes. In this paper, we extensively evaluate
various design choices for interactive segmentation and discover that new
state-of-the-art results can be obtained without any additional optimization
schemes. Thus, we propose a simple feedforward model for click-based
interactive segmentation that employs the segmentation masks from previous
steps. This allows the model not only to segment an entirely new object, but
also to start from an external mask and correct it. When analyzing the performance of models
trained on different datasets, we observe that the choice of a training dataset
greatly impacts the quality of interactive segmentation. We find that the
models trained on a combination of COCO and LVIS with diverse and high-quality
annotations show performance superior to all existing models. The code and
trained models are available at
https://github.com/saic-vul/ritm_interactive_segmentation.
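
A minimal sketch of the feedforward iterative-mask idea (an illustration, not the authors' architecture): the network consumes the image, encoded clicks, and the mask from the previous step, so each refinement is a single forward pass.

```python
# Sketch: previous-step mask as an extra input channel to a feedforward net.
import torch
import torch.nn as nn

class ClickSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 image channels + 2 click maps (pos/neg) + 1 previous mask = 6
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, image, clicks, prev_mask):
        x = torch.cat([image, clicks, prev_mask], dim=1)
        return torch.sigmoid(self.net(x))

model = ClickSegNet()
image = torch.rand(1, 3, 64, 64)
clicks = torch.zeros(1, 2, 64, 64)   # encoded positive/negative clicks
mask = torch.zeros(1, 1, 64, 64)     # start from empty (or external) mask
for _ in range(3):                   # each user click = one forward pass
    mask = model(image, clicks, mask)
```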
|
We improve upon the two-stage sparse vector autoregression (sVAR) method in
Davis et al. (2016) by proposing an alternative two-stage modified sVAR
method: the first stage relies on the time series graphical lasso to estimate
the sparse inverse spectral density, and the second stage refines the non-zero
entries of the AR coefficient matrices using a false discovery rate (FDR) procedure. Our
method has the advantage of avoiding the inversion of the spectral density
matrix but has to deal with optimization over Hermitian matrices with
complex-valued entries. It significantly reduces computation time with little
loss in forecasting performance. We study the properties of our proposed
method and compare the performance of the two methods using simulated data and
a real macroeconomic dataset. Our simulation results show that the proposed
modification, msVAR, is the preferred choice when the goal is to learn the
structure of the AR coefficient matrices while sVAR outperforms msVAR when the
ultimate task is forecasting.
|
Artificial Intelligence (AI) and Machine Learning (ML) are pervasive in the
current computer science landscape. Yet, there still exists a lack of software
engineering experience and best practices in this field. One such best
practice, static code analysis, can be used to find code smells, i.e.,
(potential) defects in the source code, refactoring opportunities, and
violations of common coding standards. Our research set out to discover the
most prevalent code smells in ML projects. We gathered a dataset of 74
open-source ML projects, installed their dependencies and ran Pylint on them.
This resulted in a top-20 list of detected code smells per category. Manual
analysis of these smells mainly showed that code duplication is widespread and
that the PEP8 convention for identifier naming style may not always be
applicable to ML code due to its resemblance to mathematical notation. More
interestingly, however, we found several major obstructions to the
maintainability and reproducibility of ML projects, primarily related to the
dependency management of Python projects. We also found that Pylint cannot
reliably check for correct usage of imported dependencies, including prominent
ML libraries such as PyTorch.
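
A sketch of the mining setup, assuming a hypothetical project path: Pylint's JSON output makes it straightforward to tally message categories and the most frequent smells.

```python
# Run Pylint with JSON output and tally message categories (path is made up).
import json
import subprocess
from collections import Counter

result = subprocess.run(
    ["pylint", "--output-format=json", "some_ml_project/"],  # hypothetical
    capture_output=True, text=True,
)
messages = json.loads(result.stdout or "[]")

# Pylint message types: convention, refactor, warning, error, fatal.
counts = Counter(m["type"] for m in messages)
top_smells = Counter(m["symbol"] for m in messages).most_common(20)
print(counts)
print(top_smells)
```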
|
Kinetic models of biochemical systems used in the modern literature often
contain hundreds or even thousands of variables. While these models are
convenient for detailed simulations, their size is often an obstacle to
deriving mechanistic insights. One way to address this issue is to perform an
exact model reduction by finding a self-consistent lower-dimensional projection
of the corresponding dynamical system. Recently, a new algorithm, CLUE, has
been designed and implemented, which allows one to construct an exact linear
reduction of the smallest possible dimension such that the fixed variables of
interest are preserved. It turned out that allowing arbitrary linear
combinations (as opposed to zero-one combinations used in the prior approaches)
may yield a much smaller reduction. However, there was a drawback: some of the
new variables did not have clear physical meaning, thus making the reduced
model harder to interpret. We design and implement an algorithm that, given an
exact linear reduction, re-parametrizes it by performing an invertible
transformation of the new coordinates to improve the interpretability of the
new variables. We apply our algorithm to three case studies and show that
"uninterpretable" variables disappear entirely in all the case studies. The
implementation of the algorithm and the files for the case studies are
available at https://github.com/xjzhaang/LumpingPostiviser.
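
The notion of an exact linear reduction can be illustrated in a few lines: for dx/dt = Ax, a matrix L gives an exact reduction y = Lx exactly when LA = BL for some smaller matrix B. This toy check is not the CLUE algorithm itself.

```python
# Toy exact linear lumping: the row space of L is invariant under A.
import numpy as np

A = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 1.0,  1.0, -2.0]])
L = np.array([[1.0, 1.0, 1.0]])   # lump all three variables: y = x1+x2+x3

# Solve L A = B L for B via least squares (exact here).
B, *_ = np.linalg.lstsq(L.T, (L @ A).T, rcond=None)
assert np.allclose(L @ A, B.T @ L)   # invariance => the reduction is exact
print("reduced system: dy/dt =", B.T, "y")
```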
|
Heterogeneity of brain diseases is a challenge for precision
diagnosis/prognosis. We describe and validate Smile-GAN (SeMI-supervised
cLustEring-Generative Adversarial Network), a novel semi-supervised
deep-clustering method, which dissects neuroanatomical heterogeneity, enabling
identification of disease subtypes via their imaging signatures relative to
controls. When applied to MRIs (2 studies; 2,832 participants; 8,146 scans)
including cognitively normal individuals and those with cognitive impairment
and dementia, Smile-GAN identified 4 neurodegenerative patterns/axes: P1,
normal anatomy and highest cognitive performance; P2, mild/diffuse atrophy and
more prominent executive dysfunction; P3, focal medial temporal atrophy and
relatively greater memory impairment; P4, advanced neurodegeneration. Further
application to longitudinal data revealed two distinct progression pathways:
P1$\rightarrow$P2$\rightarrow$P4 and P1$\rightarrow$P3$\rightarrow$P4. Baseline
expression of these patterns predicted the pathway and rate of future
neurodegeneration. Pattern expression offered better yet complementary
performance in predicting clinical progression, compared to amyloid/tau. These
deep-learning derived biomarkers offer promise for precision diagnostics and
targeted clinical trial recruitment.
|
Spectral imaging is the acquisition of multiple images of an object at
different energy spectra. In mammography, dual-energy imaging (spectral imaging
with two energy levels) has been investigated for several applications, in
particular material decomposition, which allows for quantitative analysis of
breast composition and quantitative contrast-enhanced imaging. Material
decomposition with dual-energy imaging is based on the assumption that there
are two dominant photon interaction effects that determine linear attenuation:
the photoelectric effect and Compton scattering. This assumption limits the
number of basis materials, i.e., the number of materials that can be
differentiated, to two. However, Rayleigh scattering may account for
more than 10% of the linear attenuation in the mammography energy range. In
this work, we show that a modified version of a scanning multi-slit spectral
photon-counting mammography system is able to acquire three images at different
spectra and can be used for triple-energy imaging. We further show that
triple-energy imaging in combination with the efficient scatter rejection of
the system enables measurement of Rayleigh scattering, which adds an additional
energy dependency to the linear attenuation and enables material decomposition
with three basis materials. Three available basis materials have the potential
to improve virtually all applications of spectral imaging.
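
Schematically, energy-resolved material decomposition reduces to a linear system: with three spectral measurements, three basis-material thicknesses can be solved for. The attenuation numbers below are invented purely for illustration.

```python
# Toy triple-energy material decomposition (noise-free, invented mu values).
import numpy as np

# Rows: energy bins; columns: basis materials (e.g. adipose, glandular,
# contrast agent). The near-singularity of such systems is why a third,
# well-separated energy dependency (Rayleigh) matters in practice.
mu = np.array([[0.80, 1.00, 4.0],    # hypothetical attenuation, 1/cm
               [0.55, 0.65, 2.5],
               [0.40, 0.45, 1.5]])
t_true = np.array([2.0, 1.5, 0.1])   # thicknesses in cm

log_atten = mu @ t_true              # -ln(I/I0) in each energy bin
t_est = np.linalg.solve(mu, log_atten)
print(t_est)                         # recovers t_true in this noise-free toy
```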
|
We study existence and convergence properties of least-energy symmetric
solutions (l.e.s.s.) to the pure critical problem \begin{equation*}
(-\Delta)^su_s=|u_s|^{2^\star_s-2}u_s, \quad u_s\in D^s_0(\Omega),\quad
2^\star_s:=\frac{2N}{N-2s}, \end{equation*} where $s$ is any positive number,
$\Omega$ is either $\mathbb{R}^N$ or a smooth symmetric bounded domain, and
$D^s_0(\Omega)$ is the homogeneous Sobolev space. Depending on the kind of
symmetry considered, solutions can be sign changing. We show that, up to a
subsequence, a l.e.s.s. $u_s$ converges to a l.e.s.s. $u_{t}$ as $s$ goes to
any $t>0$. In bounded domains, this convergence can be characterized in terms
of a homogeneous fractional norm of order $t-\varepsilon$. A similar
characterization is no longer possible in unbounded domains due to scaling
invariance and an incompatibility with the functional spaces; to circumvent
these difficulties, we use a suitable rescaling and characterize the
convergence via cut-off functions. If $t$ is an integer, these results describe
in a precise way the nonlocal-to-local transition. Finally, we also include a
nonexistence result of nontrivial nonnegative solutions in a ball for any
$s>1$.
|
Despite their contributions to the financial efficiency and environmental
sustainability of industrial processes, robotic assembly and disassembly have
been understudied in the existing literature. This is in contradiction to their
importance in realizing the Fourth Industrial Revolution. More specifically,
although most of the literature has extensively discussed how to optimally
assemble or disassemble given products, the role of other factors has been
overlooked. One example is the types of robots involved in implementing the
sequence plans, which should ideally be taken into account throughout the
whole chain of design, assembly, disassembly, and reassembly. Isolating the
foregoing operations from the rest of the components of the relevant ecosystems
may lead to erroneous inferences toward both the necessity and efficiency of
the underlying procedures. In this paper we try to alleviate these shortcomings
by comprehensively investigating the state-of-the-art in robotic assembly and
disassembly. We consider and review various aspects of manufacturing and
remanufacturing frameworks while particularly focusing on their desirability
for supporting a circular economy.
|
Current value-based multi-agent reinforcement learning methods optimize
individual Q values to guide individuals' behaviours via centralized training
with decentralized execution (CTDE). However, such expected, i.e.,
risk-neutral, Q value is not sufficient even with CTDE due to the randomness of
rewards and the uncertainty in environments, causing these methods to fail to
train well-coordinated agents in complex environments. To address these
issues, we propose RMIX, a novel cooperative MARL method with the Conditional
Value at Risk (CVaR) measure over the learned distributions of individuals' Q
values. Specifically, we first learn the return distributions of individuals to
analytically calculate CVaR for decentralized execution. Then, to handle the
temporal nature of the stochastic outcomes during executions, we propose a
dynamic risk level predictor for risk level tuning. Finally, we optimize the
CVaR policies, using the CVaR values both to estimate the target in the TD
error during centralized training and as auxiliary local rewards to update the
local distributions via a quantile regression loss. Empirically, we
show that our method significantly outperforms state-of-the-art methods on
challenging StarCraft II tasks, demonstrating enhanced coordination and
improved sample efficiency.
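
The CVaR computation from a learned quantile representation is simple to sketch (an illustration only, not the RMIX training loop): CVaR at level alpha is the average of the quantiles below alpha.

```python
# CVaR from a quantile representation of the return distribution.
import numpy as np

taus = (np.arange(100) + 0.5) / 100         # quantile levels (midpoints)
rng = np.random.default_rng(2)
quantiles = np.sort(rng.normal(size=100))   # stand-in for learned quantiles

def cvar(quantiles, taus, alpha):
    """Expected return over the worst alpha-fraction of outcomes."""
    return quantiles[taus <= alpha].mean()

print(cvar(quantiles, taus, alpha=0.25))    # risk-sensitive agent utility
```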
|
For complex molecules, nuclear degrees of freedom can act as an environment
for the electronic `system' variables, allowing the theory and concepts of open
quantum systems to be applied. However, when molecular system-environment
interactions are non-perturbative and non-Markovian, numerical simulations of
the complete system-environment wave function become necessary. These many-body
dynamics can be very expensive to simulate, and extracting finite-temperature
results - which require running and averaging over many such simulations -
becomes especially challenging. Here, we present numerical simulations that
exploit a recent theoretical result that allows dissipative environmental
effects at finite temperature to be extracted efficiently from a single,
zero-temperature wave function simulation. Using numerically exact
time-dependent variational matrix product states, we verify that this approach
can be applied to vibronic tunneling systems and provide insight into the
practical problems lurking behind the elegance of the theory, such as the
rapidly growing numerical demands that can appear for high temperatures over
the length of computations.
|
In this paper, we analyze the secrecy outage performance for a more realistic
eavesdropping scenario in free-space optical (FSO) communications, where the
main and wiretap links are correlated. The FSO fading channels are modeled by
the well-known M\'alaga distribution. Exact expressions for the secrecy
performance metrics, such as the secrecy outage probability (SOP) and the
probability of non-zero secrecy capacity (PNZSC), are derived, and an asymptotic analysis of
the SOP is also conducted. The obtained results reveal useful insights on the
effect of channel correlation on FSO communications. Counterintuitively, it is
found that the secrecy outage performance demonstrates a non-monotonic behavior
with the increase of correlation. More specifically, there is an SNR penalty
for achieving a target SOP as the correlation increases within some range.
However, when the correlation is further increased beyond some threshold, the
SOP performance improves significantly.
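
The effect of correlation can be probed with a quick Monte Carlo sketch; correlated lognormal channel gains stand in for the Málaga model below, purely for illustration.

```python
# Monte Carlo estimate of the secrecy outage probability (SOP) under
# correlated fading (lognormal stand-in for the Malaga distribution).
import numpy as np

rng = np.random.default_rng(3)
rho, n = 0.6, 200_000                    # correlation and number of trials
cov = [[1.0, rho], [rho, 1.0]]
g = rng.multivariate_normal([0.0, 0.0], cov, size=n)
h_main, h_eve = np.exp(0.5 * g[:, 0]), np.exp(0.5 * g[:, 1])  # channel gains

snr_m, snr_e, Rs = 20.0, 5.0, 1.0        # avg SNRs and target secrecy rate
Cs = np.log2(1 + snr_m * h_main**2) - np.log2(1 + snr_e * h_eve**2)
print("SOP ~", np.mean(Cs < Rs))         # P(secrecy capacity below target)
```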
|
The classical Clarke subdifferential alone is inadequate for understanding
automatic differentiation in nonsmooth contexts. Instead, we can sometimes rely
on enlarged generalized gradients called "conservative fields", defined through
the natural path-wise chain rule: one application is the convergence analysis
of gradient-based deep learning algorithms. In the semi-algebraic case, we show
that all conservative fields are in fact just Clarke subdifferentials plus
normals of manifolds in underlying Whitney stratifications.
|
Global navigation satellite systems (GNSS) are one of the utterly popular
sources for providing globally referenced positioning for autonomous systems.
However, the performance of the GNSS positioning is significantly challenged in
urban canyons, due to the signal reflection and blockage from buildings. Given
the fact that the GNSS measurements are highly environmentally dependent and
time-correlated, the conventional filtering-based method for GNSS positioning
cannot simultaneously explore the time-correlation among historical
measurements. As a result, the filtering-based estimator is sensitive to
unexpected outlier measurements. In this paper, we present a factor graph-based
formulation for GNSS positioning and real-time kinematic (RTK). The formulated
factor graph framework effectively explores the time-correlation of
pseudorange, carrier-phase, and doppler measurements, and leads to the
non-minimal state estimation of the GNSS receiver. The feasibility of the
proposed method is evaluated using datasets collected in challenging urban
canyons of Hong Kong and significantly improved positioning accuracy is
obtained, compared with the filtering-based estimator.
|
We present a self-supervised learning method to learn audio and video
representations. Prior work uses the natural correspondence between audio and
video to define a standard cross-modal instance discrimination task, where a
model is trained to match representations from the two modalities. However, the
standard approach introduces two sources of training noise. First, audio-visual
correspondences often produce faulty positives since the audio and video
signals can be uninformative of each other. To limit the detrimental impact of
faulty positives, we optimize a weighted contrastive learning loss, which
down-weighs their contribution to the overall loss. Second, since
self-supervised contrastive learning relies on random sampling of negative
instances, instances that are semantically similar to the base instance can be
used as faulty negatives. To alleviate the impact of faulty negatives, we
propose to optimize an instance discrimination loss with a soft target
distribution that estimates relationships between instances. We validate our
contributions through extensive experiments on action recognition tasks and
show that they address the problems of audio-visual instance discrimination and
improve transfer learning performance.
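
A minimal sketch of the down-weighing idea (not the authors' exact loss): a per-pair weight scales the contribution of each audio-video positive to a standard cross-modal InfoNCE objective.

```python
# Weighted cross-modal contrastive loss: faulty positives count less.
import torch
import torch.nn.functional as F

def weighted_infonce(audio, video, pos_weight, tau=0.07):
    """audio, video: (N, D) normalized embeddings; pos_weight: (N,) in [0,1]."""
    logits = audio @ video.t() / tau          # (N, N) similarity matrix
    targets = torch.arange(audio.size(0))
    per_pair = F.cross_entropy(logits, targets, reduction="none")
    return (pos_weight * per_pair).mean()

a = F.normalize(torch.randn(8, 128), dim=1)
v = F.normalize(torch.randn(8, 128), dim=1)
w = torch.ones(8)                             # lower for unreliable pairs
print(weighted_infonce(a, v, w))
```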
|
We study the topological phase transitions induced by Coulomb engineering in
three triangular-lattice Hubbard models $AB_2$, $AC_3$ and $B_2C_3$, each of
which consists of two types of magnetic atoms with opposite magnetic moments.
The energy bands are calculated using the Schwinger boson method. We find that
a topological phase transition can be triggered by the second-order
(three-site) virtual processes between the two types of magnetic atoms, the
strengths of which are controlled by the on-site Coulomb interaction $U$. This
new class of topological phase transitions has rarely been studied and may be
realized in a variety of real magnetic materials.
|
Let $G$ be a finite group and $H$ a normal subgroup of prime index $p$. Let
$V$ be an irreducible ${\mathbb F}H$-module and $U$ a quotient of the induced
${\mathbb F}G$-module $V\kern-3pt\uparrow$. We describe the structure of $U$,
which is semisimple when ${\rm char}({\mathbb F})\ne p$ and uniserial if ${\rm
char}({\mathbb F})=p$. Furthermore, we describe the division rings arising as
endomorphism algebras of the simple components of $U$. We use techniques from
noncommutative ring theory to study ${\rm End}_{{\mathbb
F}G}(V\kern-3pt\uparrow)$ and relate the right ideal structure of ${\rm
End}_{{\mathbb F}G}(V\kern-3pt\uparrow)$ to the submodule structure of
$V\kern-3pt\uparrow$.
|
As machine learning becomes more widely used for critical applications, the
need to study its privacy implications becomes urgent. Given access to
the target model and auxiliary information, the model inversion attack aims to
infer sensitive features of the training dataset, which leads to great privacy
concerns. Despite its success in grid-like domains, directly applying model
inversion techniques on non-grid domains such as graph achieves poor attack
performance due to the difficulty of fully exploiting the intrinsic properties of
graphs and attributes of nodes used in Graph Neural Networks (GNN). To bridge
this gap, we present \textbf{Graph} \textbf{M}odel \textbf{I}nversion attack
(GraphMI), which aims to extract private graph data of the training graph by
inverting GNN, one of the state-of-the-art graph analysis tools. Specifically,
we first propose a projected gradient module to tackle the discreteness of
graph edges while preserving the sparsity and smoothness of graph features.
Then we design a graph auto-encoder module to efficiently exploit graph
topology, node attributes, and target model parameters for edge inference. With
the proposed methods, we study the connection between model inversion risk and
edge influence and show that edges with greater influence are more likely to be
recovered. Extensive experiments over several public datasets demonstrate the
effectiveness of our method. We also show that differential privacy in its
canonical form can hardly defend our attack while preserving decent utility.
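 
The projected-gradient idea can be sketched as follows (an illustration, not GraphMI itself): relax the adjacency matrix to [0, 1], descend a stand-in attack loss, and project after each step.

```python
# Projected gradient on a relaxed adjacency matrix (symmetry not enforced).
import torch

n = 6
A = torch.full((n, n), 0.5, requires_grad=True)   # relaxed adjacency

def attack_loss(A):
    # Hypothetical stand-in for the real objective (target-model fit plus
    # sparsity and feature-smoothness terms).
    return ((A @ A).sum() - n) ** 2 + 0.1 * A.abs().sum()

opt = torch.optim.Adam([A], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    attack_loss(A).backward()
    opt.step()
    with torch.no_grad():
        A.clamp_(0.0, 1.0)                        # projection onto [0, 1]

edges = (A > 0.5).float()                         # final discretization
```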
|
The question whether the Higgs boson is connected to additional CP violation
is one of the driving forces behind precision studies at the Large Hadron
Collider. In this work, we investigate the CP structure of the top quark Yukawa
interaction, one of the most prominent places to search for New Physics,
through Higgs boson loops in top quark pair production. We calculate
the electroweak corrections including arbitrary CP mixtures at
next-to-leading-order in the Standard Model Effective Field Theory. This
approach of probing Higgs boson degrees of freedom relies on the large
$t\bar{t}$ cross section and the excellent perturbative control. In addition,
we consider all direct probes with on-shell Higgs boson production in
association with a single top quark or top quark pair. This allows us to
contrast loop sensitivity versus on-shell sensitivity in these fundamentally
different process dynamics. We find that loop sensitivity in $t\bar{t}$
production and on-shell sensitivity in $t\bar{t}H$ and $tH$ provide
complementary handles over a wide range of parameter space.
|
Accelerated degradation testing (ADT) is one of the major approaches in
reliability engineering which allows accurate estimation of reliability
characteristics of highly reliable systems within a relatively short time. The
testing data are extrapolated through a physically reasonable statistical model
to obtain estimates of lifetime quantiles at normal use conditions. The Gamma
process is a natural model for degradation, which exhibits a monotone and
strictly increasing degradation path. In this work, optimal experimental
designs are derived for ADT with two response components. We consider the
situations of independent as well as dependent marginal responses where the
observational times are assumed to be fixed and known. The marginal degradation
paths are assumed to follow a Gamma process where a copula function is utilized
to express the dependence between both components. For the case of independent
response components the optimal design minimizes the asymptotic variance of an
estimated quantile of the failure time distribution at the normal use
conditions. For the case of dependent response components the $D$-criterion is
adopted to derive $D$-optimal designs. Further, $D$- and $c$-optimal designs
are developed when the copula-based models are reduced to bivariate binary
outcomes.
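
A Gamma degradation path is easy to simulate (parameters assumed): increments over disjoint intervals are independent Gamma variables, so the path is monotone and strictly increasing.

```python
# Simulate a Gamma degradation path with a linear shape function (assumed).
import numpy as np

rng = np.random.default_rng(4)
times = np.linspace(0.0, 10.0, 51)
shape_per_time, scale = 0.8, 0.5     # assumed shape rate and scale

increments = rng.gamma(shape_per_time * np.diff(times), scale)
path = np.concatenate([[0.0], np.cumsum(increments)])
print("monotone:", bool(np.all(np.diff(path) > 0)), "final level:", path[-1])
```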
|
Negative sampling is a limiting factor w.r.t. the generalization of
metric-learned neural networks. We show that uniform negative sampling provides
little information about the class boundaries and thus propose three novel
techniques for efficient negative sampling: drawing negative samples from (1)
the top-$k$ most semantically similar classes, (2) the top-$k$ most
semantically similar samples and (3) interpolating between contrastive latent
representations to create pseudo negatives. Our experiments on CIFAR-10,
CIFAR-100 and Tiny-ImageNet-200 show that our proposed \textit{Semantically
Conditioned Negative Sampling} and Latent Mixup lead to consistent performance
improvements. In the standard supervised learning setting, on average we
increase test accuracy by 1.52 percentage points on CIFAR-10 across various
network architectures. In the knowledge distillation setting, (1) the
performance of student networks increases by 4.56 percentage points on
Tiny-ImageNet-200 and 3.29 percentage points on CIFAR-100 over student
networks trained with no teacher, and (2) by 1.23 and 1.72 percentage points
respectively over a \textit{hard-to-beat}
baseline (Hinton et al., 2015).
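
A sketch of technique (1), drawing negatives from the top-$k$ most semantically similar classes; class similarity is measured here by dot products of class-mean embeddings, an assumption made for illustration.

```python
# Pick the k classes most similar to the anchor's class as negative sources.
import torch
import torch.nn.functional as F

def topk_negative_classes(class_means, anchor_class, k=3):
    sims = class_means @ class_means[anchor_class]   # similarity scores
    sims[anchor_class] = -float("inf")               # exclude own class
    return sims.topk(k).indices                      # hardest classes

class_means = F.normalize(torch.randn(10, 64), dim=1)
print(topk_negative_classes(class_means, anchor_class=0, k=3))
```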
|
We use the recipe of arXiv:1003.2974 to find half-BPS near-horizon geometries
in the t$^3$ model of $N=2$, $D=4$ gauged supergravity, and explicitly
construct some new examples. Among these are black holes with noncompact
horizons, but also with spherical horizons that have conical singularities
(spikes) at one of the two poles. A particular family of them is extended to
the full black hole geometry. Applying a double-Wick rotation to the
near-horizon region, we obtain solutions with NUT charge that asymptote to
curved domain walls with AdS$_3$ world volume. These new solutions may provide
interesting test grounds for addressing fundamental questions related to quantum
gravity and holography.
|
The results of speckle interferometric observations at the 4.1 m Southern
Astrophysical Research Telescope (SOAR) in 2020, as well as earlier unpublished
data, are given, totaling 1735 measurements of 1288 resolved pairs and
non-resolutions of 1177 targets. We resolved for the first time 59 new pairs or
subsystems in known binaries, mostly among nearby dwarf stars. This work
continues our long-term speckle program. Its main goal is to monitor orbital
motion of close binaries, including members of high-order hierarchies and
Hipparcos pairs in the solar neighborhood. We also report observations of 892
members of young moving groups and associations, where we resolved 103 new
pairs.
|
A unique feature of the complex band structures of moir\'e materials is the
presence of minivalleys, their hybridization, and scattering between them. Here
we investigate magneto-transport oscillations caused by scattering between
minivalleys - a phenomenon analogous to magneto-intersubband oscillations - in
a twisted double bilayer graphene sample with a twist angle of 1.94{\deg}. We
study and discuss the potential scattering mechanisms and find an
electron-phonon mechanism and valley conserving scattering to be likely.
Finally, we discuss the relevance of our findings for different materials and
twist angles.
|
Computer-Aided Design (CAD) applications are used in manufacturing to model
everything from coffee mugs to sports cars. These programs are complex and
require years of training and experience to master. A component of all CAD
models particularly difficult to make are the highly structured 2D sketches
that lie at the heart of every 3D construction. In this work, we propose a
machine learning model capable of automatically generating such sketches.
Through this, we pave the way for developing intelligent tools that would help
engineers create better designs with less effort. Our method is a combination
of a general-purpose language modeling technique alongside an off-the-shelf
data serialization protocol. We show that our approach has enough flexibility
to accommodate the complexity of the domain and performs well for both
unconditional synthesis and image-to-sketch translation.
|
We consider the linear regression model along with the process of its
$\alpha$-regression quantile, $0<\alpha<1$. We are interested mainly in the
slope components of $\alpha$-regression quantile and in their dependence on the
choice of $\alpha.$ While they are invariant to the location, and only the
intercept part of the $\alpha$-regression quantile estimates the quantile
$F^{-1}(\alpha)$ of the model errors, their dispersion depends on $\alpha$ and
is infinitely increasing as $\alpha\rightarrow 0,1$, in the same rate as for
the ordinary quantiles. We study the process of $R$-estimators of the slope
parameters over $\alpha\in[0,1]$, generated by the H\'{a}jek rank scores. We
show that this process, standardized by $f(F ^{-1}(\alpha))$ under
exponentially tailed $F$, converges to the vector of independent Brownian
bridges. The same is true for the process of the slope components of
$\alpha$-regression quantile.
|
We derive a generalized Knizhnik-Zamolodchikov equation for the correlation
function of the intertwiners of the vector and the MacMahon representations of
Ding-Iohara-Miki algebra. These intertwiners are cousins of the refined
topological vertex which is regarded as the intertwining operator of the Fock
representation. The shift of the spectral parameter of the intertwiners is
generated by the operator which is constructed from the universal $R$ matrix.
The solutions to the generalized KZ equation are factorized into the ratio of
two point functions which are identified with generalizations of the Nekrasov
factor for supersymmetric quiver gauge theories.
|
Rats and mice use their whiskers to probe the environment. By rhythmically
swiping their whiskers back and forth they can detect the existence of an
object, locate it, and identify its texture. Localization can be accomplished
by inferring the position of the whisker. Rhythmic neurons that track the phase
of the whisking cycle encode information about the azimuthal location of the
whisker. These neurons are characterized by preferred phases of firing that are
narrowly distributed. Consequently, pooling the rhythmic signal from several
upstream neurons is expected to result in a much narrower distribution of
preferred phases in the downstream population, which however has not been
observed empirically. Here, we show how spike-timing-dependent plasticity
(STDP) can provide a solution to this conundrum. We investigated the effect of
STDP on the utility of a neural population to transmit rhythmic information
downstream using the framework of a modeling study. We found that under a wide
range of parameters, STDP facilitated the transfer of rhythmic information
despite the fact that all the synaptic weights remained dynamic. As a result,
the preferred phase of the downstream neuron was not fixed, but rather drifted
in time at a drift velocity that depended on the preferred phase, thus inducing
a distribution of preferred phases. We further analyzed how the STDP rule
governs the distribution of preferred phases in the downstream population. This
link between the STDP rule and the distribution of preferred phases constitutes
a natural test for our theory.
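
For concreteness, here is a minimal pair-based STDP rule with assumed parameters: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise.

```python
# Pair-based STDP window (illustrative parameter values).
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """dt = t_post - t_pre (ms); returns the synaptic weight change."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),     # pre before post: LTP
                    -a_minus * np.exp(dt / tau))    # post before pre: LTD

dts = np.array([-40.0, -10.0, 10.0, 40.0])
print(stdp_dw(dts))
```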
|
Classically, Jensen's inequality asserts that if $K$ is a compact convex set
and $f:K\to \mathbb{R}$ is a convex function, then for any probability measure
$\mu$ on $K$, $f(\text{bar}(\mu))\le \int f\;d\mu$, where
$\text{bar}(\mu)$ is the barycenter of $\mu$. Recently, Davidson and Kennedy
proved a noncommutative ("nc") version of Jensen's inequality that applies to
nc convex functions, which take matrix values, with probability measures
replaced by ucp maps. In the classical case, if $f$ is only a separately convex
function, then $f$ still satisfies the Jensen inequality for any probability
measure which is a product measure. We prove a noncommutative Jensen inequality
for functions which are separately nc convex in each variable. The inequality
holds for a large class of ucp maps which satisfy a noncommutative analogue of
Fubini's theorem. This class of ucp maps includes any free product of ucp maps
built from Boca's theorem, or any ucp map which is conditionally free in the
free-probabilistic sense of M{\l}otkowski. As an application to free
probability, we obtain some operator inequalities for conditionally free ucp
maps applied to free semicircular families.
|
Recent advances in neural multi-speaker text-to-speech (TTS) models have
enabled the generation of reasonably good speech quality with a single model
and made it possible to synthesize the speech of a speaker with limited
training data. Fine-tuning the multi-speaker model to the target speaker's
data can achieve better quality; however, a gap remains compared to real
speech samples, and the resulting model depends on the speaker. In this work,
we propose GANSpeech, a high-fidelity multi-speaker TTS model that applies
adversarial training to a non-autoregressive multi-speaker TTS model. In
addition, we propose simple but efficient automatic scaling methods for the
feature matching loss used in adversarial training. In the subjective
listening tests, GANSpeech significantly outperformed the baseline
multi-speaker FastSpeech and FastSpeech2 models, and showed a better MOS score
than the speaker-specific fine-tuned FastSpeech2.
|
We consider auction environments in which at the time of the auction bidders
observe signals about their ex-post value. We introduce a model of novice
bidders who do not know the joint distribution of signals and instead
build a statistical model relating others' bids to their own ex post value from
the data sets accessible from past similar auctions. Crucially, we assume that
only ex post values and bids are accessible while signals observed by bidders
in past auctions remain private. We consider steady-states in such
environments, and importantly we allow for correlation in the signal
distribution. We first observe that data-driven bidders may behave suboptimally
in classical auctions such as the second-price or first-price auctions whenever
there are correlations. Allowing for a mix of rational (or experienced) and
data-driven (novice) bidders results in inefficiencies in such auctions, and we
show the inefficiency extends to all auction-like mechanisms in which bidders
are restricted to submit one-dimensional (real-valued) bids.
|
Recent trends in Web development demonstrate an increased interest in
serverless applications, i.e. applications that utilize computational resources
provided by cloud services on demand instead of requiring traditional server
management. This approach enables better resource management while being
scalable, reliable, and cost-effective. However, it comes with a number of
organizational and technical difficulties which stem from the interaction
between the application and the cloud infrastructure, for example, having to
set up a recurring task of reuploading updated files. In this paper, we present
Kotless - a Kotlin Serverless Framework. Kotless is a cloud-agnostic toolkit
that solves these problems by interweaving the deployed application into the
cloud infrastructure and automatically generating the necessary deployment
code. This frees developers from spending their time integrating and managing
their applications, letting them focus on development instead. Kotless has proven its
capabilities and has been used to develop several serverless applications
already in production. Its source code is available at
https://github.com/JetBrains/kotless, and a tool demo can be found at
https://www.youtube.com/watch?v=IMSakPNl3TY.
|
The intention of this research is to study and design an automated
agriculture commodity price prediction system with novel machine learning
techniques. Due to the increasingly large amounts of historical data on
agricultural commodity prices and the need for accurate prediction of price
fluctuations, solutions have largely shifted from statistical methods to
machine learning. However, the selection of a proper subset of historical data
for forecasting has received limited consideration. Moreover, when
implementing machine learning techniques, finding a suitable model with
optimal parameters that attains a global solution, handles nonlinearity, and
avoids the curse of dimensionality remains a major challenge, so a study of
machine learning strategies is needed. In this research, we propose a
web-based automated system to predict agricultural commodity prices. In two
series of experiments, five popular machine learning algorithms (ARIMA, SVR,
Prophet, XGBoost, and LSTM) were compared on large historical datasets from
Malaysia, and the best-performing algorithm, an LSTM model with an average
mean-square error of 0.304, was selected as the prediction engine of the
proposed system.
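
A minimal sketch of an LSTM forecaster over sliding windows follows; the synthetic series and hyperparameters are assumed, while the paper's model was tuned on Malaysian price data (the MSE loss below matches the reported metric).

```python
# Sliding-window LSTM forecaster on a synthetic price series.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(7)
prices = np.cumsum(rng.normal(0.0, 1.0, size=500)) + 100.0  # synthetic walk

window = 30
X = np.stack([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X = X[..., None]                       # (samples, window, 1 feature)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[-1:], verbose=0))   # next-step price forecast
```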
|
Robust, secure, and private machine learning is an important goal for
realizing the full potential of the Internet of Things (IoT). Federated
learning has proven to help protect against privacy violations and information
leakage. However, it introduces new risk vectors which make machine learning
models more difficult to defend against adversarial samples. In this study, we
examine the role of differential privacy and self-normalization in mitigating
the risk of adversarial samples specifically in a federated learning
environment. We introduce DiPSeN, a Differentially Private Self-normalizing
Neural Network which combines elements of differential privacy noise with
self-normalizing techniques. Our empirical results on three publicly available
datasets show that DiPSeN successfully improves the adversarial robustness of a
deep learning classifier in a federated learning environment based on several
evaluation metrics.
|
The performance of modern algorithms on certain computer vision tasks such as
object recognition is now close to that of humans. This success was achieved at
the price of complicated architectures depending on millions of parameters and
it has become quite challenging to understand how particular predictions are
made. Interpretability methods propose to give us this understanding. In this
paper, we study LIME, perhaps one of the most popular. On the theoretical side,
we show that when the number of generated examples is large, LIME explanations
are concentrated around a limit explanation for which we give an explicit
expression. We further this study for elementary shape detectors and linear
models. As a consequence of this analysis, we uncover a connection between LIME
and integrated gradients, another explanation method. More precisely, the LIME
explanations are similar to the sum of integrated gradients over the
superpixels used in the preprocessing step of LIME.
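
In practice, the regime studied here corresponds to the num_samples knob of the lime package; a usage sketch on a toy tabular model (data and model are invented):

```python
# LIME explanations stabilize as the number of perturbed samples grows.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X = rng.random((500, 4))
y = (X[:, 0] > 0.5).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, mode="classification")
for n in (100, 1000, 10_000):   # explanations concentrate as n grows
    exp = explainer.explain_instance(X[0], model.predict_proba, num_samples=n)
    print(n, exp.as_list())
```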
|
Solar-thermal evaporation, a traditional steam generation method for solar
desalination, has received considerable attention in recent years due to the
significant increase in efficiency achieved by adopting interfacial evaporation.
most of the previous studies focus on improving the evaporation efficiency by
materials innovation and system design, the underlying mechanisms of its energy
efficiency are less explored, leading to many confusions and misunderstandings.
Herein, we clarify these mechanisms with a detailed thermal analysis model.
Using this model, we elucidate the advantages of interfacial evaporation over
the traditional evaporation method. Furthermore, we clarify the role of tuning
the solar flux and surface area on the evaporation efficiency. Moreover, we
quantitatively prove that the influence of environmental conditions on
evaporation efficiency cannot be eliminated by subtracting the dark
evaporation rate from the evaporation rate under solar illumination. We also find that
interfacial evaporation in a solar still does not achieve the high overall
solar desalination efficiency expected, but further improvement is possible on
the system design side. Our analysis provides insights into the thermal
processes involved in interfacial solar evaporation and offers perspectives on the
further development of interfacial solar desalination technology.
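
The kind of bookkeeping involved can be shown with a toy steady-state energy balance (all numbers are assumed): efficiency is the latent-heat flux carried by the vapor divided by the incident solar flux.

```python
# Toy energy balance for interfacial evaporation (illustrative values only).
q_solar = 1000.0        # W/m^2, approximately one-sun illumination
m_dot = 1.3 / 3600.0    # kg/(m^2 s), assumed evaporation rate of 1.3 kg/m^2/h
h_lv = 2.4e6            # J/kg, latent heat of water near ambient temperature

eta = m_dot * h_lv / q_solar    # fraction of solar input carried by vapor
q_loss = q_solar * (1.0 - eta)  # lost to conduction/convection/radiation
print(f"evaporation efficiency ~ {eta:.0%}, losses ~ {q_loss:.0f} W/m^2")
```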
|