We study best approximations to compact operators between Banach spaces and
Hilbert spaces, from the point of view of Birkhoff-James orthogonality and
semi-inner-products. As an application of the present study, some distance
formulae are presented in the space of compact operators. The special case of
bounded linear functionals as compact operators is treated separately and some
applications to best approximations in reflexive, strictly convex and smooth
Banach spaces are discussed. An explicit example is presented in $ \ell_p^{n} $
spaces, where $ 1 < p < \infty, $ to illustrate the applicability of the
methods developed in this article. A comparative analysis of the results
presented in this article with the well-known classical duality principle in
approximation theory is conducted to demonstrate the computational advantage of
the former.
|
We study Krasnoselskii-Mann style iterative algorithms for approximating
fixpoints of asymptotically weakly contractive mappings, with a focus on
providing generalised convergence proofs along with explicit rates of
convergence. More specifically, we define a new notion of being asymptotically
$\psi$-weakly contractive with modulus, and present a series of abstract
convergence theorems which both generalise and unify known results from the
literature. Rates of convergence are formulated in terms of our modulus of
contractivity, in conjunction with other moduli and functions which form
quantitative analogues of additional assumptions that are required in each
case. Our approach makes use of ideas from proof theory, in particular our
emphasis on abstraction and on formulating our main results in a quantitative
manner. As such, the paper can be seen as a contribution to the proof mining
program.
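To make the iteration scheme concrete, here is a minimal Python sketch of the classical Krasnoselskii-Mann iteration underlying the algorithms studied here; the constant step `lam` and the toy mapping `T` are illustrative assumptions, not the paper's general setting (which allows varying steps and asymptotically weakly contractive mappings).

```python
import numpy as np

def krasnoselskii_mann(T, x0, steps=1000, lam=0.5):
    """Iterate x_{n+1} = (1 - lam) * x_n + lam * T(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = (1.0 - lam) * x + lam * T(x)
    return x

# Toy usage: T is a contraction on R^2, so iterates approach its unique fixed point.
T = lambda x: 0.5 * x + np.array([1.0, -1.0])
print(krasnoselskii_mann(T, np.zeros(2)))  # tends to the fixed point [2, -2]
```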
|
From nutrient uptake, to chemoreception, to synaptic transmission, many
systems in cell biology depend on molecules diffusing and binding to membrane
receptors. Mathematical analysis of such systems often neglects the fact that
receptors process molecules at finite kinetic rates. A key example is the
celebrated formula of Berg and Purcell for the rate that cell surface receptors
capture extracellular molecules. Indeed, this influential result is only valid
if receptors transport molecules through the cell wall at a rate much faster
than molecules arrive at receptors. From a mathematical perspective, ignoring
receptor kinetics is convenient because it makes the diffusing molecules
independent. In contrast, including receptor kinetics introduces correlations
between the diffusing molecules since, for example, bound receptors may be
temporarily blocked from binding additional molecules. In this work, we present
a modeling framework for coupling bulk diffusion to surface receptors with
finite kinetic rates. The framework uses boundary homogenization to couple the
diffusion equation to nonlinear ordinary differential equations on the
boundary. We use this framework to derive an explicit formula for the cellular
uptake rate and show that the analysis of Berg and Purcell significantly
overestimates uptake in some typical biophysical scenarios. We confirm our
analysis by numerical simulations of a many particle stochastic system.
|
Minimizing the bending energy within knot classes leads to the concept of
elastic knots, which was initiated in [von der Mosel, Asymptot. Anal.
1998]. Motivated by numerical experiments in arxiv:1804.02206
(doi:10.1090/mcom/3633) we prescribe dihedral symmetry and establish existence
of dihedrally symmetric elastic knots for knot classes admitting this type of
symmetry. Among other results we prove that the dihedral elastic trefoil is the
union of two circles that form a (planar) figure-eight. We also discuss some
generalizations and limitations regarding other symmetries and knot classes.
|
We employed the log-periodic power law singularity (LPPLS) methodology to
systematically investigate the 2020 stock market crash in the U.S. equities
sectors with different levels of total market capitalizations through four
major U.S. stock market indexes, including the Wilshire 5000 Total Market
index, the S&P 500 index, the S&P MidCap 400 index, and the Russell 2000 index,
representing the stocks overall, the large capitalization stocks, the middle
capitalization stocks and the small capitalization stocks, respectively. During
the 2020 U.S. stock market crash, all four indexes lost more than a third of
their values within five weeks, while the middle capitalization and small
capitalization stocks suffered much greater losses than the large
capitalization stocks and the market overall. Our results indicate that the
price trajectories of these four stock market indexes prior to the 2020 stock
market crash clearly exhibited the characteristic LPPLS bubble pattern and were
indeed in a positive bubble regime. Contrary to the popular belief that
COVID-19 caused the 2020 stock market crash, the crash was endogenous,
stemming from the increasingly systemic instability of the
stock market itself. We also performed the complementary post-mortem analysis
of the 2020 U.S. stock market crash. Our analyses indicate that the 2020 U.S.
stock market crash originated from a bubble which began to form as early as
September 2018; and the bubbles in stocks with different levels of total market
capitalizations have significantly different starting time profiles. This study
not only sheds new light on the making of the 2020 U.S. stock market crash but
also creates a novel pipeline for future real-time crash detection and
mechanism dissection of any financial market and/or economic index.
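As a point of reference for the methodology, the LPPLS model posits a power-law trend decorated with log-periodic oscillations in the expected log-price before the critical time. A minimal sketch of the model function follows; the parameter values in the usage lines are made up for illustration and are not fitted values from this study.

```python
import numpy as np

def lppls_log_price(t, tc, m, omega, A, B, C, phi):
    """Log-periodic power law singularity model for the expected log-price:
        ln p(t) = A + B*(tc - t)^m + C*(tc - t)^m * cos(omega*ln(tc - t) - phi),
    valid for t < tc. Here tc is the critical (crash) time; 0 < m < 1 and
    B < 0 signal a positive bubble."""
    dt = tc - np.asarray(t, dtype=float)
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)

# Illustrative trajectory with made-up parameters (not fitted values)
t = np.linspace(0.0, 0.99, 200)
y = lppls_log_price(t, tc=1.0, m=0.5, omega=8.0, A=7.0, B=-1.0, C=0.1, phi=0.0)
```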
|
It is important to estimate the errors of probabilistic inference algorithms.
Existing diagnostics for Markov chain Monte Carlo methods assume inference is
asymptotically exact, and are not appropriate for approximate methods like
variational inference or Laplace's method. This paper introduces a diagnostic
based on repeatedly simulating datasets from the prior and performing inference
on each. The central observation is that it is possible to estimate a symmetric
KL-divergence defined over these simulations.
|
Although distance learning presents a number of interesting educational
advantages as compared to in-person instruction, it is not without its
downsides. We first assess the educational challenges presented by distance
learning as a whole and identify four main challenges that distance learning
currently presents as compared to in-person instruction: the lack of social
interaction, reduced student engagement and focus, reduced comprehension and
information retention, and the lack of flexible and customizable instructor
resources. After assessing each of these challenges in-depth, we examine how
AR/VR technologies might serve to address each challenge along with their
current shortcomings, and finally outline the further research that is required
to fully understand the potential of AR/VR technologies as they apply to
distance learning.
|
An important goal of medical imaging is to be able to precisely detect
patterns of disease specific to individual scans; however, this is challenged
in brain imaging by the degree of heterogeneity of shape and appearance.
Traditional methods, based on image registration to a global template,
historically fail to detect variable features of disease, as they utilise
population-based analyses, suited primarily to studying group-average effects.
In this paper we therefore take advantage of recent developments in generative
deep learning to develop a method for simultaneous classification, or
regression, and feature attribution (FA). Specifically, we explore the use of a
VAE-GAN translation network called ICAM, to explicitly disentangle class
relevant features from background confounds for improved interpretability and
regression of neurological phenotypes. We validate our method on the tasks of
Mini-Mental State Examination (MMSE) cognitive test score prediction for the
Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort, as well as brain age
prediction, for both neurodevelopment and neurodegeneration, using the
developing Human Connectome Project (dHCP) and UK Biobank datasets. We show
that the generated FA maps can be used to explain outlier predictions and
demonstrate that the inclusion of a regression module improves the
disentanglement of the latent space. Our code is freely available on GitHub:
https://github.com/CherBass/ICAM.
|
It is well known that there are no spherical (or topologically spherical)
gravitational waves in vacuum space in general relativity. We show that a
deviation from general relativity leads to exact vacuum spherical gravitational
waves, no matter how tiny this deviation is. We also discuss related topics,
including the Vaidya-like metric in $f(R)$ gravity. We demonstrate that the
existence of spherical gravitational waves is a nonperturbative property of
gravities. We investigate the energy carried by this nonperturbative wave. We
first find the wave solution from investigations of the Vaidya-like metric in
$f(R)$ gravity, which has only one longitudinal polarization. We further extend
it to a metric with two transverse polarizations by directly solving the field
equation.
|
We introduce a novel primal-dual flow for affine constrained convex
optimization problems. As a modification of the standard saddle-point system,
our primal-dual flow is proved to possess an exponential decay property, in
terms of a tailored Lyapunov function. A class of primal-dual methods for the
original optimization problem is then obtained from numerical discretizations
of the continuous flow, and nonergodic convergence rates are established via a
unified discrete Lyapunov function. Among those algorithms, we can recover the
(linearized) augmented Lagrangian method and the quadratic penalty method with
continuation technique. New methods are also proposed whose inner problem is
either a linear symmetric positive definite system or a nonlinear equation
that can be solved efficiently via the semi-smooth Newton method. In
particular, numerical tests on linearly constrained $l_1$-$l_2$ minimization
show that our method outperforms the accelerated linearized Bregman method.
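For orientation, the sketch below discretizes the unmodified saddle-point flow for an affinely constrained problem with a plain explicit Euler step. It is a baseline illustration under simplifying assumptions (quadratic objective, hand-picked step size), not the paper's modified flow with its exponential-decay guarantee.

```python
import numpy as np

def primal_dual_euler(grad_f, A, b, x0, lam0, step=0.01, iters=5000):
    """Explicit Euler discretization of the classical saddle-point flow
        x' = -(grad f(x) + A^T lam),   lam' = A x - b,
    for  min f(x)  subject to  A x = b."""
    x, lam = x0.copy(), lam0.copy()
    for _ in range(iters):
        gx = grad_f(x) + A.T @ lam
        glam = A @ x - b
        x -= step * gx
        lam += step * glam
    return x, lam

# Toy problem: min 0.5*||x||^2  s.t.  x1 + x2 = 1  (solution x = [0.5, 0.5])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x, lam = primal_dual_euler(lambda x: x, A, b, np.zeros(2), np.zeros(1))
print(x)
```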
|
We present simultaneous spectroscopic observations in the 3 mm and 2 mm bands
of HCN 1-0, HCO$^{+}$ 1-0, HNC 1-0, and CS 3-2 with the IRAM 30 meter
telescope, toward a sample of 70 nearby galaxies with infrared luminosities
ranging from several 10$^{5}L_{\odot}$ to more than 10$^{12}L_{\odot}$. After
combining HCN 1-0, HCO$^{+}$ 1-0, and HNC 1-0 data from the literature with our
detections, relations between the luminosities of the dense gas tracers (HCN
1-0, HCO$^{+}$ 1-0, and HNC 1-0) and infrared luminosities are derived, with
tight linear correlations for all tracers. The luminosities of CS 3-2, from our
observations alone, also show a tight linear correlation with infrared
luminosities. No systematic difference is found among these tracers in tracing
dense molecular gas. Star formation efficiencies for dense gas with different
tracers likewise show no trend across different infrared luminosities. Our
study also shows that the HCN/HCO$^{+}$ line ratio might not be a good
indicator for diagnosing obscured AGN in galaxies.
|
We investigate a model of one-to-one matching with transferable utility and
general unobserved heterogeneity. Under a separability assumption that
generalizes Choo and Siow (2006), we first show that the equilibrium matching
maximizes a social gain function that trades off exploiting complementarities
in observable characteristics and matching on unobserved characteristics. We
use this result to derive simple closed-form formulae that identify the joint
matching surplus and the equilibrium utilities of all participants, given any
known distribution of unobserved heterogeneity. We provide efficient algorithms
to compute the stable matching and to estimate parametric versions of the
model. Finally, we revisit Choo and Siow's empirical application to illustrate
the potential of our more general approach.
|
Graph filtering is a fundamental tool in graph signal processing. Polynomial
graph filters (PGFs), defined as polynomials of a fundamental graph operator,
can be implemented in the vertex domain, and usually have a lower complexity
than frequency domain filter implementations. In this paper, we focus on the
design of filters for graphs with graph Fourier transform (GFT) corresponding
to a discrete trigonometric transform (DTT), i.e., one of 8 types of discrete
cosine transforms (DCT) and 8 discrete sine transforms (DST). In this case, we
show that multiple sparse graph operators can be identified, which allows us to
propose a generalization of PGF design: multivariate polynomial graph filter
(MPGF). First, for the widely used DCT-II (type-2 DCT), we characterize a set
of sparse graph operators that share the DCT-II matrix as their common
eigenvector matrix. This set contains the well-known connected line graph.
These sparse operators can be viewed as graph filters operating in the DCT
domain, which allows us to approximate any DCT graph filter by an MPGF, leading
to a design with more degrees of freedom than the conventional PGF approach.
Then, we extend those results to all of the 16 DTTs as well as their 2D
versions, and show how their associated sets of multiple graph operators can be
determined. We demonstrate experimentally that ideal low-pass and exponential
DCT/DST filters can be approximated with higher accuracy than conventional
PGFs at similar runtime complexity. Finally, we apply our method to
transform-type selection in a video
codec, AV1, where we demonstrate significant encoding time savings, with a
negligible compression loss.
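To illustrate the vertex-domain implementation that makes PGFs (and, with several operators, MPGFs) attractive, here is a minimal Python sketch that applies a polynomial of a single sparse operator using only sparse matrix-vector products. The path-graph Laplacian in the example is the well-known line-graph operator whose eigenvectors form a DCT basis; the filter coefficients are arbitrary illustrative values.

```python
import numpy as np
import scipy.sparse as sp

def apply_pgf(Z, coeffs, x):
    """Apply the polynomial graph filter H = sum_k coeffs[k] * Z^k to signal x
    using only repeated sparse matrix-vector products (vertex-domain)."""
    y = coeffs[0] * x
    zk_x = x
    for c in coeffs[1:]:
        zk_x = Z @ zk_x          # Z^k x computed from Z^{k-1} x
        y = y + c * zk_x
    return y

# Toy example: path-graph Laplacian (the line-graph operator with a DCT GFT)
n = 8
L = sp.diags([-np.ones(n-1), 2*np.ones(n), -np.ones(n-1)], [-1, 0, 1],
             format="csr").tolil()
L[0, 0] = 1; L[n-1, n-1] = 1   # endpoint degrees of the path graph
L = L.tocsr()
x = np.random.randn(n)
y = apply_pgf(L, [1.0, -0.5, 0.25], x)   # H = I - 0.5*L + 0.25*L^2
```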
|
We derive Boltzmann equations for massive spin-1/2 fermions with local and
nonlocal collision terms from the Kadanoff--Baym equation in the
Schwinger--Keldysh formalism, properly accounting for the spin degrees of
freedom. The Boltzmann equations are expressed in terms of matrix-valued spin
distribution functions, which are the building blocks for the quasi-classical
parts of the Wigner functions. Nonlocal collision terms appear at
next-to-leading order in $\hbar$ and are sources for the polarization part of
the matrix-valued spin distribution functions. The Boltzmann equations for the
matrix-valued spin distribution functions pave the way for simulating
spin-transport processes involving spin-vorticity couplings from first
principles.
|
There is growing concern about image privacy due to the popularity of social
media and photo devices, along with increasing use of face recognition systems.
However, established image de-identification techniques are either too
susceptible to re-identification, produce photos that are insufficiently
realistic, or both. To tackle this, we present a novel approach to image
obfuscation by
manipulating latent spaces of an unconditionally trained generative model that
is able to synthesize photo-realistic facial images of high resolution. This
manipulation is done in a way that satisfies the formal privacy standard of
local differential privacy. To our knowledge, this is the first approach to
image privacy that satisfies $\varepsilon$-differential privacy \emph{for the
person.}
|
It turns out that the standard application of the four-vector SR formalism
does not include the concept of relative velocity. Only the absolute velocity
is described by a four-vector, and even the Lorentz transformation parameters
are described by the three-dimensional velocity.
This gap in the development of the SR formalism reflects the lack of some
significant velocity subtraction operations. The differential application of
these operations leads to a relativistic acceleration.
|
We study the problem of off-policy evaluation in the multi-armed bandit model
with bounded rewards, and develop minimax rate-optimal procedures under three
settings. First, when the behavior policy is known, we show that the Switch
estimator, a method that alternates between the plug-in and importance sampling
estimators, is minimax rate-optimal for all sample sizes. Second, when the
behavior policy is unknown, we analyze performance in terms of the competitive
ratio, thereby revealing a fundamental gap between the settings of known and
unknown behavior policies. When the behavior policy is unknown, any estimator
must have mean-squared error larger -- relative to the oracle estimator
equipped with the knowledge of the behavior policy -- by a multiplicative
factor proportional to the support size of the target policy. Moreover, we
demonstrate that the plug-in approach achieves this worst-case competitive
ratio up to a logarithmic factor. Third, we initiate the study of the partial
knowledge setting in which it is assumed that the minimum probability taken by
the behavior policy is known. We show that the plug-in estimator is optimal for
relatively large values of the minimum probability, but is sub-optimal when the
minimum probability is low. In order to remedy this gap, we propose a new
estimator based on approximation by Chebyshev polynomials that provably
achieves the optimal estimation error. Numerical experiments on both simulated
and real data corroborate our theoretical findings.
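For concreteness, the two building blocks that the Switch estimator alternates between can be sketched as follows. This is an illustrative simplification: the actual Switch rule selects between the two estimators per arm based on the size of the importance weights, which is not shown here.

```python
import numpy as np

def plug_in_estimate(actions, rewards, target_probs, n_arms):
    """Plug-in: estimate each arm's mean reward from the logged data, then
    average under the target policy."""
    mu_hat = np.zeros(n_arms)
    for a in range(n_arms):
        obs = rewards[actions == a]
        mu_hat[a] = obs.mean() if obs.size else 0.0
    return float(np.dot(target_probs, mu_hat))

def importance_sampling_estimate(actions, rewards, target_probs, behavior_probs):
    """IS: reweight each logged reward by pi(a)/beta(a).
    Assumes behavior_probs[a] > 0 for every logged action a."""
    w = target_probs[actions] / behavior_probs[actions]
    return float(np.mean(w * rewards))
```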
|
Mixed-frequency Vector AutoRegressions (MF-VAR) model the dynamics between
variables recorded at different frequencies. However, as the number of series
and high-frequency observations per low-frequency period grow, MF-VARs suffer
from the "curse of dimensionality". We curb this curse through a regularizer
that permits various hierarchical sparsity patterns by prioritizing the
inclusion of coefficients according to the recency of the information they
contain. Additionally, we investigate the presence of nowcasting relations by
sparsely estimating the MF-VAR error covariance matrix. We study predictive
Granger causality relations in a MF-VAR for the U.S. economy and construct a
coincident indicator of GDP growth.
|
One indoor localization approach uses Wi-Fi Access Points (APs) to estimate
the Direction of Arrival (DoA) of Wi-Fi signals. This paper demonstrates FIND,
a tool for Fine INDoor localization based on a software-defined radio, which
receives Wi-Fi frames in the 80 MHz band with four antennas. To the best of
our knowledge, it is the first prototype that extracts from such frames data
in both the frequency and time domains to calculate the DoA of Wi-Fi signals
in real time. Unlike other prototypes, we retrieve from frames comprehensive
information that can be used for DoA estimation: all preamble fields in the
time domain, Channel State Information, and signal-to-noise ratio. Using our
device, we collect a dataset for comparing different algorithms that estimate
the angle of arrival in the same scenario. Furthermore, we propose a novel
calibration method that eliminates the constant phase shift between receiving
paths caused by hardware imperfections. All calibration data, as well as a
dataset gathered with various DoAs in an anechoic chamber and in a classroom,
are provided to facilitate further research in the area of indoor
localization, intelligent surfaces, and multi-user transmissions in dense
deployments.
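As an example of the kind of angle-of-arrival algorithm such a dataset can benchmark, here is a minimal sketch of the classical MUSIC estimator for a four-antenna uniform linear array. The half-wavelength spacing and the ideal narrowband steering model are simplifying assumptions, and this is not necessarily the algorithm used in FIND.

```python
import numpy as np

def music_doa(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 361)):
    """Classical MUSIC DoA estimation for a uniform linear array.
    X: (n_antennas, n_snapshots) complex baseband samples; d: element spacing
    in wavelengths. Returns the angle grid and the MUSIC pseudospectrum."""
    n_ant = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
    En = eigvecs[:, : n_ant - n_sources]      # noise subspace
    k = np.arange(n_ant)
    spectrum = []
    for theta in np.deg2rad(grid):
        a = np.exp(-2j * np.pi * d * k * np.sin(theta))   # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return grid, np.asarray(spectrum)         # peaks indicate arrival angles
```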
|
In this paper we describe a computational model for the simulation of
fluid-structure interaction problems based on a fictitious domain approach. We
summarize the results presented over the last years when our research evolved
from the Finite Element Immersed Boundary Method (FE-IBM) to the actual Finite
Element Distributed Lagrange Multiplier method (FE-DLM). We recall the
well-posedness of our formulation at the continuous level in a simplified
setting. We describe various time semi-discretizations that provide
unconditionally stable schemes. Finally we report the stability analysis for
the finite element space discretization where some improvements and
generalizations of the previous results are obtained.
|
Defects in solid state materials provide an ideal, robust platform for
quantum sensing. To deliver maximum sensitivity, a large ensemble of
non-interacting defects hosting coherent quantum states is required. Control
of such an ensemble is challenging due to the spatial variation in both the
defect energy levels and in any control field across a macroscopic sample. In
this work we experimentally demonstrate that we can overcome these challenges
using Floquet theory and optimal control methods to efficiently and coherently
control a large defect ensemble, suitable for sensing. We apply
our methods experimentally to a spin ensemble of up to 4 $\times$ 10$^9$
nitrogen vacancy (NV) centers in diamond. By considering the physics of the
system and explicitly including the hyperfine interaction in the optimization,
we design shaped microwave control pulses that can outperform conventional
($\pi$-) pulses when applied to sensing of temperature or magnetic field, with
a potential sensitivity improvement between 11 and 78\%. Through dynamical
modelling of the behaviour of the ensemble, we shed light on the physical
behaviour of the ensemble system and propose new routes for further
improvement.
|
t-distributed stochastic neighbor embedding (t-SNE) is a well-established
visualization method for complex high-dimensional data. However, the original
t-SNE method is nonparametric, stochastic, and often cannot preserve the
global structure of data well, as it emphasizes local neighborhoods. With
t-SNE as a reference, we propose to combine the deep neural network (DNN) with
mathematically grounded embedding rules for high-dimensional data embedding. We
first introduce a deep embedding network (DEN) framework, which can learn a
parametric mapping from high-dimensional space to low-dimensional embedding.
DEN has a flexible architecture that can accommodate different input data
(vector, image, or tensor) and loss functions. To improve the embedding
performance, a recursive training strategy is proposed to make use of the
latent representations extracted by DEN. Finally, we propose a two-stage loss
function combining the advantages of two popular embedding methods, namely,
t-SNE and uniform manifold approximation and projection (UMAP), for optimal
visualization effect. We name the proposed method Deep Recursive Embedding
(DRE), which optimizes DEN with a recursive training strategy and a two-stage
loss. Our experiments demonstrated the excellent performance of the proposed
DRE method on high-dimensional data embedding, across a variety of public
databases. Remarkably, our comparative results suggested that our proposed DRE
could lead to improved global structure preservation.
|
The quiet solar corona consists of myriads of loop-like features, with
magnetic fields originating from network and internetwork regions on the solar
surface. The continuous interaction between these different magnetic patches
leads to transient brightenings or bursts that might contribute to the heating
of the solar atmosphere. However, it remains unclear whether such transients,
which are mostly observed in the EUV, play a significant role in atmospheric
heating. We revisit the open question of the role of these bursts as a prelude to the new
high-resolution EUV imagery expected from the recently launched Solar Orbiter.
We use EUV images recorded by the SDO/AIA to investigate statistical properties
of the bursts. We detect the bursts in the 171 {\AA} filter images of AIA in an
automated way through a pixel-wise analysis by imposing different intensity
thresholds. By exploiting the high cadence (12 s) of the AIA observations, we
find that the distribution of lifetimes of these events peaks at about 120 s.
The sizes of the detected bursts are limited by the spatial resolution, which
indicates that a larger number of events might be hidden in the AIA data. We
estimate that about 100 new bursts appear per second on the whole Sun. The
detected bursts have nanoflare-like energies of $10^{24}$\,erg per event. Based
on this, we estimate that at least 100 times more events of a similar nature
would be needed to account for the energy required to heat the corona. When
AIA observations are considered alone, the EUV bursts discussed
here therefore play no significant role in the coronal heating of the quiet
Sun. If the coronal heating of the quiet Sun is mainly bursty, then the
high-resolution EUV observations from Solar Orbiter may be able to reduce the
deficit in the number of EUV bursts seen with SDO/AIA at least partly by
detecting more such events.
|
Group testing can help maintain a widespread testing program using fewer
resources amid a pandemic. In group testing, we are given $n$ samples, one per
individual. These samples are arranged into $m < n$ pooled samples, where each
pool is obtained by mixing a subset of the $n$ individual samples. Infected
individuals are then identified using a group testing algorithm. In this paper,
we use side information (SI) collected from contact tracing (CT) within
nonadaptive/single-stage group testing algorithms. We generate CT SI data by
incorporating characteristics of disease spread between individuals. These data
are fed into two signal and measurement models for group testing, and numerical
results show that our algorithms provide improved sensitivity and specificity.
We also show how to incorporate CT SI into the design of the pooling matrix.
That said, our numerical results suggest that the utilization of SI in the
pooling matrix design does not yield significant performance gains beyond the
incorporation of SI in the group testing algorithm.
|
A search for pair production of third-generation scalar leptoquarks decaying
into a top quark and a $\tau$-lepton is presented. The search is based on a
dataset of $pp$ collisions at $\sqrt{s}=13$ TeV recorded with the ATLAS
detector during Run 2 of the Large Hadron Collider, corresponding to an
integrated luminosity of 139 fb$^{-1}$. Events are selected if they have one
light lepton (electron or muon) and at least one hadronically decaying
$\tau$-lepton, or at least two light leptons. In addition, two or more jets, at
least one of which must be identified as containing $b$-hadrons, are required.
Six final states, defined by the multiplicity and flavour of lepton candidates,
are considered in the analysis. Each of them is split into multiple event
categories to simultaneously search for the signal and constrain several
leading backgrounds. The signal-rich event categories require at least one
hadronically decaying $\tau$-lepton candidate and exploit the presence of
energetic final-state objects, which is characteristic of signal events. No
significant excess above the Standard Model expectation is observed in any of
the considered event categories, and 95% CL upper limits are set on the
production cross section as a function of the leptoquark mass, for different
assumptions about the branching fractions into $t\tau$ and $b\nu$. Scalar
leptoquarks decaying exclusively into $t\tau$ are excluded up to masses of 1.43
TeV while, for a branching fraction of 50% into $t\tau$, the lower mass limit
is 1.22 TeV.
|
We describe a robust method for determining Pipek-Mezey (PM) Wannier
functions (WF), recently introduced by J\'onsson et al. (J. Chem. Theory
Comput. 2017, 13, 460), which provide some formal advantages over the more
common Boys
(also known as maximally-localized) Wannier functions. The
Broyden-Fletcher-Goldfarb-Shanno (BFGS) based PMWF solver is demonstrated to
yield dramatically faster convergence compared to the alternatives (steepest
ascent and conjugate gradient) in a variety of 1-, 2-, and 3-dimensional solids
(including some with vanishing gaps), and can be used to obtain Wannier
functions robustly in supercells with thousands of atoms. Evaluation of the PM
functional and its gradient in periodic LCAO representation used a particularly
simple definition of atomic charges obtained by Moore-Penrose pseudoinverse
projection onto the minimal atomic orbital basis. An automated "Canonicalize
Phase then Randomize" (CPR) method for generating the initial guess for WFs
contributes significantly to the robustness of the solver.
|
The system under study is the $\Lambda$-Kantowski-Sachs universe. Its
canonical quantization is provided based on a recently developed method: the
singular minisuperspace Lagrangian describing the system, is reduced to a
regular (by inserting into the dynamical equations the lapse dictated by the
quadratic constraint) possessing an explicit (though arbitrary) time
dependence; thus a time-covariant Schr\"{o}dinger equation arises.
Additionally, an invariant (under transformations $t=f(\tilde{t})$) decay
probability is defined and thus ``observers'' which correspond to different
gauge choices obtain, by default, the same results. The time of decay for a
Gaussian wave packet localized around the point $a=0$ (where $a$ the radial
scale factor) is calculated to be of the order $\sim
10^{-42}-10^{-41}\mathrm{s}$. The acquired value is near the end of the Planck
era (when comparing to a FLRW universe), during which the quantum effects are
most prominent. Some of the results are compared to those obtained by following
the well known canonical quantization of cosmological systems, i.e. the
solutions of the Wheeler-DeWitt equation.
|
A class of analytical solutions of axially symmetric vacuum initial data for
a self-gravitating system has been found. The active region of the constructed
gravitational wave is a thin torus around which the solution is conformally
flat. For higher values of the gravitational wave amplitude, the resulting
hypersurface contains apparent horizons.
|
Automatic garbage collection (GC) prevents certain kinds of bugs and reduces
programming overhead. GC techniques for sequential programs are based on
reachability analysis. However, testing reachability from a root set is
inadequate for determining whether an actor is garbage: Observe that an
unreachable actor may send a message to a reachable actor. Instead, it is
sufficient to check termination (sometimes also called quiescence): an actor is
terminated if it is not currently processing a message and cannot receive a
message in the future. Moreover, many actor frameworks provide all actors with
access to file I/O or external storage; without inspecting an actor's internal
code, it is necessary to check that the actor has terminated to ensure that it
may be garbage collected in these frameworks. Previous algorithms to detect
actor garbage require coordination mechanisms such as causal message delivery
or nonlocal monitoring of actors for mutation. Such coordination mechanisms
adversely affect concurrency and are therefore expensive in distributed
systems. We present a low-overhead reference listing technique (called DRL) for
termination detection in actor systems. DRL is based on asynchronous local
snapshots and message-passing between actors. This enables a decentralized
implementation and transient network partition tolerance. The paper provides a
formal description of DRL, shows that all actors identified as garbage have
indeed terminated (safety), and that all terminated actors--under certain
reasonable assumptions--will eventually be identified (liveness).
|
A single-molecule memory device based on a single-molecule magnet (SMM) is one
of the ultimate goals of semiconductor nanofabrication technologies. Here, we
study how to manipulate and read out the SMM's two spin states of stored
information, which are characterized by the maximum and minimum average values
of the $Z$-component of the total spin of the SMM and the conduction electron,
and which are recognized as the information bits "$1$" and "$0$". We
demonstrate that the switching time depends on both the sequential tunneling
gap $\varepsilon_{se}$ and the spin-selection-rule allowed transition energy
$\varepsilon_{trans}$, which can be tuned by the gate voltage. In particular,
when the external bias voltage is turned off, in the cases of the unoccupied
and doubly-occupied ground eigenstates, the time derivative of the transport
current can be used to read out the SMM's two spin states of stored
information. Moreover, the tunneling strength and the asymmetry of the
SMM-electrode coupling have a strong influence on the switching time, but only
a slight influence on the readout time, which is on the order of nanoseconds.
Our results suggest an SMM-based memory device, and provide fundamental
insight into the electrically controllable manipulation and readout of the
SMM's two spin states of stored information.
|
The plastic deformation mechanisms of tungsten carbide at room and elevated
temperatures influence the wear and fracture properties of WC-Co hardmetal
composite materials. The relationship between residual defect structures,
including glissile and sessile dislocations and stacking faults, and the slip
deformation activity, which produce slip traces, is not clear. Part 1 of this
study showed that {10-10} was the primary slip plane at all measured
temperatures and orientations, but secondary slip on the basal plane was
activated at 600 {\deg}C, which suggests that <a> dislocations can cross-slip
onto the basal plane at 600 {\deg}C. In the present work, Part 2, lattice
rotation axis analysis of deformed WC micropillar mid-sections was used to
discriminate <a> prismatic slip from multiple <c+a> prismatic slip in WC, which
enabled the dislocation types contributing to plastic slip to be distinguished,
independently of TEM residual defect analysis. Prismatic-oriented micropillars
deformed primarily by multiple <c+a> prismatic slip at room temperature, but by
<a> prismatic slip at 600 {\deg}C. Deformation in the near-basal oriented
pillar at 600 {\deg}C can be modelled as prismatic slip along <c> constrained
by the indenter face and pillar base. Secondary <a> basal slip, which was
observed near the top of the pillar, was activated to maintain deformation
compatibility with the indenter face. The lattice rotations, buckled pillar
shape, mechanical data, and slip traces observed in the pillar are all
consistent with this model.
|
Video classification and analysis has always been a popular and challenging
field in computer vision. It is more than simple image classification: the
semantic correlation between subsequent frames brings additional difficulties
for video analysis. In this literature review, we summarize some
state-of-the-art methods for multi-label video classification. Our goal is
first to experimentally study the current widely used architectures, and then
to develop a method to deal with the sequential data of frames and perform
multi-label classification based on automatic content detection of video.
|
The generation-defining Vera C. Rubin Observatory will make state-of-the-art
measurements of both the static and transient universe through its Legacy
Survey for Space and Time (LSST). With such capabilities, it is immensely
challenging to optimize the LSST observing strategy across the survey's wide
range of science drivers. Many aspects of the LSST observing strategy relevant
to the LSST Dark Energy Science Collaboration, such as survey footprint
definition, single visit exposure time and the cadence of repeat visits in
different filters, are yet to be finalized. Here, we present metrics used to
assess the impact of observing strategy on the cosmological probes considered
most sensitive to survey design; these are large-scale structure, weak lensing,
type Ia supernovae, kilonovae and strong lens systems (as well as photometric
redshifts, which enable many of these probes). We evaluate these metrics for
over 100 different simulated potential survey designs. Our results show that
multiple observing strategy decisions can profoundly impact cosmological
constraints with LSST; these include adjusting the survey footprint, ensuring
repeat nightly visits are taken in different filters and enforcing regular
cadence. We provide public code for our metrics, which makes them readily
available for evaluating further modifications to the survey design. We
conclude with a set of recommendations and highlight observing strategy factors
that require further research.
|
This paper reexamines the seminal Lagrange multiplier test for cross-section
independence in a large panel model where both the number of cross-sectional
units n and the number of time series observations T can be large. The first
contribution of the paper is an enlargement of the test with two extensions:
first, a new asymptotic normality is derived in a simultaneous limiting scheme
where the two dimensions (n, T) tend to infinity with comparable magnitudes;
second, the result is valid for general error distributions (not necessarily
normal). The second contribution of the paper is a new test
statistic based on the sum of the fourth powers of cross-section correlations
from OLS residuals, instead of their squares used in the Lagrange multiplier
statistic. This new test is generally more powerful, and the improvement is
particularly visible against alternatives with weak or sparse cross-section
dependence. Both simulation study and real data analysis are proposed to
demonstrate the advantages of the enlarged Lagrange multiplier test and the
power enhanced test in comparison with the existing procedures.
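To make the two statistics concrete, the sketch below computes the raw sums they are built on from a residual matrix. The centering and scaling constants needed for the asymptotic null distributions are deliberately omitted, so this illustrates the construction rather than the paper's exact test statistics.

```python
import numpy as np

def pairwise_corr_stats(resid):
    """resid: (T, n) matrix of OLS residuals, one column per cross-section
    unit. Returns the raw sums underlying the two tests: the Lagrange
    multiplier sum of squared pairwise correlations, and the sum of fourth
    powers used by the power enhanced test. (Null-distribution centering and
    scaling constants are omitted in this sketch.)"""
    T, n = resid.shape
    C = np.corrcoef(resid, rowvar=False)    # n x n pairwise correlations
    rho = C[np.triu_indices(n, k=1)]        # upper-triangular pairs i < j
    lm_sum = T * np.sum(rho**2)             # basis of the LM statistic
    fourth_sum = T**2 * np.sum(rho**4)      # basis of the fourth-power test
    return lm_sum, fourth_sum
```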
|
In recent years, knowledge distillation has been proved to be an effective
solution for model compression. This approach can make lightweight student
models acquire the knowledge extracted from cumbersome teacher models. However,
previous distillation methods for detection have weak generalization across
different detection frameworks and rely heavily on ground truth (GT), ignoring
the valuable relation information between instances. Thus, we propose a novel
distillation method for detection tasks based on discriminative instances
without considering the positive or negative distinguished by GT, which is
called general instance distillation (GID). Our approach contains a general
instance selection module (GISM) to make full use of feature-based,
relation-based and response-based knowledge for distillation. Extensive results
demonstrate that the student model achieves significant AP improvement and even
outperforms the teacher in various detection frameworks. Specifically,
RetinaNet with ResNet-50 achieves 39.1% in mAP with GID on COCO dataset, which
surpasses the baseline 36.2% by 2.9%, and even better than the ResNet-101 based
teacher model with 38.1% AP.
|
Quadratic Unconstrained Binary Optimization models are useful for solving a
diverse range of optimization problems. Constraints can be added by
incorporating quadratic penalty terms into the objective, often with the
introduction of slack variables needed for conversion of inequalities. This
transformation can lead to a significant increase in the size and density of
the problem. Herein, we propose an efficient approach for recasting inequality
constraints that reduces the number of linear and quadratic variables.
Experimental results illustrate the efficacy.
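For context, the conventional slack-variable conversion that this approach seeks to improve on can be sketched as follows. The helper below is illustrative (nonnegative integer coefficients, user-chosen penalty weight `P`) and shows how the slack bits inflate the variable count and matrix density.

```python
import numpy as np

def inequality_to_penalty(a, b, P):
    """Conventional slack-variable conversion of  sum_i a_i x_i <= b
    (nonnegative integer a_i, b) into the quadratic penalty
        P * (sum_i a_i x_i + sum_j c_j s_j - b)^2,
    with binary slack bits s_j whose coefficients c_j are capped so the slack
    ranges over 0..b exactly. Returns the QUBO matrix over z = (x, s)."""
    n_bits = int(np.floor(np.log2(b))) + 1 if b > 0 else 0
    slack = [2.0**j for j in range(max(n_bits - 1, 0))]
    if n_bits > 0:
        slack.append(b - (2.0**(n_bits - 1) - 1.0))  # cap the top coefficient
    c = np.concatenate([np.asarray(a, dtype=float), np.asarray(slack)])
    Q = P * np.outer(c, c)                           # quadratic terms of (c.z - b)^2
    Q[np.diag_indices_from(Q)] -= 2.0 * P * b * c    # linear terms (z binary: z^2 = z)
    return Q                                         # constant P*b^2 omitted

# Example: x1 + x2 + x3 <= 2 with penalty weight P = 10
Q = inequality_to_penalty([1, 1, 1], 2, 10.0)        # 5x5 QUBO (3 x's + 2 slack bits)
```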
|
The spin-$1/2$ XXZ chain is an integrable lattice model and parts of its spin
current can be protected by local conservation laws for anisotropies
$-1<\Delta<1$. In this case, the Drude weight $D(T)$ is non-zero at finite
temperatures $T$. Here we obtain analytical results for $D(T)$ at low
temperatures for zero external magnetic field and anisotropies
$\Delta=\cos(n\pi/m)$ with $n,m$ coprime integers, using the thermodynamic
Bethe ansatz. We show that to leading orders
$D(T)=D(0)-a(\Delta)T^{2K-2}-b_1(\Delta)T^2$ where $K$ is the Luttinger
parameter and the prefactor $a(\Delta)$, obtained in closed form, has a fractal
structure as function of anisotropy $\Delta$. The prefactor $b_1(\Delta)$, on
the other hand, does not have a fractal structure and can be obtained in a
standard field-theoretical approach. Including both temperature corrections, we
obtain an analytic result for the low-temperature asymptotics of the Drude
weight in the entire regime $-1<\Delta=\cos(n\pi/m)<1$.
|
Silicon-germanium heterojunction bipolar transistors (HBTs) are of interest
as low-noise microwave amplifiers due to their competitive noise performance
and low cost relative to III-V devices. The fundamental noise performance
limits of HBTs are thus of interest, and several studies report that
quasiballistic electron transport across the base is a mechanism leading to
cryogenic non-ideal IV characteristics that affects these limits. However, this
conclusion has not been rigorously tested against theoretical predictions
because prior studies modeled electron transport with empirical approaches or
approximate solutions of the Boltzmann equation. Here, we study non-diffusive
transport in narrow-base SiGe HBTs using an exact, semi-analytic solution of
the Boltzmann equation based on an asymptotic expansion approach. We find that
the computed transport characteristics are inconsistent with experiment,
implying that quasiballistic electron transport is unlikely to be the origin of
cryogenic non-ideal IV characteristics. Our work helps to identify the
mechanisms governing the lower limits of the microwave noise figure of
cryogenic HBT amplifiers.
|
The NLC2CMD Competition hosted at NeurIPS 2020 aimed to bring the power of
natural language processing to the command line. Participants were tasked with
building models that can transform descriptions of command line tasks in
English to their Bash syntax. This is a report on the competition with details
of the task, metrics, data, attempted solutions, and lessons learned.
|
We perform a detailed numerical study of diffusion in the $\varepsilon$
stadium of Bunimovich, and propose an empirical model of the local and global
diffusion for various values of $\varepsilon$ with the following conclusions:
(i) the diffusion is normal for all values of $\varepsilon \leq 0.3$ and all
initial conditions, (ii) the diffusion constant is a parabolic function of the
momentum (i.e., we have inhomogeneous diffusion), (iii) the model describes the
diffusion very well including the boundary effects, (iv) the approach to the
asymptotic equilibrium steady state is exponential, (v) the so-called random
model (Robnik et al., 1997) is confirmed to apply very well, (vi) the diffusion
constant extracted from the distribution function in momentum space and the one
derived from the second moment agree very well. The classical transport time,
an important parameter in quantum chaos, is thus determined.
|
The group testing problem consists of determining a small set of defective
items from a larger set of items based on a number of possibly-noisy tests, and
has numerous practical applications. One of the defining features of group
testing is whether the tests are adaptive (i.e., a given test can be chosen
based on all previous outcomes) or non-adaptive (i.e., all tests must be chosen
in advance). In this paper, building on the success of binary splitting
techniques in noiseless group testing (Hwang, 1972), we introduce noisy group
testing algorithms that apply noisy binary search as a subroutine. We provide
three variations of this approach with increasing complexity, culminating in an
algorithm that succeeds using a number of tests that matches the best known
previously (Scarlett, 2019), while overcoming fundamental practical limitations
of the existing approach, and more precisely capturing the dependence of the
number of tests on the error probability. We provide numerical experiments
demonstrating that adaptive group testing strategies based on noisy binary
search can be highly effective in practice, using significantly fewer tests
compared to state-of-the-art non-adaptive strategies.
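As background on the subroutine structure, here is a minimal sketch of noiseless binary splitting in the spirit of Hwang (1972), which locates a single defective with roughly log2(n) tests. The paper's algorithms replace the perfect `test` oracle below with a noisy binary search, which is not shown here.

```python
def find_one_defective(test, items):
    """Noiseless binary splitting: given a pool known to contain at least one
    defective, locate one using about log2(len(items)) tests.
    `test(pool)` returns True iff the pool contains a defective."""
    while len(items) > 1:
        half = items[: len(items) // 2]
        items = half if test(half) else items[len(half):]
    return items[0]

# Toy usage with a perfect (noiseless) test oracle
defectives = {13, 42}
test = lambda pool: any(i in defectives for i in pool)
print(find_one_defective(test, list(range(64))))  # finds one defective item
```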
|
To realize autonomous collaborative robots, it is important to increase the
trust that users have in them. Toward this goal, this paper proposes an
algorithm which endows an autonomous agent with the ability to explain the
transition from the current state to the target state in a Markov decision
process (MDP). According to cognitive science, to generate an explanation that
is acceptable to humans, it is important to present the minimum information
necessary to sufficiently understand an event. To meet this requirement, this
study proposes a framework for identifying important elements in the
decision-making process using a prediction model for the world and generating
explanations based on these elements. To verify the ability of the proposed
method to generate explanations, we conducted an experiment using a grid
environment. It was inferred from the result of a simulation experiment that
the explanation generated using the proposed method was composed of the minimum
elements important for understanding the transition from the current state to
the target state. Furthermore, subject experiments showed that the generated
explanation was a good summary of the process of state transition, and that a
high evaluation was obtained for the explanation of the reason for an action.
|
An interpretable system for open-domain reasoning needs to express its
reasoning process in a transparent form. Natural language is an attractive
representation for this purpose -- it is both highly expressive and easy for
humans to understand. However, manipulating natural language statements in
logically consistent ways is hard: models must cope with variation in how
meaning is expressed while remaining precise. In this paper, we describe
ParaPattern, a method for building models to generate deductive inferences from
diverse natural language inputs without direct human supervision. We train
BART-based models (Lewis et al., 2020) to generate the result of applying a
particular logical operation to one or more premise statements. Crucially, we
develop a largely automated pipeline for constructing suitable training
examples from Wikipedia. We evaluate our models using out-of-domain sentence
compositions from the QASC (Khot et al., 2020) and EntailmentBank (Dalvi et
al., 2021) datasets as well as targeted perturbation sets. Our results show
that our models are substantially more accurate and flexible than baseline
systems. ParaPattern achieves 85% validity on examples of the 'substitution'
operation from EntailmentBank without the use of any in-domain training data,
matching the performance of a model fine-tuned for EntailmentBank. The full
source code for our method is publicly available.
|
We present a uniform (and unambiguous) procedure for scaling the matter
fields in implementing the conformal method to parameterize and construct
solutions of Einstein constraint equations with coupled matter sources. The
approach is based on a phase space representation of the space-time matter
fields after a careful $n+1$ decomposition into spatial fields $B$ and
conjugate momenta $\Pi_B$, which are specified directly and are conformally
invariant quantities. We show that if the Einstein-matter field theory is
specified by a Lagrangian which is diffeomorphism invariant and involves no
dependence on derivatives of the space-time metric in the matter portion of the
Lagrangian, then fixing $B$ and $\Pi_B$ results in conformal constraint
equations that, for constant-mean curvature initial data, semi-decouple just as
they do for the vacuum Einstein conformal constraint equations. We prove this
result by establishing a structural property of the Einstein momentum
constraint that is independent of the conformal method: For an Einstein-matter
field theory which satisfies the conditions just stated, if $B$ and $\Pi_B$
satisfy the matter Euler-Lagrange equations, then (in suitable form) the
right-hand side of the momentum constraint on each spatial slice depends only
on $B$ and $\Pi_B$ and is independent of the space-time metric. We discuss the
details of our construction in the special cases of the following models:
Einstein-Maxwell-charged scalar field, Einstein-Proca, Einstein-perfect fluid,
and Einstein-Maxwell-charged dust. In these examples we find that our technique
gives a theoretical basis for scaling rules, such as those for
electromagnetism, that have worked pragmatically in the past, but also
generates new equations with advantageous features for perfect fluids that
allow direct specification of total rest mass and total charge in any spatial
region.
|
We obtain an array of consistency results concerning trees and stationary
reflection at double successors of regular cardinals $\kappa$, updating some
classical constructions in the process. This includes models of
$\mathsf{CSR}(\kappa^{++})\wedge \mathsf{TP}(\kappa^{++})$ (both with and
without $\mathsf{AP}(\kappa^{++})$) and models of the conjunctions
$\mathsf{SR}(\kappa^{++}) \wedge \mathsf{wTP}(\kappa^{++}) \wedge
\mathsf{AP}(\kappa^{++})$ and $\neg \mathsf{AP}(\kappa^{++}) \wedge
\mathsf{SR}(\kappa^{++})$ (the latter was originally obtained in joint work by
Krueger and the first author \cite{GilKru:8fold}, and is here given using
different methods). Analogs of these results with the failure of
$\mathsf{SH}(\kappa^{++})$ are given as well. Finally, we obtain all of our
results with an arbitrarily large $2^\kappa$, applying recent joint work by
Honzik and the third author.
|
Nonlinear distortion of an OFDM signal is a serious problem when it comes to
energy-efficient Power Amplifier (PA) utilization. Typically, Peak-to-Average
Power Ratio (PAPR) reduction algorithms and digital predistortion algorithms
are used independently to fight the same phenomenon. This paper proposes an
Amplifier-Coupled Tone Reservation (ACTR) algorithm for the reduction of
nonlinear distortion power, utilizing knowledge of the predistorted PA
characteristic. The optimization problem is defined and its convexity is
proved, and a computationally efficient solution is presented. Finally, its
performance is compared against two state-of-the-art TR algorithms by means of
simulations and measurements. The results show the proposed solution is
advantageous, both in terms of nonlinear distortion power and the required
number of computations.
|
We proposed a convolutional neural network for vertex classification on
3-dimensional dental meshes, and used it to detect teeth margins. An expanding
layer was constructed to collect statistic values of neighbor vertex features
and compute new features for each vertex with convolutional neural networks. An
end-to-end neural network was proposed to take vertex features, including
coordinates, curvatures and distance, as input and output each vertex
classification label. Several network structures with different parameters of
expanding layers, together with a baseline network without expanding layers,
were designed and trained on 1156 dental meshes. The accuracy, recall and
precision were validated on 145 dental meshes to rate the best network
structures, which were
finally tested on another 144 dental meshes. All networks with our expanding
layers performed better than baseline, and the best one achieved an accuracy of
0.877 both on validation dataset and test dataset.
|
Dynamic graph representation learning is a task to learn node embeddings over
dynamic networks, and has many important applications ranging from knowledge
graphs and citation networks to social networks. Graphs of this type are
usually large-scale, but only a small subset of vertices is relevant in
downstream tasks. Current methods are too expensive for this setting, as their
complexity is at best linear in both the number of nodes and edges. In this
paper,
we propose a new method, namely Dynamic Personalized PageRank Embedding
(\textsc{DynamicPPE}) for learning a target subset of node representations over
large-scale dynamic networks. Based on recent advances in local node embedding
and a novel computation of dynamic personalized PageRank vector (PPV),
\textsc{DynamicPPE} has two key ingredients: 1) the per-PPV complexity is
$\mathcal{O}(m \bar{d} / \epsilon)$ where $m,\bar{d}$, and $\epsilon$ are the
number of edges received, the average degree, and the global precision error,
respectively. Thus, the per-edge event update of a single node depends only on
$\bar{d}$ on average; and 2) by using these high-quality PPVs and hash
kernels, the
learned embeddings have properties of both locality and global consistency.
These two make it possible to capture the evolution of graph structure
effectively. Experimental results demonstrate both the effectiveness and
efficiency of the proposed method over large-scale dynamic networks. We apply
\textsc{DynamicPPE} to capture the embedding change of Chinese cities in the
Wikipedia graph during this ongoing COVID-19 pandemic
(https://en.wikipedia.org/wiki/COVID-19_pandemic). Our results show that these
representations successfully encode the dynamics of the Wikipedia graph.
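For background on the local computations such methods build on, here is a minimal sketch of the classic static forward push routine for approximate personalized PageRank (Andersen et al., 2006). DynamicPPE's dynamic PPV maintenance is more involved, so this is only the static starting point, and the non-lazy push variant shown is one of several common conventions.

```python
from collections import defaultdict, deque

def forward_push_ppr(graph, source, alpha=0.15, eps=1e-4):
    """Static, non-lazy forward push for an approximate personalized PageRank
    vector. graph: dict node -> list of neighbors. Only nodes near `source`
    are touched, so the cost is independent of the total number of nodes."""
    p = defaultdict(float)   # PPR estimates
    r = defaultdict(float)   # residual probability mass
    r[source] = 1.0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        deg = len(graph.get(u, ()))
        if deg == 0 or r[u] < eps * deg:
            continue                      # residual too small to push
        p[u] += alpha * r[u]              # settle an alpha fraction at u
        share = (1.0 - alpha) * r[u] / deg
        r[u] = 0.0
        for v in graph[u]:
            r[v] += share                 # spread the rest to neighbors
            if graph.get(v) and r[v] >= eps * len(graph[v]):
                queue.append(v)
    return dict(p)
```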
|
The ability to map and estimate the activity of radiological source
distributions in unknown three-dimensional environments has applications in the
prevention and response to radiological accidents or threats as well as the
enforcement and verification of international nuclear non-proliferation
agreements. Such a capability requires well-characterized detector response
functions, accurate time-dependent detector position and orientation data, a
digitized representation of the surrounding 3D environment, and appropriate
image reconstruction and uncertainty quantification methods. We have previously
demonstrated 3D mapping of gamma-ray emitters with free-moving detector systems
on a relative intensity scale using a technique called Scene Data Fusion (SDF).
Here we characterize the detector response of a multi-element gamma-ray imaging
system using experimentally benchmarked Monte Carlo simulations and perform 3D
mapping on an absolute intensity scale. We present experimental reconstruction
results from hand-carried and airborne measurements with point-like and
distributed sources in known configurations, demonstrating quantitative SDF in
complex 3D environments.
|
We study the prime-to-$p$ Hecke action on the projective limit of the sets of
connected components of Shimura varieties with fixed parahoric or Bruhat--Tits
level at $p$. In particular, we construct infinitely many Shimura varieties for
CM unitary groups in an odd number of variables for which the actions under
consideration are not transitive. We prove this result by giving negative
answers to the question of Bruhat--Colliot-Th\'el\`ene--Sansuc--Tits or its
variant, which is related to weak approximation on tori over $\mathbb{Q}$.
|
We formulate a nonlinear optimal control problem for intra-day operation of a
natural gas pipeline network that includes storage reservoirs. The dynamics of
compressible gas flow through pipes, compressors, reservoirs, and wells are
considered. In particular, a reservoir is modeled as a rigid, hollow container
that stores gas under isothermal conditions and uniform density, and a well is
modeled as a vertical pipe. For each pipe, flow dynamics are described by a
coupled partial differential equation (PDE) system in density and mass flux
variables, with momentum dissipation modeled using the Darcy-Weisbach friction
approximation. Compressors are modeled as scaling up the pressure of gas
between inlet and outlet. The governing equations for all network components
are spatially discretized and assembled into a nonlinear differential-algebraic
equation (DAE) system, which synthesizes above-ground pipeline and subsurface
reservoir dynamics into a single reduced-order model. We seek to maximize an
objective function that quantifies economic profit and network efficiency
subject to the flow equations and inequalities that represent operating
limitations. The problem is solved using a primal-dual interior point solver
and the solutions are validated in computational experiments and simulations on
several pipeline test networks to demonstrate the effectiveness of the proposed
methodology.
|
There is still a limited understanding of the necessary skill, talent, and
expertise to manage digital technologies as a crucial enabler of the hospitals
ability to adequately sense and respond to patient needs and wishes, i.e.,
patient agility. Therefore, this investigates how hospital departments can
leverage a digital dy-namic capability to enable the departments patient
agility. This study embraces the dynamic capabilities theory, develops a
research model, and tests it accordingly using data from 90 clinical hospital
departments from the Netherlands through an online survey. The model's
hypothesized relationships are tested using structural equation modeling (SEM).
The outcomes demonstrate the significance of digital dynamic capability in
developing patient sensing and responding capabili-ties that, in turn,
positively influence patient service performance. Outcomes are very relevant
for hospital practice now, as hospitals worldwide need to transform
healthcare delivery processes using digital technologies and increase clinical
productivity.
|
The knowledge of a deep learning model may be transferred to a student model,
leading to intellectual property infringement or vulnerability propagation.
Detecting such knowledge reuse is nontrivial because the suspect models may not
be white-box accessible and/or may serve different tasks. In this paper, we
propose ModelDiff, a testing-based approach to deep learning model similarity
comparison. Instead of directly comparing the weights, activations, or outputs
of two models, we compare their behavioral patterns on the same set of test
inputs. Specifically, the behavioral pattern of a model is represented as a
decision distance vector (DDV), in which each element is the distance between
the model's reactions to a pair of inputs. The knowledge similarity between two
models is measured with the cosine similarity between their DDVs. To evaluate
ModelDiff, we created a benchmark that contains 144 pairs of models that cover
most popular model reuse methods, including transfer learning, model
compression, and model stealing. Our method achieved 91.7% correctness on the
benchmark, which demonstrates the effectiveness of using ModelDiff for model
reuse detection. A study on mobile deep learning apps has shown the feasibility
of ModelDiff on real-world models.
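
A minimal sketch of the DDV comparison described above, assuming toy linear
"models" and an illustrative pairing of random inputs; the distance and
similarity choices follow the abstract (per-pair output distances compared by
cosine similarity).

```python
import numpy as np

def ddv(model, input_pairs):
    """Decision distance vector: one entry per input pair, each the
    distance between the model's outputs on the two inputs."""
    return np.array([np.linalg.norm(model(a) - model(b))
                     for a, b in input_pairs])

def knowledge_similarity(model1, model2, input_pairs):
    """Cosine similarity between the two models' DDVs."""
    v1, v2 = ddv(model1, input_pairs), ddv(model2, input_pairs)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Toy usage: a 'student' that crudely reuses the teacher's weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 5))
teacher = lambda x: W @ x
student = lambda x: 0.9 * (W @ x)
pairs = [(rng.normal(size=5), rng.normal(size=5)) for _ in range(64)]
print(knowledge_similarity(teacher, student, pairs))   # close to 1.0
```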
|
The affine rank minimization problem is a generalized version of the low-rank
matrix completion problem, in which linear combinations of the entries of a
low-rank matrix are observed and the matrix is estimated from these
measurements.
We propose a trainable deep neural network by unrolling a popular iterative
algorithm called the singular value thresholding (SVT) algorithm to perform
this generalized matrix completion which we call Learned SVT (LSVT). We show
that our proposed LSVT with a fixed number of layers (say T) reconstructs the
matrix with lower mean squared error (MSE) than that incurred by SVT with the
same fixed number T of iterations, and that our method is much more robust to
the parameters which need to be carefully chosen in the SVT algorithm.
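
For reference, the classical SVT iteration that LSVT unrolls can be sketched
as below, specialized to matrix completion (observations given by a mask);
the threshold tau and step size are illustrative hand-picked values, which is
precisely the tuning burden the learned variant is meant to remove.

```python
import numpy as np

def svt(M, mask, tau=5.0, step=1.0, T=100):
    """Singular value thresholding run for T fixed iterations."""
    Y = np.zeros_like(M)
    for _ in range(T):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt      # shrink singular values
        Y = Y + step * mask * (M - X)                # step on the data fit
    return X

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 50))  # rank-3 truth
mask = rng.random((50, 50)) < 0.5                        # observe half
X_hat = svt(A * mask, mask)
print(np.linalg.norm(X_hat - A) / np.linalg.norm(A))     # relative error
```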
|
For a connected reductive group $G$ defined over $\mathbb{F}_q$ and equipped
with the induced Frobenius endomorphism $F$, we study the relation among the
following three $\mathbb{Z}$-algebras: (i) the $\mathbb{Z}$-model
$\mathsf{E}_G$ of endomorphism algebras of Gelfand-Graev representations of
$G^F$; (ii) the Grothendieck group $\mathsf{K}_{G^\ast}$ of the category of
representations of $G^{\ast F^\ast}$ over $\overline{\mathbb{F}_q}$
(Deligne-Lusztig dual side); (iii) the ring $\mathsf{B}_{G^\vee}$ of the scheme
$(T^\vee/\!\!/ W)^{F^\vee}$ over $\mathbb{Z}$ (Langlands dual side). The
comparison between (i) and (iii) is motivated by recent advances in the local
Langlands program.
|
Whilst contrastive learning has recently brought notable benefits to deep
clustering of unlabelled images by learning sample-specific discriminative
visual features, its potential for explicitly inferring class decision
boundaries is less well understood. This is because its instance discrimination
strategy is not class sensitive, therefore, the clusters derived on the
resulting sample-specific feature space are not optimised for corresponding to
meaningful class decision boundaries. In this work, we solve this problem by
introducing Semantic Contrastive Learning (SCL). SCL imposes explicitly
distance-based cluster structures on unlabelled training data by formulating a
semantic (cluster-aware) contrastive learning objective. Moreover, we introduce
a clustering consistency condition to be satisfied jointly by both instance
visual similarities and cluster decision boundaries, concurrently optimising
both to reason about the hypotheses of semantic ground-truth classes
(unknown/unlabelled) on-the-fly by their consensus. This semantic contrastive
learning approach to discovering unknown class decision boundaries has
considerable advantages for unsupervised learning of object recognition tasks.
Extensive experiments show that SCL outperforms state-of-the-art contrastive
learning and deep clustering methods on six object recognition benchmarks,
especially on the more challenging finer-grained and larger datasets.
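
The cluster-aware objective can be illustrated with a toy prototype-style
contrastive loss as below; this is a hedged NumPy sketch in the spirit of SCL,
not the paper's exact formulation, and the temperature and nearest-centre
assignment rule are assumptions.

```python
import numpy as np

def semantic_contrastive_loss(z, centers, assign, tau=0.1):
    """z: (n, d) L2-normalized features; centers: (k, d) L2-normalized
    cluster centres; assign: (n,) cluster index per sample. Pulls each
    feature towards its own centre and away from the others."""
    logits = z @ centers.T / tau                   # (n, k) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(z)), assign].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 16)); z /= np.linalg.norm(z, axis=1, keepdims=True)
c = rng.normal(size=(4, 16));  c /= np.linalg.norm(c, axis=1, keepdims=True)
assign = (z @ c.T).argmax(axis=1)   # consistency: nearest-centre assignment
print(semantic_contrastive_loss(z, c, assign))
```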
|
Let $H$ be a digraph possibly with loops and $D$ a loopless digraph whose
arcs are colored with the vertices of $H$ ($D$ is said to be an $H-$colored
digraph). If $W=(x_{0},\ldots,x_{n})$ is an open walk in $D$ and $i\in
\{1,\ldots,n-1\}$, we say that there is an obstruction on $x_{i}$ whenever
$(color(x_{i-1},x_{i}),color(x_{i},x_{i+1}))\notin A(H)$. A $(k,l,H)$-kernel by
walks in an $H$-colored digraph $D$ ($k\geq 2$, $l\geq 1$), is a subset $S$ of
vertices of $D$, such that, for every pair of different vertices in $S$, every
walk between them has at least $k-1$ obstructions, and for every $x\in
V(D)\setminus S$ there exists an $xS$-walk with at most $l-1$ obstructions.
This concept generalizes the concepts of kernel, $(k,l)$-kernel, kernel by
monochromatic paths, and kernel by $H$-walks. If $D$ is an $H$-colored digraph,
an $H$-class partition is a partition $\mathscr{F}$ of $A(D)$ such that, for
every $\{(u,v),(v,w)\}\subseteq A(D)$, $(color(u,v),color(v,w))\in A(H)$ iff
there exists $F\in \mathscr{F}$ such that $\{(u,v),(v,w)\}\subseteq F$. The
$H$-class digraph relative to $\mathscr{F}$, denoted by $C_{\mathscr{F}}(D)$,
is the digraph such that $V(C_{\mathscr{F}}(D))=\mathscr{F}$, and $(F,G)\in
A(C_{\mathscr{F}}(D))$ iff there exist $(u,v)\in F$ and $(v,w)\in G$ with
$\{u,v,w\}\subseteq V(D)$.
We will show sufficient conditions on $\mathscr{F}$ and $C_{\mathscr{F}}(D)$
to guarantee the existence of $(k,l,H)$-kernels by walks in $H$-colored
digraphs, and we will show that some conditions are tight. For instance, we
will show that if an $H$-colored digraph $D$ has an $H$-class partition in
which every class induces a strongly connected digraph, and has an
obstruction-free vertex, then for every $k\geq 2$, $D$ has a $(k,k-1,H)$-kernel
by walks. Although finding $(k,l)$-kernels is an NP-complete problem, some of
the hypotheses presented in this paper can be verified in polynomial time.
|
We use an iteration procedure propped up by a classical form of the maximum
principle to show the existence of solutions to a nonlinear Poisson equation
with Dirichlet boundary conditions. These methods can be applied to the case of
special unbounded domains, and can be adapted to show the existence of
nontrivial solutions to systems, which we show via some examples.
|
The rapid growth of the e-commerce market in Indonesia has led various
e-commerce companies to appear, and competition among them is high. Marketing
intelligence is an important activity to measure competitive position. One
element of marketing intelligence is to assess customer satisfaction. Many
Indonesian customers express their sense of satisfaction or dissatisfaction
towards a company through social media. Hence, using social media data
provides a new, practical way to measure marketing intelligence effort. This
research performs sentiment analysis using the naive Bayes classifier with
TF-IDF weighting. We compare the sentiments towards the top-3 most-visited
e-commerce companies: Bukalapak, Tokopedia, and Elevenia. We use Twitter data
for sentiment analysis because it is faster, cheaper, and easier for both the
customer and the researcher. The purpose of this research is to find out how
to process the huge volume of customer sentiment on Twitter into useful
information for an e-commerce company, and to determine which of these top-3
e-commerce companies has the highest level of customer satisfaction. The
experimental results show that the method can classify customer sentiments on
Twitter automatically and that Elevenia is the e-commerce company with the
highest customer satisfaction.
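
The classification pipeline described above can be sketched in a few lines of
scikit-learn; the tweets and labels here are toy placeholders, not the study's
data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = ["pengiriman cepat, puas sekali",    # positive
          "barang rusak, kecewa berat",       # negative
          "pelayanan ramah dan cepat",        # positive
          "refund lambat, tidak puas"]        # negative
labels = ["pos", "neg", "pos", "neg"]

# Naive Bayes classifier with TF-IDF weighting, as in the abstract.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(tweets, labels)
print(model.predict(["puas dengan pelayanan"]))   # expected: ['pos']
```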
|
Marchenko methods are based on integral representations which express Green's
functions for virtual sources and/or receivers in the subsurface in terms of
the reflection response at the surface. An underlying assumption is that inside
the medium the wave field can be decomposed into downgoing and upgoing waves
and that evanescent waves can be neglected. We present a new derivation of
Green's function representations which circumvents these assumptions, both for
the acoustic and the elastodynamic situation. These representations form the
basis for research into new Marchenko methods which have the potential to
handle refracted and evanescent waves and to more accurately image steeply
dipping reflectors.
|
We investigate fast and communication-efficient algorithms for the classic
problem of minimizing a sum of strongly convex and smooth functions that are
distributed among $n$ different nodes, which can communicate using a limited
number of bits. Most previous communication-efficient approaches for this
problem are limited to first-order optimization, and therefore have
\emph{linear} dependence on the condition number in their communication
complexity. We show that this dependence is not inherent:
communication-efficient methods can in fact have sublinear dependence on the
condition number. For this, we design and analyze the first
communication-efficient distributed variants of preconditioned gradient descent
for Generalized Linear Models, and for Newton's method. Our results rely on a
new technique for quantizing both the preconditioner and the descent direction
at each step of the algorithms, while controlling their convergence rate. We
also validate our findings experimentally, showing fast convergence and reduced
communication.
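
The key quantization idea can be illustrated with a generic unbiased
stochastic quantizer applied to a descent direction, as sketched below; this
is not the paper's scheme for compressing preconditioners, just a minimal
stand-in showing how a node might compress what it sends at each step.

```python
import numpy as np

def quantize(v, bits=4, rng=None):
    """Unbiased stochastic rounding of v onto a 2^bits-level grid."""
    rng = rng or np.random.default_rng()
    levels = 2 ** bits - 1
    lo, hi = v.min(), v.max()
    scaled = (v - lo) / (hi - lo + 1e-12) * levels
    floor = np.floor(scaled)
    q = floor + (rng.random(v.shape) < scaled - floor)  # round up w.p. frac
    return lo + q / levels * (hi - lo)

# One gradient-descent step on 0.5*||Ax - b||^2 with a quantized direction.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 10)), rng.normal(size=100)
x = np.zeros(10)
grad = A.T @ (A @ x - b)
x -= 0.01 * quantize(grad, rng=rng)   # only the quantized vector is sent
```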
|
We propose to accelerate existing linear bandit algorithms to achieve
per-step time complexity sublinear in the number of arms $K$. The key to
sublinear complexity is the realization that the arm selection in many linear
bandit algorithms reduces to the maximum inner product search (MIPS) problem.
Correspondingly, we propose an algorithm that approximately solves the MIPS
problem for a sequence of adaptive queries yielding near-linear preprocessing
time complexity and sublinear query time complexity. Using the proposed MIPS
solver as a sub-routine, we present two bandit algorithms (one based on UCB,
and the other based on TS) that achieve sublinear time complexity. We
explicitly characterize the tradeoff between the per-step time complexity and
regret, and show that our proposed algorithms can achieve $O(K^{1-\alpha(T)})$
per-step complexity for some $\alpha(T) > 0$ and $\widetilde O(\sqrt{T})$
regret, where $T$ is the time horizon. Further, we present the theoretical
limit of the tradeoff, which provides a lower bound for the per-step time
complexity. We also discuss other choices of approximate MIPS algorithms and
other applications to linear bandit problems.
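
The MIPS reduction mentioned above is easy to see in code: for linear
UCB-type algorithms the arm choice is an argmax of inner products, so any
approximate MIPS routine can replace the exact scan. The sub-sampled "solver"
below is only an illustrative stand-in for the paper's data structure.

```python
import numpy as np

def exact_mips(arms, query):
    """Exact arm selection: O(K) inner products per step."""
    return int(np.argmax(arms @ query))

def approx_mips(arms, query, frac=0.1, rng=None):
    """Toy approximate MIPS: scan a random fraction of the arms."""
    rng = rng or np.random.default_rng()
    m = max(1, int(frac * len(arms)))
    idx = rng.choice(len(arms), size=m, replace=False)
    return int(idx[np.argmax(arms[idx] @ query)])

rng = np.random.default_rng(0)
arms = rng.normal(size=(10_000, 16))    # K = 10^4 arm feature vectors
theta = rng.normal(size=16)             # current parameter estimate
print(exact_mips(arms, theta), approx_mips(arms, theta, rng=rng))
```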
|
Lokshtanov et al.~[STOC 2017] introduced \emph{lossy kernelization} as a
mathematical framework for quantifying the effectiveness of preprocessing
algorithms in preserving approximation ratios. \emph{$\alpha$-approximate
reduction rules} are a central notion of this framework. We propose that
carefully crafted $\alpha$-approximate reduction rules can yield improved
approximation ratios in practice, while being easy to implement as well. This
is distinctly different from the (theoretical) purpose for which Lokshtanov et
al. designed $\alpha$-approximate reduction rules. As evidence in support of
this proposal we present a new 2-approximate reduction rule for the
\textsc{Dominating Set} problem. This rule, when combined with an approximation
algorithm for \textsc{Dominating Set}, yields significantly better
approximation ratios on a variety of benchmark instances as compared to the
latter algorithm alone.
The central thesis of this work is that $\alpha$-approximate reduction rules
can be used as a tool for designing approximation algorithms which perform
better in practice. To the best of our knowledge, ours is the first exploration
of the use of $\alpha$-approximate reduction rules as a design technique for
practical approximation algorithms. We believe that this technique could be
useful in coming up with improved approximation algorithms for other
optimization problems as well.
|
This paper studies the convergence of three temporal semi-discretizations for
a backward semilinear stochastic evolution equation. For a general terminal
value and a general Lipschitz-continuous coefficient, the convergence of three
Euler type temporal semi-discretizations is established without regularity
assumption on the solution. Moreover, the third temporal semi-discretization is
applied to a stochastic linear quadratic control problem, and an explicit
convergence rate is derived.
|
Discrete mechanics is presented as an alternative to the equations of fluid
mechanics, in particular to the Navier-Stokes equation. The derivation of the
discrete equation of motion is built from the intuitions of Galileo, the
principles of Galilean equivalence and relativity. Other more recent concepts
such as the equivalence between mass and energy and the Helmholtz-Hodge
decomposition complete the formal framework used to write a fundamental law of
motion as a conservation of acceleration, equating the intrinsic acceleration
of the material medium with the sum of the accelerations applied to it. The
scalar and vector potentials of the acceleration, resulting from its
decomposition into curl-free and divergence-free contributions, represent the
compression and shear energies per unit mass.
The solutions obtained by the incompressible Navier-Stokes equation and the
discrete equation of motion are the same, with constant physical properties.
This new formulation of the equation of motion makes it possible to
significantly modify the treatment of surface discontinuities, thanks to the
intrinsic properties established from the outset for a discrete geometrical
description directly linked to the decomposition of acceleration. The treatment
of the jump conditions of density, viscosity and capillary pressure is
explained in order to understand the two-phase flows. The choice of the
examples retained, mainly of the exact solutions of the continuous equations,
serves to show that the treatment of the conditions of jumps does not affect
the precision of the method of resolution.
|
Federated Learning (FL) enables multiple distributed clients (e.g., mobile
devices) to collaboratively train a centralized model while keeping the
training data locally on the client. Compared to traditional centralized
machine learning, FL offers many favorable features such as offloading
operations which would usually be performed by a central server and reducing
risks of serious privacy leakage. However, Byzantine clients that send
incorrect or disruptive updates due to system failures or adversarial attacks
may disturb the joint learning process, consequently degrading the performance
of the resulting model. In this paper, we propose to mitigate these failures
and attacks from a spatial-temporal perspective. Specifically, we use a
clustering-based method to detect and exclude incorrect updates by leveraging
their geometric properties in the parameter space. Moreover, to further handle
malicious clients with time-varying behaviors, we propose to adaptively adjust
the learning rate according to momentum-based update speculation. Extensive
experiments on 4 public datasets demonstrate that our algorithm achieves
enhanced robustness compared to existing methods under both cross-silo and
cross-device FL settings with faulty/malicious clients.
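
A hedged sketch of the clustering-based filtering step in the spirit of the
method above (not the paper's exact algorithm, and omitting the momentum-based
learning-rate adjustment): split the client updates into two clusters in
parameter space and aggregate only the larger one, assuming benign clients
form the majority.

```python
import numpy as np
from sklearn.cluster import KMeans

def robust_aggregate(updates):
    """updates: (n_clients, dim) array of client model updates."""
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(updates)
    majority = np.bincount(labels).argmax()
    return updates[labels == majority].mean(axis=0)

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.1, size=(18, 20))      # honest updates near 0
byzantine = rng.normal(5.0, 0.1, size=(2, 20))    # malicious outliers
agg = robust_aggregate(np.vstack([benign, byzantine]))
print(np.abs(agg).max())     # small: the outliers were excluded
```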
|
Ensemble Kalman inversion is a parallelizable derivative-free method to solve
inverse problems. The method uses an ensemble that follows the Kalman update
formula iteratively to solve an optimization problem. The ensemble size is
crucial to capture the correct statistical information in estimating the
unknown variable of interest. Still, the ensemble is limited to a size smaller
than the unknown variable's dimension for computational efficiency. This study
proposes a strategy to correct the sampling error due to a small ensemble size,
which improves the performance of the ensemble Kalman inversion. This study
validates the efficiency and robustness of the proposed strategy through a
suite of numerical tests, including compressive sensing, image deblurring,
parameter estimation of a nonlinear dynamical system, and a PDE-constrained
inverse problem.
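
For orientation, a single ensemble Kalman inversion step can be sketched as
follows for a toy linear forward map; no sampling-error correction (the
paper's contribution) is applied here, and the sizes and noise level are
illustrative.

```python
import numpy as np

def eki_step(U, G, y, Gamma, rng):
    """U: (J, d) parameter ensemble; G: forward map; y: data."""
    GU = np.array([G(u) for u in U])                 # (J, m) predictions
    du, dg = U - U.mean(axis=0), GU - GU.mean(axis=0)
    C_ug = du.T @ dg / (len(U) - 1)                  # cross-covariance
    C_gg = dg.T @ dg / (len(U) - 1)                  # prediction covariance
    K = C_ug @ np.linalg.inv(C_gg + Gamma)           # Kalman gain
    Y = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=len(U))
    return U + (Y - GU) @ K.T                        # perturbed-obs update

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 10))                         # linear forward map
y = A @ rng.normal(size=10)                          # synthetic data
U = rng.normal(size=(20, 10))                        # ensemble of 20 members
for _ in range(10):
    U = eki_step(U, lambda u: A @ u, y, 0.01 * np.eye(5), rng)
print(np.linalg.norm(A @ U.mean(axis=0) - y))        # data misfit shrinks
```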
|
On edge devices, data scarcity occurs as a common problem where transfer
learning serves as a widely-suggested remedy. Nevertheless, transfer learning
imposes a heavy computation burden on resource-constrained edge devices.
Existing task allocation works usually assume all submitted tasks are equally
important, leading to inefficient resource allocation at a task level when
directly applied in Multi-task Transfer Learning (MTL). To address these
issues, we first reveal that it is crucial to measure the impact of tasks on
overall decision performance improvement and quantify \emph{task importance}.
We then show that task allocation with task importance for MTL (TATIM) is a
variant of the NP-complete Knapsack problem, where the complicated computation
to solve this problem needs to be conducted repeatedly under varying contexts.
To solve TATIM with high computational efficiency, we propose a Data-driven
Cooperative Task Allocation (DCTA) approach. Finally, we evaluate the
performance of DCTA by not only a trace-driven simulation, but also a new
comprehensive real-world AIOps case study that bridges model and practice via a
new architecture and main components design within the AIOps system. Extensive
experiments show that our DCTA reduces processing time by a factor of 3.24 and
saves 48.4\% of energy consumption compared with the state-of-the-art when
solving TATIM.
|
The one-dimensional Bose-Hubbard model in the large-$U$ limit has been studied
by reducing and mapping the Hamiltonian to a simpler one. The eigenstates and
eigenvalues have been obtained exactly in the subspaces with fixed numbers of
single- and double-occupancies but without multiple-occupancies, and the
thermodynamic properties of the system have been calculated further. These
eigenstates and eigenvalues also enable us to develop a new perturbation
treatment of the model, with which the ground-state energy has been calculated
exactly to first order in $1/U$.
|
In magnetic Cataclysmic Variables (mCVs), X-ray radiation originates from the
shock heated multi-temperature plasma in the post-shock region near the white
dwarf surface. These X-rays are modified by a complex distribution of absorbers
in the pre-shock region. The presence of photo-ionized lines and warm absorber
features in the soft X-ray spectra of these mCVs suggests that these absorbers
are ionized. We developed the ionized complex absorber model zxipab, which is
represented by a power-law distribution of ionized absorbers in the pre-shock
flow. Using the ionized absorber model zxipab along with a cooling flow model
and a reflection component, we model the broadband Chandra/HETG and NuSTAR
spectra of two IPs: NY Lup and V1223 Sgr. We find that this model describes
well many of the H- and He-like emission lines from medium-Z elements, which
arise from the collisionally excited plasma. However, the model fails to
account for some of the He-like triplets from medium-Z elements, which points
towards their photo-ionization origin. We do not find compelling evidence for a
blackbody component to model the soft excess seen in the residuals of the
Chandra/HETG spectra, which could be due to the uncertainties in estimation of
the interstellar absorption of these sources using Chandra/HETG data and/or
excess fluxes seen in some photo-ionized emission lines which are not accounted
for by the cooling flow model. We describe the implications of this model with
respect to the geometry of the pre-shock region in these two IPs.
|
This text is written based on the author's publications during the period
from 1991 to 2001. The work is devoted to the theory of Markov intertwining
operators and joinings of measure-preserving group actions, as well as to their
applications to study asymptotic properties of dynamical systems. Special
attention is paid to Rokhlin's problems on multiple mixing and multiple
spectrum. The development of these topics over the past twenty years has not
been discussed. In fact many results on joinings have frozen in time, many
questions have remained open without losing their relevance, but probably have
ceased to excite interest due to difficulties. For example, it is not known
whether minimal self-joinings of order 2 imply minimal self-joinings of all
orders. Is there a non-trivial pairwise independent joining for a weakly
mixing system of zero entropy? What can be said about such joinings for
transformations with small local rank? These questions have been ripe for a
long time, and the author reminds the reader about them, combining his story
with numerous partial and related results.
|
Let $ \sigma$ be a partition of the set of all primes and $\mathfrak{F}$ be a
hereditary formation. We describe all formations $\mathfrak{F}$ for which the
$\mathfrak{F}$-hypercenter and the intersection of weak
$K$-$\mathfrak{F}$-subnormalizers of all Sylow subgroups coincide in every
group. In particular the formation of all $\sigma$-nilpotent groups has this
property. With the help of our results we solve a particular case of
L.A.~Shemetkov's problem about the intersection of $\mathfrak{F}$-maximal
subgroups and the $\mathfrak{F}$-hypercenter. As corollaries we obtain P.
Hall's and R. Baer's classical results about the hypercenter. We prove that
the non-$\sigma$-nilpotent graph of a group is connected and its diameter is at
most 3.
|
Andrews' $(k, i)$-singular overpartition function $\overline{C}_{k, i}(n)$
counts the number of overpartitions of $n$ in which no part is divisible by $k$
and only parts $\equiv \pm i\pmod{k}$ may be overlined. In recent times,
divisibility of $\overline{C}_{3\ell, \ell}(n)$, $\overline{C}_{4\ell,
\ell}(n)$ and $\overline{C}_{6\ell, \ell}(n)$ by $2$ and $3$ are studied for
certain values of $\ell$. In this article, we study divisibility of
$\overline{C}_{3\ell, \ell}(n)$, $\overline{C}_{4\ell, \ell}(n)$ and
$\overline{C}_{6\ell, \ell}(n)$ by primes $p\geq 5$. For all positive integers
$\ell$ and prime divisors $p\geq 5$ of $\ell$, we prove that
$\overline{C}_{3\ell, \ell}(n)$, $\overline{C}_{4\ell, \ell}(n)$ and
$\overline{C}_{6\ell, \ell}(n)$ are almost always divisible by arbitrary powers
of $p$. For $s\in \{3, 4, 6\}$, we next show that the set of those $n$ for
which $\overline{C}_{s\cdot\ell, \ell}(n) \not\equiv 0\pmod{p_i^k}$ is
infinite, where $k$ is a positive integer satisfying $p_i^{k-1}\geq \ell$. We
further improve a result of Gordon and Ono on divisibility of $\ell$-regular
partitions by powers of certain primes. We also improve a result of Ray and
Chakraborty on divisibility of $\ell$-regular overpartitions by powers of
certain primes.
|
Affordance detection refers to identifying the potential action possibilities
of objects in an image, which is an important ability for robot perception and
manipulation. To empower robots with this ability in unseen scenarios, we
consider the challenging one-shot affordance detection problem in this paper,
i.e., given a support image that depicts the action purpose, all objects in a
scene with the common affordance should be detected. To this end, we devise a
One-Shot Affordance Detection (OS-AD) network that firstly estimates the
purpose and then transfers it to help detect the common affordance from all
candidate images. Through collaboration learning, OS-AD can capture the common
characteristics between objects having the same underlying affordance and learn
a good adaptation capability for perceiving unseen affordances. Besides, we
build a Purpose-driven Affordance Dataset (PAD) by collecting and labeling 4k
images from 31 affordance and 72 object categories. Experimental results
demonstrate the superiority of our model over previous representative ones in
terms of both objective metrics and visual quality. The benchmark suite is at
ProjectPage.
|
Federated learning has emerged as a popular technique for distributing
machine learning (ML) model training across the wireless edge. In this paper,
we propose two-timescale hybrid federated learning (TT-HF), a
semi-decentralized learning architecture that combines the conventional
device-to-server communication paradigm for federated learning with
device-to-device (D2D) communications for model training. In TT-HF, during each
global aggregation interval, devices (i) perform multiple stochastic gradient
descent iterations on their individual datasets, and (ii) aperiodically engage
in a consensus procedure over their model parameters through cooperative,
distributed D2D communications within local clusters. With a new general
definition of gradient diversity, we formally study the convergence behavior of
TT-HF, resulting in new convergence bounds for distributed ML. We leverage our
convergence bounds to develop an adaptive control algorithm that tunes the step
size, D2D communication rounds, and global aggregation period of TT-HF over
time to target a sublinear convergence rate of O(1/t) while minimizing network
resource utilization. Our subsequent experiments demonstrate that TT-HF
significantly outperforms the current art in federated learning in terms of
model accuracy and/or network energy consumption in different scenarios where
local device datasets exhibit statistical heterogeneity. Finally, our numerical
evaluations demonstrate robustness against outages caused by fading channels,
as well as favorable performance with non-convex loss functions.
|
The ALMA Spectroscopic Survey in the Hubble Ultra Deep Field (ASPECS) Band 6
scan (212-272 GHz) covers potential [CII] emission in galaxies at $6\leq z
\leq8$ throughout a 2.9 arcmin$^2$ area. By selecting on known Lyman-$\alpha$
emitters (LAEs) and photometric dropout galaxies in the field, we perform
targeted searches down to a 5$\sigma$ [CII] luminosity depth
$L_{\mathrm{[CII]}}\sim2.0\times10^8$ L$_{\odot}$, corresponding roughly to
star formation rates (SFRs) of $10$-$20$ M$_{\odot}$ yr$^{-1}$ when applying a
locally calibrated conversion for star-forming galaxies, yielding zero
detections. While the majority of galaxies in this sample are characterized by
lower SFRs, the resulting upper limits on [CII] luminosity in these sources are
consistent with the current literature sample of targeted ALMA observations of
$z=6$-$7$ LAEs and Lyman-break galaxies (LBGs), as well as the locally
calibrated relations between $L_{\mathrm{[CII]}}$ and SFR -- with the exception
of a single [CII]-deficient, UV luminous LBG. We also perform a blind search
for [CII]-bright galaxies that may have been missed by optical selections,
resulting in an upper limit on the cumulative number density of [CII] sources
with $L_{\mathrm{[CII]}}>2.0\times10^8$ L$_{\odot}$ ($5\sigma $) to be less
than $1.8\times10^{-4}$ Mpc$^{-3}$ (90% confidence level). At this luminosity
depth and volume coverage, we present an observed evolution of the [CII]
luminosity function from $z=6$-$8$ to $z\sim0$ by comparing the ASPECS
measurement to literature results at lower redshift.
|
We construct two-dimensional non-commutative topological quantum field
theories (TQFTs), one for each Hecke algebra corresponding to a finite Coxeter
system. These TQFTs associate an invariant to each ciliated surface, which is a
Laurent polynomial for punctured surfaces. There is a graphical way to compute
the invariant using minimal colored graphs. We give explicit formulas in terms
of the Schur elements of the Hecke algebra and prove positivity properties for
the invariants when the Coxeter group is of classical type, or one of the
exceptional types $H_3$, $E_6$ and $E_7$.
|
Let $\mathbf{P} \subset [H_0,H]$ be a set of primes, where $\log H_0 \geq
(\log H)^{2/3 + \epsilon}$. Let $\mathscr{L} = \sum_{p \in \mathbf{P}} 1/p$.
Let $N$ be such that $\log H \leq (\log N)^{1/2-\epsilon}$. We show there
exists a subset $\mathscr{X} \subset (N, 2N]$ of density close to $1$ such that
all the eigenvalues of the linear operator $$(A_{|\mathscr{X}} f)(n) =
\sum_{\substack{p \in \mathbf{P} : p | n \\ n, n \pm p \in \mathscr{X}}} f(n
\pm p) \; - \sum_{\substack{p \in\mathbf{P} \\ n, n \pm p \in \mathscr{X}}}
\frac{f(n \pm p)}{p}$$ are $O(\sqrt{\mathscr{L}})$. This bound is optimal up to
a constant factor. In other words, we prove that a graph describing
divisibility by primes is a strong local expander almost everywhere, and indeed
within a constant factor of being "locally Ramanujan" (a.e.).
Specializing to $f(n) = \lambda(n)$ with $\lambda(n)$ the Liouville function,
and using an estimate by Matom\"aki, Radziwi{\l}{\l} and Tao on the average of
$\lambda(n)$ in short intervals, we derive that \[\frac{1}{\log x} \sum_{n\leq
x} \frac{\lambda(n) \lambda(n+1)}{n} = O\Big(\frac{1}{\sqrt{\log \log
x}}\Big),\]
improving on a result of Tao's. We also prove that $\sum_{N<n\leq 2 N}
\lambda(n) \lambda(n+1)=o(N)$ at almost all scales with a similar error term,
improving on a result by Tao and Ter\"av\"ainen. (Tao and Tao-Ter\"av\"ainen
followed a different approach, based on entropy, not expansion; significantly,
we can take a much larger value of $H$, and thus consider many more primes.)
We can also prove sharper results with ease. For instance: let $S_{N,k}$ be the
set of all $N<n\leq 2N$ such that $\Omega(n) = k$. Then, for any fixed value of
$k$ with $k = \log \log N + O(\sqrt{\log \log N})$ (that is, any "popular"
value of $k$) the average of $\lambda(n+1)$ over $S_{N,k}$ is $o(1)$ at almost
all scales.
|
Coronavirus disease (COVID-19) pandemic has changed various aspects of
people's lives and behaviors. At this stage, there are no other ways to control
the natural progression of the disease than adopting mitigation strategies such
as wearing masks, watching distance, and washing hands. Moreover, at this time
of social distancing, social media plays a key role in connecting people and
providing a platform for expressing their feelings. In this study, we tap into
social media to surveil the uptake of mitigation and detection strategies, and
capture issues and concerns about the pandemic. In particular, we explore the
research question, "how much can be learned regarding the public uptake of
mitigation strategies and concerns about COVID-19 pandemic by using natural
language processing on Reddit posts?" After extracting COVID-related posts from
the four largest subreddit communities of North Carolina over six months, we
performed NLP-based preprocessing to clean the noisy data. We employed a custom
Named-entity Recognition (NER) system and a Latent Dirichlet Allocation (LDA)
method for topic modeling on a Reddit corpus. We observed that 'mask', 'flu',
and 'testing' are the most prevalent named-entities for "Personal Protective
Equipment", "symptoms", and "testing" categories, respectively. We also
observed that the most discussed topics are related to testing, masks, and
employment. The mitigation measures are the most prevalent theme of discussion
across all subreddits.
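
The topic-modeling step can be sketched with scikit-learn as below; the posts
are toy placeholders for the preprocessed Reddit corpus, and the number of
topics is an illustrative choice.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = ["masks required in stores again",
         "where can I get a covid test near raleigh",
         "lost my job during lockdown, any advice",
         "testing sites are overwhelmed this week",
         "employer requires masks for all staff"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

# Print the top words of each inferred topic.
words = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print(f"topic {k}:", [words[i] for i in topic.argsort()[-3:]])
```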
|
Motivated by the mathematical beauty and the recent experimental realizations
of fractal systems, we study the spin-$1/2$ antiferromagnetic Heisenberg model
on a Sierpi\'nski gasket. The fractal porous feature generates new kinds of
frustration to exhibit exotic quantum states. Using advanced tensor network
techniques, we identify a quantum gapless-spin-liquid ground state in
fractional spatial dimension. This fractal spin system also demonstrates
nontrivial non-local properties. While the extremely short-range correlation
causes a highly degenerate spin form factor, the entanglement in this fractal
system suggests scaling behaviors significantly different from those in integer
dimensions. We also study the dynamic structure factor and clearly identify the
gapless excitation with a stable corner excitation emerging from the
ground-state entanglement. Our results unambiguously point out multiple
essential properties of this fractal spin system, and open a new route to
explore spin liquid and frustrated magnetism.
|
Injection locking of diode lasers is commonly used to amplify low power laser
light, but is extremely sensitive to perturbations in the laser current and
temperature. To counter such perturbations, active stabilization is often
applied to the current of the injection locked diode. We observe that the diode
laser's polarization extinction ratio (PER) greatly increases when injection
locked, and therefore the PER provides a measure of injection lock quality. We
report robust active stabilization of a diode laser injection lock based on the
PER, demonstrating the technique at 399 nm wavelength where injection locking
is typically less stable than at longer wavelengths. The PER provides a
feedback error signal that is compatible with standard PID servo controllers,
requires no additional optical components beyond the optical isolator typically
used in injection locking, and enables a large feedback bandwidth.
|
Layered ternary transition-metal chalcogenides have attracted attention as a
vein of exploration for superconductors. In this study, TiGeTe$_{6}$ single
crystals
were synthesized and characterized by structural and valence state analyses and
electrical transport measurements. The transport properties were measured under
various pressures up to 71 GPa. The activation energy decreases as the applied
pressure increases, and a signature of pressure-induced metallization was
observed at around 8.4 GPa. At 13 GPa, pressure-induced
superconductivity was discovered in this compound for the first time, with
successive drops at 3 K and 6 K in the resistance, indicating the presence of
multiple superconducting transitions. The superconducting transition
temperature kept increasing as we further applied the pressure to the
TiGeTe$_{6}$ single crystal in the performed pressure range, reaching as high
as 8.1 K under 71 GPa.
|
In this paper we study the existence and uniqueness of the random periodic
solution for a stochastic differential equation with a one-sided Lipschitz
condition (also known as monotonicity condition) and the convergence of its
numerical approximation via the backward Euler-Maruyama method. The existence
of the random periodic solution is shown as the limits of the pull-back flows
of the SDE and discretized SDE respectively. We establish a convergence rate of
the strong error for the backward Euler-Maruyama method and obtain the weak
convergence result for the approximation of the periodic measure.
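
A minimal sketch of the backward (drift-implicit) Euler-Maruyama scheme for a
scalar SDE with one-sided Lipschitz drift is given below; the periodic drift,
noise level, and root-bracketing interval are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def f(t, x):
    """One-sided Lipschitz drift with periodic forcing (illustrative)."""
    return -x**3 + x + np.sin(2 * np.pi * t)

def backward_em(x0, t0, t1, n, sigma, rng):
    h = (t1 - t0) / n
    x, t = x0, t0
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(h))
        t += h
        # Implicit step: solve x_new = x + f(t, x_new)*h + sigma*dW.
        g = lambda z: z - x - f(t, z) * h - sigma * dW
        x = brentq(g, -100.0, 100.0)   # g is increasing, so the root is unique
    return x

rng = np.random.default_rng(0)
print(backward_em(x0=0.5, t0=0.0, t1=10.0, n=1000, sigma=0.2, rng=rng))
```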
|
This report is dedicated to a short motivation and description of our
contribution to the AAPM DL-Sparse-View CT Challenge (team name:
"robust-and-stable"). The task is to recover breast model phantom images from
limited view fanbeam measurements using data-driven reconstruction techniques.
The challenge is distinctive in the sense that participants are provided with a
collection of ground truth images and their noiseless, subsampled sinograms (as
well as the associated limited view filtered backprojection images), but not
with the actual forward model. Therefore, our approach first estimates the
fanbeam geometry in a data-driven geometric calibration step. In a subsequent
two-step procedure, we design an iterative end-to-end network that enables the
computation of near-exact solutions.
|
Accurately forecasting air quality is critical to protecting the general
public from lung and heart diseases. This is a challenging task due to the
complicated
interactions among distinct pollution sources and various other influencing
factors. Existing air quality forecasting methods cannot effectively model the
diffusion processes of air pollutants between cities and monitoring stations,
which may suddenly deteriorate the air quality of a region. In this paper, we
propose HighAir, i.e., a hierarchical graph neural network-based air quality
forecasting method, which adopts an encoder-decoder architecture and considers
complex air quality influencing factors, e.g., weather and land usage.
Specifically, we construct a city-level graph and station-level graphs from a
hierarchical perspective, which can consider city-level and station-level
patterns, respectively. We design two strategies, i.e., upper delivery and
lower updating, to implement the inter-level interactions, and introduce
a message passing mechanism to implement the intra-level interactions. We
dynamically adjust edge weights based on wind direction to model the
correlations between dynamic factors and air quality. We compare HighAir with
the state-of-the-art air quality forecasting methods on the dataset of Yangtze
River Delta city group, which covers 10 major cities within 61,500 km$^2$. The
experimental results show that HighAir significantly outperforms other methods.
|
Computationally efficient and accurate quantum mechanical approximations to
solve the many-electron Schr\"odinger equation are at the heart of
computational materials science. In that respect the coupled cluster hierarchy
of methods plays a central role in molecular quantum chemistry because of its
systematic improvability and computational efficiency. In this hierarchy,
coupled cluster singles and doubles (CCSD) is one of the most important steps
in moving towards chemical accuracy and, in recent years, its scope has
successfully been expanded to the study of insulating surfaces and solids.
Here, we show that CCSD theory can also be applied to real metals. In so doing,
we overcome the limitation of needing extremely large supercells to capture
long range electronic correlation effects. An effective Hamiltonian can be
found using the transition structure factor--a map of electronic excitations
from the Hartree--Fock wavefunction--which has fewer finite size effects than
conventional periodic boundary conditions. This not only paves the way for
applying coupled cluster methods to real metals but also reduces the
computational cost by two orders of magnitude compared to previous methods. Our
applications to phases of lithium and silicon show a resounding success in
reaching the thermodynamic limit, taking the first step towards a truly
universal quantum chemical treatment of solids.
|
We perform an extensive analysis of the statistics of axion masses and
interactions in compactifications of type IIB string theory, and we show that
black hole superradiance excludes some regions of Calabi-Yau moduli space.
Regardless of the cosmological model, a theory with an axion whose mass falls
in a superradiant band can be probed by the measured properties of
astrophysical black holes, unless the axion self-interaction is large enough to
disrupt formation of a condensate. We study a large ensemble of
compactifications on Calabi-Yau hypersurfaces, with $1 \leq h^{1,1} \leq 491$
closed string axions, and determine whether the superradiance conditions on the
masses and self-interactions are fulfilled. The axion mass spectrum is largely
determined by the K\"ahler parameters, for mild assumptions about the
contributing instantons, and takes a nearly-universal form when $h^{1,1} \gg
1$. When the K\"ahler moduli are taken at the tip of the stretched K\"ahler
cone, the fraction of geometries excluded initially grows with $h^{1,1}$, to a
maximum of $\approx 0.5$ at $h^{1,1} \approx 160$, and then falls for larger
$h^{1,1}$. Further inside the K\"ahler cone, the superradiance constraints are
far weaker, but for $h^{1,1} \gg 100$ the decay constants are so small that
these geometries may be in tension with astrophysical bounds, depending on the
realization of the Standard Model.
|
Writers such as journalists often use automatic tools to find relevant
content to include in their narratives. In this paper, we focus on supporting
writers in the news domain to develop event-centric narratives. Given an
incomplete narrative that specifies a main event and a context, we aim to
retrieve news articles that discuss relevant events that would enable the
continuation of the narrative. We formally define this task and propose a
retrieval dataset construction procedure that relies on existing news articles
to simulate incomplete narratives and relevant articles. Experiments on two
datasets derived from this procedure show that state-of-the-art lexical and
semantic rankers are not sufficient for this task. We show that combining those
with a ranker that ranks articles by reverse chronological order outperforms
those rankers alone. We also perform an in-depth quantitative and qualitative
analysis of the results that sheds light on the characteristics of this task.
|
This paper proposes a multi-task learning network with phoneme-aware and
channel-wise attentive learning strategies for text-dependent Speaker
Verification (SV). In the proposed structure, the frame-level multi-task
learning along with the segment-level adversarial learning is adopted for
speaker embedding extraction. The phoneme-aware attentive pooling is exploited
on frame-level features in the main network for speaker classifier, with the
corresponding posterior probability for the phoneme distribution in the
auxiliary subnet. Further, the introduction of Squeeze and Excitation
(SE-block) performs dynamic channel-wise feature recalibration, which improves
the representational ability. The proposed method exploits speaker
idiosyncrasies associated with pass-phrases, and is further improved by the
phoneme-aware attentive pooling and SE-block from temporal and channel-wise
aspects, respectively. The experiments conducted on RSR2015 Part 1 database
confirm that the proposed system achieves outstanding results for
text-dependent SV.
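
The channel-wise recalibration performed by the SE-block can be sketched in
PyTorch as below, for 1-D (channel, time) speech features; the reduction ratio
is an illustrative hyperparameter, and this is a generic SE-block, not the
paper's full architecture.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels by global context."""
    def __init__(self, channels, r=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):            # x: (batch, channels, time)
        s = x.mean(dim=2)            # squeeze: average over time
        w = self.fc(s).unsqueeze(2)  # excitation: per-channel weights
        return x * w                 # recalibrate channels

feats = torch.randn(4, 64, 300)      # a batch of 64-channel features
print(SEBlock(64)(feats).shape)      # torch.Size([4, 64, 300])
```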
|
In this article we apply proper splittings of matrices to develop an
iterative process to approximate solutions of matrix equations of the form TX =
W. Moreover, by using the partial order induced by positive semidefinite
matrices, we obtain equivalent conditions to the convergence of this process.
We also include some speed comparison results of the convergence of this
method. In addition, for every matrix T we propose a proper splitting based on
the polar decomposition of T.
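
For orientation, the kind of splitting-driven iteration studied here can be
sketched as follows; the trivial proper splitting U = 2T, V = T is for
illustration only and is not the paper's polar-decomposition-based proposal.

```python
import numpy as np

# Fixed-point iteration induced by a proper splitting T = U - V:
#   X_{k+1} = U^+ V X_k + U^+ W   (U^+ the Moore-Penrose inverse).
# U = 2T, V = T is proper (U and T share range and null space) and
# yields a contraction with rate 1/2 here.
rng = np.random.default_rng(0)
T = rng.normal(size=(6, 4))
W = T @ rng.normal(size=(4, 3))     # make TX = W consistent

U, V = 2 * T, T
Up = np.linalg.pinv(U)

X = np.zeros((4, 3))
for _ in range(60):
    X = Up @ V @ X + Up @ W
print(np.linalg.norm(T @ X - W))    # ~ 0: the iterates solve TX = W
```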
|
Adversarial attack arises due to the vulnerability of deep neural networks to
perceive input samples injected with imperceptible perturbations. Recently,
adversarial attack has been applied to visual object tracking to evaluate the
robustness of deep trackers. Assuming that the model structures of deep
trackers are known, a variety of white-box attack approaches to visual tracking
have demonstrated promising results. However, the model knowledge about deep
trackers is usually unavailable in real applications. In this paper, we propose
a decision-based black-box attack method for visual object tracking. In
contrast to existing black-box adversarial attack methods that deal with static
images for image classification, we propose IoU attack that sequentially
generates perturbations based on the predicted IoU scores from both current and
historical frames. By decreasing the IoU scores, the proposed attack method
degrades the accuracy of temporal coherent bounding boxes (i.e., object
motions) accordingly. In addition, we transfer the learned perturbations to the
next few frames to initialize temporal motion attack. We validate the proposed
IoU attack on state-of-the-art deep trackers (i.e., detection based,
correlation filter based, and long-term trackers). Extensive experiments on the
benchmark datasets indicate the effectiveness of the proposed IoU attack
method. The source code is available at
https://github.com/VISION-SJTU/IoUattack.
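
A hedged sketch of a decision-based IoU attack loop is given below (not the
paper's exact method, and omitting the temporal transfer step): random
perturbations are proposed, and the one that most lowers the predicted box's
IoU with the previous prediction is kept. The tracker here is a hypothetical
black-box callable; a real deployment would call this once per frame, feeding
the kept noise forward to initialize the next frame's attack.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def iou_attack_step(frame, tracker, prev_box, eps=8.0, tries=10, rng=None):
    """Query the black-box tracker and keep the IoU-minimizing noise."""
    rng = rng or np.random.default_rng()
    best_noise, best_score = np.zeros_like(frame), 1.0
    for _ in range(tries):
        noise = rng.uniform(-eps, eps, size=frame.shape)
        box = tracker(np.clip(frame + noise, 0, 255))   # black-box query
        score = iou(box, prev_box)
        if score < best_score:
            best_noise, best_score = noise, score
    return best_noise, best_score
```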
|
The amplitude (Higgs) mode near the two-dimensional superfluid-Mott glass
quantum phase transition is studied. We map the Bose-Hubbard Hamiltonian of
disordered interacting bosons onto an equivalent classical XY model in (2+1)
dimensions and compute the scalar susceptibility of the order parameter
amplitude via Monte Carlo simulation. Analytic continuation of the scalar
susceptibilities from imaginary to real frequency to obtain the spectral
densities is performed by a modified maximum entropy technique. Our results
show that the introduction of disorder into the system leads to unconventional
dynamical behavior of the Higgs mode that violates naive scaling, despite the
underlying thermodynamics of the transition being of conventional power-law
type. The computed spectral densities exhibit a broad, non-critical response
for all energies, and a momentum-independent dispersion for long-wavelengths,
indicating strong evidence for the localization of the Higgs mode for all
dilutions.
|
If $H$ is a Hilbert space, the Stiefel manifold $St(n,H)$ is formed by all
the independent $n$-tuples in $H$. In this article, we contribute to the
topological study of Stiefel manifolds by proving density and
path-connectedness-related results. Regarding the density aspect, we generalize
the fact that $St(n,H)$ is dense in $H^n$ and prove that $St(n,H) \cap S$ is
dense in $S$ whenever $S \subseteq H^n$ is connected by polynomial paths of
finite degree to some $\Theta \in St(n,H) \cap S$. We provide special examples
of such sets $S$ in the context of finite-dimensional continuous frames (we set
$H := L^2(X,\mu;\mathbb{F})$ and we identify $St(n,H)$ with
$\mathcal{F}_{(X,\mu),n}^\mathbb{F}$) which are constructed from the inverse
image of singletons by some familiar linear and pseudo-quadratic functions. In
the second part devoted to path-connectedness, we prove that the intersection
of translates of $St(n,H)$ is path-connected under a condition on the
codimension of the span of the components of the translating $n$-tuples. These
results are also a contribution to the topological theory of Hilbert space
frames which is presently an active area of research.
|
Massive black holes often exist within dwarf galaxies, and both simulations
and observations have shown that a substantial fraction of these may be
off-center with respect to their hosts. We trace the evolution of off-center
massive black holes (MBHs) in dwarf galaxies using cosmological hydrodynamical
simulations, and show that the reason for off-center locations is mainly due to
galaxy-galaxy mergers. We calculate dynamical timescales and show that
off-center MBHs are unlikely to sink to their galaxies' centers within a Hubble
time, due to the shape of the hosts' potential wells and low stellar densities.
These wandering MBHs are unlikely to be detected electromagnetically, nor is
there a measurable dynamical effect on the galaxy's stellar population. We
conclude that off-center MBHs may be common in dwarfs, especially if the mass
of the MBH is small or the stellar mass of the host galaxy is large. However
detecting them is extremely challenging, because their accretion luminosities
are very low and they do not measurably alter the dynamics of their host
galaxies.
|
We discuss the interplay of wave packet decoherence and decoherence induced
by quantum gravity via interactions with spacetime foam for high energy
astrophysical neutrinos. In this context we point out a compelling consequence
of the expectation that quantum gravity should break global symmetries, namely
that quantum-gravity induced decoherence can provide both a powerful tool for
the search for new particles, including totally decoupled backgrounds
interacting only gravitationally, and at the same time a window into the
intricacies of black hole information processing.
|
Semi-Supervised Learning (SSL) has seen success in many application domains,
but this success often hinges on the availability of task-specific unlabeled
data. Knowledge distillation (KD) has enabled effective optimization of compact
neural nets, achieving the best results when the knowledge of an expensive
network is distilled via fresh task-specific unlabeled data. However,
task-specific unlabeled data can be challenging to find, especially for NLP. We
investigate the use of generative models in synthesizing unlabeled data and
present a simple and general framework called "generate, annotate, and learn
(GAL)". A language model (LM) is used to synthesize in-domain unlabeled data.
Then, a classifier is used to annotate such data. Finally, synthetically
generated and annotated data is used to advance SSL, KD, and few-shot learning
on NLP and tabular tasks. To obtain a strong task-specific LM, we either
fine-tune a large LM on inputs from a specific task, or prompt a large LM with
a few input examples and conditionally generate more unlabeled examples. GAL
also yields a new state-of-the-art for 6-layer transformers on the GLUE
leaderboard. Finally, self-training with GAL offers large gains on four tabular
tasks from the UCI repository.
|
This chapter is written for the welfare of society, examining the effects of
changes in air quality and pollution due to the rise in traffic after the
COVID-19 lockdown in metro cities, specifically in Delhi. In this chapter, we
address the question of whether people prefer to commute to their workplaces
in shared taxis, or are reluctant to travel in a shared vehicle for fear of
getting infected. The sensitivity of the
situation will compel the people to move in a single occupied vehicle (SOV).
The rise in the number of vehicles on the roads will result in traffic jams and
different kinds of pollution where people battling with the pandemic will
inevitably get exposed to other health related issues. We use a BPR (Bureau of
Public Roads) model to combat this issue endangering the environment and public
health. We exploit the BPR function to relate average travel time to the
estimated number of commuters travelling by car. We collect mode-share data
from NITI Aayog, the State Resource Centre and other authentic sources, which
give unique figures on the impact of shared mobility in India and on how
various sectors will be affected in its absence. Using the given data and the
BPR, we evaluate increased vehicle volumes on the road if different portions of
transit and carpool users switch to single occupancy vehicles and its effect on
multiple other factors. Based on the study of the densely populated city of
Delhi, we predict that cities with significant transit ridership are at risk
of extreme traffic and pollution unless transit systems can resume safely with
effective protocols.
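
The BPR function at the heart of this analysis is compact enough to state in
code; the coefficients alpha = 0.15 and beta = 4 are the standard defaults,
and the volumes below are illustrative rather than the chapter's Delhi
mode-share data.

```python
def bpr_travel_time(t_free, volume, capacity, alpha=0.15, beta=4.0):
    """BPR delay: t = t_free * (1 + alpha * (volume/capacity)**beta)."""
    return t_free * (1.0 + alpha * (volume / capacity) ** beta)

t0, capacity = 30.0, 6000.0            # minutes; vehicles per hour
for shift in (0.0, 0.25, 0.5):         # share of transit users moving to SOVs
    volume = 5000.0 * (1.0 + shift)    # extra single-occupancy vehicles
    t = bpr_travel_time(t0, volume, capacity)
    print(f"{shift:.0%} shift to SOV: {t:.1f} min average travel time")
```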
|
Motivated by packet routing in computer networks, online queuing systems are
composed of queues receiving packets at different rates. Repeatedly, they send
packets to servers, each of them treating only at most one packet at a time. In
the centralized case, the number of accumulated packets remains bounded (i.e.,
the system is \textit{stable}) as long as the ratio between service rates and
arrival rates is larger than $1$. In the decentralized case, individual
no-regret strategies ensure stability when this ratio is larger than $2$. Yet,
myopically minimizing regret disregards the long term effects due to the
carryover of packets to further rounds. On the other hand, minimizing long term
costs leads to stable Nash equilibria as soon as the ratio exceeds
$\frac{e}{e-1}$. Stability with decentralized learning strategies with a ratio
below $2$ was a major remaining question. We first argue that for ratios up to
$2$, cooperation is required for stability of learning strategies, as selfish
minimization of policy regret, a \textit{patient} notion of regret, might
indeed still be unstable in this case. We therefore consider cooperative queues
and propose the first learning decentralized algorithm guaranteeing stability
of the system as long as the ratio of rates is larger than $1$, thus reaching
performances comparable to centralized strategies.
|
In this paper we study the transformation of surface envelope solitons
travelling over a bottom step in water of a finite depth. Using the
transformation coefficients earlier derived in the linear approximation, we
find the parameters of transmitted pulses and subsequent evolution of the
pulses in the course of propagation. Relying on the weakly nonlinear theory,
the analytic formulae are derived which describe the maximum attainable wave
amplitude in the neighbourhood of the step and in the far zone. Solitary waves
may be greatly amplified (within the weakly nonlinear theory formally even
without a limit) when propagating from relatively shallow water to the deeper
domain due to the constructive interference between the newly emerging envelope
solitons and the residual quasi-linear waves. The theoretical results are in
good agreement with the data of direct numerical modelling of soliton
transformation. In particular, more than double wave amplification is
demonstrated in the performed simulations.
|
The dramatic growth of big datasets presents a new challenge to data storage
and analysis. Data reduction, or subsampling, that extracts useful information
from datasets is a crucial step in big data analysis. We propose an orthogonal
subsampling (OSS) approach for big data with a focus on linear regression
models. The approach is inspired by the fact that an orthogonal array of two
levels provides the best experimental design for linear regression models in
the sense that it minimizes the average variance of the estimated parameters
and provides the best predictions. The merits of OSS are three-fold: (i) it is
easy to implement and fast; (ii) it is suitable for distributed parallel
computing and ensures the subsamples selected in different batches have no
common data points; and (iii) it outperforms existing methods in minimizing the
mean squared errors of the estimated parameters and maximizing the efficiencies
of the selected subsamples. Theoretical results and extensive numerical results
show that the OSS approach is superior to existing subsampling approaches. It
is also more robust to the presence of interactions among covariates and, when
they do exist, OSS provides more precise estimates of the interaction effects
than existing methods. The advantages of OSS are also illustrated through
analysis of real data.
|