This paper investigates the problem of correcting multiple criss-cross
insertions and deletions in arrays. More precisely, we study the unique
recovery of $n \times n$ arrays affected by $t$-criss-cross deletions defined
as any combination of $t_r$ row and $t_c$ column deletions such that $t_r + t_c
= t$ for a given $t$. We show an equivalence between correcting $t$-criss-cross
deletions and $t$-criss-cross insertions and show that a code correcting
$t$-criss-cross insertions/deletions has redundancy at least $tn + t \log n -
\log(t!)$. Then, we present an existential construction of a $t$-criss-cross
insertion/deletion correcting code with redundancy bounded from above by $tn +
\mathcal{O}(t^2 \log^2 n)$. The main ingredients of the presented code
construction are systematic binary $t$-deletion correcting codes and Gabidulin
codes. The first ingredient helps locate the indices of the inserted/deleted
rows and columns, thus transforming the insertion/deletion-correction problem
into a row/column erasure-correction problem which is then solved using the
second ingredient.
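To make the bounds concrete, here is a small worked example; the choice $t=2$, $n=1024$ and the use of base-2 logarithms are ours, purely for illustration:

```latex
\[
  \underbrace{tn + t\log_2 n - \log_2(t!)}_{\text{lower bound}}
  \;=\; 2\cdot 1024 + 2\cdot 10 - 1 \;=\; 2067 \text{ bits}
  \qquad (t=2,\; n=1024),
\]
\[
  \text{versus the construction's redundancy of at most}\quad
  tn + \mathcal{O}(t^2\log^2 n) \;=\; 2048 + \mathcal{O}(400) \text{ bits},
\]
so both bounds share the dominant $tn$ term.
```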
|
The concept of mean inactivity time plays a crucial role in reliability, risk
theory and life testing. In this regard, we introduce a weighted mean
inactivity time function by considering a non-negative weight function. Based
on this function, we provide expressions for the variance of a transformed random
variable and for the weighted generalized cumulative entropy. The latter concept is
an important measure of uncertainty which is shift-dependent and is of interest
in certain applied contexts, such as reliability or mathematical neurobiology.
Moreover, based on the comparison of mean inactivity times of a certain
function of two lifetime random variables, we introduce and study a new
stochastic order in terms of the weighted mean inactivity time function.
Several characterizations and preservation properties of the new order under
shock models, random maxima and renewal theory are discussed.
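For orientation, the classical (unweighted) mean inactivity time of a nonnegative random variable $X$ with distribution function $F$ is recalled below; the weighted form shown after it is one natural way to incorporate a nonnegative weight $w$ and may differ in detail from the definition adopted in the paper:

```latex
\[
  m(t) \;=\; \mathbb{E}\!\left[\,t - X \mid X \le t\,\right]
        \;=\; \frac{\int_0^t F(x)\,\mathrm{d}x}{F(t)}, \qquad F(t) > 0,
\]
\[
  m_w(t) \;=\; \frac{\int_0^t w(x)\,F(x)\,\mathrm{d}x}{F(t)}
  \qquad \text{(one possible weighted form with nonnegative weight } w\text{)}.
\]
```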
|
Prepositional supersense annotation is time-consuming and requires expert
training. Here, we present two sensible methods for obtaining prepositional
supersense annotations by eliciting surface substitution and similarity
judgments. Four pilot studies suggest that both methods have potential for
producing prepositional supersense annotations that are comparable in quality
to expert annotations.
|
Deep learning methods have reached state-of-the-art performance in cardiac
image segmentation. Currently, the main bottleneck to their effective
translation into clinical practice is ensuring consistently high model performance
and reliable segmentation results. In this work, we present a novel learning framework
to monitor the performance of heart segmentation models in the absence of
ground truth. Formulated as an anomaly detection problem, the monitoring
framework allows deriving surrogate quality measures for a segmentation and
allows flagging suspicious results. We propose two different types of quality
measures, a global score and a pixel-wise map. We demonstrate their use by
reproducing the final rankings of a cardiac segmentation challenge in the
absence of ground truth. Results show that our framework is accurate, fast, and
scalable, confirming it is a viable option for quality control monitoring in
clinical practice and large population studies.
|
We introduce a non-standard model for percolation on the integer lattice
$\mathbb Z^2$. Randomly assign to each vertex $a \in \mathbb Z^2$ a potential,
denoted $\phi_a$, chosen independently and uniformly from the interval $[0,
1]$. For fixed $\epsilon \in [0,1]$, draw a directed edge from vertex $a$ to a
nearest-neighbor vertex $b$ if $\phi_b < \phi_a + \epsilon$, yielding a
directed subgraph of the infinite directed graph $\overrightarrow{G}$ whose
vertex set is $\mathbb Z^2$, with nearest-neighbor edge set. We define notions
of weak and strong percolation for our model, and observe that when $\epsilon =
0$ the model fails to percolate weakly, while for $\epsilon = 1$ it percolates
strongly. We show that there is a positive $\epsilon_0$ so that for $0 \le
\epsilon \le \epsilon_0$, the model fails to percolate weakly, and that when
$\epsilon > p_\text{site}$, the critical probability for standard site
percolation in $\mathbb Z^2$, the model percolates strongly. We study the
number of infinite strongly connected clusters occurring in a typical
configuration. We show that for these models of percolation on directed graphs,
there are some subtle issues that do not arise for undirected percolation.
Although our model does not have the finite energy property, we are able to
show that, as in the standard model, the number of infinite strongly connected
clusters is almost surely 0, 1 or $\infty$.
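A minimal simulation sketch of the model on a finite box; the paper works on the infinite lattice $\mathbb Z^2$, and the box size, the value of $\epsilon$, and the use of networkx are our choices for illustration only:

```python
import numpy as np
import networkx as nx

def build_digraph(n=60, eps=0.6, seed=0):
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 1.0, size=(n, n))        # i.i.d. uniform potentials on vertices
    G = nx.DiGraph()
    G.add_nodes_from((x, y) for x in range(n) for y in range(n))
    for x in range(n):
        for y in range(n):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                bx, by = x + dx, y + dy
                if 0 <= bx < n and 0 <= by < n and phi[bx, by] < phi[x, y] + eps:
                    G.add_edge((x, y), (bx, by))     # directed edge a -> b
    return G

G = build_digraph()
largest = max(nx.strongly_connected_components(G), key=len)
print(f"largest strongly connected cluster: {len(largest)} of {G.number_of_nodes()} vertices")
```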
|
This paper describes a method for using Transformer-based Language Models
(TLMs) to understand public opinion from social media posts. In this approach,
we train a set of GPT models on several COVID-19 tweet corpora that reflect
populations of users with distinctive views. We then use prompt-based queries
to probe these models to reveal insights into the biases and opinions of the
users. We demonstrate how this approach can be used to produce results which
resemble polling the public on diverse social, political and public health
issues. The results on the COVID-19 tweet data show that transformer language
models are promising tools that can help us understand public opinions on
social media at scale.
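A minimal sketch of the prompt-based probing step; "gpt2" is a stand-in checkpoint, since the GPT models fine-tuned on the COVID-19 tweet corpora are not assumed to be publicly available, and the prompt is our own example:

```python
from transformers import pipeline

# Substitute a GPT model fine-tuned on one of the tweet corpora for "gpt2".
generator = pipeline("text-generation", model="gpt2")

prompt = "Wearing a mask in public is"
samples = generator(prompt, max_new_tokens=20, num_return_sequences=25,
                    do_sample=True, temperature=1.0)

# Tallying stance-bearing continuations gives a crude "poll" of the modeled population.
for s in samples[:3]:
    print(s["generated_text"])
```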
|
In 2017 April, the Event Horizon Telescope (EHT) observed the near-horizon
region around the supermassive black hole at the core of the M87 galaxy. These
1.3 mm wavelength observations revealed a compact asymmetric ring-like source
morphology. This structure originates from synchrotron emission produced by
relativistic plasma located in the immediate vicinity of the black hole. Here
we present the corresponding linear-polarimetric EHT images of the center of
M87. We find that only a part of the ring is significantly polarized. The
resolved fractional linear polarization has a maximum located in the southwest
part of the ring, where it rises to the level of about 15%. The polarization
position angles are arranged in a nearly azimuthal pattern. We perform
quantitative measurements of relevant polarimetric properties of the compact
emission and find evidence for the temporal evolution of the polarized source
structure over one week of EHT observations. The details of the polarimetric
data reduction and calibration methodology are provided. We carry out the data
analysis using multiple independent imaging and modeling techniques, each of
which is validated against a suite of synthetic data sets. The gross
polarimetric structure and its apparent evolution with time are insensitive to
the method used to reconstruct the image. These polarimetric images carry
information about the structure of the magnetic fields responsible for the
synchrotron emission. Their physical interpretation is discussed in an
accompanying publication.
|
The dense plasma focus is a plasma discharge powered by a capacitor bank.
Standard diagnostics include measurement of the time derivative of the current
through and the voltage across its connections with the capacitor bank.
Interpretation of this diagnostic data often involves some assumptions
regarding the representation of the dense plasma focus as a time varying
inductance. One of the characteristic features of the current derivative
waveform is a relatively sharp dip and an associated sharp voltage spike. This
has often been interpreted as a result of a rapid rise in the time varying
inductance of the plasma. Sometimes, an anomalous plasma impedance is invoked.
This Letter discusses instances where such interpretation creates conceptual
difficulties. A first principles approach to the representation of the dense
plasma focus as a circuit element reveals some fundamental problems with the
traditional representation of plasma focus as a time varying inductance. The
anomalous impedance is shown to be necessary to account for the difference between
the motional impedance implied by a time-varying inductance in the circuit
element representation and that obtained from a first principles description based on
Poynting's Theorem. Dynamo effects that convert post-stagnation local motion of plasma
into 3-dimensional magnetic fields are shown to contribute to the effective
inductance of the plasma focus and to resolve the observed conceptual difficulties.
|
We discuss the solvability of a fairly general class of systems of perturbed
Hammerstein integral equations with functional terms that depend on several
parameters. The nonlinearities and the functionals are allowed to depend on the
components of the system and their derivatives. The results are applicable to
systems of nonlocal second order ordinary differential equations subject to
functional boundary conditions; this is illustrated in an example. Our approach
is based on the classical fixed point index.
|
We deal with the as yet unresolved exponential stability problem for a
stretched Euler-Bernoulli beam on a star-shaped geometric graph with three
identical edges. The edges are hinged with respect to the boundary vertices.
The inner vertex is capable of both translation and rotation, the latter of
which is subject to a combination of elastic and frictional effects. We present
detailed results on the asymptotic location and structure of the spectrum of
the linear operator associated with the spectral problem in Hilbert space.
Within this framework it is shown that the eigenvectors have the property of
forming an unconditional or Riesz basis, which makes it possible to directly
deduce the exponential stability of the corresponding $C_0$-semigroup. As an
aside it is shown that the particular choice of connectivity conditions ensures
the exponential stability even when the elasticity acting on the slopes of the
edges is absent.
|
In classical set theory, there are many equivalent ways to introduce
ordinals. In a constructive setting, however, the different notions split
apart, with different advantages and disadvantages for each. We consider three
different notions of ordinals in homotopy type theory, and show how they relate
to each other: A notation system based on Cantor normal forms, a refined notion
of Brouwer trees (inductively generated by zero, successor and countable
limits), and wellfounded extensional orders. For Cantor normal forms, most
properties are decidable, whereas for wellfounded extensional transitive
orders, most are undecidable. Formulations for Brouwer trees are usually
partially decidable. We demonstrate that all three notions have properties
expected of ordinals: their order relations, although defined differently in
each case, are all extensional and wellfounded, and the usual arithmetic
operations can be defined in each case. We connect these notions by
constructing structure preserving embeddings of Cantor normal forms into
Brouwer trees, and of these in turn into wellfounded extensional orders. We
have formalised most of our results in cubical Agda.
|
Given a closed connected spin manifold M with non-negative and somewhere
positive scalar curvature, we show that the Dirac operator twisted with any
flat Hilbert module bundle is invertible.
|
In a recent Letter [Phys. Rev. Lett. 125, 180604 (2020)], we introduced a
closed-form analytic expression for the average bipartite von Neumann
entanglement entropy of many-body eigenstates of random quadratic Hamiltonians.
Namely, of Hamiltonians whose single-particle eigenstates have random
coefficients in the position basis. A paradigmatic Hamiltonian for which the
expression is valid is the quadratic Sachdev-Ye-Kitaev (SYK2) model in its
Dirac fermion formulation. Here we show that the applicability of our result is
much broader. Most prominently, it is also relevant for local Hamiltonians such
as the three-dimensional (3D) Anderson model at weak disorder. Moreover, it
describes the average entanglement entropy in Hamiltonians without
particle-number conservation, such as the SYK2 model in the Majorana fermion
formulation and the 3D Anderson model with additional terms that break
particle-number conservation. We extend our analysis to the average bipartite
second R\'enyi entanglement entropy of eigenstates of the same quadratic
Hamiltonians, which is derived analytically and tested numerically. We
conjecture that our results for the entanglement entropies of many-body
eigenstates apply to quadratic Hamiltonians whose single-particle eigenstates
exhibit quantum chaos, to which we refer as quantum-chaotic quadratic
Hamiltonians.
|
The paper proposes an optimal management strategy for a system composed by a
battery and a photovoltaic power plant. This integrated system is called upon to
deliver the photovoltaic power and to simultaneously provide droop-based
primary frequency regulation to the main grid. The battery state-of-energy is
controlled by power offset signals, which are determined using photovoltaic
energy generation forecasts and predictions of the energy required to operate
frequency regulation. A two level control architecture is developed. A
day-ahead planning algorithm schedules the energy profile which is traded at
the day-ahead market and defines the primary control reserve that the
integrated system is able to provide on the considered day. During daily
operations, a second-level algorithm corrects the dispatched plan using updated
information, in order to guarantee a continuous and reliable service. Both
control algorithms take into account the uncertainties of the photovoltaic
generation and of the frequency dynamics using stochastic optimization.
|
One of the most widely used methods for solving large-scale stochastic
optimization problems is distributed asynchronous stochastic gradient descent
(DASGD), a family of algorithms that result from parallelizing stochastic
gradient descent on distributed computing architectures, possibly
asynchronously. However, a key obstacle in the efficient implementation of DASGD
is the issue of delays: when a computing node contributes a gradient update,
the global model parameter may have already been updated by other nodes several
times over, thereby rendering this gradient information stale. These delays can
quickly add up if the computational throughput of a node is saturated, so the
convergence of DASGD may be compromised in the presence of large delays. Our
first contribution is that, by carefully tuning the algorithm's step-size,
convergence to the critical set is still achieved in mean square, even if the
delays grow unbounded at a polynomial rate. We also establish finer results in
a broad class of structured optimization problems (called variationally
coherent), where we show that DASGD converges to a global optimum with
probability $1$ under the same delay assumptions. Together, these results
contribute to the broad landscape of large-scale non-convex stochastic
optimization by offering state-of-the-art theoretical guarantees and providing
insights for algorithm design.
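A toy sketch of the staleness issue and of the step-size tuning idea on a simple quadratic objective; the delay model and step-size schedule below are illustrative choices, not the exact conditions analyzed in the paper:

```python
import numpy as np

def grad(x, rng):                 # noisy gradient of f(x) = 0.5 * ||x||^2
    return x + 0.1 * rng.normal(size=x.shape)

rng = np.random.default_rng(0)
x = rng.normal(size=5)
history = [x.copy()]              # parameter iterates seen by (slow) workers

for t in range(1, 5001):
    delay = rng.integers(0, int(t ** 0.5) + 1)   # staleness allowed to grow polynomially
    stale_x = history[max(0, t - 1 - delay)]     # parameters the worker actually saw
    step = 1.0 / t ** 0.75                       # vanishing step-size absorbs staleness
    x = x - step * grad(stale_x, rng)            # apply the delayed gradient
    history.append(x.copy())

print("distance to the optimum x* = 0:", np.linalg.norm(x))
```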
|
Anatomical motion and deformation pose challenges to the understanding of the
delivered dose distribution during radiotherapy treatments. Hence, deformable
image registration (DIR) algorithms are increasingly used to map contours and
dose distributions from one image set to another. However, the lack of
validation tools slows their clinical adoption, despite their commercial
availability. This work presents a novel water-equivalent deformable dosimeter
that simultaneously measures the dose distribution and tracks deformation
vector fields (DVF). The dosimeter is made of an array of 19 scintillating
fiber detectors embedded in a cylindrical elastomer matrix. It is imaged by two
pairs of stereoscopic cameras tracking the position and angulation of the
scintillators, while measuring the dose. The resulting system provides a
precision of 0.3 mm on DVF measurements. The dosimeter was irradiated with
5$\times$3, 4$\times$3 and 3$\times$3 cm$^2$ 6 MV photon beams in both fixed
and deformed conditions. The measured DVF was compared to the one computed with
a DIR algorithm (Plastimatch). The deviations between the computed and measured
DVFs were below 1.5 mm. As for dose measurements, the dosimeter acquired the
dose distribution in fixed and deformed conditions within 1\% of the treatment
planning system calculation and complementary dose validation using the
Hyperscint dosimetry system. Using the demonstrated qualities of scintillating
detectors, we developed a real-time, water-equivalent deformable dosimeter.
Given its sensor position tracking precision and dose measurement accuracy,
the developed detector is a promising tool for the validation of DIR
algorithms as well as dose distribution measurements under fixed and deformed
conditions.
|
Sparse Principal Component Analysis (SPCA) is widely used in data processing
and dimension reduction; it uses the lasso to produce modified principal
components with sparse loadings for better interpretability. However, sparse
PCA does not consider an additional grouping structure in which the loadings share
similar coefficients (i.e., feature grouping), besides a special group with all
coefficients being zero (i.e., feature selection). In this paper, we propose a
novel method called Feature Grouping and Sparse Principal Component Analysis
(FGSPCA) which allows the loadings to belong to disjoint homogeneous groups,
with sparsity as a special case. The proposed FGSPCA is a subspace learning
method designed to simultaneously perform grouping pursuit and feature
selection, by imposing a non-convex regularization with naturally adjustable
sparsity and grouping effect. To solve the resulting non-convex optimization
problem, we propose an alternating algorithm that incorporates the
difference-of-convex programming, augmented Lagrange and coordinate descent
methods. Additionally, the experimental results on real data sets show that the
proposed FGSPCA benefits from the grouping effect compared with methods without
grouping effect.
|
The resource constraints and accuracy requirements of Internet of Things
(IoT) memory chips call for three-dimensional (3D) monolithic integrated circuits,
whose increasing number of stacked layers (currently more than 176) also causes
excessive energy consumption and increasing wire length. In this paper, a novel
3D wireless network on chips (3DWiNoCs) model that transmits the signal directly to
the destination in an arbitrary layer is proposed and characterized. However, due
to the reflection and refraction characteristics in each layer, the complex
and diverse wireless paths in the 3DWiNoC add great difficulty to the channel
characterization. To facilitate modeling in the massive-layer NoC situation,
both a boundary-less and a boundary-constrained 3DWiNoC model are proposed, for
which the channel gain can be obtained by a computationally efficient approximate
algorithm. Together with the approximation algorithm, these 3DWiNoC models can well
characterize the 3DWiNoC channel in terms of complete reflection and
refraction characteristics, while avoiding massive wired connections, the high power
consumption of cross-layer communication, and the high complexity of 3DWiNoC channel
characterization. Numerical results show that: 1) The difference rate between
the two models is lower than 0.001% (for a signal transmitted through 20 layers); 2) the
channel gain decreases sharply as the number of refractions increases; and 3) the
approximate algorithm can achieve an acceptable accuracy (error rate lower than
0.1%).
|
We present Atacama Large Millimeter/submillimeter Array (ALMA) observations
of $\mathrm{^{13}CO(J=1-0)}$ line and 104 GHz continuum emission from NGC 604,
a giant HII region (GHR) in the nearby spiral galaxy M33. Our high spatial
resolution images (3.2" $\times$ 2.4", corresponding to $13 \times 10$ pc
physical scale) allow us to detect fifteen molecular clouds. We find spatial
offsets between the $^{13}CO$ and 104 GHz continuum emission and also detect
continuum emission near the centre of the GHR. The identified molecular clouds
have sizes ranging from 5-21 pc, linewidths of 0.3-3.0 $\mathrm{km\,s^{-1}}$ and
luminosity-derived masses of (0.4-80.5) $\times 10^3$ M$_{\odot}$. These
molecular clouds are in near virial equilibrium, with a Spearman correlation
coefficient of 0.98. The linewidth-size relationship for these clouds is offset
from the corresponding relations for the Milky Way and for NGC 300, although
this may be an artefact of the dendrogram process.
|
Although deep neural networks (DNNs) have achieved enormous success in
many domains like natural language processing (NLP), they have also been proven
to be vulnerable to maliciously generated adversarial examples. Such inherent
vulnerability has threatened various real-world deployed DNNs-based
applications. To strengthen model robustness, several countermeasures have
been proposed in the English NLP domain and obtained satisfactory performance.
However, due to the unique language properties of Chinese, it is not trivial to
extend existing defenses to the Chinese domain. Therefore, we propose AdvGraph,
a novel defense which enhances the robustness of Chinese-based NLP models by
incorporating adversarial knowledge into the semantic representation of the
input. Extensive experiments on two real-world tasks show that AdvGraph
exhibits better performance compared with previous work: (i) effective - it
significantly strengthens the model robustness even under the adaptive attack
setting without negative impact on model performance over legitimate input;
(ii) generic - its key component, i.e., the representation of connotative
adversarial knowledge is task-agnostic, which can be reused in any
Chinese-based NLP models without retraining; and (iii) efficient - it is a
light-weight defense with sub-linear computational complexity, which can
guarantee the efficiency required in practical scenarios.
|
CoSi single crystal is a known realization of a chiral topological semimetal
with simultaneously broken mirror and inversion symmetries. In addition to the
symmetry-induced spin-orbit coupling, surface ferromagnetism is known in
nominally diamagnetic CoSi structures, which appears due to the distorted bonds
and ordered vacancies near the surface. We experimentally investigate electron
transport through a thin CoSi flake at high current density. Surprisingly, we
demonstrate $dV/dI(I)$ curves which are qualitatively similar to ones for
ferromagnetic multilayers with characteristic $dV/dI$ magnon peaks and
unconventional magnetic field evolution of the peaks' positions. We understand
these observations as a result of current-induced spin polarization due to the
significant spin-orbit coupling in CoSi. Scattering of non-equilibrium
spin-polarized carriers within the surface ferromagnetic layer is responsible
for the precessing spin-wave excitations, so the observed magnon modes are the
joint effect of surface ferromagnetism and spin-orbit coupling in a CoSi chiral
topological semimetal. Thus, thin CoSi flakes behave as magnetic conductors
with broken inversion symmetry, which is important for different spintronic
phenomena.
|
The communication technology revolution in this era has increased the use of
smartphones in the world of transportation. In this paper, we propose to
leverage IoT device data, capturing passengers' smartphones' Wi-Fi data in
conjunction with weather conditions to predict the expected number of
passengers waiting at a bus stop at a specific time using deep learning models.
Our study collected data from the transit bus system at James Madison
University (JMU) in Virginia, USA. This paper studies the correlation between
the number of passengers waiting at bus stops and weather conditions.
Empirically, an experiment with several bus stops at JMU was used to
confirm a high level of precision. We compared our Deep Neural Network (DNN) model
against two baseline models: Linear Regression (LR) and a Wide Neural Network
(WNN). The DNN achieved 35% and 14% better Mean Squared Error (MSE) scores for
its predictions than LR and WNN, respectively.
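A hedged sketch of the model comparison on synthetic stand-ins for the Wi-Fi and weather features; neither the JMU data nor the exact DNN architecture is reproduced here:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 60, n),          # Wi-Fi device count seen near the stop
    rng.uniform(-5, 35, n),          # temperature (Celsius)
    rng.uniform(0, 1, n),            # precipitation intensity
    rng.integers(0, 24, n),          # hour of day
])
y = 0.6 * X[:, 0] - 0.3 * X[:, 2] * X[:, 0] + rng.normal(0, 2, n)   # toy passenger count

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = [
    ("LR", LinearRegression()),
    ("DNN", make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(64, 64),
                                       max_iter=2000, random_state=0))),
]
for name, model in models:
    model.fit(X_tr, y_tr)
    print(name, "MSE:", mean_squared_error(y_te, model.predict(X_te)))
```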
|
Despite rapid progress, current deep learning methods face a number of
critical challenges. These include high energy consumption, catastrophic
forgetting, dependence on global losses, and an inability to reason
symbolically. By combining concepts from information bottleneck theory and
vector-symbolic architectures, we propose and implement a novel information
processing architecture, the 'Bridge network.' We show this architecture
provides unique advantages which can address the problem of global losses and
catastrophic forgetting. Furthermore, we argue that it provides a further basis
for increasing energy efficiency of execution and the ability to reason
symbolically.
|
Federated Learning (FL) is a promising framework that has great potential for
privacy preservation and for lowering the computation load at the cloud. FedAvg
and FedProx are two widely adopted algorithms. However, recent work raised
concerns on these two methods: (1) their fixed points do not correspond to the
stationary points of the original optimization problem, and (2) the common
model found might not generalize well locally.
In this paper, we alleviate these concerns. Towards this, we adopt the
statistical learning perspective yet allow the distributions to be
heterogeneous and the local data to be unbalanced. We show, in the general
kernel regression setting, that both FedAvg and FedProx converge to the
minimax-optimal error rates. Moreover, when the kernel function has a finite
rank, the convergence is exponentially fast. Our results further analytically
quantify the impact of the model heterogeneity and characterize the federation
gain, namely the reduction in estimation error that a worker obtains by joining
federated learning, compared to the best local estimator. To the best of our knowledge, we
are the first to show the achievability of minimax error rates under FedAvg and
FedProx, and the first to characterize the gains in joining FL. Numerical
experiments further corroborate our theoretical findings on the statistical
optimality of FedAvg and FedProx and the federation gains.
|
Conventional indirect dark matter (DM) searches look for an excess in the
electromagnetic emission from the sky that cannot be attributed to known
astrophysical sources. Here, we argue that the photon polarisation is an
important feature to understand new physics interactions and can be exploited
to improve our sensitivity to DM. In particular, circular polarisation can be
generated from Beyond the Standard Model interactions if they violate parity
and there is an asymmetry in the number of particles which participate in the
interaction. In this work, we consider a simplified model for fermionic
(Majorana) DM and study the circularly polarised gamma rays below 10 GeV from
the scattering of cosmic ray electrons on DM. We calculate the differential
flux of positive and negative polarised photons from the Galactic Center and
show that the degree of circular polarization can reach up to 90%. Finally,
once collider and DM constraints have been taken into account, we estimate the
sensitivity required for future experiments to detect this signal, finding
that, although a distinctive peak will be present in the photon flux spectrum,
a near future observation is unlikely. However, different sources or models not
considered in this work could provide higher intensity fluxes, leading to a
possible detection by e-ASTROGAM. In the event of a discovery, we argue that
the polarisation fraction is a valuable characterisation feature of the new
sector.
|
Inspired by the studies on the influence of transition metal impurities in
high Tc superconductors and what is already known about nonmagnetic suppression
of Tc in unconventional superconductors, we set out to investigate the behavior
of the nonmagnetic disordered elastic scattering for a realistic 2D anisotropic
high Tc superconductor with line nodes and a Fermi surface in the tight-binding
approximation. For this purpose, we performed a detailed self-consistent 2D
numerical study of the disordered averaged scattering matrix with nonmagnetic
impurities and a singlet line nodes order parameter, varying the concentration
and the strength of the impurities potential in the Born, intermediate and
unitary limits. For a high Tc anisotropic superconductor with a tight-binding
dispersion law, with the averaging over the Fermi surface carried out using hopping
parameters and an order parameter in agreement with experimental data, the tight-binding
approximation captures the anisotropic effects. In this study, we also included
a detailed visualization of the behavior of the scattering matrix with
different sets of physical parameters involved in the nonmagnetic disorder,
which allowed us to model the dressed scattering behavior in different regimes
for very low and high energies. With this study, we demonstrate that the
elastic scattering matrix is affected by the non-magnetic disorder, as well as
the importance of an order parameter and a Fermi surface in agreement with
experiments when studying this effect in unconventional superconductors.
|
Let $X$ be a complex space and $M$ a pure Hodge module with strict support
$X$. We introduce a kind of coherent subsheaf $S(M,\varphi)$ of M. Saito's
$S(M)$ which is a combination of $S(M)$ and the multiplier ideal sheaf
$\mathscr{I}(\varphi)$. An $L^2$-resolution of $S(M,\varphi)$ is constructed.
This generalizes MacPherson's conjecture on the $L^2$-representation of the
Grauert-Riemenschneider sheaf. Various vanishing theorems for $S(M)$ (Saito's
vanishing, Kawamata-Viehweg vanishing and some new ones like Nadel vanishing,
partial vanishing) are proved via standard differential geometric arguments.
Some applications on the relative version of Fujita's conjecture are presented.
|
Exposing a solution to a temperature gradient can lead to the accumulation of
particles on either the cold or warm side. This phenomenon, known as
thermophoresis, was discovered more than a century ago, and yet its
microscopic origin is still debated. Here, we show that thermophoresis can be
observed in any system in which the transitions between different internal
states are modulated by temperature and in which different internal states
have different transport properties. We establish thermophoresis as a genuine
non-equilibrium effect, whereby a system of currents in real and internal space
emerges that is consistent with the thermodynamic necessity of transporting heat from
warm to cold regions. Our approach also provides an expression for the Soret
coefficient, which determines whether particles accumulate on the cold or on the
warm side, in terms of the correlation between the energies of the
internal states and their transport properties, quantities that remain
system-specific. Finally, we connect our results to previous
approaches based on close-to-equilibrium energetics. Our thermodynamically
consistent approach thus encompasses and generalizes previous findings.
|
The assessment of program functionality can generally be accomplished with
straightforward unit tests. However, assessing the design quality of a program
is a much more difficult and nuanced problem. Design quality is an important
consideration since it affects the readability and maintainability of programs.
Assessing design quality and giving personalized feedback is a very
time-consuming task for instructors and teaching assistants. This limits the scale
of giving personalized feedback to small class settings. Further, design
quality is nuanced and is difficult to concisely express as a set of rules. For
these reasons, we propose a neural network model to both automatically assess
the design of a program and provide personalized feedback to guide students on
how to make corrections. The model's effectiveness is evaluated on a corpus of
student programs written in Python. The model has an accuracy rate from 83.67%
to 94.27%, depending on the dataset, when predicting design scores as compared
to historical instructor assessment. Finally, we present a study where students
tried to improve the design of their programs based on the personalized
feedback produced by the model. Students who participated in the study improved
their program design scores by 19.58%.
|
A growing area of research in epidemiology is the identification of
health-related sibling spillover effects, or the effect of one individual's
exposure on their sibling's outcome. The health and health care of family
members may be inextricably confounded by unobserved factors, rendering
identification of spillover effects within families particularly challenging.
We demonstrate a gain-score regression method for identifying
exposure-to-outcome spillover effects within sibling pairs in a linear fixed
effects framework. The method can identify the exposure-to-outcome spillover
effect if only one sibling's exposure affects the other's outcome; and it
identifies the difference between the spillover effects if both siblings'
exposures affect the others' outcomes. The method fails in the presence of
outcome-to-exposure spillover and outcome-to-outcome spillover. Analytic
results and Monte Carlo simulations demonstrate the method and its limitations.
To exercise this method, we estimate the spillover effect of a child's preterm
birth on an older sibling's literacy skills, measured by the Phonological
Awareness Literacy Screening-Kindergarten test. We analyze 20,010 sibling
pairs from a population-wide, Wisconsin-based (United States) birth cohort.
Without covariate adjustment, we estimate that preterm birth modestly decreases
an older sibling's test score (-2.11 points; 95% confidence interval: -3.82,
-0.40 points). In conclusion, gain-scores are a promising strategy for
identifying exposure-to-outcome spillovers in sibling pairs while controlling
for sibling-invariant unobserved confounding in linear settings.
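A toy sketch of the gain-score idea on synthetic data: differencing the siblings' outcomes removes the shared family confounder, so regressing the gain on one sibling's exposure recovers the spillover effect under the one-directional assumption stated above (this is not the Wisconsin cohort analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 5000
u = rng.normal(size=n_pairs)                                 # shared, unobserved family factor
a1 = (rng.normal(size=n_pairs) + u > 0.5).astype(float)      # sibling 1's exposure, confounded by u
spill = -2.0                                                 # true exposure-to-outcome spillover
y1 = 100 + 3 * u + rng.normal(0, 5, n_pairs)                 # sibling 1's outcome
y2 = 100 + spill * a1 + 3 * u + rng.normal(0, 5, n_pairs)    # sibling 2's outcome

gain = y2 - y1                                               # differencing removes the shared u
slope, intercept = np.polyfit(a1, gain, 1)                   # gain-score regression
print("estimated spillover:", round(slope, 2))               # close to -2.0
```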
|
We propose a supervised machine learning algorithm, decision trees, to
analyze molecular dynamics output. The approach aims to identify the
predominant geometric features which correlate with trajectories that
transition between two arbitrarily defined states. The data-based algorithm
aims to identify such features in an approach which is unbiased by human
"chemical intuition". We demonstrate the method by analyzing proton exchange
reactions in formic acid (FA) solvated in small water clusters. The simulations
were performed with ab initio molecular dynamics combined with a method for
generating rare events, specifically path sampling. Our machine learning
analysis identified mechanistic descriptions of the proton transfer reaction
for the different water clusters.
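A minimal sketch of the analysis step using scikit-learn decision trees; the geometric descriptors and the reactive/non-reactive labels below are illustrative stand-ins for the features extracted from the path-sampling trajectories:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0.9, 2.0, n),    # donor O-H bond length (angstrom), illustrative
    rng.uniform(1.2, 3.5, n),    # donor-acceptor O...O distance, illustrative
    rng.uniform(2, 6, n),        # water coordination number, illustrative
])
y = ((X[:, 1] < 2.6) & (X[:, 0] > 1.2)).astype(int)   # toy "reactive trajectory" label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["d_OH", "d_OO", "coord"]))   # human-readable rules
```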
|
In a recent Letter, Dornheim et al. [PRL 125, 085001 (2020)] have
investigated the nonlinear density response of the uniform electron gas in the
warm dense matter regime. More specifically, they have studied the cubic
response function at the first harmonic, which cannot be neglected in many
situations of experimental relevance. In this work, we go one step further and
study the full spectrum of excitations at the higher harmonics of the original
perturbation based on extensive new ab initio path integral Monte Carlo (PIMC)
simulations. We find that the dominant contribution to the density response
beyond linear response theory is given by the quadratic response function at
the second harmonic in the moderately nonlinear regime. Furthermore, we show
that the nonlinear density response is highly sensitive to exchange-correlation
effects, which makes it a potentially valuable new tool of diagnostics. To this
end, we present a new theoretical description of the nonlinear electronic
density response based on the recent effective static approximation to the
local field correction [PRL 125, 235001 (2020)], which accurately reproduces
our PIMC data with negligible computational cost.
|
Message passing Graph Neural Networks (GNNs) provide a powerful modeling
framework for relational data. However, the expressive power of existing GNNs
is upper-bounded by the 1-Weisfeiler-Lehman (1-WL) graph isomorphism test,
which means that GNNs are not able to predict node clustering coefficients or
shortest path distances, and cannot differentiate between different d-regular
graphs. Here we develop a class of message passing GNNs, named Identity-aware
Graph Neural Networks (ID-GNNs), with greater expressive power than the 1-WL
test. ID-GNN offers a minimal but powerful solution to limitations of existing
GNNs. ID-GNN extends existing GNN architectures by inductively considering
nodes' identities during message passing. To embed a given node, ID-GNN first
extracts the ego network centered at the node, then conducts rounds of
heterogeneous message passing, where different sets of parameters are applied
to the center node than to other surrounding nodes in the ego network. We
further propose a simplified but faster version of ID-GNN that injects node
identity information as augmented node features. Altogether, both versions of
ID-GNN represent general extensions of message passing GNNs, where experiments
show that transforming existing GNNs to ID-GNNs yields on average 40% accuracy
improvement on challenging node, edge, and graph property prediction tasks; 3%
accuracy improvement on node and graph classification benchmarks; and 15% ROC
AUC improvement on real-world link prediction tasks. Additionally, ID-GNNs
demonstrate improved or comparable performance over other task-specific graph
networks.
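A minimal sketch of the identity-augmentation idea behind the fast variant: mark the center node of each ego network with an extra indicator feature before running an ordinary GNN. The paper's actual augmentation and heterogeneous message-passing scheme are richer than this toy version:

```python
import numpy as np

def ego_network(adj, center, k=2):
    """Return the nodes within k hops of `center` (breadth-first over the adjacency matrix)."""
    frontier, seen = {center}, {center}
    for _ in range(k):
        frontier = {j for i in frontier for j in np.nonzero(adj[i])[0]} - seen
        seen |= frontier
    return sorted(seen)

def augment_with_identity(adj, features, center, k=2):
    """Append a binary 'is the center node' column to the ego-network features."""
    nodes = ego_network(adj, center, k)
    sub_feats = features[nodes]
    identity = np.array([[1.0 if v == center else 0.0] for v in nodes])
    return nodes, np.hstack([sub_feats, identity])   # this matrix is fed to an ordinary GNN

adj = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]])
feats = np.eye(4)
print(augment_with_identity(adj, feats, center=0))
```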
|
Lithium niobate (LN), an outstanding and versatile material, has influenced
our daily life for decades: from enabling high-speed optical communications
that form the backbone of the Internet to realizing radio-frequency filtering
used in our cell phones. This half-century-old material is currently embracing
a revolution in thin-film LN integrated photonics. The success of manufacturing
wafer-scale, high-quality thin films of LN on insulator (LNOI), accompanied
by breakthroughs in nanofabrication techniques, has made high-performance
integrated nanophotonic components possible. With rapid development in the past
few years, some of these thin-film LN devices, such as optical modulators and
nonlinear wavelength converters, have already outperformed their legacy
counterparts realized in bulk LN crystals. Furthermore, the nanophotonic
integration enabled ultra-low-loss resonators in LN, which unlocked many novel
applications such as optical frequency combs and quantum transducers. In this
Review, we cover -- from basic principles to the state of the art -- the
diverse aspects of integrated thin-film LN photonics, including the materials,
basic passive components, and various active devices based on electro-optics,
all-optical nonlinearities, and acousto-optics. We also identify challenges
that this platform is currently facing and point out future opportunities. The
field of integrated LNOI photonics is advancing rapidly and poised to make
critical impacts on a broad range of applications in communication, signal
processing, and quantum information.
|
Infrared divergences in perturbative gravitational scattering amplitudes have
been recently argued to be governed by the two-point function of the
supertranslation Goldstone mode on the celestial sphere. We show that the form
of this celestial two-point function simply derives from an effective action
that also controls infrared divergences in the symplectic structure of General
Relativity with asymptotically flat boundary conditions. This effective action
finds its natural place in a path integral formulation of a celestial conformal
field theory, as we illustrate by re-deriving the infrared soft factors in
terms of celestial correlators. Our analysis relies on a well-posed action
principle close to spatial infinity introduced by Comp\`ere and Dehouck.
|
In this paper, we prove the existence of full dimensional tori for
$d$-dimensional nonlinear Schr\"odinger equation with periodic
boundary conditions \begin{equation*}\label{L1} \sqrt{-1}u_{t}+\Delta
u+V*u\pm\epsilon |u|^2u=0,\hspace{12pt}x\in\mathbb{T}^d,\quad d\geq 1,
\end{equation*} where $V*$ is the convolution potential. Here the radius of the
invariant torus satisfies a slower decay, i.e. \begin{equation*}\label{031601}
I_{\textbf n}\sim e^{-r\ln^{\sigma}\left\|\textbf n\right\|},\qquad \mbox{as}\
\left\|\textbf n\right\|\rightarrow\infty, \end{equation*}for any $\sigma>2$
and $r\geq 1$. This result confirms a conjecture by Bourgain [J. Funct. Anal.
229 (2005), no. 1, 62-94].
|
Estimating camera wearer's body pose from an egocentric view (egopose) is a
vital task in augmented and virtual reality. Existing approaches either use a
narrow field of view front facing camera that barely captures the wearer, or an
extruded head-mounted top-down camera for maximal wearer visibility. In this
paper, we tackle the egopose estimation from a more natural human vision span,
where the camera wearer can be seen in the peripheral view and, depending on the
head pose, may become invisible or have only a limited partial view. This
is a realistic visual field for user-centric wearable devices like glasses
which have front facing wide angle cameras. Existing solutions are not
appropriate for this setting, and so, we propose a novel deep learning system
taking advantage of both the dynamic features from camera SLAM and the body
shape imagery. We compute the 3D head pose, 3D body pose, and figure/ground
separation, all at the same time, while explicitly enforcing a certain geometric
consistency across pose attributes. We further show that this system can be
trained robustly with lots of existing mocap data so we do not have to collect
and annotate large new datasets. Lastly, our system estimates egopose in real
time and on the fly while maintaining high accuracy.
|
Transition-metal chalcogenides (TMCs) materials have attracted increasing
interest both for fundamental research and industrial applications. Among all
these materials, two-dimensional (2D) compounds with honeycomb-like structure
possess exotic electronic structures. Here, we report a systematic study of TMC
monolayer AgTe fabricated by direct depositing Te on the surface of Ag(111) and
annealing. A few intrinsic defects are observed and studied by scanning tunneling
microscopy, indicating that there are two kinds of AgTe domains and that they can
form gliding twin boundaries. The monolayer AgTe can then serve as the template
for the following growth of Te film. Meanwhile, some Te atoms are observed in
the form of chains on the top of the bottom Te film. Our findings in this work
might provide an insightful guide for the epitaxial growth of 2D materials for the
study of novel physical properties and for future quantum devices.
|
In quantum electrodynamics with charged chiral fermions, a background
electric field is the source of the chiral anomaly which creates a chirally
imbalanced state of fermions. This chiral state is realized through the
production of entangled pairs of right-moving fermions and left-moving
antifermions (or vice versa, depending on the orientation of the electric
field). Here we show that the statistical Gibbs entropy associated with these
pairs is equal to the entropy of entanglement between the right-moving
particles and left-moving antiparticles. We then derive an asymptotic expansion
for the entanglement entropy in terms of the cumulants of the multiplicity
distribution of produced particles and explain how to re-sum this asymptotic
expansion. Finally, we study the time dependence of the entanglement entropy in
a specific time-dependent pulsed background electric field, the so-called
"Sauter pulse", and illustrate how our re-summation method works in this
specific case. We also find that short pulses (such as the ones created by high
energy collisions) result in an approximately thermal distribution for the
produced particles.
|
This article outlines a novel interpretation of quantum theory: the Q-based
interpretation. The core idea underlying this interpretation, recently
suggested for quantum field theories by Drummond and Reid [2020], is to
interpret the phase space function Q -- a transform of the better known Wigner
function -- as a proper probability distribution, roughly analogous to the
probability distribution $\rho$ in classical statistical mechanics.
Here I motivate the Q-based interpretation, investigate whether it is
empirically adequate, and outline some of its key conceptual features. I argue
that the Q-based interpretation is attractive in that it promises to have no
measurement problem, is conceptually parsimonious, and has the potential to
apply elegantly to relativistic and field-theoretic contexts.
|
In this paper, we establish a structure theorem for projective klt pairs
$(X,\Delta)$ with nef anti-log canonical divisor; specifically, we prove that,
up to replacing $X$ with a finite quasi-\'etale cover, $X$ admits a locally
trivial rationally connected fibration onto a projective klt variety with
numerically trivial canonical divisor. This structure theorem generalizes
previous works for smooth projective varieties and reduces several structure
problems to the singular Beauville-Bogomolov decomposition for Calabi-Yau
varieties. As an application, projective varieties of klt Calabi-Yau type,
which naturally appear as an outcome of the Log Minimal Model Program, are
decomposed into building block varieties: rationally connected varieties and
Calabi-Yau varieties.
|
Physical systems that dissipate, mix and develop turbulence also irreversibly
transport statistical density. In statistical physics, laws for these processes
have a mathematical form and tractability that depends on whether the
description is classical or quantum mechanical. Here, we establish a theory for
density transport in any classical dynamical system that is analogous to the
density matrix formulation of quantum mechanics. Defining states in terms of a
classical density matrix leads to generalizations of Liouville's theorem and
Liouville's equation, establishing an alternative computationally-tractable
basis for nonequilibrium statistical mechanics. The formalism is complete with
classical commutators and anti-commutators that embed measures of local
instability and chaos and are directly related to Poisson brackets when the
dynamics are Hamiltonian. It also recovers the traditional Liouville equation
and the Liouville theorem by imposing trace preservation or Hamiltonian
dynamics. Applicable to systems that are driven, transient, dissipative, regular,
and chaotic, this formalism has the potential for broad applications.
|
Hyper-parameters of time series models play an important role in time series
analysis. Slight differences in hyper-parameters might lead to very different
forecast results for a given model, and therefore, selecting good
hyper-parameter values is indispensable. Most of the existing generic
hyper-parameter tuning methods, such as Grid Search, Random Search, and Bayesian
Optimal Search, are based on one key component, search, and thus they are
computationally expensive and cannot be applied to fast and scalable
time-series hyper-parameter tuning (HPT). We propose a self-supervised learning
framework for HPT (SSL-HPT), which uses time series features as inputs and
produces optimal hyper-parameters. The SSL-HPT algorithm is 6-20x faster at
obtaining hyper-parameters than other search-based algorithms, while producing
comparably accurate forecasting results in various applications.
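A hedged sketch of the overall idea: learn, offline, a map from cheap time-series features to hyper-parameters found by a slow search, then predict hyper-parameters for new series in a single forward pass. The features, targets, and regressor below are illustrative choices, not the paper's:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ts_features(y):
    """A few cheap summary features of a univariate series."""
    return np.array([y.mean(), y.std(), np.abs(np.diff(y)).mean(),
                     np.corrcoef(y[:-1], y[1:])[0, 1]])

rng = np.random.default_rng(0)

# Offline phase: for many historical series, record features and the
# hyper-parameters that a (slow) search found to work best.
X_meta = np.stack([ts_features(rng.normal(size=200).cumsum()) for _ in range(500)])
best_hparams = rng.uniform(0.05, 0.95, size=(500, 2))    # stand-in for stored search results

hpt_model = RandomForestRegressor(random_state=0).fit(X_meta, best_hparams)

# Online phase: predict hyper-parameters for a new series in one shot.
new_series = rng.normal(size=200).cumsum()
print(hpt_model.predict(ts_features(new_series).reshape(1, -1)))
```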
|
Microquasars with high-mass companion stars are promising very-high-energy
(VHE; 0.1-100 TeV) gamma-ray emitters, but their behaviors above 10 TeV are
poorly known. Using the High Altitude Water Cherenkov (HAWC) observatory, we
search for excess gamma-ray emission coincident with the positions of known
high-mass microquasars (HMMQs). No significant emission is observed for LS
5039, Cygnus X-1, Cygnus X-3, and SS 433 with 1,523 days of HAWC data. We set
the most stringent limit above 10 TeV obtained to date on each individual
source. Under the assumption that HMMQs produce gamma rays via a common
mechanism, we have performed source-stacking searches, considering two
different scenarios: I) gamma-ray luminosity is a fraction $\epsilon_\gamma$ of
the microquasar jet luminosity, and II) very-high-energy gamma rays are
produced by relativistic electrons up-scattering the radiation field of the
companion star in a magnetic field $B$. We obtain $\epsilon_\gamma < 5.4\times
10^{-6}$ for scenario I, which tightly constrains models that suggest
observable high-energy neutrino emission by HMMQs. In the case of scenario II,
the non-detection of VHE gamma rays yields a strong magnetic field, which
challenges synchrotron radiation as the dominant mechanism of the microquasar
emission between 10 keV and 10 MeV.
|
In this reply, we address the comment [arXiv:2105.14908] to our recent paper
[arXiv:2105.09328], where we argued that the Thakurta metric does not describe
cosmological black holes. We clarify that the mass growth of Thakurta black
holes is due to an influx of energy (i.e. accretion), which, by definition, is
not a feature of geometry. The conclusions of [arXiv:2105.09328] are
independent of the interpretation of this energy flux. We show that the average
energy density of primordial Thakurta black holes scales as $a^{-2}$ and
requires an unrealistic and fine-tuned energy transfer from a smooth dark
matter component to the primordial black hole sector.
|
Quantum Optical Coherence Tomography (Q-OCT) uses quantum properties of light
to provide several advantages over its classical counterpart, OCT: it achieves
a twice better axial resolution with the same spectral bandwidth and it is
immune to even orders of dispersion. Since these features are very sought-after
in OCT imaging, many hardware and software techniques have been created to
mimic the quantum behaviour of light and achieve these features using
traditional OCT systems. The most recent, purely algorithmic scheme, an
improved version of Intensity Correlation Spectral Domain OCT named ICA-SD-OCT,
showed even-order dispersion cancellation and a reduction of artefacts. The true
capabilities of this method were unfortunately severely undermined, both in
terms of its relation to Q-OCT and in terms of its main performance parameters.
In this work, we provide experimental demonstrations as well as numerical and
analytical arguments to show that ICA-SD-OCT is a true classical equivalent of
Q-OCT, more specifically its Fourier domain version, and therefore it enables a
true two-fold axial resolution improvement. We believe that clarification of
all the misconceptions about this very promising algorithm will highlight the
great value of this method for OCT and consequently lead to its practical
applications for resolution- and quality-enhanced OCT imaging.
|
Short-read DNA sequencing instruments can yield over 1e+12 bases per run,
typically composed of reads 150 bases long. Despite this high throughput, de
novo assembly algorithms have difficulty reconstructing contiguous genome
sequences using short reads due to both repetitive and difficult-to-sequence
regions in these genomes. Some of the short read assembly challenges are
mitigated by scaffolding assembled sequences using paired-end reads. However,
unresolved sequences in these scaffolds appear as "gaps". Here, we introduce
GapPredict, a tool that uses a character-level language model to predict
unresolved nucleotides in scaffold gaps. We benchmarked GapPredict against the
state-of-the-art gap-filling tool Sealer, and observed that the former can fill
65.6% of the sampled gaps that were left unfilled by the latter, demonstrating
the practical utility of deep learning approaches to the gap-filling problem in
genome sequence assembly.
|
We study the Cauchy problem for a class of third order linear anisotropic
evolution equations with complex valued lower order terms depending both on
time and space variables. Under suitable decay assumptions for $|x| \to \infty$
on these coefficients, we prove a well posedness result in Gevrey-type spaces.
|
In this work, we have considered the recently proposed new Tsallis Agegraphic
Dark Energy model (NTADE) (Mod. Phys. Lett. A 34, 1950086, 2019) within the
framework of a flat Friedmann-Robertson-Walker (FRW) Universe by taking various
values of the parameter $\delta$. The NTADE model
shows the current phase transition of the Universe from
decelerated to accelerated phase. The NTADE EoS parameter shows a rich
behaviour as it can be quintessence-like or phantom-like depending on the value
of $\delta$. For discriminating the NTADE model from $\Lambda$CDM, we have
plotted the statefinder parameters $r(z)$, $s(z)$ and $(r, s)$, $(r, q)$ pair.
The NTADE model shows distinct evolutionary trajectories in the $(r, s)$ and
$(r, q)$ planes. An analysis using the snap parameter and the
$\omega_{D}-\omega_{D}^{'}$ pair dynamics has also been performed.
|
Functional magnetic resonance imaging (fMRI) is a non-invasive and in-vivo
imaging technique essential for measuring brain activity. Functional
connectivity is used to study associations between brain regions either at rest
or while study subjects perform tasks. In this paper, we propose a rigorous
definition of task-evoked functional connectivity at the population level
(ptFC). Importantly, our proposed ptFC is interpretable in the context of
task-fMRI studies. An algorithm for estimating ptFC is provided. We present the
performance of the proposed algorithm compared to existing functional
connectivity estimation approaches using simulations. Lastly, we apply the
proposed framework to estimate task-evoked functional connectivity in a
motor-task study from the Human Connectome Project. We show that the proposed
algorithm identifies associations between regions of the brain related to the
performance of motor tasks, as expected.
|
Sazdanovic and Yip defined a categorification of Stanley's chromatic symmetric function
called the chromatic symmetric homology. In this paper we prove that (as
conjectured by Chandler, Sazdanovic, Stella and Yip), if a graph $G$ is
non-planar, then its chromatic symmetric homology in bidegree (1,0) contains
$\mathbb{Z}_2$-torsion. Our proof follows a recursive argument based on
Kuratowski's theorem.
|
Detecting similar code fragments, usually referred to as code clones, is an
important task. In particular, code clone detection can have significant uses
in the context of vulnerability discovery, refactoring and plagiarism
detection. However, false positives are inevitable and always require manual
reviews. In this paper, we propose Twin-Finder+, a novel closed-loop approach
for pointer-related code clone detection that integrates machine learning and
symbolic execution techniques to achieve precision. Twin-Finder+ introduces a
formal verification mechanism to automate such a manual review process. Our
experimental results show that Twin-Finder+ can remove 91.69% of false positives
on average. We further conduct a security analysis for memory safety using
real-world applications, Links version 2.14 and libreOffice-6.0.0.1.
Twin-Finder+ is able to find 6 unreported bugs in Links version 2.14 and one
public patched bug in libreOffice-6.0.0.1.
|
Lithium niobate on insulator (LNOI), as an emerging and promising optical
integration platform, faces shortages of on-chip active devices including
lasers and amplifiers. Here, we report the fabrication of on-chip erbium-doped
LNOI waveguide amplifiers based on electron beam lithography and inductively
coupled plasma reactive ion etching. A net internal gain of ~30 dB/cm in the
communication band was achieved in the fabricated waveguide amplifiers under
pumping by a 974-nm continuous-wave laser. This work develops new active devices on
LNOI and will promote the development of LNOI integrated photonics.
|
The scientific image integrity area faces a challenging research
bottleneck: the lack of available datasets with which to design and evaluate forensic
techniques. The sensitivity of the data creates a legal hurdle that prevents one from
relying on real tampered cases to build any sort of accessible forensic benchmark.
To mitigate this bottleneck, we present an extendable open-source library that
reproduces the most common image forgery operations reported by the research
integrity community: duplication, retouching, and cleaning. Using this library
and realistic scientific images, we create a large scientific forgery image
benchmark (39,423 images) with an enriched ground-truth. In addition, concerned
about the high number of retracted papers due to image duplication, this work
evaluates the state-of-the-art copy-move detection methods in the proposed
dataset, using a new metric that asserts consistent match detection between the
source and the copied region. The dataset and source-code will be freely
available upon acceptance of the paper.
|
Visual interpretability of Convolutional Neural Networks (CNNs) has gained
significant popularity because of the great challenges that CNN complexity
imposes to understanding their inner workings. Although many techniques have
been proposed to visualize class features of CNNs, most of them do not provide
a correspondence between inputs and the extracted features in specific layers.
This prevents the discovery of the stimuli to which each layer responds most strongly. We
propose an approach to visually interpret CNN features given a set of images by
creating corresponding images that depict the most informative features of a
specific layer. Exploring features in this class-agnostic manner allows for a
greater focus on the feature extractor of CNNs. Our method uses a
dual-objective activation maximization and distance minimization loss, without requiring a generator network or modifications to the original model. This
limits the number of FLOPs to that of the original network. We demonstrate the
visualization quality on widely-used architectures.
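As a rough illustration of how such a dual-objective loss can be set up, the sketch below optimizes a copy of a reference image so that the activation of a chosen layer is maximized while a pixel-space distance to the reference is penalized. This is a minimal sketch under assumptions: the network, the layer, the activation-norm objective, and the weight `lam` are illustrative choices, not the paper's exact recipe.

```python
# Minimal sketch of a dual-objective (activation maximization + distance
# minimization) visualization loop. Network, layer, objective and weights are
# illustrative assumptions, not the exact method of the paper.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()    # any CNN (torchvision >= 0.13 API)
target_layer = model.layer3                     # layer whose features we visualize (assumption)

activations = {}
def hook(module, inputs, output):
    activations["feat"] = output
target_layer.register_forward_hook(hook)

reference = torch.rand(1, 3, 224, 224)          # stand-in for an image from the given set
x = reference.clone().requires_grad_(True)      # image being optimized
opt = torch.optim.Adam([x], lr=0.05)
lam = 0.1                                       # distance-minimization weight (assumption)

for step in range(200):
    opt.zero_grad()
    model(x)
    act = activations["feat"]
    # maximize the layer activation, stay close to the reference image
    loss = -act.norm() + lam * (x - reference).pow(2).mean()
    loss.backward()
    opt.step()
    x.data.clamp_(0, 1)                         # keep pixel values in a valid range
```

Because no generator network is involved and the model is left untouched, the cost is dominated by the forward and backward passes of the original network, consistent with the FLOP argument above.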
|
Exploding granules have drawn renewed interest because of their interaction
with the magnetic field. Especially the newly forming downflow lanes developing
in their centre seem to be eligible candidates for the intensification of
magnetic fields. We analyse spectroscopic data from two different instruments
in order to study the intricate velocity pattern within the newly forming
downflow lanes in detail. We aim to examine general properties of a number of
exploding granules. To gain a better understanding of the formation process of
the developing intergranular lane in exploding granules, we study the temporal
evolution and height dependence of the line-of-sight velocities at their
formation location. Additionally, we search for evidence that exploding
granules act as acoustic sources. We investigated the evolution of several
exploding granules using data taken with the Interferometric Bidimensional
Spectrometer and the Imaging Magnetograph eXperiment. Velocities for different
heights of the solar atmosphere were determined by computing bisectors of the
Fe I 6173.0{\AA} and the Fe I 5250.2{\AA} lines. We performed a wavelet
analysis to study the intensity and velocity oscillations within and around
exploding granules. We also compared our findings with predictions of numerical
simulations. We found that exploding granules have significantly longer
lifetimes than regular granules. Exploding granules larger than 3.8 arcsec form
an independent intergranular lane during their decay phase, while smaller
granules usually fade away or disappear into the intergranular area. For all
exploding granules that form a new intergranular downflow lane, we find a
temporal height-dependent shift with respect to the maximum of the downflow
velocity. Our suggestion that this results from a complex atmospheric structure
within the newly forming downflow lane is supported by the simulations.
|
Designing agents that acquire knowledge autonomously and use it to solve new
tasks efficiently is an important challenge in reinforcement learning.
Knowledge acquired during an unsupervised pre-training phase is often
transferred by fine-tuning neural network weights once rewards are exposed, as
is common practice in supervised domains. Given the nature of the reinforcement
learning problem, we argue that standard fine-tuning strategies alone are not
enough for efficient transfer in challenging domains. We introduce Behavior
Transfer (BT), a technique that leverages pre-trained policies for exploration
and that is complementary to transferring neural network weights. Our
experiments show that, when combined with large-scale pre-training in the
absence of rewards, existing intrinsic motivation objectives can lead to the
emergence of complex behaviors. These pre-trained policies can then be
leveraged by BT to discover better solutions than without pre-training, and
combining BT with standard fine-tuning strategies results in additional
benefits. The largest gains are generally observed in domains requiring
structured exploration, including settings where the behavior of the
pre-trained policies is misaligned with the downstream task.
|
The American Physical Society calls on its members to improve the diversity
of physics by supporting an inclusive culture that encourages women and Black,
Indigenous, and people of color to become physicists. In the current
educational system, it is unlikely for a student to become a physicist if they
do not share the same attitudes about what it means to learn and do physics as
those held by most professional physicists. Evidence shows college physics
courses and degree programs do not support students in developing these
attitudes. Rather, physics education filters out students who do not enter
college physics courses with these attitudes. To better understand the role of
attitudes in the lack of diversity in physics, we investigated the intersecting
relationships between racism and sexism in inequities in student attitudes
about learning and doing physics using a critical quantitative framework. The
analyses used hierarchical linear models to examine students' attitudes as measured by the Colorado Learning Attitudes about Science Survey. The data came
from the LASSO database and included 2170 students in 46 calculus-based
mechanics courses and 2503 students in 49 algebra-based mechanics courses
taught at 18 institutions. Like prior studies, we found that attitudes either
did not change or slightly decreased for most groups. Results identified large
differences across intersecting race and gender groups representing educational
debts society owes these students. White students, particularly White men in
calculus-based courses, tended to have more expert-like attitudes than any
other group of students. Instruction that addresses society's educational debts
can help move physics toward an inclusive culture supportive of diverse
students and professionals.
|
Efficient control of magnetization without applying external magnetic fields is the ultimate goal of spintronics. We demonstrate that in monolayers of $\text{CrI}_3$, the magnetization can be switched all-optically by applying resonant pulses of circularly polarized light. This happens because of the efficient coupling of the lattice magnetization to the bright excitonic transition. $\text{CrI}_3$ is thus a prospective functional material with high potential for applications in spintronics and ultrafast magnetic memory.
|
Phenotype transition takes place in many biological processes such as
differentiation, and understanding how a cell reprograms its global gene
expression profile is a problem of rate theories. A cell phenotype transition is accompanied by the switching of expression rates of clusters of genes, analogous to domain flipping in an Ising system. Here, by analyzing single-cell RNA sequencing data in the framework of transition path theory, we set out to study how
such a genome-wide expression program switching proceeds in three different
cell transition processes. For each process after reconstructing a Markov
transition model in the cell state space, we formed an ensemble of shortest
paths connecting the initial and final cell states, reconstructed a reaction
coordinate describing the transition progression, and inferred the gene
regulation network (GRN) along the reaction coordinate. In all three processes we observed a common pattern: the frustration of the GRN, defined as the overall conflict between the regulation received by genes and their expression states, first increases and then decreases when approaching a
new phenotype. The results support a mechanism of concerted silencing of genes
that are active in the initial phenotype and activation of genes that are
active in the final phenotype.
|
With the advent of continuous health monitoring with wearable devices, users
now generate their unique streams of continuous data such as minute-level step
counts or heartbeats. Summarizing these streams via scalar summaries often
ignores the distributional nature of wearable data and almost unavoidably leads
to the loss of critical information. We propose to capture the distributional
nature of wearable data via user-specific quantile functions (QF) and use these
QFs as predictors in scalar-on-quantile-function-regression (SOQFR). As an
alternative approach, we also propose to represent QFs via user-specific
L-moments, robust rank-based analogs of traditional moments, and use L-moments
as predictors in SOQFR (SOQFR-L). These two approaches provide two mutually
consistent interpretations: in terms of quantile levels by SOQFR and in terms
of L-moments by SOQFR-L. We also demonstrate how to deal with multi-modal
distributional data via Joint and Individual Variation Explained (JIVE) using
L-moments. The proposed methods are illustrated in a study of association of
digital gait biomarkers with cognitive function in Alzheimer's disease (AD).
Our analysis shows that the proposed methods demonstrate higher predictive
performance and attain much stronger associations with clinical cognitive
scales compared to simple distributional summaries.
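For readers unfamiliar with L-moments, the sketch below computes a subject's first four L-moments directly from an empirical quantile function using the shifted Legendre polynomial representation $\lambda_{r+1} = \int_0^1 Q(u) P^*_r(u)\,du$. The grid size and the synthetic minute-level step counts are illustrative assumptions, not data from the study.

```python
# Hedged sketch: first four L-moments of a wearable-data distribution computed
# from its empirical quantile function via shifted Legendre polynomials.
import numpy as np

def l_moments_from_quantiles(samples, n_grid=1001):
    u = np.linspace(0, 1, n_grid)
    q = np.quantile(samples, u)                 # empirical quantile function Q(u)
    legendre = [
        np.ones_like(u),                        # P*_0(u)
        2 * u - 1,                              # P*_1(u)
        6 * u**2 - 6 * u + 1,                   # P*_2(u)
        20 * u**3 - 30 * u**2 + 12 * u - 1,     # P*_3(u)
    ]
    # integral over [0, 1] approximated by the mean over the uniform grid
    return [float((q * p).mean()) for p in legendre]

# Example: synthetic minute-level step counts for one user.
steps = np.random.default_rng(0).poisson(lam=30, size=1440)
lam1, lam2, lam3, lam4 = l_moments_from_quantiles(steps)
print(lam1, lam2, lam3, lam4)   # location, scale, and higher-order shape summaries
```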
|
We present H-TD2: Hybrid Temporal Difference Learning for Taxi Dispatch, a
model-free, adaptive decision-making algorithm to coordinate a large fleet of
automated taxis in a dynamic urban environment to minimize expected customer
waiting times. Our scalable algorithm exploits the natural transportation
network company topology by switching between two behaviors: distributed
temporal-difference learning computed locally at each taxi and infrequent
centralized Bellman updates computed at the dispatch center. We derive a regret
bound and design the trigger condition between the two behaviors to explicitly
control the trade-off between computational complexity and the individual taxi
policy's bounded sub-optimality; this advances the state of the art by enabling
distributed operation with bounded-suboptimality. Additionally, unlike recent
reinforcement learning dispatch methods, this policy estimation is adaptive and
robust to out-of-training domain events. This result is enabled by a two-step
modelling approach: the policy is learned on an agent-agnostic, cell-based
Markov Decision Process and individual taxis are coordinated using the learned
policy in a distributed game-theoretic task assignment. We validate our
algorithm against a receding horizon control baseline in a Gridworld
environment with a simulated customer dataset, where the proposed solution
decreases average customer waiting time by 50% over a wide range of parameters.
We also validate in a Chicago city environment with real customer requests from
the Chicago taxi public dataset where the proposed solution decreases average
customer waiting time by 26% over irregular customer distributions during a
2016 Major League Baseball World Series game.
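The hybrid idea, cheap local temporal-difference updates most of the time with an occasional exact Bellman backup computed where the model is known, can be illustrated on a toy chain MDP. This is only a schematic sketch under assumed parameters; it is not the H-TD2 algorithm, its regret-based trigger, or the game-theoretic task assignment.

```python
# Toy illustration of switching between local TD(0) updates and an infrequent
# centralized Bellman backup, driven by an accumulated-error trigger (assumption).
import numpy as np

rng = np.random.default_rng(1)
n_states, gamma, alpha = 10, 0.95, 0.1
P = rng.dirichlet(np.ones(n_states), size=n_states)   # transition model known centrally
R = rng.uniform(0, 1, size=n_states)                   # per-state reward
V = np.zeros(n_states)

accumulated_error, trigger = 0.0, 5.0                  # trigger threshold (assumption)
s = 0
for t in range(5000):
    s_next = rng.choice(n_states, p=P[s])
    td_error = R[s] + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error                           # local, distributed-style update
    accumulated_error += abs(td_error)
    if accumulated_error > trigger:                    # infrequent centralized update
        V = R + gamma * P @ V                          # one exact Bellman backup
        accumulated_error = 0.0
    s = s_next
print(V.round(2))
```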
|
Image virtual try-on replaces the clothes on a person image with a desired
in-shop clothes image. It is challenging because the person and the in-shop
clothes are unpaired. Existing methods formulate virtual try-on as either
in-painting or cycle consistency. Both of these two formulations encourage the
generation networks to reconstruct the input image in a self-supervised manner.
However, existing methods do not differentiate clothing and non-clothing
regions. A straightforward generation impedes virtual try-on quality because
of the heavily coupled image contents. In this paper, we propose a Disentangled
Cycle-consistency Try-On Network (DCTON). The DCTON is able to produce
highly-realistic try-on images by disentangling important components of virtual
try-on including clothes warping, skin synthesis, and image composition. To
this end, DCTON can be naturally trained in a self-supervised manner following
cycle consistency learning. Extensive experiments on challenging benchmarks
show that DCTON outperforms state-of-the-art approaches favorably.
|
Model-free off-policy actor-critic methods are an efficient solution to
complex continuous control tasks. However, these algorithms rely on a number of
design tricks and hyperparameters, making their application to new domains
difficult and computationally expensive. This paper presents an evolutionary
approach that automatically tunes these design decisions and eliminates the
RL-specific hyperparameters from the Soft Actor-Critic algorithm. Our design is
sample efficient and provides practical advantages over baseline approaches,
including improved exploration, generalization over multiple control
frequencies, and a robust ensemble of high-performance policies. Empirically,
we show that our agent outperforms well-tuned hyperparameter settings in
popular benchmarks from the DeepMind Control Suite. We then apply it to less
common control tasks outside of simulated robotics to find high-performance
solutions with minimal compute and research effort.
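The underlying evolutionary loop can be sketched generically as below. The search space, mutation rule, and the stand-in objective (which in the paper's setting would be replaced by training and evaluating an actor-critic agent) are assumptions for illustration only.

```python
# Generic evolutionary tuning loop over algorithm hyperparameters.
# `evaluate` is a synthetic placeholder so the sketch runs as-is.
import random

SEARCH_SPACE = {
    "learning_rate": (1e-5, 1e-2),
    "discount": (0.9, 0.999),
    "target_update": (0.001, 0.1),
}

def sample():
    return {k: random.uniform(*bounds) for k, bounds in SEARCH_SPACE.items()}

def mutate(cfg, scale=0.2):
    child = dict(cfg)
    key = random.choice(list(SEARCH_SPACE))
    lo, hi = SEARCH_SPACE[key]
    child[key] = min(hi, max(lo, child[key] * (1 + random.uniform(-scale, scale))))
    return child

def evaluate(cfg):  # placeholder objective; replace with an RL training run
    return -(cfg["learning_rate"] - 3e-4) ** 2 - (cfg["discount"] - 0.99) ** 2

population = [sample() for _ in range(16)]
for generation in range(20):
    ranked = sorted(population, key=evaluate, reverse=True)
    elites = ranked[:4]                                  # keep an ensemble of top performers
    population = elites + [mutate(random.choice(elites)) for _ in range(12)]
print(max(population, key=evaluate))
```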
|
In this paper, we estimate the high dimensional precision matrix under the
weak sparsity condition where many entries are nearly zero. We study a
Lasso-type method for high dimensional precision matrix estimation and derive
general error bounds under the weak sparsity condition. The common
irrepresentable condition is relaxed and the results are applicable to the weak
sparse matrix. As applications, we study the precision matrix estimation for
the heavy-tailed data, the non-paranormal data, and the matrix data with the
Lasso-type method.
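For a concrete point of reference, the snippet below fits one widely used Lasso-type estimator, the graphical lasso (an $\ell_1$-penalized precision matrix estimator available in scikit-learn), to synthetic data whose true precision matrix is sparse. The regularization level and data-generating model are assumptions, and this is not necessarily the exact estimator analyzed in the paper.

```python
# Illustrative l1-penalized (Lasso-type) precision matrix estimation with
# scikit-learn's GraphicalLasso on synthetic data.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p = 20
true_precision = (np.eye(p)
                  + 0.3 * np.diag(np.ones(p - 1), 1)
                  + 0.3 * np.diag(np.ones(p - 1), -1))   # sparse tridiagonal truth
cov = np.linalg.inv(true_precision)
X = rng.multivariate_normal(np.zeros(p), cov, size=500)

est = GraphicalLasso(alpha=0.05).fit(X)                  # alpha controls the l1 penalty
Omega_hat = est.precision_                               # estimated precision matrix
print("fraction of exact zeros in truth:", (true_precision == 0).mean())
print("fraction of near-zero entries in estimate:", (np.abs(Omega_hat) < 1e-3).mean())
```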
|
We construct a model of type theory enjoying parametricity from an arbitrary
one. A type in the new model is a semi-cubical type in the old one,
illustrating the correspondence between parametricity and cubes.
Our construction works not only for parametricity, but also for similar
interpretations of type theory and in fact similar interpretations of any
generalized algebraic theory. To be precise we consider a functor forgetting
unary operations and equations defining them recursively in a generalized
algebraic theory. We show that it has a right adjoint.
We use techniques from locally presentable category theory, as well as from
quotient inductive-inductive types.
|
The broad range of requirements of Internet of Things applications has led
to the development of several dedicated communication technologies, each
tailored to meet a specific feature set. A solution combining different
wireless technologies in one device can overcome the disadvantages of any
individual technology. The design of such Multiple Radio Access Technology
solutions based on the diverse characteristics of the technologies offers
interesting opportunities. In this work we analyze the potential of combining
LoRaWAN and NB-IoT in a Multi-RAT solution for IoT. To that end, we evaluate key IoT node requirements as a function of payload size and link quality: (1) energy
efficiency, (2) coverage, (3) payload size, (4) latency performance, (5)
Quality of Service, and (6) cost efficiency. Our theoretical assessment and
experimental validation of these IoT features show the merits of a Multi-RAT
solution. Notably, energy consumption in use cases with only sporadic large-payload requirements can be improved by a factor of at least 4 with respect to either single-mode technology. Moreover, latency-critical messages can get
delivered on time and coverage can be extended elegantly where needed.
|
In this article we establish an asymptotic formula for the number of rational
points, with bounded denominators, within a given distance to a compact
submanifold $\mathcal{M}$ of $\mathbb{R}^M$ with a certain curvature condition.
Our result generalises earlier work of Huang for hypersurfaces [J.-J. Huang,
The density of rational points near hypersurfaces, Duke Math. J. 169 (2020),
2045--2077.], as our curvature condition reduces to Gaussian curvature being
bounded away from $0$ when $M - \dim \mathcal{M} = 1$. An interesting feature of
our result is that the asymptotic formula holds beyond the conjectured range of
the distance to $\mathcal{M}$. Furthermore, we obtain an upper bound for the
number of rational points on $\mathcal{M}$ with additional power saving to the
bound in the analogue of Serre's dimension growth conjecture for compact
submanifolds of $\mathbb{R}^M$ when $M - \dim \mathcal{M} > 1$.
|
In recent years, locomotion mechanisms exhibited by vertebrate animals have
been the inspiration for the improvement in the performance of robotic systems.
These mechanisms include the adaptability of their locomotion to any change
registered in the environment through their biological sensors. In this regard,
we aim to replicate such kind of adaptability in legged robots through a
Spiking Central Pattern Generator. This Spiking Central Pattern Generator
generates different locomotion (rhythmic) patterns which are driven by an
external stimulus, that is, the output of a Force Sensitive Resistor connected
to the robot to provide feedback. The Spiking Central Pattern Generator
consists of a network of five populations of Leaky Integrate-and-Fire neurons
designed with a specific topology in such a way that the rhythmic patterns can
be generated and driven by the aforementioned external stimulus. Therefore, the
locomotion of the end robotic platform (any-legged robot) can be adapted to the
terrain by using any sensor as input. The Spiking Central Pattern Generator
with adaptive learning has been numerically validated at the software and hardware levels, using the Brian 2 simulator and the SpiNNaker neuromorphic platform for the latter. In particular, our experiments clearly show an adaptation in the
oscillation frequencies between the spikes produced in the populations of the
Spiking Central Pattern Generator while the input stimulus varies. To validate
the robustness and adaptability of the Spiking Central Pattern Generator, we
have performed several tests by varying the output of the sensor. These
experiments were carried out in Brian 2 and SpiNNaker; both implementations
showed a similar behavior with a Pearson correlation coefficient of 0.905.
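The basic mechanism the CPG exploits, an external stimulus mapped to an input current that modulates the firing frequency of Leaky Integrate-and-Fire neurons, can be sketched with a toy single-neuron simulation. This plain NumPy sketch is only illustrative; it does not reproduce the five-population topology, the adaptive learning, or the Brian 2 and SpiNNaker implementations, and all parameters are assumptions.

```python
# Toy LIF neuron (Euler integration): a larger input current, standing in for a
# stronger force-sensor reading, yields a higher firing rate.
import numpy as np

def lif_spike_rate(i_ext, t_sim=1.0, dt=1e-4, tau=0.02, v_rest=0.0, v_thresh=1.0):
    v, spikes = v_rest, 0
    for _ in range(int(t_sim / dt)):
        v += dt / tau * (-(v - v_rest) + i_ext)   # leaky integration of the input
        if v >= v_thresh:                         # threshold crossing: spike and reset
            spikes += 1
            v = v_rest
    return spikes / t_sim                         # firing rate in Hz

for sensor_reading in [1.2, 1.5, 2.0, 3.0]:       # stronger stimulus -> faster rhythm
    print(sensor_reading, lif_spike_rate(sensor_reading))
```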
|
The content of two additional Ward identities exhibited by the $U(1)$ Higgs
model is exploited. These novel Ward identities can be derived only when a pair
of local composite operators providing a gauge invariant setup for the Higgs
particle and the massive vector boson is introduced in the theory from the
beginning. Among the results obtained from the above mentioned Ward identities,
we underline a new exact relationship between the stationary condition for the
vacuum energy, the vanishing of the tadpoles and the vacuum expectation value
of the gauge invariant scalar operator. We also present a characterization of
the two-point correlation function of the composite operator corresponding to
the vector boson in terms of the two-point function of the elementary gauge
fields. Finally, a discussion on the connection between the cartesian and the
polar parametrization of the complex scalar field is presented in the light of
the Equivalence Theorem. The latter can in the current case be understood in
the language of a constrained cohomology, which also allows one to rewrite the action in terms of the aforementioned gauge invariant operators. We also
comment on the diminished role of the global $U(1)$ symmetry and its breaking.
|
A quantum stabilizer code over GF$(q)$ corresponds to a classical additive
code over GF$(q^2)$ that is self-orthogonal with respect to a symplectic inner
product. We study the decoding of quantum low-density parity-check (LDPC) codes
over binary finite fields GF$(q=2^l)$ by the sum-product algorithm, also known
as belief propagation (BP). Conventionally, a message in a nonbinary BP for
quantum codes over GF$(2^l)$ represents a probability vector over GF$(2^{2l})$,
inducing high decoding complexity. In this paper, we explore the property of
the symplectic inner product and show that scalar messages suffice for BP
decoding of nonbinary quantum codes, rather than vector messages necessary for
the conventional BP. Consequently, we propose a BP decoding algorithm for
quantum codes over GF$(2^l)$ by passing scalar messages so that it has low
computation complexity. The algorithm is specified in the log domain using log-likelihood ratios (LLRs) of the channel statistics to achieve a low
implementation cost. Moreover, techniques such as message normalization or
offset can be naturally applied in this algorithm to mitigate the effects of
short cycles to improve BP performance. This is important for nonbinary quantum
codes since they may have more short cycles compared to binary quantum codes.
Several computer simulations are provided to demonstrate these advantages. The
scalar-based strategy can also be used to improve the BP decoding of classical
linear codes over GF$(2^l)$ with many short cycles.
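For background on the log-domain machinery referred to above, the sketch below runs standard min-sum BP with a normalization factor on a toy classical binary code. It is not the scalar-message nonbinary algorithm proposed in the paper; the parity-check matrix, channel LLRs, and normalization factor are assumptions.

```python
# Log-domain min-sum BP with message normalization on a small classical binary
# code (background illustration only).
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],      # (7,4) Hamming parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
m, n = H.shape
llr = np.array([2.0, 1.5, -0.3, 1.0, 0.8, 1.2, 0.6])   # noisy view of the all-zero codeword
norm = 0.8                                              # normalization factor (assumption)
msg_vc = np.tile(llr, (m, 1)) * H                       # variable-to-check messages on edges

for _ in range(20):
    msg_cv = np.zeros_like(msg_vc, dtype=float)
    for i in range(m):                                  # check-to-variable: normalized min-sum
        idx = np.flatnonzero(H[i])
        for j in idx:
            others = [k for k in idx if k != j]
            sign = np.prod(np.sign(msg_vc[i, others]))
            msg_cv[i, j] = norm * sign * np.min(np.abs(msg_vc[i, others]))
    total = llr + msg_cv.sum(axis=0)                    # posterior LLRs
    msg_vc = (total - msg_cv) * H                       # extrinsic variable-to-check messages
    hard = (total < 0).astype(int)
    if not np.any((H @ hard) % 2):                      # all parity checks satisfied
        break
print(hard)                                             # decoded codeword
```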
|
Collective excitations in topologically non-trivial systems have attracted
considerable attention in recent years. Here we study plasmons in the
Su-Schrieffer-Heeger model whose low-energy electronic band is only partially
filled, such that the system is metallic. Using the random phase approximation, we calculate the intra- and inter-band polarization functions and determine the bulk plasmonic dispersion from the dielectric function. We find that the sub-lattice basis states strongly affect the
polarization functions and therefore control the system's plasmonic
excitations. By varying the real-space separation of these local orbitals, one
can thus selectively enhance or suppress the plasmonic energies via a tunable
trade-off between intra-band and inter-band screening processes. Specifically,
this mechanism can be used to stabilize undamped high energy plasmons that have
already been reported in related models. We propose scenarios on how to control
and observe these effects in experiments.
|
Totally symmetric sets are a recently introduced tool for studying
homomorphisms between groups. In this paper, we give full classifications of
totally symmetric sets in certain families of groups and bound their sizes in
others. As a consequence, we derive restrictions on possible homomorphisms
between these groups. One sample application of our results is that any
homomorphism of a braid group to a direct product of solvable groups must have
cyclic image.
|
We study in this work the 2D dynamics of an experimental system of
disk-shaped rotors, fluidized by a turbulent upflow. Our experiments show a
complex chirality behavior. In particular, as average kinetic energy increases,
the system evolves from positive chirality (one vortex rotating in the same direction as the particles spin), to complex chirality (several vortices of both signs), and then to negative chirality (one vortex rotating in the opposite sense to the particle spin). We find that these transitions are determined by the combined action of heat dissipation at the boundaries and statistical correlations between particle spins and translational velocities. Moreover, we show that the decay to negative chirality is produced as a consequence of particle spin synchronization. Therefore, we elucidate a control mechanism of chirality, via the adjustment of spin, in a system of active rotors.
|
A polynomial threshold function (PTF) $f:\mathbb{R}^n \rightarrow \mathbb{R}$
is a function of the form $f(x) = \mathsf{sign}(p(x))$ where $p$ is a
polynomial of degree at most $d$. PTFs are a classical and well-studied
complexity class with applications across complexity theory, learning theory,
approximation theory, quantum complexity and more. We address the question of
designing pseudorandom generators (PRG) for polynomial threshold functions
(PTFs) in the gaussian space: design a PRG that takes a seed of a few bits of randomness and outputs an $n$-dimensional vector whose distribution is
indistinguishable from a standard multivariate gaussian by a degree $d$ PTF.
Our main result is a PRG that takes a seed of $d^{O(1)}\log ( n /
\varepsilon)\log(1/\varepsilon)/\varepsilon^2$ random bits with output that
cannot be distinguished from the $n$-dimensional gaussian distribution with
advantage better than $\varepsilon$ by degree $d$ PTFs. The best previous
generator due to O'Donnell, Servedio, and Tan (STOC'20) had a quasi-polynomial
dependence (i.e., seedlength of $d^{O(\log d)}$) in the degree $d$. Along the
way we prove a few nearly-tight structural properties of restrictions of PTFs
that may be of independent interest.
|
Bound-states-in-the-continuum (BIC) is a wave-mechanical concept that generates resonances with vanishing spectral linewidths. It has many practical applications in optics, such as narrow-band filters, mirror-less lasing, and
nonlinear harmonic generation. As true BIC optical modes are non-radiative and confined to the near field of nanostructures, they cannot be excited using
propagating light. As a result, their direct experimental observation has been
elusive. Rather than using light, we demonstrate probing BIC modes on arrays of
silicon nanoantennas using a focused beam of electrons in a transmission
electron microscope. By combining cathodoluminescence (CL) and monochromated
electron energy-loss spectroscopy (EELS) with controlled nanofabrication, we
provide direct experimental evidence of "true" BIC modes, and demonstrate a BIC
mode in the visible spectrum at 720 nm. The ability to observe and quantify
these guided resonances with a spatial precision more than two orders of
magnitude higher than previous far-field measurements allows the probing of
individual elements in the nano-antenna arrays. The high-resolution
experimental results are supported by numerical simulations as well as
multipolar decomposition analysis, allowing us to demonstrate that the coherent
interaction length of the quasi-BIC resonance requires at least 6 neighboring
antenna elements, achieving over 60 times higher emissivity than for
unpatterned silicon.
|
Let $G$ be a finite abelian group viewed as a $\mathbb{Z}$-module and let $\mathcal{G} = (V, E)$ be a simple graph. In this paper, we consider a graph $\Gamma(G)$ called the \textit{group-annihilator} graph. The vertices of $\Gamma(G)$ are all elements of $G$, and two distinct vertices $x$ and $y$ are adjacent in $\Gamma(G)$ if and only if $[x : G][y : G]G = \{0\}$, where $x, y\in G$ and $[x : G] = \{r\in\mathbb{Z} : rG \subseteq \mathbb{Z}x\}$ is an ideal of the ring $\mathbb{Z}$. We discuss in detail the graph structure realised by the group $G$. Moreover, we study the creation sequence, hyperenergeticity and hypoenergeticity of group-annihilator graphs. Finally, we conclude the paper with a discussion on the Laplacian eigenvalues of the group-annihilator graph. We show that the Laplacian eigenvalues are representatives of orbits of the group action $Aut(\Gamma(G)) \times G \rightarrow G$.
|
In this paper, we outline an approach for automatic generation of challenging
road networks for virtual testing of an automated lane keep system. Based on a
set of control points, we construct a parametric curve that represents a road
network, which defines the dynamic driving task an automated lane keep system
equipped vehicle has to perform. Changing control points has global influence
on the resulting road geometry. Our approach uses search to find a set of
control points that results in a challenging road geometry, eventually forcing
the vehicle to leave the intended path. We evaluated our approach in three
different search configurations regarding test efficiency and practical
applicability for automatic virtual testing of an automated lane keep system.
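The overall idea, a road centerline represented as a parametric curve through control points and a search that perturbs those control points towards more challenging geometry, can be sketched as follows. The spline type, the curvature-based stand-in fitness (the actual approach evaluates the lane keep system in simulation), and all parameters are assumptions.

```python
# Road generation sketch: a parametric spline through control points plus random
# hill-climbing that increases the maximum curvature (stand-in for "challenging").
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)

def max_curvature(control_points, samples=500):
    t = np.arange(len(control_points))
    cs = CubicSpline(t, control_points, axis=0)        # parametric curve (x(t), y(t))
    ts = np.linspace(t[0], t[-1], samples)
    d1, d2 = cs(ts, 1), cs(ts, 2)                      # first and second derivatives
    kappa = (np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
             / (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5)
    return kappa.max()

points = np.column_stack([np.linspace(0, 200, 8), np.zeros(8)])  # start from a straight road
best = max_curvature(points)
for _ in range(300):
    candidate = points + rng.normal(0, 5, size=points.shape)     # perturb control points
    score = max_curvature(candidate)
    if score > best:                                             # keep the more challenging road
        points, best = candidate, score
print(round(best, 3))
```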
|
LHAASO detected 12 gamma-ray sources above 100 TeV, which are possible origins of Galactic cosmic rays. We summarize the neutrino measurements by
IceCube and ANTARES in the vicinity of LHAASO sources to constrain the
contribution of hadronic gamma-rays in these sources. We find that the current observations constrain the hadronic gamma-rays to contribute no more than ~60% of the gamma-rays from the Crab Nebula. Gamma-rays from two LHAASO sources,
LHAASO J1825-1326 and LHAASO J1907+0626, are dominated by leptonic components
up to ~200 TeV, under the hypotheses in the analysis by IceCube. The
uncertainties of the constraint on the hadronic gamma-ray emission are
discussed. We also constrain the total 100 TeV gamma-ray emission from TeV PWNe, relying on the remarkable sensitivity of LHAASO at those energies.
|
CAV platooning technology has received considerable attention in the past few
years, driven by the next generation smart transportation systems. Unlike most
of the existing platooning methods that focus on linear vehicle dynamics of
CAVs, this paper considers nonlinear vehicle dynamics and develops fully
distributed optimization based CAV platooning control schemes via the model
predictive control (MPC) approach for a possibly heterogeneous CAV platoon. The
nonlinear vehicle dynamics leads to several major difficulties in distributed
algorithm development and control analysis and design. Specifically, the
underlying MPC optimization problem is nonconvex and densely coupled. Further,
the closed loop dynamics becomes a time-varying nonlinear system subject to
external perturbations, making closed loop stability analysis rather
complicated. To overcome these difficulties, we formulate the underlying MPC
optimization problem as a locally coupled, albeit nonconvex, optimization
problem and develop a sequential convex programming based fully distributed
scheme for a general MPC horizon. Such a scheme can be effectively implemented
for real-time computing using operator splitting methods. To analyze the closed
loop stability, we apply various tools from global implicit function theorems,
stability of linear time-varying systems, and Lyapunov theory for
input-to-state stability to show that the closed loop system is locally
input-to-state stable uniformly in all small coefficients pertaining to the
nonlinear dynamics. Numerical tests on homogeneous and heterogeneous CAV
platoons demonstrate the effectiveness of the proposed fully distributed
schemes and CAV platooning control.
|
The Continual Learning (CL) problem involves performing well on a sequence of
tasks under limited compute. Current algorithms in the domain are either slow,
offline, or sensitive to hyper-parameters. La-MAML, an optimization-based meta-learning algorithm, claims to be better than other replay-based, prior-based and meta-learning based approaches. According to the MER paper [1],
metrics to measure performance in the continual learning arena are Retained
Accuracy (RA) and Backward Transfer-Interference (BTI). La-MAML claims to perform better on these metrics when compared to the SOTA in the domain. This is the main claim of the paper, which we verify in this report.
|
There is a large difference in skin tone between dark- and light-skinned people. Despite this fact, for most face recognition tasks, almost all classical state-of-the-art models are trained on datasets containing an overwhelming majority of light-skinned face images. It is tedious to collect a huge amount of data for dark-skinned faces and train a model from scratch. In this paper, we apply transfer learning on VGGFace to check how well it recognises dark-skinned, mainly Ethiopian, faces. The dataset is low-resource and of low quality. Our experimental results show an accuracy above 95\%, which indicates that transfer learning works in such settings.
|
In this paper we begin mapping out the space of rank-2 $\mathcal{N}=2$
superconformal field theories (SCFTs) in four dimensions. This represents an
ideal set of theories which can be potentially classified using purely quantum
field-theoretic tools, thus providing a precious case study to probe the
completeness of the current understanding of SCFTs, primarily derived from
string theory constructions. Here, we collect and systematize a large amount of
field theoretic data characterizing each theory. We also provide a detailed
description of each case and determine the theories' Coulomb, Higgs and Mixed
branch stratification. The theories naturally organize themselves into series connected by RG flows, but these series have gaps, suggesting that our current understanding is not complete.
|
Let the group $G$ act transitively on the finite set $\Omega$. We show that
random Schreier graphs on $O(\log|\Omega|)$ elements are expanders with high
probability, magnifying a famous theorem of Alon and Roichman. On the other hand, depending on the particular action of $G$ on $\Omega$, we give a lower
bound on the number of elements which are necessary to provide expansion. We
apply this method to estimate the spectral gap in the case where $G$ is
nilpotent.
|
We present lambda layers -- an alternative framework to self-attention -- for
capturing long-range interactions between an input and structured contextual
information (e.g. a pixel surrounded by other pixels). Lambda layers capture
such interactions by transforming available contexts into linear functions,
termed lambdas, and applying these linear functions to each input separately.
Similar to linear attention, lambda layers bypass expensive attention maps, but
in contrast, they model both content and position-based interactions which
enables their application to large structured inputs such as images. The
resulting neural network architectures, LambdaNetworks, significantly
outperform their convolutional and attentional counterparts on ImageNet
classification, COCO object detection and COCO instance segmentation, while
being more computationally efficient. Additionally, we design LambdaResNets, a
family of hybrid architectures across different scales, that considerably
improves the speed-accuracy tradeoff of image classification models.
LambdaResNets reach excellent accuracies on ImageNet while being 3.2 - 4.4x
faster than the popular EfficientNets on modern machine learning accelerators.
When training with an additional 130M pseudo-labeled images, LambdaResNets
achieve up to a 9.5x speed-up over the corresponding EfficientNet checkpoints.
|
Beamforming technology is widely used in millimeter wave systems to combat
path losses, and beamformers are usually selected from a predefined codebook.
Unfortunately, the traditional codebook design neglects the beam squint effect,
and this will cause severe performance degradation when the bandwidth is large.
In this letter, we consider that a codebook with fixed size is adopted in the
wideband beamforming system. First, we analyze how beam squint affects system
performance when all beams have the same width. The expression of average
spectrum efficiency is derived based on the ideal beam pattern. Next, we
formulate the optimization problem to design the optimal codebook. Simulation
results demonstrate that the proposed codebook deals with beam squint by
spreading the beam coverage and significantly mitigates the performance
degradation.
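For context, the size of the beam squint in a phase-shifter-based uniform linear array is commonly summarized by the following standard relation, given here as assumed textbook background rather than a result of the letter.

```latex
% A beam designed at the carrier f_c to point towards \theta_0 points, at an
% in-band frequency f, towards \theta(f) with
\[
  \sin\theta(f) \;=\; \frac{f_c}{f}\,\sin\theta_0 ,
\]
% so the pointing error grows with the fractional deviation |f - f_c|/f_c. This is
% why a fixed-size codebook must spread its beam coverage to preserve the average
% spectrum efficiency across a large bandwidth.
```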
|
While the predictive performance of modern statistical dependency parsers
relies heavily on the availability of expensive expert-annotated treebank data,
not all annotations contribute equally to the training of the parsers. In this
paper, we attempt to reduce the number of labeled examples needed to train a
strong dependency parser using batch active learning (AL). In particular, we
investigate whether enforcing diversity in the sampled batches, using
determinantal point processes (DPPs), can improve over their diversity-agnostic
counterparts. Simulation experiments on an English newswire corpus show that
selecting diverse batches with DPPs is superior to strong selection strategies
that do not enforce batch diversity, especially during the initial stages of
the learning process. Additionally, our diversity-aware strategy is robust under
a corpus duplication setting, where diversity-agnostic sampling strategies
exhibit significant degradation.
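One simple way to enforce such batch diversity, greedy log-determinant maximization over a similarity kernel as an approximation to DPP MAP inference, is sketched below. The RBF kernel, its bandwidth, and the random candidate embeddings are assumptions and do not reproduce the paper's exact sampler.

```python
# Greedy log-det selection over an RBF kernel: an approximation to DPP MAP
# inference for picking a diverse batch of unlabeled examples.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                        # candidate pool (e.g., sentence embeddings)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
L = np.exp(-sq_dists / 2.0)                           # RBF similarity kernel (bandwidth = 1)

def greedy_dpp_batch(L, batch_size):
    selected = []
    for _ in range(batch_size):
        best_item, best_gain = None, -np.inf
        for i in range(L.shape[0]):
            if i in selected:
                continue
            idx = selected + [i]
            _, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)] + 1e-8 * np.eye(len(idx)))
            if logdet > best_gain:
                best_item, best_gain = i, logdet
        selected.append(best_item)
    return selected

print(greedy_dpp_batch(L, batch_size=10))             # indices of a diverse batch to annotate
```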
|
We present BIEBER (Byte-IdEntical Binary parsER), the first system to model
and regenerate a full working parser from instrumented program executions. To
achieve this, BIEBER exploits the regularity (e.g., header fields and
array-like data structures) that is commonly found in file formats. Key
generalization steps derive strided loops that parse input file data and
rewrite concrete loop bounds with expressions over input file header bytes.
These steps enable BIEBER to generalize parses of specific input files to
obtain parsers that operate over input files of arbitrary size. BIEBER also
incrementally and efficiently infers a decision tree that reads file header
bytes to route input files of different types to inferred parsers of the
appropriate type. The inferred parsers and decision tree are expressed in an
IR; separate backends (C and Perl in our prototype) can translate the IR into
the same language as the original program (for a safer drop-in replacement), or
automatically port to a different language. An empirical evaluation shows that
BIEBER can successfully regenerate parsers for six file formats (waveform audio
[1654 files], MT76x0 .BIN firmware containers [5 files], OS/2 1.x bitmap images
[9 files], Windows 3.x bitmaps [9971 files], Windows 95/NT4 bitmaps [133
files], and Windows 98/2000 bitmaps [859 files]), correctly parsing 100% (>=
99.98% when using standard held-out cross-validation) of the corresponding
corpora. The regenerated parsers contain automatically inserted safety checks
that eliminate common classes of errors such as memory errors. We find that
BIEBER can help reverse-engineer file formats, because it automatically
identifies predicates for the decision tree that relate to key semantics of the
file format. We also discuss how BIEBER helped us detect and fix two new bugs
in stb_image as well as independently rediscover and fix a known bug.
|
Irreversibility is usually captured by a comparison between the process that
happens and a corresponding "reverse process". In the last decades, this
comparison has been extensively studied through fluctuation relations. Here we
revisit fluctuation relations from the standpoint, suggested decades ago by
Watanabe, that the comparison should involve the prediction and the
retrodiction on the unique process, rather than two processes. We identify a
necessary and sufficient condition for a retrodictive reading of a fluctuation
relation. The retrodictive narrative also brings to the fore the possibility of
deriving fluctuation relations based on various statistical divergences, and
clarifies some of the traditional assumptions as arising from the choice of a
reference prior.
|
Let $GP(q,d)$ be the $d$-Paley graph defined on the finite field
$\mathbb{F}_q$. It is notoriously difficult to improve the trivial upper bound
$\sqrt{q}$ on the clique number of $GP(q,d)$. In this paper, we investigate the
connection between Gauss sums over a finite field and the maximum cliques of
their corresponding generalized Paley graphs. We show that the trivial upper
bound on the clique number of $GP(q,d)$ is tight if and only if $d \mid
(\sqrt{q}+1)$, which strengthens the previous related results by
Broere-D\"oman-Ridley and Schneider-Silva. We also obtain a new simple proof of
Stickelberger's theorem on evaluating semi-primitive Gauss sums.
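For small prime fields the objects above are easy to experiment with numerically; the sketch below builds $GP(q,d)$ and computes its clique number by brute force to compare against the trivial bound $\sqrt{q}$. The choice of $q$ and $d$ (including the requirement that $-1$ be a $d$-th power so the graph is undirected) is an assumption of the sketch, which does not cover non-prime $q$.

```python
# Generalized Paley graph GP(q, d) over a prime field: x ~ y iff x - y is a
# nonzero d-th power; clique number computed by brute force with networkx.
import math
import networkx as nx

def generalized_paley(q, d):
    powers = {pow(x, d, q) for x in range(1, q)}      # nonzero d-th powers in F_q
    G = nx.Graph()
    G.add_nodes_from(range(q))
    for x in range(q):
        for y in range(x + 1, q):
            if (x - y) % q in powers:
                G.add_edge(x, y)
    return G

q, d = 13, 2                                          # classical Paley graph of order 13
G = generalized_paley(q, d)
clique_number = max(len(c) for c in nx.find_cliques(G))
print(clique_number, math.isqrt(q))                   # clique number vs. trivial bound ~ sqrt(q)
```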
|
The nearest prototype classification is a less computationally intensive
replacement for the $k$-NN method, especially when large datasets are
considered. In metric spaces, centroids are often used as prototypes to
represent whole clusters. The selection of cluster prototypes in non-metric
spaces is more challenging as the idea of computing centroids is not directly
applicable.
In this paper, we present CRS, a novel method for selecting a small yet
representative subset of objects as a cluster prototype. Memory and
computationally efficient selection of representatives is enabled by leveraging
the similarity graph representation of each cluster created by the NN-Descent
algorithm. CRS can be used in an arbitrary metric or non-metric space because
of the graph-based approach, which requires only a pairwise similarity measure.
As we demonstrate in the experimental evaluation, our method outperforms the
state-of-the-art techniques on multiple datasets from different domains.
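A simplified sketch of the graph-based selection idea is shown below. It uses a brute-force k-NN similarity graph in place of NN-Descent and a greedy coverage heuristic, so it only illustrates the flavour of CRS rather than the exact algorithm; the data, similarity, neighbourhood size, and coverage target are assumptions.

```python
# Prototype selection sketch: build a k-NN similarity graph of a cluster and
# greedily pick representatives that cover the most still-uncovered members.
import numpy as np

rng = np.random.default_rng(0)
cluster = rng.normal(size=(300, 8))                    # objects of one cluster

def similarity(a, b):                                  # any pairwise (non-)metric similarity
    return -np.linalg.norm(a - b)

k = 10
sims = np.array([[similarity(a, b) for b in cluster] for a in cluster])
np.fill_diagonal(sims, -np.inf)                        # exclude self-similarity
knn = np.argsort(-sims, axis=1)[:, :k]                 # k most similar neighbours per object

def select_representatives(knn, coverage=0.95):
    n = knn.shape[0]
    covered, reps = np.zeros(n, dtype=bool), []
    while covered.mean() < coverage:
        gains = [(~covered[knn[i]]).sum() + (not covered[i]) for i in range(n)]
        best = int(np.argmax(gains))
        reps.append(best)
        covered[best] = True
        covered[knn[best]] = True
    return reps

reps = select_representatives(knn)
print(len(reps), "representatives cover ~95% of the cluster")
```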
|
We report on the results of multi-wavelength follow-up observations with
Gemini, VLA, and ATCA, to search for a host galaxy and any persistent radio
emission associated with FRB 180309. This FRB is among the most luminous FRB
detections to date, with a luminosity of $> 8.7\times 10^{32}$ erg Hz$^{-1}$ at
the dispersion-based redshift upper limit of 0.32. We used the
high-significance detection of FRB 180309 with the Parkes Telescope and a beam
model of the Parkes Multibeam Receiver to improve the localization of the FRB
to a region spanning approximately $\sim2'\times2'$. We aimed to seek bright
galaxies within this region to determine the strongest candidates as the
originator of this highly luminous FRB. We identified optical sources within
the localization region above our r-band magnitude limit of 24.27, fourteen of
which have photometric redshifts whose fitted mean is consistent with the
redshift upper limit ($z < 0.32$) of our FRB. Two of these galaxies are
coincident with marginally detected "persistent" radio sources of flux density
24.3$\mu$Jy beam$^{-1}$ and 22.1$\mu$Jy beam$^{-1}$ respectively. Our
redshift-dependent limit on the luminosity of any associated persistent radio
source is comparable to the luminosity limits for other localized FRBs. We
analyze several properties of the candidate hosts we identified, including
chance association probability, redshift, and presence of radio emission; however, it remains possible that any of these galaxies could be the host of this FRB. Follow-up spectroscopy on these objects to explore their H$\alpha$
emission and ionization contents, as well as to obtain more precisely measured
redshifts, may be able to isolate a single host for this luminous FRB.
|
Spin waves are promising chargeless information carriers for future, energy-efficient beyond-CMOS systems. Among their many advantages are the ease of achieving nonlinearity and the variety of possible interactions and excitation types. Although the rapidly developing magnonic research has already
yielded impressive realizations, multi-mode nonlinear effects, particularly
propagating waves and their nanoscale realizations, are still an open research
problem. We study theoretically the dynamic interactions of the spin waves
confined to the edge of a thin ferromagnetic film with the spin-wave beam
incident at this edge. We found inelastically scattered spin-wave beams at frequencies increased and decreased by the frequency of the edge spin wave, relative to the specularly reflected beam. We observed a strong dependence of
the angular shift of the inelastic scattered spin-wave beam on the edge-mode
frequency, which allowed us to propose a magnonic demultiplexing of the signal
encoded in spin waves propagating along the edge. Since dynamic magnetostatic interactions, which are ubiquitous in spin-wave dynamics, are decisive in this process, the presented effects should also be realizable in other configurations and should find use in magnonic systems.
|
Spectral clustering is a popular algorithm that clusters points using the
eigenvalues and eigenvectors of Laplacian matrices derived from the data. For
years, spectral clustering has often been regarded as working mysteriously. This paper explains spectral clustering by dividing it into two categories based on whether the underlying graph is fully connected or not. For a fully connected graph, this
paper demonstrates the dimension reduction part by offering an objective
function: the covariance between the original data points' similarities and the
mapped data points' similarities. For a multi-connected graph, this paper
proves that with a proper $k$, the first $k$ eigenvectors are the indicators of
the connected components. This paper also proves there is an equivalence
between spectral embedding and PCA.
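A minimal sketch of the pipeline being analyzed (affinity matrix, normalized graph Laplacian, first $k$ eigenvectors as the embedding, then k-means) is given below; the synthetic data, kernel bandwidth, and $k$ are assumptions.

```python
# Spectral clustering sketch: RBF affinity -> normalized Laplacian -> first k
# eigenvectors -> k-means on the spectral embedding.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ((0, 0), (3, 0), (0, 3))])

sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
W = np.exp(-sq / (2 * 0.5 ** 2))                        # RBF affinity (graph weights)
d = W.sum(axis=1)
L_sym = np.eye(len(X)) - (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]  # normalized Laplacian

k = 3
_, vecs = eigh(L_sym, subset_by_index=[0, k - 1])       # eigenvectors of the k smallest eigenvalues
U = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # row-normalized spectral embedding
labels = KMeans(n_clusters=k, n_init=10).fit_predict(U)
print(np.bincount(labels))                              # three groups of ~50 points each
```

When the similarity graph splits into $k$ connected components, the $k$ smallest eigenvalues are exactly zero and the corresponding eigenvectors act as component indicators, which is the multi-connected case discussed above.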
|
The new two-dimensional (2D) kagome superconductor CsV$_3$Sb$_5$ has
attracted much recent attention due to the coexistence of superconductivity,
charge order, topology and kagome physics. A key issue in this field is to
unveil the unique reconstructed electronic structure, which successfully
accommodates different orders and interactions to form a fertile ground for
emergent phenomena. Here, we report angle-resolved photoemission spectroscopy
(ARPES) evidence for two distinct band reconstructions in CsV$_3$Sb$_5$. The
first one is characterized by the appearance of a new electron energy band at low temperature. The new band is theoretically reproduced when the three-dimensionality of the charge order is taken into account via band-folding along the out-of-plane direction. The second reconstruction is identified as a surface-induced orbital-selective shift of the electron energy band. Our results provide the first evidence for the three-dimensionality of the charge order in the single-particle spectral function, highlighting the importance of long-range
out-of-plane electronic correlations in this layered kagome superconductor.
They also point to the feasibility of orbital-selective control of the band
structure via surface modification, which would open a new avenue for
manipulating exotic phenomena in this system, including superconductivity.
|
Transformer networks are effective at modeling long-range contextual
information and have recently demonstrated exemplary performance in the natural
language processing domain. Conventionally, the temporal action proposal
generation (TAPG) task is divided into two main sub-tasks: boundary prediction
and proposal confidence prediction, which rely on the frame-level dependencies
and proposal-level relationships separately. To capture the dependencies at
different levels of granularity, this paper presents an intuitive, unified
temporal action proposal generation framework with original Transformers,
called TAPG Transformer, which consists of a Boundary Transformer and a
Proposal Transformer. Specifically, the Boundary Transformer captures long-term
temporal dependencies to predict precise boundary information and the Proposal
Transformer learns the rich inter-proposal relationships for reliable
confidence evaluation. Extensive experiments are conducted on two popular
benchmarks: ActivityNet-1.3 and THUMOS14, and the results demonstrate that TAPG
Transformer outperforms state-of-the-art methods. Equipped with the existing
action classifier, our method achieves remarkable performance on the temporal
action localization task. Codes and models will be available.
|
Infrastructure systems, such as power, transportation, telecommunication, and
water systems, are composed of multiple components which are interconnected and
interdependent to produce and distribute essential goods and services. Thus, the robustness of infrastructure systems against disturbances is crucial for the
durable performance of modern societies. Multilayer networks have been used to
model the multiplicity and interrelation of infrastructure systems and
percolation theory is the most common approach to quantify the robustness of
such networks. This survey systematically reviews literature published between
2010 and 2021, on applying percolation theory to assess the robustness of
infrastructure systems modeled as multilayer networks. We discuss all network properties applied to build infrastructure models. Among these properties, interdependency strength and community structure were the most common, whilst very few studies considered realistic attributes of infrastructure
systems such as directed links and feedback conditions. The review highlights
that the properties produced approximately similar model outcomes, in terms of
detecting improvement or deterioration in the robustness of multilayer
infrastructure networks, with few exceptions. Most of the studies focused on highly simplified synthetic models rather than models built from real datasets.
Thus, this review suggests analyzing multiple properties in a single model to
assess whether they boost or weaken the impact of each other. In addition, the
effect size of different properties on the robustness of infrastructure systems should be quantified. This can support the design and planning of robust
infrastructure systems by arranging and prioritizing the most effective
properties.
|
We study the energy-density dynamics at finite momentum of the
two-dimensional Kitaev spin-model on the honeycomb lattice. Due to
fractionalization of magnetic moments, the energy relaxation occurs through
mobile Majorana matter, coupled to a static $\mathbb{Z}_2$ gauge field. At
finite temperatures, the $\mathbb{Z}_2$ flux excitations act as an emergent
disorder, which strongly affects the energy dynamics. We show that sufficiently
far above the flux proliferation temperature, but not yet in the classical
regime, gauge disorder modifies the coherent low-temperature energy-density
dynamics into a form which is almost diffusive, with hydrodynamic momentum
scaling of a diffusion-kernel, which however remains retarded, primarily due to
the presence of two distinct relaxation channels of particle-hole and
particle-particle nature. Relations to thermal conductivity are clarified. Our
analysis is based on complementary calculations in the low-temperature
homogeneous gauge and a mean-field treatment of thermal gauge fluctuations,
valid at intermediate and high temperatures.
|
The electronic behaviour in graphene under arbitrary uniaxial deformations,
such as foldings or flexural fields, is studied by including in the Dirac
equation pseudo-electromagnetic fields. General foldings are thus studied by
showing that uniaxial deformations can be considered as pseudo-magnetic fields
in the Coulomb gauge. This allows us to give an expression for the wavefunctions of the Fermi (zero) energy modes. For random deformations, contact is made with previous works on the quantum Hall effect under random magnetic fields, showing that the density of states has a power-law behaviour and that the zero-energy-mode wavefunctions are multifractal. This hints at an unusual electron velocity distribution. Also, it is shown that a strong Aharonov-Bohm pseudo-effect is produced. For more general, non-uniaxial flexural strain, it is not possible to use the Coulomb gauge. The results presented here allow tailoring uniaxial deformations of graphene to achieve specific wavefunctions and electronic properties.
|
The direct linearisation framework is presented for the two-dimensional Toda
equations associated with the infinite-dimensional Lie algebras $A_\infty$,
$B_\infty$ and $C_\infty$, as well as the Kac--Moody algebras $A_{r}^{(1)}$,
$A_{2r}^{(2)}$, $C_{r}^{(1)}$ and $D_{r+1}^{(2)}$ for arbitrary integers
$r\in\mathbb{Z}^+$, from the aspect of a set of linear integral equations in a
certain form. Such a scheme not only provides a unified perspective to
understand the underlying integrability structure, but also induces the direct
linearising type solution potentially leading to the universal solution space,
for each class of the two-dimensional Toda system. As particular applications
of this framework to the two-dimensional Toda lattices, we rediscover the Lax
pairs and the adjoint Lax pairs and simultaneously construct the generalised
Cauchy matrix solutions.
|