We study the emptiness and $\lambda$-reachability problems for unary and
binary Probabilistic Finite Automata (PFA) and characterise the complexity of
these problems in terms of the degree of ambiguity of the automaton and the
size of its alphabet. Our main result is that emptiness and
$\lambda$-reachability are solvable in EXPTIME for polynomially ambiguous unary
PFA and, if in addition the transition matrix is over $\{0, 1\}$, they are in
NP. In contrast to the Skolem-hardness of the $\lambda$-reachability and
emptiness problems for exponentially ambiguous unary PFA, we show that these
problems are NP-hard even for finitely ambiguous unary PFA. For binary
polynomially ambiguous PFA with fixed and commuting transition matrices, we
prove NP-hardness of the $\lambda$-reachability (dimension $9$), nonstrict
emptiness (dimension $37$) and strict emptiness (dimension $40$) problems.
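
To fix intuition: a unary PFA is specified by a single row-stochastic matrix $M$, an initial distribution $u$, and a final-state indicator vector $v$; emptiness asks whether $u^\top M^n v > \lambda$ (or $\geq \lambda$) for some $n$, and $\lambda$-reachability whether equality is attained. Below is a minimal brute-force scan up to a cutoff, purely for illustration; the paper's decision procedures are far subtler, and no finite cutoff is sound in general.

```python
import numpy as np

def unary_pfa_scan(M, u, v, lam, n_max=1000):
    """Scan acceptance probabilities u^T M^n v of the unary words a^n.
    Returns the first n witnessing strict emptiness (probability > lam)
    or lambda-reachability (probability == lam), up to the cutoff."""
    M, u, v = (np.asarray(a, dtype=float) for a in (M, u, v))
    x = u.copy()
    for n in range(n_max + 1):
        p = float(x @ v)
        if p > lam:
            return ("emptiness witness", n, p)
        if abs(p - lam) < 1e-12:
            return ("reachability witness", n, p)
        x = x @ M  # advance to u^T M^(n+1)
    return None  # inconclusive: no finite cutoff is sound in general

M = [[0.5, 0.5], [0.0, 1.0]]  # row-stochastic transition matrix
print(unary_pfa_scan(M, u=[1.0, 0.0], v=[0.0, 1.0], lam=0.9))
```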
|
Federated learning (FL) has become increasingly popular as a way for
distributed medical institutions to collaboratively train deep networks.
However, existing FL algorithms only support the supervised training setting,
while most hospitals in practice cannot afford the intricate data labeling due
to a lack of budget or expertise. This paper studies a practical yet challenging
FL problem, named \textit{Federated Semi-supervised Learning} (FSSL), which
aims to learn a federated model by jointly utilizing the data from both labeled
and unlabeled clients (i.e., hospitals). We present a novel approach for this
problem, which improves over the traditional consistency regularization
mechanism with a new inter-client relation matching scheme. The proposed
learning scheme explicitly connects the learning across labeled and unlabeled
clients by aligning their extracted disease relationships, thereby mitigating
the deficiency of task knowledge at unlabeled clients and promoting
discriminative information from unlabeled samples. We validate our method on
two large-scale medical image classification datasets. The effectiveness of our
method is demonstrated by clear improvements over the state-of-the-art as well
as by a thorough ablation analysis on both tasks\footnote{Code will be made
available at \url{https://github.com/liuquande/FedIRM}}.
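
For illustration, a minimal sketch of an inter-client relation-matching loss. It assumes, as an illustrative guess rather than the paper's exact formulation, that each client summarizes inter-class ("disease") relationships as a matrix computed from softened predictions and that unlabeled clients align theirs to the labeled clients' matrix; all names and the temperature are ours.

```python
import torch
import torch.nn.functional as F

def relation_matrix(logits: torch.Tensor, tau: float = 2.0) -> torch.Tensor:
    """Estimate an inter-class relation matrix from a batch of logits.
    Entry (i, j) reflects how strongly samples softly assigned to
    class i also activate class j."""
    probs = F.softmax(logits / tau, dim=1)            # (B, C) soft assignments
    weights = probs / probs.sum(dim=0, keepdim=True)  # normalize per class
    rel = weights.t() @ probs                         # (C, C) relation matrix
    return F.normalize(rel, p=1, dim=1)               # rows sum to 1

def relation_matching_loss(logits_unlabeled, rel_labeled):
    """KL divergence pulling an unlabeled client's relation matrix
    towards the one aggregated from labeled clients."""
    rel_u = relation_matrix(logits_unlabeled)
    return F.kl_div(rel_u.clamp_min(1e-8).log(), rel_labeled,
                    reduction="batchmean")
```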
|
The quasi-one-dimensional spin ladder compounds, BaFe$_2$S$_3$ and
BaFe$_2$Se$_3$, are investigated by infrared spectroscopy and density
functional theory (DFT) calculations. We observe strong anisotropic electronic
properties and an optical gap in the leg direction that is gradually filled
above the antiferromagnetic (afm) ordering temperature, turning the systems
metallic. Combining the optical data with the DFT calculations, we
associate the optical gap feature with the $p$-$d$ transition that appears only
in the afm ordered state. Hence, the insulating ground state along the leg
direction is attributed to Slater physics rather than Mott-type correlations.
|
The ongoing COVID-19 pandemic highlights the severe health risks posed by
deep-submicron-sized airborne viruses and particulates in the spread of
infectious diseases. There is an urgent need for the development of efficient,
durable and reusable filters for this size range. Here we report the
realization of efficient particulate filters using nanowire-based low-density
metal foams which combine extremely large surface areas with excellent
mechanical properties. The metal foams exhibit outstanding filtration
efficiencies (>96.6%) in the PM$_{0.3}$ regime, with potential for further
improvement. Their mechanical stability and light weight, chemical and
radiation resistance, ease of cleaning and reuse, and recyclability further
make such metal foams promising filters for combating COVID-19 and other types
of airborne particulates.
|
Mendelian randomization (MR) is a statistical method exploiting genetic
variants as instrumental variables to estimate the causal effect of modifiable
risk factors on an outcome of interest. Despite the wide use of various popular
two-sample MR methods based on genome-wide association study summary-level
data, these methods can suffer from power loss and/or biased inference when
the chosen genetic variants are in linkage disequilibrium (LD) and also have
relatively large direct effects on the outcome whose distribution might be
heavy-tailed, commonly referred to as the idiosyncratic pleiotropy phenomenon.
To resolve these two issues, we propose a
novel Robust Bayesian Mendelian Randomization (RBMR) model that uses the more
robust multivariate generalized t-distribution to model such direct effects in
a probabilistic model framework which can also incorporate the LD structure
explicitly. The generalized t-distribution can be represented as a Gaussian
scale mixture, so that our model parameters can be estimated by EM-type
algorithms. We compute the standard errors by calibrating the evidence lower
bound using the likelihood ratio test. Through extensive simulation studies, we
show that our RBMR has robust performance compared to other competing methods.
We also apply our RBMR method to two benchmark data sets and find that RBMR has
smaller bias and standard errors. Using our proposed RBMR method, we find that
coronary artery disease is associated with increased risk of critically ill
coronavirus disease 2019 (COVID-19). We also develop a user-friendly R package
RBMR for public use.
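
As an illustration of the Gaussian-scale-mixture representation underlying the EM updates, here is a minimal sketch (not the paper's code) that draws from a multivariate t-distribution by mixing a Gaussian with a gamma-distributed precision scale; parameter names are illustrative.

```python
import numpy as np

def sample_multivariate_t(mu, Sigma, df, size, rng=None):
    """Draw from a multivariate t-distribution via its Gaussian scale
    mixture representation:
        w ~ Gamma(df/2, rate=df/2),  x | w ~ N(mu, Sigma / w)."""
    rng = np.random.default_rng(rng)
    d = len(mu)
    w = rng.gamma(shape=df / 2.0, scale=2.0 / df, size=size)  # rate df/2
    z = rng.multivariate_normal(np.zeros(d), Sigma, size=size)
    return mu + z / np.sqrt(w)[:, None]

# Heavy-tailed direct effects: a small df gives heavier tails.
draws = sample_multivariate_t(np.zeros(2), np.eye(2), df=3.0, size=1000)
```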
|
On the space of $\pm 1$ spin configurations on the 3d square lattice, we
consider the \emph{shaken dynamics}, a parallel Markovian dynamics that can be
interpreted in terms of Probabilistic Cellular Automata. Its transition
probabilities are defined in terms of a pair ferromagnetic Ising-type
Hamiltonian with nearest-neighbor interaction $J$ and depend on an additional
parameter $q$ measuring the tendency of the system to remain locally in the
same state. We compute the stationary measure of the shaken dynamics and we
investigate its relation with the Gibbs measure for the Ising model. It turns
out that the two parameters $J$ and $q$ tune the geometry of the underlying
lattice. By a judicious use of perturbative methods, we show rigorously that
our model exhibits a line of critical points in the $J$-$q$ plane that
separates the ordered phase from the disordered one, and we perform numerical
simulations to determine the phase transition curve. Our method allows us to
find, in a unified way, the critical values of $J$ for the Ising model with
nearest-neighbor interaction defined on a whole class of lattices intermediate
between the two-dimensional hexagonal and the three-dimensional cubic ones,
such as, for example, the tetrahedral lattice. Finally, we estimate the
critical exponents of
the magnetic susceptibility and show that our model captures a phase transition
in the geometry of the system at $q = 0$.
|
Current supervised sketch-based image retrieval (SBIR) methods achieve
excellent performance. However, the cost of data collection and labeling
imposes an intractable barrier to practical deployment of real applications. In
this paper, we present the first attempt at unsupervised SBIR to remove the
labeling cost (category annotations and sketch-photo pairings) that is
conventionally needed for training. Existing single-domain unsupervised
representation learning methods perform poorly in this application, due to the
unique cross-domain (sketch and photo) nature of the problem. We therefore
introduce a novel framework that simultaneously performs unsupervised
representation learning and sketch-photo domain alignment. Technically this is
underpinned by exploiting joint distribution optimal transport (JDOT) to align
data from different domains during representation learning, which we extend
with trainable cluster prototypes and feature memory banks to further improve
scalability and efficacy. Extensive experiments show that our framework
achieves excellent performance in the new unsupervised setting, and performs
comparably to or better than the state-of-the-art in the zero-shot setting.
|
We use first principles molecular dynamics simulations coupled to the
thermodynamic integration method to study the hcp-bcc transition and melting of
beryllium up to a pressure of 1600~GPa. We derive the melting line by equating
solid and liquid Gibbs free energies, and represent it by a Simon-Glatzel fit
$T_m = 1564~\text{K}\,(1 + P/(15.6032~\text{GPa}))^{0.383}$, which is in good
agreement with previous two-phase simulations below 6000~K. We also derive the
hcp-bcc solid-solid phase boundary and show that the quasiharmonic
approximation underestimates the stability of the hcp structure, predicting
lower transition pressures between the hcp and bcc phases. However, our
results are consistent with
the stability regime predicted by the phonon quasiparticle method. We also
predict that the hcp-bcc-liquid triple point is located at 164.7~GPa and 4314~K. In
addition, we compute the shock Hugoniot curve, and show that it is in good
agreement with experiments, intersecting our derived melting curve at
$\sim$235~GPa and 4900~K. Finally, we show that an isentropic compression path
that intersects the melting curve at both low and high temperature in the
liquid regime can reappear in the solid after a gap as large as 7000~K.
Therefore, we predict that a large section of the melting curve could be
sampled, in principle, by a ramp compression experiment, where solid and liquid
Be would coexist as the sample is compressed.
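
For reference, the quoted Simon-Glatzel fit is straightforward to evaluate; a minimal sketch:

```python
def melting_temperature_K(P_GPa: float) -> float:
    """Simon-Glatzel fit quoted above:
    T_m = 1564 K * (1 + P / 15.6032 GPa)**0.383."""
    return 1564.0 * (1.0 + P_GPa / 15.6032) ** 0.383

# Fit value near the reported Hugoniot-melting intersection (~235 GPa).
print(melting_temperature_K(235.0))
```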
|
A few microseconds after the creation of our Universe in the Big Bang, the
primordial matter is believed to have been a soup of the fundamental
constituents of matter -- quarks and gluons. Such matter is expected to be
created in the laboratory by colliding heavy nuclei at ultra-relativistic
speeds. A plasma
of quarks and gluons, called Quark-Gluon Plasma (QGP) can be created at the
energy and luminosity frontiers in the Relativistic Heavy Ion Collider (RHIC),
at Brookhaven National Laboratory, New York, USA, and the Large Hadron Collider
(LHC) at CERN, Geneva, Switzerland. Heavy quarks, namely the charm and bottom
quarks, are considered novel probes to characterize the QGP and hence the
produced Quantum Chromodynamics (QCD) matter. Heavy quark transport
coefficients play a significant role in understanding the properties of QGP.
Experimental measurements of nuclear suppression factor and elliptic flow can
constrain the heavy quark transport coefficients, which are key ingredients for
phenomenological studies, and they help to disentangle different energy loss
mechanisms. We give a general perspective on the heavy quark drag and diffusion
coefficients in the QGP and discuss their potential as probes to disentangle
different hadronization mechanisms, as well as to probe the initial
electromagnetic fields produced in non-central heavy-ion collisions.
Experimental perspectives on future measurements are discussed with special
emphasis on heavy flavors as next-generation probes in view of new
technological developments.
|
We present simulations which predict significantly higher laser-to-X-ray
efficiencies than those previously found in high-intensity
($10^{20}$-$10^{22}$~W/cm$^2$) laser-solid simulations. The bremsstrahlung
emission is shown to last for
10-100 ps, which is difficult to model with conventional particle-in-cell (PIC)
codes. The importance of collective effects is also demonstrated, showing the
limitations of Monte Carlo modelling in these systems. A new, open-source
hybrid-PIC code with bremsstrahlung routines has been developed to model this
X-ray production in 3D. Special boundary conditions are used to emulate complex
electron refluxing behaviour, which has been characterised in 2D full-PIC
simulations. The peak X-ray efficiency was recorded in thick gold targets, with
7.4% conversion of laser energy into X-rays of energy 1 MeV or higher. The
target size is shown to play a role in the conversion efficiency and angular
distribution of emitted X-rays, and a simple analytic model is presented for
estimating these efficiencies.
|
Studying the diffusion and kinetic equilibration of heavy quarks within a hot
QCD medium profits from the knowledge of a coloured Lorentz force that acts on
them. Starting from the spatial components of the vector current, and carrying
out two matching computations, one for the heavy quark mass scale ($M$) and
another for thermal scales ($\sqrt{MT}$, $T$), we determine 1-loop matching
coefficients for the electric and magnetic parts of a Lorentz force. The
magnetic part has a non-zero anomalous dimension, which agrees with that
extracted from two other considerations, one thermal and the other in vacuum.
The matching coefficient could enable a lattice study of a colour-magnetic
2-point correlator.
|
In this paper, we provide a general framework for the construction of the
Einstein frame within non-linear extensions of the teleparallel equivalents of
General Relativity. These include the metric teleparallel and the symmetric
teleparallel, but also the general teleparallel theories. We write the actions
in a form where we separate the Einstein--Hilbert term, the conformal mode due
to the non-linear nature of the theories (which is analogous to the extra
degree of freedom in $f(R)$ theories), and the sector that manifestly shows the
dynamics arising from the breaking of local symmetries. This frame is then used
to study the theories around the Minkowski background, and we show how all the
non-linear extensions share the same quadratic action around Minkowski. As a
matter of fact, we find that the gauge symmetries that are lost by going to the
non-linear generalisations of the teleparallel General Relativity equivalents
arise as accidental symmetries in the linear theory around Minkowski.
Remarkably, we also find that the conformal mode can be absorbed into a Weyl
rescaling of the metric at this order and, consequently, it disappears from the
linear spectrum so only the usual massless spin 2 perturbation propagates.
These findings unify in a common framework the known fact that no additional
modes propagate on Minkowski backgrounds, and they allow us to trace this fact
back to the existence of accidental gauge symmetries of such a background.
|
For a caching system with multiple users, we aim to characterize the
memory-rate tradeoff for caching with uncoded cache placement, under nonuniform
file popularity. Focusing on the modified coded caching scheme (MCCS) recently
proposed by Yu et al., we formulate the cache placement optimization problem
for the MCCS to minimize the average delivery rate under nonuniform file
popularity, restricting to a class of popularity-first placements. We then
present two information-theoretic lower bounds on the average rate for caching
with uncoded placement, one for general cache placements and the other
restricted to the popularity-first placements. By comparing the average rate of
the optimized MCCS with the lower bounds, we prove that the optimized MCCS
attains the general lower bound for the two-user case, providing the exact
memory-rate tradeoff. Furthermore, it attains the popularity-first-based lower
bound for the case of general K users with distinct file requests. In these two
cases, our results also reveal that the popularity-first placement is optimal
for the MCCS, and zero-padding used in coded delivery incurs no loss of
optimality. For the case of K users with redundant file requests, our analysis
shows that there may exist a gap between the optimized MCCS and the lower
bounds due to zero-padding. We next fully characterize the optimal
popularity-first cache placement for the MCCS, which is shown to possess a
simple file-grouping structure and can be computed via an efficient algorithm
using closed-form expressions. Finally, we extend our study to accommodate both
nonuniform file popularity and sizes, where we show that the optimized MCCS
attains the lower bound for the two-user case, providing the exact memory-rate
tradeoff. Numerical results show that, for general settings, the gap between
the optimized MCCS and the lower bound only exists in limited cases and is very
small.
|
Novelty detection using deep generative models such as autoencoders and
generative adversarial networks mostly takes the image reconstruction error as
the novelty score function. However, image data, being high dimensional,
contain many features other than class information, which makes it hard for
models to detect novel data. The problem gets harder in the multi-modal
normality case. To address this challenge, we propose a new way of measuring
the novelty score in multi-modal normality cases using an orthogonalized
latent space. Specifically, we employ orthogonal low-rank embedding in the
latent space to disentangle the features using mutual class information. With
the orthogonalized latent space, the novelty score is defined by the change of
each latent vector. The proposed algorithm was compared to state-of-the-art
novelty detection algorithms using GANs, such as RaPP and OCGAN, and
experimental results show that ours outperforms those algorithms.
|
The concept of the k-spectrum of a genome is investigated here as a basic tool
for genome analysis. Related spectral notions based on k-mers are introduced,
together with mathematical properties that are relevant for the informational
analysis of genomes. Procedures to generate spectral segmentations of genomes
are provided and tested (for several values of the k-mer length k) on real
genomes, such as some human chromosomes and Saccharomyces cerevisiae.
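
As a minimal illustration of a k-spectrum (the multiset of k-mer counts of a sequence), a short sketch; the function name is ours, not the paper's.

```python
from collections import Counter

def k_spectrum(genome: str, k: int) -> Counter:
    """Count every length-k substring (k-mer) occurring in the sequence."""
    return Counter(genome[i:i + k] for i in range(len(genome) - k + 1))

spectrum = k_spectrum("ACGTACGTGACG", k=3)
print(spectrum.most_common(3))  # e.g. [('ACG', 3), ('CGT', 2), ...]
```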
|
In this paper we consider the strategic asset allocation of an insurance
company. This task can be seen as a special case of portfolio optimization. In
the 1950s, Markowitz proposed to formulate portfolio optimization as a
bicriteria optimization problem considering risk and return as objectives.
However, recent developments in the field of insurance require four and more
objectives to be considered, among them the so-called solvency ratio that stems
from the Solvency II directive of the European Union issued in 2009. Moreover,
the distance to the current portfolio plays an important role. While the
literature on portfolio optimization with three objectives is already scarce,
applications with four and more objectives have so far not been solved by
multi-objective approaches based on scalarizations. However, recent algorithmic
improvements in the field of exact multi-objective methods allow the
incorporation of many objectives and the generation of well-spread
representations within few iterations. We describe the implementation of such
an algorithm for a strategic asset allocation with four objective functions and
demonstrate its usefulness for the practitioner. Our approach is in operative
use in a German insurance company. Our partners report a significant
improvement in their decision making process since, due to the proper
integration of the new objectives, the software proposes portfolios of much
better quality than before within short running time.
|
Programming languages with algebraic effects often track the computations'
effects using type-and-effect systems. In this paper, we propose to view an
algebraic effect theory of a computation as a variable context; consequently,
we propose to track algebraic effects of a computation with \emph{contextual
modal types}. We develop ECMTT, a novel calculus which tracks algebraic effects
by a contextualized variant of the modal $\Box$ (necessity) operator, which it
inherits from Contextual Modal Type Theory (CMTT).
Whereas type-and-effect systems add effect annotations on top of a prior
programming language, the effect annotations in ECMTT are inherent to the
language, as they are managed by programming constructs corresponding to the
logical introduction and elimination forms for the $\Box$ modality. Thus, the
type-and-effect system of ECMTT is actually just a type system.
Our design obtains the properties of local soundness and completeness, and
determines the operational semantics solely by $\beta$-reduction, as customary
in other logic-based calculi. In this view, effect handlers arise naturally as
a witness that one context (i.e., algebraic theory) can be reached from
another, generalizing explicit substitutions from CMTT.
To the best of our knowledge, ECMTT is the first system to relate algebraic
effects to modal types. We also see it as a step towards providing a
correspondence in the style of Curry and Howard that may transfer a number of
results from the fields of modal logic and modal type theory to that of
algebraic effects.
|
We extend a classical test of subsphericity, based on the first two moments
of the eigenvalues of the sample covariance matrix, to the high-dimensional
regime where the signal eigenvalues of the covariance matrix diverge to
infinity and either $p/n \rightarrow 0$ or $p/n \rightarrow \infty$. In the
latter case we further require that the divergence of the eigenvalues is
suitably fast in a specific sense. Our work can be seen as complementing that
of Schott (2006), who established equivalent results in the case $p/n
\rightarrow \gamma \in (0, \infty)$. As our second main contribution, we use
the test to
derive a consistent estimator for the latent dimension of the model.
Simulations and a real data example are used to demonstrate the results,
providing also evidence that the test might be further extendable to a wider
asymptotic regime.
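
For orientation, a sketch of the classical fixed-dimension sphericity statistic built from the first two moments of the sample covariance eigenvalues; this is the textbook John-type statistic, not the paper's high-dimensional extension.

```python
import numpy as np

def sphericity_statistic(X: np.ndarray) -> float:
    """John-type statistic U = m2 / m1**2 - 1, where m1 and m2 are the
    first two moments of the eigenvalues of the sample covariance
    matrix; U = 0 iff all eigenvalues are equal (sphericity)."""
    S = np.cov(X, rowvar=False)
    eig = np.linalg.eigvalsh(S)
    m1, m2 = eig.mean(), (eig ** 2).mean()
    return m2 / m1 ** 2 - 1.0

rng = np.random.default_rng(0)
print(sphericity_statistic(rng.normal(size=(500, 5))))  # near 0 under sphericity
```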
|
Recent works on ride-sharing order dispatching have highlighted the
importance of taking into account both the spatial and temporal dynamics in the
dispatching process for improving the transportation system efficiency. At the
same time, deep reinforcement learning has advanced to the point where it
achieves superhuman performance in a number of fields. In this work, we propose
a deep reinforcement learning based solution for order dispatching and we
conduct large scale online A/B tests on DiDi's ride-dispatching platform to
show that the proposed method achieves significant improvement on both total
driver income and user-experience-related metrics. In particular, we model the
ride dispatching problem as a Semi-Markov Decision Process to account for the
temporal aspect of the dispatching actions. To improve the stability of the
value iteration with nonlinear function approximators like neural networks, we
propose Cerebellar Value Networks (CVNet) with a novel distributed state
representation layer. We further derive a regularized policy evaluation scheme
for CVNet that penalizes large Lipschitz constants of the value network for
additional robustness against adversarial perturbations and noise. Finally, we
adapt various transfer learning methods to CVNet for increased learning
adaptability and efficiency across multiple cities. We conduct extensive
offline simulations based on real dispatching data as well as online A/B tests
on DiDi's platform. Results show that CVNet consistently outperforms
other recently proposed dispatching methods. We finally show that the
performance can be further improved through the efficient use of transfer
learning.
|
Let $G$ be a graph on $n$ nodes. In the stochastic population protocol model,
a collection of $n$ indistinguishable, resource-limited nodes collectively
solve tasks via pairwise interactions. In each interaction, two randomly chosen
neighbors first read each other's states, and then update their local states. A
rich line of research has established tight upper and lower bounds on the
complexity of fundamental tasks, such as majority and leader election, in this
model, when $G$ is a clique. Specifically, in the clique, these tasks can be
solved fast, i.e., in $n \operatorname{polylog} n$ pairwise interactions, with
high probability, using at most $\operatorname{polylog} n$ states per node.
In this work, we consider the more general setting where $G$ is an arbitrary
graph, and present a technique for simulating protocols designed for
fully-connected networks in any connected regular graph. Our main result is a
simulation that is efficient on many interesting graph families: roughly, the
simulation overhead is polylogarithmic in the number of nodes, and quadratic in
the conductance of the graph. As a sample application, we show that, in any
regular graph with conductance $\phi$, both leader election and exact majority
can be solved in $\phi^{-2} \cdot n \operatorname{polylog} n$ pairwise
interactions, with high probability, using at most $\phi^{-2} \cdot
\operatorname{polylog} n$ states per node. This shows that there are fast and
space-efficient population protocols for leader election and exact majority on
graphs with good expansion properties. We believe our results will prove
generally useful, as they allow efficient technology transfer between the
well-mixed (clique) case, and the under-explored spatial setting.
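
The interaction model described above is easy to state in code; a minimal sketch of one scheduler step on an arbitrary graph, with a toy protocol (all names are illustrative).

```python
import random

def interaction_step(adj, states, transition):
    """One pairwise interaction: pick a random node, then a random
    neighbor; both update their states via the protocol's transition."""
    u = random.randrange(len(adj))
    v = random.choice(adj[u])
    states[u], states[v] = transition(states[u], states[v])

# Toy protocol on a 4-cycle: epidemic spreading of a '1' state.
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
states = [1, 0, 0, 0]
for _ in range(20):
    interaction_step(adj, states, lambda a, b: (max(a, b), max(a, b)))
print(states)  # with high probability all ones
```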
|
We present a technique for a complete 3D reconstruction of small objects
moving in front of a textured background. It is a particular variation of
multibody structure from motion, which specializes to two objects only. The
scene is captured in several static configurations between which the relative
pose of the two objects may change. We reconstruct every static configuration
individually and segment the points locally by finding multiple poses of
cameras that capture the scene's other configurations. Then, the local
segmentation results are combined, and the reconstructions are merged into the
resulting model of the scene. In experiments with real artifacts, we show that
our approach has practical advantages when reconstructing 3D objects from all
sides. In this setting, our method outperforms the state-of-the-art. We
integrate our method into the state-of-the-art 3D reconstruction pipeline
COLMAP.
|
We propose a probe for the analysis of deep learning architectures that is
based on machine learning and approximation theoretical principles. Given a
deep learning architecture and a training set, during or after training, the
Sparsity Probe allows one to analyze the performance of intermediate layers by
quantifying the geometrical features of representations of the training set.
We show how the Sparsity Probe enables measuring the contribution of adding
depth to a given architecture, detecting under-performing layers, etc., all
without any auxiliary test data set.
|
We propose a stochastic SIR model, specified as a system of stochastic
differential equations, to analyse the data of the Italian COVID-19 epidemic,
taking also into account the under-detection of infected and recovered
individuals in the population. We find that a correct assessment of the amount
of under-detection is important to obtain reliable estimates of the critical
model parameters. Moreover, a single SIR model over the whole epidemic period
is unable to correctly describe the behaviour of the pandemic. We therefore
adapt the model in every time interval between relevant government decrees
that implement contagion mitigation measures; this provides short-term
predictions and a continuously updated assessment of the basic reproduction
number.
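
A minimal sketch of simulating a stochastic SIR system by Euler-Maruyama; the specific noise structure here (Brownian perturbations of the infection and recovery flows) is an illustrative choice, not necessarily the paper's specification.

```python
import numpy as np

def stochastic_sir(beta, gamma, sigma, S0, I0, R0, T, dt, rng=None):
    """Euler-Maruyama simulation of an SIR model with multiplicative
    noise on the infection and recovery flows (population conserved)."""
    rng = np.random.default_rng(rng)
    n = int(T / dt)
    S, I, R = np.empty(n + 1), np.empty(n + 1), np.empty(n + 1)
    S[0], I[0], R[0] = S0, I0, R0
    N = S0 + I0 + R0
    for t in range(n):
        dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
        inf_flow = beta * S[t] * I[t] / N
        rec_flow = gamma * I[t]
        S[t + 1] = S[t] - inf_flow * dt - sigma * inf_flow * dW1
        I[t + 1] = (I[t] + (inf_flow - rec_flow) * dt
                    + sigma * inf_flow * dW1 - sigma * rec_flow * dW2)
        R[t + 1] = R[t] + rec_flow * dt + sigma * rec_flow * dW2
    return S, I, R

S, I, R = stochastic_sir(0.3, 0.1, 0.05, 6e7, 1e3, 0.0, T=200, dt=0.1)
```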
|
The article is devoted to questions concerning the compactness of solutions of
the Dirichlet problem for the Beltrami equation in a simply connected domain.
In terms of prime ends, we prove detailed results for the case when the
maximal dilatations of these solutions satisfy certain integral constraints.
In addition, we prove theorems on the local and global behavior of plane and
spatial mappings with direct and inverse modulus conditions.
|
Process digitization and integration is an increasing need for enterprises,
while cyber-attacks denote a growing threat. Using the Business Process Model
and Notation (BPMN) is common for handling the digitization and integration
focus within and across organizations. In other parts of the same companies,
threat modeling and attack graphs are used for analyzing the security posture
and resilience.
In this paper, we propose a novel approach to use attack graph simulations on
processes represented in BPMN. Our contributions are the identification of
BPMN's attack surface, a mapping of BPMN elements to concepts in a Meta Attack
Language (MAL)-based Domain-Specific Language (DSL), called coreLang, and a
prototype to demonstrate our approach in a case study using a real-world
invoice integration process. The study shows that non-invasively enriching BPMN
instances with cybersecurity analysis through attack graphs is possible without
much human expert input. The resulting insights into potential vulnerabilities
could be beneficial for the process modelers.
|
We identify infinite classes of potentials for which the Coleman instantons
do not exist. For these potentials, the decay of a false vacuum must be
described by the new instantons introduced in [7,8].
|
Systematic differences in the proton's charge radius, as determined by
ordinary atoms and muonic atoms, have caused a resurgence of interest in
elastic lepton scattering measurements. The proton's charge radius, defined as
the slope of the charge form factor at Q$^2$=0, does not depend on the probe.
Any difference in the apparent size of the proton, when determined from
ordinary versus muonic hydrogen, could point to new physics or a need for
higher-order corrections. While recent measurements now seem to be in
agreement, there is to date no high precision elastic scattering data with both
electrons and positrons. A high precision proton radius measurement could be
performed in Hall B at Jefferson Lab with a positron beam and the calorimeter
based setup of the PRad experiment. This measurement could also be extended to
deuterons where a similar discrepancy has been observed between the muonic and
electronic determination of deuteron charge radius. A new, high precision
measurement with positrons, when viewed alongside electron scattering
measurements and the forthcoming MUSE muon scattering measurement, could help
provide new insights into the origins of the proton radius puzzle, and also
provide new experimental constraints on radiative correction calculations.
|
Lithium iron phosphate (LixFePO4), a cathode material used in rechargeable
Li-ion batteries, phase separates upon de/lithiation under equilibrium. The
interfacial structure and chemistry within these cathode materials affects
Li-ion transport, and therefore battery performance. Correlative imaging of
LixFePO4 was performed using four-dimensional scanning transmission electron
microscopy (4D-STEM), scanning transmission X-ray microscopy (STXM), and X-ray
ptychography in order to analyze the local structure and chemistry of the same
particle set. Over 50,000 diffraction patterns from 10 particles provided
measurements of both structure and chemistry at a nanoscale spatial resolution
(16.6-49.5 nm) over wide (several micron) fields-of-view with statistical
robustness. LixFePO4 particles at varying stages of delithiation were measured
to examine the evolution of structure and chemistry as a function of
delithiation. In lithiated and delithiated particles, local variations were
observed in the degree of lithiation even while local lattice structures
remained comparatively constant, and calculation of linear coefficients of
chemical expansion suggest pinning of the lattice structures in these
populations. Partially delithiated particles displayed broadly core-shell-like
structures, albeit with highly variable behavior both locally and per
individual particle, exhibiting distinctive intermediate regions at the
interface between phases and pockets within the lithiated core that correspond
to FePO4 in structure and chemistry. The results provide insight into the
LixFePO4 system, subtleties in the scope and applicability of Vegard's law
(linear lattice parameter-composition behavior) under local versus global
measurements, and demonstrate a powerful new combination of experimental and
analytical modalities for bridging the crucial gap between local and
statistical characterization.
|
Large Transformers pretrained over clinical notes from Electronic Health
Records (EHR) have afforded substantial gains in performance on predictive
clinical tasks. The cost of training such models (and the necessity of data
access to do so) coupled with their utility motivates parameter sharing, i.e.,
the release of pretrained models such as ClinicalBERT. While most efforts have
used deidentified EHR, many researchers have access to large sets of sensitive,
non-deidentified EHR with which they might train a BERT model (or similar).
Would it be safe to release the weights of such a model if they did? In this
work, we design a battery of approaches intended to recover Personal Health
Information (PHI) from a trained BERT. Specifically, we attempt to recover
patient names and conditions with which they are associated. We find that
simple probing methods are not able to meaningfully extract sensitive
information from BERT trained over the MIMIC-III corpus of EHR. However, more
sophisticated "attacks" may succeed in doing so: To facilitate such research,
we make our experimental setup and baseline probing models available at
https://github.com/elehman16/exposing_patient_data_release
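
As an illustration of the kind of simple probing the paper finds ineffective, a minimal fill-mask sketch using the HuggingFace transformers pipeline; the model here is a generic placeholder (in the attack setting it would be a masked LM trained on clinical notes) and the template is our own, not the paper's exact setup.

```python
from transformers import pipeline

# Placeholder checkpoint; substitute a BERT-style masked LM trained on EHR.
fill = pipeline("fill-mask", model="bert-base-uncased")

# Probe whether the model associates a patient surname with a condition.
for cand in fill("Mr. [MASK] was admitted with a diagnosis of pneumonia."):
    print(cand["token_str"], round(cand["score"], 4))
```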
|
In ultrasound tomography, the speed of sound inside an object is estimated
based on acoustic measurements carried out by sensors surrounding the object.
An accurate forward model is a prominent factor for high-quality image
reconstruction, but it can make computations far too time-consuming in many
applications. Using approximate forward models, it is possible to speed up the
computations, but the quality of the reconstruction may have to be compromised.
In this paper, a neural network-based approach is proposed that can compensate
for modelling errors caused by the approximate forward models. The approach is
tested with several different imaging scenarios in a simulated two-dimensional
domain. The results show that, with fairly small training datasets, the
proposed approach can be utilized to approximate the modelling errors and to
significantly improve the image reconstruction quality in ultrasound
tomography, compared to commonly used inversion algorithms.
|
This paper deals with the topological entropy of hom Markov shifts
$\mathcal{T}_M$ on the $d$-tree. If $M$ is a reducible adjacency matrix with $q$
irreducible components $M_1, \cdots, M_q$, we show that
$h(\mathcal{T}_{M})=\max_{1\leq i\leq q}h(\mathcal{T}_{M_{i}})$ fails
generally, and present a case study with full characterization in terms of the
equality. Though it is likely that the sets $\{h(\mathcal{T}_{M}):M\text{ is
binary and irreducible}\}$ and $\{h(\mathcal{T}_{X}):X\text{ is a one-sided
shift}\}$ do not coincide, we show that the two sets share a common closure.
Although this closure is proved to contain the interval $[d \log 2, \infty)$,
numerical experiments suggest that its complement contains open intervals.
|
In this note, we study the holographic CFT in the de Sitter static patch at
finite temperature $T$ and chemical potential. We find that the butterfly
velocity $v_B$ in such a field theory degenerates for all values of the Hubble
parameter $H$ and $T$. We interpret this as a chaos disruption caused by the
interplay between the expansion of chaotic correlations constrained by $v_B$
and effects caused by the de Sitter curvature. The chemical potential restores
a healthy butterfly velocity for some range of temperatures. We also draw an
analogy between this chaos suppression and the Schwinger effect in de Sitter
space and black hole formation from shock wave collisions.
|
A rotation curve inequality that holds for spherically symmetric mass
distributions is derived, and tested against the SPARC galaxy rotation curves
dataset. We identify several galaxies, e.g., NGC7793 and UGC05253, which are
candidates for hosting non-spherical dark matter structures that could be
detected by more precise measurements.
|
Wide-area synchrophasor ambient measurements provide a valuable data source
for real-time oscillation mode monitoring and analysis. This paper introduces a
novel method for identifying inter-area oscillation modes using wide-area
ambient measurements. Based on multivariate empirical mode decomposition
(MEMD), which can analyze multi-channel non-stationary and nonlinear signals,
the proposed method is capable of detecting the common oscillation mode that
exists in multiple synchrophasor measurements at low amplitudes. Test results
based on two real-world datasets validate the effectiveness of the proposed
method.
|
We demonstrate an individual single-walled carbon nanotube light emitter
integrated onto a microcavity and a waveguide operating in the telecom
wavelength regime. Light emission from the carbon nanotube is enhanced at the
cavity resonance and is efficiently extracted from the waveguide facet. We have
transferred carbon nanotubes to a nanobeam cavity with a dry process, ensuring
that an individual carbon nanotube is used. The guided light emission from a
chirality-identified single carbon nanotube has a narrow linewidth of less than
1.3 nm and an off-resonance rejection of $\sim$17 dB. The waveguide-coupled
device configuration is compatible with fully integrated on-chip designs and is
promising for carbon-nanotube-based photonics.
|
This paper suggests the use of multiple distributed intelligent reflecting
surfaces (IRSs) towards a smarter control of the propagation environment.
Notably, we also take into account the inevitable correlated Rayleigh fading in
IRS-assisted systems. In particular, in a single-input and single-output (SISO)
system, we consider and compare two insightful scenarios, namely, a finite
number of large IRSs and a large number of finite size IRSs to show which
implementation method is more advantageous. In this direction, we derive the
coverage probability in closed form for both cases, contingent on statistical
channel state information (CSI) by using the deterministic equivalent (DE)
analysis. Next, we obtain the optimal coverage probability. Among others,
numerical results reveal that the addition of more surfaces outperforms the
design scheme of adding more elements per surface. Moreover, in the case of
uncorrelated Rayleigh fading, statistical CSI-based IRS systems do not allow
the optimization of the coverage probability.
|
This paper presents a sparse solver based on the alternating direction method
of multipliers algorithm for a linear model predictive control (MPC)
formulation in which the terminal state is constrained to a given ellipsoid.
The motivation behind this solver is to substitute the typical polyhedral
invariant set used as a terminal constraint in many nominal and robust linear
MPC formulations with an invariant set in the form of an ellipsoid, which is
(typically) much easier to compute and results in an optimization problem with
significantly fewer constraints, even for average-sized systems. However, this
optimization problem is no longer the quadratic programming problem found in
most linear MPC approaches, thus meriting the development of a tailored solver.
The proposed solver is suitable for use in embedded systems, since it is
sparse, has a small memory footprint and requires no external libraries. We
show the results of its implementation in an embedded system to control a
simulated multivariable plant, comparing it against other alternatives.
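
One reason the ellipsoidal terminal set is cheap inside an ADMM splitting is that its projection step, taken in the norm induced by the ellipsoid's matrix, reduces to a radial scaling. A minimal sketch of that step (our illustration, not the paper's solver):

```python
import numpy as np

def project_onto_ellipsoid(x, P, c):
    """Project x onto {z : z' P z <= c} in the P-weighted norm.
    In that norm the projection is just radial scaling, which keeps
    each ADMM iteration cheap compared to a polyhedral terminal set."""
    q = x @ P @ x
    return x if q <= c else x * np.sqrt(c / q)

P = np.array([[2.0, 0.0], [0.0, 0.5]])  # terminal-set shape matrix (toy)
x = np.array([3.0, 1.0])
print(project_onto_ellipsoid(x, P, c=1.0))
```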
|
We show, using the same Lagrangian for the $K_1(1270) \to \pi K^*_0(1430)$
and $K^*_0(1430) \to K_1(1270) \pi$ decays, that the present PDG data on the
partial decay width of $K_1(1270) \to \pi K^*_0(1430)$ implies a width for
$K^*_0(1430) \to K_1(1270) \pi$ decay which is about ten times larger than the
total $K^*_0(1430)$ width. We discuss this inconsistency, stressing its
relationship to the existence of two $K_1(1270)$ states obtained with the
chiral unitary theory, which are not considered in the experimental analyses
of $K\pi\pi$ data.
|
This article considers average marginal effects (AME) in a panel data fixed
effects logit model. Relating the identified set of the AME to an extremal
moment problem, we first show how to obtain sharp bounds on the AME
straightforwardly, without any optimization. Then, we consider two strategies
to build confidence intervals on the AME. In the first, we estimate the sharp
bounds with a semiparametric two-step estimator. The second, very simple
strategy estimates instead a quantity known to be at a bounded distance from
the AME. It does not require any nonparametric estimation but may result in
larger confidence intervals. Monte Carlo simulations suggest that both
approaches work well in practice, the second being often very competitive.
Finally, we show that our results also apply to average treatment effects,
average structural functions, and ordered fixed effects logit models.
|
Ram Pressure Stripping can remove gas from satellite galaxies in clusters via
a direct interaction between the intracluster medium (ICM) and the interstellar
medium. This interaction is generally thought of as a contact force per unit
area; however, we point out that these gases must interact in a hydrodynamic
fashion, and argue that this will lead to mixing of the galactic gas with the
ICM wind.
We develop an analytic framework for how mixing is related to the acceleration
of stripped gas from a satellite galaxy. We then test this model using three
"wind-tunnel" simulations of Milky Way-like galaxies interacting with a moving
ICM, and find excellent agreement with predictions using the analytic
framework. Focusing on the dense clumps in the stripped tails, we find that
they are nearly uniformly mixed with the ICM, indicating that all gas in the
tail mixes with the surroundings, and dense clumps are not separate entities to
be modeled differently than diffuse gas. We find that while mixing drives
acceleration of stripped gas, the density and velocity of the surrounding wind
will determine whether the mixing results in the heating of stripped gas into
the ICM, or the cooling of the ICM into dense clouds.
|
Domain shift is a major challenge for object detectors to generalize well to
real-world applications. Emerging techniques of domain adaptation for
two-stage detectors help to tackle this problem. However, two-stage detectors
are not the first choice for industrial applications due to their long
inference time. In this paper, a novel Domain Adaptive YOLO (DA-YOLO) is
proposed to improve cross-domain performance for one-stage detectors.
Image-level feature alignment is used to strictly match local features like
texture and loosely match global features like illumination. Multi-scale
instance-level feature alignment is presented to effectively reduce instance
domain shift, such as variations in object appearance and viewpoint. A
consensus regularization over the domain classifiers used for these alignments
is employed to help the network generate domain-invariant detections. We
evaluate our proposed method on popular datasets such as Cityscapes, KITTI,
and SIM10K. The results demonstrate significant improvement when tested under
different cross-domain scenarios.
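
Domain classifiers in adversarial feature alignment are typically trained through a gradient reversal layer (GRL); a minimal PyTorch sketch of this standard building block, shown as an assumption about how such classifiers could be wired, not as the paper's exact code.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in
    the backward pass, so the feature extractor learns to fool the
    domain classifier while the classifier learns to discriminate."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage: domain_logits = domain_classifier(grad_reverse(features))
```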
|
Most of the recent work on terminology integration in machine translation has
assumed that terminology translations are given already inflected in forms that
are suitable for the target language sentence. In day-to-day work of
professional translators, however, it is seldom the case as translators work
with bilingual glossaries where terms are given in their dictionary forms;
finding the right target language form is part of the translation process. We
argue that the requirement for a priori specified target language forms is
unrealistic and impedes the practical applicability of previous work. In this
work, we propose to train machine translation systems using a source-side data
augmentation method that annotates randomly selected source language words with
their target language lemmas. We show that systems trained on such augmented
data are readily usable for terminology integration in real-life translation
scenarios. Our experiments on terminology translation into the morphologically
complex Baltic and Uralic languages show an improvement of up to 7 BLEU points
over baseline systems with no means for terminology integration and an average
improvement of 4 BLEU points over the previous work. Results of the human
evaluation indicate a 47.7% absolute improvement over the previous work in term
translation accuracy when translating into Latvian.
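
A minimal sketch of the source-side augmentation idea (annotating randomly selected source words with target-language lemmas); the annotation markup, probability, and toy glossary are illustrative assumptions, not the paper's exact format.

```python
import random

def augment(source_tokens, lemma_dict, p=0.2, rng=random):
    """Annotate randomly selected source words with their target-language
    lemmas, e.g. 'bank' -> 'bank <trg> banka', so the model learns to
    copy and inflect dictionary forms at translation time."""
    out = []
    for tok in source_tokens:
        if tok in lemma_dict and rng.random() < p:
            out.append(f"{tok} <trg> {lemma_dict[tok]}")
        else:
            out.append(tok)
    return out

print(augment("the bank raised interest rates".split(),
              {"bank": "banka", "rates": "likme"}, p=1.0))
```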
|
The adaptive stochastic gradient descent (SGD) with momentum has been widely
adopted in deep learning as well as convex optimization. In practice, the last
iterate is commonly used as the final solution to make decisions. However, the
available regret analysis and the setting of constant momentum parameters only
guarantee the optimal convergence of the averaged solution. In this paper, we
fill this theory-practice gap by investigating the convergence of the last
iterate (referred to as individual convergence), which is a more difficult task
than convergence analysis of the averaged solution. Specifically, in the
constrained convex cases, we prove that the adaptive Polyak's Heavy-ball (HB)
method, in which only the step size is updated using the exponential moving
average strategy, attains an optimal individual convergence rate of
$O(\frac{1}{\sqrt{t}})$, as opposed to the optimality of $O(\frac{\log t}{\sqrt
{t}})$ of SGD, where $t$ is the number of iterations. Our new analysis not only
shows how the HB momentum and its time-varying weight help us to achieve
acceleration in convex optimization but also gives valuable hints on how the
momentum parameters should be scheduled in deep learning. Empirical results on
optimizing convex functions and training deep networks validate the correctness
of our convergence analysis and demonstrate the improved performance of the
adaptive HB methods.
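
A minimal sketch of a heavy-ball update whose step size is adapted with an exponential moving average of squared gradients, which is our reading of "adaptive Polyak's HB"; the projection step for the constrained case is omitted and all names and defaults are illustrative.

```python
import numpy as np

def adaptive_hb(grad, x0, steps, alpha=0.1, beta=0.9, rho=0.999, eps=1e-8):
    """Heavy-ball iteration x_{t+1} = x_t - a_t g_t + beta (x_t - x_{t-1}),
    where only the step size a_t is adapted via an EMA of squared gradients."""
    x_prev = x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        v = rho * v + (1 - rho) * g ** 2     # EMA of squared gradients
        a = alpha / (np.sqrt(v) + eps)       # adaptive step size
        x, x_prev = x - a * g + beta * (x - x_prev), x
    return x  # the last iterate, matching the individual-convergence setting

# Minimize a simple quadratic; the last iterate is returned directly.
print(adaptive_hb(lambda x: 2 * (x - 3.0), np.zeros(1), steps=500))
```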
|
NASA's Transiting Exoplanet Survey Satellite (TESS) mission is expected to
discover hundreds of planets via single transits first identified in their
light curves. Determining the orbital period of these single transit candidates
typically requires a significant amount of follow-up work to observe a second
transit or measure a radial velocity orbit. In Yao et al. (2019), we developed
simulations that demonstrated the ability to use archival photometric data in
combination with TESS to "precover" the orbital period for these candidates
with a precision of several minutes, assuming circular orbits. In this work, we
incorporate updated models for TESS single transits, allowing for eccentric
orbits, along with an updated methodology to improve the reliability of the
results. Additionally, we explore how radial velocity (RV) observations can be
used to follow up single transit events, using strategies distinct from those
employed when the orbital period is known. We find that the use of an estimated
period based on a circular orbit to schedule reconnaissance RV observations can
efficiently distinguish eclipsing binaries from planets. For candidates that
pass reconnaissance RV observations, we simulate RV monitoring campaigns that
enable one to obtain an approximate orbital solution. We find this method can
regularly determine the orbital periods for planets more massive than 0.5 M_J
with orbital periods as long as 100 days.
|
Current speech agent interactions are typically user-initiated, limiting the
interactions they can deliver. Future functionality will require agents to be
proactive, sometimes interrupting users. Little is known about how these spoken
interruptions should be designed, especially in urgent interruption contexts.
We look to inform design of proactive agent interruptions through investigating
how people interrupt others engaged in complex tasks. We therefore developed a
new technique to elicit human spoken interruptions of people engaged in other
tasks. We found that people interrupted sooner when interruptions were urgent.
Some participants used access rituals to forewarn interruptions, but most
rarely used them. People balanced speed and accuracy in timing interruptions,
often using cues from the task they interrupted. People also varied phrasing
and delivery of interruptions to reflect urgency. We discuss how our findings
can inform speech agent design and how our paradigm can help gain insight into
human interruptions in new contexts.
|
A rank-adaptive integrator for the dynamical low-rank approximation of matrix
and tensor differential equations is presented. The fixed-rank integrator
recently proposed by two of the authors is extended to allow for an adaptive
choice of the rank, using subspaces that are generated by the integrator
itself. The integrator first updates the evolving bases and then does a
Galerkin step in the subspace generated by both the new and old bases, which is
followed by rank truncation to a given tolerance. It is shown that the adaptive
low-rank integrator retains the exactness, robustness and symmetry-preserving
properties of the previously proposed fixed-rank integrator. Beyond that, up to
the truncation tolerance, the rank-adaptive integrator preserves the norm when
the differential equation does, it preserves the energy for Schr\"odinger
equations and Hamiltonian systems, and it preserves the monotonic decrease of
the functional in gradient flows. Numerical experiments illustrate the
behaviour of the rank-adaptive integrator.
|
SARS-CoV-2 is the third betacoronavirus to enter the human population in the
past 20 years, revealing a concerning pattern. Clearly, preventing a future
pandemic from such viruses is a critical priority. Previous studies have shown
that shRNAs can be powerful suppressors of RNA viruses in transgenic animals
and substantially reduce transmission. Thus, we propose the introduction of
anti-betacoronavirus shRNAs using a CRISPR/Cas9 gene drive into the horseshoe bat
population, the natural reservoir of those viruses, to combat this pandemic
threat at its source. Importantly, our approach is not expected to create any
harm to bats and can benefit other animals in the ecosystem that contract
betacoronaviruses from bats. We map the ethical and the technical aspects and
suggest guidelines for moving forward with this proposal.
|
Lattice-skin structures composed of a thin-shell skin and a lattice infill
are widespread in nature and large-scale engineering due to their efficiency
and exceptional mechanical properties. Recent advances in additive
manufacturing, or 3D printing, make it possible to create lattice-skin
structures of almost any size with arbitrary shape and geometric complexity. We
propose a novel gradient-based approach to optimising both the shape and infill
of lattice-skin structures to improve their efficiency further. The respective
gradients are computed by fully considering the lattice-skin coupling while the
lattice topology and shape optimisation problems are solved in a sequential
manner. The shell is modelled as a Kirchhoff-Love shell and analysed using
isogeometric subdivision surfaces, whereas the lattice is modelled as a
pin-jointed truss. The lattice consists of many cells, possibly of different
sizes, with each containing a small number of struts. We propose a penalisation
approach akin to the SIMP (solid isotropic material with penalisation) method
for topology optimisation of the lattice. Furthermore, a corresponding
sensitivity filter and a lattice extraction technique are introduced to ensure
the stability of the optimisation process and to eliminate scattered struts of
small cross-sectional areas. The developed topology optimisation technique is
suitable for non-periodic, non-uniform lattices. For shape optimisation of both
the shell and the lattice, the geometry of the lattice-skin structure is
parameterised using the free-form deformation technique. The topology and shape
optimisation problems are solved in an iterative, sequential manner. The
effectiveness of the proposed approach and the influence of different
algorithmic parameters are demonstrated with several numerical examples.
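
For context, SIMP-style penalisation interpolates an element's effective stiffness through a power law of its density-like design variable, making intermediate values inefficient. A minimal sketch of such a penalised strut cross-section (illustrative names and values; the paper's scheme is only "akin to" SIMP):

```python
def penalised_area(x, a_min=1e-6, a_max=1.0, p=3.0):
    """SIMP-like interpolation of a strut's effective cross-sectional
    area from a design variable x in [0, 1]. The power p > 1 makes
    intermediate densities structurally inefficient, driving the
    optimiser toward a crisp keep/remove decision per strut."""
    return a_min + (x ** p) * (a_max - a_min)

for x in (0.0, 0.3, 0.7, 1.0):
    print(x, penalised_area(x))
```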
|
The elastic energy of mixing for multi-component solid solutions is derived
by generalizing Eshelby's sphere-in-hole model for binary alloys. By surveying
the dependence of the elastic energy on chemical composition and lattice
misfit, we propose a lattice strain coefficient $\lambda^*$. Applying it to
several high-entropy alloys and superalloys, we find that most solid solution
alloys are stable when $\lambda^* < 0.16$, analogous to the Hume-Rothery
atomic-size rule for binary alloys. We also reveal that the polydispersity
index $\delta$, frequently used for describing strain in multi-component
alloys, is directly related to the elastic energy $e$ via $e = q\delta^2$,
with $q$ an elastic constant. Furthermore, the effects of (i) the number and
(ii) the atomic-size
distribution of constituting elements on the phase stability of high-entropy
alloys were quantified. The present derivations open for richer considerations
of elastic effects in high-entropy alloys, offering immediate support for
quantitative assessments of their thermodynamic properties and studying related
strengthening mechanisms.
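
A minimal sketch of the quoted relation, using the conventional definition of the atomic-size polydispersity index $\delta$ (our assumption; the abstract does not spell it out) together with the stated $e = q\delta^2$:

```python
import numpy as np

def polydispersity_delta(concentrations, radii):
    """Conventional atomic-size mismatch:
    delta = sqrt(sum_i c_i (1 - r_i / r_bar)^2),
    with r_bar the concentration-weighted mean atomic radius."""
    c = np.asarray(concentrations, dtype=float)
    r = np.asarray(radii, dtype=float)
    r_bar = np.sum(c * r)
    return np.sqrt(np.sum(c * (1.0 - r / r_bar) ** 2))

# Equiatomic 5-component alloy with hypothetical radii (angstrom).
c = np.full(5, 0.2)
r = np.array([1.24, 1.25, 1.28, 1.26, 1.27])
delta = polydispersity_delta(c, r)
q = 1.0  # elastic constant, alloy-specific (placeholder value)
print(delta, q * delta ** 2)  # delta and elastic energy e = q * delta**2
```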
|
The formal semantics of an interpreted first-order logic (FOL) statement can
be given in Tarskian Semantics or a basically equivalent Game Semantics. The
latter maps the statement and the interpretation into a two-player semantic
game. Many combinatorial problems can be described using interpreted FOL
statements and can be mapped into a semantic game. Therefore, learning to play
a semantic game perfectly leads to the solution of a specific instance of a
combinatorial problem. We adapt the AlphaZero algorithm so that it becomes
better at learning to play semantic games that have different characteristics
than Go and Chess. We propose a general framework, Persephone, to map the FOL
description of a combinatorial problem to a semantic game so that it can be
solved through a neural MCTS based reinforcement learning algorithm. Our goal
for Persephone is to make it tabula-rasa, mapping a problem stated in
interpreted FOL to a solution without human intervention.
|
The diamond is the graph obtained by removing an edge from the complete graph
on 4 vertices. A graph is ($P_6$, diamond)-free if it contains no induced
subgraph isomorphic to a six-vertex path or a diamond. In this paper we show
that the chromatic number of a ($P_6$, diamond)-free graph $G$ is no larger
than the maximum of 6 and the clique number of $G$. We do this by reducing the
problem to imperfect ($P_6$, diamond)-free graphs via the Strong Perfect Graph
Theorem, dividing the imperfect graphs into several cases, and giving a proper
colouring for each case. We also show that there is exactly one
6-vertex-critical ($P_6$, diamond, $K_6$)-free graph. Together with the
Lov\'asz theta function, this gives a polynomial time algorithm to compute the
chromatic number of ($P_6$, diamond)-free graphs.
|
Distinguishability and predictability are part of complementarity relations
which apply to two different kinds of interference experiments, with and
without a path-detector, respectively. In [Opt. Comm. 179, 337 (2000)], Englert
and Bergou pointed out the possible connection between distinguishability,
predictability, and entanglement. They even conjectured that an entanglement
measure was hidden between the measures of distinguishability and
predictability. Here, we push this conjecture forward. We start by defining a
new entropic distinguishability measure and suggesting an entanglement measure
as
the difference between this entropic distinguishability and an entropic
predictability measure already defined in the literature. Besides, we prove
that it is possible to define an entanglement monotone from the largest value
of the distinguishability and the corresponding predictability, provided that
the predictability satisfies the criteria already established in the literature.
Thus, this result formally connects an entanglement monotone with
distinguishability and the corresponding predictability, without appealing to
specific measures.
|
A matching is compatible to two or more labeled point sets of size $n$ with
labels $\{1,\dots,n\}$ if its straight-line drawing on each of these point sets
is crossing-free. We study the maximum number of edges in a matching compatible
to two or more labeled point sets in general position in the plane. We show
that for any two labeled convex sets of $n$ points there exists a compatible
matching with $\lfloor \sqrt {2n}\rfloor$ edges. More generally, for any $\ell$
labeled point sets we construct compatible matchings of size
$\Omega(n^{1/\ell})$. As a corresponding upper bound, we use probabilistic
arguments to show that for any $\ell$ given sets of $n$ points there exists a
labeling of each set such that the largest compatible matching has
${\mathcal{O}}(n^{2/({\ell}+1)})$ edges. Finally, we show that $\Theta(\log n)$
copies of any set of $n$ points are necessary and sufficient for the existence
of a labeling such that any compatible matching consists only of a single edge.
|
Consider a Hamiltonian diffeomorphism $g$ on a surface. We describe several
scenarios where a curve $L$ and its image $g(L)$ provide simple evidence
that
$g$ is not autonomous.
|
Balancing the needs of data privacy and predictive utility is a central
challenge for machine learning in healthcare. In particular, privacy concerns
have led to a dearth of public datasets, complicated the construction of
multi-hospital cohorts and limited the utilization of external machine learning
resources. To remedy this, new methods are required to enable data owners, such
as hospitals, to share their datasets publicly, while preserving both patient
privacy and modeling utility. We propose NeuraCrypt, a private encoding scheme
based on random deep neural networks. NeuraCrypt encodes raw patient data using
a randomly constructed neural network known only to the data-owner, and
publishes both the encoded data and associated labels publicly. From a
theoretical perspective, we demonstrate that sampling from a sufficiently rich
family of encoding functions offers a well-defined and meaningful notion of
privacy against a computationally unbounded adversary with full knowledge of
the underlying data-distribution. We propose to approximate this family of
encoding functions through random deep neural networks. Empirically, we
demonstrate the robustness of our encoding to a suite of adversarial attacks
and show that NeuraCrypt achieves accuracy competitive with non-private
baselines
on a variety of x-ray tasks. Moreover, we demonstrate that multiple hospitals,
using independent private encoders, can collaborate to train improved x-ray
models. Finally, we release a challenge dataset to encourage the development of
new attacks on NeuraCrypt.
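
The core encoding step can be sketched as follows; this is a simplified
stand-in (a plain random ReLU network over flat feature vectors), not the
architecture from the paper, and all sizes are illustrative.

```python
# A fixed, randomly initialized network known only to the data owner
# encodes each record; the encodings plus labels are released publicly.
import numpy as np

rng = np.random.default_rng(seed=1234)       # the owner's private randomness

def make_random_encoder(dims):
    """Sample fixed random weights; these are never shared."""
    return [rng.standard_normal((m, n)) / np.sqrt(m)
            for m, n in zip(dims[:-1], dims[1:])]

def encode(x, weights):
    """Apply the private random network to one raw record."""
    for W in weights:
        x = np.maximum(x @ W, 0.0)           # linear map followed by ReLU
    return x

raw = rng.standard_normal((100, 64))         # stand-in for raw patient data
labels = rng.integers(0, 2, size=100)

weights = make_random_encoder([64, 256, 128])
public_features = np.stack([encode(x, weights) for x in raw])
# (public_features, labels) can be published; downstream models are trained
# on the encodings without access to `weights` or the raw records.
print(public_features.shape)                 # (100, 128)
```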
|
The paper addresses an improved inner current reference calculation to be
employed in the control of modular multilevel converters operating during
either balanced or unbalanced conditions. The suggested reference calculation
is derived based on the AC and DC additive and differential voltage components
applied to the upper and lower arms of the converter. The impacts of both
the AC network's impedances and the MMC's arm impedances are also considered
in the derivation of the AC additive current reference expressions. This
article further discusses singular voltage conditions, in which the
positive-sequence component equals the negative-sequence one, and which may
occur not only in the AC network but also internally (within the converter's
applied voltages). Several inner current reference calculation methods are
compared and their applicability under such fault conditions is analyzed.
The paper presents a detailed formulation
of the inner current reference calculation and applies it to different
unbalanced AC grid faults where it is shown that the presented approach can be
potentially used to maintain the internal energy of the converter balanced
during normal and fault conditions.
|
The purpose of this technical report is to review the main properties of an
accelerated composite gradient (ACG) method commonly referred to as the Fast
Iterative Shrinkage-Thresholding Algorithm (FISTA). In addition, we state a
version of FISTA for solving both convex and strongly convex composite
minimization problems and derive its iteration complexities to generate
iterates satisfying various stopping criteria, including one which arises in
the course of solving other composite optimization problems via inexact
proximal point schemes. This report also discusses different reformulations of
the convex version of FISTA and how they relate to other formulations in the
literature.
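
As a concrete reference point, a minimal sketch of the convex version of
FISTA, specialized to $\ell_1$-regularized least squares, is given below;
the step size $1/L$ and momentum schedule follow the standard formulation,
while the problem instance is purely illustrative.

```python
# FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1, with step size 1/L,
# L = ||A||_2^2, and momentum t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista_lasso(A, b, lam, n_iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)  # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
print(np.round(fista_lasso(A, b, lam=0.1)[:8], 2))  # approx. sparse recovery
```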
|
We present a Convolutional Neural Network (CNN) architecture for inverse
Raman amplifier design. This model aims at finding the pump powers and
wavelengths required for a target signal power evolution, both in distance
along the fiber and in frequency. Using the proposed framework, the prediction
of the pump configuration required to achieve a target power profile is
demonstrated numerically with high accuracy in C-band considering both
counter-propagating and bidirectional pumping schemes. For a distributed Raman
amplifier based on a 100 km single-mode fiber, low mean values (0.51, 0.54
and 0.64 dB) and standard deviations (0.62, 0.43 and 0.38 dB) of the maximum
test error are obtained numerically employing 2 and 3 counter-propagating
pumps and 4 bidirectional pumps, respectively.
|
The intrinsic orbital magnetization of a TMD monolayer is usually calculated
for a plane unbounded system without mentioning the geometrical shape of
samples and boundary conditions (BCs) for electron wave functions. The
calculation typically includes the Berry curvature contribution, also in the
case where the system is described by the two-band minimal model [9]. In the
present paper, we show that the geometrical and topological properties of the
specimen, as well as the BCs, play an important role in the problem of
magnetization even for a macroscopic specimen.
|
A conventional approach to train neural ordinary differential equations
(ODEs) is to fix an ODE solver and then learn the neural network's weights to
optimize a target loss function. However, such an approach is tailored to a
specific discretization method and its properties, which may not be optimal
for the selected application and may lead to overfitting to the given
solver. In our paper, we investigate how variability in the solver space can
improve the performance of neural ODEs. We consider a family of Runge-Kutta
methods that are parameterized by no more than two scalar variables. Based
on the solvers' properties, we propose an approach to decrease the
overfitting of neural ODEs to the pre-defined solver, along with a criterion
to evaluate such behaviour.
Moreover, we show that the right choice of solver parameterization can
significantly affect neural ODEs models in terms of robustness to adversarial
attacks. Recently it was shown that neural ODEs demonstrate superiority over
conventional CNNs in terms of robustness. Our work demonstrates that the model
robustness can be further improved by optimizing solver choice for a given
task. The source code to reproduce our experiments is available at
https://github.com/juliagusak/neural-ode-metasolver.
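
For illustration, one familiar one-parameter Runge-Kutta family of the kind
such a search could range over is sketched below (the paper's exact
parameterization may differ): every $\alpha \neq 0$ gives a second-order
explicit method, with $\alpha = 1/2$ the midpoint rule and $\alpha = 1$
Heun's method.

```python
# One-parameter family of explicit second-order Runge-Kutta methods.
import numpy as np

def rk2_step(f, y, t, h, alpha):
    k1 = f(t, y)
    k2 = f(t + alpha * h, y + alpha * h * k1)
    return y + h * ((1.0 - 1.0 / (2.0 * alpha)) * k1
                    + (1.0 / (2.0 * alpha)) * k2)

def integrate(f, y0, t0, t1, n_steps, alpha):
    y, t = np.asarray(y0, dtype=float), t0
    h = (t1 - t0) / n_steps
    for _ in range(n_steps):
        y = rk2_step(f, y, t, h, alpha)
        t += h
    return y

# Sanity check on y' = -y, y(0) = 1: both members approximate exp(-1).
f = lambda t, y: -y
for alpha in (0.5, 1.0):
    print(alpha, integrate(f, [1.0], 0.0, 1.0, 100, alpha))
```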
|
In this article, we present the relationship between environmental
pollution and the income level of twenty-four selected countries. We
implemented a data-based research analysis where, for each country, we analyzed
the related data for fifty-six years, from 1960 to 2016, to assess the
relationship between the carbon emission and income level. After performing the
related data analysis for each country, we concluded whether the results for
that country were in line with the Environmental Kuznets Curve (EKC)
hypothesis. The EKC hypothesis suggests that the carbon emission per capita
starts a declining trend when the country-specific high level of income is
reached. The results of our data analyses show that the EKC hypothesis is valid
for high-income countries and the declining trends of carbon emission are
clearly observed when the income level reaches a specific, high enough
level. On the other hand, for the non-high-income countries, our analysis
shows that it is too early to make an assessment at this stage of their
economic growth because they have not yet reached the related high-enough
income per capita levels. Furthermore, we performed two additional analyses
on
high-income countries. First, we analyzed the related starting years of their
carbon emission declining trends. The big variance in the starting years of the
carbon emission declining trends shows that the international policies are
clearly ineffective in initiating the declining trend in carbon emission. In
addition, for the high-income countries, we explained the differences in their
carbon emission per capita levels in 2014 with their SGI indices and their
dependence on high-carbon emission energy production.
|
Momentum strategies are an important part of alternative investments and are
at the heart of commodity trading advisors (CTAs). These strategies have,
however, been found to have difficulties adjusting to rapid changes in market
conditions, such as during the 2020 market crash. In particular, immediately
after momentum turning points, where a trend reverses from an uptrend
(downtrend) to a downtrend (uptrend), time-series momentum (TSMOM) strategies
are prone to making bad bets. To improve the response to regime change, we
introduce a novel approach, where we insert an online changepoint detection
(CPD) module into a Deep Momentum Network (DMN) [1904.04912] pipeline, which
uses an LSTM deep-learning architecture to simultaneously learn both trend
estimation and position sizing. Furthermore, our model is able to optimise
the way in which it balances 1) a slow momentum strategy, which exploits
persisting trends but does not overreact to localised price moves, and 2) a
fast mean-reversion regime, in which it quickly flips its position and then
swaps it back again to exploit localised price moves. Our CPD module outputs
a changepoint location and severity score, allowing our model to learn to
respond to varying degrees of disequilibrium, or smaller and more localised
changepoints, in a data-driven manner. Back-testing our model over the
period
1995-2020, the addition of the CPD module leads to an improvement in Sharpe
ratio of one-third. The module is especially beneficial in periods of
significant nonstationarity, and in particular, over the most recent years
tested (2015-2020) the performance boost is approximately two-thirds. This is
interesting as traditional momentum strategies have been underperforming in
this period.
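
To illustrate the interface between the CPD module and the forecasting
model, the sketch below uses a simple two-sided CUSUM statistic as a
stand-in for the paper's changepoint detector; it emits a severity score and
a time-since-changepoint feature that can be appended to the inputs of a
DMN-style model. The reference mean and thresholds are illustrative.

```python
# CUSUM stand-in for an online CPD module feeding a trading model.
import numpy as np

def cusum_features(returns, mu0=0.0, k=0.5, h=5.0):
    """Severity scores in [0, 1] and time since the last changepoint."""
    g_pos = g_neg = 0.0
    since, severity, t_since = 0, [], []
    for r in returns:
        g_pos = max(0.0, g_pos + (r - mu0) - k)   # upward-shift statistic
        g_neg = max(0.0, g_neg - (r - mu0) - k)   # downward-shift statistic
        s = min(max(g_pos, g_neg) / h, 1.0)
        if s >= 1.0:                              # declare changepoint, reset
            g_pos = g_neg = 0.0
            since = 0
        else:
            since += 1
        severity.append(s)
        t_since.append(since)
    return np.array(severity), np.array(t_since)

rng = np.random.default_rng(7)
rets = np.concatenate([rng.normal(0.5, 1.0, 200),    # uptrend regime
                       rng.normal(-0.5, 1.0, 200)])  # reversal at t = 200
sev, tsc = cusum_features(rets, mu0=0.5)             # reference: pre-change mean
features = np.column_stack([rets, sev, tsc])         # per-step model inputs
print(features.shape, np.flatnonzero(sev >= 1.0)[:3])  # most alarms after t=200
```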
|
The forthcoming generation of multi-petawatt lasers opens the way to abundant
pair production by the nonlinear Breit-Wheeler process, i.e., the decay of a
photon into an electron-positron pair inside an intense laser field. In this
paper we explore the optimal conditions for Breit-Wheeler pair production in
the head-on collision of a laser pulse with gamma photons. The role of the
laser peak intensity versus the focal spot size and shape is examined while
keeping the laser energy constant to match experimental constraints. A
simple model for the
soft-shower case, where most pairs originate from the decay of the initial
gamma photons, is derived. This approach provides us with a semi-analytical
model for more complex situations involving either Gaussian or Laguerre-Gauss
(LG) laser beams. We then explore the influence of the order of the LG beams on
pair creation. Finally, we find that, above a given threshold, a larger
spot size (or a higher order in the case of LG laser beams) is more
favorable than a higher peak intensity. Our results agree very well with
three-dimensional particle-in-cell simulations and can be used to guide
upcoming experimental campaigns.
|
Novel view synthesis is a long-standing problem in machine learning and
computer vision. Significant progress has recently been made in developing
neural scene representations and rendering techniques that synthesize
photorealistic images from arbitrary views. These representations, however, are
extremely slow to train and often also slow to render. Inspired by neural
variants of image-based rendering, we develop a new neural rendering approach
with the goal of quickly learning a high-quality representation which can also
be rendered in real-time. Our approach, MetaNLR++, accomplishes this by using a
unique combination of a neural shape representation and 2D CNN-based image
feature extraction, aggregation, and re-projection. To push representation
convergence times down to minutes, we leverage meta learning to learn neural
shape and image feature priors which accelerate training. The optimized shape
and image features can then be extracted using traditional graphics techniques
and rendered in real time. We show that MetaNLR++ achieves similar or better
novel view synthesis results in a fraction of the time that competing methods
require.
|
The integral model of a GU(n-1,1) Shimura variety carries a universal abelian
scheme over it, and the dual top exterior power of its Lie algebra carries a
natural hermitian metric. We express the arithmetic volume of this metrized
line bundle, defined as an iterated self-intersection in the Gillet-Soul\'e
arithmetic Chow ring, in terms of logarithmic derivatives of Dirichlet
L-functions.
|
A recent line of work has studied the relationship between the Wishart matrix
$X^\top X$, where $X\in \mathbb{R}^{d\times n}$ has i.i.d. standard Gaussian
entries, and the corresponding Gaussian matrix with independent entries above
the diagonal. Jiang and Li (2015) and Bubeck et al. (2016) showed that these
two matrix ensembles converge in total variation whenever $d/n^3\to \infty$,
and Bubeck et al. (2016) showed this to be sharp. In this paper we aim to
identify the precise threshold for $d$ in terms of $n$ for subsets of Wishart
matrices to converge in total variation to independent Gaussians. It turns out
that the combinatorial structure of the revealed entries, viewed as the
adjacency matrix of a graph $G$, characterizes the distance from fully
independent. Specifically, we show that the threshold for $d$ depends on the
number of various small subgraphs in $G$. So, even when the number of revealed
entries is fixed, the threshold can vary wildly depending on their
configuration. Convergence of masked Wishart to independent Gaussians thus
inherently involves an interplay between both probabilistic and combinatorial
phenomena. Our results determine the sharp threshold for a large family of $G$,
including Erd\H{o}s-R\'enyi $G\sim \mathcal{G}(n,p)$ at all values $p\gtrsim
n^{-2}\mathrm{polylog}(n)$. Our proof techniques are both combinatorial and
information theoretic, which together allow us to carefully unravel the
dependencies in the masked Wishart ensemble.
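
A small numerical illustration of the regime discussed above (ours, not
from the paper): when $d \gg n^3$, the normalized off-diagonal entries of
$X^\top X$ are statistically close to independent standard Gaussians.

```python
# Empirical check of Wishart off-diagonal entries against N(0,1).
import numpy as np

rng = np.random.default_rng(0)
d, n = 200_000, 10                      # d >> n^3 = 1000
X = rng.standard_normal((d, n))
W = X.T @ X

offdiag = W[np.triu_indices(n, k=1)] / np.sqrt(d)
print("mean %.3f  std %.3f" % (offdiag.mean(), offdiag.std()))
# Both should be close to 0 and 1 respectively, as for Gaussian entries.
```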
|
The realization of multifunctional two-dimensional (2D) materials is
fundamentally intriguing, such as combination of piezoelectricity with
topological insulating phase or ferromagnetism. In this work, a Janus monolayer
$\mathrm{SrAlGaSe_4}$ is built from 2D $\mathrm{MA_2Z_4}$ family with dynamic,
mechanical and thermal stabilities, which is piezoelectric due to lacking
inversion symmetry. The unstrained $\mathrm{SrAlGaSe_4}$ monolayer is a narrow
gap normal insulator (NI) with spin orbital coupling (SOC). However, the NI to
topological insulator (TI) phase transition can be induced by the biaxial
strain, and a piezoelectric quantum spin Hall insulator (PQSHI) can be
achieved. More excitingly, the phase transition point lies at only about
1.01 tensile strain, and the nontrivial band topology holds up to the
considered 1.16 tensile strain. Moreover, a Rashba spin splitting in the
conduction bands can exist in the PQSHI due to the absence of a horizontal
mirror symmetry and the presence of SOC. For monolayer
$\mathrm{SrAlGaSe_4}$, both in-plane and much weaker out-of-plane
piezoelectric polarizations can be induced by an applied uniaxial strain.
The calculated piezoelectric strain coefficients $d_{11}$ and
$d_{31}$ of monolayer $\mathrm{SrAlGaSe_4}$ are -1.865 pm/V and -0.068 pm/V at
1.06 tensile strain as a representative TI. In fact, many PQSHIs can be
realized from 2D $\mathrm{MA_2Z_4}$ family. To confirm that, similar to
$\mathrm{SrAlGaSe_4}$, the coexistence of piezoelectricity and topological
orders can be realized by strain (about 1.04 tensile strain) in the
$\mathrm{CaAlGaSe_4}$ monolayer. Our work suggests that the Janus monolayer
$\mathrm{SrAlGaSe_4}$ is a pure 2D system for PQSHI, enabling future studies
exploring the interplay between piezoelectricity and topological orders, which
can lead to novel applications in electronics and spintronics.
|
Hyperfine-structure constants of odd Ra$^{+}$ due to the interactions of
nuclear magnetic dipole, electric quadrupole, and magnetic octupole moments
with the electrons are investigated in the framework of relativistic
coupled-cluster method within single- and double-excitation approximation. The
calculated energies and magnetic dipole hyperfine-structure constants $A$
exhibit good agreement with available experimental values. Combining our
results with the experimental electric quadrupole hyperfine-structure
constants, we also extract the electric quadrupole moments $Q$ of
$^{209,211,221,223}$Ra. Our
$Q$($^{221}$Ra) and $Q$($^{223}$Ra) are consistent with the referenced values
from a semi-empirical analysis (Z. Phys. D: At., Mol. Clusters 11, 105 (1988)),
but $Q(^{211}$Ra)=$0.33(2)$ is smaller than the referenced value $0.48(4)$ by
about 30\%. Furthermore, we assess the contribution of the magnetic
octupole moment to the hyperfine splitting. The sensitivity of
hyperfine-structure interval measurements in $^{223}$Ra$^{+}$ required to
reveal the effect of the nuclear octupole moment is found to be on the order
of kHz.
|
Person search has recently emerged as a challenging task that jointly
addresses pedestrian detection and person re-identification. Existing
approaches follow a fully supervised setting where both bounding box and
identity annotations are available. However, annotating identities is
labor-intensive, limiting the practicability and scalability of current
frameworks. This paper introduces the novel task of weakly supervised
person search with only bounding box annotations. We propose to address this
task by
investigating three levels of context clues (i.e., detection, memory and scene)
in unconstrained natural images. The first two are employed to promote local
and global discriminative capabilities, while the latter enhances clustering
accuracy. Despite its simple design, our CGPS achieves 80.0% in mAP on
CUHK-SYSU, boosting the baseline model by 8.8%. Surprisingly, it even achieves
comparable performance with several supervised person search models. Our code
is available at https://github.com/ljpadam/CGPS
|
In the setting of Carnot groups, we prove the $q$-Logarithmic Sobolev
inequality for probability measures as a function of the Carnot-Carath\'eodory
distance. As an application, we use the Hamilton-Jacobi equation in the setting
of Carnot groups to prove the $p$-Talagrand inequality and
hypercontractivity.
|
The exploration of germanium (Ge) detectors with amorphous Ge (a-Ge) contacts
has drawn attention in searches for rare-event physics such as dark matter
and neutrinoless double-beta decay. The charge barrier height (CBH) of the
a-Ge contacts deposited on the detector surface is crucial for suppressing
the leakage current of the detector in order to achieve a low-energy
detection threshold and high energy resolution. The temperature-dependent
CBH of a-Ge contacts for
three Ge detectors is analyzed to study the bulk leakage current (BLC)
characteristics. The detectors were fabricated at the University of South
Dakota using homegrown crystals. The CBH is determined from the BLC when the
detectors are operated in the reverse bias mode with a guard-ring structure,
which separates the BLC from the surface leakage current (SLC). The results
show that the CBH is temperature dependent. The variation of the CBH with
temperature is attributed to barrier inhomogeneities created at the
interface of a-Ge and crystalline Ge. The inhomogeneities that occur at the
interface were analyzed using the Gaussian distribution model for three
detectors. The CBH of the a-Ge contacts is extrapolated to zero
temperature. The
implication of the CBH at zero temperature is discussed for Ge detectors with
a-Ge contacts in searching for rare-event physics.
|
Automated cooking machines are a goal for the future. The main aim is to
make the cooking process easier and safer, and to improve human welfare. To
allow robots to accurately perform cooking activities, it is important for
them to understand the cooking environment and recognize the objects in it,
especially to correctly identify the state of the cooking objects. This will
significantly improve the correctness of following cooking recipes. In this
project, several experiments were conducted to design a robust deep
convolutional neural network for classifying the state of the cooking objects
from scratch. The model is refined using various techniques, such as
adjusting architecture layers, tuning key hyperparameters, and using
different optimization methods, to maximize the accuracy of state
classification.
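
A minimal sketch of the kind of from-scratch CNN state classifier described
above; the input resolution, layer widths, and number of cooking-state
classes are illustrative assumptions, not the project's final architecture.

```python
# Small from-scratch CNN classifier for cooking-object states.
import torch
import torch.nn as nn

class StateClassifier(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),
            nn.Linear(128 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = StateClassifier()
logits = model(torch.randn(4, 3, 128, 128))   # dummy batch of 4 RGB images
print(logits.shape)                           # torch.Size([4, 7])
```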
|
Research in Cognitive Science suggests that humans understand and represent
knowledge of the world through causal relationships. In addition to
observations, they can rely on experimenting and counterfactual reasoning --
i.e. referring to an alternative course of events -- to identify causal
relations and explain atypical situations. Different instances of control
systems, such as smart homes, would benefit from having a similar causal model,
as it would help the user understand the logic of the system and better react
when needed. However, while data-driven methods achieve high levels of
correlation detection, they mainly fall short of finding causal relations,
notably being limited to observations only. Notably, they struggle to identify
the cause from the effect when detecting a correlation between two variables.
This paper introduces a new way to learn causal models from a mixture of
experiments on the environment and observational data. The core of our
method is the use of selected interventions; in particular, unlike other
approaches, our learning takes into account variables on which it is
impossible to intervene. The causal model we obtain is then used to generate
Causal Bayesian Networks, which can later be used to perform diagnostic and
predictive inference. We use our method on a smart home simulation, a use
case where knowing causal relations paves the way towards explainable
systems. Our
algorithm succeeds in generating a Causal Bayesian Network close to the
simulation's ground truth causal interactions, showing encouraging prospects
for application in real-life systems.
|
Studies of neutron stars are at their peak after the multi-messenger
observation of the binary merger event GW170817, which strongly constrains
the stellar parameters like tidal deformability, masses and radii. Although
current and future observations will provide stronger limits on the
neutron-star parameters, knowledge of explicit interior solutions to
Einstein's equations, which connect observed parameters with the internal
structure, is crucial to have a satisfactory description of the interior of
these compact objects. A
well known exact solution, which has shown a relatively good approximation to a
neutron star, is the Tolman VII solution. In order to provide a better fitting
for the energy density profile, with the realistic equations of state for
neutron stars, recently Jiang and Yagi proposed a modified version of this
model which introduces an additional parameter $\alpha$ reflecting the
interplay of the quadratic and the newly added quartic term in the energy
density profile. Here we study the dynamical stability of this modified Tolman
VII solution using the theory of infinitesimal and adiabatic radial
oscillations developed by Chandrasekhar. For this purpose, we determine values
of the critical adiabatic index, for the onset of instability, considering
configurations with varying compactness and $\alpha$. We found that the new
models are stable against radial oscillations for a considerable range of
values of compactness and the new parameter $\alpha$, thus supporting their
applicability as a physically plausible approximation of realistic neutron
stars.
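
For concreteness, a quartic extension of the Tolman VII energy density
matching the description above can be written (this parameterization is our
reading of the Jiang-Yagi construction and should be treated as an
assumption) as
\[
  \rho(r) = \rho_c \left[ 1 - \alpha\, x^{2} + (\alpha - 1)\, x^{4} \right],
  \qquad x = \frac{r}{R},
\]
which vanishes at the surface $x = 1$ for every $\alpha$ and reduces to the
original Tolman VII profile $\rho = \rho_c(1 - x^2)$ at $\alpha = 1$; here
$\rho_c$ is the central density and $R$ the stellar radius.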
|
We present single-mode nanowire (NW) lasers with ultralow threshold in the
near-infrared spectral range. To ensure the single-mode operation, the NW
diameter and length are reduced specifically to minimize the longitudinal and
transverse modes of the NW cavity. Increased optical losses and reduced gain
volume by the dimension reduction are compensated by excellent NW morphology
and InGaAs/GaAs multi-quantum disks. At 5 K, a threshold as low as 1.6
{\mu}J/cm2
per pulse is achieved with a resulting quality factor exceeding 6400. By
further passivating the NW with an AlGaAs shell to suppress surface
non-radiative recombination, single-mode lasing operation is obtained with a
threshold of only 48 {\mu}J/cm2 per pulse at room temperature with a high
characteristic temperature of 223 K and power output of ~ 0.9 {\mu}W. These
single-mode, ultralow threshold, high power output NW lasers are promising for
the development of near-infrared nanoscale coherent light sources for
integrated photonic circuits, sensing, and spectroscopy.
|
In reconfigurable intelligent surfaces (RISs) aided communications, the
existing passive beamforming (PB) design involves polynomial complexity in
the number of reflecting elements, and is thus difficult to implement when
that number is massive. To overcome this difficulty, we propose
a reflection-angle-based cascaded channel model by adopting the generalized
Snell's law, in which the dimension of the variable space involved in
optimization is significantly reduced, resulting in a simplified hierarchical
passive beamforming (HPB) design. We develop an efficient two-stage HPB
algorithm, which exploits the angular domain property of the channel, to
maximize the achievable rate of the target user. Simulation results demonstrate
the appealing performance and low complexity of the proposed HPB design.
|
In this paper a methodology is described to estimate multigroup neutron
source distributions which must be added into a subcritical system to drive it
to a steady state prescribed power distribution. This work has been motivated
by the principle of operation of the ADS (Accelerator Driven System) reactors,
which have subcritical cores stabilized by the action of external sources. We
use the energy multigroup two-dimensional neutron transport equation in the
discrete ordinates formulation (SN) and the equation which is adjoint to it,
whose solution is interpreted here as a distribution measuring the importance
of the angular flux of neutrons to a linear functional. These equations are
correlated through a reciprocity relation, leading to a relationship between
the interior sources of neutrons and the power produced per unit length of
height of the domain. A coarse-mesh numerical method of the spectral nodal
class, referred to as adjoint response matrix constant-nodal method, is applied
to numerically solve the adjoint SN equations. Numerical experiments are
performed to analyze the accuracy of the present methodology so as to
illustrate its potential practical applications.
|
We show that in three-dimensional (3D) topological metals, a subset of the
van Hove singularities of the density of states sits exactly at the transitions
between topological and trivial gapless phases. We may refer to these as
topological van Hove singularities. By investigating two minimal models, we
show that they originate from energy saddle points located between Weyl points
with opposite chiralities, and we illustrate their topological nature through
their magnetotransport properties in the ballistic regime. We exemplify the
relation between van Hove singularities and topological phase transitions in
Weyl systems by analyzing the 3D Hofstadter model, which offers a simple and
interesting playground to consider different kinds of Weyl metals and to
understand the features of their density of states. In this model, as a
function of the magnetic flux, the occurrence of topological van Hove
singularities can be explicitly checked.
|
In this paper we present a methodology for data accesses when solving batches
of Tridiagonal and Pentadiagonal matrices that all share the same
left-hand-side (LHS) matrix. The intended application is to the numerical
solution of Partial Differential Equations via the finite-difference method,
although the methodology is applicable more broadly. By only storing one copy
of this matrix, a significant reduction in storage overheads is obtained,
together with a corresponding decrease in compute time. Taken together, these
two performance enhancements lead to an overall more efficient implementation
over the current state-of-the-art algorithms cuThomasBatch and cuPentBatch,
allowing for a greater number of systems to be solved on a single GPU. We
demonstrate the methodology in the case of the Diffusion Equation,
Hyperdiffusion Equation, and the Cahn--Hilliard Equation, all in one spatial
dimension. In this last example, we demonstrate how the method can be used to
perform $2^{20}$ independent simulations of phase separation in one dimension.
In this way, we build up a robust statistical description of the coarsening
phenomenon which is the defining behavior of phase separation. We anticipate
that the method will be of further use in other similar contexts requiring
statistical simulation of physical systems.
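
A minimal NumPy sketch of the storage idea: the shared tridiagonal LHS is
forward-eliminated once by the Thomas algorithm, and the resulting factors
are reused across an entire batch of right-hand sides (the paper targets
GPUs; this CPU sketch only illustrates the data layout and reuse).

```python
# One shared LHS factorization, many right-hand sides.
import numpy as np

def thomas_factor(a, b, c):
    """Forward-eliminate the shared LHS once.
    a: sub-, b: main-, c: super-diagonal."""
    n = b.size
    cp, denom = np.empty(n - 1), np.empty(n)
    denom[0] = b[0]
    for i in range(1, n):
        cp[i - 1] = c[i - 1] / denom[i - 1]
        denom[i] = b[i] - a[i - 1] * cp[i - 1]
    return cp, denom

def thomas_solve_batch(a, cp, denom, D):
    """Solve for every RHS in D (shape: n x batch) with the shared factors."""
    n, _ = D.shape
    Y = np.empty_like(D)
    Y[0] = D[0] / denom[0]
    for i in range(1, n):                      # forward substitution
        Y[i] = (D[i] - a[i - 1] * Y[i - 1]) / denom[i]
    X = np.empty_like(D)
    X[-1] = Y[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        X[i] = Y[i] - cp[i] * X[i + 1]
    return X

n, batch = 64, 1024
rng = np.random.default_rng(1)
a, c = -np.ones(n - 1), -np.ones(n - 1)
b = 2.0 * np.ones(n) + 0.1                     # diagonally dominant system
D = rng.standard_normal((n, batch))
cp, denom = thomas_factor(a, b, c)
X = thomas_solve_batch(a, cp, denom, D)
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
print(np.allclose(A @ X[:, 0], D[:, 0]))       # True: residual check
```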
|
Neural approaches have achieved state-of-the-art accuracy on machine
translation but suffer from the high cost of collecting large scale parallel
data. Thus, a lot of research has been conducted for neural machine translation
(NMT) with very limited parallel data, i.e., the low-resource setting. In this
paper, we provide a survey for low-resource NMT and classify related works into
three categories according to the auxiliary data they used: (1) exploiting
monolingual data of source and/or target languages, (2) exploiting data from
auxiliary languages, and (3) exploiting multi-modal data. We hope that our
survey can help researchers to better understand this field and inspire them to
design better algorithms, and help industry practitioners to choose appropriate
algorithms for their applications.
|
Graphite is a ubiquitous electrode material with particular promise for use
in, e.g., energy storage and desalination devices, but very little is known
about the properties of the graphite-electrolyte double layer at
technologically relevant concentrations. Here, the (electrified)
graphite-NaCl(aq) interface was examined using constant chemical potential
molecular dynamics simulations; this approach avoids ion depletion (due to
surface adsorption) and maintains a constant concentration of ions beyond the
surface. Specific Na+ adsorption at the graphite basal surface causes charging
of the interface in the absence of an applied potential. At moderate bulk
concentrations, this leads to accumulation of counter-ions in a diffuse layer
to balance the effective surface charge, consistent with established models of
the electrical double layer (DL). Beyond 0.6 M, however, a combination of
over-screening and ion crowding in the DL results in alternating compact layers
of ion density perpendicular to the interface. The transition to this regime is
marked by an increasing DL size and anomalous negative shifts to the potential
of zero charge with incremental changes to the bulk concentration. Our
observations are supported by changes to the position of the differential
capacitance minimum measured by electrochemical impedance spectroscopy.
Furthermore, a striking level of agreement between the differential capacitance
from simulations and experiments allows us to critically assess the accepted
norm that electrochemical capacitance measurements report simply on the density
of states of the graphite material. Finally, ion crowding at the highest
concentrations (beyond 5 M) leads to the formation of liquid-like NaCl clusters
confined to highly non-ideal regions of the double layer, where ion diffusion
is up to five times slower than in the bulk.
|
The distance matrix $\mathcal{D}$ of a connected graph $G$ is the matrix
indexed by the vertices of $G$ whose entry $\mathcal{D}_{i,j}$ equals the
distance between the vertices $v_i$ and $v_j$. The distance signless Laplacian
matrix $\mathcal{Q}(G)$ of graph $G$ is defined as
$\mathcal{Q}(G)=Diag(Tr)+\mathcal{D}(G)$, where $Diag(Tr)$ is the diagonal
matrix of the vertex transmissions in $G$. The largest eigenvalue of
$\mathcal{Q}(G)$ is called the distance signless Laplacian spectral radius of
$G$, written as $\eta_1(G)$. A perfect matching in a graph is a set of
pairwise nonadjacent edges covering every vertex of $G$. In this paper, we
present two sufficient conditions, in terms of the distance signless
Laplacian spectral radius, for the existence of perfect matchings in graphs
and bipartite graphs.
|
A continuum, post-deposition mesoscopic model of a Moir\'e-regulated
self-assembly of metal nanoclusters on a twisted bilayer graphene is presented.
Quasi-two-dimensional nanocluster-like steady states at a low adsorbate
coverage are analytically determined for Pt, Ni, and Pb adsorbates,
indicating that nanoclusters self-assemble at the centers of the Moir\'e
cells. This is followed by computations of the nanocluster self-assembly
dynamics. Differences in the
self-assembly efficiency for three chosen metals are highlighted across three
typical values of an initial submonolayer coverage and for three temperature
regimes. Accounting for the adsorption potential of metal atoms onto
graphene leads to a significantly faster nanocluster self-assembly and has a
transient impact on the nanocluster morphologies. Extensions of the model to
the cases of nanocluster self-assembly on a Moir\'e pattern formed by
monolayer graphene over a metal substrate, and of electromigration-guided
self-assembly on such a Moir\'e pattern, are proposed.
|
The inviscid limit for the two-dimensional compressible viscoelastic
equations on the half plane is considered under the no-slip boundary condition.
When the initial deformation tensor is a perturbation of the identity matrix
and the initial density is near a positive constant, we establish the uniform
estimates of solutions to the compressible viscoelastic flows in the conormal
Sobolev spaces. It is well-known that for the corresponding inviscid limit of
the compressible Navier-Stokes equations with the no-slip boundary condition,
one does not expect the uniform energy estimates of solutions due to the
appearance of strong boundary layers. However, when the deformation tensor
effect is taken into account, our results show that the deformation tensor
plays an important role in the vanishing viscosity process and can prevent the
formation of strong boundary layers. As a result we are able to justify the
inviscid limit of solutions for the compressible viscous flows under the
no-slip boundary condition governed by the viscoelastic equations, based on the
uniform conormal regularity estimates achieved in this paper.
|
This paper develops an efficient procedure for designing low-complexity
codebooks for precoding in a full-dimension (FD) multiple-input multiple-output
(MIMO) system with a uniform planar array (UPA) antenna at the transmitter (Tx)
using tensor learning. In particular, instead of using statistical channel
models, we utilize a model-free data-driven approach with foundations in
machine learning to generate codebooks that adapt to the surrounding
propagation conditions. We use a tensor representation of the FD-MIMO channel
and exploit its properties to design a quantized version of the channel
precoders. We find the best representation of the optimal precoder as a
function of Kronecker Product (KP) of two low-dimensional precoders,
respectively corresponding to the horizontal and vertical dimensions of the
UPA, obtained from the tensor decomposition of the channel. We then quantize
this precoder to design product codebooks such that an average loss in mutual
information due to quantization of channel state information (CSI) is
minimized. The key technical contribution lies in exploiting the constraints on
the precoders to reduce the product codebook design problem to an unsupervised
clustering problem on a Cartesian Product Grassmann manifold (CPM), where the
cluster centroids form a finite-sized precoder codebook. This codebook can be
found efficiently by running a $K$-means clustering on the CPM. With a suitable
induced distance metric on the CPM, we show that the construction of product
codebooks is equivalent to finding the optimal set of centroids on the factor
manifolds corresponding to the horizontal and vertical dimensions. Simulation
results are presented to demonstrate the capability of the proposed design
criterion in learning the codebooks and the attractive performance of the
designed codebooks.
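
The clustering step can be sketched as a K-means on a Grassmann manifold
under the chordal distance, with centroids taken as dominant eigenspaces of
averaged projectors; dimensions and codebook size below are illustrative,
and the paper runs this on a Cartesian product of two such manifolds (the
horizontal and vertical factors).

```python
# K-means on a Grassmann manifold with the chordal distance.
import numpy as np

def chordal_dist2(U, V):
    """Squared chordal distance between k-dim subspaces (orthonormal bases)."""
    k = U.shape[1]
    return k - np.linalg.norm(U.conj().T @ V, 'fro') ** 2

def grassmann_centroid(subspaces, k):
    """Centroid = dominant k-dim eigenspace of the average projector."""
    P = sum(U @ U.conj().T for U in subspaces) / len(subspaces)
    eigvals, eigvecs = np.linalg.eigh(P)
    return eigvecs[:, -k:]                     # top-k eigenvectors

def grassmann_kmeans(samples, n_clusters, k, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = [samples[i]
                 for i in rng.choice(len(samples), n_clusters, False)]
    for _ in range(n_iters):
        labels = [min(range(n_clusters),
                      key=lambda j: chordal_dist2(U, centroids[j]))
                  for U in samples]
        for j in range(n_clusters):
            members = [U for U, l in zip(samples, labels) if l == j]
            if members:
                centroids[j] = grassmann_centroid(members, k)
    return centroids                           # the learned precoder codebook

# Toy data: random k-dim subspaces of C^m via QR of Gaussian matrices.
m, k = 8, 2
rng = np.random.default_rng(3)
samples = [np.linalg.qr(rng.standard_normal((m, k))
                        + 1j * rng.standard_normal((m, k)))[0]
           for _ in range(200)]
codebook = grassmann_kmeans(samples, n_clusters=16, k=k)
print(len(codebook), codebook[0].shape)        # 16 codewords of shape (8, 2)
```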
|
We establish a framework for doing second order conformal perturbation theory
for the symmetric orbifold Sym$^N(T^4)$ to all orders in $N$. This allows us to
compute how 1/4-BPS states of the D1-D5 system on $AdS_3\times S^3\times T^4$
are lifted as we move away from the orbifold point. As an application we
confirm a previous observation that in the large $N$ limit not all 1/4-BPS
states that can be lifted do get lifted. This provides evidence that the
supersymmetric index actually undercounts the number of 1/4-BPS states at a
generic point in the moduli space.
|
Well ordered covers of square-free monomial ideals are subsets of the minimal
generating set ordered in a certain way that give rise to a Lyubeznik
resolution for the ideal, and have guaranteed nonvanishing Betti numbers in
certain degrees. This paper is about square-free monomial ideals which have a
well ordered cover. We consider the question of subadditivity of syzygies of
square-free monomial ideals via complements in the lcm lattice of the ideal,
and examine how lattice complementation breaks well ordered covers of the ideal
into (well ordered) covers of subideals. We also introduce a family of well
ordered covers called strongly disjoint sets of simplicial bouquets
(generalizing work of Kimura on graphs), which are relatively easy to identify
in simplicial complexes. We examine the subadditivity property via numerical
characteristics of these bouquets.
|
Topological phases of matter are an exotic phenomenon in modern condensed
matter physics that has attracted much attention due to unique boundary
states and transport properties. Recently, this topological concept from
electronic materials has been exploited in many other fields of physics.
Motivated by designing and controlling the behavior of electromagnetic waves
at optical, microwave, and sound frequencies, topological photonics has
emerged as a rapidly growing research field. Due to their flexibility and
diversity, superconducting quantum circuits are a promising platform to
realize exotic topological phases of matter and to probe and explore
topologically protected effects in new ways. Here, we review theoretical and
experimental advances of topological photonics on superconducting quantum
circuits via experimentally demonstrated parametric tunable coupling
techniques, including the use of superconducting transmission line
resonators, superconducting qubits, and coupled systems of the two. On
superconducting circuits, the flexible interactions and intrinsic
nonlinearity make topological photonics not only a simple photonic analog of
topological effects for novel devices, but also a realm of exotic yet
less-explored fundamental physics.
|
Given a query patch from a novel class, one-shot object detection aims to
detect all instances of that class in a target image through the semantic
similarity comparison. However, due to the extremely limited guidance in the
novel class as well as the unseen appearance difference between query and
target instances, it is difficult to appropriately exploit their semantic
similarity and generalize well. To mitigate this problem, we present a
universal Cross-Attention Transformer (CAT) module for accurate and efficient
semantic similarity comparison in one-shot object detection. The proposed CAT
utilizes transformer mechanism to comprehensively capture bi-directional
correspondence between any paired pixels from the query and the target image,
which empowers us to sufficiently exploit their semantic characteristics for
accurate similarity comparison. In addition, the proposed CAT enables feature
dimensionality compression for inference speedup without performance loss.
Extensive experiments on COCO, VOC, and FSOD under one-shot settings
demonstrate the effectiveness and efficiency of our method; e.g., it
surpasses CoAE, a major baseline in this task, by 1.0% in AP on COCO and
runs nearly 2.5 times faster. Code will be available in the future.
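
A minimal sketch of the bi-directional cross-attention at the heart of such
a module, written as generic transformer cross-attention (not the exact CAT
design); the feature dimension and head count are illustrative.

```python
# Query pixels attend to target pixels, and vice versa, so pixel-pair
# correspondences are captured in both directions.
import torch
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    def __init__(self, dim=256, n_heads=8):
        super().__init__()
        self.q2t = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.t2q = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, query_feats, target_feats):
        q_enh, _ = self.q2t(query_feats, target_feats, target_feats)
        t_enh, _ = self.t2q(target_feats, query_feats, query_feats)
        return q_enh, t_enh

# Flattened feature maps: 64 query-patch tokens, 400 target-image tokens.
cat = BidirectionalCrossAttention()
q = torch.randn(2, 64, 256)
t = torch.randn(2, 400, 256)
q_enh, t_enh = cat(q, t)
print(q_enh.shape, t_enh.shape)   # (2, 64, 256) (2, 400, 256)
```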
|
The scientific impact of current and upcoming photometric galaxy surveys is
contingent on our ability to obtain redshift estimates for large numbers of
faint galaxies. In the absence of spectroscopically confirmed redshifts,
broad-band photometric redshift point estimates (photo-$z$s) have been
superseded by photo-$z$ probability density functions (PDFs) that encapsulate
their nontrivial uncertainties. Initial applications of photo-$z$ PDFs in weak
gravitational lensing studies of cosmology have obtained the redshift
distribution function $\mathcal{N}(z)$ by employing computationally
straightforward stacking methodologies that violate the laws of probability. In
response, mathematically self-consistent models of varying complexity have been
proposed in an effort to answer the question, "What is the right way to obtain
the redshift distribution function $\mathcal{N}(z)$ from a catalog of photo-$z$
PDFs?" This letter aims to motivate adoption of such principled methods by
addressing the contrapositive of the more common presentation of such models,
answering the question, "Under what conditions do traditional stacking methods
successfully recover the true redshift distribution function $\mathcal{N}(z)$?"
By placing stacking in a rigorous mathematical environment, we identify two
such conditions: those of perfectly informative data and perfectly informative
prior information. Stacking has maintained its foothold in the astronomical
community for so long because the conditions in question were only weakly
violated in the past. These conditions, however, will be strongly violated by
future galaxy surveys. We therefore conclude that stacking must be abandoned in
favor of mathematically supported methods in order to advance observational
cosmology.
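
For reference, the stacking estimator in question is simply the average of
the per-galaxy photo-$z$ PDFs; the toy sketch below shows its mechanics only
(the letter's argument concerns when this average recovers the true
$\mathcal{N}(z)$, which the toy does not address).

```python
# Stacking: average the individual photo-z PDFs on a common grid.
import numpy as np

z_grid = np.linspace(0.0, 3.0, 301)
rng = np.random.default_rng(42)

# Toy catalog: each galaxy has a Gaussian photo-z PDF on the grid,
# with a common (1 + z)-scaled scatter model.
true_z = rng.uniform(0.2, 2.0, size=5000)
sigma = 0.05 * (1.0 + true_z)
pdfs = np.exp(-0.5 * ((z_grid[None, :] - true_z[:, None])
                      / sigma[:, None]) ** 2)
pdfs /= np.trapz(pdfs, z_grid, axis=1)[:, None]  # normalize each PDF

n_of_z_stacked = pdfs.mean(axis=0)               # the stacked estimator
print(np.trapz(n_of_z_stacked, z_grid))          # integrates to ~1
```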
|
Non-autoregressive models have boosted the efficiency of neural machine
translation through parallelized decoding, at the cost of effectiveness
compared with their autoregressive counterparts. In this paper, we claim
that the syntactic and semantic structures of natural language are critical
for non-autoregressive machine translation and can further improve the
performance.
However, these structures are rarely considered in the existing
non-autoregressive models. Inspired by this intuition, we propose to
incorporate the explicit syntactic and semantic structures of languages into a
non-autoregressive Transformer, for the task of neural machine translation.
Moreover, we also consider the intermediate latent alignment within target
sentences to better learn the long-term token dependencies. Experimental
results on two real-world datasets (i.e., WMT14 En-De and WMT16 En-Ro) show
that our model achieves a significantly faster speed while maintaining
translation quality, when compared with several state-of-the-art
non-autoregressive models.
|
We present Wi-Lo, which converts an ordinary 802.11 (WiFi) access point
into an internet of things (IoT) gateway supporting the low-power wide area
network (LPWAN) technology LoRa in the downlink. Our Wi-Lo system only
requires a software update and no additional hardware. It uses a signal
emulation technique based on complementary code keying modulation from
802.11b in order to emulate a downlink LoRa (long range) transmission. The
Wi-Lo gateway can be
used by a normal WiFi-enabled smartphone to send packets to LoRa compliant IoT
devices like smart sensors. We implemented a prototype using commodity WiFi
hardware. Experimental results show that Wi-Lo enables a normal WiFi node
to communicate with LoRa devices even over long distances, with performance
comparable to configurations using pure LoRa transmitters and receivers.
|
Real-world imagery is often characterized by a significant imbalance of the
number of images per class, leading to long-tailed distributions. An effective
and simple approach to long-tailed visual recognition is to learn feature
representations and a classifier separately, with instance and class-balanced
sampling, respectively. In this work, we introduce a new framework, by making
the key observation that a feature representation learned with instance
sampling is far from optimal in a long-tailed setting. Our main contribution is
a new training method, referred to as Class-Balanced Distillation (CBD), that
leverages knowledge distillation to enhance feature representations. CBD allows
the feature representation to evolve in the second training stage, guided by
the teacher learned in the first stage. The second stage uses class-balanced
sampling, in order to focus on under-represented classes. This framework can
naturally accommodate the usage of multiple teachers, unlocking the information
from an ensemble of models to enhance recognition capabilities. Our experiments
show that the proposed technique consistently outperforms the state of the art
on long-tailed recognition benchmarks such as ImageNet-LT, iNaturalist17 and
iNaturalist18. The experiments also show that our method does not sacrifice the
accuracy of head classes to improve the performance of tail classes, unlike
most existing work.
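
A minimal sketch of the two-stage recipe under simplifying assumptions
(single teacher, plain MSE feature distillation, illustrative loss weight):
an instance-sampled teacher's features guide a student trained with
class-balanced sampling in the second stage.

```python
# Feature distillation loss plus a class-balanced sampler for stage two.
import torch
import torch.nn.functional as F

def cbd_loss(student_feats, teacher_feats, logits, targets, lam=1.0):
    """Classification loss plus distillation loss on normalized features."""
    distill = F.mse_loss(F.normalize(student_feats, dim=1),
                         F.normalize(teacher_feats, dim=1))
    return F.cross_entropy(logits, targets) + lam * distill

def class_balanced_weights(labels, n_classes):
    """Per-sample weights so every class is drawn with equal probability."""
    counts = torch.bincount(labels, minlength=n_classes).clamp(min=1)
    return (1.0 / counts.float())[labels]

# Dummy check of the loss and a class-balanced sampler.
s, t = torch.randn(8, 128), torch.randn(8, 128)
logits, y = torch.randn(8, 10), torch.randint(0, 10, (8,))
print(cbd_loss(s, t, logits, y))

labels = torch.randint(0, 10, (1000,))
sampler = torch.utils.data.WeightedRandomSampler(
    class_balanced_weights(labels, n_classes=10), num_samples=1000)
```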
|
Learning representations of nodes in a low dimensional space is a crucial
task with numerous interesting applications in network analysis, including link
prediction, node classification, and visualization. Two popular approaches for
this problem are matrix factorization and random walk-based models. In this
paper, we aim to bring together the best of both worlds, towards learning node
representations. In particular, we propose a weighted matrix factorization
model that encodes random walk-based information about nodes of the network.
The benefit of this novel formulation is that it enables us to utilize
kernel functions without realizing the exact proximity matrix, thereby
enhancing the expressiveness of existing matrix decomposition methods with
kernels and alleviating their computational complexity. We extend the
approach with a
multiple kernel learning formulation that provides the flexibility of learning
the kernel as the linear combination of a dictionary of kernels in data-driven
fashion. We perform an empirical evaluation on real-world networks, showing
that the proposed model outperforms baseline node embedding algorithms in
downstream machine learning tasks.
|
Supply voltage scaling is one of the most effective techniques to reduce the
power consumption of microprocessors. However, technology limitations such as
aging and process variability force microprocessor designers to apply
pessimistic voltage guardbands to guarantee correct operation in the field for
any foreseeable workload. This worst-case design practice makes energy
efficiency hard to scale with technology evolution. Improving energy-efficiency
requires the identification of the chip design margins through time-consuming
and comprehensive characterization of its operational limits. Such a
characterization of state-of-the-art multi-core CPUs fabricated in aggressive
technologies is a multi-parameter process, which requires statistically
significant information. In this paper, we present an automated framework to
support system-level voltage and frequency scaling characterization of Applied
Micro's state-of-the-art ARMv8-based multicore CPUs used in the X-Gene 2
micro-server family. The fully automated framework can provide fine-grained
information of the system's state by monitoring any abnormal behavior that may
occur during reduced supply voltage conditions. We also propose a new metric to
quantify the behavior of a microprocessor when it operates beyond nominal
conditions. Our experimental results demonstrate potential uses of the
characterization framework to identify the limits of operation for improved
energy efficiency.
|
In this technical report, we present our solution of KDD Cup 2021 OGB
Large-Scale Challenge - PCQM4M-LSC Track. We adopt Graphormer and ExpC as our
basic models. We train each model by 8-fold cross-validation, and additionally
train two Graphormer models on the union of training and validation sets with
different random seeds. For final submission, we use a naive ensemble for these
18 models by taking the average of their outputs. Using our method, our team
MachineLearning achieved 0.1200 MAE on test set, which won the first place in
KDD Cup graph prediction track.
|
Touch data, and in particular text-entry data, has been mostly collected in
the laboratory, under controlled conditions. While touch and text-entry
data have consistently shown their potential for monitoring and detecting a
variety of conditions and impairments, their deployment in the wild remains
a challenge. In
this paper, we present WildKey, an Android keyboard toolkit that allows for the
usable deployment of in-the-wild user studies. WildKey is able to analyze
text-entry behaviors through implicit and explicit text-entry data collection
while ensuring user privacy. We detail each of the WildKey's components and
features, all of the metrics collected, and discuss the steps taken to ensure
user privacy and promote compliance.
|
We propose nonabelian higher-rank gauge theories in 2+1D and 3+1D. The gauge
group is constructed from the volume-preserving diffeomorphisms of space. We
show that the intriguing physics of the lowest Landau level (LLL) limit can be
interpreted as the consequences of the symmetry. We derive the renowned
Girvin-MacDonald-Platzman (GMP) algebra as well as the topological Wen-Zee term
within our formalism. Using the gauge symmetry in 2+1D, we derive the LLL
effective action of vortex crystal in rotating Bose gas as well as Wigner
crystal of electron in an applied magnetic field. We show that the nonlinear
sigma models of ferromagnets in 2+1D and 3+1D exhibit the higher-rank gauge
symmetries that we introduce in this paper. We interpret the fractonic behavior
of the excitations on the lowest Landau level and of skyrmions in ferromagnets
as the consequence of the higher-rank gauge symmetry.
|
For $G=G_{n, 1/2}$, the Erd\H{o}s--R\'enyi random graph, let $X_n$ be the
random variable representing the number of distinct partitions of $V(G)$ into
sets $A_1, \ldots, A_q$ so that the degree of each vertex in $G[A_i]$ is
divisible by $q$ for all $i\in[q]$. We prove that if $q\geq 3$ is odd then
$X_n\xrightarrow{d}{\mathrm{Po}(1/q!)}$, and if $q \geq 4$ is even then
$X_n\xrightarrow{d}{\mathrm{Po}(2^q/q!)}$. More generally, we show that the
distribution is still asymptotically Poisson when we require all degrees in
$G[A_i]$ to be congruent to $x_i$ modulo $q$ for each $i\in[q]$, where the
residues $x_i$ may be chosen freely. For $q=2$, the distribution is not
asymptotically Poisson, but it can be determined explicitly.
|
We report a theoretical study of the coherence dynamics of spin qubits in
two-dimensional materials (2DMs) and van-der-Waals heterostructures, as a
function of the host thickness and the composition of the surrounding
environment. We focus on MoS$_2$ and WS$_2$, two promising systems for quantum
technology applications, and we consider the decoherence arising from the
interaction of the spin qubit with nuclear spins. We show that the Hahn-echo
coherence time is determined by a complex interplay between the source of
decoherence in the qubit host and in the environment, which in turn determines
whether the noise evolution is in a classical or quantum mechanical regime. We
suggest that the composition and thickness of van-der-Waals heterostructures
encapsulating a qubit host can be engineered to maximize coherence times.
Finally, we discuss how quantum sensors may be able to probe the dynamics of
the nuclear bath in 2DMs.
|