Recent work has shown that many problems of satisfiability and resiliency in
workflows may be viewed as special cases of the authorization policy existence
problem (APEP), which returns an authorization policy if one exists and 'No'
otherwise. However, in many practical settings it would be more useful to
obtain a 'least bad' policy than just a 'No', where 'least bad' is
characterized by some numerical value indicating the extent to which the policy
violates the base authorization relation and constraints. Accordingly, we
introduce the Valued APEP, which returns an authorization policy of minimum
weight, where the (non-negative) weight is determined by the constraints
violated by the returned solution. We then establish a number of results
concerning the parameterized complexity of Valued APEP. We prove that the
problem is fixed-parameter tractable (FPT) if the set of constraints satisfies
two restrictions, but is intractable if only one of these restrictions holds.
(Most constraints known to be of practical use satisfy both restrictions.) We also introduce a new type of resiliency for the workflow satisfiability problem, show how it can be addressed using Valued APEP, and use this to build a set of
benchmark instances for Valued APEP. Following a set of computational
experiments with two mixed integer programming (MIP) formulations, we
demonstrate that the Valued APEP formulation based on the user profile concept
has FPT-like running time and usually significantly outperforms a naive
formulation.
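As a concrete illustration of what a weighted-violation ("valued") policy MIP looks like, here is a minimal sketch in Python with PuLP. It is not either of the paper's two formulations: the instance data (steps, users, the base authorization relation, one separation-of-duty constraint, and the violation weights) are all illustrative placeholders.

```python
# Minimal sketch of a Valued-APEP-style MIP: find the assignment of users to
# steps that minimizes the total weight of authorization and constraint
# violations. Instance data and weights are illustrative, not from the paper.
import pulp

steps = ["s1", "s2", "s3"]
users = ["u1", "u2"]
authorized = {("s1", "u1"), ("s2", "u2"), ("s3", "u1"), ("s3", "u2")}
sod_pairs = [("s1", "s2")]          # separation-of-duty: different users
W_AUTH, W_SOD = 1.0, 5.0            # non-negative violation weights

prob = pulp.LpProblem("valued_apep", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (steps, users), cat="Binary")
# v[(a,b)] = 1 if the SoD constraint on steps (a,b) is violated
v = pulp.LpVariable.dicts("v", sod_pairs, cat="Binary")

for s in steps:                      # every step gets exactly one user
    prob += pulp.lpSum(x[s][u] for u in users) == 1
for (a, b) in sod_pairs:             # same user on both steps => violation
    for u in users:
        prob += x[a][u] + x[b][u] - 1 <= v[(a, b)]

auth_cost = pulp.lpSum(W_AUTH * x[s][u]
                       for s in steps for u in users
                       if (s, u) not in authorized)
prob += auth_cost + pulp.lpSum(W_SOD * v[p] for p in sod_pairs)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: [u for u in users if x[s][u].value() == 1] for s in steps})
```

A solver returns the least-bad policy rather than "No": an unauthorized assignment or a violated constraint is permitted, but only at its stated weight.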
|
Sample efficiency and risk-awareness are central to the development of
practical reinforcement learning (RL) for complex decision-making. The former
can be addressed by transfer learning and the latter by optimizing some utility
function of the return. However, the problem of transferring skills in a
risk-aware manner is not well-understood. In this paper, we address the problem
of risk-aware policy transfer between tasks in a common domain that differ only
in their reward functions, in which risk is measured by the variance of reward
streams. Our approach begins by extending the idea of generalized policy improvement to maximize entropic utilities, thus generalizing policy improvement via dynamic programming to sets of policies and levels of risk-aversion. Next,
we extend the idea of successor features (SF), a value function representation
that decouples the environment dynamics from the rewards, to capture the
variance of returns. Our resulting risk-aware successor features (RaSF)
integrate seamlessly within the RL framework, inherit the superior task
generalization ability of SFs, and incorporate risk-awareness into the
decision-making. Experiments on a discrete navigation domain and control of a
simulated robotic arm demonstrate the ability of RaSFs to outperform
alternative methods including SFs, when taking the risk of the learned policies
into account.
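The core action-selection rule admits a compact sketch. The following is a minimal illustration, not the paper's implementation: per-policy return means and variances (Q, V) and the risk-aversion level beta are placeholder inputs, and the entropic utility is approximated by its mean-variance expansion.

```python
# Minimal sketch of risk-aware generalized policy improvement (GPI): act
# greedily w.r.t. the best mean-variance utility across a policy library.
import numpy as np

def risk_aware_gpi_action(Q, V, state, beta):
    """Q, V: arrays of shape (n_policies, n_states, n_actions)."""
    # Entropic utility ~ mean - (beta/2) * variance for moderate beta.
    utility = Q[:, state, :] - 0.5 * beta * V[:, state, :]
    best_over_policies = utility.max(axis=0)   # GPI: max over the library
    return int(best_over_policies.argmax())    # greedy, risk-adjusted action

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 5, 2))     # 3 source policies, 5 states, 2 actions
V = rng.uniform(0.1, 1.0, size=(3, 5, 2))
print(risk_aware_gpi_action(Q, V, state=2, beta=1.0))
```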
|
Using Langevin dynamics simulations, we study the hysteresis in the unzipping of longer double-stranded DNA chains, one end of which is subjected to a time-dependent periodic force with frequency $\omega$ and amplitude $G$ while the other end is kept fixed. We find that the area of the hysteresis loop, $A_{loop}$, scales as $1/\omega$ at higher frequencies, whereas it scales as $(G-G_c)^{\alpha}\omega^{\beta}$ with exponents $\alpha=1$ and $\beta=1.25$ in the low-frequency regime. These values are the same as the exponents obtained in Monte Carlo simulation studies of a directed self-avoiding walk model of a homopolymer DNA [R. Kapri, Phys. Rev. E 90, 062719 (2014)] and of a block copolymer DNA [R. K. Yadav and R. Kapri, Phys. Rev. E 103, 012413 (2021)] on a square lattice, and differ from the values reported earlier in Langevin dynamics simulation studies of much shorter DNA hairpins.
|
In spoken conversational question answering (SCQA), the answer to the
corresponding question is generated by retrieving and then analyzing a fixed
spoken document, including multi-part conversations. Most SCQA systems have
considered only retrieving information from ordered utterances. However, the sequential order of dialogue is important for building a robust spoken conversational question answering system, and changes in utterance order can result in severely degraded, incoherent corpora. To this end, we
introduce a self-supervised learning approach, including incoherence
discrimination, insertion detection, and question prediction, to explicitly
capture the coreference resolution and dialogue coherence among spoken
documents. Specifically, we design a joint learning framework where the
auxiliary self-supervised tasks can enable the pre-trained SCQA systems towards
more coherent and meaningful spoken dialogue learning. We also utilize the
proposed self-supervised learning tasks to capture intra-sentence coherence.
Experimental results demonstrate that our proposed method provides more
coherent, meaningful, and appropriate responses, yielding superior performance
gains compared to the original pre-trained language models. Our method achieves
state-of-the-art results on the Spoken-CoQA dataset.
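The joint objective described above can be sketched compactly. The weighting scheme, the loss callables, and the values below are placeholders, not the paper's exact heads or hyperparameters; the sketch only shows how the main QA loss and the three auxiliary self-supervised losses share gradients.

```python
# Sketch of the joint training objective: QA loss plus three auxiliary
# self-supervised losses (incoherence discrimination, insertion detection,
# question prediction). Weights and dummy losses are illustrative.
import torch

def joint_loss(qa_loss, incoherence_loss, insertion_loss, question_pred_loss,
               w1=0.1, w2=0.1, w3=0.1):
    # Auxiliary tasks regularize the shared encoder toward dialogue coherence.
    return qa_loss + w1 * incoherence_loss + w2 * insertion_loss \
           + w3 * question_pred_loss

# Dummy scalar losses standing in for the four task heads.
losses = [torch.tensor(x, requires_grad=True) for x in (2.3, 0.9, 1.1, 0.7)]
total = joint_loss(*losses)
total.backward()  # in the real model, gradients flow into the shared encoder
print(total.item())
```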
|
Chalcogenide phase change materials (PCMs) have been extensively applied in
data storage, and they are now being proposed for high resolution displays,
holographic displays, reprogrammable photonics, and all-optical neural
networks. These wide-ranging applications all exploit the radical property contrast between the PCMs' different structural phases, as well as their extremely fast switching speed, long-term stability, and low energy consumption. Designing PCM photonic
devices requires an accurate model to predict the response of the device during
phase transitions. Here, we describe an approach that accurately predicts the
microstructure and optical response of phase change materials during laser
induced heating. The framework couples the Gillespie Cellular Automata approach
for modelling phase transitions with effective medium theory and Fresnel
equations. The accuracy of the approach is verified by comparing the PCM
optical response and microstructure evolution with the results of nanosecond
laser switching experiments. We anticipate that this approach to simulating the switching response of PCMs will become an important component for designing and simulating programmable photonic devices. The method is particularly valuable for predicting the multi-level optical response of PCMs, which is important for all-optical neural networks and PCM-programmable perceptrons.
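The optical half of this coupling can be sketched as follows: a Maxwell-Garnett effective medium mixes the amorphous and crystalline permittivities according to the crystalline fraction delivered by the cellular automaton, and the Fresnel equations convert the effective index into reflectance. The permittivity values below are illustrative placeholders, not measured PCM data.

```python
# Minimal sketch: effective medium theory + normal-incidence Fresnel
# reflectance, mapping a crystalline fraction f to an optical response.
import numpy as np

def maxwell_garnett(eps_host, eps_incl, f):
    """Effective permittivity for inclusion fraction f in a host medium."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

def reflectance(eps_eff):
    n = np.sqrt(eps_eff)                   # complex refractive index
    r = (1 - n) / (1 + n)                  # normal incidence, from air
    return abs(r) ** 2

eps_amorphous = 16.0 + 1.0j                # placeholder permittivities
eps_crystalline = 36.0 + 20.0j
for f in (0.0, 0.5, 1.0):                  # crystalline fraction from the CA
    print(f, reflectance(maxwell_garnett(eps_amorphous, eps_crystalline, f)))
```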
|
We present an automatic COVID-19 diagnosis framework based on lung CT-scan slice images. In this framework, the slice images of a CT-scan volume are first preprocessed using segmentation techniques to filter out images of closed lungs and to remove the useless background. Then a resampling method is used to select one or multiple sets of a fixed number of slice images for training and validation. A 3D CNN with BERT is used to classify this set of selected slice images. In this network, an embedding feature is also extracted. In cases where there is more than one set of slice images in a volume, the features of
all sets are extracted and pooled into a global feature vector for the whole
CT-scan volume. A simple multiple-layer perceptron (MLP) network is used to
further classify the aggregated feature vector. The models are trained and
evaluated on the provided training and validation datasets. On the validation
dataset, the accuracy is 0.9278 and the F1 score is 0.9261.
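The volume-level aggregation step admits a simple sketch. The shapes, the mean-pooling choice, and the tiny MLP below are assumptions for illustration, standing in for the embedding extracted by the 3D CNN+BERT classifier.

```python
# Sketch of the set-level aggregation: pool per-set embeddings into one
# global feature for the CT volume, then classify with a small MLP.
import numpy as np

def pool_volume_features(set_features):
    """set_features: (n_sets, feat_dim) embeddings, one per slice set."""
    return set_features.mean(axis=0)        # global feature for the volume

def mlp_classify(feature, W1, b1, W2, b2):
    h = np.maximum(0.0, feature @ W1 + b1)  # one hidden ReLU layer
    logits = h @ W2 + b2
    return logits.argmax()

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 128))           # 3 slice sets per CT volume
W1, b1 = rng.normal(size=(128, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 2)), np.zeros(2)
print(mlp_classify(pool_volume_features(feats), W1, b1, W2, b2))
```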
|
We review the status of the anomalies in $b\to s \ell\ell$ transitions, and
comment on the impact of the most recent measurements in 2019 and 2020 on the
global fits. We also discuss a few developments in the theory calculation of
local and non-local form factors.
|
In this paper, we introduce the new task of controllable text edition, in
which we take as input a long text, a question, and a target answer, and the
output is a minimally modified text, so that it fits the target answer. This
task is very important in many situations, such as changing some conditions,
consequences, or properties in a legal document, or changing some key
information of an event in a news text. This is very challenging, as it is hard
to obtain a parallel corpus for training, and we need to first find all text
positions that should be changed and then decide how to change them. We construct a new dataset, WikiBioCTE, for this task based on the existing dataset WikiBio (originally created for table-to-text generation). We use WikiBioCTE for training and manually label a test set for testing. We also
propose novel evaluation metrics and a novel method for solving the new task.
Experimental results on the test set show that our proposed method is a good
fit for this novel NLP task.
|
A mixture preorder is a preorder on a mixture space (such as a convex set)
that is compatible with the mixing operation. In decision theoretic terms, it
satisfies the central expected utility axiom of strong independence. We
consider when a mixture preorder has a multi-representation that consists of
real-valued, mixture-preserving functions. If it does, it must satisfy the
mixture continuity axiom of Herstein and Milnor (1953). Mixture continuity is
sufficient for a mixture-preserving multi-representation when the dimension of
the mixture space is countable, but not when it is uncountable. Our strongest
positive result is that mixture continuity is sufficient in conjunction with a
novel axiom we call countable domination, which constrains the order complexity
of the mixture preorder in terms of its Archimedean structure. We also consider
what happens when the mixture space is given its natural weak topology.
Continuity (having closed upper and lower sets) and closedness (having a closed
graph) are stronger than mixture continuity. We show that continuity is
necessary but not sufficient for a mixture preorder to have a
mixture-preserving multi-representation. Closedness is also necessary; we leave
it as an open question whether it is sufficient. We end with results concerning
the existence of mixture-preserving multi-representations that consist entirely
of strictly increasing functions, and a uniqueness result.
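For readers unfamiliar with the terminology, the following standard definitions (stated in our own notation, not quoted from the paper) pin down the two central objects:

```latex
% u : X \to \mathbb{R} on a mixture space X is mixture-preserving if
u\bigl(p\,x + (1-p)\,y\bigr) \;=\; p\,u(x) + (1-p)\,u(y)
\qquad \text{for all } x, y \in X,\ p \in [0,1];
% a set \mathcal{U} of such functions multi-represents the preorder when
x \precsim y \;\iff\; u(x) \le u(y) \ \text{ for every } u \in \mathcal{U}.
```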
|
Widespread processes in nature and technology are governed by the dynamical
transition whereby a material in an initially solid-like state then yields
plastically. Major unresolved questions concern whether any material will yield
smoothly and gradually (ductile behaviour) or fail abruptly and
catastrophically (brittle behaviour); the roles of sample annealing, disorder
and shear band formation in the onset of yielding and failure; and, most
importantly from a practical viewpoint, whether any impending catastrophic
failure can be anticipated before it happens. We address these questions by
studying the yielding of slowly sheared athermal amorphous materials, within a
minimal mesoscopic lattice elastoplastic model. Our contributions are fourfold.
First, we elucidate whether yielding will be ductile or brittle, for any given
level of sample annealing. Second, we show that yielding comprises two distinct
stages: a pre-failure stage, in which small levels of strain heterogeneity
slowly accumulate, followed by a catastrophic brittle failure event, in which a
crack quickly propagates across the sample via a cooperating line of plastic
events. Third, we provide an expression for the slowly growing level of strain
heterogeneity in the pre-failure stage, expressed in terms of the macroscopic
stress-strain curve and the sample size, and in excellent agreement with our
simulation results. Fourth, we elucidate the basic mechanism via which a crack
then nucleates and provide an approximate expression for the probability
distribution of shear strains at which failure occurs, as determined by the
disorder inherent in the sample, expressed in terms of a single annealing
parameter, and the system size. Importantly, this indicates a possible route to
predicting sudden material failure, before it occurs.
|
When a three-dimensional material is constructed by stacking different
two-dimensional layers into an ordered structure, new and unique physical
properties can emerge. An example is the delafossite PdCoO2, which consists of
alternating layers of metallic Pd and Mott-insulating CoO2 sheets. To
understand the nature of the electronic coupling between the layers that gives rise to the unique properties of PdCoO2, we reveal its layer-resolved electronic structure by combining standing-wave X-ray photoemission spectroscopy and ab initio many-body calculations. Experimentally, we have decomposed the
measured valence band spectrum into contributions from Pd and CoO2 layers.
Computationally, we find that many-body interactions in Pd and CoO2 layers are
highly different. Holes in the CoO2 layer interact strongly with
charge-transfer excitons in the same layer, whereas holes in the Pd layer
couple to plasmons in the Pd layer. Interestingly, we find that holes in states
hybridized across both layers couple to both types of excitations
(charge-transfer excitons or plasmons), with the intensity of photoemission
satellites being proportional to the projection of the state onto a given
layer. This establishes satellites as a sensitive probe for inter-layer
hybridization. These findings pave the way towards a better understanding of
complex many-electron interactions in layered quantum materials.
|
Recent query explanation systems help users understand anomalies in
aggregation results by proposing predicates that describe input records that,
if deleted, would resolve the anomalies. However, it can be difficult for users
to understand how a predicate was chosen, and these approaches are limited to
errors that can be resolved through deletion. In contrast, data errors may be
due to group-wise errors, such as missing records or systematic value errors.
This paper presents Reptile, an explanation system for hierarchical data. Given
an anomalous aggregate query result, Reptile recommends the next drill-down attribute, and ranks the drill-down groups based on the extent to which repairing the group's statistics to their expected values resolves the anomaly. Reptile
efficiently trains a multi-level model that leverages the data's hierarchy to
estimate the expected values, and uses a factorised representation of the
feature matrix to remove redundancies due to the data's hierarchical structure.
We further extend model training to support factorised data, and develop a
suite of optimizations that leverage the data's hierarchical structure. Reptile
reduces end-to-end runtimes by more than a factor of 6 compared to a Matlab-based implementation, correctly identifies 21/30 data errors in Johns Hopkins' COVID-19 data, and correctly resolves 20/22 complaints in a user study using
data and researchers from Columbia University's Financial Instruments Sector
Team.
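The ranking step can be sketched in a few lines. This is an illustrative reading of the idea, not Reptile's actual scoring function: the anomaly measure (distance of the aggregate from its expected value) and the toy data are assumptions.

```python
# Sketch: score each candidate drill-down group by how much replacing its
# statistic with the model's expected value shrinks the aggregate anomaly.
def rank_drilldown_groups(group_stats, expected_stats, agg_expected):
    """group_stats/expected_stats: {group: value}; aggregate is their sum."""
    agg_actual = sum(group_stats.values())
    base_anomaly = abs(agg_actual - agg_expected)
    scores = {}
    for g, value in group_stats.items():
        repaired = agg_actual - value + expected_stats[g]  # repair one group
        scores[g] = base_anomaly - abs(repaired - agg_expected)
    return sorted(scores, key=scores.get, reverse=True)    # best repair first

stats = {"county_A": 120, "county_B": 40, "county_C": 55}
expected = {"county_A": 60, "county_B": 42, "county_C": 50}
print(rank_drilldown_groups(stats, expected, agg_expected=150))
```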
|
We designed Si-based all-dielectric 1 $\times$ 2 TE and TM power splitters
with various splitting ratios and simulated them using the inverse design of
adjoint and numerical 3D finite-difference time-domain methods. The proposed
devices exhibit ultra-high transmission efficiencies above 98\% and 99\%, and excess losses below 0.1 and 0.035 dB, for the TE and TM splitters, respectively. The merits of these devices include a small footprint of 2.2 $\times$ 2.2 $\mu$m$^2$ and a broad operating bandwidth of 200 nm centered at a wavelength of $\lambda$ = 1.55 $\mu$m.
|
Numerical continuation in the context of optimization can be used to mitigate
convergence issues due to a poor initial guess. In this work, we extend this
idea to Riemannian optimization problems, that is, the minimization of a target
function on a Riemannian manifold. For this purpose, a suitable homotopy is
constructed between the original problem and a problem that admits an easy
solution. We develop and analyze a path-following numerical continuation
algorithm on manifolds for solving the resulting parameter-dependent equation.
To illustrate our developments, we consider two typical classical applications
of Riemannian optimization: the computation of the Karcher mean and low-rank
matrix completion. We demonstrate that numerical continuation can yield
improvements for challenging instances of both problems.
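The continuation idea is easy to state in a Euclidean toy setting (the paper's contribution is carrying it out on Riemannian manifolds): connect an easy objective $g$ to the target $f$ via the homotopy $H(x,t) = (1-t)\,g(x) + t\,f(x)$ and track a minimizer as $t$ goes from 0 to 1. The objectives, step sizes, and iteration counts below are illustrative.

```python
# Euclidean toy sketch of numerical continuation for optimization.
import numpy as np

def grad_descent(grad, x, iters=200, lr=0.05):
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

f_grad = lambda x: 4 * x**3 - 4 * x          # f(x) = x^4 - 2x^2 (nonconvex)
g_grad = lambda x: 2 * (x - 2.0)             # g(x) = (x - 2)^2 (easy)

x = grad_descent(g_grad, np.array(0.0))      # solve the easy problem first
for t in np.linspace(0.0, 1.0, 11):          # follow the solution path
    H_grad = lambda x, t=t: (1 - t) * g_grad(x) + t * f_grad(x)
    x = grad_descent(H_grad, x)
print(x)   # ends near the minimizer x = 1 of f on this branch
```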
|
Iso-edge domains are a variant of the iso-Delaunay decomposition introduced
by Voronoi. They were introduced by Baranovskii & Ryshkov in order to solve the
covering problem in dimension $5$.
In this work we revisit this decomposition and prove the following new
results:
$\bullet$ We review the existing theory and give a general mass-formula for
the iso-edge domains.
$\bullet$ We prove that the associated toroidal compactification of the
moduli space of principally polarized abelian varieties is projective.
$\bullet$ We prove the Conway--Sloane conjecture in dimension $5$.
$\bullet$ We prove that the quadratic forms for which the conorms are
non-negative are exactly the matroidal ones in dimension $5$.
|
In this paper we study exact boundary controllability for a linear wave
equation with strong and weak interior degeneration of the coefficient in the principal part of the elliptic operator. The objective is to provide a
well-posedness analysis of the corresponding system and derive conditions for
its controllability through boundary actions. Passing to a relaxed version of
the original problem, we discuss existence and uniqueness of solutions, and
using the HUM method we derive conditions on the rate of degeneracy for both
exact boundary controllability and the lack thereof.
|
Modularity is a quantity which has been introduced in the context of complex
networks in order to quantify how close a network is to an ideal modular
network in which the nodes form small interconnected communities that are
joined together with relatively few edges. In this paper, we consider this
quantity on a recent probabilistic model of complex networks introduced by
Krioukov et al. (Phys. Rev. E 2010).
This model views a complex network as an expression of hidden hierarchies,
encapsulated by an underlying hyperbolic space. For certain parameters, this
model was proved to have typical features that are observed in complex networks
such as power law degree distribution, bounded average degree, clustering
coefficient that is asymptotically bounded away from zero, and ultra-small
typical distances. In the present work, we investigate its modularity and we
show that, in this regime, it converges to 1 in probability.
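For reference, the standard (Newman) notion of modularity presumably meant here is, in the usual notation,

```latex
Q \;=\; \max_{\{c_i\}}\; \frac{1}{2m} \sum_{i,j}
\left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j),
```

where $A$ is the adjacency matrix, $k_i$ the degree of node $i$, $m$ the number of edges, and the maximum runs over partitions of the nodes into communities $\{c_i\}$; the result above states that this quantity tends to 1 in probability in the regime considered.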
|
Setting an effective reserve price for strategic bidders in repeated auctions
is a central question in online advertising. In this paper, we investigate how
to set an anonymous reserve price in repeated auctions based on historical bids
in a way that balances revenue and incentives to misreport. We propose two
simple and computationally efficient methods to set reserve prices based on the
notion of a clearing price and make them robust to bidder misreports. The first
approach adds random noise to the reserve price, drawing on techniques from
differential privacy. The second method applies a smoothing technique by adding
noise to the training bids used to compute the reserve price. We provide
theoretical guarantees on the trade-offs between the revenue performance and
bid-shading incentives of these two mechanisms. Finally, we empirically
evaluate our mechanisms on synthetic data to validate our theoretical findings.
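The first mechanism admits a short sketch. The choice of a bid quantile as the clearing-price proxy and the noise scale below are illustrative assumptions; only the overall shape (clearing price plus Laplace noise, as in differential privacy) reflects the description above.

```python
# Sketch: compute a clearing-style reserve from historical bids and add
# calibrated Laplace noise to blunt the incentive to misreport.
import numpy as np

def noisy_reserve(bids, epsilon, sensitivity, quantile=0.5, seed=None):
    rng = np.random.default_rng(seed)
    clearing = np.quantile(bids, quantile)        # clearing-price proxy
    noise = rng.laplace(scale=sensitivity / epsilon)
    return max(0.0, clearing + noise)             # keep the reserve valid

bids = np.array([1.2, 0.8, 2.5, 1.9, 1.4])
print(noisy_reserve(bids, epsilon=1.0, sensitivity=0.5, seed=0))
```

A smaller epsilon adds more noise, weakening any single bidder's influence on the reserve at some cost in revenue; this is the trade-off the theoretical guarantees quantify.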
|
This paper explores the genotype-phenotype relationship. It outlines conditions under which the dependence of a quantitative trait on the genome might be predictable, based on measurements of a limited subset of genotypes. It uses
the theory of real-valued Boolean functions in a systematic way to translate
trait data into the Fourier domain. Important trait features, such as the
roughness of the trait landscape or the modularity of a trait have a simple
Fourier interpretation. Roughness at a gene location corresponds to high
sensitivity to mutation, while a modular organization of gene activity reduces
such sensitivity.
Traits where rugged loci are rare will naturally compress gene data in the
Fourier domain, leading to a sparse representation of trait data, concentrated
in identifiable, low-level coefficients. This Fourier representation of a trait
organizes epistasis in a form which is isometric to the trait data. As Fourier
matrices are known to be maximally incoherent with the standard basis, this
permits employing compressive sensing techniques to work from data sets that
are relatively small -- sometimes even polynomial -- compared to the
exponentially large sets of possible genomes.
This theory provides a theoretical underpinning for systematic use of Boolean
function machinery to dissect the dependency of a trait on the genome and
environment.
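The Fourier machinery referenced above is standard and small enough to sketch: a real-valued function on $\{0,1\}^n$ is expanded in the parity (Walsh) basis, and sparse, low-order coefficients correspond to smooth, modular traits. The two-gene "trait" below is an illustrative stand-in.

```python
# Sketch: Fourier (Walsh) expansion of a real-valued Boolean function.
from itertools import product

def fourier_coefficients(f, n):
    """f maps a tuple in {0,1}^n to a real trait value."""
    coeffs = {}
    for S in product([0, 1], repeat=n):            # S indexes a parity set
        total = 0.0
        for x in product([0, 1], repeat=n):
            parity = sum(s * xi for s, xi in zip(S, x)) % 2
            total += f(x) * (-1) ** parity         # character chi_S(x)
        coeffs[S] = total / 2 ** n                 # \hat{f}(S)
    return coeffs

trait = lambda x: 1.0 + 0.5 * x[0] - 0.3 * x[0] * x[1]  # mild epistasis
for S, c in fourier_coefficients(trait, 2).items():
    print(S, round(c, 3))
```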
|
One of the open questions in the field of interaction design is "what inputs
or interaction techniques should be used with augmented reality devices?" The
transition from a touchpad and a keyboard to a multi-touch device was
relatively small. The transition from a multi-touch device to an HMD with no
controllers or clear surface to interact with is more complicated. This book is
a guide for how to figure out what interaction techniques and modalities people
prefer when interacting with those devices. The name of the technique covered
here is Elicitation. Elicitation is a form of participatory design, meaning
design with direct end-user involvement. By running an elicitation study
researchers can observe unconstrained human interactions with emerging
technologies to help guide input design.
|
We use a sample of 14 massive, dynamically relaxed galaxy clusters to
constrain the Hubble Constant, $H_0$, by combining X-ray and Sunyaev-Zel'dovich
(SZ) effect signals measured with Chandra, Planck and Bolocam. This is the
first such analysis to marginalize over an empirical, data-driven prior on the
overall accuracy of X-ray temperature measurements, while our restriction to
the most relaxed, massive clusters also minimizes astrophysical systematics.
For a cosmological-constant model with $\Omega_m = 0.3$ and $\Omega_{\Lambda} =
0.7$, we find $H_0 = 67.3^{+21.3}_{-13.3}$ km/s/Mpc, limited by the temperature
calibration uncertainty (compared to the statistically limited constraint of
$H_0 = 72.3^{+7.6}_{-7.6}$ km/s/Mpc). The intrinsic scatter in the X-ray/SZ
pressure ratio is found to be $13 \pm 4$ per cent ($10 \pm 3$ per cent when two
clusters with significant galactic dust emission are removed from the sample),
consistent with being primarily due to triaxiality and projection. We discuss
the prospects for reducing the dominant systematic limitation to this analysis,
with improved X-ray calibration and/or precise measurements of the relativistic
SZ effect providing a plausible route to per cent level constraints on $H_0$.
|
Understanding the fracture toughness of glasses is of prime importance for
science and technology. We study it here using extensive atomistic simulations
in which the interaction potential, glass transition cooling rate and loading
geometry are systematically varied, mimicking a broad range of experimentally
accessible properties. Glasses' nonequilibrium mechanical disorder is
quantified through $A_{\rm g}$, the dimensionless prefactor of the universal
spectrum of nonphononic excitations, which measures the abundance of soft
glassy defects that affect plastic deformability. We show that while a
brittle-to-ductile transition might be induced by reducing the cooling rate,
leading to a reduction in $A_{\rm g}$, iso-$\!A_{\rm g}$ glasses are either
brittle or ductile depending on the degree of Poisson contraction under
unconstrained uniaxial tension. Eliminating Poisson contraction using
constrained tension reveals that iso-$\!A_{\rm g}$ glasses feature similar
toughness, and that varying $A_{\rm g}$ under these conditions results in
significant toughness variation. Our results highlight the roles played by both
soft defects and loading geometry (which affects the activation of defects) in
the toughness of glasses.
|
We consider a multiuser diffusion-based molecular communication (MC) system
where multiple spatially distributed transmitter (TX)-receiver (RX) pairs
establish point-to-point communication links employing the same type of
signaling molecules. To realize the full potential of such a system, an
in-depth understanding of the interplay between the spatial user density and
inter-user interference (IUI) and its impact on system performance in an
asymptotic regime with large numbers of users is needed. In this paper, we
adopt a three-dimensional (3-D) system model with multiple independent and spatially distributed point-to-point transmission links, where the TXs and the RXs are each positioned according to a regular hexagonal grid. Based
on this model, we first derive an expression for the channel impulse responses
(CIRs) of all TX-RX links in the system. Then, we provide the maximum
likelihood (ML) decision rule for the RXs and show that it reduces to a
threshold-based detector. We derive an analytical expression for the
corresponding detection threshold which depends on the statistics of the MC
channel and the statistics of the IUI. Furthermore, we derive an analytical
expression for the bit error rate (BER) and the achievable rate of a single
transmission link. Finally, we propose a new performance metric, which we refer
to as area rate efficiency (ARE), that captures the tradeoff between the user
density and IUI. The ARE characterizes how efficiently given TX and RX areas
are used for information transmission and is given in terms of bits per area
unit. We show that there exists an optimal user density for maximization of the
ARE. Results from particle-based and Monte Carlo simulations validate the
accuracy of the expressions derived for the CIR, optimal detection threshold,
BER, and ARE.
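The threshold detector and its BER admit a quick Monte Carlo sketch. The Poisson count model and the signal/interference means below are illustrative placeholders, not the paper's derived channel and IUI statistics; only the structure (count molecules, compare to a threshold) mirrors the ML rule above.

```python
# Sketch: Monte Carlo BER of a threshold-based detector for on-off signaling
# with molecule counts modeled as Poisson(signal + interference).
import numpy as np

def ber_threshold_detector(mu_signal, mu_interference, tau, n_bits=100_000,
                           seed=0):
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    counts = rng.poisson(bits * mu_signal + mu_interference)
    decisions = (counts >= tau).astype(int)       # ML rule reduces to this
    return np.mean(decisions != bits)

for tau in (5, 8, 11):                            # sweep the threshold
    print(tau, ber_threshold_detector(mu_signal=12.0, mu_interference=3.0,
                                      tau=tau))
```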
|
Intermediate mass ratio inspiral (IMRI) binaries -- containing stellar-mass
black holes coalescing into intermediate-mass black holes ($M>100M_{\odot}$) --
are a highly anticipated source of gravitational waves (GWs) for Advanced
LIGO/Virgo. Their detection and source characterization would provide a unique
probe of strong-field gravity and stellar evolution. Due to the asymmetric
component masses and the large primary, these systems generically excite
subdominant modes while reducing the importance of the dominant quadrupole
mode. Including higher order harmonics can also result in a $10\%-25\%$
increase in signal-to-noise ratio for IMRIs, which may help to detect these
systems. We show that by including subdominant GW modes into the analysis we
can achieve a precise characterization of IMRI source properties. For example,
we find that the source properties for IMRIs can be measured to within
$2\%-15\%$ accuracy at a fiducial signal-to-noise ratio of 25 if subdominant
modes are included. When subdominant modes are neglected, the accuracy degrades
to $9\%-44\%$ and significant biases are seen in chirp mass, mass ratio,
primary spin and luminosity distances. We further demonstrate that including
subdominant modes in the waveform model can enable an informative measurement
of both individual spin components and improve the source localization by a
factor of $\sim$10. We discuss some important astrophysical implications of
high-precision source characterization enabled by subdominant modes such as
constraining the mass gap and probing formation channels.
|
In this paper, we report about a large-scale online discussion with 1099
citizens on the Afghanistan Sustainable Development Goals.
|
We carry out a spectral analysis of the massive Dirac operator in a tubular neighborhood of an unbounded planar curve, subject to infinite mass boundary
conditions. Under general assumptions on the curvature, we locate the essential
spectrum and derive an effective Hamiltonian on the base curve which
approximates the original operator in the thin-strip limit. We also investigate
the existence of bound states in the non-relativistic limit and give a
geometric quantitative condition for the bound states to exist.
|
Current neuroimaging techniques provide paths to investigate the structure
and function of the brain in vivo and have made great advances in understanding
Alzheimer's disease (AD). However, the group-level analyses prevalently used
for investigation and understanding of the disease are not applicable for
diagnosis of individuals. More recently, deep learning, which can efficiently
analyze large-scale complex patterns in 3D brain images, has helped pave the
way for computer-aided individual diagnosis by providing accurate and automated
disease classification. Great progress has been made in classifying AD with
deep learning models developed upon increasingly available structural MRI data.
The lack of scale-matched functional neuroimaging data prevents such models
from being further improved by observing functional changes in pathophysiology.
Here we propose a potential solution by first learning a
structural-to-functional transformation in brain MRI, and further synthesizing
spatially matched functional images from large-scale structural scans. We
evaluated our approach by building computational models to discriminate
patients with AD from healthy normal subjects and demonstrated a performance
boost after combining the structural and synthesized functional brain images
into the same model. Furthermore, our regional analyses identified the temporal lobe as the most predictive structural region and the parieto-occipital lobe as the most predictive functional region of our model, both of which are in
concordance with previous group-level neuroimaging findings. Together, we
demonstrate the potential of deep learning with large-scale structural and
synthesized functional MRI to impact AD classification and to identify AD's
neuroimaging signatures.
|
Since most industrial control applications use PID controllers, PID tuning
and anti-windup measures are significant problems. This paper investigates
tuning the feedback gains of a PID controller via back-calculation and
automatic differentiation tools. In particular, we episodically use a cost
function to generate gradients and perform gradient descent to improve
controller performance. We provide a theoretical framework for analyzing this
non-convex optimization and establish a relationship between back-calculation
and disturbance feedback policies. We include numerical experiments on linear
systems with actuator saturation to show the efficacy of this approach.
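For concreteness, here is a minimal sketch of the controller structure whose gains such a procedure would tune: a discrete-time PID with back-calculation anti-windup, closed around a toy first-order plant. The gains, saturation limits, and back-calculation coefficient are illustrative.

```python
# Sketch: PID with back-calculation anti-windup on a first-order plant.
class PID:
    def __init__(self, kp, ki, kd, kb, u_min, u_max, dt):
        self.kp, self.ki, self.kd, self.kb = kp, ki, kd, kb
        self.u_min, self.u_max, self.dt = u_min, u_max, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err):
        deriv = (err - self.prev_err) / self.dt
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        u_sat = min(max(u, self.u_min), self.u_max)   # actuator saturation
        # Back-calculation: bleed the integrator by the saturation excess.
        self.integral += (err + self.kb * (u_sat - u)) * self.dt
        self.prev_err = err
        return u_sat

pid = PID(kp=2.0, ki=1.0, kd=0.1, kb=0.5, u_min=-1.0, u_max=1.0, dt=0.01)
x, setpoint = 0.0, 1.0
for _ in range(500):                  # simple first-order plant: x' = -x + u
    u = pid.step(setpoint - x)
    x += (-x + u) * pid.dt
print(round(x, 3))
```

In the gradient-based tuning described above, an episode's cost would be differentiated through exactly such a rollout to update kp, ki, kd (and possibly kb).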
|
Risk is traditionally described as the expected likelihood of an undesirable
outcome, such as collisions for autonomous vehicles. Accurately predicting risk
or potentially risky situations is critical for the safe operation of
autonomous vehicles. In our previous work, we showed that risk could be
characterized by two components: 1) the probability of an undesirable outcome
and 2) an estimate of how undesirable the outcome is (loss). This paper is an
extension to our previous work. In this paper, using our trained deep
reinforcement learning model for navigating around crowds, we developed a
risk-based decision-making framework for the autonomous vehicle that integrates
the high-level risk-based path planning with the reinforcement learning-based
low-level control. We evaluated our method in the high-fidelity CARLA simulator. This work can improve safety by allowing an autonomous vehicle to one day avoid and react to risky situations.
|
Inelastic Cooper pair tunneling across a voltage-biased Josephson junction in
series with one or more microwave cavities can generate photons via resonant
processes in which the energy lost by the Cooper pair matches that of the
photon(s) produced. We generalise previous theoretical treatments of such
systems to analyse cases where two or more different photon generation
processes are resonant simultaneously. We also explore in detail a specific
case where generation of a single photon in one cavity mode is simultaneously
resonant with the generation of two photons in a second mode. We find that the
coexistence of the two resonances leads to effective couplings between the
modes which in turn generate entanglement.
|
Quantile regression is a field with steadily growing importance in
statistical modeling. It is a complementary method to linear regression, since
computing a range of conditional quantile functions provides a more accurate
modelling of the stochastic relationship among variables, especially in the
tails. We introduce a non-restrictive and highly flexible nonparametric
quantile regression approach based on C- and D-vine copulas. Vine copulas allow
for separate modeling of marginal distributions and the dependence structure in
the data, and can be expressed through a graph theoretical model given by a
sequence of trees. In this way we obtain a quantile regression model that overcomes typical issues of quantile regression, such as quantile crossings, collinearity, and the need for transformations and interactions of variables. Our
approach incorporates a two-step ahead ordering of variables, by maximizing the
conditional log-likelihood of the tree sequence, while taking into account the
next two tree levels. Further, we show that the nonparametric conditional
quantile estimator is consistent. The performance of the proposed methods is
evaluated in both low- and high-dimensional settings using simulated and real
world data. The results support the superior prediction ability of the proposed
models.
|
We revisit the problem of local normality of Kraus-Polley-Reents infravacuum
representations and provide a straightforward proof based on the Araki-Yamagami
criterion. We apply this result to the theory of superselection sectors.
Namely, we extend the novel formalism of second conjugate classes and relative
normalizers to the local relativistic setting.
|
In this essay we give a general picture of the evolution of Grothendieck's ideas regarding the notion of space. Starting with his fundamental work in
algebraic geometry, where he introduces schemes and toposes as generalizations
of classical notions of spaces, passing through tame topology and ending with
the formulation of a geometry of forms, we show how the ideas of Grothendieck
evolved from pure mathematical considerations to physical and philosophical
questions about the nature and structure of space and its mathematical models.
|
Manipulating and cooling small particles with light are long-standing
challenges in many areas of science, from the foundations of physics to
applications in biology and nano-technology. Light fields can, in particular,
be used to isolate mesoscopic particles from their environment by levitating
them optically. These levitated particles of micron size and smaller exhibit
pristine mechanical resonances and can be cooled down to their motional quantum
ground state. Significant roadblocks on the way to scale up levitation from a
single to multiple particles in close proximity are the requirements to
constantly monitor the particles' positions as well as to engineer light fields
that react fast and appropriately to their displacements. Given the complexity of light scattering between particles, each of these two challenges currently seems insurmountable in itself. Here, we present an approach that
solves both problems at once by forgoing any local information on the
particles. Instead, our procedure is based on the far-field information stored
in the scattering matrix and its changes with time. We demonstrate how to
compose from these ingredients a linear energy-shift operator, whose maximal or
minimal eigenstates are identified as the incoming wavefronts that implement
the most efficient heating or cooling of a moving ensemble of
arbitrarily-shaped levitated particles, respectively. We expect this optimal
approach to be a game-changer for the collective manipulation of multiple
particles on-the-fly, i.e., without the necessity to track them. An
experimental implementation is suggested based on stroboscopic scattering
matrix measurements and a time-adaptive injection of the optimal light fields.
|
We study the utility of the lexical translation model (IBM Model 1) for
English text retrieval, in particular, its neural variants that are trained
end-to-end. We use the neural Model 1 as an aggregator layer applied to
context-free or contextualized query/document embeddings. This new approach to
design a neural ranking system has benefits for effectiveness, efficiency, and
interpretability. Specifically, we show that adding an interpretable neural
Model 1 layer on top of BERT-based contextualized embeddings (1) does not
decrease accuracy and/or efficiency; and (2) may overcome the limitation on the
maximum sequence length of existing BERT models. The context-free neural Model
1 is less effective than a BERT-based ranking model, but it can run efficiently
on a CPU (without expensive index-time precomputation or query-time operations
on large tensors). Using Model 1, we produced the best neural and non-neural runs on
the MS MARCO document ranking leaderboard in late 2020.
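For readers unfamiliar with IBM Model 1, here is the classic (non-neural) relevance score that the neural layer generalizes: $P(q \mid D) = \prod_i \frac{1}{|D|} \sum_j T(q_i \mid d_j)$. The toy vocabulary and translation table below are illustrative; the paper's neural variant learns $T$ end-to-end from embeddings rather than from a table.

```python
# Sketch of the classic IBM Model 1 log relevance score with a toy
# translation table T[d_word][q_word] (rows sum to 1 over the query vocab).
import math

T = {
    "car":  {"car": 0.7, "auto": 0.25, "bank": 0.05},
    "loan": {"car": 0.1, "auto": 0.05, "bank": 0.85},
    "the":  {"car": 0.34, "auto": 0.33, "bank": 0.33},
}

def model1_log_score(query, doc):
    score = 0.0
    for q in query:
        p = sum(T[d].get(q, 1e-9) for d in doc) / len(doc)
        score += math.log(p)
    return score

print(model1_log_score(["auto"], ["the", "car"]))
print(model1_log_score(["auto"], ["the", "loan"]))  # lower: worse match
```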
|
Teaching cases based on stories about real organizations are a powerful means
of storytelling. These cases closely parallel real-world situations and can
deliver on pedagogical objectives as writers can use their creative license to
craft a storyline that better focuses on the specific principles, concepts, and
challenges they want to address in their teaching. The method instigates
critical discussion, draws out relevant experiences from students, encourages
questioning of accepted practices, and creates dialogue between theory and
practice. We present Horizon, a case study of a firm that suffers a
catastrophic incident of Intellectual Property (IP) theft. The case study was
developed to teach information security management (ISM) principles in key
areas such as strategy, risk, policy and training to postgraduate Information
Systems and Information Technology students at the University of Melbourne,
Australia.
|
This article constructs the Shiromizu-Maeda-Sasaki 3-brane in the context of
five-dimensional Einstein-Chern-Simons gravity. We start by considering Israel's junction conditions for Lovelock theory in order to read off the junction conditions for AdS-Chern-Simons gravity. Using the S-expansion procedure, we map the AdS-Chern-Simons junction conditions to Einstein-Chern-Simons
gravity, allowing us to derive effective four-dimensional Einstein-Chern-Simons
field equations.
|
In this paper we consider the existence and stability of multi-spike
solutions to the fractional Gierer-Meinhardt model with periodic boundary
conditions. In particular we rigorously prove the existence of symmetric and
asymmetric two-spike solutions using a Lyapunov-Schmidt reduction. The linear
stability of these two-spike solutions is then rigorously analyzed and found to
be determined by the eigenvalues of a certain $2\times 2$ matrix. Our rigorous
results are complemented by formal calculations of $N$-spike solutions using
the method of matched asymptotic expansions. In addition, we explicitly
consider examples of one- and two-spike solutions for which we numerically
calculate their relevant existence and stability thresholds. By considering a
one-spike solution we determine that the introduction of fractional diffusion
for the activator or inhibitor will respectively destabilize or stabilize a
single spike solution with respect to oscillatory instabilities. Furthermore,
when considering two-spike solutions we find that the range of parameter values
for which asymmetric two-spike solutions exist and for which symmetric
two-spike solutions are stable with respect to competition instabilities is
expanded with the introduction of fractional inhibitor diffusivity. However, our calculations indicate that asymmetric two-spike solutions are always linearly unstable.
|
We present a language model that combines a large parametric neural network
(i.e., a transformer) with a non-parametric episodic memory component in an
integrated architecture. Our model uses extended short-term context by caching
local hidden states -- similar to transformer-XL -- and global long-term memory
by retrieving a set of nearest neighbor tokens at each timestep. We design a
gating function to adaptively combine multiple information sources to make a
prediction. This mechanism allows the model to use either local context,
short-term memory, or long-term memory (or any combination of them) on an ad
hoc basis depending on the context. Experiments on word-based and
character-based language modeling datasets demonstrate the efficacy of our
proposed method compared to strong baselines.
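The gating step can be sketched directly. The gate parameterization (a sigmoid of the hidden state) and the shapes below are assumptions for illustration, not the paper's exact architecture; the point is that a context-dependent scalar interpolates the local-context distribution with one built from retrieved long-term-memory neighbors.

```python
# Sketch: gated interpolation of local-context and retrieved-memory
# next-token distributions.
import torch

def gated_next_token_probs(p_local, p_memory, hidden, w_gate):
    """p_local, p_memory: (vocab,) distributions; hidden: (dim,) state."""
    g = torch.sigmoid(hidden @ w_gate)          # scalar gate in (0, 1)
    return g * p_local + (1 - g) * p_memory     # still a valid distribution

torch.manual_seed(0)
vocab, dim = 100, 16
p_local = torch.softmax(torch.randn(vocab), dim=0)
p_memory = torch.softmax(torch.randn(vocab), dim=0)
hidden, w_gate = torch.randn(dim), torch.randn(dim)
probs = gated_next_token_probs(p_local, p_memory, hidden, w_gate)
print(probs.sum().item())   # ~1.0: convex combination of distributions
```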
|
This paper considers mean field games in a multi-agent Markov decision
process (MDP) framework. Each player has a continuum state and binary action,
and benefits from the improvement of the condition of the overall population.
Based on an infinite horizon discounted individual cost, we show existence of a
stationary equilibrium, and prove its uniqueness under a positive externality
condition. We further analyze comparative statics of the stationary equilibrium
by quantitatively determining the impact of the effort cost.
|
We present a new method, exact in $\alpha'$, to explicitly compute string
tree-level amplitudes involving one massive state and any number of massless
ones. This construction relies on the so-called twisted heterotic string, which
admits only gauge multiplets, a gravitational multiplet, and a single massive
supermultiplet in its spectrum. In this simplified model, we determine the
moduli-space integrand of all amplitudes with one massive state using
Berends-Giele currents of the gauge multiplet. These integrands are then
straightforwardly mapped to gravitational amplitudes in the twisted heterotic
string and to the corresponding massive amplitudes of the conventional type-I
and type-II superstrings.
|
Context: It is not uncommon for a new team member to join an existing Agile
software development team, even after development has started. This new team
member faces a number of challenges before they are integrated into the team
and can contribute productively to team progress. Ideally, each newcomer should
be supported in this transition through an effective team onboarding program,
although prior evidence suggests that this is challenging for many
organisations. Objective: We seek to understand how Agile teams address the
challenge of team onboarding in order to inform future onboarding design.
Method: We conducted an interview survey of eleven participants from eight
organisations to investigate what onboarding activities are common across Agile
software development teams. We also identify common goals of onboarding from a
synthesis of literature. A repertory grid instrument is used to map the
contributions of onboarding techniques to onboarding goals. Results: Our study
reveals that a broad range of team onboarding techniques, both formal and informal, are used in practice. It also shows that particular techniques have high contributions to a given goal or set of goals. Conclusions: In presenting a set of onboarding goals to consider and an evidence-based mechanism for selecting techniques to achieve the desired goals, it is expected that this study will contribute to better-informed onboarding design and
planning. An increase in practitioner awareness of the options for supporting
new team members is also an expected outcome.
|
Immersive viewing is emerging as the next interface evolution for
human-computer interaction. A truly wireless immersive application necessitates
immense data delivery with ultra-low latency, raising stringent requirements
for next-generation wireless networks. A potential solution for addressing
these requirements is through the efficient usage of in-device storage and
computation capabilities. This paper proposes a novel location-based coded
cache placement and delivery scheme, which leverages the nested code modulation
(NCM) to enable multi-rate multicasting transmission. To provide a uniform
quality of experience in different network locations, we formulate a linear
programming cache allocation problem. Next, based on the users' spatial
realizations, we adopt an NCM based coded delivery algorithm to efficiently
serve a distinct group of users during each transmission. Numerical results demonstrate that the proposed location-based delivery method significantly increases transmission efficiency compared to the state of the art.
|
The discovery of the disentanglement properties of the latent space of GANs has motivated a great deal of research into finding semantically meaningful directions in it. In this paper, we suggest that the disentanglement property is closely
related to the geometry of the latent space. In this regard, we propose an
unsupervised method for finding the semantic-factorizing directions on the
intermediate latent space of GANs based on the local geometry. Intuitively, our
proposed method, called Local Basis, finds the principal variation of the
latent space in the neighborhood of the base latent variable. Experimental
results show that the local principal variation corresponds to the semantic
factorization and traversing along it provides strong robustness to image
traversal. Moreover, we suggest an explanation for the limited success in
finding the global traversal directions in the latent space, especially W-space
of StyleGAN2. We show that W-space is warped globally by comparing the local
geometry, discovered from Local Basis, through the metric on Grassmannian
Manifold. The global warpage implies that the latent space is not well-aligned
globally and therefore the global traversal directions are bound to show
limited success on it.
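The local-geometry idea can be sketched as follows: take the SVD of the generator's Jacobian at a base latent code; the top right-singular vectors give the principal local variation directions. The tiny generator below is a stand-in for StyleGAN2's mapping, and this sketch reflects our reading of the method rather than the authors' exact code.

```python
# Sketch: local basis via SVD of the generator Jacobian at a base latent w.
import torch

def local_basis(generator, w, k=3):
    J = torch.autograd.functional.jacobian(generator, w)  # (out_dim, latent)
    # Right-singular vectors = principal local directions in latent space.
    _, _, Vh = torch.linalg.svd(J, full_matrices=False)
    return Vh[:k]                                          # (k, latent_dim)

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 64))
w = torch.randn(8)
basis = local_basis(lambda z: net(z), w, k=3)
print(basis.shape)    # torch.Size([3, 8]): top local traversal directions
```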
|
Developers often encounter unfamiliar code during software maintenance which
consumes a significant amount of time for comprehension, especially for novice
programmers. Automated techniques that analyze a source code and present key
information to the developers can lead to an effective comprehension of the
code. Researchers have come up with automated code summarization techniques that generate brief summaries of the code rather than aiding its comprehension. Existing debuggers represent the execution states of the program, but they do not show the complete execution at a single point.
Studies have revealed that the effort required for program comprehension can be
reduced if novice programmers are provided with worked examples. Hence, we
propose COSPEX (Comprehension using Summarization via Program Execution) - an
Atom plugin that dynamically extracts key information for every line of code
executed and presents it to the developers in the form of an interactive
example-like dynamic information instance. As a preliminary evaluation, in a user survey we presented 14 undergraduates with up to 1 year of Python programming experience with a code comprehension task. We observed that COSPEX helped
novice programmers in program comprehension and improved their understanding of
the code execution. The source code and tool are available at:
https://bit.ly/3utHOBM, and the demo on Youtube is available at:
https://bit.ly/2Sp08xQ.
|
To enable large-scale Internet of Things (IoT) deployment, Low-power
wide-area networking (LPWAN) has attracted a lot of research attention with the
design objectives of low-power consumption, wide-area coverage, and low cost.
In particular, long battery lifetime is central to these technologies since
many of the IoT devices will be deployed in hard-toaccess locations. Prediction
of the battery lifetime depends on the accurate modelling of power consumption.
This paper presents detailed power consumption models for two cellular IoT
technologies: Narrowband Internet of Things (NB-IoT) and Long Term Evolution
for Machines (LTE-M). A comprehensive power consumption model based on User
Equipment (UE) states and procedures for device battery lifetime estimation is
presented. An IoT device power measurement testbed has been set up, and the proposed model has been validated via measurements with different coverage scenarios and traffic configurations, achieving a modelling inaccuracy within
5%. The resulting estimated battery lifetime is promising, showing that the
10-year battery lifetime requirement specified by 3GPP can be met with proper
configuration of traffic profile, transmission, and network parameters.
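The shape of a state-based lifetime estimate is easy to sketch: average the current drawn across UE states over one reporting cycle and divide the battery capacity by it. The state currents and durations below are illustrative placeholders, not measured NB-IoT/LTE-M values.

```python
# Sketch: battery lifetime from a per-state power model with one 24 h cycle.
states = {                      # (current in mA, seconds per 24 h cycle)
    "transmit":  (120.0, 2.0),
    "receive":   (45.0, 4.0),
    "idle_drx":  (1.2, 60.0),
    "psm_sleep": (0.01, 86334.0),
}
battery_mah = 5000.0
cycle_s = sum(t for _, t in states.values())            # 86400 s = 24 h
charge_mas = sum(i * t for i, t in states.values())     # mA*s per cycle
avg_current_ma = charge_mas / cycle_s
lifetime_years = battery_mah * 3600.0 / charge_mas / 365.0  # daily cycles
print(round(avg_current_ma, 4), "mA avg,", round(lifetime_years, 1), "years")
```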
|
Most of the existing video self-supervised methods mainly leverage temporal
signals of videos, ignoring that the semantics of moving objects and
environmental information are all critical for video-related tasks. In this
paper, we propose a novel self-supervised method for video representation
learning, referred to as Video 3D Sampling (V3S). In order to sufficiently
utilize the information (spatial and temporal) provided in videos, we
pre-process a video from three dimensions (width, height, time). As a result,
we can leverage the spatial information (the size of objects), temporal
information (the direction and magnitude of motions) as our learning target. In
our implementation, we combine the sampling of the three dimensions and propose
the scale and projection transformations in space and time, respectively. The experimental results show that, when applied to action recognition, video retrieval, and action similarity labeling, our approach improves on the state of the art by significant margins.
|
Quantum secure data transfer is an important topic for quantum cyber
security. We propose a scheme to realize quantum secure data transfer on the basis of quantum secure direct communication (QSDC). In this proposal, the
transmitted data is encoded in the pulse shape of a single optical qubit, which
is emitted from a trapped atom owned by the sender and received by the receiver
with another trapped atom. The encoding process can be implemented with high
fidelity by controlling the time-dependent driving pulse on the trapped atom to
manipulate the Rabi frequency in accordance with the target pulse shape of the
emitted photons. In the receiving process, we prove that the single photon can be absorbed with arbitrary probability by selecting an appropriate driving pulse. We also show that, based on the QSDC protocol, the data transfer process is immune to individual attacks.
|
Rate-splitting multiple access (RSMA) has been recognized as a promising physical layer strategy for 6G. Motivated by the ever-increasing popularity of cache-enabled content delivery in wireless communications, this paper proposes
an innovative multigroup multicast transmission scheme based on RSMA for
cache-aided cloud-radio access networks (C-RAN). Our proposed scheme not only
exploits the properties of content-centric communications and local caching at
the base stations (BSs), but also incorporates RSMA to better manage
interference in multigroup multicast transmission with statistical channel
state information (CSI) known at the central processor (CP) and the BSs. At the
RSMA-enabled cloud CP, the message of each multicast group is split into a
private and a common part with the former private part being decoded by all
users in the respective group and the latter common part being decoded by
multiple users from other multicast groups. Common message decoding is done for
the purpose of mitigating the interference. In this work, we jointly optimize
the clustering of BSs and the precoding with the aim of maximizing the minimum
rate among all multicast groups to guarantee fairness serving all groups. The
problem is a mixed-integer non-linear stochastic program (MINLSP), which we solve with a practical algorithm comprising a heuristic clustering algorithm for assigning a set of BSs to serve each user, followed by an efficient iterative algorithm that combines the sample average approximation
(SAA) and weighted minimum mean square error (WMMSE) to solve the stochastic
non-convex sub-problem of precoder design. Numerical results show the explicit
max-min rate gain of our proposed transmission scheme compared to the
state-of-the-art trivial interference processing methods. Therefore, we
conclude that RSMA is a promising technique for cache-aided C-RAN.
|
In this paper, we investigate statistics on alternating words under
correspondence between ``possible reflection paths within several layers of
glass'' and ``alternating words''. For
$v=(v_1,v_2,\cdots,v_n)\in\mathbb{Z}^{n}$, we say $P$ is a path within $n$
glass plates corresponding to $v$, if $P$ has exactly $v_i$ reflections
occurring at the $i^{\rm{th}}$ plate for all $i\in\{1,2,\cdots,n\}$. We give a
recursion for the number of paths corresponding to $v$ satisfying $v \in
\mathbb{Z}^n$ and $\sum_{i\geq 1} v_i=m$. Also, we establish recursions for statistics around the number of paths corresponding to a given vector $v\in\mathbb{Z}^n$ and a closed form for $n=3$. Finally, we give an equivalent condition for the existence of a path corresponding to a given vector $v$.
|
Many models such as Long Short Term Memory (LSTMs), Gated Recurrent Units
(GRUs) and transformers have been developed to classify time series data with
the assumption that events in a sequence are ordered. On the other hand, fewer
models have been developed for set based inputs, where order does not matter.
There are several use cases where data is given as partially-ordered sequences
because of the granularity or uncertainty of time stamps. We introduce a novel transformer-based model for such prediction tasks and benchmark it against extensions of existing order-invariant models. We also discuss how transition
probabilities between events in a sequence can be used to improve model
performance. We show that the transformer-based equal-time model outperforms
extensions of existing set models on three data sets.
|
In this paper, we show that a simple generalization of the holographic axion
model can realize spontaneous breaking of translational symmetry by considering
a special gauge-axion higher derivative term. The finite real part and
imaginary part of the stress tensor imply that the dual boundary system is a
viscoelastic solid. By calculating quasi-normal modes and making a comparison
with predictions from the elasticity theory, we verify the existence of phonons
and pseudo-phonons, where the latter is realized by introducing a weak explicit
breaking of translational symmetry, in the transverse channel. Finally, we
discuss how the phonon dynamics affects the charge transport.
|
In the past decades, many graph drawing techniques have been proposed for
generating aesthetically pleasing graph layouts. However, it remains a
challenging task since different layout methods tend to highlight different
characteristics of the graphs. Recently, studies on deep learning-based graph drawing algorithms have emerged, but they are often not generalizable to arbitrary graphs without re-training. In this paper, we propose a Convolutional
Graph Neural Network based deep learning framework, DeepGD, which can draw
arbitrary graphs once trained. It attempts to generate layouts by compromising
among multiple pre-specified aesthetics considering a good graph layout usually
complies with multiple aesthetics simultaneously. In order to balance the
trade-off, we propose two adaptive training strategies which adjust the weight
factor of each aesthetic dynamically during training. The quantitative and
qualitative assessment of DeepGD demonstrates that it is capable of drawing
arbitrary graphs effectively, while being flexible at accommodating different
aesthetic criteria.
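One plausible shape for such an adaptive weighting scheme is sketched below. This is an illustrative strategy in the spirit described above, not DeepGD's exact rule: each aesthetic's weight is rescaled by its relative loss magnitude so that no single criterion dominates training.

```python
# Sketch: dynamically re-balance per-aesthetic loss weights during training.
def adaptive_weights(losses, base_weights):
    """losses, base_weights: {aesthetic_name: float}."""
    total = sum(losses.values())
    # Aesthetics that currently lag (large loss share) get up-weighted.
    raw = {k: base_weights[k] * losses[k] / total for k in losses}
    norm = sum(raw.values())
    return {k: v / norm for k, v in raw.items()}

losses = {"stress": 4.0, "crossings": 1.0, "angular_res": 1.0}
base = {"stress": 1.0, "crossings": 1.0, "angular_res": 1.0}
print(adaptive_weights(losses, base))   # stress gets ~0.67 of the weight
```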
|
One-shot anonymous unselfishness in economic games is commonly explained by
social preferences, which assume that people care about the monetary payoffs of
others. However, during the last ten years, research has shown that different
types of unselfish behaviour, including cooperation, altruism, truth-telling,
altruistic punishment, and trustworthiness are in fact better explained by
preferences for following one's own personal norms - internal standards about
what is right or wrong in a given situation. Beyond better organising various
forms of unselfish behaviour, this moral preference hypothesis has recently
also been used to increase charitable donations, simply by means of
interventions that make the morality of an action salient. Here we review
experimental and theoretical work dedicated to this rapidly growing field of
research, and in doing so we outline mathematical foundations for moral
preferences that can be used in future models to better understand selfless
human actions and to adjust policies accordingly. These foundations can also be
used by artificial intelligence to better navigate the complex landscape of
human morality.
|
Safety is a critical concern when deploying reinforcement learning agents for
realistic tasks. Recently, safe reinforcement learning algorithms have been
developed to optimize the agent's performance while avoiding violations of
safety constraints. However, few studies have addressed the non-stationary
disturbances in the environments, which may cause catastrophic outcomes. In
this paper, we propose the context-aware safe reinforcement learning (CASRL)
method, a meta-learning framework to realize safe adaptation in non-stationary
environments. We use a probabilistic latent variable model to achieve fast
inference of the posterior environment transition distribution given the
context data. Safety constraints are then evaluated with uncertainty-aware
trajectory sampling. The high cost of safety violations leads to the rareness
of unsafe records in the dataset. We address this issue by enabling prioritized
sampling during model training and formulating prior safety constraints with
domain knowledge during constrained planning. The algorithm is evaluated in
realistic safety-critical environments with non-stationary disturbances.
Results show that the proposed algorithm significantly outperforms existing
baselines in terms of safety and robustness.
|
In 2019 P. Patak and M. Tancer obtained the following higher-dimensional
generalization of the Heawood inequality on embeddings of graphs into surfaces.
We expose this result in a short well-structured way accessible to
non-specialists in the field.
Let $\Delta_n^k$ be the union of $k$-dimensional faces of the $n$-dimensional
simplex.
Theorem. (a) If $\Delta_n^k$ PL embeds into the connected sum of $g$ copies
of the Cartesian product $S^k\times S^k$ of two $k$-dimensional spheres, then
$g\ge\dfrac{n-2k}{k+2}$.
(b) If $\Delta_n^k$ PL embeds into a closed $(k-1)$-connected PL
$2k$-manifold $M$, then $(-1)^k(\chi(M)-2)\ge\dfrac{n-2k}{k+1}$.
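For orientation (our remark, not part of the paper's exposition): in the case $k=1$, $\Delta_n^1$ is the complete graph $K_{n+1}$ and the connected sum of $g$ copies of $S^1\times S^1$ is the closed orientable surface of genus $g$, so part (a) reads
$$g\ \ge\ \frac{n-2}{3},$$
a linear analogue of the classical Heawood-type bound $g\ge\left\lceil\frac{(n-2)(n-3)}{12}\right\rceil$ for embeddings of $K_{n+1}$.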
|
We theoretically study the correlated insulator states, quantum anomalous
Hall (QAH) states, and field-induced topological transitions between different
correlated states in twisted multilayer graphene systems. Taking twisted
bilayer-monolayer graphene and twisted double-bilayer graphene as examples, we
show that both systems stay in spin polarized, $C_{3z}$-broken insulator states
with zero Chern number at 1/2 filling of the flat bands under finite
displacement fields. In some cases these spin polarized, nematic insulator
states are in the quantum valley Hall phase by virtue of the nontrivial band
topology of the systems. The spin polarized insulator state is quasi-degenerate
with the valley polarized state if only the dominant intra-valley Coulomb
interaction is included. Such quasi-degeneracy can be split by atomic on-site
interactions such that the spin polarized, nematic state becomes the unique
ground state. Such a scenario applies to various twisted multilayer graphene
systems at 1/2 filling and can thus be considered a universal mechanism.
Moreover, under vertical magnetic fields, the orbital Zeeman splittings and the
field-induced change of charge density in twisted multilayer graphene systems
would compete with the atomic Hubbard interactions, which can drive transitions
from spin polarized zero-Chern-number states to valley-polarized QAH states
with small onset magnetic fields.
|
The problem of estimating the size of a population based on a subset of
individuals observed across multiple data sources is often referred to as
capture-recapture or multiple-systems estimation. This is fundamentally a
missing data problem, where the number of unobserved individuals represents the
missing data. As with any missing data problem, multiple-systems estimation
requires users to make an untestable identifying assumption in order to
estimate the population size from the observed data. Approaches to
multiple-systems estimation often do not emphasize the role of the identifying
assumption during model specification, which makes it difficult to decouple the
specification of the model for the observed data from the identifying
assumption. We present a re-framing of the multiple-systems estimation problem
that decouples the specification of the observed-data model from the
identifying assumptions, and discuss how log-linear models and the associated
no-highest-order interaction assumption fit into this framing. We present an
approach to computation in the Bayesian setting which takes advantage of
existing software and facilitates various sensitivity analyses. We demonstrate
our approach in a case study of estimating the number of civilian casualties in
the Kosovo war. Code used to produce this manuscript is available at
https://github.com/aleshing/revisiting-identifying-assumptions.
|
To solve the hierarchy problem, the relaxion must remain trapped in the
correct minimum, even if the electroweak symmetry is restored after reheating.
In this scenario, the relaxion starts rolling again until the backreaction
potential, with its set of local minima, reappears. Depending on the time of
barrier reappearance, Hubble friction alone may be insufficient to retrap the
relaxion in a large portion of the parameter space. Thus, an additional source
of friction is required, which might be provided by coupling to a dark
photon. The dark photon experiences a tachyonic instability as the relaxion
rolls, which slows down the relaxion by backreacting to its motion, and
efficiently creates anisotropies in the dark photon energy-momentum tensor,
sourcing gravitational waves. We calculate the spectrum of the resulting
gravitational wave background from this new mechanism, and evaluate its
observability by current and future experiments. We further investigate the
possibility that the coherently oscillating relaxion constitutes dark matter
and present the corresponding constraints from gravitational waves.
|
A regular left-order on a finitely generated group $G$ is a total,
left-multiplication-invariant order on $G$ whose corresponding positive cone is
the image of a regular language over the generating set of the group under the
evaluation map. We show that admitting regular left-orders is stable under
extensions and wreath products, and we give a classification of the groups all
of whose left-orders are regular. In addition, we prove that the solvable
Baumslag-Solitar group $B(1,n)$ admits a regular left-order if and only if
$n\geq -1$. Finally, Hermiller and Sunic showed that no free product admits a
regular left-order; however, we show that if $A$ and $B$ are groups with regular
left-orders, then $(A*B)\times \mathbb{Z}$ admits a regular left-order.
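A minimal example (ours, for orientation): for $G=\mathbb{Z}=\langle a\rangle$ with generating set $\{a,a^{-1}\}$, the positive cone of the natural order is $\{a^n : n\ge 1\}$, the image under the evaluation map of the regular language $aa^*$; hence $\mathbb{Z}$ admits a regular left-order.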
|
Let $(X, D_{X})$ be an arbitrary pointed stable curve of topological type
$(g_{X}, n_{X})$ over an algebraically closed field of characteristic $p>0$. We
prove that the generalized Hasse-Witt invariants of prime-to-$p$ cyclic
admissible coverings of $(X, D_{X})$ attain their maximum. As applications, we obtain
an anabelian formula for $(g_{X}, n_{X})$, and prove that the field structures
associated to inertia subgroups of marked points can be reconstructed
group-theoretically from open continuous homomorphisms of admissible
fundamental groups. Moreover, the formula for maximum generalized Hasse-Witt
invariants and the result concerning reconstructions of field structures play
important roles in the theory of moduli spaces of fundamental groups developed
by the author of the present paper.
|
Deep learning (DL) based language models achieve high performance on various
benchmarks for Natural Language Inference (NLI), while symbolic approaches to
NLI currently receive less attention. Both approaches (symbolic and DL) have
their advantages and weaknesses; however, no current method combines them in a
system to solve the task of NLI. To merge symbolic and deep
learning methods, we propose an inference framework called NeuralLog, which
utilizes both a monotonicity-based logical inference engine and a neural
network language model for phrase alignment. Our framework models the NLI task
as a classic search problem and uses the beam search algorithm to search for
optimal inference paths. Experiments show that our joint logic and neural
inference system improves accuracy on the NLI task and can achieve
state-of-the-art accuracy on the SICK and MED datasets.
|
We continue the study of AdS loop amplitudes in the spectral representation
and in position space. We compute the finite coupling 4-point function in
position space for the large-$N$ conformal Gross Neveu model on $AdS_3$. The
resummation of loop bubble diagrams gives a result proportional to a tree-level
contact diagram. We show that certain families of fermionic Witten diagrams can
be easily computed from their companion scalar diagrams. Thus, many of the
results and identities of [1] are extended to the case of external fermions. We
derive a spectral representation for ladder diagrams in AdS. Finally, we
compute various bulk 2-point correlators, extending the results of [1].
|
Transparent Conductive Oxides (TCOs) are a class of materials that combine
high optical transparency with high electrical conductivity. This property
makes them uniquely appealing as transparent-conductive electrodes in solar
cells and interesting for optoelectronics and infrared-plasmonics applications.
One of the new challenges that researchers and engineers are facing is merging
optical and electrical control in a single device for developing
next-generation photovoltaic and opto-electronic devices and energy-efficient
solid-state lighting. In this work, we investigated the possible variations in
the dielectric properties of aluminum-doped ZnO (AZO) upon gating, by means of
Spectroscopic Ellipsometry (SE). We investigated the electrical-bias-dependent
optical response of thin AZO films fabricated by magnetron sputtering, within a
parallel-plane capacitor configuration. We address the possibility of
controlling their optical and electrical performance by applying a bias,
monitoring the effect of charge injection/depletion in the AZO layer by means
of in-operando SE versus applied gate voltage.
|
We describe a framework to assemble permanent-magnet cubes in 3D-printed
frames to construct dipole, quadrupole, and solenoid magnets, whose field, in
the absence of iron, can be calculated analytically in three spatial
dimensions. Rotating closely spaced dipoles and quadrupoles in opposite
directions allows us to adjust the integrated strength of a multipole.
Contributions of unwanted harmonics are calculated and found to be moderate. We
then combine multiple magnets to construct beam-line modules: chicane, triplet
cell, and solenoid focusing system.
|
We study the evolution of qubit amplitudes in a one-dimensional chain
consisting of three equidistantly spaced noninteracting qubits embedded in an
open waveguide. The study is performed within the single-excitation subspace,
where only one qubit in the chain is initially excited. We show that the
dynamics of the qubit amplitudes crucially depend on the value of $kd$, where
$k$ is the wave vector and $d$ is the distance between neighboring qubits. If
$kd$ is equal to an integer multiple of $\pi$, then the qubits are excited to a
stationary level. In this case, it is the dark states which prevent the qubits
from decaying to zero even though they do not contribute to the output spectrum
of photon emission. For other values of $kd$ the excitations of the qubits
exhibit damped oscillations which represent the vacuum Rabi oscillations in a
three-qubit system. In this case, the output spectrum of photon radiation is
determined by a subradiant state which has the lowest decay rate. We also
investigated the case where the frequency of the central qubit differs from
that of the edge qubits. In this case, the qubits' decay rates can be
controlled by the frequency detuning between the central and the edge qubits.
|
We demonstrate that the recent measurement of the anomalous magnetic moment
of the muon and dark matter can be simultaneously explained within the Minimal
Supersymmetric Standard Model. Dark matter is a mostly-bino state, with the
relic abundance obtained via co-annihilations with either the sleptons or wino.
The most interesting regions of parameter space will be tested by the next
generation of dark matter direct detection experiments.
|
In [arxiv:2106.02560] we proposed a reduced density matrix functional theory
(RDMFT) for calculating energies of selected eigenstates of interacting
many-fermion systems. Here, we develop a solid foundation for this so-called
$\boldsymbol{w}$-RDMFT and present the details of various derivations. First,
we explain how a generalization of the Ritz variational principle to ensemble
states with fixed weights $\boldsymbol{w}$ in combination with the constrained
search would lead to a universal functional of the one-particle reduced density
matrix. To turn this into a viable functional theory, however, we also need to
implement an exact convex relaxation. This general procedure includes Valone's
pioneering work on ground state RDMFT as the special case
$\boldsymbol{w}=(1,0,\ldots)$. Then, we work out in a comprehensive manner a
methodology for deriving a compact description of the functional's domain. This
leads to a hierarchy of generalized exclusion principle constraints which we
illustrate in great detail. By anticipating their future pivotal role in
functional theories and to keep our work self-contained, several required
concepts from convex analysis are introduced and discussed.
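For orientation, the weighted generalization of the Ritz principle invoked above can be stated as follows (a standard formulation; the paper's precise conventions may differ): for weights $w_1\ge w_2\ge\cdots\ge 0$ and Hamiltonian eigenvalues $E_1\le E_2\le\cdots$,
$$\min_{\{\psi_i\}\,\text{orthonormal}}\ \sum_i w_i\,\langle\psi_i|\hat H|\psi_i\rangle \;=\; \sum_i w_i E_i,$$
so that minimizing the $\boldsymbol{w}$-weighted ensemble energy selects the corresponding eigenstates.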
|
Fuzzing is a technique widely used in vulnerability detection. The process
usually involves writing effective fuzz driver programs, which, when done
manually, can be extremely labor intensive. Previous attempts at automation
leave much to be desired, in either degree of automation or quality of output.
In this paper, we propose IntelliGen, a framework that constructs valid fuzz
drivers automatically. First, IntelliGen determines a set of entry functions
and evaluates their respective chance of exhibiting a vulnerability. Then,
IntelliGen generates fuzz drivers for the entry functions through hierarchical
parameter replacement and type inference. We implemented IntelliGen and
evaluated its effectiveness on real-world programs selected from the Android
Open-Source Project, Google's fuzzer-test-suite and industrial collaborators.
IntelliGen covered on average 1.08X-2.03X more basic blocks and 1.36X-2.06X
more paths over state-of-the-art fuzz driver synthesizers FUDGE and FuzzGen.
IntelliGen performed on par with manually written drivers and found 10 more
bugs.
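To make concrete what a fuzz driver is, the following is a minimal hand-written driver using Google's Atheris fuzzer for Python; it is purely illustrative of the concept (the library under test is a stand-in, and this is not IntelliGen's output, which targets entry functions in C/C++-style projects).

```python
import sys
import atheris

with atheris.instrument_imports():
    import json  # stand-in library under test

def TestOneInput(data: bytes):
    # A fuzz driver maps raw fuzzer-provided bytes onto an entry
    # function's parameters and lets unexpected crashes surface.
    try:
        json.loads(data.decode("utf-8", errors="replace"))
    except ValueError:
        pass  # malformed input is expected behaviour, not a bug

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```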
|
A $k$-regular graph is called a divisible design graph (DDG for short) if its
vertex set can be partitioned into $m$ classes of size $n$, such that two
distinct vertices from the same class have exactly $\lambda_1$ common
neighbors, and two vertices from different classes have exactly $\lambda_2$
common neighbors. The $4\times n$-lattice graph is the line graph of $K_{4,n}$.
This graph is a DDG with parameters $(4n,n+2,n-2,2,4,n)$. In this paper we
consider DDGs with these parameters. We prove that if $n$ is odd, then such a
graph can only be a $4\times n$-lattice graph. If $n$ is even, we characterise
all DDGs with such parameters. Moreover, we characterise all DDGs with
parameters $(4n,3n-2,3n-6,2n-2,4,n)$ which are related to $4\times n$-lattice
graphs.
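The stated parameters are easy to confirm empirically for small $n$; the following sketch (ours, using networkx and an arbitrary choice of $n$) checks the class partition and the two common-neighbour counts.

```python
import networkx as nx
from itertools import combinations

n = 6  # any n >= 2; a finite sanity check, not a proof
L = nx.line_graph(nx.complete_bipartite_graph(4, n))  # the 4 x n lattice graph

# Classes: edges of K_{4,n} grouped by their endpoint on the 4-vertex side.
cls = lambda e: e[0] if e[0] < 4 else e[1]

same, diff = set(), set()
for a, b in combinations(L.nodes, 2):
    c = len(list(nx.common_neighbors(L, a, b)))
    (same if cls(a) == cls(b) else diff).add(c)

print(L.number_of_nodes(), {d for _, d in L.degree}, same, diff)
# expected: 4n vertices, degree {n+2}, same-class {n-2}, cross-class {2}
```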
|
This paper presents a novel and flexible multi-task multi-layer Bayesian
mapping framework with readily extendable attribute layers. The proposed
framework goes beyond modern metric-semantic maps to provide even richer
environmental information for robots in a single mapping formalism while
exploiting existing inter-layer correlations. It removes the need for a robot
to access and process information from many separate maps when performing a
complex task and benefits from the correlation between map layers, advancing
the way robots interact with their environments. To this end, we design a
multi-task deep neural network with attention mechanisms as our front-end to
provide multiple observations for multiple map layers simultaneously. Our
back-end runs a scalable closed-form Bayesian inference with only logarithmic
time complexity. We apply the framework to build a dense robotic map including
metric-semantic occupancy and traversability layers. Traversability ground
truth labels are automatically generated from exteroceptive sensory data in a
self-supervised manner. We present extensive experimental results on publicly
available data sets and data collected by a 3D bipedal robot platform on the
University of Michigan North Campus and show reliable mapping performance in
different environments. Finally, we also discuss how the current framework can
be extended to incorporate more information such as friction, signal strength,
temperature, and physical quantity concentration using Gaussian map layers. The
software for reproducing the presented results or running on customized data is
made publicly available.
|
The level set method is a widely used tool for solving reachability and
invariance problems. However, some shortcomings, such as the difficulty of
handling the dissipation function and of constructing terminal conditions for
solving the Hamilton-Jacobi partial differential equation, limit the application of the
level set method in some problems with non-affine nonlinear systems and
irregular target sets. This paper proposes a method that can effectively avoid
the above tricky issues and thus has better generality. In the proposed method,
the reachable or invariant sets with different time horizons are characterized
by some non-zero sublevel sets of a value function. This value function is not
obtained by computing a viscosity solution of the partial differential equation
but by recursion and interpolation-based approximation. Finally, several
examples illustrate the accuracy and generality of the proposed method.
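A one-dimensional caricature of the recursion-and-interpolation idea (our sketch under simplifying assumptions, not the paper's algorithm):

```python
import numpy as np

def backward_reach(f, controls, xs, target_mask, steps, dt):
    # V = 0 on the target; each backward step keeps the best one-step
    # successor value, read off the fixed grid xs by linear interpolation.
    # The k-step backward reachable set is the sublevel set {V < 1}.
    V = np.where(target_mask, 0.0, 1.0)
    for _ in range(steps):
        succ = [np.interp(xs + dt * f(xs, u), xs, V) for u in controls]
        V = np.minimum(V, np.min(succ, axis=0))
    return V

# Toy system: x' = u with |u| <= 1, target set [-0.1, 0.1], horizon 1.0.
xs = np.linspace(-2.0, 2.0, 401)
V = backward_reach(lambda x, u: u, [-1.0, 1.0], xs,
                   np.abs(xs) <= 0.1, steps=100, dt=0.01)
print(xs[V < 1.0].min(), xs[V < 1.0].max())  # approximately [-1.1, 1.1]
```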
|
We prove that a special variety of quadratically constrained quadratic
programs, occurring frequently in conjunction with the design of wave systems
obeying causality and passivity (i.e., systems with bounded response),
universally exhibits strong duality. Directly, the problem of continuum
("grayscale" or "effective medium") device design for any (complex) quadratic
wave objective governed by independent quadratic constraints can be solved as a
convex program. The result guarantees that performance limits for many common
physical objectives can be made nearly "tight", and suggests far-reaching
implications for problems in optics, acoustics, and quantum mechanics.
|
PQ-type adjacency polytopes $\nabla^{\rm PQ}_G$ are lattice polytopes arising
from finite graphs $G$. There is a connection between $\nabla^{\rm PQ}_G$ and
the engineering problem known as power-flow study, which models the balance of
electric power on a network of power generation. In particular, the normalized
volume of $\nabla^{\rm PQ}_G$ plays a central role. In the present paper, we
focus on the case where $G$ is a join graph. In fact, formulas for the
$h^*$-polynomial and the normalized volume of $\nabla^{\rm PQ}_G$ of a join
graph $G$ are presented. Moreover, we give explicit formulas of the
$h^*$-polynomial and the normalized volume of $\nabla^{\rm PQ}_G$ when $G$ is a
complete multipartite graph or a wheel graph.
|
In this paper, given a linear system of equations $Ax = b$, we find locations
in the plane to place objects such that sending waves from the source points
and gathering them at the receiving points solves that linear system of
equations. The ultimate goal is a fast physical method for solving linear
systems. The issue discussed in this paper is to apply a fast and accurate
algorithm to find the optimal locations of the scattering objects. We tackle
this issue by using asymptotic expansions for the solution of the underlying
partial differential equation. This also yields a potentially faster algorithm
than the classical BEM for finding solutions to the Helmholtz equation.
|
In this paper, we provide a precise description of the compatibility
conditions on the initial data under which one can show the existence and
uniqueness of a regular short-time solution to the Neumann initial-boundary
problem for a class of Landau-Lifshitz-Gilbert systems with spin-polarized
transport, a strongly nonlinear coupled parabolic system with non-local
energy.
|
We prove F\"{o}llmer's pathwise It\^{o} formula for a Banach space-valued
c\`{a}dl\`{a}g path. We also relax the assumption on the sequence of partitions
along which we treat the quadratic variation of a path.
|
We show that the global assumptions on the H-flux in the definition of
T-duality for principal torus bundles by Bunke, Rumpf, and Schick are not
required. That is, these global conditions are implied by the Poincar\'e bundle
condition. This is proved using a new and equivalent "Thom class" formulation
of T-duality for principal torus bundles. We then generalise the local
formulation of T-duality by Bunke, Schick, and Spitzweck to the torus case.
|
Let $\widetilde{\bf U}^\imath$ be a quasi-split universal $\imath$quantum
group associated to a quantum symmetric pair $(\widetilde{\bf U},
\widetilde{\bf U}^\imath)$ of Kac-Moody type with a diagram involution $\tau$.
We establish the Serre-Lusztig relations for $\widetilde{\bf U}^\imath$
associated to a simple root $i$ such that $i \neq \tau i$, complementary to the
Serre-Lusztig relations associated to $i=\tau i$ which we obtained earlier. A
conjecture on braid group symmetries on $\widetilde{\bf U}^\imath$ associated
to $i$ disjoint from $\tau i$ is formulated.
|
We propose a novel distributed monetary system called Hearsay that tolerates
both Byzantine and rational behavior without the need for agents to reach
consensus on executed transactions. Recent work [5, 10, 15] has shown that
distributed monetary systems do not require consensus and can operate using a
broadcast primitive with weaker guarantees, such as reliable broadcast.
However, these protocols assume that some number of agents may be Byzantine and
the remaining agents are perfectly correct. For the application of a monetary
system in which the agents are real people with economic interests, the
assumption that agents are perfectly correct may be too strong. We expand upon
this line of thought by weakening the assumption of correctness and instead
adopting a fault tolerance model which allows up to $t < \frac{N}{3}$ agents to
be Byzantine and the remaining agents to be rational. A rational agent is one
which will deviate from the protocol if it is in their own best interest. Under
this fault tolerance model, Hearsay implements a monetary system in which all
rational agents achieve agreement on executed transactions. Moreover, Hearsay
requires only a single broadcast per transaction. In order to incentivize
rational agents to behave correctly in Hearsay, agents are rewarded with
transaction fees for participation in the protocol and punished for noticeable
deviations from the protocol. Additionally, Hearsay uses a novel broadcast
primitive called Rational Reliable Broadcast to ensure that agents can
broadcast messages under Hearsay's fault tolerance model. Rational Reliable
Broadcast achieves equivalent guarantees to Byzantine Reliable Broadcast [7]
but can tolerate the presence of rational agents. To show this, we prove that
following the Rational Reliable Broadcast protocol constitutes a Nash
equilibrium between rational agents; the protocol may therefore be of
independent interest.
|
In this work we consider the online control of a known linear dynamic system
with adversarial disturbance and adversarial controller cost. The goal in
online control is to minimize the regret, defined as the difference between
cumulative cost over a period $T$ and the cumulative cost for the best policy
from a comparator class. For the setting we consider, we generalize the
previously proposed online Disturbance Response Controller (DRC) to the
adaptive gradient online Disturbance Response Controller. Using the modified
controller, we present novel regret guarantees that improve the established
regret guarantees for the same setting. We show that the proposed online
learning controller is able to achieve intermediate regret rates between
$\sqrt{T}$ and $\log{T}$ for intermediate convex conditions, while it
recovers the previously established regret results for general convex
controller cost and strongly convex controller cost.
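In the notation commonly used for this setting (our restatement), the regret over horizon $T$ is
$$\mathrm{Regret}_T=\sum_{t=1}^{T}c_t(x_t,u_t)\;-\;\min_{\pi\in\Pi}\sum_{t=1}^{T}c_t\big(x_t^{\pi},u_t^{\pi}\big),$$
where $\Pi$ is the comparator class and $(x_t^{\pi},u_t^{\pi})$ is the state-input trajectory induced by policy $\pi$ under the same disturbance sequence.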
|
Quantum measurement is ultimately a physical process, resulting from an
interaction between the measured system and a measuring apparatus. Considering
the physical process of measurement within a thermodynamic context naturally
raises the following question: How can the work and heat be interpreted? In the
present paper we model the measurement process for an arbitrary discrete
observable as a measurement scheme. Here the system to be measured is first
unitarily coupled with an apparatus and subsequently the compound system is
objectified with respect to a pointer observable, thus producing definite
measurement outcomes. The work can therefore be interpreted as the change in
internal energy of the compound system due to the unitary coupling. By the
first law of thermodynamics, the heat is the subsequent change in internal
energy of this compound due to pointer objectification. We argue that the
apparatus serves as a stable record for the measurement outcomes only if the
pointer observable commutes with the Hamiltonian and show that such
commutativity implies that the uncertainty of heat will necessarily be
classical.
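One way to write the resulting first-law bookkeeping (our notation; the paper's conventions may differ): with compound Hamiltonian $H$, premeasurement state $\rho_{\mathcal S}\otimes\rho_{\mathcal A}$, coupling unitary $U$, and objectification map $\mathcal O$,
$$W=\operatorname{Tr}\!\big[H\,\sigma\big]-\operatorname{Tr}\!\big[H\,(\rho_{\mathcal S}\otimes\rho_{\mathcal A})\big],\qquad Q=\operatorname{Tr}\!\big[H\,\mathcal O(\sigma)\big]-\operatorname{Tr}\!\big[H\,\sigma\big],$$
where $\sigma=U(\rho_{\mathcal S}\otimes\rho_{\mathcal A})U^{\dagger}$, so that the change in internal energy is $W+Q$.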
|
Crack microgeometries exert a paramount influence on effective elastic
characteristics and sonic responses. Geophysical exploration based on seismic
methods is widely used to assess and understand the presence of fractures.
Numerical simulation, a promising approach to this issue, still faces some
challenges. With the rapid development of computers and computational
techniques, discrete-based numerical approaches with desirable properties have
been increasingly developed, but have not yet been extensively applied to
seismic response simulation for complex fractured media. For this purpose, we
apply the coupled LSM-DFN model (Liu and Fu, 2020b) to examine its validity in
emulating elastic wave propagation and scattering in naturally fractured media.
The implementation of the scheme is validated against theoretical values, with
optimized input parameters. Moreover, dynamic elastic moduli from seismic
responses are calculated and compared with static ones from quasi-static
loading in uniaxial compression tests. Numerical results are
consistent with the tendency of theoretical predictions and available
experimental data. It shows the potential for reproducing the seismic responses
in complex fractured media and quantitatively investigating the correlations
and differences between static and dynamic elastic moduli.
|
Three-point correlators of spinning operators admit multiple tensor
structures compatible with conformal symmetry. For conserved currents in three
dimensions, we point out that helicity commutes with conformal transformations
and we use this to construct three-point structures which diagonalize helicity.
In this helicity basis, OPE data is found to be diagonal for mean-field
correlators of conserved currents and the stress tensor. Furthermore, we use
the Lorentzian inversion formula to obtain anomalous dimensions for conserved
currents at bulk tree-level order in holographic theories, which we compare
with corresponding flat-space gluon scattering amplitudes.
|
The essence of the microgrid cyber-physical system (CPS) lies in the cyclical
conversion of information flow and energy flow. Most of the existing coupling
models are modeled with static networks and interface structures, in which the
closed-loop data flow characteristic is not fully considered. It is difficult
for these models to accurately describe spatiotemporal deduction processes,
such as microgrid CPS attack identification, risk propagation, safety
assessment, defense control, and cascading failure. To address this problem, a
modeling method for the coupling relations of microgrid CPS driven by hybrid
spatiotemporal events is proposed in the present work. First, according to the
topological correlation and coupling logic of the microgrid CPS, the cyclical
conversion mechanism of information flow and energy flow is analyzed, and a
microgrid CPS architecture with multi-agents as the core is constructed. Next,
the spatiotemporal evolution characteristic of the CPS is described by hybrid
automata, and the task coordination mechanism of the multi-agent CPS terminal
is designed. On this basis, a discrete-continuous correlation and terminal
structure characteristic representation method of the CPS based on
heterogeneous multi-groups are then proposed. Finally, four spatiotemporal
events, namely state perception, network communication, intelligent
decision-making, and action control, are defined. Considering the constraints
of the temporal conversion of information flow and energy flow, a microgrid CPS
coupling model is established, the effectiveness of which is verified by
simulating false data injection attack (FDIA) scenarios.
|
Assessment and reporting of skills is a central feature of many digital
learning platforms. With students often using multiple platforms,
cross-platform assessment has emerged as a new challenge. While technologies
such as Learning Tools Interoperability (LTI) have enabled communication
between platforms, reconciling the different skill taxonomies they employ has
not been solved at scale. In this paper, we introduce and evaluate a
methodology for finding and linking equivalent skills between platforms by
utilizing problem content as well as the platform's clickstream data. We
propose six models to represent skills as continuous real-valued vectors and
leverage machine translation to map between skill spaces. The methods are
tested on three digital learning platforms: ASSISTments, Khan Academy, and
Cognitive Tutor. Our results demonstrate reasonable accuracy in skill
equivalency prediction from a fine-grained taxonomy to a coarse-grained one,
achieving an average recall@5 of 0.8 between the three platforms. Our skill
translation approach has implications for aiding in the tedious, manual process
of taxonomy to taxonomy mapping work, also called crosswalks, within the
tutoring as well as standardized testing worlds.
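The reported metric is straightforward to compute; a minimal sketch (ours, with hypothetical similarity scores and gold alignments) is:

```python
import numpy as np

def recall_at_k(sim, gold, k=5):
    # sim[i, j]: similarity of source skill i to target skill j (e.g. from
    # the translated vector spaces); gold[i]: true equivalent target index.
    topk = np.argsort(-sim, axis=1)[:, :k]
    return float(np.mean([g in row for g, row in zip(gold, topk)]))

rng = np.random.default_rng(0)
sim = rng.normal(size=(50, 200))    # 50 source skills, 200 target skills
gold = np.arange(50)                # hypothetical gold alignment
print(recall_at_k(sim, gold, k=5))  # fraction with the match in the top 5
```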
|
The ability to transfer coherent quantum information between systems is a
fundamental component of quantum technologies and leads to coherent
correlations within the global quantum process. However, correlation structures
in quantum channels are less studied than those in quantum states. Motivated by
recent techniques in randomized benchmarking, we develop a range of results for
efficient estimation of correlations within a bipartite quantum channel. We
introduce sub-unitarity measures that are invariant under local changes of
basis, generalize the unitarity of a channel, and allow for the analysis of
coherent information exchange within channels. Using these, we show that
unitarity is monogamous, and provide a novel information-disturbance relation.
We then define a notion of correlated unitarity that quantifies the
correlations within a given channel. Crucially, we show that this measure is
strictly bounded on the set of separable channels and therefore provides a
witness of non-separability. Finally, we describe how such measures for
effective noise channels can be efficiently estimated within different
randomized benchmarking protocols. We find that the correlated unitarity can be
estimated in a SPAM-robust manner for any separable quantum channel, and show
that a benchmarking/tomography protocol with mid-circuit resets can reliably
witness non-separability for sufficiently small reset errors. The tools we
develop provide information beyond that obtained via simultaneous randomized
benchmarking and so could find application in the analysis of cross-talk and
coherent errors in quantum devices.
|
In this work, we want to learn to model the dynamics of similar yet distinct
groups of interacting objects. These groups follow some common physical laws
that exhibit specificities that are captured through some vectorial
description. We develop a model that allows us to do conditional generation
from any such group given its vectorial description. Unlike previous work on
learning dynamical systems, which can only do trajectory completion and
require part of the trajectory dynamics to be provided as input at generation
time, we generate using only the conditioning vector, with no access to
trajectories at generation time. We evaluate our model in the setting of
modeling human gait and, in particular, pathological human gait.
|
Although monitoring and covering are fundamental goals of a wireless sensor
network (WSN), the accidental failure of sensors or the depletion of their
energy can result in holes in the WSN. Such holes have the potential to
disrupt the primary functions of WSNs. This paper investigates the hole
detection and healing problems in hybrid WSNs with non-identical sensor sensing
ranges. In particular, we aim to propose centralized algorithms for detecting
holes in a given region and maximizing the area covered by a WSN in the
presence of environmental obstacles. To precisely identify the boundary of the
holes, we use an additively weighted Voronoi diagram and a polynomial-time
algorithm. Furthermore, since this problem is known to be computationally
difficult, we propose a centralized greedy 1/2-approximation algorithm to
maximize the area covered by sensors. Finally, we implement the algorithms and
run simulations to show that our approximation algorithm efficiently covers the
holes by moving the mobile sensors.
|
Toward achieving robust and defensive neural networks, robustness against
perturbations of the weight parameters, i.e., sharpness, has attracted
attention in recent years (Sun et al., 2020). However, sharpness is known to
suffer from a critical issue: scale sensitivity. In this paper, we propose a
novel sharpness measure, Minimum Sharpness. It is known that NNs have a
specific scale transformation that constitutes equivalence classes in which
functional properties are completely identical, while at the same time the
sharpness can change without bound. We define our sharpness through a
minimization problem over the equivalent NNs, making it invariant to the scale
transformation. We also develop an efficient and exact technique to make the
sharpness tractable, which reduces the heavy computational costs involved with
the Hessian. In experiments, we observed that our sharpness has a valid
correlation with the generalization of NNs and runs with less computational
cost than existing sharpness measures.
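A schematic instance of the scale transformation (our illustration): for a two-layer ReLU network $f(x)=W_2\,\mathrm{ReLU}(W_1x)$ and any $\alpha>0$, the rescaling $(W_1,W_2)\mapsto(\alpha W_1,\alpha^{-1}W_2)$ leaves $f$ unchanged by positive homogeneity of ReLU, while Hessian-based sharpness varies with $\alpha$ without bound; a minimum-sharpness measure is accordingly of the form
$$\min_{\alpha>0}\ \mathrm{sharpness}\big(\alpha W_1,\ \alpha^{-1}W_2\big).$$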
|
We ask the question: to what extent can recent large-scale language and image
generation models blend visual concepts? Given an arbitrary object, we identify
a relevant object and generate a single-sentence description of the blend of
the two using a language model. We then generate a visual depiction of the
blend using a text-based image generation model. Quantitative and qualitative
evaluations demonstrate the superiority of language models over classical
methods for conceptual blending, and of recent large-scale image generation
models over prior models for the visual depiction.
|
This technical report presents a Systematic Literature Review (SLR) study
that focuses on identifying and classifying the recent research practices
pertaining to CPS development through MDE approaches. The study evaluates 140
research papers published during 2010-2018. Accordingly, a comprehensive
analysis of various MDE approaches used in the development life-cycle of CPS is
presented. Furthermore, the study identifies the research gaps and areas that
need more investigation. The contribution helps researchers and practitioners
to get an overall understanding of the research trends and existing challenges
for further research/development.
|
Dark energy stars are a topic of great interest, since recent astronomical
observations (measurements of distant supernovae, the cosmic microwave
background, and weak gravitational lensing) confirm that the universe is
undergoing a phase of accelerated expansion, a cosmological behavior caused by
the presence of a cosmic fluid with a strong negative pressure. In this paper,
we obtained new relativistic stellar configurations within the framework of
Einstein-Gauss-Bonnet (EGB) gravity, considering negative anisotropic pressures
and the equation of state $p_r=\omega\rho$, where $p_r$ is the radial pressure,
$\omega$ is the dark energy parameter, and $\rho$ is the dark energy density.
We have chosen a modified version of the metric potential proposed by
Korkina-Orlyanskii (1991). For the new solutions we checked that the radial
pressure, metric coefficients, energy density, and anisotropy are well
defined, regular in the interior of the star, and dependent on the value of
the Gauss-Bonnet coupling constant. The solutions found can be used in the
development of dark energy star models satisfying all physical acceptability
conditions, but the causality condition and strong energy condition cannot be
satisfied.
|
Establishing and approaching the fundamental limit of orbital angular
momentum (OAM) multiplexing are necessary and increasingly urgent for current
multiple-input multiple-output research. In this work, we elaborate the
fundamental limit in terms of independent scattering channels (or degrees of
freedom of scattered fields) through angular-spectral analysis, in conjunction
with a rigorous Green function method. The scattering channel limit is
universal for arbitrary spatial mode multiplexing, which is launched by a
planar electromagnetic device, such as an antenna or a metasurface, with a
predefined physical size. As a proof of concept, we demonstrate both
theoretically and experimentally the limit by a metasurface hologram that
transforms orthogonal OAM modes to plane-wave modes scattered at critically
separated angular-spectral regions. Particularly, a minimax optimization
algorithm is applied to suppress angular spectrum aliasing, achieving good
performances in both full-wave simulation and experimental measurement at
microwave frequencies. This work offers a theoretical upper bound and
corresponding approach route for engineering designs of OAM multiplexing.
|
This paper considers the problem of evaluating at integer arguments the zeta
function and five other functions naturally associated with it. A relatively
elementary approach is presented, which closely connects this still partially
open problem to five themes of parity: the notions of the parity of a function
and of the parity of the degree of a polynomial are related here to the parity
distinctions concerning the natural argument of the six functions considered,
as well as the integers whose inverse powers are summed. The adopted method
essentially aims at giving mathematics students an entry point into this
problem.
|
Neural Ordinary Differential Equations (NODEs), a framework of
continuous-depth neural networks, have been widely applied, showing exceptional
efficacy in coping with some representative datasets. Recently, an augmented
framework has been successfully developed for conquering some limitations
emergent in applications of the original framework. Here we propose a new
class of continuous-depth neural networks with delay, named Neural Delay
Differential Equations (NDDEs), and, for computing the corresponding gradients,
we use the adjoint sensitivity method to obtain the delayed dynamics of the
adjoint. Since the differential equations with delays are usually seen as
dynamical systems of infinite dimension possessing more fruitful dynamics, the
NDDEs, compared to the NODEs, own a stronger capacity of nonlinear
representations. Indeed, we analytically validate that the NDDEs are
universal approximators, and further articulate an extension of the NDDEs,
where the initial function of the NDDEs is supposed to satisfy ODEs. More
importantly, we use several illustrative examples to demonstrate the
outstanding capacities of the NDDEs and the NDDEs with ODEs' initial value.
Specifically, (1) we successfully model the delayed dynamics where the
trajectories in the lower-dimensional phase space can mutually intersect,
while the traditional NODEs without any augmentation are not directly
applicable to such modeling, and (2) we achieve lower loss and higher
accuracy not only for the data produced synthetically by complex models but
also for the real-world image datasets, i.e., CIFAR10, MNIST, and SVHN. Our
results on the NDDEs reveal that appropriately articulating the elements of
dynamical systems into the network design is truly beneficial to promoting the
network performance.
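To fix ideas about the delayed dynamics themselves (independently of training), a minimal forward-Euler integrator for $x'(t)=f\big(x(t),x(t-\tau)\big)$ with history function $\phi$ can be sketched as follows; this toy code is ours, not the NDDE implementation.

```python
import numpy as np

def euler_dde(f, phi, tau, t1, dt=1e-2):
    # History buffer: phi on [-tau, 0]; then step forward, reading the
    # delayed state n_hist steps back in the stored trajectory.
    n_hist = int(round(tau / dt))
    xs = [phi(-k * dt) for k in range(n_hist, 0, -1)] + [phi(0.0)]
    for _ in range(int(round(t1 / dt))):
        x, x_delayed = xs[-1], xs[-1 - n_hist]
        xs.append(x + dt * f(x, x_delayed))
    return np.array(xs[n_hist:])  # trajectory on [0, t1]

# Example: x'(t) = -x(t - 1) with constant history x(t) = 1 for t <= 0,
# a classic delay equation whose solution oscillates while decaying.
traj = euler_dde(lambda x, xd: -xd, lambda t: 1.0, tau=1.0, t1=5.0)
```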
|
The difference in the density of states for up- and down-spin electrons in a
ferromagnet (F) results in spin-dependent scattering of electrons at a
ferromagnet / nonmagnetic (F/N) interface. In a F/N/F spin-valve, this causes a
current-independent difference in resistance ($\Delta R$) between antiparallel
(AP) and parallel (P) magnetization states. Giant magnetoresistance (GMR),
$\Delta R = R(AP) - R(P)$, is positive due to increased scattering of majority
and minority spin-electrons in the AP-state. If N is substituted for a
superconductor (S), there exists a competition between GMR and the
superconducting spin-valve effect: in the AP-state the net magnetic exchange
field acting on S is lowered and the superconductivity is reinforced meaning
$R(AP)$ decreases. For current-perpendicular-to-plane (CPP) spin-valves,
existing experimental studies show that GMR dominates ($\Delta R>0$) over the
superconducting spin valve effect ($\Delta R<0$) [J. Y. Gu et al., Phys. Rev. B
66, 140507(R) (2002)]. Here, however, we report a crossover from GMR ($\Delta R
> 0$) to the superconducting spin valve effect ($\Delta R < 0$) in CPP F/S/F
spin-valves as the superconductor thickness decreases below a critical value.
|
We consider Markov Decision Processes (MDPs) in which every stationary policy
induces the same graph structure for the underlying Markov chain and further,
the graph has the following property: if we replace each recurrent class by a
node, then the resulting graph is acyclic. For such MDPs, we prove the
convergence of the stochastic dynamics associated with a version of optimistic
policy iteration (OPI), suggested in Tsitsiklis (2002), in which the values
associated with all the nodes visited during each iteration of the OPI are
updated.
|
Forecasting football outcomes in terms of Home Win, Draw and Away Win relies
largely on ex ante probability elicitation of these events and ex post
verification of them via computation of probability scoring rules (Brier,
Ranked Probability, Logarithmic, Zero-One scores). Usually, appraisal of the
quality of forecasting procedures is restricted to reporting mean score values.
The purpose of this article is to propose additional tools of verification,
such as score decompositions into several components of special interest.
Graphical and numerical diagnoses of reliability and discrimination and kindred
statistical methods are presented using different techniques of binning (fixed
thresholds, quantiles, logistic and isotonic regression). These procedures are
illustrated on probability forecasts for the outcomes of the UEFA Champions
League (C1) at the end of the group stage, based on typical Poisson regression
models, with reasonably good results in terms of reliability compared with
those obtained from bookmaker odds, whichever technique is used. Links with
research in machine learning and different areas of application (meteorology,
medicine) are discussed.
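As one example of such a decomposition, the Murphy decomposition of the Brier score into reliability, resolution and uncertainty can be computed with fixed-threshold binning as follows (a sketch we add for a binary event; the article treats the three-outcome case and further binning schemes).

```python
import numpy as np

def brier_decomposition(p, y, n_bins=10):
    # p: forecast probabilities in [0, 1]; y: binary outcomes (0/1).
    p, y = np.asarray(p, float), np.asarray(y, float)
    ybar = y.mean()
    bins = np.minimum((p * n_bins).astype(int), n_bins - 1)
    rel = res = 0.0
    for b in range(n_bins):
        m = bins == b
        if m.any():
            w = m.mean()
            rel += w * (p[m].mean() - y[m].mean()) ** 2  # reliability
            res += w * (y[m].mean() - ybar) ** 2         # resolution
    # Brier = rel - res + unc holds exactly when forecasts are constant
    # within bins; otherwise a within-bin variance term remains.
    return rel, res, ybar * (1.0 - ybar)
```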
|
In this work, we present an analysis tool to help golf beginners compare
their swing motion with experts' swing motion. The proposed application
synchronizes videos with different swing phase timings using the latent
features extracted by a neural network-based encoder and detects key frames
where discrepant motions occur. We visualize synchronized image frames and 3D
poses that help users recognize the difference and the key factors that can be
important for their swing skill improvement.
|