This paper develops a novel second-order cone relaxation of the semidefinite
programming formulation of optimal power flow that does not imply the `angle
relaxation'. We build on a technique developed by Kim et al., extend it to
complex matrices, and apply it to 3x3 positive semidefinite matrices to
generate novel second-order cone constraints that augment the well-known
2x2 principal-minor-based second-order cone constraints. Finally, we apply it
to optimal power flow in meshed networks and provide numerical illustrations.
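As a side illustration, the following minimal NumPy sketch checks the well-known 2x2 principal-minor second-order cone conditions that the novel constraints are said to augment; it is not the paper's new 3x3 construction, and the voltage vector below is a placeholder.

```python
import numpy as np

def socp_2x2_minors_satisfied(W, tol=1e-9):
    """Check the standard 2x2 principal-minor SOC conditions for a Hermitian matrix W:
    for every pair (i, j), W_ii >= 0 and |W_ij|^2 <= W_ii * W_jj, which is a
    (rotated) second-order cone constraint on the entries of W."""
    n = W.shape[0]
    for i in range(n):
        if W[i, i].real < -tol:
            return False
        for j in range(i + 1, n):
            if abs(W[i, j]) ** 2 > W[i, i].real * W[j, j].real + tol:
                return False
    return True

# Example: a 3x3 Hermitian rank-1 matrix W = V V^H built from a placeholder voltage vector.
V = np.array([1.0, 0.98 * np.exp(-0.10j), 1.01 * np.exp(0.05j)])
W = np.outer(V, V.conj())
print(socp_2x2_minors_satisfied(W))  # True: rank-1 PSD matrices satisfy all 2x2 minors
```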
|
Self-supervised or weakly supervised models trained on large-scale datasets
have shown sample-efficient transfer to diverse datasets in few-shot settings.
We consider how upstream pretrained models can be leveraged for downstream
few-shot, multi-label, and continual learning tasks. Our model CLIPPER (CLIP
PERsonalized) uses image representations from CLIP, a large-scale image
representation learning model trained using weak natural language supervision.
We developed a technique, called Multi-label Weight Imprinting (MWI), for
multi-label, continual, and few-shot learning, and CLIPPER uses MWI with image
representations from CLIP. We evaluated CLIPPER on 10 single-label and 5
multi-label datasets. Our model shows robust and competitive performance, and
we set new benchmarks for few-shot, multi-label, and continual learning. Our
lightweight technique is also compute-efficient and enables privacy-preserving
applications as the data is not sent to the upstream model for fine-tuning.
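As an illustration of the weight-imprinting idea, here is a schematic NumPy sketch in which each class weight is the normalised mean of normalised support embeddings, and multi-label prediction thresholds cosine similarities per class; the feature dimension, threshold, and random "CLIP-like" features are placeholders, and this is not the paper's exact MWI procedure.

```python
import numpy as np

def imprint_weights(embeddings_per_class):
    """Weight imprinting: each class weight is the L2-normalised mean of the
    L2-normalised support embeddings for that class."""
    weights = []
    for emb in embeddings_per_class:          # emb: (n_shots, dim), e.g. CLIP image features
        emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        w = emb.mean(axis=0)
        weights.append(w / np.linalg.norm(w))
    return np.stack(weights)                  # (n_classes, dim)

def predict_multilabel(weights, query, threshold=0.3):
    """Multi-label prediction: cosine similarity against every imprinted class,
    thresholded independently per class rather than through a softmax."""
    q = query / np.linalg.norm(query)
    scores = weights @ q
    return scores, scores > threshold

# Toy usage with random "CLIP-like" 512-d features: 3 classes, 5 shots each.
rng = np.random.default_rng(0)
support = [rng.normal(size=(5, 512)) for _ in range(3)]
W = imprint_weights(support)
scores, labels = predict_multilabel(W, rng.normal(size=512))
print(scores, labels)
```

In this picture, continual learning amounts to appending newly imprinted rows to the weight matrix without retraining the existing ones.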
|
The adaptive traffic signal control (ATSC) problem can be modeled as a
multiagent cooperative game among urban intersections, where intersections
cooperate to optimize their common goal. Recently, reinforcement learning (RL)
has achieved marked successes in managing sequential decision making problems,
which motivates us to apply RL to the ATSC problem. In this study, we use
independent reinforcement learning (IRL) to solve a complex cooperative traffic
control problem. One of the largest challenges of this problem is that each
intersection's observation of the environment is typically only partial, which
limits the learning performance of IRL algorithms. To this end, we model the
traffic control problem as a partially observable weak cooperative traffic
model (PO-WCTM) to optimize the overall traffic situation of a group of
intersections. Unlike a traditional IRL task, which averages the returns of all
agents in fully cooperative games, the learning goal of each intersection in
PO-WCTM is chosen to reduce the difficulty of cooperative learning, which is
also consistent with the traffic environment hypothesis. We also propose an IRL
algorithm called Cooperative Important Lenient Double DQN (CIL-DDQN), which
extends the Double DQN (DDQN) algorithm with two mechanisms: the forgetful
experience mechanism and the lenient weight training mechanism. The former
decreases the importance of experiences stored in the experience replay buffer,
addressing the problem of experience failure caused by the strategy changes of
other agents. The latter increases the weight of experiences with high
estimation and `leniently' trains the DDQN
neural network, which improves the probability of the selection of cooperative
joint strategies. Experimental results show that CIL-DDQN outperforms other
methods in almost all performance indicators of the traffic control problem.
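The forgetful experience mechanism can be pictured with the following illustrative sketch of a replay buffer whose stored experiences carry decaying importance weights; the decay schedule and capacity are assumptions rather than the paper's exact rule.

```python
import random

class ForgetfulReplayBuffer:
    """Replay buffer in which every stored experience carries an importance
    weight that decays over time, so stale experiences (collected under other
    agents' older policies) are sampled less often."""

    def __init__(self, capacity=10000, decay=0.999):
        self.capacity, self.decay = capacity, decay
        self.data, self.weights = [], []

    def add(self, transition):
        if len(self.data) >= self.capacity:       # drop the oldest experience
            self.data.pop(0)
            self.weights.pop(0)
        self.data.append(transition)
        self.weights.append(1.0)                  # fresh experiences start at full importance

    def step(self):
        self.weights = [w * self.decay for w in self.weights]

    def sample(self, batch_size):
        return random.choices(self.data, weights=self.weights, k=batch_size)

buf = ForgetfulReplayBuffer()
for t in range(100):
    buf.add(("state", "action", 0.0, "next_state"))
    buf.step()
batch = buf.sample(8)
```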
|
The COVID-19 pandemic has generated what public health officials have called an
infodemic of misinformation. As social distancing and stay-at-home orders came
into effect, many turned to social media for socializing. This increase in
social media usage has made it a prime vehicle for the spreading of
misinformation. This paper presents a mechanism to detect COVID-19
health-related misinformation in social media following an interdisciplinary
approach. Leveraging social psychology as a foundation and existing
misinformation frameworks, we defined misinformation themes and associated
keywords, which were incorporated into the misinformation detection mechanism using applied
machine learning techniques. Next, using the Twitter dataset, we explored the
performance of the proposed methodology using multiple state-of-the-art machine
learning classifiers. Our method shows promising results, reaching up to 78%
accuracy in classifying health-related misinformation versus true information
using unigram-based NLP features extracted from tweets and a Decision Tree
classifier. We also provide suggestions on alternatives for countering
misinformation and ethical considerations for the study.
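A minimal scikit-learn sketch of the unigram-plus-Decision-Tree pipeline described above might look as follows; the example tweets and labels are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

tweets = [
    "Drinking hot water cures the virus",            # misinformation (placeholder)
    "Vaccines are being tested in clinical trials",  # true information (placeholder)
]
labels = [1, 0]  # 1 = misinformation, 0 = true information

pipeline = make_pipeline(
    CountVectorizer(ngram_range=(1, 1)),   # unigram bag-of-words features
    DecisionTreeClassifier(random_state=0),
)
pipeline.fit(tweets, labels)
print(pipeline.predict(["Hot water kills the virus"]))
```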
|
Accurate tracking is still a challenging task due to appearance variations,
pose and view changes, and geometric deformations of targets in videos. Recent
anchor-free trackers provide an efficient regression mechanism but fail to
produce precise bounding box estimation. To address these issues, this paper
repurposes a Transformer-like regression branch, termed Target Transformed
Regression (TREG), for accurate anchor-free tracking. The core of our TREG is
to model pairwise relations between elements in the target template and the
search region, and to use the resulting target-enhanced visual representation for accurate
bounding box regression. This target contextualized representation is able to
enhance the target relevant information to help precisely locate the box
boundaries, and deal with the object deformation to some extent due to its
local and dense matching mechanism. In addition, we devise a simple online
template update mechanism to select reliable templates, increasing the
robustness to appearance variations and geometric deformations of the target over
time. Experimental results on visual tracking benchmarks including VOT2018,
VOT2019, OTB100, GOT10k, NFS, UAV123, LaSOT and TrackingNet demonstrate that
TREG obtains the state-of-the-art performance, achieving a success rate of
0.640 on LaSOT, while running at around 30 FPS. The code and models will be
made available at https://github.com/MCG-NJU/TREG.
|
Batch Normalization (BN) is one of the key components for accelerating
network training, and has been widely adopted in the medical image analysis
field. However, BN only calculates the global statistics at the batch level,
and applies the same affine transformation uniformly across all spatial
coordinates, which would suppress the image contrast of different semantic
structures. In this paper, we propose to incorporate the semantic class
information into normalization layers, so that the activations corresponding to
different regions (i.e., classes) can be modulated differently. We thus develop
a novel DualNorm-UNet that concurrently incorporates both global image-level
statistics and local region-wise statistics for network normalization.
Specifically, the local statistics are integrated by adaptively modulating the
activations along different class regions via the learned semantic masks in the
normalization layer. Compared with existing methods, our approach exploits
semantic knowledge at normalization and yields more discriminative features for
robust segmentation results. More importantly, our network demonstrates
superior abilities in capturing domain-invariant information from multiple
domains (institutions) of medical data. Extensive experiments show that our
proposed DualNorm-UNet consistently improves the performance on various
segmentation tasks, even in the face of more complex and variable data
distributions. Code is available at https://github.com/lambert-x/DualNorm-Unet.
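The idea of region-wise statistics can be pictured with the following illustrative PyTorch function, which normalises activations separately within each (soft) semantic mask; it is only a sketch of the concept, not the exact DualNorm-UNet layer.

```python
import torch

def regionwise_normalize(x, masks, eps=1e-5):
    """Normalise a feature map separately within each semantic region.

    x:     (N, C, H, W) feature map
    masks: (N, K, H, W) soft class masks that sum to 1 over the K classes
    """
    out = torch.zeros_like(x)
    for k in range(masks.shape[1]):
        m = masks[:, k:k + 1]                                     # (N, 1, H, W)
        area = m.sum(dim=(2, 3), keepdim=True).clamp_min(eps)
        mean = (x * m).sum(dim=(2, 3), keepdim=True) / area
        var = ((x - mean) ** 2 * m).sum(dim=(2, 3), keepdim=True) / area
        out = out + m * (x - mean) / torch.sqrt(var + eps)        # per-region whitening
    return out

x = torch.randn(2, 16, 32, 32)
masks = torch.softmax(torch.randn(2, 4, 32, 32), dim=1)           # 4 semantic classes
y = regionwise_normalize(x, masks)
```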
|
With increasing usage of clickbaits in Indonesian Online News, newsworthy
articles sometimes get buried among clickbaity news. A reliable and lightweight
tool is needed to detect such clickbaits on the go. Leveraging the
state-of-the-art natural language processing model BERT, a RESTful-API-based
application is developed. This study offloads model training to a cloud server,
so the client-side application only needs to send a request to the API and the
cloud server handles the rest. This study presents the design and development
of a web-based application to detect clickbait in Indonesian, using IndoBERT as
the language model. The application's usage is discussed; it is available for
public use and achieves a mean ROC-AUC of 89%.
|
Context: Technical Debt requirements refer to the distance between the
ideal value of the specification and the system's actual implementation, which
is a consequence of strategic decisions for immediate gains or of unintended
changes in context. To ensure the evolution of the software, this debt must be
kept managed. Identification and measurement are the first two stages of the
management process; however, they are little explored in academic research in
requirements engineering. Objective: We aimed at investigating which evidence
helps to strengthen the process of TD requirements management, including
identification and measurement. Method: We conducted a Systematic Literature
Review through manual and automatic searches considering 7499 studies from 2010
to 2020, and including 61 primary studies. Results: We identified some causes
related to Technical Debt requirements, existing strategies to help in the
identification and measurement, and metrics to support the measurement stage.
Conclusion: Studies on TD requirements are still preliminary, especially on
management tools. Yet, not enough attention is given to interpersonal issues,
which are difficulties encountered when performing such activities, and
therefore also require research. Finally, the provision of metrics to help
measure TD is part of this work's contribution, providing insights into the
application in the requirements context.
|
This paper introduces the first release of Pytearcat, a Python package
developed to compute tensor algebra operations in the context of theoretical
physics, for instance, in general relativity. Given that working with tensors
can become a complex task, people often rely on computational tools to perform
tensor calculations. We aim to build a tensor calculator based on Python, which
benefits from being free and easy to use. Pytearcat syntax resembles the usual
physics notation for tensor calculus, such as the Einstein notation for index
contraction. This version allows the user to perform many tensor operations,
including derivatives and series expansions, along with routines to obtain the
typical General Relativity tensors. Particular attention was paid to execution
times, leading us to incorporate an alternative core for the symbolic
calculations that achieves much faster execution. The syntax and the
versatility of Pytearcat are the most important features of this package; the
latter makes it possible to extend Pytearcat to other areas of theoretical
physics.
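Since the abstract does not spell out Pytearcat's own syntax, the sketch below uses plain SymPy to illustrate the kind of General Relativity computation such a package automates, namely Christoffel symbols of a toy diagonal metric.

```python
import sympy as sp

t, r = sp.symbols('t r', positive=True)
coords = [t, r]
f = sp.Function('f')(r)
g = sp.diag(-f, 1 / f)          # toy 2D metric ds^2 = -f(r) dt^2 + dr^2 / f(r)
g_inv = g.inv()

def christoffel(a, b, c):
    """Gamma^a_{bc} = 1/2 g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})."""
    return sp.simplify(sum(
        g_inv[a, d] * (sp.diff(g[d, c], coords[b])
                       + sp.diff(g[d, b], coords[c])
                       - sp.diff(g[b, c], coords[d])) / 2
        for d in range(len(coords))))

print(christoffel(0, 0, 1))     # Gamma^t_{tr} = f'(r) / (2 f(r))
```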
|
The black hole information paradox has been with us for some time. We outline
the nature of the paradox. We then propose a resolution based on an examination
of the properties of quantum gravity under circumstances that give rise to a
classical singularity. We show that the gravitational wavefunction vanishes as
one gets close to the classical singularity. This results in a future boundary
condition inside the black hole that allows for quantum information to be
recovered in the evaporation process.
|
In this paper we consider a class of boundary value problems for third order
nonlinear functional differential equation. By the reduction of the problem to
operator equation we establish the existence and uniqueness of solution and
construct a numerical method for solving it. We prove that the method is of
second order accuracy and obtain an estimate for total error. Some examples
demonstrate the validity of the obtained theoretical results and the efficiency
of the numerical method. The approach used for the third order nonlinear
functional differential equation can be applied to functional differential
equations of any orders.
|
Due to their long-standing reputation as excellent off-the-shelf predictors,
random forests remain a go-to model of choice for applied statisticians and
data scientists. Despite their widespread use, however, until recently, little
was known about their inner workings and about which aspects
of the procedure were driving their success. Very recently, two competing
hypotheses have emerged -- one based on interpolation and the other based on
regularization. This work argues in favor of the latter by utilizing the
regularization framework to reexamine the decades-old question of whether
individual trees in an ensemble ought to be pruned. Despite the fact that
default constructions of random forests use near full depth trees in most
popular software packages, here we provide strong evidence that tree depth
should be seen as a natural form of regularization across the entire procedure.
In particular, our work suggests that random forests with shallow trees are
advantageous when the signal-to-noise ratio in the data is low. In building up
this argument, we also critique the newly popular notion of "double descent" in
random forests by drawing parallels to U-statistics and arguing that the
noticeable jumps in random forest accuracy are the result of simple averaging
rather than interpolation.
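The role of tree depth as regularisation can be probed with a short scikit-learn sketch on a low signal-to-noise regression problem; the data, depths, and forest size below are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = X[:, 0] + 3.0 * rng.normal(size=300)      # weak signal, strong noise

for depth in [None, 3]:                       # None = near full-depth default
    rf = RandomForestRegressor(n_estimators=200, max_depth=depth, random_state=0)
    score = cross_val_score(rf, X, y, cv=5, scoring="r2").mean()
    print(f"max_depth={depth}: mean CV R^2 = {score:.3f}")
```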
|
The nitrogen-vacancy (NV) centre in diamond has emerged as a candidate to
non-invasively hyperpolarise nuclear spins in molecular systems to improve the
sensitivity of nuclear magnetic resonance (NMR) experiments. Several promising
proof-of-principle experiments have demonstrated small-scale polarisation
transfer from single NVs to hydrogen spins outside the diamond. However, the
scaling up of these results to the use of a dense NV ensemble, which is a
necessary prerequisite for achieving realistic NMR sensitivity enhancement, has
not yet been demonstrated. In this work, we present evidence for a polarising
interaction between a shallow NV ensemble and external nuclear targets over a
micrometre scale, and characterise the challenges in achieving useful
polarisation enhancement. In the most favourable example of the interaction
with hydrogen in a solid state target, a maximum polarisation transfer rate of
$\approx 7500$ spins per second per NV is measured, averaged over an area
containing order $10^6$ NVs. Reduced levels of polarisation efficiency are
found for liquid state targets, where molecular diffusion limits the transfer.
Through analysis via a theoretical model, we find that our results suggest
implementation of this technique for NMR sensitivity enhancement is feasible
following realistic diamond material improvements.
|
Computing dynamical distributions in quantum many-body systems represents one
of the paradigmatic open problems in theoretical condensed matter physics.
Despite the existence of different techniques both in real-time and frequency
space, computational limitations often dramatically constrain the physical
regimes in which quantum many-body dynamics can be efficiently solved. Here we
show that the combination of machine learning methods and complementary
many-body tensor network techniques substantially decreases the computational
cost of quantum many-body dynamics. We demonstrate that combining kernel
polynomial techniques and real-time evolution, together with deep neural
networks, allows us to compute dynamical quantities faithfully. Focusing on
many-body dynamical distributions, we show that this hybrid neural-network
many-body algorithm, trained with single-particle data only, can efficiently
extrapolate dynamics for many-body systems without prior knowledge.
Importantly, this algorithm is shown to be substantially resilient to numerical
noise, a feature of major importance when using this algorithm together with
noisy many-body methods. Ultimately, our results provide a starting point
towards neural-network powered algorithms to support a variety of quantum
many-body dynamical methods that could potentially solve computationally
expensive many-body systems in a more efficient manner.
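One classical ingredient named above, the kernel polynomial method, can be sketched as follows: a NumPy estimate of the density of states of a small tight-binding chain from stochastically sampled Chebyshev moments with Jackson damping. This only illustrates that ingredient, not the neural-network extrapolation itself, and all sizes are placeholders.

```python
import numpy as np

L, n_moments, n_random = 32, 64, 20
H = (np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)) / 2.5  # spectrum rescaled into (-1, 1)

mu = np.zeros(n_moments)
rng = np.random.default_rng(0)
for _ in range(n_random):                       # stochastic trace over random +/-1 vectors
    v0 = rng.choice([-1.0, 1.0], size=L)
    v1 = H @ v0
    mu[0] += v0 @ v0
    mu[1] += v0 @ v1
    vm, vn = v0, v1
    for m in range(2, n_moments):
        vnext = 2 * H @ vn - vm                 # Chebyshev recurrence T_m(H) v0
        mu[m] += v0 @ vnext
        vm, vn = vn, vnext
mu /= n_random * L

ms = np.arange(n_moments)                       # Jackson kernel damping coefficients
jackson = ((n_moments - ms + 1) * np.cos(np.pi * ms / (n_moments + 1))
           + np.sin(np.pi * ms / (n_moments + 1)) / np.tan(np.pi / (n_moments + 1))) / (n_moments + 1)

x = np.linspace(-0.99, 0.99, 200)               # reconstruct the density of states
dos = np.array([(jackson[0] * mu[0]
                 + 2 * (jackson[1:] * mu[1:] * np.cos(ms[1:] * np.arccos(xk))).sum())
                / (np.pi * np.sqrt(1 - xk ** 2)) for xk in x])
print(dos[:5])
```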
|
An Adversarial Swarm model consists of two swarms that are interacting with
each other in a competing manner. In the present study, an agent-based
Adversarial Swarm model is developed comprising two competing swarms, the
Attackers and the Defenders. The Defenders' aim is to protect a point of
interest in unbounded 2D Euclidean space referred to as the Goal. In contrast,
the Attackers' main task is to intercept the Goal while continually trying to
evade the Defenders, which are attracted to the Attackers once the latter enter
a certain vicinity of the Goal termed the sphere of influence, essentially a
circular perimeter. The interaction of the two swarms was studied from a
dynamical systems perspective by changing the number of agents making up each
respective swarm. The simulations were thoroughly investigated for the presence
of chaos by evaluating the Largest Lyapunov Exponent (LLE) using phase-space
reconstruction. The source of chaos in the system was observed to be induced by
the passively constrained motion of the Defender agents around the Goal.
Multiple local equilibrium points existed for the Defenders in all the cases,
and in some instances for the Attackers, indicating complex dynamics. LLEs for
all trials of the Monte Carlo analysis revealed the presence of both chaotic
and non-chaotic solutions in each case, with the majority of the Defenders
exhibiting chaotic behavior. Overall, the swarms exist at the 'edge of chaos',
thus revealing complex dynamical behavior. The final system state (i.e., the
outcome of the interaction between the swarms in a particular simulation) is
studied for all the cases, indicating the presence of binary final states in
some of them. Finally, to evaluate the complexity of
individual swarms, Multiscale Entropy is employed, which revealed a greater
degree of randomness for the Defenders when compared to Attackers.
|
Starting from the moment sequences of classical orthogonal polynomials we
derive the orthogonality purely algebraically. We consider also the moments of
($q=1$) classical orthogonal polynomials, and study those cases in which the
exponential generating function has a nice form. In the opposite direction, we
show that the generalized Dumont-Foata polynomials with six parameters are the
moments of rescaled continuous dual Hahn polynomials.
|
Shift invariance is a critical property of CNNs that improves performance on
classification. However, we show that invariance to circular shifts can also
lead to greater sensitivity to adversarial attacks. We first characterize the
margin between classes when a shift-invariant linear classifier is used. We
show that the margin can only depend on the DC component of the signals. Then,
using results about infinitely wide networks, we show that in some simple
cases, fully connected and shift-invariant neural networks produce linear
decision boundaries. Using this, we prove that shift invariance in neural
networks produces adversarial examples for the simple case of two classes, each
consisting of a single image with a black or white dot on a gray background.
This is more than a curiosity; we show empirically that with real datasets and
realistic architectures, shift invariance reduces adversarial robustness.
Finally, we describe initial experiments using synthetic data to probe the
source of this connection.
|
Equations are described that couple a completely symmetric conformal Killing
or Codazzi tensor to the Einstein equations for a metric, in a manner analogous
to that used to obtain the Einstein-Maxwell equations by coupling a two-form to
the metric. Examples of solutions are constructed from mean curvature zero
immersions, affine spheres, isoparametric polynomials, and regular graphs. Some
constraints on the scalar curvature of the metric occurring in a solution are
deduced. Along the way, Weitzenb\"ock formulas, vanishing theorems, and related
results for conformal Killing and divergence-free Codazzi tensors are reviewed.
|
We initiate the study of the heterogeneous facility location problem with
limited resources. We mainly focus on the fundamental case where a set of
agents are positioned in the line segment [0,1] and have approval preferences
over two available facilities. A mechanism takes as input the positions and the
preferences of the agents, and chooses to locate a single facility based on
this information. We study mechanisms that aim to maximize the social welfare
(the total utility the agents derive from facilities they approve), under the
constraint of incentivizing the agents to truthfully report their positions and
preferences. We consider three different settings depending on the level of
agent-related information that is public or private. For each setting, we
design deterministic and randomized strategyproof mechanisms that achieve a
good approximation of the optimal social welfare, and complement these with
nearly-tight impossibility results.
|
The main challenge in visible light communications (VLC) is the low
modulation bandwidth of light-emitting diodes (LEDs). This forms a barrier
towards achieving high data rates. Moreover, the implementation of high order
modulation schemes is restricted by the requirements of intensity modulation
(IM) and direct detection (DD), which demand the use of real unipolar signals.
In this paper, we propose a novel amplitude, phase and quadrant (APQ)
modulation scheme that fits into the IM/DD restrictions in VLC systems. The
proposed scheme decomposes the complex and bipolar symbols of high order
modulations into three different symbols that carry the amplitude, phase and
quadrant information of the intended symbol. The constructed symbols are
assigned different power levels and are transmitted simultaneously, i.e.
exploiting the entire bandwidth and time resources. The receiving terminal
performs successive interference cancellation to extract and decode the three
different symbols, and then uses them to decide the intended complex bipolar
symbol. We evaluate the performance of the proposed APQ scheme in terms of
symbol-error-rate and achievable system throughput for different setup
scenarios. The obtained results are compared with generalized spatial shift
keying (GSSK). The presented results show that APQ offers a higher reliability
compared to GSSK across the simulation area, while providing lower hardware
complexity.
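The decomposition idea can be illustrated schematically: the sketch below splits a complex, bipolar symbol into an amplitude, a phase folded into its quadrant, and a quadrant index, and then rebuilds it. The exact mapping, power allocation, and receiver-side successive interference cancellation used in the paper are not reproduced here.

```python
import numpy as np

def apq_decompose(symbol):
    """Split a complex symbol into three non-negative real quantities:
    amplitude, phase within its quadrant, and a quadrant index."""
    amplitude = np.abs(symbol)
    quadrant = int(2 * (symbol.imag < 0) + (symbol.real < 0))   # index in {0, 1, 2, 3}
    phase_in_quadrant = np.angle(symbol) % (np.pi / 2)          # fold the phase into [0, pi/2)
    return amplitude, phase_in_quadrant, quadrant

def apq_recompose(amplitude, phase_in_quadrant, quadrant):
    """Rebuild the complex, bipolar symbol from its three components."""
    offset = {0: 0.0, 1: np.pi / 2, 2: -np.pi / 2, 3: np.pi}[quadrant]
    return amplitude * np.exp(1j * (offset + phase_in_quadrant))

s = (-0.7 + 0.7j) / np.sqrt(2)                   # a bipolar constellation point (placeholder)
a, p, q = apq_decompose(s)
print(np.allclose(apq_recompose(a, p, q), s))    # True
```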
|
For artificially intelligent learning systems to have widespread
applicability in real-world settings, it is important that they be able to
operate decentrally. Unfortunately, decentralized control is difficult --
computing even an epsilon-optimal joint policy is an NEXP-complete problem.
Nevertheless, a recently rediscovered insight -- that a team of agents can
coordinate via common knowledge -- has given rise to algorithms capable of
finding optimal joint policies in small common-payoff games. The Bayesian
action decoder (BAD) leverages this insight and deep reinforcement learning to
scale to games as large as two-player Hanabi. However, the approximations it
uses to do so prevent it from discovering optimal joint policies even in games
small enough to brute force optimal solutions. This work proposes CAPI, a novel
algorithm which, like BAD, combines common knowledge with deep reinforcement
learning. However, unlike BAD, CAPI prioritizes the propensity to discover
optimal joint policies over scalability. While this choice precludes CAPI from
scaling to games as large as Hanabi, empirical results demonstrate that, on the
games to which CAPI does scale, it is capable of discovering optimal joint
policies even when other modern multi-agent reinforcement learning algorithms
are unable to do so. Code is available at https://github.com/ssokota/capi .
|
Direct correlation functions (DCFs), linked to the second functional
derivative of the free energy with respect to the one-particle density, play a
fundamental role in a statistical mechanics description of matter. This holds
in particular for the ordered phases: DCFs contain information about the local
structure including defects and encode the thermodynamic properties of
crystalline solids; they open a route to the elastic constants beyond low
temperature expansions. Via a numerical tour de force we have explicitly
calculated for the first time the DCF of a solid: based on the fundamental
measure concept we provide results for the DCF of a hard sphere crystal. We
demonstrate that this function differs at coexistence significantly from its
liquid counterpart - both in shape as well as in its order of magnitude -
because it is dominated by vacancies. We provide evidence that the traditional
use of liquid DCFs in functional Taylor expansions of the free energy is
conceptually wrong and show that the emergent elastic constants are in good
agreement with simulation-based results.
|
This paper is concerned with the reconstruction issue of some typical inverse
problems and consists of three parts. First, a framework of the enclosure method
for an inverse source problem governed by the Helmholtz equation at a fixed
wave number in three dimensions is introduced. It is based on the nonvanishing
of the coefficient of the leading profile of an oscillatory integral over a
domain having a conical singularity. Second, an explicit formula of the
coefficient for a domain having a circular cone singularity and its implication
under the framework are given. Third, an application under the framework to an
inverse obstacle problem governed by an inhomogeneous Helmholtz equation at a
fixed wave number in three dimensions is given.
|
We consider a fractal refinement of the Carleson problem for the
Schr\"odinger equation, that is to identify the minimal regularity needed by
the solutions to converge pointwise to their initial data almost everywhere
with respect to the $\alpha$-Hausdorff measure ($\alpha$-a.e.). We extend to
the fractal setting ($\alpha < n$) a recent counterexample of Bourgain
\cite{Bourgain2016}, which is sharp in the Lebesgue measure setting ($\alpha =
n$). In doing so we recover the necessary condition from \cite{zbMATH07036806}
for pointwise convergence~$\alpha$-a.e. and we extend it to the range
$n/2<\alpha \leq (3n+1)/4$.
|
Purpose: We implemented the Machine Learning (ML) aided k-t SENSE
reconstruction to enable high resolution quantitative real-time phase contrast
MR (PCMR). Methods: A residual U-net and our U-net M were used to generate the
high resolution x-f space estimate for k-t SENSE regularisation prior. The
networks were judged on their ability to generalise to real undersampled data.
The in-vivo validation was done on 20 real-time 18x prospectively undersampled
GASperturbed PCMR data. The ML aided k-t SENSE reconstruction results were
compared against the free-breathing Cartesian retrospectively gated sequence
and the compressed sensing (CS) reconstruction of the same data. Results: In
general, the ML aided k-t SENSE generated flow curves that were visually
sharper than those produced using CS. In two exceptional cases, U-net M
predictions exhibited blurring which propagated to the extracted velocity
curves. However, there were no statistical differences in the measured peak
velocities and stroke volumes between the tested methods. The ML aided k-t
SENSE was estimated to be ~3.6x faster in processing than CS. Conclusion: The
ML aided k-t SENSE reconstruction enables artefact suppression on a par with CS
with no significant differences in quantitative measures. The timing results
suggest the on-line implementation could deliver a substantial increase in
clinical throughput.
|
Expected to operate in the imminent future, air taxi service (ATS) is an
aerial on-demand transport for a single passenger or a small group of riders,
which seeks to transform the method of everyday commute. This uncharted
territory in the emerging transportation world is anticipated to enable
consumers to bypass traffic congestion in urban road networks. By adopting an
electric vertical takeoff and landing (eVTOL) concept, air taxis could be
operational from skyports retrofitted on building rooftops, thus gaining
advantage from an implementation standpoint. Motivated by the potential impact
of ATS, this study provides a review of air taxi systems and associated
operations. We first discuss the current developments in the ATS (demand
prediction, air taxi network design, and vehicle configuration). Next, we
anticipate potential future challenges of ATS from an operations management
perspective, and review the existing literature that could be leveraged to
tackle these problems (ride-matching, pricing strategies, vehicle maintenance
scheduling, and pilot training and recruitment). Finally, we detail future
research opportunities in the air taxi domain.
|
Multi-modal learning, which focuses on utilizing various modalities to
improve the performance of a model, is widely used in video recognition. While
traditional multi-modal learning offers excellent recognition results, its
computational expense limits its impact for many real-world applications. In
this paper, we propose an adaptive multi-modal learning framework, called
AdaMML, that selects on-the-fly the optimal modalities for each segment
conditioned on the input for efficient video recognition. Specifically, given a
video segment, a multi-modal policy network is used to decide what modalities
should be used for processing by the recognition model, with the goal of
improving both accuracy and efficiency. We efficiently train the policy network
jointly with the recognition model using standard back-propagation. Extensive
experiments on four challenging diverse datasets demonstrate that our proposed
adaptive approach yields 35%-55% reduction in computation when compared to the
traditional baseline that simply uses all the modalities irrespective of the
input, while also achieving consistent improvements in accuracy over the
state-of-the-art methods.
|
We present the relation between the star formation rate surface density,
$\Sigma_{\rm SFR}$, and the hydrostatic mid-plane pressure, P$_{\rm h}$, for
4260 star-forming regions of kpc size located in 96 galaxies included in the
EDGE-CALIFA survey covering a wide range of stellar masses and morphologies. We
find that these two parameters are tightly correlated, exhibiting smaller
scatter and strong correlation in comparison to other star-forming scaling
relations. A power-law, with a slightly sub-linear index, is a good
representation of this relation. Locally, the residuals of this correlation
show a significant anti-correlation with both the stellar age and metallicity
whereas the total stellar mass may also play a secondary role in shaping the
$\Sigma_{\rm SFR}$ - P$_{\rm h}$ relation. For our sample of active
star-forming regions (i.e., regions with large values of H$\alpha$ equivalent
width), we find that the effective feedback momentum per unit stellar mass
($p_\ast/m_\ast$), measured from the P$_{\rm h}$/$\Sigma_{\rm SFR}$ ratio,
increases with P$_{\rm h}$. The median value of this ratio for all the sampled
regions is larger than the momentum expected from supernova explosions alone.
The morphology of the galaxies, including bars, does not seem to have a
significant impact on the $\Sigma_{\rm SFR}$ - P$_{\rm h}$ relation. Our
analysis suggests that self-regulation of the $\Sigma_{\rm SFR}$ at kpc scales
comes mainly from momentum injection into the interstellar medium from
supernova explosions. However, other mechanisms in disk galaxies may also play
a significant role in shaping the $\Sigma_{\rm SFR}$ at local scales. Our
results also suggest that
P$_{\rm h}$ can be considered as the main parameter that modulates star
formation at kpc scales, rather than individual components of the baryonic
mass.
|
A graph is Helly if every family of pairwise intersecting balls has a
nonempty common intersection. The class of Helly graphs is the discrete
analogue of the class of hyperconvex metric spaces. It is also known that every
graph isometrically embeds into a Helly graph, making the latter an important
class of graphs in Metric Graph Theory. We study diameter, radius and all
eccentricity computations within the Helly graphs. Under plausible complexity
assumptions, neither the diameter nor the radius can be computed in truly
subquadratic time on general graphs. In contrast to these negative results, it
was recently shown that the radius and the diameter of an $n$-vertex $m$-edge
Helly graph $G$ can be computed with high probability in $\tilde{\mathcal
O}(m\sqrt{n})$ time (i.e., subquadratic in $n+m$). In this paper, we improve
that result by presenting a deterministic ${\mathcal O}(m\sqrt{n})$ time
algorithm which computes not only the radius and the diameter but also all
vertex eccentricities in a Helly graph. Furthermore, we give a parameterized
linear-time algorithm for this problem on Helly graphs, with the parameter
being the Gromov hyperbolicity $\delta$. More specifically, we show that the
radius and a central vertex of an $m$-edge $\delta$-hyperbolic Helly graph $G$
can be computed in $\mathcal O(\delta m)$ time and that all vertex
eccentricities in $G$ can be computed in $\mathcal O(\delta^2 m)$ time. To show
this more general result, we heavily use our new structural properties obtained
for Helly graphs.
|
We present a graph-convolution-reinforced transformer, named Mesh Graphormer,
for 3D human pose and mesh reconstruction from a single image. Recently both
transformers and graph convolutional neural networks (GCNNs) have shown
promising progress in human mesh reconstruction. Transformer-based approaches
are effective in modeling non-local interactions among 3D mesh vertices and
body joints, whereas GCNNs are good at exploiting neighborhood vertex
interactions based on a pre-specified mesh topology. In this paper, we study
how to combine graph convolutions and self-attentions in a transformer to model
both local and global interactions. Experimental results show that our proposed
method, Mesh Graphormer, significantly outperforms the previous
state-of-the-art methods on multiple benchmarks, including Human3.6M, 3DPW, and
FreiHAND datasets. Code and pre-trained models are available at
https://github.com/microsoft/MeshGraphormer
|
A staged tree model is a discrete statistical model encoding relationships
between events. These models are realised by directed trees with coloured
vertices. In algebro-geometric terms, the model consists of points inside a
toric variety. For certain trees, called balanced, the model is in fact the
intersection of the toric variety and the probability simplex. This gives the
model a straightforward description, and has computational advantages. In this
paper we show that the class of staged tree models with a toric structure
extends far outside of the balanced case, if we allow a change of coordinates.
It is an open problem whether all staged tree models have toric structure.
|
In this work, we first investigate how to reproduce, and how well one can
reproduce, the Woods-Saxon density distribution of initial nuclei in the
framework of the improved quantum molecular dynamics model. Then, we propose a
new treatment for the initialization of nuclei which is correlated with the
nucleonic mean-field potential by using the same potential energy density
functional. In the mean field potential, the three-body force term is
accurately calculated. Based on the new version of the model, the influences of
precise calculations of the three-body force term, the slope of symmetry
energy, the neutron-proton effective mass splitting, and the width of the wave
packet on heavy ion collision observables, such as the neutron to proton yield
ratios for emitted free nucleons [$R(n/p)$] and for coalescence invariant
nucleons [$R_{ci}(n/p)$] for $^{124}$Sn+$^{112}$Sn at the beam energy of 200
MeV per nucleon, are discussed. Our calculations show that the spectra of
neutron to proton yield ratios [$R(n/p)$] can be used to probe the slope of
symmetry energy ($L$) and the neutron-proton effective mass splitting. In
detail, the $R(n/p)$ in the low kinetic energy region can be used to probe the
slope of symmetry energy ($L$). With a given $L$, the dependence of $R(n/p)$ on
kinetic energy ($E_k$) can be used to probe the effective mass splitting. In
the case where the neutron-proton effective mass splitting is fixed, $R(n/p)$
at high kinetic energy can also be used to learn the symmetry energy at
suprasaturation density.
|
We study a natural generalization of that given in [arXiv:2005.13198
[hep-th]] to heterotic string. Namely, starting from the generic Gepner models
for Calabi-Yau 3-folds, we construct the non-SUSY heterotic string vacua with
the vanishing cosmological constant at the one loop. We especially focus on the
asymmetric orbifolding based on some discrete subgroup of the chiral
$U(1)$-action which acts on both the Gepner model and the $SO(32)$ or
$E_8\times E_8$ sector. We present a classification of the relevant orbifold
models leading to the string vacua with the properties mentioned above. In some
cases, the desired vacua can be constructed in the manner quite similar to
those given in [arXiv:2005.13198 [hep-th]] for the type II string, in which the
orbifold groups contain two generators with the discrete torsions. On the other
hand, we also have simpler models that are just realized as the asymmetric
orbifolds of cyclic groups with only one generator.
|
We consider the Hamiltonian renormalisation group flow of discretised
one-dimensional physical theories. In particular, we investigate the influence
the choice of different embedding maps has on the RG flow and the resulting
continuum limit, and show in which sense they are, and in which sense they are
not equivalent as physical theories. We furthermore elucidate the interplay of
the RG flow and the algebras that the operators satisfy, both at the discrete
level and in the continuum. Further, we propose preferred renormalisation
prescriptions for operator algebras that guarantee arriving at preferred
algebraic relations in the continuum, if suitable extension properties are
assumed. Finally, we introduce a weaker form of distributional equivalence, and
show how unitarily inequivalent continuum limits, which arise due to a choice
of different embedding maps, can still be weakly equivalent in that sense.
|
Software Quality Assurance (SQA) planning aims to define proactive plans,
such as defining maximum file size, to prevent the occurrence of software
defects in future releases. To aid this, defect prediction models have been
proposed to generate insights as the most important factors that are associated
with software quality. Such insights that are derived from traditional defect
models are far from actionable, i.e., practitioners still do not know what they
should do or avoid to decrease the risk of having defects, and what is the risk
threshold for each metric. A lack of actionable guidance and risk threshold can
lead to inefficient and ineffective SQA planning processes. In this paper, we
investigate the practitioners' perceptions of current SQA planning activities,
current challenges of such SQA planning activities, and propose four types of
guidance to support SQA planning. We then propose and evaluate our AI-Driven
SQAPlanner approach, a novel approach for generating four types of guidance and
their associated risk thresholds in the form of rule-based explanations for the
predictions of defect prediction models. Finally, we develop and evaluate an
information visualization for our SQAPlanner approach. Through the use of
qualitative survey and empirical evaluation, our results lead us to conclude
that SQAPlanner is needed, effective, stable, and practically applicable. We
also find that 80% of our survey respondents perceived that our visualization
is more actionable. Thus, our SQAPlanner paves a way for novel research in
actionable software analytics, i.e., generating actionable guidance on what
practitioners should and should not do to decrease the risk of having defects,
in order to support SQA planning.
|
The threat from ransomware continues to grow both in the number of affected
victims as well as the cost incurred by the people and organisations impacted
in a successful attack. In the majority of cases, once a victim has been
attacked there remain only two courses of action open to them; either pay the
ransom or lose their data. One common behaviour shared between all crypto
ransomware strains is that at some point during their execution they will
attempt to encrypt the users' files. Previous research (Penrose et al., 2013;
Zhao et al., 2011) has highlighted the difficulty in differentiating between
compressed and encrypted files using Shannon entropy as both file types exhibit
similar values. One of the experiments described in this paper shows a unique
characteristic for the Shannon entropy of encrypted file header fragments. This
characteristic was used to differentiate between encrypted files and other high
entropy files such as archives. This discovery was leveraged in the development
of a file classification model that used the differential area between the
entropy curve of a file under analysis and one generated from random data. When
comparing the entropy plot values of a file under analysis against one
generated by a file containing purely random numbers, the greater the
correlation of the plots is, the higher the confidence that the file under
analysis contains encrypted data.
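The entropy-based ingredients can be sketched in a few lines of Python: the Shannon entropy of header fragments and a crude differential-area comparison against a curve computed from purely random bytes. Fragment sizes and the zip-like example file are placeholders.

```python
import math, os

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0..8)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def entropy_curve(blob: bytes, fragment=256, n_fragments=8):
    """Entropy of successive header fragments of a file."""
    return [shannon_entropy(blob[i:i + fragment])
            for i in range(0, min(len(blob), n_fragments * fragment), fragment)]

def differential_area(curve, reference):
    """Sum of absolute differences between a file's entropy curve and a curve
    from purely random bytes (a crude stand-in for the differential area)."""
    return sum(abs(a - b) for a, b in zip(curve, reference))

reference = entropy_curve(os.urandom(8 * 256))                 # random (encrypted-like) data
suspect = entropy_curve(b"PK\x03\x04" + os.urandom(2044))      # zip-like header + random body
print(differential_area(suspect, reference))                   # smaller area => more encrypted-like
```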
|
Learning node representation on dynamically-evolving, multi-relational graph
data has gained great research interest. However, most of the existing models
for temporal knowledge graph forecasting use Recurrent Neural Network (RNN)
with discrete depth to capture temporal information, while time is a continuous
variable. Inspired by Neural Ordinary Differential Equation (NODE), we extend
the idea of continuum-depth models to time-evolving multi-relational graph
data, and propose a novel Temporal Knowledge Graph Forecasting model with NODE.
Our model captures temporal information through NODE and structural information
through a Graph Neural Network (GNN). Thus, our graph ODE model achieves a
continuous model in time and efficiently learns node representation for future
prediction. We evaluate our model on six temporal knowledge graph datasets by
performing link forecasting. Experimental results show the superiority of our
model.
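A schematic continuous-depth sketch of this idea (not the paper's model) is shown below: node embeddings evolve under a tiny GNN-style derivative and are integrated with plain explicit Euler steps rather than an adaptive ODE solver.

```python
import torch
import torch.nn as nn

class GraphODEFunc(nn.Module):
    """dH/dt = f(H, A): a small GNN-style derivative over node embeddings."""

    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, H, A):
        return torch.tanh(A @ self.lin(H))        # neighbourhood aggregation + nonlinearity

def integrate(func, H0, A, t1=1.0, steps=20):
    H, dt = H0, t1 / steps
    for _ in range(steps):
        H = H + dt * func(H, A)                   # explicit Euler step
    return H

A = torch.eye(5) + torch.rand(5, 5).round()       # toy adjacency with self-loops, 5 nodes
A = A / A.sum(dim=1, keepdim=True)                # row-normalise
H1 = integrate(GraphODEFunc(8), torch.randn(5, 8), A)
print(H1.shape)                                   # torch.Size([5, 8])
```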
|
Existing skin attributes detection methods usually initialize from a
pre-trained ImageNet network and then fine-tune on the medical target task.
However, we argue that such approaches are suboptimal because medical datasets
are largely different from ImageNet and often contain limited training samples.
In this work, we propose Task Agnostic Transfer Learning (TATL), a novel
framework motivated by dermatologists' behaviors in the skincare context. TATL
learns an attribute-agnostic segmenter that detects lesion skin regions and
then transfers this knowledge to a set of attribute-specific classifiers to
detect each particular region's attributes. Since TATL's attribute-agnostic
segmenter only detects abnormal skin regions, it enjoys ample data from all
attributes, allows transferring knowledge among features, and compensates for
the lack of training data from rare attributes. We extensively evaluate TATL on
two popular skin attributes detection benchmarks and show that TATL outperforms
state-of-the-art methods while enjoying minimal model and computational
complexity. We also provide theoretical insights and explanations for why TATL
works well in practice.
|
Protein function may be modulated by an event occurring far away from the
functional site, a phenomenon termed allostery. While classically allostery
involves conformational changes, we recently observed that charge
redistribution within an antibody can also lead to an allosteric effect,
modulating the kinetics of binding to the target antigen. In the present work,
we study the association of a poly-histidine-tagged enzyme (phosphoglycerate
kinase, PGK) to surface-immobilized anti-His antibodies, finding a significant
Charge-Reorganization Allostery (CRA) effect. We further observe that the
negatively charged nucleotide substrates of PGK modulate CRA substantially,
even though they bind far away from the His-tag-antibody interaction interface.
In particular, binding of ATP reduces CRA by more than 50%. The results
indicate that CRA may be affected by charged substrates bound to a protein and
provide further insight into the role of charge redistribution in protein
function.
|
We study a methodology to tackle the NASA Langley Uncertainty Quantification
Challenge, a model calibration problem under both aleatory and epistemic
uncertainties. Our methodology is based on an integration of robust
optimization, more specifically a recent line of research known as
distributionally robust optimization, and importance sampling in Monte Carlo
simulation. The main computational machinery in this integrated methodology
amounts to solving sampled linear programs. We present theoretical statistical
guarantees of our approach via connections to nonparametric hypothesis testing,
and numerical performances including parameter calibration and downstream
decision and risk evaluation tasks.
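One ingredient named above, importance sampling, can be illustrated with a minimal NumPy estimate of a small tail probability using a shifted proposal; the numbers are placeholders for illustration only.

```python
import numpy as np

# Estimate the rare-event probability P[X > 4] for X ~ N(0, 1) by sampling from
# the shifted proposal N(4, 1) and reweighting by the density ratio p(x)/q(x).
rng = np.random.default_rng(0)
x = rng.normal(loc=4.0, scale=1.0, size=10_000)       # proposal samples
log_w = -0.5 * x**2 + 0.5 * (x - 4.0)**2              # log p(x) - log q(x) for the two normals
estimate = np.mean((x > 4.0) * np.exp(log_w))
print(estimate)                                        # close to the true tail probability ~3.17e-5
```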
|
Continual or lifelong learning has been a long-standing challenge in machine
learning to date, especially in natural language processing (NLP). Although
state-of-the-art language models such as BERT have ushered in a new era in this
field due to their outstanding performance in multitask learning scenarios,
they suffer from forgetting when being exposed to a continuous stream of data
with shifting data distributions. In this paper, we introduce DRILL, a novel
continual learning architecture for open-domain text classification. DRILL
leverages a biologically inspired self-organizing neural architecture to
selectively gate latent language representations from BERT in a
task-incremental manner. We demonstrate in our experiments that DRILL
outperforms current methods in a realistic scenario of imbalanced,
non-stationary data without prior knowledge about task boundaries. To the best
of our knowledge, DRILL is the first of its kind to use a self-organizing
neural architecture for open-domain lifelong learning in NLP.
|
Interacting many-body quantum systems show a rich array of physical phenomena
and dynamical properties, but are notoriously difficult to study: they are
challenging analytically and exponentially difficult to simulate on classical
computers. Small-scale quantum information processors hold the promise to
efficiently emulate these systems, but characterizing their dynamics is
experimentally challenging, requiring probes beyond simple correlation
functions and multi-body tomographic methods. Here, we demonstrate the
measurement of out-of-time-ordered correlators (OTOCs), one of the most
effective tools for studying quantum system evolution and processes like
quantum thermalization. We implement a 3x3 two-dimensional hard-core
Bose-Hubbard lattice with a superconducting circuit, study its
time-reversibility by performing a Loschmidt echo, and measure OTOCs that
enable us to observe the propagation of quantum information. A central
requirement for our experiments is the ability to coherently reverse time
evolution, which we achieve with a digital-analog simulation scheme. In the
presence of frequency disorder, we observe that localization can partially be
overcome with more particles present, a possible signature of many-body
localization in two dimensions.
|
Although recent inpainting approaches have demonstrated significant
improvements with deep neural networks, they still suffer from artifacts such
as blunt structures and abrupt colors when filling in the missing regions. To
address these issues, we propose an external-internal inpainting scheme with a
monochromic bottleneck that helps image inpainting models remove these
artifacts. In the external learning stage, we reconstruct missing structures
and details in the monochromic space to reduce the learning dimension. In the
internal learning stage, we propose a novel internal color propagation method
with progressive learning strategies for consistent color restoration.
Extensive experiments demonstrate that our proposed scheme helps image
inpainting models produce more structure-preserved and visually compelling
results.
|
We derive the equations for the odd and even parity perturbations of coupled
electromagnetic and gravitational fields of a black hole with an electric
charge within the context of general nonlinear electrodynamics. The Lagrangian
density is a generic function of the Lorentz invariant scalar quantities of the
electromagnetic fields. We include the Hodge dual of the electromagnetic field
tensor and the cosmological constant in our calculations. For each type of
parity, we reduce the system of Einstein field equations coupled to nonlinear
electrodynamics to two coupled Schr\"odinger-type wave equations, one for the
gravitational field and one for the electromagnetic field. The stability
conditions in the presence of the Hodge dual of the electromagnetic field are
derived.
|
Terrestrial animals must often negotiate heterogeneous, varying environments.
Accordingly, their locomotive strategies must adapt to a wide range of terrain,
as well as to a range of speeds in order to accomplish different behavioral
goals. Studies in \textit{Drosophila} have found that inter-leg coordination
patterns (ICPs) vary smoothly with walking speed, rather than switching between
distinct gaits as in vertebrates (e.g., horses transitioning between trotting
and galloping). Such a continuum of stepping patterns implies that separate
neural controllers are not necessary for each observed ICP. Furthermore, the
spectrum of \textit{Drosophila} stepping patterns includes all canonical
coordination patterns observed during forward walking in insects. This raises
the exciting possibility that the controller in \textit{Drosophila} is common
to all insects, and perhaps more generally to panarthropod walkers. Here, we
survey and collate data on leg kinematics and inter-leg coordination
relationships during forward walking in a range of arthropod species, as well
as data from a recent behavioral investigation into the tardigrade
\textit{Hypsibius exemplaris}. Using this comparative dataset, we point to
several functional and morphological features that are shared amongst
panarthropods. The goal of the framework presented in this review is to
emphasize the importance of comparative functional and morphological analyses
in understanding the origins and diversification of walking in Panarthropoda.
|
Most of the works on the dispersion of droplets and their COVID-19
(Coronavirus disease) implications address droplets' dynamics in quiescent
environments. As most droplets in a common situation are immersed in external
flows (such as ambient flows), we consider the effect of canonical flow
profiles, namely shear flow, Poiseuille flow, and unsteady shear flow, on the
transport of spherical droplets of radius ranging from 5 $\mu$m to 100 $\mu$m,
which are characteristic lengths in human talking, coughing or sneezing
processes. The dynamics we employ satisfies the Maxey-Riley (M-R) equation. An
order-of-magnitude estimate allows us to solve the M-R equation to leading
order analytically, and to higher order (accounting for the Boussinesq-Basset
memory term) numerically. Discarding evaporation, our results to leading order
indicate that the maximum travelled distance for small droplets ($5\mu m$
radius) under a shear/Poiseuille external flow with a maximum flow speed of
$1m/s$ may easily reach more than 250 meters, since those droplets remain in
the air for around 600 seconds. The maximum travelled distance was also
calculated to leading and higher orders, and it is observed that there is a
small difference between the leading and higher order results, and that it
depends on the strength of the flow. For example, this difference for droplets
of radius $5\mu m$ in a shear flow, and with a maximum wind speed of $5m/s$, is
seen to be around $2m$. In general, higher order terms are observed to slightly
enhance droplets' dispersion and their flying time.
|
This paper concerns the structural stability of smooth cylindrically
symmetric transonic flows in a concentric cylinder. Both cylindrical and
axi-symmetric perturbations are considered. The governing system here is of
mixed elliptic-hyperbolic and changes type and the suitable formulation of
boundary conditions at the boundaries is of great importance. First, we
establish the existence and uniqueness of smooth cylindrical transonic spiral
solutions with nonzero angular velocity and vorticity which are close to the
background transonic flow with small perturbations of the Bernoulli's function
and the entropy at the outer cylinder and the flow angles at both the inner and
outer cylinders independent of the symmetric axis, and it is shown that in this
case, the sonic points of the flow are nonexceptional and noncharacteristically
degenerate, and form a cylindrical surface. Second, we also prove the existence
and uniqueness of axi-symmetric smooth transonic rotational flows which are
adjacent to the background transonic flow, whose sonic points form an
axi-symmetric surface. The key elements in our analysis are to utilize the
deformation-curl decomposition for the steady Euler system introduced in
\cite{WengXin19} to deal with the hyperbolicity in subsonic regions and to find
an appropriate multiplier for the linearized second order mixed type equations
which are crucial to identify the suitable boundary conditions and to yield the
important basic energy estimates.
|
Consider a measure $\mu$ on $\mathbb{R}^n$ generating a natural exponential family
$F(\mu)$ with variance function $V_{F(\mu)}(m)$ and Laplace transform
$$\exp(\ell_{\mu}(s))=\int_{\mathbb{R}^n} \exp(-\langle s,x\rangle)\,\mu(dx).$$
A dual measure $\mu^*$ satisfies $-\ell'_{\mu^*}(-\ell'_{\mu}(s))=s$. Such a dual
measure does not always exist. One important property is
$\ell''_{\mu^*}(m)=(V_{F(\mu)}(m))^{-1}$, leading to the notion of duality among
exponential families (or rather among the extended notion of T exponential
families $T\hskip-2pt F$ obtained by considering all translations of a given
exponential family $F$).
|
The recently introduced polar codes constitute a breakthrough in coding
theory due to their capacity-achieving property. This goes hand in hand with
quasilinear construction, encoding, and successive cancellation list decoding
procedures based on the Plotkin construction. The decoding algorithm can be
applied with slight modifications to Reed-Muller or eBCH codes, which both
achieve the capacity of erasure channels, although the list size needed for
good performance grows too fast to make the decoding practical even for
moderate block lengths. The key ingredient for proving the capacity-achieving
property of Reed-Muller and eBCH codes is their group of symmetries. It can be
plugged into the concept of Plotkin decomposition to design various permutation
decoding algorithms. Although such techniques allow one to outperform
straightforward polar-like decoding, the complexity stays impractical. In this
paper, we show that although invariance under a large automorphism group is
valuable in a theoretical sense, it also ensures that the list size needed for
good performance grows exponentially. We further establish the bounds that
arise if we sacrifice some of the symmetries. Although the theoretical analysis
of the list decoding algorithm remains an open problem, our result provides an
insight into the factors that impact the decoding complexity.
|
Transfer learning eases the burden of training a well-performing model from
scratch, especially when training data is scarce and computation power is
limited. In deep learning, a typical strategy for transfer learning is to
freeze the early layers of a pre-trained model and fine-tune the rest of its
layers on the target domain. Previous work focuses on the accuracy of the
transferred model but neglects the transfer of adversarial robustness. In this
work, we first show that transfer learning improves the accuracy on the target
domain but degrades the inherited robustness of the target model. To address
such a problem, we propose a novel cooperative adversarially-robust transfer
learning (CARTL) by pre-training the model via feature distance minimization
and fine-tuning the pre-trained model with non-expansive fine-tuning for target
domain tasks. Empirical results show that CARTL improves the inherited
robustness by about 28% at most compared with the baseline with the same degree
of accuracy. Furthermore, we study the relationship between the batch
normalization (BN) layers and the robustness in the context of transfer
learning, and we reveal that freezing BN layers can further boost the
robustness transfer.
|
The outbreak of novel coronavirus pneumonia (COVID-19) has caused mortality
and morbidity worldwide. Oropharyngeal-swab (OP-swab) sampling is widely used
for the diagnosis of COVID-19 worldwide. To protect clinical staff from
exposure to the virus, we developed a 9-degree-of-freedom (DOF)
rigid-flexible coupling (RFC) robot to assist the COVID-19 OP-swab sampling.
This robot is composed of a visual system, UR5 robot arm, micro-pneumatic
actuator and force-sensing system. The robot is expected to reduce risk and
free up the clinical staff from the long-term repetitive sampling work.
Compared with a rigid sampling robot, the developed force-sensing RFC robot can
facilitate OP-swab sampling procedures in a safer and softer way. In addition,
a varying-parameter zeroing neural network-based optimization method is also
proposed for motion planning of the 9-DOF redundant manipulator. The developed
robot system is validated by OP-swab sampling on both oral cavity phantoms and
volunteers.
|
Deep learning has been broadly applied to imaging in scattering applications.
A common framework is to train a "descattering" neural network for image
recovery by removing scattering artifacts. To achieve the best results on a
broad spectrum of scattering conditions, individual "expert" networks have to
be trained for each condition. However, the performance of the expert sharply
degrades when the scattering level at the testing time differs from the
training. An alternative approach is to train a "generalist" network using data
from a variety of scattering conditions. However, the generalist generally
suffers from worse performance as compared to the expert trained for each
scattering condition. Here, we develop a drastically different approach, termed
dynamic synthesis network (DSN), that can dynamically adjust the model weights
and adapt to different scattering conditions. The adaptability is achieved by a
novel architecture that enables dynamically synthesizing a network by blending
multiple experts using a gating network. Notably, our DSN adaptively removes
scattering artifacts across a continuum of scattering conditions regardless of
whether the condition has been used for the training, and consistently
outperforms the generalist. By training the DSN entirely on a
multiple-scattering simulator, we experimentally demonstrate the network's
adaptability and robustness for 3D descattering in holographic 3D particle
imaging. We expect the same concept can be adapted to many other imaging
applications, such as denoising and imaging through scattering media. Broadly,
our dynamic synthesis framework opens up a new paradigm for designing highly
adaptive deep learning and computational imaging techniques.
|
We prove that sufficiently low-entropy hypersurfaces can be perturbed so that
their mean curvature flow encounters only spherical and cylindrical
singularities.
|
We take a deeper dive into the geometry and the number theory that underlie
the butterfly graphs of the Harper and the generalized Harper models of Bloch
electrons in a magnetic field. The root of the number-theoretical
characteristics of the fractal spectrum is traced to a close relationship
between the Farey tree -- the hierarchical tree that generates all rationals --
and the Wannier diagram -- a graph that labels all the gaps of the butterfly
graph. The resulting Farey-Wannier hierarchical lattice of trapezoids provides
a geometrical representation of the nested pattern of butterflies in the
butterfly graph. Some features of the energy spectrum, such as the absence of
some of the Wannier trajectories in the butterfly graph, fall outside the
number-theoretical framework but can be stated as a simple rule of "minimal
violation of mirror symmetry". In a generalized Harper model, the Farey-Wannier
representation prevails as the lattice regroups to form some hexagonal unit
cells, creating new {\it species} of butterflies.
|
The ethical consequences of, constraints upon and regulation of algorithms
arguably represent the defining challenges of our age, asking us to reckon with
the rise of computational technologies whose potential to radically
transform social and individual orders and identities in unforeseen ways is
already being realised. Yet despite the multidisciplinary impact of this
algorithmic turn, there remains some way to go in motivating the
cross-disciplinary collaboration that is crucial to advancing feasible proposals
for the ethical design, implementation and regulation of algorithmic and
automated systems. In this work, we provide a framework to assist
cross-disciplinary collaboration by presenting a Four C's Framework covering
key computational considerations researchers across such diverse fields should
consider when approaching these questions: (i) computability, (ii) complexity,
(iii) consistency and (iv) controllability. In addition, we provide examples of
how insights from ethics, philosophy and population ethics are relevant to and
translatable within sciences concerned with the study and design of algorithms.
Our aim is to set out a framework that we believe is useful for fostering
cross-disciplinary understanding of pertinent issues in the ethical algorithmic
literature, in particular the feasibility of ethical algorithmic governance and
the impact of computational constraints upon such governance.
|
Topological defects are one of the most conspicuous features of liquid
crystals. In two dimensional nematics, they have been shown to behave
effectively as particles with both charge and orientation, which dictate their
interactions. Here, we study "twisted" defects that have a radially dependent
orientation. We find that twist can be partially relaxed through the creation
and annihilation of defect pairs. By solving the equations for defect motion
and calculating the forces on defects, we identify four distinct elements that
govern the relative relaxational motion of interacting topological defects,
namely attraction, repulsion, co-rotation and co-translation. The interaction
of these effects can lead to intricate defect trajectories, which can be
controlled by setting relevant timescales.
|
This article explores the existing normalizing and variance-stabilizing
(NoVaS) method for predicting squared log-returns of financial data. First, we
explore the robustness of the existing NoVaS method for long-term
time-aggregated predictions. Then we develop a more parsimonious variant of the
existing method. With systematic justification and extensive data analysis, our
new method shows better performance than current NoVaS and standard GARCH(1,1)
methods on both short- and long-term time-aggregated predictions.
|
Agent-based models of disease transmission involve stochastic rules that
specify how a number of individuals would infect one another, recover or be
removed from the population. Common yet stringent assumptions stipulate
interchangeability of agents and that all pairwise contacts are equally likely.
Under these assumptions, the population can be summarized by counting the
number of susceptible and infected individuals, which greatly facilitates
statistical inference. We consider the task of inference without such
simplifying assumptions, in which case, the population cannot be summarized by
low-dimensional counts. We design improved particle filters, where each
particle corresponds to a specific configuration of the population of agents,
that take either the next or all future observations into account when
proposing population configurations. Using simulated data sets, we illustrate
that orders of magnitude improvements are possible over bootstrap particle
filters. We also provide theoretical support for the approximations employed to
make the algorithms practical.
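To make the setup concrete, below is a minimal sketch (an addition to the abstract, not the authors' improved look-ahead filters) of a plain bootstrap particle filter for a toy agent-based SIS model, where each particle stores the infection status of every agent rather than a low-dimensional count; the parameters and the binomial under-reporting observation model are assumptions made purely for the illustration.

# Toy bootstrap particle filter over full agent configurations (illustrative only).
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n_agents, n_particles, T = 50, 200, 20
beta, gamma, report_prob = 0.05, 0.1, 0.5   # assumed infection/recovery/reporting rates

def step(state):
    # every infected agent can infect every susceptible agent with probability beta
    p_inf = 1.0 - (1.0 - beta) ** state.sum()
    new_inf = (rng.random(n_agents) < p_inf) & (state == 0)
    recover = (rng.random(n_agents) < gamma) & (state == 1)
    return np.where(new_inf, 1, np.where(recover, 0, state))

def likelihood(y, state):
    n = int(state.sum())
    return comb(n, y) * report_prob**y * (1 - report_prob)**(n - y) if y <= n else 0.0

# simulate synthetic observations (binomially thinned infection counts)
true_state = np.zeros(n_agents, dtype=int); true_state[:3] = 1
observations = []
for _ in range(T):
    true_state = step(true_state)
    observations.append(int(rng.binomial(true_state.sum(), report_prob)))

# filter: each particle is a full population configuration
particles = np.zeros((n_particles, n_agents), dtype=int); particles[:, :3] = 1
for y in observations:
    particles = np.array([step(p) for p in particles])              # propagate
    w = np.array([likelihood(y, p) for p in particles])             # weight
    w = w / w.sum() if w.sum() > 0 else np.full(n_particles, 1.0 / n_particles)
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
    print("filtered mean number infected:", particles.sum(axis=1).mean())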
|
Learning to flexibly follow task instructions in dynamic environments poses
interesting challenges for reinforcement learning agents. We focus here on the
problem of learning control flow that deviates from a strict step-by-step
execution of instructions -- that is, control flow that may skip forward over
parts of the instructions or return backward to previously completed or skipped
steps. Demand for such flexible control arises in two fundamental ways:
explicitly when control is specified in the instructions themselves (such as
conditional branching and looping) and implicitly when stochastic environment
dynamics require re-completion of instructions whose effects have been
perturbed, or opportunistic skipping of instructions whose effects are already
present. We formulate an attention-based architecture that meets these
challenges by learning, from task reward only, to flexibly attend to and
condition behavior on an internal encoding of the instructions. We test the
architecture's ability to learn both explicit and implicit control in two
illustrative domains -- one inspired by Minecraft and the other by StarCraft --
and show that the architecture exhibits zero-shot generalization to novel
instructions of length greater than those in a training set, at a performance
level unmatched by two baseline recurrent architectures and one ablation
architecture.
|
We analyze the axiomatic strength of the following theorem due to Rival and
Sands in the style of reverse mathematics. "Every infinite partial order $P$ of
finite width contains an infinite chain $C$ such that every element of $P$ is
either comparable with no element of $C$ or with infinitely many elements of
$C$." Our main results are the following. The Rival-Sands theorem for infinite
partial orders of arbitrary finite width is equivalent to $\mathsf{I}\Sigma^0_2
+ \mathsf{ADS}$ over $\mathsf{RCA}_0$. For each fixed $k \geq 3$, the
Rival-Sands theorem for infinite partial orders of width $\leq\! k$ is
equivalent to $\mathsf{ADS}$ over $\mathsf{RCA}_0$. The Rival-Sands theorem for
infinite partial orders that are decomposable into the union of two chains is
equivalent to $\mathsf{SADS}$ over $\mathsf{RCA}_0$. Here $\mathsf{RCA}_0$
denotes the recursive comprehension axiomatic system, $\mathsf{I}\Sigma^0_2$
denotes the $\Sigma^0_2$ induction scheme, $\mathsf{ADS}$ denotes the
ascending/descending sequence principle, and $\mathsf{SADS}$ denotes the stable
ascending/descending sequence principle. To our knowledge, these versions of
the Rival-Sands theorem for partial orders are the first examples of theorems
from the general mathematics literature whose strength is exactly characterized
by $\mathsf{I}\Sigma^0_2 + \mathsf{ADS}$, by $\mathsf{ADS}$, and by
$\mathsf{SADS}$. Furthermore, we give a new purely combinatorial result by
extending the Rival-Sands theorem to infinite partial orders that do not have
infinite antichains, and we show that this extension is equivalent to
arithmetical comprehension over $\mathsf{RCA}_0$.
|
Cyber Physical Systems (CPSs) are often black box systems for which no exact
model exists. Automata learning allows one to build abstract models of CPSs and
is used in several scenarios, i.e., simulation, monitoring, and test case
generation. Real time localization systems (RTLSs) are an example of
particularly complex and often safety critical CPSs. We present a procedure for
automatic test case generation with automata learning and apply this approach
in a case study to a localization system.
|
This is a survey on stated skein algebras and their representations.
|
Flares are known to play an important role in the evolution of the
atmospheres of young planets. In order to understand the evolution of planets,
it is thus important to study the flare-activity of young stars. This is
particularly the case for young M-stars, because they are very active. We study
photometrically and spectroscopically the highly active M-star 2MASS
J16111534-1757214. We show that it is a member of the Upper Sco OB association,
which has an age of 5-10 Myrs. We also re-evaluate the status of other
bona-fide M-stars in this region and identify 42 members. Analyzing the
K2-light curves, we find that 2MASS J16111534-1757214 has, on average, one
super-flare with E > 1.0E35 erg every 620 hours, and one with E >1.0E34 erg
every 52 hours. Although this is the most active M-star in the Upper Sco
association, the power-law index of its flare-distribution is similar to that
of other M-stars in this region. 2MASS J16111534-1757214 as well as other
M-stars in this region show a broken power-law distribution in the
flare-frequency diagram. Flares larger than E >3E34 erg have a power-law index
beta=-1.3+/-0.1 and flares smaller than that have beta=-0.8+/-0.1. We
furthermore conclude that the flare-energy distribution for young M-stars is
not that different from that of solar-like stars.
|
We report on the study of the magnetic ratchet effect in AlGaN/GaN
heterostructures superimposed with lateral superlattice formed by dual-grating
gate structure. We demonstrate that irradiation of the superlattice with
terahertz beam results in the dc ratchet current, which shows giant
magneto-oscillations in the regime of Shubnikov de Haas oscillations. The
oscillations have the same period and are in phase with the resistivity
oscillations. Remarkably, their amplitude is greatly enhanced as compared to
the ratchet current at zero magnetic field, and the envelope of these
oscillations exhibits large beatings as a function of the magnetic field. We
demonstrate that the beatings are caused by the spin-orbit splitting of the
conduction band. We develop a theory which gives a good qualitative explanation
of all experimental observations and allows us to extract the spin-orbit
splitting constant \alpha_{\rm SO}= 7.5 \pm 1.5 meV \unicode{x212B}. We also
discuss how our results are modified by plasmonic effects and show that these
effects become more pronounced as the period of the grating gate structures is
decreased down to the sub-micron range.
|
Spectroscopic Amplitudes (SA) in the Interacting Boson Fermion Fermion Model
(IBFFM) are necessary for the computation of $0\nu\beta\beta$ decays but also
for cross sections of heavy-ion reactions, in particular, Double Charge
Exchange reactions for the NUMEN collaboration, if one does not want to use the
closure limit. We present for the first time: i) the formalism and operators to
compute, in a general case, the spectroscopic amplitudes in the IBFFM scheme from
even-even to odd-odd nuclei, in a way suited to be used in reaction codes,
i.e., extracting the contribution of each orbital; ii) the odd-odd nuclei as
described by the old IBFFM, obtained for the first time with a new
implementation of machine learning (ML) techniques for fitting the parameters,
yielding a more realistic description. The one-body transition densities for
$^{116}$Cd $\rightarrow$ $^{116}$In and $^{116}$In $\rightarrow$ $^{116}$Sn are
part of the experimental program of the NUMEN experiment, which aims to find
constraints on Neutrinoless double beta decay matrix elements.
|
We derive Kubo formulae for first-order spin hydrodynamics based on
non-equilibrium statistical operators method. In first-order spin
hydrodynamics, there are two new transport coefficients besides the ordinary
ones appearing in first-order viscous hydrodynamics. They emerge due to the
incorporation of the spin degree of freedom into fluids and the spin-orbital
coupling. Zubarev's non-equilibrium statistical operator method can be well
applied to investigate these quantum effects in fluids. The Kubo formulae,
based on the method of non-equilibrium statistical operators, are related to
equilibrium (imaginary-time) infrared Green's functions, and all the transport
coefficients can be determined when the microscopic theory is specified.
|
As a promising lensless imaging method for distant objects, intensity
interferometry imaging (III) has long suffered from an unreliable phase
retrieval process, hindering the development of III for decades. Recently, the
introduction of the ptychographic detection in III overcame this challenge, and
a method called ptychographic III (PIII) was proposed. We here experimentally
demonstrate that PIII can image a dynamic distant object. A reasonable image
for the moving object can be retrieved with only two speckle patterns for each
probe, and only 10 to 20 iterations are needed. Meanwhile, PIII exhibits
robustness to inaccurate information about the probe. Furthermore, PIII successfully
recovers the image through a fog obfuscating the imaging light path, under
which a conventional camera relying on lenses fails to provide a recognizable
image.
|
We consider the possibility that the Milky Way's dark matter halo possesses a
non-vanishing equation of state. Consequently, we evaluate the contribution due
to the speed of sound, assuming that the dark matter content of the galaxy
behaves like a fluid with pressure. In particular, we model the dark matter
distribution via an exponential sphere profile in the galactic core and inner
parts of the galaxy, whereas for the halo we compare the exponential sphere with
three widely used profiles, i.e. the Einasto, Burkert and Isothermal
profiles. For the galactic core we also compare the effects due to a dark matter
distribution without a black hole with the case of a supermassive black hole in
vacuum and show that present observations are unable to distinguish them.
Finally, we investigate the expected experimental signature provided by
gravitational lensing due to the presence of dark matter in the core.
|
Pre-trained multilingual language models (LMs) have achieved state-of-the-art
results in cross-lingual transfer, but they often lead to an inequitable
representation of languages due to limited capacity, skewed pre-training data,
and sub-optimal vocabularies. This has prompted the creation of an ever-growing
pre-trained model universe, where each model is trained on large amounts of
language or domain specific data with a carefully curated, linguistically
informed vocabulary. However, doing so brings us back full circle and prevents
one from leveraging the benefits of multilinguality. To address the gaps at
both ends of the spectrum, we propose MergeDistill, a framework to merge
pre-trained LMs in a way that can best leverage their assets with minimal
dependencies, using task-agnostic knowledge distillation. We demonstrate the
applicability of our framework in a practical setting by leveraging
pre-existing teacher LMs and training student LMs that perform competitively
with or even outperform teacher LMs trained on several orders of magnitude more
data and with a fixed model capacity. We also highlight the importance of
teacher selection and its impact on student model performance.
|
We define and study two new kinds of "effective resistances" based on
hubs-biased -- hubs-repelling and hubs-attracting -- models of navigating a
graph/network. We prove that these effective resistances are squared Euclidean
distances between the vertices of a graph. They can be expressed in terms of
the Moore-Penrose pseudoinverse of the hubs-biased Laplacian matrices of the
graph. We define the analogues of the Kirchhoff indices of the graph based on
these resistance distances. We prove several results for the new resistance
distances and the Kirchhoff indices based on spectral properties of the
corresponding Laplacians. After an intensive computational search we conjecture
that the Kirchhoff index based on the hubs-repelling resistance distance is not
smaller than that based on the standard resistance distance, and that the last
is not smaller than the one based on the hubs-attracting resistance distance.
We also observe that in real-world brain and neural systems the efficiency of
standard random walk processes is as high as that of hubs-attracting schemes.
On the contrary, infrastructures and modular software networks seem to be
designed to be navigated by using their hubs.
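As a concrete sketch of the underlying formula (an addition to the abstract, shown here for the standard Laplacian with the understanding that the paper's construction replaces it by the hubs-repelling or hubs-attracting Laplacian), the resistance distance and a Kirchhoff index can be computed from the Moore-Penrose pseudoinverse:

# Resistance distances from the pseudoinverse of a (toy) graph Laplacian.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # small illustrative undirected graph
L = np.diag(A.sum(axis=1)) - A               # standard graph Laplacian
Lp = np.linalg.pinv(L)                       # Moore-Penrose pseudoinverse

def resistance(i, j):
    # squared Euclidean distance between vertices in the embedding induced by Lp
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

n = A.shape[0]
kirchhoff = sum(resistance(i, j) for i in range(n) for j in range(i + 1, n))
print(kirchhoff)   # for the standard Laplacian this equals n * trace(Lp)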
|
This is a continuation of a previous study initiated by one of us on nonlocal
vertex bialgebras and smash product nonlocal vertex algebras. In this paper, we
study a notion of right $H$-comodule nonlocal vertex algebra for a nonlocal
vertex bialgebra $H$ and give a construction of deformations of vertex algebras
with a right $H$-comodule nonlocal vertex algebra structure and a compatible
$H$-module nonlocal vertex algebra structure. We also give a construction of
$\phi$-coordinated quasi modules for smash product nonlocal vertex algebras. As
an example, we give a family of quantum vertex algebras by deforming the vertex
algebras associated to non-degenerate even lattices.
|
We study the ground state of the Hubbard model on a square lattice with two
degenerate orbitals per site and at integer fillings as a function of onsite
Hubbard repulsion $U$ and Hund's intra-atomic exchange coupling $J$. We use a
variational slave-spin mean field (VSSMF) method which allows symmetry broken
states to be studied within the computationally less intensive slave-spin mean
field formalism, thus making the method more powerful to study strongly
correlated electron physics. The results show that at half-filling, the ground
state at smaller $U$ is a Slater antiferromagnet (AF) with substantial local
charge fluctuations. As $U$ is increased, the AF state develops a Heisenberg
behavior, finally undergoing a first order transition to a Mott insulating AF
state at a critical interaction $U_c$ which is of the order of the bandwidth.
Introducing the Hund's coupling $J$ correlates the system more and reduces
$U_c$ drastically. At quarter-filling with one electron per site, the ground
state at smaller $U$ is a paramagnetic metal. At finite Hund's coupling $J$,
as the interaction is increased above a lower critical value $U_{c1}$, the system
goes to a fully spin-polarized ferromagnetic state coexisting with an antiferro-orbital
order. The system eventually becomes Mott insulating at a higher critical value
$U_{c2}$. The results as a function of $U$ and $J$ are thoroughly discussed.
|
We present an analysis of the $R\lesssim 1.5$ kpc core regions of seven
simulated Milky Way mass galaxies, from the FIRE-2 (Feedback in Realistic
Environments) cosmological zoom-in simulation suite, for a finely sampled
period ($\Delta t = 2.2$ Myr) of 22 Myr at $z \approx 0$, and compare them with
star formation rate (SFR) and gas surface density observations of the Milky
Way's Central Molecular Zone (CMZ). Despite not being tuned to reproduce the
detailed structure of the CMZ, we find that four of these galaxies are
consistent with CMZ observations at some point during this 22 Myr period. The
galaxies presented here are not homogeneous in their central structures,
roughly dividing into two morphological classes: (a) several of the galaxies
have very asymmetric gas and SFR distributions, with intense (compact)
starbursts occurring over a period of roughly 10 Myr, and structures on highly
eccentric orbits through the CMZ, whereas (b) others have smoother gas and SFR
distributions, with only slowly varying SFRs over the period analyzed. In class
(a) centers, the orbital motion of gas and star-forming complexes across small
apertures ($R \lesssim 150$pc, analogously $|l|<1^\circ$ in the CMZ
observations) contributes as much to tracers of star formation/dense gas
appearing in those apertures, as the internal evolution of those structures
does. These asymmetric/bursty galactic centers can simultaneously match CMZ gas
and SFR observations, demonstrating that time-varying star formation can
explain the CMZ's low star formation efficiency.
|
Detecting anomalies in large complex systems is a critical and challenging
task. The difficulties arise from several aspects. First, collecting ground
truth labels or prior knowledge for anomalies is hard in real-world systems,
which often leads to limited or no anomaly labels in the dataset. Second,
anomalies in large systems usually occur in a collective manner due to the
underlying dependency structure among devices or sensors. Lastly, real-time
anomaly detection for high-dimensional data requires efficient algorithms that
are capable of handling different types of data (i.e. continuous and discrete).
We propose a correlation structure-based collective anomaly detection (CSCAD)
model for the high-dimensional anomaly detection problem in large systems, which is
also generalizable to semi-supervised or supervised settings. Our framework
utilizes a graph convolutional network combined with a variational autoencoder to
jointly exploit the feature space correlation and reconstruction deficiency of
samples to perform anomaly detection. We propose an extended mutual information
(EMI) metric to mine the internal correlation structure among different data
features, which enhances the data reconstruction capability of CSCAD. The
reconstruction loss and the latent standard deviation vector of a sample obtained
from the reconstruction network can be regarded as two natural anomalous degree
measures. An anomaly discriminating network can then be trained using low
anomalous degree samples as positive samples, and high anomalous degree samples
as negative samples. Experimental results on five public datasets demonstrate
that our approach consistently outperforms all the competing baselines.
|
Time-periodic driving fields could endow a system with peculiar topological
and transport features. In this work, we find dynamically controlled
localization transitions and mobility edges in non-Hermitian quasicrystals via
shaking the lattice periodically. The driving force dresses the hopping
amplitudes between lattice sites, yielding alternate transitions between
localized, mobility edge and extended non-Hermitian quasicrystalline phases. We
apply our Floquet engineering approach to five representative models of
non-Hermitian quasicrystals, obtain the conditions of photon-assisted
localization transitions and mobility edges, and find the expressions of
Lyapunov exponents for some models. We further introduce topological winding
numbers of Floquet quasienergies to distinguish non-Hermitian quasicrystalline
phases with different localization nature. Our findings thus extend the study
of quasicrystals to non-Hermitian Floquet systems and provide an efficient way
of modulating the topological and transport properties of these unique phases.
|
Understanding the behavior of learned classifiers is an important task, and
various black-box explanations, logical reasoning approaches, and
model-specific methods have been proposed. In this paper, we introduce
probabilistic sufficient explanations, which formulate explaining an instance
of classification as choosing the "simplest" subset of features such that only
observing those features is "sufficient" to explain the classification. That
is, sufficient to give us strong probabilistic guarantees that the model will
behave similarly when all features are observed under the data distribution. In
addition, we leverage tractable probabilistic reasoning tools such as
probabilistic circuits and expected predictions to design a scalable algorithm
for finding the desired explanations while keeping the guarantees intact. Our
experiments demonstrate the effectiveness of our algorithm in finding
sufficient explanations, and showcase its advantages compared to Anchors and
logical explanations.
|
In this article, we consider the family of functions $f$ analytic in the unit
disk $|z|<1$ with the normalization $f(0)=0=f'(0)-1$ and satisfying the
condition $\big |\big (z/f(z)\big )^{2}f'(z)-1\big |<\lambda $ for some
$0<\lambda \leq 1$. We denote this class by $\mathcal{U}(\lambda)$ and we are
interested in the relations between $\mathcal{U}(\lambda)$ and other families
of functions holomorphic or harmonic in the unit disk. Our first example in
this direction is the family of functions convex in one direction. Then we are
concerned with the subordinates to the function $1/((1-z)(1-\lambda z))$. We
prove that not all functions $f(z)/z$ $(f \in \mathcal{U}(\lambda))$ belong to
this family. This disproves an assertion from \cite{OPW}. Further, we disprove
a related coefficient conjecture for $\mathcal{U}(\lambda)$. We also study the
boundary behaviour of functions in the intersection of the class of the above
subordinates and $\mathcal{U}(\lambda)$. Finally, with the help of
functions from $\mathcal{U}(\lambda)$, we construct functions harmonic and
close-to-convex in the unit disk.
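As an added illustration (not part of the original abstract), the Koebe function $k(z)=z/(1-z)^2$ satisfies $(z/k(z))^{2}k'(z)=(1-z)^{4}\cdot\frac{1+z}{(1-z)^{3}}=1-z^{2}$, so $\big|(z/k(z))^{2}k'(z)-1\big|=|z|^{2}<1$ for $|z|<1$; hence $k\in\mathcal{U}(1)$, while $k\notin\mathcal{U}(\lambda)$ for any $\lambda<1$ because $|z|^{2}$ comes arbitrarily close to $1$.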
|
In the two-dimensional geometric knapsack problem, we are given a set of n
axis-aligned rectangular items and an axis-aligned square-shaped knapsack. Each
item has integral width, integral height and an associated integral profit. The
goal is to find a (non-overlapping axis-aligned) packing of a maximum profit
subset of rectangles into the knapsack. A well-studied and frequently used
constraint in practice is to allow only packings that are guillotine separable,
i.e., every rectangle in the packing can be obtained by recursively applying a
sequence of edge-to-edge axis-parallel cuts that do not intersect any item of
the solution. In this paper we study approximation algorithms for the geometric
knapsack problem under guillotine cut constraints. We present polynomial time
(1 + {\epsilon})-approximation algorithms for the cases with and without
allowing rotations by 90 degrees, assuming that all input numeric data are
polynomially bounded in n. In comparison, the best-known approximation factor
for this setting is 3 + {\epsilon} [Jansen-Zhang, SODA 2004], even in the
cardinality case where all items have the same profit. Our main technical
contribution is a structural lemma which shows that any guillotine packing can
be converted into another structured guillotine packing with almost the same
profit. In this packing, each item is completely contained in one of a constant
number of boxes and L-shaped regions, inside which the items are placed by a
simple greedy routine. In particular, we provide a clean sufficient condition
when such a packing obeys the guillotine cut constraints which might be useful
for other settings where these constraints are imposed.
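To illustrate the guillotine-cut constraint itself (an addition to the abstract, unrelated to the paper's approximation scheme), the following toy routine decides whether a given packing is guillotine separable by recursively searching for an axis-parallel, edge-to-edge cut that crosses no item:

# Toy guillotine-separability check for axis-aligned, non-overlapping rectangles.
def guillotine_separable(rects, x0, y0, x1, y1):
    # rects: list of (left, bottom, right, top) tuples inside [x0, x1] x [y0, y1]
    if len(rects) <= 1:
        return True
    for cut in {r[0] for r in rects} | {r[2] for r in rects}:          # vertical cuts
        if x0 < cut < x1 and all(r[2] <= cut or r[0] >= cut for r in rects):
            left = [r for r in rects if r[2] <= cut]
            right = [r for r in rects if r[0] >= cut]
            if left and right:
                return (guillotine_separable(left, x0, y0, cut, y1) and
                        guillotine_separable(right, cut, y0, x1, y1))
    for cut in {r[1] for r in rects} | {r[3] for r in rects}:          # horizontal cuts
        if y0 < cut < y1 and all(r[3] <= cut or r[1] >= cut for r in rects):
            low = [r for r in rects if r[3] <= cut]
            high = [r for r in rects if r[1] >= cut]
            if low and high:
                return (guillotine_separable(low, x0, y0, x1, cut) and
                        guillotine_separable(high, x0, cut, x1, y1))
    return False

# two items in a bottom row plus one on top: separable by one horizontal cut at y = 2
items = [(0, 0, 3, 2), (3, 0, 6, 2), (0, 2, 6, 5)]
print(guillotine_separable(items, 0, 0, 6, 5))   # True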
|
Layered two-dimensional (2D) MoTe2 has attracted special attention due to its
rich optoelectronic properties and various phases. The
nonequilibrium carrier dynamics as well as its temperature dependence in MoTe2
are of prime importance, as it can shed light on understanding the anomalous
optical response and potential applications in far infrared (IR)
photodetection. Hereby, we employ time-resolved terahertz (THz) spectroscopy to
study the temperature dependent nonequilibrium carrier dynamics in MoTe2 films.
After photoexcitation at 1.59 eV, the 1T'-phase MoTe2 at high temperature
exhibits only THz positive photoconductivity (PPC) with a relaxation time of less
than 1 ps. In contrast, the Td-phase MoTe2 at low temperature shows ultrafast
THz PPC initially followed by emerging THz negative photoconductivity (NPC),
and the THz NPC signal relaxes to the equilibrium state in hundreds of ps time
scale. Small polaron formation induced by hot carriers is proposed to account
for the THz NPC in the polar semimetal MoTe2 at low temperature. The
polaron formation time after photoexcitation increases slightly with
temperature, which is determined to be ~0.4 ps at 5 K and 0.5 ps at 100 K. Our
experimental result demonstrates for the first time the dynamical formation of
small polarons in the MoTe2 Weyl semimetal, which is of fundamental importance
for understanding the temperature-dependent electron-phonon coupling and quantum
phase transition, as well as for designing MoTe2-based far-IR
photodetectors.
|
Here we explore the structural, magnetic and dielectric properties of the Co
based compound Na$_5$Co$_{15.5}$Te$_6$O$_{36}$ as a candidate in which
short-range magnetic correlations drive the development of a dielectric anomaly
above the N\'eel temperature $T_N$ = 50 K. Low temperature neutron powder
diffraction (NPD) in zero applied magnetic field clearly indicates that the
canted spin structure is responsible for the antiferromagnetic transition and
that partially occupied Co sites form short-range magnetic correlations with other
Co sites, which further facilitates the structural distortion and the consequent
development of the dielectric anomaly above the antiferromagnetic transition. Additionally, the
temperature dependent magnetic heat capacity and electron spin resonance
measurements reveal the presence of short-range magnetic correlations which
coincides with an anomaly in the dielectric constant vs temperature curve.
Moreover, significant changes in the lattice parameters are also observed
around the same temperature, indicating the presence of noticeable spin-lattice
coupling. Further, a sharp jump in the magnetic field dependent magnetization
clearly indicates the presence of metamagnetic transition and magnetic field
dependent NPD confirms that rotations of Co spins with applied magnetic field
are responsible for this metamagnetic phase transition. As a result, this
transition causes the magnetocaloric effect to be developed in the system,
which is suitable for the application in low temperature refrigeration.
|
Scene graphs are a compact and explicit representation successfully used in a
variety of 2D scene understanding tasks. This work proposes a method to
incrementally build up semantic scene graphs from a 3D environment given a
sequence of RGB-D frames. To this end, we aggregate PointNet features from
primitive scene components by means of a graph neural network. We also propose
a novel attention mechanism well suited for partial and missing graph data
present in such an incremental reconstruction scenario. Although our proposed
method is designed to run on submaps of the scene, we show it also transfers to
entire 3D scenes. Experiments show that our approach outperforms 3D scene graph
prediction methods by a large margin and its accuracy is on par with other 3D
semantic and panoptic segmentation methods while running at 35 Hz.
|
A novel user friendly method is proposed for calibrating a procam system from
a single pose of a planar chessboard target. The user simply needs to orient
the chessboard in a single appropriate pose. A sequence of Gray Code patterns
are projected onto the chessboard, which allows correspondences between the
camera, projector and the chessboard to be automatically extracted. These
correspondences are fed as input to a nonlinear optimization method that models
the projection of the principal points onto the chessboard, and accurately
calculates the intrinsic and extrinsic parameters of both the camera and the
projector, as well as the camera's distortion coefficients. The method is
experimentally validated on the procam system, which is shown to be comparable
in accuracy with existing multi-pose approaches. The impact of the orientation
of the chessboard with respect to the procam imaging planes is also explored
through extensive simulation.
|
Exploration in environments with sparse rewards is difficult for artificial
agents. Curiosity driven learning -- using feed-forward prediction errors as
intrinsic rewards -- has achieved some success in these scenarios, but fails
when faced with action-dependent noise sources. We present aleatoric mapping
agents (AMAs), a neuroscience inspired solution modeled on the cholinergic
system of the mammalian brain. AMAs aim to explicitly ascertain which dynamics
of the environment are unpredictable, regardless of whether those dynamics are
induced by the actions of the agent. This is achieved by generating separate
forward predictions for the mean and variance of future states and reducing
intrinsic rewards for those transitions with high aleatoric variance. We show
AMAs are able to effectively circumvent action-dependent stochastic traps that
immobilise conventional curiosity driven agents. The code for all experiments
presented in this paper is open sourced:
http://github.com/self-supervisor/Escaping-Stochastic-Traps-With-Aleatoric-Mapping-Agents.
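A minimal sketch of the idea (an addition to the abstract; the linear forward model, dimensions and exact normalisation are illustrative assumptions, not the paper's architecture): a forward model predicts the mean and variance of the next state, and transitions with high predicted (aleatoric) variance contribute little intrinsic reward.

# Toy aleatoric-aware intrinsic reward (illustrative, not the AMA implementation).
import numpy as np

rng = np.random.default_rng(0)

def forward_model(state, action, params):
    # toy linear predictor returning (mean, log_variance) of the next state
    W_mu, W_logvar = params
    x = np.concatenate([state, action])
    return W_mu @ x, W_logvar @ x

def intrinsic_reward(next_state, mu, logvar):
    var = np.exp(logvar)
    # squared error normalised by predicted aleatoric variance: transitions the
    # model already knows to be noisy (large var) yield little curiosity reward
    return float(np.mean((next_state - mu) ** 2 / (var + 1e-8)))

dim_s, dim_a = 4, 2
params = (rng.normal(size=(dim_s, dim_s + dim_a)) * 0.1,
          rng.normal(size=(dim_s, dim_s + dim_a)) * 0.1)
s, a, s_next = rng.normal(size=dim_s), rng.normal(size=dim_a), rng.normal(size=dim_s)
mu, logvar = forward_model(s, a, params)
print(intrinsic_reward(s_next, mu, logvar))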
|
We present two a posteriori error estimators for the virtual element method
(VEM) based on global and local flux reconstruction in the spirit of [5]. The
proposed error estimators are reliable and efficient for the $h$-, $p$-, and
$hp$-versions of the VEM. This solves a partial limitation of our former
approach in [6], which was based on solving local nonhybridized mixed problems.
Differently from the finite element setting, the proof of the efficiency turns
out to be simpler, as the flux reconstruction in the VEM does not require the
existence of polynomial, stable, divergence right-inverse operators. Rather, we
only need to construct right-inverse operators in virtual element spaces,
exploiting only the implicit definition of virtual element functions. The
theoretical results are validated by some numerical experiments on a benchmark
problem.
|
We consider the steady-state nonequilibrium behavior of mesoscopic
superconducting wires connected to normal-metal reservoirs. Going beyond the
diffusive limit, we utilize the quasiclassical theory and perform a
self-consistent calculation that guarantees current conservation through the
entire system. Going from the ballistic to the diffusive limit, we investigate
several crucial phenomena such as charge imbalance, momentum-resolved
nonequilibrium distributions, and the current-to-superflow conversion.
Connecting to earlier results for the diffusive case, we find that
superconductivity can break down at a critical bias voltage $V_\mathrm{c}$. We
find that $V_\mathrm{c}$ generally increases as the interface transparency is
reduced, while the dependence on the mean-free path is non-monotonous. We
discuss the key differences of the ballistic and semi-ballistic regimes from the
fully diffusive case.
|
We experimentally and theoretically investigate the non-degenerate two-photon
absorption coefficient $\beta(\omega_1,\omega_2)$ in the prototypical
semiconductor ZnSe. In particular, we provide a comprehensive data set on the
dependence of $\beta(\omega_1,\omega_2)$ on the non-degeneracy parameter
$\omega_1/\omega_2$ with the total frequency sum $\omega_1+\omega_2$ kept
constant. We find a substantial increase of the two-photon absorption strength
with increasing $\omega_1/\omega_2$. In addition, different crystallographic
orientations and polarization configurations are investigated. The nonlinear
optical response is analyzed theoretically by evaluating the multiband
semiconductor Bloch equations including inter- and intraband excitations in the
length gauge. The band structure and the matrix elements are taken from an
eight-band k.p model. The simulation results are in very good agreement with
the experiment.
|
We explore the use of a topological manifold, represented as a collection of
charts, as the target space of neural network based representation learning
tasks. This is achieved by a simple adjustment to the output of an encoder's
network architecture plus the addition of a maximal mean discrepancy (MMD)
based loss function for regularization. Most algorithms in representation and
metric learning are easily adaptable to our framework and we demonstrate its
effectiveness by adjusting SimCLR (for representation learning) and standard
triplet loss training (for metric learning) to have manifold encoding spaces.
Our experiments show that we obtain a substantial performance boost over the
baseline for low dimensional encodings. In the case of triplet training, we
also find, independent of the manifold setup, that the MMD loss alone (i.e.
keeping a flat, Euclidean target space but using an MMD loss to regularize it)
increases performance over the baseline in the typical, high-dimensional
Euclidean target spaces. Code for reproducing experiments is provided at
https://github.com/ekorman/neurve .
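A minimal sketch of the MMD regulariser mentioned above (an addition to the abstract; the Gaussian kernel, bandwidth and the uniform "prior" samples are illustrative assumptions rather than the repository's exact implementation):

# Simple (biased) Gaussian-kernel MMD estimate between encoder outputs and prior samples.
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
codes = rng.normal(size=(128, 3))            # stand-in for encoder outputs
prior = rng.uniform(-1, 1, size=(128, 3))    # stand-in for target-space samples
reg_loss = mmd2(codes, prior)                # would be added to the SimCLR / triplet loss
print(reg_loss)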
|
Scale-free percolation is a spatial random graph model with vertex set
$\mathbb{Z}^d$. Vertices $x$ and $y$ are connected with probability depending
on i.i.d. vertex weights and the Euclidean distance. Depending on the various
parameters involved, we get a rich phase diagram. We study graph distances (in
comparison to Euclidean distances). Our main attention is on a regime where
graph distances are (poly-)logarithmic in the Euclidean distance. We obtain
improved bounds on the logarithmic exponents. In the light tail regime, the
correct exponent is identified.
|
There is an analogy between the ResNet (Residual Network) architecture for
deep neural networks and an Euler solver for an ODE. The transformation
performed by each layer resembles an Euler step in solving an ODE. We consider
the Heun Method, which involves a single predictor-corrector cycle, and
complete the analogy, building a predictor-corrector variant of ResNet, which
we call a HeunNet. Just as Heun's method is more accurate than Euler's,
experiments show that HeunNet achieves high accuracy with low computational
(both training and test) time compared to both vanilla recurrent neural
networks and other ResNet variants.
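A minimal sketch of the analogy (an addition to the abstract; the residual branch f below is a fixed toy function standing in for a learned layer):

# One block viewed as an Euler step versus a Heun predictor-corrector step.
import numpy as np

def f(x):
    return np.tanh(x)          # stand-in for a learned residual branch

def euler_block(x, h=1.0):
    return x + h * f(x)        # ResNet-style update: x_{n+1} = x_n + f(x_n)

def heun_block(x, h=1.0):
    x_pred = x + h * f(x)                      # predictor (Euler step)
    return x + 0.5 * h * (f(x) + f(x_pred))    # corrector (average of the two slopes)

x = np.linspace(-2, 2, 5)
print(euler_block(x))
print(heun_block(x))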
|
Central compact objects are young neutron stars emitting thermal X-rays with
bolometric luminosities $L_X$ in the range $10^{32}$-$10^{34}$ erg/s.
Gourgouliatos, Hollerbach and Igoshev recently suggested that peculiar emission
properties of central compact objects can be explained by tangled magnetic
field configurations formed in a stochastic dynamo during the proto-neutron
star stage. In this case the magnetic field consists of multiple small-scale
components with negligible contribution of global dipolar field. We study
numerically three-dimensional magneto-thermal evolution of tangled crustal
magnetic fields in neutron stars. We find that all configurations produce
complicated surface thermal patterns which consist of multiple small hot
regions located at significant separations from each other. The configurations
with initial magnetic energy of $2.5-10\times 10^{47}$ erg have temperatures of
hot regions that reach $\approx 0.2$ keV, to be compared with the bulk
temperature of $\approx 0.1$ keV in our simulations with no cooling. A factor
of two in temperature is also seen in observations of central compact objects.
The hot spots produce periodic modulations in the light curve with typical
amplitudes of $\leq 9-11$ %. Therefore, the tangled magnetic field
configuration can explain thermal emission properties of some central compact
objects.
|
We investigate the concept of cylindrical Wiener process subordinated to a
strictly $\alpha$-stable L\'evy process, with $\alpha\in\left(0,1\right)$, in
an infinite dimensional, separable Hilbert space, and consider the related
stochastic convolution. We then introduce the corresponding Ornstein-Uhlenbeck
process, focusing on the regularizing properties of the Markov transition
semigroup defined by it. In particular, we provide an explicit, original
formula -- which is not of Bismut-Elworthy-Li's type -- for the Gateaux
derivatives of the functions generated by the operators of the semigroup, as
well as an upper bound for the norm of their gradients. In the case
$\alpha\in\left(\frac{1}{2},1\right)$, this estimate represents the starting
point for studying the Kolmogorov equation in its mild formulation.
|
Urdu is a widely spoken language in South Asia. Though a considerable amount of
literature exists for the Urdu language, the data is still not enough to
process the language naturally with NLP techniques. Very efficient language models exist for the
English language, a high resource language, but Urdu and other under-resourced
languages have been neglected for a long time. To create efficient language
models for these languages we must have good word embedding models. For Urdu,
we can only find word embeddings trained and developed using the skip-gram
model. In this paper, we have built a corpus for Urdu by scraping and
integrating data from various sources and compiled a vocabulary for the Urdu
language. We also modify fastText embeddings and N-gram models to enable
training them on our corpus. We have used these trained embeddings for a
word similarity task and compared the results with existing techniques.
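A minimal sketch of training such embeddings with gensim's fastText implementation (an addition to the abstract; the two tokenized sentences, hyperparameters and query words are placeholders, not the authors' corpus or settings):

# Train skip-gram fastText embeddings on a (placeholder) tokenized Urdu corpus.
from gensim.models import FastText

corpus = [
    ["یہ", "ایک", "مثال", "ہے"],
    ["اردو", "زبان", "کے", "لیے", "ورڈ", "ایمبیڈنگ"],
]  # replace with the scraped and cleaned corpus

model = FastText(sentences=corpus, vector_size=100, window=5,
                 min_count=1, sg=1, epochs=10)

# word-similarity style query on the trained embeddings
print(model.wv.similarity("اردو", "زبان"))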
|
As robots interact with a broader range of end-users, end-user robot
programming has helped democratize robot programming by empowering end-users
who may not have experience in robot programming to customize robots to meet
their individual contextual needs. This article surveys work on end-user robot
programming, with a focus on end-user program specification. It describes the
primary domains, programming phases, and design choices represented by the
end-user robot programming literature. The survey concludes by highlighting
open directions for further investigation to enhance and widen the reach of
end-user robot programming systems.
|
We study a method for approximating stationary characteristics of a
two-dimensional Markov chain via the Stein method. For this purpose, innovative
methods are developed to estimate the moments of the Markov chain, as well as
the solution to the Poisson equation with a partial differential operator.
|
The (non-uniform) sparsest cut problem is the following graph-partitioning
problem: given a "supply" graph, and demands on pairs of vertices, delete some
subset of supply edges to minimize the ratio of the supply edges cut to the
total demand of the pairs separated by this deletion. Despite much effort,
there are only a handful of nontrivial classes of supply graphs for which
constant-factor approximations are known.
We consider the problem for planar graphs, and give a
$(2+\varepsilon)$-approximation algorithm that runs in quasipolynomial time.
Our approach defines a new structural decomposition of an optimal solution
using a "patching" primitive. We combine this decomposition with a
Sherali-Adams-style linear programming relaxation of the problem, which we then
round. This should be compared with the polynomial-time approximation algorithm
of Rao (1999), which uses the metric linear programming relaxation and
$\ell_1$-embeddings, and achieves an $O(\sqrt{\log n})$-approximation in
polynomial time.
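For intuition about the objective (an addition to the abstract, unrelated to the paper's algorithm), the non-uniform sparsest cut of a small instance can be found by brute force over vertex bipartitions:

# Brute-force non-uniform sparsest cut on a toy instance (illustrative only).
from itertools import combinations

supply = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (3, 0): 1, (0, 2): 1}   # edge -> capacity
demand = {(0, 2): 2, (1, 3): 1}                                     # pair -> demand
vertices = {0, 1, 2, 3}

best = None
for k in range(1, len(vertices)):
    for side in combinations(sorted(vertices), k):
        S = set(side)
        cut = sum(c for (u, v), c in supply.items() if (u in S) != (v in S))
        sep = sum(d for (u, v), d in demand.items() if (u in S) != (v in S))
        if sep > 0:
            ratio = cut / sep
            if best is None or ratio < best[0]:
                best = (ratio, S)

print(best)   # (sparsity, one side of the best cut)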
|
The fifth-generation (5G) mobile/cellular technology is a game changer for
industrial systems. Private 5G deployments are promising to address the
challenges faced by industrial networks. Programmability and open-source are
two key aspects which bring unprecedented flexibility and customizability to
private 5G networks. Recent regulatory initiatives are removing barriers for
industrial stakeholders to deploy their own local 5G networks with dedicated
equipment. To this end, this demonstration showcases an open and programmable
5G network-in-a-box solution for private deployments. The network-in-a-box
provides an integrated solution, based on open-source software stack and
general-purpose hardware, for operation in 5G non-standalone (NSA) as well as
4G long-term evolution (LTE) modes. The demonstration also shows the capability
of operation in different sub-6 GHz frequency bands, some of which are
specifically available for private networks. Performance results, in terms of
end-to-end latency and data rates, with a commercial off-the-shelf (COTS) 5G
device are shown as well.
|
We study the impact of the recently computed mixed QCD-electroweak
corrections to the production of $W$ and $Z$ bosons at the LHC on the value of
the $W$ mass extracted from the transverse momentum distribution of charged
leptons from $W$ decays. Using the average lepton transverse momenta in $W$ and
$Z$ decays as simplified observables for the determination of the $W$ mass, we
estimate that mixed QCD-electroweak corrections can shift the extracted value
of the $W$ mass by up to ${\cal O}(20)~{\rm MeV}$, depending on the kinematic
cuts employed to define fiducial cross sections for $Z$ and $W$ production.
Since the target precision of the $W$-mass measurement at the LHC is ${\cal
O}(10)~{\rm MeV}$, our results emphasize the need for fully-differential
computations of mixed QCD-electroweak corrections and a careful analysis of
their potential impact on the determination of the $W$ mass.
|
Principal loading analysis is a dimension reduction method that discards
variables which have only a small distorting effect on the covariance matrix.
We complement principal loading analysis and propose instead to use a mix of
both the correlation and the covariance matrix. Further, we suggest using
rescaled eigenvectors and provide updated algorithms for all proposed changes.
|
In recent years, graph neural networks (GNNs) have shown powerful ability in
collaborative filtering, which is a widely adopted recommendation scenario.
Without any side information, existing graph neural network based methods
generally learn a one-hot embedding for each user or item as the initial input
representation of the GNN. However, such one-hot embeddings are intrinsically
transductive, leaving these methods with no inductive ability, i.e., failing to
deal with new users or new items that are unseen during training. Besides, the
number of model parameters depends on the number of users and items, which is
expensive and not scalable. In this paper, we give a formal definition of
inductive recommendation and solve the above problems by proposing Inductive
representation based Graph Convolutional Network (IGCN) for collaborative
filtering. Specifically, we design an inductive representation layer, which
utilizes the interaction behavior with core users or items as the initial
representation, improving the general recommendation performance while bringing
inductive ability. Note that the number of parameters of IGCN only depends on
the number of core users or items, which is adjustable and scalable. Extensive
experiments on three public benchmarks demonstrate the state-of-the-art
performance of IGCN in both transductive and inductive recommendation
scenarios, while using remarkably fewer model parameters. Our PyTorch
implementations are publicly available.
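A minimal sketch of the inductive idea (an addition to the abstract; the normalisation and the choice of core items are illustrative assumptions, not IGCN's exact design): a new user's initial representation is built from its interactions with a fixed set of core items, so no learned one-hot embedding is required.

# Toy inductive user representation from interactions with core items.
import numpy as np

n_core_items = 6
# interaction history of two users over the core items (1 = interacted)
interactions = np.array([[1, 0, 1, 0, 0, 1],
                         [0, 1, 0, 0, 1, 0]], dtype=float)

def inductive_user_representation(hist):
    norm = hist.sum(axis=1, keepdims=True)
    # row-normalise; users with no interactions get the all-zero vector
    return np.divide(hist, norm, out=np.zeros_like(hist), where=norm > 0)

reps = inductive_user_representation(interactions)   # works for unseen users too
print(reps)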
|
The three key elements of a quantum simulation are state preparation, time
evolution, and measurement. While the complexity scaling of dynamics and
measurements are well known, many state preparation methods are strongly
system-dependent and require prior knowledge of the system's eigenvalue
spectrum. Here, we report on a quantum-classical implementation of the
coupled-cluster Green's function (CCGF) method, which replaces explicit ground
state preparation with the task of applying unitary operators to a simple
product state. While our approach is broadly applicable to a wide range of
models, we demonstrate it here for the Anderson impurity model (AIM). The
method requires a number of T gates that grows as $ \mathcal{O} \left(N^5
\right)$ per time step to calculate the impurity Green's function in the time
domain, where $N$ is the total number of energy levels in the AIM. For
comparison, a classical CCGF calculation of the same order would require
computational resources that grow as $ \mathcal{O} \left(N^6 \right)$ per time
step.
|