Unfamiliar or esoteric visual forms arise in many areas of visualization.
While such forms can be intriguing, it can be unclear how to make effective use
of them without long periods of practice or costly user studies. In this work
we analyze the table cartogram, a graphic which visualizes tabular data by
bringing the areas of a grid of quadrilaterals into correspondence with the
input data, like a heat map that has been "area-ed" rather than colored.
Despite having existed for several years, little is known about its appropriate
usage. We address this gap by using Algebraic Visualization Design to show that
table cartograms are best suited to relatively small tables with ordinal axes for some
comparison and outlier identification tasks. In doing so we demonstrate a
discount theory-based analysis that can be used to cheaply determine best
practices for unknown visualizations.
|
It is now well established from a variety of studies that there is a
significant benefit from combining video and audio data in detecting active
speakers. However, either of the modalities can potentially mislead audiovisual
fusion by inducing unreliable or deceptive information. This paper outlines
active speaker detection as a multi-objective learning problem to leverage the
best of each modality using a novel self-attention, uncertainty-based multimodal
fusion scheme. Results obtained show that the proposed multi-objective learning
architecture outperforms traditional approaches in improving both mAP and AUC
scores. We further demonstrate that our fusion strategy surpasses, in active
speaker detection, other modality fusion methods reported in various
disciplines. We finally show that the proposed method significantly improves
the state-of-the-art on the AVA-ActiveSpeaker dataset.
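As a rough illustration of the fusion idea only (ours, not the authors'
implementation; the layer sizes and the inverse-uncertainty weighting are
assumptions), a minimal PyTorch sketch:

    import torch
    import torch.nn as nn

    class UncertaintyFusion(nn.Module):
        """Toy self-attention fusion that down-weights the less reliable modality."""
        def __init__(self, dim=128):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
            self.uncertainty = nn.Linear(dim, 1)  # per-modality uncertainty score
            self.head = nn.Linear(dim, 1)         # active-speaker logit

        def forward(self, audio, video):          # each: (batch, dim)
            tokens = torch.stack([audio, video], dim=1)   # (batch, 2, dim)
            attended, _ = self.attn(tokens, tokens, tokens)
            # low estimated uncertainty -> high fusion weight
            w = torch.softmax(-self.uncertainty(attended), dim=1)
            fused = (w * attended).sum(dim=1)
            return self.head(fused)

    logit = UncertaintyFusion()(torch.randn(4, 128), torch.randn(4, 128))
    print(logit.shape)  # torch.Size([4, 1])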
|
The XENON collaboration recently reported an excess of electron recoil events
in the low energy region with a significance of around $3.3\sigma$. An
explanation of this excess in terms of thermal dark matter seems challenging.
We propose a scenario where dark matter in the Milky Way halo gets boosted as a
result of scattering with the diffuse supernova neutrino background. This
interaction can accelerate the dark matter to semi-relativistic velocities, and
this flux, in turn, can scatter with the electrons in the detector, thereby
providing a much better fit to the data. We identify regions in the parameter
space of dark-matter mass and interaction cross-section which can explain the
excess. Furthermore, considering the data-only hypothesis, we also impose
bounds on the dark-matter scattering cross-section, which are competitive with
bounds from other experiments.
|
Deep neural networks are known to have security issues. One particular threat
is the Trojan attack. It occurs when the attackers stealthily manipulate the
model's behavior through Trojaned training samples, which can later be
exploited.
Guided by basic neuroscientific principles, we discover subtle -- yet critical
-- structural deviations characterizing Trojaned models. In our analysis we use
topological tools. They allow us to model high-order dependencies in the
networks, robustly compare different networks, and localize structural
abnormalities. One interesting observation is that Trojaned models develop
short-cuts from input to output layers.
Inspired by these observations, we devise a strategy for robust detection of
Trojaned models. Compared to standard baselines it displays better performance
on multiple benchmarks.
|
A factor copula model is proposed in which factors are either simulable or
estimable from exogenous information. Point estimation and inference are based
on a simulated method of moments (SMM) approach with non-overlapping
simulation draws. Consistency and limiting normality of the estimator are
established and the validity of bootstrap standard errors is shown. In doing so,
previous results from the literature are verified under low-level conditions
imposed on the individual components of the factor structure. Monte Carlo
evidence confirms the accuracy of the asymptotic theory in finite samples and
an empirical application illustrates the usefulness of the model to explain the
cross-sectional dependence between stock returns.
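A hypothetical toy sketch of the SMM idea (ours, not the paper's model):
estimate the loading of a one-factor structure by matching a simulated
dependence moment to its empirical counterpart, reusing the same simulation
draws for every candidate parameter.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def simulate(theta, n, d=5, seed=1):
        # fixed seed: the same simulation draws are reused for every theta
        r = np.random.default_rng(seed)
        common = r.standard_normal((n, 1))
        eps = r.standard_normal((n, d))
        return theta * common + np.sqrt(1.0 - theta**2) * eps

    def moment(x):
        # moment to match: mean off-diagonal correlation across the d series
        c = np.corrcoef(x, rowvar=False)
        return c[np.triu_indices_from(c, k=1)].mean()

    data = simulate(0.6, n=5000, seed=42)        # stand-in for observed data
    target = moment(data)
    obj = lambda th: (moment(simulate(th, 5000)) - target) ** 2
    fit = minimize_scalar(obj, bounds=(0.01, 0.99), method="bounded")
    print(round(fit.x, 2))                       # recovers a loading near 0.6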
|
For compact complex surfaces (M^4, J) of Kaehler type, it was previously
shown that the sign of the Yamabe invariant Y(M) only depends on the Kodaira
dimension Kod (M, J). In this paper, we prove that this pattern in fact extends
to all compact complex surfaces except those of class VII. In the process, we
give a simplified proof of a result that explains why the exclusion of class
VII is essential here.
|
In the literature, there are two different definitions of elliptic divisibility
sequences. The first one says that a sequence of integers $\{h_n\}_{n\geq 0}$
is an elliptic divisibility sequence if it satisfies the recurrence relation
$h_{m+n}h_{m-n}h_{r}^2=h_{m+r}h_{m-r}h_{n}^2-h_{n+r}h_{n-r}h_{m}^2$ for every
natural number $m\geq n\geq r$. The second definition says that a sequence of
integers $\{\beta_n\}_{n\geq 0}$ is an elliptic divisibility sequence if it is
the sequence of the square roots (chosen with an appropriate sign) of the
denominators of the abscissas of the iterates of a point on a rational elliptic
curve. It is well known that the two definitions are not equivalent. Hence, given
a sequence of denominators $\{\beta_n\}_{n\geq 0}$, the relation
$\beta_{m+n}\beta_{m-n}\beta_{r}^2=\beta_{m+r}\beta_{m-r}\beta_{n}^2-\beta_{n+r}\beta_{n-r}\beta_{m}^2$
does not hold in general for $m\geq n\geq r$. We will prove that the recurrence relation above holds for
$\{\beta_n\}_{n\geq 0}$ under some conditions on the indexes $m$, $n$, and $r$.
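For concreteness, a short script (ours, not from the paper) that generates the
classical elliptic divisibility sequence $0, 1, 1, -1, 1, 2, \dots$ from the
standard doubling formulas and checks the recurrence of the first definition
on a range of indexes:

    N = 25
    h = [0, 1, 1, -1, 1] + [0] * (N - 4)
    for k in range(5, N + 1):
        n = k // 2
        if k % 2:  # h_{2n+1} = h_{n+2} h_n^3 - h_{n-1} h_{n+1}^3
            h[k] = h[n + 2] * h[n] ** 3 - h[n - 1] * h[n + 1] ** 3
        else:      # h_{2n} h_2 = h_n (h_{n+2} h_{n-1}^2 - h_{n-2} h_{n+1}^2)
            h[k] = h[n] * (h[n + 2] * h[n - 1] ** 2 - h[n - 2] * h[n + 1] ** 2) // h[2]

    for m in range(3, 12):
        for n in range(2, m + 1):
            for r in range(1, n + 1):
                lhs = h[m + n] * h[m - n] * h[r] ** 2
                rhs = h[m + r] * h[m - r] * h[n] ** 2 - h[n + r] * h[n - r] * h[m] ** 2
                assert lhs == rhs
    print("recurrence verified for all sampled m >= n >= r")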
|
In this article we study smooth asymptotically conical self-shrinkers in
$\mathbb{R}^4$ with Colding-Minicozzi entropy bounded above by $\Lambda_{1}$.
|
We study the irreversibility \`a la Maxwell from a quantum point of view,
involving an arbitrarily large ensemble of independent particles, with a
daemonic potential that is capable of inducing asymmetries in the evolution,
offering new perspectives on how Maxwell's apparent paradox is posed and
resolved dynamically. In addition, we design an electromagnetic cavity, to
which dielectrics are added, fulfilling the function of a daemon. Thereby, this
physical system is capable of cooling and ordering incident electromagnetic
radiation. This setting can be generalized to many types of waves, without
relying on the concept of measurement in quantum mechanics.
|
We study the inflationary period driven by a fermionic field which is
non-minimally coupled to gravity in the context of the constant-roll approach.
We consider the model for a specific form of coupling and perform the
corresponding inflationary analysis. By comparing the result with the Planck
observations coming from CMB anisotropies, we find the observational
constraints on the parameter space of the model and also the predictions of the
model. We find that the values of $r$ and $n_{s}$ for $-1.5<\beta\leq-0.9$ are
in good agreement with the observations when $|\xi|=0.1$ and $N=60$.
|
Organic semiconductor/ferromagnetic bilayer thin films can exhibit novel
properties due to the formation of the spinterface at the interface.
Buckminsterfullerene (C60) has been shown to exhibit ferromagnetism at the
interface when it is placed next to a ferromagnet (FM) such as Fe or Co.
Formation of the spinterface occurs due to the orbital hybridization and spin
polarized charge transfer at the interface. In this work, we have demonstrated
that one can enhance the magnetic anisotropy of the low Gilbert damping alloy
CoFeB by introducing a C60 layer. We have shown that anisotropy increases by
increasing the thickness of C60 which might be a result of the formation of
spinterface. However, the magnetic domain structure remains the same in the bilayer
samples as compared to the reference CoFeB film.
|
The current trends towards vehicle-sharing, electrification, and autonomy are
predicted to transform mobility. Combined appropriately, they have the
potential to significantly improve urban mobility. However, what will come
after most vehicles are shared, electric, and autonomous remains an open
question, especially regarding the interactions between vehicles and how these
interactions will impact system-level behaviour. Inspired by nature and
supported by swarm robotics and vehicle platooning models, this paper proposes
a future mobility in which shared, electric, and autonomous vehicles behave as
a bio-inspired collaborative system. The collaboration between vehicles will
lead to a system-level behaviour analogous to natural swarms. Natural swarms
can divide tasks, cluster, build together, or transport cooperatively. In this
future mobility, vehicles will cluster by connecting either physically or
virtually, which will enable the possibility of sharing energy, data or
computational power, provide services or transfer cargo, among others. Vehicles
will collaborate either with vehicles that are part of the same fleet, or with
any other vehicle on the road, by finding mutualistic relationships that
benefit both parties. The field of swarm robotics has already translated some
of the behaviours from natural swarms to artificial systems and, if we further
translate these concepts into urban mobility, exciting ideas emerge. Within
mobility-related research, the coordinated movement proposed in vehicle
platooning models can be seen as a first step towards collaborative mobility.
This paper contributes a proposal for a framework for future mobility
that integrates current research and mobility trends in a novel and unique way.
|
Coronavirus Disease 2019 (COVID-19) demonstrated the need for accurate and
fast diagnosis methods for emergent viral diseases. Soon after the emergence of
COVID-19, medical practitioners used X-ray and computed tomography (CT) images
of patients' lungs to detect COVID-19. Machine learning methods are capable of
improving the identification accuracy of COVID-19 in X-ray and CT images,
delivering near real-time results, while alleviating the burden on medical
practitioners. In this work, we demonstrate the efficacy of a support vector
machine (SVM) classifier, trained with a combination of deep convolutional and
handcrafted features extracted from X-ray chest scans. We use this combination
of features to discriminate between healthy, common pneumonia, and COVID-19
patients. The performance of the combined feature approach is compared with a
standard convolutional neural network (CNN) and the SVM trained with
handcrafted features. We find that combining the features in our novel
framework improves the performance of the classification task compared to the
independent application of convolutional and handcrafted features.
Specifically, we achieve an accuracy of 0.988 in the classification task with
our combined approach, compared to 0.963 and 0.983 accuracy for the SVM with
handcrafted features and the CNN, respectively.
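A hypothetical sketch of the combined-feature pipeline (the feature
dimensions and the random stand-in data are ours, for illustration only):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 300
    deep_feats = rng.standard_normal((n, 512))  # stand-in for CNN embeddings
    hand_feats = rng.standard_normal((n, 144))  # stand-in for handcrafted features
    y = rng.integers(0, 3, n)                   # healthy / pneumonia / COVID-19

    X = np.hstack([deep_feats, hand_feats])     # the combined representation
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X[:200], y[:200])
    print(clf.score(X[200:], y[200:]))          # held-out accuracy

In practice the stand-in arrays would be replaced by embeddings from a
pretrained CNN and by handcrafted descriptors computed on the chest scans.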
|
A convex polygon $Q$ is inscribed in a convex polygon $P$ if every side of
$P$ contains at least one vertex of $Q$. We present algorithms for finding a
minimum area and a minimum perimeter convex polygon inscribed in any given
convex $n$-gon in $O(n)$ and $O(n^3)$ time, respectively. We also investigate
other variants of this problem.
|
We consider conformal deformations within a class of incomplete Riemannian
metrics which generalize conic orbifold singularities by allowing both warping
and any compact manifold (not just quotients of the sphere) to be the "link" of
the singular set. Within this class of "conic metrics," we determine
obstructions to the existence of conformal deformations to constant scalar
curvature of any sign (positive, negative, or zero). For conic metrics with
negative scalar curvature, we determine sufficient conditions for the existence
of a conformal deformation to a conic metric with constant scalar curvature -1;
moreover, we show that this metric is unique within its conformal class of
conic metrics. Our work is in dimensions three and higher.
|
We study the robustness of reinforcement learning (RL) with adversarially
perturbed state observations, which aligns with the setting of many adversarial
attacks on deep reinforcement learning (DRL) and is also important for rolling
out real-world RL agents under unpredictable sensing noise. With a fixed agent
policy, we demonstrate that an optimal adversary to perturb state observations
can be found, which is guaranteed to obtain the worst-case agent reward. For
DRL settings, this leads to a novel empirical adversarial attack on RL agents
via a learned adversary that is much stronger than previous ones. To enhance
the robustness of an agent, we propose a framework of alternating training with
learned adversaries (ATLA), which trains an adversary online together with the
agent using policy gradient following the optimal adversarial attack framework.
Additionally, inspired by the analysis of state-adversarial Markov decision
process (SA-MDP), we show that past states and actions (history) can be useful
for learning a robust agent, and we empirically find an LSTM-based policy can be
more robust under adversaries. Empirical evaluations on a few continuous
control environments show that ATLA achieves state-of-the-art performance under
strong adversaries. Our code is available at
https://github.com/huanzhang12/ATLA_robust_RL.
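A hypothetical miniature of the alternating schedule (the released code at
the repository above is the real implementation; the toy objective,
perturbation model, and finite-difference gradients here are ours):

    import numpy as np

    rng = np.random.default_rng(0)
    theta = rng.standard_normal(4)  # agent parameters
    phi = np.zeros(4)               # adversary parameters
    eps, lr = 0.1, 0.05

    def agent_loss(theta, phi, s):
        s_adv = s + eps * np.tanh(phi)       # bounded perturbation of the observation
        return ((theta @ s_adv) - 1.0) ** 2  # toy surrogate for negative reward

    def grad(f, x, h=1e-5):                  # finite-difference gradient
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)
        return g

    for it in range(500):
        s = rng.standard_normal(4)
        if it % 2 == 0:  # agent step: minimize the loss under the current adversary
            theta -= lr * grad(lambda t: agent_loss(t, phi, s), theta)
        else:            # adversary step: maximize the agent's loss
            phi += lr * grad(lambda p: agent_loss(theta, p, s), phi)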
|
A general class of models is proposed that is able to estimate the whole
predictive distribution of a dependent variable $Y$ given a vector of
explanatory variables $\mathbf{x}$. The models exploit the fact that the strength of
explanatory variables in distinguishing between low and high values of the
dependent variable may vary across the thresholds that are used to define low
and high. Simple linear versions of the models are generalizations of classical
linear regression models but also of widely used ordinal regression models.
They allow one to visualize the effect of explanatory variables in the form of
parameter functions. More general models are based on efficient nonparametric
approaches like random forests, which are more flexible and are strong
prediction tools. A general estimation method is given that can use all the
estimation tools that have been proposed for binary regression, including
selection methods like the lasso or elastic net. For linearly structured models
maximum likelihood estimates are derived. The usefulness of the models is
illustrated by simulations and several real data sets.
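A hypothetical sketch of the core construction (ours; the paper's models
generalize this with parameter functions and random forests): estimate
$P(Y > t \mid x)$ by fitting one binary regression per threshold $t$.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 3))
    y = X @ np.array([1.0, -0.5, 0.2]) + rng.standard_normal(2000)  # toy data

    thresholds = np.quantile(y, np.linspace(0.1, 0.9, 9))
    models = [LogisticRegression().fit(X, (y > t).astype(int)) for t in thresholds]

    x_new = np.array([[0.5, 0.0, -1.0]])
    for t, m in zip(thresholds, models):
        p = m.predict_proba(x_new)[0, 1]   # estimate of P(Y > t | x_new)
        print(f"P(Y > {t:+.2f} | x) = {p:.2f}")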
|
Synchrotron radiation from hot gas near a black hole results in a polarized
image. The image polarization is determined by effects including the
orientation of the magnetic field in the emitting region, relativistic motion
of the gas, strong gravitational lensing by the black hole, and parallel
transport in the curved spacetime. We explore these effects using a simple
model of an axisymmetric, equatorial accretion disk around a Schwarzschild
black hole. By using an approximate expression for the null geodesics derived
by Beloborodov (2002) and conservation of the Walker-Penrose constant, we
provide analytic estimates for the image polarization. We test this model using
currently favored general relativistic magnetohydrodynamic simulations of M87*,
with ring parameters given by the simulations. For a subset of these with
modest Faraday effects, we show that the ring model broadly reproduces the
polarimetric image morphology. Our model also predicts the polarization
evolution for compact flaring regions, such as those observed from Sgr A* with
GRAVITY. With suitably chosen parameters, our simple model can reproduce the
EVPA pattern and relative polarized intensity in Event Horizon Telescope images
of M87*. Under the physically motivated assumption that the magnetic field
trails the fluid velocity, this comparison is consistent with the clockwise
rotation inferred from total intensity images.
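For orientation, the Beloborodov (2002) light-bending approximation referred
to above can be written (as we recall it; see the original reference for the
precise statement) as

\[ \cos\alpha \;=\; 1 - \left(1 - \cos\psi\right)\left(1 - \frac{r_s}{r}\right), \]

where $\alpha$ is the emission angle at radius $r$, $\psi$ is the angle between
the radial direction and the line of sight, and $r_s$ is the Schwarzschild
radius.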
|
This paper is concerned with a novel deep learning method for variational
problems with essential boundary conditions. To this end, we first reformulate
the original problem into a minimax problem corresponding to a feasible
augmented Lagrangian, which can be solved by the augmented Lagrangian method in
an infinite dimensional setting. Based on this, by expressing the primal and
dual variables with two individual deep neural network functions, we present an
augmented Lagrangian deep learning method for which the parameters are trained
by the stochastic optimization method together with a projection technique.
Compared to the traditional penalty method, the new method admits two main
advantages: i) the choice of the penalty parameter is flexible and robust, and
ii) the numerical solution is more accurate at the same order of magnitude of
computational cost. As typical applications, we apply the new approach to solve
elliptic problems and (nonlinear) eigenvalue problems with essential boundary
conditions, and numerical experiments are presented to show the effectiveness
of the new method.
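A minimal sketch of the alternating primal-dual training (ours, not the
paper's code) on the 1D toy problem $-u'' = f$ on $(0,1)$ with
$u(0) = u(1) = 0$; here the dual variable is just two scalars rather than a
network, and all hyperparameters are illustrative:

    import torch

    torch.manual_seed(0)
    u = torch.nn.Sequential(             # primal network u_theta
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1))
    lam = torch.zeros(2)                 # dual variables at x = 0 and x = 1
    mu = 10.0                            # penalty parameter
    opt = torch.optim.Adam(u.parameters(), lr=1e-3)
    f = lambda x: (torch.pi ** 2) * torch.sin(torch.pi * x)
    xb = torch.tensor([[0.0], [1.0]])    # essential-boundary points

    for step in range(2000):
        x = torch.rand(256, 1, requires_grad=True)   # interior samples
        ux = u(x)
        du = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
        energy = (0.5 * du ** 2 - f(x) * ux).mean()  # Ritz energy of -u'' = f
        r = u(xb).squeeze()                          # boundary residual
        loss = energy + (lam * r).sum() + 0.5 * mu * (r ** 2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
        if (step + 1) % 200 == 0:
            lam = lam + mu * r.detach()              # dual ascent update

The exact solution here is $u(x) = \sin(\pi x)$; note that, in the spirit of
the method, the penalty parameter mu need not be pushed to infinity.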
|
We propose in this work a multi-view learning approach for audio and music
classification. Considering four typical low-level representations (i.e.
different views) commonly used for audio and music recognition tasks, the
proposed multi-view network consists of four subnetworks, each handling one
input type. The learned embeddings of the subnetworks are then concatenated to
form the multi-view embedding for classification, similar to a simple
concatenation network. However, apart from the joint classification branch, the
network also maintains four classification branches on the single-view
embedding of the subnetworks. A novel method is then proposed to keep track of
the learning behavior on the classification branches and adapt their weights to
proportionally blend their gradients for network training. The weights are
adapted in such a way that learning on a branch that is generalizing well will
be encouraged whereas learning on a branch that is overfitting will be slowed
down. Experiments on three different audio and music classification tasks show
that the proposed multi-view network not only outperforms the single-view
baselines but also is superior to the multi-view baselines based on
concatenation and late fusion.
|
Following a field-theoretical approach, we study the scalar Casimir effect
upon a perfectly conducting cylindrical shell in the presence of spontaneous
Lorentz symmetry breaking. The scalar field is modeled by a Lorentz-breaking
extension of the theory for a real scalar quantum field in the bulk regions.
The corresponding Green's functions satisfying Dirichlet boundary conditions on
the cylindrical shell are derived explicitly. We express the Casimir pressure
(i.e. the vacuum expectation value of the normal-normal component of the
stress-energy tensor) as a suitable second-order differential operator acting
on the corresponding Green's functions at coincident arguments. The divergences
are regulated by making use of zeta function techniques, and our results are
successfully compared with the Lorentz invariant case. Numerical calculations
are carried out for the Casimir pressure as a function of the Lorentz-violating
coefficient, and an approximate analytical expression for the force is
presented as well. It turns out that the Casimir pressure strongly depends on
the Lorentz-violating coefficient, which tends to diminish the force.
|
Person search aims to simultaneously localize and identify a query person
from realistic, uncropped images, which can be regarded as the unified task of
pedestrian detection and person re-identification (re-id). Most existing works
employ two-stage detectors like Faster-RCNN, yielding encouraging accuracy but
with high computational overhead. In this work, we present the Feature-Aligned
Person Search Network (AlignPS), the first anchor-free framework to efficiently
tackle this challenging task. AlignPS explicitly addresses the major
challenges, which we summarize as the misalignment issues in different levels
(i.e., scale, region, and task), when accommodating an anchor-free detector for
this task. More specifically, we propose an aligned feature aggregation module
to generate more discriminative and robust feature embeddings by following a
"re-id first" principle. Such a simple design directly improves the baseline
anchor-free model on CUHK-SYSU by more than 20% in mAP. Moreover, AlignPS
outperforms state-of-the-art two-stage methods while running at a higher speed. Code is
available at https://github.com/daodaofr/AlignPS
|
With the growth of natural language processing techniques and demand for
improved software engineering efficiency, there is an emerging interest in
translating intention from human languages to programming languages. In this
survey paper, we attempt to provide an overview of the growing body of research
in this space. We begin by reviewing natural language semantic parsing
techniques and draw parallels with program synthesis efforts. We then consider
semantic parsing works from an evolutionary perspective, with specific analyses
on neuro-symbolic methods, architecture, and supervision. We then analyze
advancements in frameworks for semantic parsing for code generation. In
closing, we present what we believe are some of the emerging open challenges in
this domain.
|
Data sampling plays a pivotal role in training deep learning models.
However, an effective sampling schedule is difficult to learn due to the
inherently high dimension of parameters in learning the sampling schedule. In
this paper, we propose an AutoSampling method to automatically learn sampling
schedules for model training, which consists of the multi-exploitation step
aiming for optimal local sampling schedules and the exploration step for the
ideal sampling distribution. More specifically, we achieve sampling schedule
search with a shortened exploitation cycle to provide enough supervision. In
addition, we periodically estimate the sampling distribution from the learned
sampling schedules and perturb it to search in the distribution space. The
combination of two searches allows us to learn a robust sampling schedule. We
apply our AutoSampling method to a variety of image classification tasks,
illustrating the effectiveness of the proposed method.
|
Standard quantum mechanics has been formulated with complex-valued
Schrodinger equations, wave functions, operators, and Hilbert spaces. However,
previous work has shown that it is possible to simulate quantum systems using only real
numbers by adding extra qubits and exploiting an enlarged Hilbert space. A
fundamental question arises: are the complex numbers really necessary for the
quantum mechanical description of nature? To answer this question, a non-local
game has been developed to reveal a contradiction between a multiqubit quantum
experiment and a player using only real numbers. Here, based on deterministic
and high-fidelity entanglement swapping with superconducting qubits, we
experimentally implement the Bell-like game and observe a quantum score of
8.09(1), which beats the real number bound of 7.66 by 43 standard deviations.
Our results disprove the real-number description of nature and establish the
indispensable role of complex numbers in quantum mechanics.
|
We initiate the work towards a comprehensive picture of the smoothed
satisfaction of voting axioms, to provide a finer and more realistic foundation
for comparing voting rules. We adopt the smoothed social choice framework,
where an adversary chooses arbitrarily correlated "ground truth" preferences
for the agents, on top of which random noises are added. We focus on
characterizing the smoothed satisfaction of two well-studied voting axioms:
Condorcet criterion and participation. We prove that for any fixed number of
alternatives, when the number of voters $n$ is sufficiently large, the smoothed
satisfaction of the Condorcet criterion under a wide range of voting rules is
$1$, $1-\exp(-\Theta(n))$, $\Theta(n^{-0.5})$, $\exp(-\Theta(n))$, or being
$\Theta(1)$ and $1-\Theta(1)$ at the same time; and the smoothed satisfaction
of participation is $1-\Theta(n^{-0.5})$. Our results address open questions by
Berg and Lepelley in 1994 for these rules, and also confirm the following
high-level message: the Condorcet criterion is a bigger concern than
participation under realistic models.
|
The deep-learning-based image restoration and fusion methods have achieved
remarkable results. However, the existing restoration and fusion methods have paid
little research attention to the robustness problem caused by dynamic
degradation. In this paper, we propose a novel dynamic image restoration and
fusion neural network, termed DDRF-Net, which is capable of solving two
problems, i.e., static restoration and fusion, and dynamic degradation. In order to
solve the static fusion problem of existing methods, dynamic convolution is
introduced to learn dynamic restoration and fusion weights. In addition, a
dynamic degradation kernel is proposed to improve the robustness of image
restoration and fusion. Our network framework can effectively combine image
degradation with image fusion tasks, provide more detailed information for
image fusion tasks through image restoration loss, and optimize image
restoration tasks through image fusion loss. Therefore, the stumbling blocks of
deep learning in image fusion, e.g., static fusion weight and specifically
designed network architecture, are greatly mitigated. Extensive experiments
show that our method is superior to the state-of-the-art
methods.
|
In the present paper, we introduce a special function on the Drinfeld period
domain $\Omega^{r}$ for $r\geq 2$ which gives the false Eisenstein series of
Gekeler when $r=2$. We also study its functional equation and relation with
quasi-periodic functions of a Drinfeld module as well as transcendence of its
values at CM points.
|
Multifractal systems usually have singularity spectra defined on bounded sets
of H\"older exponents. As a consequence, their associated multifractal scaling
exponents are expected to depend linearly on the statistical moment order at
high enough orders -- a phenomenon referred to as the {\it{linearization
effect}}. Motivated by general ideas taken from models of turbulent
intermittency and focusing on the case of two-dimensional systems, we
investigate the issue within the framework of Gaussian multiplicative chaos. As
verified by means of Monte Carlo simulations, it turns out that the
linearization effect can be accounted for by Liouville-like random measures
defined in terms of upper-bounded scalar fields. The coarse-grained statistical
properties of Gaussian multiplicative chaos are furthermore found to be
preserved in the linear regime of the scaling exponents. As a related
application, we look at the problem of turbulent circulation statistics, and
obtain a remarkably accurate evaluation of circulation statistical moments,
recently determined with the help of massive numerical simulations.
|
This paper outlines two approaches, based on counterexample-guided abstraction
refinement (CEGAR) and counterexample-guided inductive synthesis (CEGIS),
respectively, to the automated synthesis of finite-state probabilistic models
and programs. Our CEGAR approach iteratively partitions the design space
starting from an abstraction of this space and refines this by a light-weight
analysis of verification results. The CEGIS technique exploits critical
subsystems as counterexamples to prune all programs behaving incorrectly on
that input. We show the applicability of these synthesis techniques to
sketching of probabilistic programs, controller synthesis of POMDPs, and
software product lines.
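A hypothetical toy illustration of the CEGIS loop (ours, not from the paper):
find an integer constant c such that prog(x) = x + c satisfies the spec
prog(x) > 2*x on a finite input domain, pruning candidates by counterexamples.

    def verify(c, domain=range(-10, 11)):
        """Return a counterexample input, or None if the program is correct."""
        for x in domain:
            if not (x + c > 2 * x):
                return x
        return None

    def synthesize(candidates, counterexamples):
        """Return a candidate consistent with all counterexamples seen so far."""
        for c in candidates:
            if all(x + c > 2 * x for x in counterexamples):
                return c
        return None

    counterexamples = []
    while True:
        c = synthesize(range(-20, 21), counterexamples)
        if c is None:
            print("no candidate is consistent; sketch space exhausted")
            break
        cex = verify(c)
        if cex is None:
            print("synthesized: x + %d" % c)
            break
        counterexamples.append(cex)  # prune every program failing on cex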
|
The increasing performance requirements of modern applications place a
significant burden on software-based packet processing. Most of today's
software input/output accelerations achieve high performance at the expense of
reserving CPU resources dedicated to continuously polling the Network Interface
Card. This is specifically the case with DPDK (Data Plane Development Kit),
probably the most widely used framework for software-based packet processing
today. The approach presented in this paper, descriptively called Metronome,
has the dual goals of providing CPU utilization proportional to the load, and
allowing flexible sharing of CPU resources between I/O tasks and applications.
Metronome replaces DPDK's continuous polling with an intermittent sleep&wake
mode, and revolves around a new multi-threaded operation, which improves
service continuity. Since the proposed operation trades CPU usage for
buffering delay, we propose an analytical model devised to dynamically adapt
the sleep&wake parameters to the actual traffic load, while providing a
target average latency. Our experimental results show a significant reduction
of the CPU cycles, improvements in power usage, and robustness to CPU sharing
even when challenged with CPU-intensive applications.
|
In a logistics warehouse, since many objects are randomly stacked on shelves,
it becomes difficult for a robot to safely extract one of the objects without
other objects falling from the shelf. In previous works, a robot needed to
extract the target object after rearranging the neighboring objects. In
contrast, humans extract an object from a shelf while supporting other
neighboring objects. In this paper, we propose a bimanual manipulation planner
based on collapse prediction trained with data generated from a physics
simulator, which can safely extract a single object while supporting the other
objects. We confirmed that the proposed method achieves a more than 80% success
rate for safe extraction in real-world experiments using a dual-arm
manipulator.
|
For many practical computer vision applications, the learned models usually
have high performance on the datasets used for training but suffer from
significant performance degradation when deployed in new environments, where
there are usually style differences between the training images and the testing
images. An effective domain generalizable model is expected to be able to learn
feature representations that are both generalizable and discriminative. In this
paper, we design a novel Style Normalization and Restitution module (SNR) to
simultaneously ensure both high generalization and discrimination capability of
the networks. In the SNR module, particularly, we filter out the style
variations (e.g., illumination, color contrast) by performing Instance
Normalization (IN) to obtain style normalized features, where the discrepancy
among different samples and domains is reduced. However, such a process is
task-ignorant and inevitably removes some task-relevant discriminative
information, which could hurt the performance. To remedy this, we propose to
distill task-relevant discriminative features from the residual (i.e., the
difference between the original feature and the style normalized feature) and
add them back to the network to ensure high discrimination. Moreover, for
better disentanglement, we enforce a dual causality loss constraint in the
restitution step to encourage the better separation of task-relevant and
task-irrelevant features. We validate the effectiveness of our SNR on different
computer vision tasks, including classification, semantic segmentation, and
object detection. Experiments demonstrate that our SNR module is capable of
improving the performance of networks for domain generalization (DG) and
unsupervised domain adaptation (UDA) on many tasks. Code is available at
https://github.com/microsoft/SNR.
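A hypothetical simplified sketch of the normalization-and-restitution idea
(ours, following the description above rather than the released code; the
channel-attention gate is an assumption):

    import torch
    import torch.nn as nn

    class SNRBlock(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            self.inorm = nn.InstanceNorm2d(channels, affine=True)
            # channel attention used to pick task-relevant residual features
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels, 1),
                nn.Sigmoid())

        def forward(self, x):
            normed = self.inorm(x)      # style (e.g., illumination) removed
            residual = x - normed       # what IN discarded
            relevant = self.gate(residual) * residual
            return normed + relevant    # restitute the discriminative part

    feat = torch.randn(2, 64, 32, 32)
    out = SNRBlock(64)(feat)
    print(out.shape)  # torch.Size([2, 64, 32, 32])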
|
Multi-agent simulations provide a scalable environment for learning policies
that interact with rational agents. However, such policies may fail to
generalize to the real world, where agents may differ from simulated
counterparts due to unmodeled irrationality and misspecified reward functions.
We introduce Epsilon-Robust Multi-Agent Simulation (ERMAS), a robust
optimization framework for learning AI policies that are robust to such
multiagent sim-to-real gaps. While existing notions of multi-agent robustness
concern perturbations in the actions of agents, we address a novel robustness
objective concerning perturbations in the reward functions of agents. ERMAS
provides this robustness by anticipating suboptimal behaviors from other
agents, formalized as the worst-case epsilon-equilibrium. We show empirically
that ERMAS yields robust policies for repeated bimatrix games and optimal
taxation problems in economic simulations. In particular, in the two-level RL
problem posed by the AI Economist (Zheng et al., 2020), ERMAS learns tax
policies that are robust to changes in agent risk aversion, improving social
welfare by up to 15% in complex spatiotemporal simulations.
|
Feshbach resonances are an invaluable tool in atomic physics, enabling
precise control of interactions and the preparation of complex quantum phases
of matter. Here, we theoretically analyze a solid-state analogue of a Feshbach
resonance in two-dimensional semiconductor heterostructures. In the presence of
inter-layer electron tunneling, the scattering of excitons and electrons
occupying different layers can be resonantly enhanced by tuning an applied
electric field. The emergence of an inter-layer Feshbach molecule modifies the
optical excitation spectrum, and can be understood in terms of Fermi polaron
formation. We discuss potential implications for the realization of correlated
Bose-Fermi mixtures in bilayer semiconductors.
|
Precise in-situ measurements of the neutron flux in underground laboratories
are crucial for direct dark matter searches, as neutron-induced backgrounds can
mimic the typical dark matter signal. The development of a novel neutron
spectroscopy technique using Spherical Proportional Counters is investigated.
The detector is operated with nitrogen and is sensitive to both fast and
thermal neutrons through the $^{14}$N(n, $\alpha$)$^{11}$B and $^{14}$N(n,
p)$^{14}$C reactions. This method holds potential to be a safe, inexpensive,
effective, and reliable alternative to $^3$He-based detectors. Measurements of
fast and thermal neutrons from an Am-Be source with a Spherical Proportional
Counter operated at pressures up to 2 bar at Birmingham are discussed.
|
Electrostatic reaction inhibition in heterogeneous catalysis emerges if
charged reactants and products are adsorbed on the catalyst and thus repel the
approaching reactants. In this work, we study the effects of electrostatic
inhibition on the reaction rate of unimolecular reactions catalyzed on the
surface of a spherical model nanoparticle by using particle-based
reaction-diffusion simulations. Moreover, we derive closed rate equations based
on approximate Debye-Smoluchowski rate theory, valid for diffusion-controlled
reactions, and a modified Langmuir adsorption isotherm, relevant for
reaction-controlled reactions, to account for electrostatic inhibition in the
Debye-H\"uckel limit. We study the kinetics of reactions ranging from low to
high adsorptions on the nanoparticle surface and from the surface- to
diffusion-controlled limits for charge valencies 1 and 2. In the
diffusion-controlled limit, electrostatic inhibition drastically slows down the
reactions for strong adsorption and low ionic concentration, which is well
described by our theory. In particular, the rate decreases with adsorption
affinity, because in this case the inhibiting products are generated at a high
rate. In the (slow) reaction-controlled limit, the effect of electrostatic
inhibition is much weaker, as semi-quantitatively reproduced by our
electrostatic-modified Langmuir theory. We finally propose and verify a simple
interpolation formula that describes electrostatic inhibition for all reaction
speeds (`diffusion-influenced' reactions) in general.
|
This paper addresses outdoor terrain mapping using overhead images obtained
from an unmanned aerial vehicle. Dense depth estimation from aerial images
during flight is challenging. While feature-based localization and mapping
techniques can deliver real-time odometry and sparse points reconstruction, a
dense environment model is generally recovered offline with significant
computation and storage. This paper develops a joint 2D-3D learning approach to
reconstruct local meshes at each camera keyframe, which can be assembled into a
global environment model. Each local mesh is initialized from sparse depth
measurements. We associate image features with the mesh vertices through camera
projection and apply graph convolution to refine the mesh vertices based on
joint 2-D reprojected depth and 3-D mesh supervision. Quantitative and
qualitative evaluations using real aerial images show the potential of our
method to support environmental monitoring and surveillance applications.
|
Intermediate band solar cells (IBSCs) pursue the increase in efficiency by
absorbing below-bandgap energy photons while preserving the output voltage.
Experimental IBSCs based on quantum dots have already demonstrated that both
below-bandgap photon absorption and output voltage preservation are
possible. However, the experimental work has also revealed that the
below-bandgap absorption of light is weak and insufficient to boost the
efficiency of the solar cells. The objective of this work is to contribute to
the study of this absorption by manufacturing and characterizing a quantum dot
intermediate band solar cell with a single quantum dot layer with and without
light trapping elements. Using one-dimensional substrate texturing, our results
show a three-fold increase in the absorption of below bandgap energy photons in
the lowest energy region of the spectrum, a region not previously explored
using this approach. Furthermore, we also measure, at 9 K, a distinct splitting
of the quasi-Fermi levels between the conduction and intermediate bands, which is a
necessary condition to preserve the output voltage of the cell.
|
We present a randomized $O(m \log^2 n)$ work, $O(\text{polylog } n)$ depth
parallel algorithm for minimum cut. This algorithm matches the work bounds of a
recent sequential algorithm by Gawrychowski, Mozes, and Weimann [ICALP'20], and
improves on the previously best parallel algorithm by Geissmann and Gianinazzi
[SPAA'18], which performs $O(m \log^4 n)$ work in $O(\text{polylog } n)$ depth.
Our algorithm makes use of three components that might be of independent
interest. Firstly, we design a parallel data structure that efficiently
supports batched mixed queries and updates on trees. It generalizes and
improves the work bounds of a previous data structure of Geissmann and
Gianinazzi and is work efficient with respect to the best sequential algorithm.
Secondly, we design a parallel algorithm for approximate minimum cut that
improves on previous results by Karger and Motwani. We use this algorithm to
give a work-efficient procedure to produce a tree packing, as in Karger's
sequential algorithm for minimum cuts. Lastly, we design an efficient parallel
algorithm for solving the minimum $2$-respecting cut problem.
|
In this paper, the security-aware robust resource allocation in energy
harvesting cognitive radio networks is considered with cooperation between two
transmitters while there are uncertainties in channel gains and battery energy
value. To be specific, the primary access point harvests energy from the green
resource and uses time switching protocol to send the energy and data towards
the secondary access point (SAP). Using power-domain non-orthogonal multiple
access technique, the SAP helps the primary network to improve the security of
data transmission by using the frequency band of the primary network. In this
regard, we introduce the problem of maximizing the proportional-fair energy
efficiency (PFEE) considering uncertainty in the channel gains and battery
energy value subject to the practical constraints. Moreover, the channel gain
of the eavesdropper is assumed to be unknown. Employing the decentralized
partially observable Markov decision process, we investigate the solution of
the corresponding resource allocation problem. We exploit multi-agent with
single reward deep deterministic policy gradient (MASRDDPG) and recurrent
deterministic policy gradient (RDPG) methods. These methods are compared with
the state-of-the-art ones like multi-agent and single-agent DDPG. Simulation
results show that both the MASRDDPG and RDPG methods outperform the
state-of-the-art methods by providing more PFEE to the network.
|
Nanophononic materials are promising to control the transport of sound in the
GHz range and heat in the THz range. Here we are interested in the influence of
a dendritic shape of inclusion on acoustic attenuation. We perform a Finite
Element numerical simulation of the transient propagation of an acoustic
wave-packet in 2D nanophononic materials with circular or dendritic inclusions
periodically distributed in matrix. By measuring the penetration length,
diffusivity, and instantaneous wave velocity, we find that the multi-branching
tree-like form of dendrites provides a continuous source of phonon-interface
scattering leading to an increasing acoustic attenuation. When the wavelength
is far less than the inter-inclusion distance, we report a strong attenuation
process in the dendritic case which can be fitted by a compressed exponential
function with $\beta>1$.
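Schematically, the compressed exponential attenuation referred to above takes
the form (the observable and fit range are as specified in the paper)

\[ A(x) \;\propto\; \exp\!\left[-\left(\frac{x}{\ell}\right)^{\beta}\right], \qquad \beta > 1, \]

where $\ell$ is an attenuation length; $\beta = 1$ would recover simple
exponential decay.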
|
We review the formation and evolution of fossil groups and clusters from both
the theoretical and the observational points of view. In the optical band,
these systems are dominated by the light of the central galaxy. They were
interpreted as old systems that had enough time to merge all the M* galaxies
within the central one. During the last two decades many observational studies
were performed to prove the old and relaxed state of fossil systems. The
majority of these studies, that spans a wide range of topics including halos
global scaling relations, dynamical substructures, stellar populations, and
galaxy luminosity functions, seem to challenge this scenario. The general
picture that can be obtained by reviewing all the observational works is that
the fossil state could be transitional. Indeed, the formation of the large
magnitude gap observed in fossil systems could be related to internal processes
rather than an old formation.
|
Nonlinear fractional differential equations have gained a significant place
in mathematical physics. Finding the solutions to these equations has emerged
as a field of study that has attracted a lot of attention lately. In this work,
the semi-inverse variational method of He and the ansatz method have been applied to
find the soliton solutions for the fractional Korteweg-de Vries equation, the
fractional equal width equation, and the fractional modified equal width equation
defined by the conformable derivative of Atangana (beta-derivative). These two
methods are effective methods employed to get the soliton solutions of these
nonlinear equations. All of the calculations in this work have been carried out
using the Maple program, and the solutions have been substituted back into the
equations to confirm their accuracy. In addition, graphics of some of the
solutions are also included. The solutions found in this study have the
potential to be useful in mathematical physics and engineering.
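For reference, the beta-derivative of Atangana used above is usually written
(to the best of our recollection; see the original reference for the exact
conventions) as

\[ D^{\beta} f(t) \;=\; \lim_{\varepsilon \to 0} \frac{f\!\left(t + \varepsilon \left(t + \tfrac{1}{\Gamma(\beta)}\right)^{1-\beta}\right) - f(t)}{\varepsilon}, \qquad 0 < \beta \leq 1, \]

which reduces to the ordinary derivative as $\beta \to 1$.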
|
Designing a speech-to-intent (S2I) agent which maps the users' spoken
commands to the agents' desired task actions can be challenging due to the
diverse grammatical and lexical preference of different users. As a remedy, we
discuss a user-taught S2I system in this paper. The user-taught system learns
from scratch from the users' spoken input with action demonstration, which
ensures it is fully matched to the users' way of formulating intents and their
articulation habits. The main issue is the scarce training data due to the user
effort involved. Existing state-of-the-art approaches in this setting are based on
non-negative matrix factorization (NMF) and capsule networks. In this paper we
combine the encoder of an end-to-end ASR system with the prior NMF/capsule
network-based user-taught decoder, and investigate whether pre-training
methodology can reduce training data requirements for the NMF and capsule
network. Experimental results show the pre-trained ASR-NMF framework
significantly outperforms other models, and also, we discuss limitations of
pre-training with different types of command-and-control (C&C) applications.
|
Deep reinforcement learning for end-to-end driving is limited by the need for
complex reward engineering. Sparse rewards can circumvent this challenge but
suffer from long training times and lead to sub-optimal policies. In this work,
we explore full-control driving with only goal-constrained sparse reward and
propose a curriculum learning approach for end-to-end driving using only
navigation view maps that benefit from small virtual-to-real domain gap. To
address the complexity of multiple driving policies, we learn concurrent
individual policies selected at inference by a navigation system. We
demonstrate the ability of our proposal to generalize to unseen road layouts,
and to drive significantly longer than in training.
|
We systematically study linear and nonlinear wave propagation in a chain
composed of piecewise-linear bistable springs. Such bistable systems are ideal
testbeds for supporting nonlinear wave dynamical features including transition
and (supersonic) solitary waves. We show that bistable chains can support the
propagation of subsonic wavepackets which in turn can be trapped by a
low-energy phase to induce energy localization. The spatial distribution of
these energy foci strongly affects the propagation of linear waves, typically
causing scattering, but, in special cases, leading to a reflectionless mode
analogous to the Ramsauer-Townsend (RT) effect. Further, we show that the
propagation of nonlinear waves can spontaneously generate or remove additional
foci, which act as effective "impurities". This behavior serves as a novel
mechanism for reversibly programming the dynamic response of bistable chains.
|
Today we have quite stringent constraints on possible violations of the Weak
Equivalence Principle from the comparison of the acceleration of test-bodies of
different composition in Earth's gravitational field. In the present paper, we
propose a test of the Weak Equivalence Principle in the strong gravitational
field of black holes. We construct a relativistic reflection model in which
either the massive particles of the gas of the accretion disk or the photons
emitted by the disk may not follow the geodesics of the spacetime. We employ
our model to analyze the reflection features of a NuSTAR spectrum of the black
hole binary EXO 1846-031 and we constrain two parameters that quantify a
possible violation of the Weak Equivalence Principle by massive particles and
X-ray photons, respectively.
|
The Internet of Things (IoT) has allowed smart homes to improve the quality and the
comfort of our daily lives. However, these conveniences introduced several
security concerns that are increasing rapidly. IoT devices, smart home hubs, and
gateways raise various security risks. The smart home gateways act as a
centralized point of communication between the IoT devices, which can create a
backdoor into network data for hackers. One of the common and effective ways to
detect such attacks is intrusion detection in the network traffic. In this
paper, we propose an intrusion detection system (IDS) to detect anomalies in a
smart home network using a bidirectional long short-term memory (BiLSTM) and
convolutional neural network (CNN) hybrid model. The BiLSTM recurrent behavior
allows the intrusion detection model to preserve the learned information
through time, and the CNN effectively extracts the data features. The proposed
model can be applied to any smart home network gateway.
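A hypothetical sketch of such a CNN + BiLSTM hybrid (ours; the layer sizes
and feature count are illustrative, not the paper's configuration):

    import torch
    import torch.nn as nn

    class CNNBiLSTM(nn.Module):
        def __init__(self, n_features: int = 40, n_classes: int = 2):
            super().__init__()
            self.cnn = nn.Sequential(        # extract local feature patterns
                nn.Conv1d(n_features, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool1d(2))
            self.bilstm = nn.LSTM(64, 32, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * 32, n_classes)

        def forward(self, x):                # x: (batch, time, features)
            z = self.cnn(x.transpose(1, 2))  # -> (batch, 64, time // 2)
            out, _ = self.bilstm(z.transpose(1, 2))
            return self.head(out[:, -1])     # classify from the last time step

    model = CNNBiLSTM()
    logits = model(torch.randn(8, 20, 40))   # 8 flows, 20 steps, 40 features
    print(logits.shape)                      # torch.Size([8, 2])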
|
In this paper, a dynamical process in a statistical thermodynamic system of
spins exhibiting a phase transition is described on a contact manifold, where
such a dynamical process is a process that a metastable equilibrium state
evolves into the most stable symmetry broken equilibrium state. Metastable and
equilibrium states in the symmetry broken phase or ordered phase are assumed to
be described as pruned projections of Legendre submanifolds of contact
manifolds, where these pruned projections of the submanifolds express
hysteresis and pseudo-free energy curves. Singularities associated with phase
transitions naturally arise in this framework, as has been suggested by
Legendre singularity theory. Then a particular contact Hamiltonian vector field
is proposed so that a pruned segment of the projected Legendre submanifold is a
stable fixed point set in a region of a contact manifold, and that another
pruned segment is an unstable fixed point set. This contact Hamiltonian vector
field is identified with a dynamical process departing from a metastable
equilibrium state to the most stable equilibrium one. To show the statements
above explicitly, an Ising-type spin model with long-range interactions, called
the Husimi-Temperley model, is considered; this model exhibits a phase
transition.
|
Face presentation attack detection plays a critical role in the modern face
recognition pipeline. A face presentation attack detection model with good
generalization can be obtained when it is trained with face images from
different input distributions and different types of spoof attacks. In reality,
training data (both real face images and spoof images) are not directly shared
between data owners due to legal and privacy issues. In this paper, with the
motivation of circumventing this challenge, we propose a Federated Face
Presentation Attack Detection (FedPAD) framework that simultaneously takes
advantage of rich fPAD information available at different data owners while
preserving data privacy. In the proposed framework, each data center locally
trains its own fPAD model. A server learns a global fPAD model by iteratively
aggregating model updates from all data centers without accessing private data
in each of them. To equip the aggregated fPAD model in the server with better
generalization ability to unseen attacks from users, following the basic idea
of FedPAD, we further propose a Federated Generalized Face Presentation Attack
Detection (FedGPAD) framework. A federated domain disentanglement strategy is
introduced in FedGPAD, which treats each data center as one domain and
decomposes the fPAD model into domain-invariant and domain-specific parts in
each data center. These two parts disentangle the domain-invariant and
domain-specific features from images in each local data center, respectively. A
server learns a global fPAD model by only aggregating domain-invariant parts of
the fPAD models from data centers, and thus a more generalized fPAD model can be
aggregated in the server. We introduce the experimental setting to evaluate the
proposed FedPAD and FedGPAD frameworks and carry out extensive experiments to
provide various insights about federated learning for fPAD.
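A hypothetical sketch of the aggregation step (ours; the parameter-naming
convention marking the domain-invariant part is an assumption): the server
averages, FedAvg-style, only the domain-invariant parameters, leaving
domain-specific parameters local.

    import torch

    def aggregate_invariant(client_states, invariant_prefix="invariant."):
        """Average parameters whose names mark them as domain-invariant."""
        keys = [k for k in client_states[0] if k.startswith(invariant_prefix)]
        return {k: torch.stack([s[k].float() for s in client_states]).mean(0)
                for k in keys}

    # toy example: two data centers, each with invariant and specific parts
    c1 = {"invariant.w": torch.ones(2, 2), "specific.w": torch.zeros(2)}
    c2 = {"invariant.w": 3 * torch.ones(2, 2), "specific.w": torch.ones(2)}
    global_update = aggregate_invariant([c1, c2])
    print(global_update["invariant.w"])  # all entries 2.0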
|
We present how we formalize the waiting tables task in a restaurant as a
robot planning problem. This formalization was used to test our recently
developed algorithms that allow for optimal planning for achieving multiple
independent tasks that are partially observable and evolve over time [1], [2].
|
Recently, a number of backdoor attacks against Federated Learning (FL) have
been proposed. In such attacks, an adversary injects poisoned model updates
into the federated model aggregation process with the goal of manipulating the
aggregated model to provide false predictions on specific adversary-chosen
inputs. A number of defenses have been proposed, but none of them can
effectively protect the FL process also against so-called multi-backdoor
attacks in which multiple different backdoors are injected by the adversary
simultaneously without severely impacting the benign performance of the
aggregated model. To overcome this challenge, we introduce FLGUARD, a poisoning
defense framework that is able to defend FL against state-of-the-art backdoor
attacks while simultaneously maintaining the benign performance of the
aggregated model. Moreover, FL is also vulnerable to inference attacks, in
which a malicious aggregator can infer information about clients' training data
from their model updates. To thwart such attacks, we augment FLGUARD with
state-of-the-art secure computation techniques that securely evaluate the
FLGUARD algorithm. We provide formal arguments for the effectiveness of our
FLGUARD and extensively evaluate it against known backdoor attacks on several
datasets and applications (including image classification, word prediction, and
IoT intrusion detection), demonstrating that FLGUARD can entirely remove
backdoors with a negligible effect on accuracy. We also show that private
FLGUARD achieves practical runtimes.
|
To demonstrate the ability of standard arithmetic operations to perform a
variety of digit manipulation tasks, a closed-form representation of the Conway
Base-13 Function over the integers is given.
|
Missing data is a challenge in many applications, including intelligent
transportation systems (ITS). In this paper, we study traffic speed and travel
time estimations in ITS, where portions of the collected data are missing due
to sensor instability and communication errors at collection points. These
practical issues can be remediated by missing data analysis methods, which are mainly
categorized as either statistical or machine learning (ML)-based approaches.
Statistical methods require the prior probability distribution of the data
which is unknown in our application. Therefore, we focus on an ML-based
approach, Multi-Directional Recurrent Neural Network (M-RNN). M-RNN utilizes
both temporal and spatial characteristics of the data. We evaluate the
effectiveness of this approach on a TomTom dataset containing spatio-temporal
measurements of average vehicle speed and travel time in the Greater Toronto
Area (GTA). We evaluate the method under various conditions, where the results
demonstrate that M-RNN outperforms existing solutions, e.g., spline
interpolation and matrix completion, with up to a 58% decrease in Root Mean Square
Error (RMSE).
|
When baryon-quark continuity is formulated in terms of a topology change
without invoking "explicit " QCD degrees of freedom at a density higher than
twice the nuclear matter density $n_0$ the core of massive compact stars can be
described in terms of fractionally charged particles, behaving neither like
pure baryons nor deconfined quarks. Hidden symmetries, both local gauge and
pseudo-conformal (or broken scale), lead to the pseudo-conformal (PC) sound
velocity $v_{pcs}^2/c^2\approx 1/3$ at $\gsim 3n_0$ in compact stars. We argue
these symmetries are "emergent" from strong nuclear correlations and conjecture
that they reflect hidden symmetries in QCD proper exposed by nuclear
correlations. We establish a possible link between the quenching of $g_A$ in
superallowed Gamow-Teller transitions in nuclei and the precocious onset at
$n\gsim 3n_0$ of the PC sound velocity predicted at the dilaton limit fixed
point. We propose that bringing in explicit quark degrees of freedom as is done
in terms of the "quarkyonic" and other hybrid hadron-quark structure and our
topology-change strategy represent the "hadron-quark duality" formulated in
terms of the Cheshire-Cat mechanism~\cite{CC} for the smooth cross-over between
hadrons and quarks. Confrontation with currently available experimental
observations is discussed to support this notion.
|
The particle momentum anisotropy ($v_n$) produced in relativistic nuclear
collisions is considered to be a response of the initial geometry or the
spatial anisotropy $\epsilon_n$ of the system formed in these collisions. The
linear correlation between $\epsilon_n$ and $v_n$ quantifies the efficiency at
which the initial spatial eccentricity is converted to final momentum
anisotropy in heavy ion collisions. We study the transverse momentum, collision
centrality, and beam energy dependence of this correlation for different
charged particles using a hydrodynamical model framework. The ($\epsilon_n
-v_n$) correlation is found to be stronger for central collisions and also for
n=2 compared to that for n=3 as expected. However, the transverse momentum
($p_T$) dependent correlation coefficient shows interesting features which
strongly depends on the mass as well as $p_T$ of the emitted particle. The
correlation strength is found to be larger for lighter particles in the lower
$p_T$ region. We see that the relative fluctuation in anisotropic flow depends
strongly on the value of $\eta/s$, especially in the region $p_T <1$ GeV, unlike
the correlation coefficient, which does not show significant dependence on
$\eta/s$.
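For concreteness, the linear ($\epsilon_n - v_n$) correlation is typically quantified with a Pearson coefficient over an event ensemble; a small numpy sketch on synthetic events:

```python
import numpy as np

def eps_vn_correlation(eps_n, v_n):
    """Pearson correlation between initial eccentricity and final flow,
    computed event by event."""
    eps_n, v_n = np.asarray(eps_n), np.asarray(v_n)
    de, dv = eps_n - eps_n.mean(), v_n - v_n.mean()
    return np.sum(de * dv) / np.sqrt(np.sum(de ** 2) * np.sum(dv ** 2))

# toy ensemble: v_2 responds roughly linearly to eps_2, plus noise
rng = np.random.default_rng(1)
eps2 = rng.uniform(0.05, 0.5, 10_000)
v2 = 0.2 * eps2 + rng.normal(0, 0.01, eps2.size)
print(f"c(eps_2, v_2) = {eps_vn_correlation(eps2, v2):.3f}")
```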
|
Multi-party computation (MPC) is promising for privacy-preserving machine
learning algorithms at edge networks, like federated learning. Despite their
potential, existing MPC algorithms fall short of adapting to the limited
resources of edge devices. A promising solution, and the focus of this work, is
coded computation, which advocates the use of error-correcting codes to improve
the performance of distributed computing through "smart" data redundancy. In
this paper, we focus on coded privacy-preserving computation using Shamir's
secret sharing. In particular, we design novel coded privacy-preserving
computation mechanisms, MatDot coded MPC (MatDot-CMPC) and PolyDot coded MPC
(PolyDot-CMPC), by employing the recently proposed coded computation algorithms
MatDot and PolyDot. We take advantage of the "garbage terms" that naturally
arise when polynomials are constructed in the design of MatDot-CMPC and
PolyDot-CMPC to reduce the number of workers needed for privacy-preserving
computation. Also, we analyze MatDot-CMPC and PolyDot-CMPC in terms of their
computation, storage, and communication overhead, as well as their recovery
threshold, so that they can easily adapt to the limited resources of edge devices.
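As background, a minimal sketch of the Shamir secret-sharing primitive underlying CMPC (the MatDot/PolyDot polynomial constructions themselves are not reproduced here); the field modulus is an illustrative choice:

```python
import random

P = 2**61 - 1  # large prime field modulus (illustrative choice)

def share(secret, t, n):
    """Split `secret` into n Shamir shares with threshold t."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=42, t=3, n=5)
assert reconstruct(shares[:3]) == 42  # any 3 of the 5 shares suffice
```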
|
In label-noise learning, the transition matrix plays a key role in building
statistically consistent classifiers. Existing consistent estimators for the
transition matrix have been developed by exploiting anchor points. However, the
anchor-point assumption is not always satisfied in real scenarios. In this
paper, we propose an end-to-end framework for solving label-noise learning
without anchor points, in which we simultaneously optimize two objectives: the
cross entropy loss between the noisy label and the predicted probability by the
neural network, and the volume of the simplex formed by the columns of the
transition matrix. Our proposed framework can identify the transition matrix if
the clean class-posterior probabilities are sufficiently scattered. This is by
far the mildest assumption under which the transition matrix is provably
identifiable and the learned classifier is statistically consistent.
Experimental results on benchmark datasets demonstrate the effectiveness and
robustness of the proposed method.
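A hedged PyTorch sketch of the two-objective idea: cross entropy through a row-stochastic transition matrix plus a volume penalty, with log det T used as a common proxy for simplex volume (an assumption here, not necessarily the paper's exact regularizer):

```python
import torch
import torch.nn.functional as F

def vol_min_loss(logits, noisy_labels, T_params, lam=1e-4):
    """Noisy-label cross entropy through T, plus a volume penalty."""
    clean_post = F.softmax(logits, dim=1)    # P(clean class | x)
    T = F.softmax(T_params, dim=1)           # row-stochastic transition matrix
    noisy_post = clean_post @ T              # P(noisy class | x)
    ce = F.nll_loss(torch.log(noisy_post + 1e-12), noisy_labels)
    vol = torch.logdet(T)                    # proxy for simplex volume
    return ce + lam * vol

logits = torch.randn(8, 3, requires_grad=True)
T_params = (5 * torch.eye(3)).requires_grad_()  # init near identity
loss = vol_min_loss(logits, torch.randint(0, 3, (8,)), T_params)
loss.backward()
```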
|
We study the dimer model on subgraphs of the square lattice in which vertices
on a prescribed part of the boundary (the free boundary) are possibly
unmatched. Each such unmatched vertex is called a monomer and contributes a
fixed multiplicative weight $z>0$ to the total weight of the configuration. A
bijection described by Giuliani, Jauslin and Lieb relates this model to a
standard dimer model but on a non-bipartite graph. The Kasteleyn matrix of this
dimer model describes a walk with transition weights that are negative along
the free boundary. Yet under certain assumptions, which are in particular
satisfied in the infinite volume limit in the upper half-plane, we prove an
effective, true random walk representation for the inverse Kasteleyn matrix. In
this case we further show that, independently of the value of $z>0$, the
scaling limit of the height function is the Gaussian free field with Neumann
(or free) boundary conditions, thereby answering a question of Giuliani et al.
|
In this paper, we present a deep learning model that exploits the power of
self-supervision to perform 3D point cloud completion, estimating the missing
part and a context region around it. Local and global information are encoded
in a combined embedding. A denoising pretext task provides the network with the
needed local cues, decoupled from the high-level semantics and naturally shared
over multiple classes. On the other hand, contrastive learning maximizes the
agreement between variants of the same shape with different missing portions,
thus producing a representation which captures the global appearance of the
shape. The combined embedding inherits category-agnostic properties from the
chosen pretext tasks. Unlike existing approaches, this allows the completion
properties to generalize better to new categories unseen at training time.
Moreover, while decoding the obtained joint representation, we
better blend the reconstructed missing part with the partial shape by paying
attention to its known surrounding region and reconstructing this frame as
auxiliary objective. Our extensive experiments and detailed ablation on the
ShapeNet dataset show the effectiveness of each part of the method with new
state-of-the-art results. Our quantitative and qualitative analysis confirms
that our approach is able to work on novel categories without relying on
classification and shape symmetry priors or on adversarial training
procedures.
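A generic sketch of the contrastive component (an NT-Xent-style loss pulling together embeddings of two partial views of the same shape; the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Contrastive loss over two batches of paired view embeddings."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)    # (2B, D)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(-float("inf"))              # exclude self-pairs
    B = z1.size(0)
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(16, 128), torch.randn(16, 128)  # two views per shape
print(nt_xent(z1, z2).item())
```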
|
In the largest, currently known, class of one Quadrillion globally consistent
F-theory Standard Models with gauge coupling unification and no chiral exotics,
the vector-like spectra are counted by cohomologies of root bundles. In this
work, we apply a previously proposed method to identify toric base 3-folds,
which are promising candidates for establishing F-theory Standard Models with exactly three
quark-doublets and no vector-like exotics in this representation. The base
spaces in question are obtained from triangulations of 708 polytopes. By
studying root bundles on the quark doublet curve
$C_{(\mathbf{3},\mathbf{2})_{1/6}}$ and employing well-known results about
desingularizations of toric K3-surfaces, we derive a \emph{triangulation
independent lower bound} $\check{N}_P^{(3)}$ for the number $N_P^{(3)}$ of root
bundles on $C_{(\mathbf{3},\mathbf{2})_{1/6}}$ with exactly three sections. The
ratio $\check{N}_P^{(3)} / N_P$, where $N_P$ is the total number of roots on
$C_{(\mathbf{3},\mathbf{2})_{1/6}}$, is largest for base spaces associated with
triangulations of the 8-th 3-dimensional polytope $\Delta^\circ_8$ in the
Kreuzer-Skarke list. For each of these $\mathcal{O}( 10^{15} )$ 3-folds, we
expect that many root bundles on $C_{(\mathbf{3},\mathbf{2})_{1/6}}$ are
induced from F-theory gauge potentials and that at least every 3000th root on
$C_{(\mathbf{3},\mathbf{2})_{1/6}}$ has exactly three global sections and thus
no exotic vector-like quark-doublet modes.
|
Thomson's multitaper method estimates the power spectrum of a signal from $N$
equally spaced samples by averaging $K$ tapered periodograms. Discrete prolate
spheroidal sequences (DPSS) are used as tapers since they provide excellent
protection against spectral leakage. Thomson's multitaper method is widely used
in applications, but most of the existing theory is qualitative or asymptotic.
Furthermore, many practitioners use a DPSS bandwidth $W$ and number of tapers
that are smaller than what the theory suggests is optimal because the
computational requirements increase with the number of tapers. We revisit
Thomson's multitaper method from a linear algebra perspective involving
subspace projections. This provides additional insight and helps us establish
nonasymptotic bounds on some statistical properties of the multitaper spectral
estimate, which are similar to existing asymptotic results. We show that using
$K=2NW-O(\log(NW))$ tapers instead of the traditional $2NW-O(1)$ tapers better
protects against spectral leakage, especially when the power spectrum has a
high dynamic range. Our perspective also allows us to derive an
$\epsilon$-approximation to the multitaper spectral estimate which can be
evaluated on a grid of frequencies using $O(\log(NW)\log\tfrac{1}{\epsilon})$
FFTs instead of $K=O(NW)$ FFTs. This is useful in problems where many samples
are taken, and thus, using many tapers is desirable.
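A minimal multitaper estimate with SciPy's DPSS tapers (a plain implementation of Thomson's estimator, not the paper's $\epsilon$-approximation):

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, NW, K, fs=1.0):
    """Average of K DPSS-tapered periodograms (Thomson's estimate)."""
    N = len(x)
    tapers = dpss(N, NW, Kmax=K)                 # (K, N), unit-energy tapers
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2 / fs
    return np.fft.rfftfreq(N, 1 / fs), spectra.mean(axis=0)

# toy signal with high dynamic range: two tones of very different power
fs, N = 1000.0, 4096
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 100 * t) + 1e-3 * np.sin(2 * np.pi * 300 * t)
x += 1e-4 * np.random.default_rng(2).standard_normal(N)
freqs, S = multitaper_psd(x, NW=4, K=7)          # K ~ 2NW - 1 (traditional)
```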
|
Low frequency gravitational waves (GWs) are keys to understanding
cosmological inflation and super massive blackhole (SMBH) formation via
blackhole mergers, while it is difficult to identify the low frequency GWs with
ground-based GW experiments such as the advanced LIGO (aLIGO) and VIRGO due to
the seismic noise. Although quasi-stellar object (QSO) proper motions produced
by the low frequency GWs are measured by pioneering studies of very long
baseline interferometry (VLBI) observations with good positional accuracy, the
low frequency GWs are only weakly constrained owing to the small statistics of
711 QSOs (Darling et al. 2018). Here we present the proper motion field map of
400,894 QSOs of the Sloan Digital Sky Survey (SDSS) with optical {\it Gaia}
EDR3 proper motion measurements whose positional accuracy is $< 0.4$
milli-arcsec, comparable with that of the radio VLBI observations. We obtain
the best-fit spherical harmonics with the typical field strength of
$\mathcal{O}(0.1)\, \mu$arcsec, and place a tight constraint on the energy
density of GWs, $\Omega_{\rm gw}=(0.964 \pm 3.804) \times 10^{-4}$ (95 \%
confidence level), which is significantly stronger than that of the previous
VLBI study by two orders of magnitude at the low frequency regime of $f
<10^{-9}\,{\rm [Hz]}\simeq (30\,{\rm yr})^{-1}$ unexplored by the pulsar timing
technique. Our upper limit rules out the existence of SMBH binary systems at
the distance $r < 400$ kpc from the Earth where the Milky Way center and local
group galaxies are included. Having demonstrated the limit obtained from our
optical QSO study, we claim that astrometric satellite data with small
systematic errors, including the forthcoming {\it Gaia} DR5 data, are a
powerful means of constraining low frequency GWs.
|
When reasoning about tasks that involve large amounts of data, a common
approach is to represent data items as objects in the Hamming space where
operations can be done efficiently and effectively. Object similarity can then
be computed by learning binary representations (hash codes) of the objects and
computing their Hamming distance. While this is highly efficient, each bit
dimension is equally weighted, which means that potentially discriminative
information of the data is lost. A more expressive alternative is to use
real-valued vector representations and compute their inner product; this allows
varying the weight of each dimension but is orders of magnitude slower. To fix
this, we derive a new way of measuring the dissimilarity between two objects in
the Hamming space with binary weighting of each dimension (i.e., disabling
bits): we consider a field-agnostic dissimilarity that projects the vector of
one object onto the vector of the other. When working in the Hamming space,
this results in a novel projected Hamming dissimilarity, which by choice of
projection, effectively allows a binary importance weighting of the hash code
of one object through the hash code of the other. We propose a variational
hashing model for learning hash codes optimized for this projected Hamming
dissimilarity, and experimentally evaluate it in collaborative filtering
experiments. The resultant hash codes lead to effectiveness gains of up to +7%
in NDCG and +14% in MRR compared to state-of-the-art hashing-based
collaborative filtering baselines, while requiring no additional storage and no
computational overhead compared to using the Hamming distance.
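A hedged sketch of one plausible reading of the projected Hamming dissimilarity, with the bits of one code acting as a binary importance mask on the other (the paper's precise definition may differ):

```python
import numpy as np

def hamming(u, v):
    return np.sum(u != v)

def projected_hamming(q, d):
    """Disagreements counted only on dimensions 'enabled' by q: the +1
    bits of q act as a binary importance mask, so its -1 bits simply
    disable the corresponding dimensions of d."""
    mask = q == 1
    return np.sum(q[mask] != d[mask])

rng = np.random.default_rng(3)
q = rng.choice([-1, 1], size=64)   # e.g. a user hash code
d = rng.choice([-1, 1], size=64)   # e.g. an item hash code
print(hamming(q, d), projected_hamming(q, d))
```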
|
We report muon spin rotation ($\mu$SR) and neutron diffraction on the
rare-earth based magnets (Mo$_{2/3}$RE$_{1/3}$)$_2$AlC, also predicted as
parent materials for 2D derivatives, where RE = Nd, Gd (only $\mu$SR), Tb, Dy,
Ho and Er. By crossing information between the two techniques, we determine the
magnetic moment ($m$), structure, and dynamic properties of all compounds. We
find that only for RE = Nd and Gd the moments are frozen on a microsecond time
scale. Out of these two, the most promising compound for a potential 2D high
($m$) magnet is the Gd variant, since the parent crystals are pristine with $m
= 6.5 \pm 0.5 \mu_B$, N\'eel temperature of $29 \pm 1$ K, and the magnetic
anisotropy between in and out of plane coupling is smaller than $10^{-8}$. This
result suggests that magnetic ordering in the Gd variant is dominated by
in-plane magnetic interactions and should therefore remain stable if exfoliated
into 2D sheets.
|
learning algorithms. In this paper, we review the classification algorithms
used in the health care system (chronic diseases) and present the neural
network-based Ensemble learning method. We briefly describe the commonly used
algorithms and describe their critical properties. Materials and Methods: In
this study, we examine modern classification algorithms used in healthcare and
the principles and guidelines behind these methods, and we apply superior
machine learning algorithms with neural network-based ensemble learning to
accurately diagnose and predict chronic diseases. To do this, we use real data
on chronic patients (diabetes, heart disease, cancer) available from the UCI
repository. Results: We found that ensemble algorithms designed to diagnose chronic
diseases can be more effective than baseline algorithms. We also identify
several challenges to further advancing the classification of machine learning
in the diagnosis of chronic diseases. Conclusion: The results show the high
performance of the neural network-based Ensemble learning approach for the
diagnosis and prediction of chronic diseases, which in this study reached 98.5,
99, and 100% accuracy, respectively.
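A hedged sketch of a neural-network-based soft-voting ensemble in the spirit of the study; the dataset and hyperparameters are illustrative, not those of the paper:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Several MLPs with different architectures, combined by soft voting.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

members = [
    (f"mlp{i}", make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=h, max_iter=1000, random_state=i)))
    for i, h in enumerate([(32,), (64,), (32, 16)])
]
ensemble = VotingClassifier(members, voting="soft").fit(X_tr, y_tr)
print(f"test accuracy: {ensemble.score(X_te, y_te):.3f}")
```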
|
Video salient object detection (VSOD) aims to locate and segment the most
attractive object by exploiting both spatial cues and temporal cues hidden in
video sequences. However, spatial and temporal cues are often unreliable in
real-world scenarios, such as low-contrast foreground, fast motion, and
multiple moving objects. To address these problems, we propose a new framework
to adaptively capture available information from spatial and temporal cues,
which contains Confidence-guided Adaptive Gate (CAG) modules and Dual
Differential Enhancement (DDE) modules. For both RGB features and optical flow
features, CAG estimates confidence scores supervised by the IoU between
predictions and the ground truths to re-calibrate the information with a gate
mechanism. DDE captures the differential feature representation to enrich the
spatial and temporal information and generate the fused features. Experimental
results on four widely used datasets demonstrate the effectiveness of the
proposed method against thirteen state-of-the-art methods.
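A hedged PyTorch sketch of the gating idea behind CAG, with hypothetical layer sizes; in the paper the confidence score is additionally supervised by the IoU between single-cue predictions and the ground truth:

```python
import torch
import torch.nn as nn

class ConfidenceGate(nn.Module):
    """Generic confidence-guided gate: a small head predicts a
    per-sample confidence score in [0, 1] that rescales the feature
    (RGB or optical flow) before fusion."""

    def __init__(self, channels):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 1), nn.Sigmoid())

    def forward(self, feat):
        conf = self.head(feat)                    # (B, 1)
        return feat * conf.view(-1, 1, 1, 1), conf

rgb_feat = torch.randn(2, 64, 32, 32)
gated, conf = ConfidenceGate(64)(rgb_feat)
```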
|
We study a dynamic model of Bayesian persuasion in sequential decision-making
settings. An informed principal observes an external parameter of the world and
advises an uninformed agent about actions to take over time. The agent takes
actions in each time step based on the current state, the principal's
advice/signal, and beliefs about the external parameter. The action of the
agent updates the state according to a stochastic process. The model arises
naturally in many applications, e.g., an app (the principal) can advise the
user (the agent) on possible choices between actions based on additional
real-time information the app has. We study the problem of designing a
signaling strategy from the principal's point of view. We show that the
principal has an optimal strategy against a myopic agent, who only optimizes
their rewards locally, and the optimal strategy can be computed in polynomial
time. In contrast, it is NP-hard to approximate an optimal policy against a
far-sighted agent. Further, we show that if the principal has the power to
threaten the agent by not providing future signals, then we can efficiently
design a threat-based strategy. This strategy guarantees the principal's payoff
as if playing against an agent who is far-sighted but myopic to future signals.
|
Scalable quantum information processing requires the ability to tune
multi-qubit interactions. This makes the precise manipulation of quantum states
particularly difficult, because tunability
unavoidably introduces sensitivity to fluctuations in the tuned parameters,
leading to erroneous multi-qubit gate operations. The performance of quantum
algorithms may be severely compromised by coherent multi-qubit errors. It is
therefore imperative to understand how these fluctuations affect multi-qubit
interactions and, more importantly, to mitigate their influence. In this study,
we demonstrate how to implement dynamical-decoupling techniques to suppress the
two-qubit analogue of dephasing on a superconducting quantum device
featuring a compact tunable coupler, a trending technology that enables the
fast manipulation of qubit--qubit interactions. The pure-dephasing time shows
up to a ~14-fold enhancement on average when robust sequences are used. The
results are in good agreement with the noise generated from room-temperature
circuits. Our study further reveals the decohering processes associated with
tunable couplers and establishes a framework to develop gates and sequences
robust against two-qubit errors.
|
The Davis-Chandrasekhar-Fermi (DCF) method is widely used to indirectly
estimate the magnetic field strength from the plane-of-sky field orientation.
In this work, we present a set of 3D MHD simulations and synthetic polarization
images using radiative transfer of clustered massive star-forming regions. We
apply the DCF method on the synthetic polarization maps to investigate its
reliability in high-density molecular clumps and dense cores where self-gravity
is significant. We investigate the validity of the assumptions of the DCF
method step by step and compare the model and estimated field strength to
derive the correction factors for the estimated uniform and total (rms)
magnetic field strength at clump and core scales. The correction factors in
different situations are catalogued. We find the DCF method works well in
strong field cases. However, the magnetic field strength in weak field cases
could be significantly overestimated by the DCF method when the turbulent
magnetic energy is smaller than the turbulent kinetic energy. We investigate
the accuracy of the angular dispersion function (ADF) method, a modified DCF
method, with respect to the effects that may bias the measured angular dispersion, and find
that the ADF method correctly accounts for the ordered field structure, the
beam-smoothing, and the interferometric filtering, but may not be applicable to
account for the signal integration along the line of sight in most cases. Our
results suggest that the DCF methods should not be applied below
$\sim$0.1 pc scales if the effect of line-of-sight signal integration is not
properly addressed.
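For reference, a sketch of the classical DCF estimator that the derived correction factors would refine; the numbers are illustrative, not from the paper:

```python
import numpy as np

def dcf_field_strength(rho_gcc, sigma_v_cms, sigma_theta_rad, Q=0.5):
    """Classical DCF estimate of the plane-of-sky field strength (CGS):
        B_pos ~ Q * sqrt(4 * pi * rho) * sigma_v / sigma_theta,
    with Q ~ 0.5 a commonly adopted correction factor."""
    return Q * np.sqrt(4 * np.pi * rho_gcc) * sigma_v_cms / sigma_theta_rad

# illustrative clump-scale numbers (not from the paper)
rho = 2.8 * 1.67e-24 * 1e5          # g/cm^3 for n_H2 ~ 1e5 cm^-3
sigma_v = 1.0e5                     # 1 km/s velocity dispersion
sigma_theta = np.deg2rad(10.0)      # 10 deg polarization-angle dispersion
print(f"B_pos ~ {dcf_field_strength(rho, sigma_v, sigma_theta) * 1e6:.0f} uG")
```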
|
Pedestrian attribute recognition aims to assign multiple attributes to one
pedestrian image captured by a video surveillance camera. Although numerous
methods are proposed and make tremendous progress, we argue that it is time to
step back and analyze the status quo of the area. We review and rethink the
recent progress from three perspectives. First, given that there is no explicit
and complete definition of pedestrian attribute recognition, we formally define
and distinguish pedestrian attribute recognition from other similar tasks.
Second, based on the proposed definition, we expose the limitations of the
existing datasets, which violate academic norms and are inconsistent with
the essential requirement of practical industry application. Thus, we propose
two datasets, PETA\textsubscript{$ZS$} and RAP\textsubscript{$ZS$}, constructed
following the zero-shot settings on pedestrian identity. In addition, we also
introduce several realistic criteria for future pedestrian attribute dataset
construction. Finally, we reimplement existing state-of-the-art methods and
introduce a strong baseline method to give reliable evaluations and fair
comparisons. Experiments are conducted on four existing datasets and two
proposed datasets to measure progress on pedestrian attribute recognition.
|
This paper uses both experimental and numerical approaches to revisit the
concept of current transfer length (CTL) in second-generation high-temperature
superconductor coated conductors with a current flow diverter (CFD)
architecture. The CFD architecture has been implemented on eight commercial
coated conductors samples from THEVA. In order to measure the 2-D current
distribution in the silver stabilizer layer of the samples, we first used a
custom-made array of 120 voltage taps to measure the surface potential
distribution. Then, the so-called "static" CTL ($\lambda_s$) was extracted
using a semi-analytical model that fitted the experimental data well. As
defined in this paper, the static CTL on a 2-D domain is a generalization of
the definition commonly used in literature. In addition, we used a 3-D finite
element model to simulate the normal zone propagation in our CFD samples, in
order to quantify their "dynamic" CTL ($\lambda_d$), a new concept introduced
in this paper and defined as the CTL observed during the propagation of a
quenched region. The results show that, for a CFD architecture, $\lambda_d$ is
always larger than $\lambda_s$, whereas $\lambda_d = \lambda_s$ when the
interfacial resistance between the stabilizer and the superconductor layers is
the same everywhere. We proved that the cause of these different behaviors is
related to the shape of the normal zone, which is curved for the CFD
architecture, and rectangular otherwise. Finally, we showed that the normal
zone propagation velocity (NZPV) is proportional to $\lambda_d$, not to
$\lambda_s$, which suggests that the dynamic CTL $\lambda_d$ is the most
general definition of the CTL and should always be used when current crowding
and non-uniform heat generation occur around a normal zone.
|
Understanding the dynamics of a quantum bit's environment is essential for
the realization of practical systems for quantum information processing and
metrology. We use single nitrogen-vacancy (NV) centers in diamond to study the
dynamics of a disordered spin ensemble at the diamond surface. Specifically, we
tune the density of "dark" surface spins to interrogate their contribution to
the decoherence of shallow NV center spin qubits. When the average surface spin
spacing exceeds the NV center depth, we find that the surface spin contribution
to the NV center free induction decay can be described by a stretched
exponential with variable power n. We show that these observations are
consistent with a model in which the spatial positions of the surface spins are
fixed for each measurement, but some of them reconfigure between measurements.
In particular, we observe a depth-dependent critical time associated with a
dynamical transition from Gaussian (n=2) decay to n=2/3, and show that this
transition arises from the competition between the small decay contributions of
many distant spins and strong coupling to a few proximal spins at the surface.
These observations demonstrate the potential of a local sensor for
understanding complex systems and elucidate pathways for improving and
controlling spin qubits at the surface.
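A minimal sketch of extracting the stretch exponent from a decay curve (synthetic data with illustrative parameters, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, T2, n):
    """Free-induction-decay envelope: exp[-(t / T2)^n]."""
    return np.exp(-((t / T2) ** n))

# synthetic decay mimicking the crossover regime
t = np.linspace(0.01, 10, 200)  # microseconds
rng = np.random.default_rng(4)
data = stretched_exp(t, 3.0, 0.67) + 0.02 * rng.normal(size=t.size)

(T2, n), _ = curve_fit(stretched_exp, t, data, p0=(1.0, 1.0))
print(f"T2 = {T2:.2f} us, stretch exponent n = {n:.2f}")
```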
|
Let $a$ and $b$ be relatively prime positive integers. In this paper the
weighted sum $\sum_{n\in{\rm NR}(a,b)}\lambda^{n-1}n^m$ is given explicitly or
in terms of the Apostol-Bernoulli numbers, where $m$ is a nonnegative integer,
and ${\rm NR}(a,b)$ denotes the set of positive integers nonrepresentable in
terms of $a$ and $b$.
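A small computational sketch of ${\rm NR}(a,b)$ and the weighted sum, for arbitrarily chosen $\lambda$ and $m$; the paper's contribution is the closed form, which this brute force can cross-check:

```python
from math import gcd

def nonrepresentable(a, b):
    """Positive integers not of the form a*x + b*y with x, y >= 0."""
    assert gcd(a, b) == 1
    frob = a * b - a - b                  # largest nonrepresentable number
    reachable = [False] * (frob + 1)
    reachable[0] = True
    for n in range(1, frob + 1):
        reachable[n] = (n >= a and reachable[n - a]) or \
                       (n >= b and reachable[n - b])
    return [n for n in range(1, frob + 1) if not reachable[n]]

def weighted_sum(a, b, lam, m):
    """Sum over n in NR(a,b) of lam**(n-1) * n**m."""
    return sum(lam ** (n - 1) * n ** m for n in nonrepresentable(a, b))

print(nonrepresentable(3, 5))             # [1, 2, 4, 7]
print(weighted_sum(3, 5, lam=0.5, m=1))
```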
|
A recently developed model chemistry (jun-Cheap) has been slightly modified
and proposed as an effective, reliable and parameter-free scheme for the
computation of accurate reaction rates with special reference to astrochemical
and atmospheric processes. Benchmarks with different sets of state-of-the-art
energy barriers spanning a wide range of values show that, in the absence of
strong multi-reference contributions, the proposed model outperforms the most
well-known model chemistries, reaching a sub-chemical accuracy without any
empirical parameter and with affordable computer times. Some test cases show
that geometries, energy barriers, zero point energies and thermal contributions
computed at this level can be used in the framework of the master equation
approach based on ab-initio transition state theory (AITSTME) for obtaining
accurate reaction rates.
|
Recent research in differential privacy demonstrated that (sub)sampling can
amplify the level of protection. For example, for $\epsilon$-differential
privacy and simple random sampling with sampling rate $r$, the actual privacy
guarantee is approximately $r\epsilon$, if a value of $\epsilon$ is used to
protect the output from the sample. In this paper, we study whether this
amplification effect can be exploited systematically to improve the accuracy of
the privatized estimate. Specifically, assuming the agency has information for
the full population, we ask under which circumstances accuracy gains could be
expected, if the privatized estimate would be computed on a random sample
instead of the full population. We find that accuracy gains can be achieved for
certain regimes. However, gains can typically only be expected, if the
sensitivity of the output with respect to small changes in the database does
not depend too strongly on the size of the database. We only focus on
algorithms that achieve differential privacy by adding noise to the final
output and illustrate the accuracy implications for two commonly used
statistics: the mean and the median. We see our research as a first step
towards understanding the conditions required for accuracy gains in practice
and we hope that these findings will stimulate further research broadening the
scope of differential privacy algorithms and outputs considered.
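A hedged numerical sketch of the trade-off: with a fixed overall budget $\epsilon$, privacy amplification lets the subsample be released with roughly $\epsilon/r$, so the Laplace noise scale matches the full-population release while sampling error is added (toy data, output-noise mechanism only):

```python
import numpy as np

rng = np.random.default_rng(5)
population = rng.exponential(scale=10.0, size=100_000)
lo, hi = 0.0, 50.0                       # assumed clamping bounds
clamped = np.clip(population, lo, hi)
eps, r = 1.0, 0.1

def private_mean(values, eps):
    """Laplace mechanism: sensitivity of a clamped mean is (hi - lo)/n."""
    sens = (hi - lo) / len(values)
    return values.mean() + rng.laplace(scale=sens / eps)

full = private_mean(clamped, eps)        # full population, budget eps
sample = rng.choice(clamped, size=int(r * clamped.size), replace=False)
sampled = private_mean(sample, eps / r)  # ~eps overall after amplification
print(full, sampled, clamped.mean())
```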
|
It is well known that a universal set of gates for classical computation
augmented with the Hadamard gate results in universal quantum computing. While
this requires the addition of a genuine quantum element to the set of passive
classical gates, here we ask the following: can the same result be attained by
adding a quantum control unit while keeping the circuit itself completely
classical? In other words, can we get universal quantum computation by
coherently controlling classical operations? In this work we provide an
affirmative answer to this question, by considering a computational model that
consists of $2n$ target bits together with a set of classical gates, controlled
by log$(2n+1)$ ancillary qubits. We show that this model is equivalent to a
quantum computer operating on $n$ qubits. Furthermore, we show that even a
primitive computer that is capable of implementing only SWAP gates, can be
lifted to universal quantum computing, if aided with an appropriate quantum
control of logarithmic size. Our results thus exemplify the information
processing power brought forth by the quantum control system.
|
We establish new examples of augmentations of Legendrian twist knots that
cannot be induced by orientable Lagrangian fillings. To do so, we use a version
of the Seidel-Ekholm-Dimitroglou Rizell isomorphism with local coefficients to
show that any Lagrangian filling point in the augmentation variety of a
Legendrian knot must lie in the injective image of an algebraic torus with
dimension equal to the first Betti number of the filling. This is a
Floer-theoretic version of a result from microlocal sheaf theory. For the
augmentations in question, we show that no such algebraic torus can exist.
|
Departure time choice models play a crucial role in determining the traffic
load in transportation systems. This paper introduces a new framework to model
and analyze the departure time user equilibrium (DTUE) problem based on the
so-called Mean Field Games (MFG) theory. The proposed framework combines two
main components: (i) the reaction of travelers to traffic congestion, who
choose their departure times to optimize their travel cost; and (ii) the
aggregation of the travelers' actions, which determines the system's level of
service. In this paper, we first present a
continuous departure time choice model and investigate the equilibria of the
system. Specifically, we demonstrate the existence of the equilibrium and
characterize the DTUE. Then, a discrete approximation of the system is provided
based on deterministic differential game models to numerically obtain the
equilibrium of the system. To examine the efficiency of the proposed model, we
compare it with the departure time choice models in the literature. We apply
our framework to a standard test case and observe that the solutions obtained
based on our model are 5.6\% better in terms of relative cost compared to the
solutions determined based on models in the literature. Moreover, our proposed
model converges in fewer iterations than the reference solution method in the
literature. Finally, the model is scaled up to a real test case corresponding
to the whole Lyon Metropolis with a real demand pattern. The results show that
the proposed framework is able to tackle much larger test cases than usual and
to include multiple preferred travel times and heterogeneous trip lengths more
accurately than existing models in the literature.
|
The $S=1$ Haldane state is constructed from a product of local singlet dimers
in the bulk and topological states at the edges of a chain. It is a fundamental
representative of topological quantum matter. Its well-known representative,
the quasi-one-dimensional SrNi$_2$V$_2$O$_8$, shows both conventional and
unconventional magnetic Raman scattering. The former is observed as one- and
two-triplet excitations with small linewidths and energies corresponding to the
Haldane gap $\Delta_H$ and the exchange coupling $J_c$ along the chain,
respectively. Well-defined magnetic quasiparticles are assumed to be stabilized
by interchain interactions and uniaxial single-ion anisotropy. Unconventional
scattering exists as broad continua of scattering with an intensity $I(T)$ that
shows a mixed bosonic / fermionic statistic. Such a mixed statistic has also
been observed in Kitaev spin liquids and could point to a non-Abelian symmetry.
As the ground state in the bulk of SrNi$_2$V$_2$O$_8$ is topologically trivial,
we suggest its fractionalization to be due to light-induced interchain exchange
processes. These processes are supposed to be enhanced due to a proximity to an
Ising ordered state with a quantum critical point. A comparison with
SrCo$_2$V$_2$O$_8$, the $S=1/2$ analogue to our title compound, supports these
statements.
|
We present a "learning to learn" approach for automatically constructing
white-box classification loss functions that are robust to label noise in the
training data. We parameterize a flexible family of loss functions using Taylor
polynomials, and apply evolutionary strategies to search for noise-robust
losses in this space. To learn re-usable loss functions that can apply to new
tasks, our fitness function scores their performance in aggregate across a
range of training dataset and architecture combinations. The resulting
white-box loss provides a simple and fast "plug-and-play" module that enables
effective noise-robust learning in diverse downstream tasks, without requiring
a special training procedure or network architecture. The efficacy of our
method is demonstrated on a variety of datasets with both synthetic and real
label noise, where we compare favourably to previous work.
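A toy sketch of the two ingredients, Taylor-parameterized losses and an evolutionary search; the fitness here just matches cross entropy for illustration, whereas the paper scores trained-model performance in aggregate:

```python
import numpy as np

def taylor_loss(theta, p):
    """Candidate loss on the predicted true-class probability p,
    parameterized as a Taylor polynomial in u = p - 1."""
    u = p - 1.0
    return sum(c * u ** k for k, c in enumerate(theta, start=1))

def fitness(theta, rng):
    """Toy fitness: agreement with cross entropy on random predictions
    (the paper instead aggregates accuracy of models trained with the
    candidate loss across datasets and architectures)."""
    p = rng.uniform(0.05, 1.0, 512)
    return -np.mean((taylor_loss(theta, p) + np.log(p)) ** 2)

# simple (1+1) evolutionary strategy over the coefficient vector
rng = np.random.default_rng(6)
theta, best = np.zeros(4), -np.inf
for _ in range(2000):
    cand = theta + 0.1 * rng.standard_normal(theta.size)
    if (f := fitness(cand, rng)) > best:
        theta, best = cand, f
print(theta)   # approaches the Taylor coefficients of -log(p) at p = 1
```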
|
In 2003, Deutsch and Elizalde defined bijective maps between Dyck paths which
are beneficial in investigating some statistics distributions of Dyck paths and
pattern-avoiding permutations. In this paper, we give a generalization of the
maps so that they are generated by permutations in $S_{2n}$. The construction
induces several novel ways to partition $S_{2n}$ which give a new
interpretation of an existing combinatorial identity involving double
factorials and a new integer sequence. Although the generalization does not in
general retain bijectivity, we are able to characterize a class of permutations
that generates bijections and furthermore imposes an algebraic structure on a
certain class of bijections. As a result, we introduce a statistic of a Dyck
path, involving the number of unpaired steps in some subpath, whose
distribution is identical to that of other well-known height statistics.
|
We investigate the spectrum of linearized excitations of global vortices in
$2+1$ dimensions. After identifying the existence of localized excitation
modes, we compute the decay time scale of the first two and compare the results
to the numerical evolution of the full non-linear equations. We show
numerically how the interaction of vortices with an external source of
radiation or other vortices can excite these modes dynamically. We then
simulate the formation of vortices in a phase transition and their interaction
with a thermal bath estimating the amplitudes of these modes in each case.
These numerical experiments indicate that even though, in principle, vortices
are capable of storing a large amount of energy in these internal excitations,
this does not seem to happen dynamically. We then explore the evolution of a
network of vortices in an expanding (2+1)-dimensional background, in particular
in a radiation dominated universe. We find that vortices are still excited
after the course of the cosmological evolution but again the level of
excitation is very small. The extra energy in the vortices in these
cosmological simulations never exceeds the $1\%$ level of the total mass of the
core of the vortex.
|
In this paper we show that if $S$ is a simple classical group, $G$ is a group
that contains $S$ and is contained in the group of inner-diagonal
automorphisms of $S$, and $H$ is a solvable Hall subgroup of $G$, then there
exist five conjugates of $H$ whose intersection is trivial.
|
We study Markov-modulated affine processes (abbreviated MMAPs), a class of
Markov processes that are created from affine processes by allowing some of
their coefficients to be a function of an exogenous Markov process. MMAPs allow
for richer models in various applications. At the same time MMAPs largely
preserve the tractability of standard affine processes, as their characteristic
function has a computationally convenient functional form. Our setup is a
substantial generalization of earlier work, since we consider the case where
the generator of the exogenous process $X$ is an unbounded operator (as is the
case for diffusions or jump processes with infinite activity). We prove
existence of MMAPs via a martingale problem approach, we derive the formula for
their characteristic function and we study various mathematical properties of
MMAPs. The paper closes with a discussion of several applications of MMAPs in
finance.
|
The vision community is witnessing a modeling shift from CNNs to
Transformers, where pure Transformer architectures have attained top accuracy
on the major video recognition benchmarks. These video models are all built on
Transformer layers that globally connect patches across the spatial and
temporal dimensions. In this paper, we instead advocate an inductive bias of
locality in video Transformers, which leads to a better speed-accuracy
trade-off compared to previous approaches which compute self-attention globally
even with spatial-temporal factorization. The locality of the proposed video
architecture is realized by adapting the Swin Transformer designed for the
image domain, while continuing to leverage the power of pre-trained image
models. Our approach achieves state-of-the-art accuracy on a broad range of
video recognition benchmarks, including action recognition (84.9 top-1
accuracy on Kinetics-400 and 86.1 top-1 accuracy on Kinetics-600 with ~20x less
pre-training data and ~3x smaller model size) and temporal modeling (69.6 top-1
accuracy on Something-Something v2). The code and models will be made publicly
available at https://github.com/SwinTransformer/Video-Swin-Transformer.
|
As autonomous vehicles, Unmanned Aerial Vehicles (UAVs) are subject to several
challenges. One of these challenges is the ability of a UAV to avoid
collisions. Many collision avoidance methods have been proposed to address this
issue. Furthermore, in a multi-UAV system, it is also important to address
communication issue among UAVs for cooperation and collaboration. This issue
can be addressed by setting up an ad-hoc network among UAVs. There is also a
need to consider the challenges in the deployment of UAVs, as well as, in the
development of collision avoidance methods and the establishment of
communication for cooperation and collaboration in a multi-UAV system. In this
paper, we present general challenges in the deployment of UAVs and a comparison
of UAV communication services based on their operating frequencies. We also present
major collision avoidance approaches, and specifically discuss collision
avoidance approaches that are suitable for indoor applications. We also present
the Flying Ad-hoc Networks (FANET) network architecture, communication and
routing protocols for each Open Systems Interconnection (OSI) communication
layer.
|
Purpose: To update and extend the Carleton Laboratory for Radiotherapy
Physics (CLRP) Eye Plaque (EP) dosimetry database for low-energy
photon-emitting brachytherapy sources using egs_brachy, an open-source EGSnrc
application. The previous database, CLRP_EPv1, contained datasets for the
Collaborative Ocular Melanoma Study (COMS) plaques (2008). The new database,
CLRP_EPv2, consists of newly-calculated 3D dose distributions for 17 plaques [8
COMS, 5 Eckert & Ziegler BEBIG, and 4 other representative models] for Pd-103,
I-125, and Cs-131 seeds.
Methods: Plaque models are developed with egs_brachy, based on
published/manufacturer dimensions and material data. The BEBIG plaques are
identical in dimensions to COMS plaques but differ in elemental composition
and/or density. Eye plaques and seeds are simulated at the centre of
full-scatter water phantoms, scoring in (0.05 cm)^3 voxels spanning the eye for
scenarios: (i) HOMO: simulated TG43 conditions; (ii) HETERO: eye plaques and
seeds fully modelled; (iii) HETsi (BEBIG only): one seed is active at a time
with other seed geometries present but not emitting photons (inactive). For
validation, doses are compared to those from CLRP_EPv1 and published data.
Data Format and Access: Data are available at
https://physics.carleton.ca/clrp/eye_plaque_v2 and
http://doi.org/10.22215/clrp/EPv2. The data consist of 3D
dose distributions (text-based EGSnrc 3ddose file) and graphical presentations
of the comparisons to previously published data.
Potential Applications: The CLRP_EPv2 database provides accurate reference 3D
dose distributions to advance ocular brachytherapy dose evaluations. The
fully-benchmarked eye plaque models will be freely distributed with egs_brachy,
supporting adoption of model-based dose evaluations as recommended by TG-129,
TG-186, and TG-221.
|
In recent years, deep neural networks (DNNs) have been studied as an alternative
to traditional acoustic echo cancellation (AEC) algorithms. The proposed models
achieved remarkable performance for the separate tasks of AEC and residual echo
suppression (RES). A promising network topology is a fully convolutional
recurrent network (FCRN) structure, which has already proven its performance on
both noise suppression and AEC tasks, individually. However, the combination of
AEC, postfiltering, and noise suppression to a single network typically leads
to a noticeable decline in the quality of the near-end speech component due to
the lack of a separate loss for echo estimation. In this paper, we propose a
two-stage model (Y$^2$-Net) which consists of two FCRNs, each with two inputs
and one output (Y-Net). The first stage (AEC) yields an echo estimate, which -
as a novelty for a DNN AEC model - is further used by the second stage to
perform RES and noise suppression. While the subjective listening test of the
Interspeech 2021 AEC Challenge mostly yielded results close to the baseline,
the proposed method scored an average improvement of 0.46 points over the
baseline on the blind testset in double-talk on the instrumental metric DECMOS,
provided by the challenge organizers.
|
Convolutional Neural Networks have achieved unprecedented success in image
classification, recognition, or detection applications. However, their
large-scale deployment in embedded devices is still limited by the huge
computational requirements, i.e., millions of MAC operations per layer. In this
article, we introduce MinConvNets, in which the multiplications in the forward
propagation are approximated by minimum comparator operations. The hardware
implementation of the minimum operation is much simpler than that of multipliers. Firstly,
a methodology to find approximate operations based on statistical correlation
is presented. We show that it is possible to replace multipliers by minimum
operations in the forward propagation under certain constraints, i.e. given
similar mean and variances of the feature and the weight vectors. A modified
training method that guarantees the above constraints is proposed, and it is
shown that equivalent precision can be achieved during inference with
MinConvNets by using transfer learning from well-trained exact CNNs.
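An illustrative multiplier-free surrogate in the spirit of MinConvNets (sign times minimum of magnitudes, reasonable when features and weights have comparable scales; not the paper's exact operator):

```python
import numpy as np

def min_dot(x, w):
    """Multiplier-free surrogate for a dot product: each product
    x_i * w_i is replaced by sign(x_i * w_i) * min(|x_i|, |w_i|)."""
    return np.sum(np.sign(x) * np.sign(w) * np.minimum(np.abs(x), np.abs(w)))

rng = np.random.default_rng(7)
x = rng.normal(0, 1, 1024)   # feature vector, normalized scale
w = rng.normal(0, 1, 1024)   # weight vector, similar mean and variance
print(np.dot(x, w), min_dot(x, w))  # correlated, multiplier-free estimate
```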
|
The existence of soliton families in non-parity-time-symmetric complex
potentials remains poorly understood, especially in two spatial dimensions. In
this article, we analytically investigate the bifurcation of soliton families
from linear modes in one- and two-dimensional nonlinear Schr\"odinger equations
with localized Wadati-type non-parity-time-symmetric complex potentials. By
utilizing the conservation law of the underlying non-Hamiltonian wave system,
we convert the complex soliton equation into a new real system. For this new
real system, we perturbatively construct a continuous family of low-amplitude
solitons bifurcating from a linear eigenmode to all orders of the small soliton
amplitude. Hence, the emergence of soliton families in these
non-parity-time-symmetric complex potentials is analytically explained. We also
compare these analytically constructed soliton solutions with high-accuracy
numerical solutions in both one and two dimensions, and the asymptotic accuracy
of these perturbation solutions is confirmed.
|
We study imaging of point sources with a quadrupole gravitational lens while
focusing on the formation and evolution of the Einstein cross formed on the
image sensor of an imaging telescope. We use a new type of a diffraction
integral that we developed to study generic, opaque, weakly aspherical
gravitational lenses. To evaluate this integral, we use the method of
stationary phase, which yields a quartic equation in a Cartesian projection of
the observer's position vector relative to the impact parameter vector. The
resulting quartic equation can be solved analytically
using the method first published by Cardano in 1545. We find that the resulting
solution provides a good approximation of the electromagnetic (EM) field almost
everywhere in the image plane, yielding the well-known astroid caustic of the
quadrupole lens. The sole exception is the immediate vicinity of the caustic
boundary, where a numerical treatment of the diffraction integral yields better
results. We also convolve the quartic solution for the EM field on the image
plane with the point-spread function of a thin lens imaging telescope. By doing
so, we are able to explore the direct relationship between the algebraic
properties of the quartic solution for the EM field, the geometry of the
astroid caustic, and the geometry and shape of the resulting Einstein cross
that appears on the image plane of the thin-lens telescope. The new quartic
solution leads to significant improvements in numerical modeling as evaluation
of this solution is computationally far less expensive than a direct numerical
treatment of the new diffraction integral. In the case of the solar
gravitational lens (SGL), the new results drastically improve the speed of
numerical simulations related to sensitivity analysis performed in the context
of high-resolution imaging of exoplanets.
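As a quick cross-check of any closed-form quartic solution, the four roots can also be obtained numerically from the coefficients (a generic root-finding sketch, unrelated to the paper's specific coefficients):

```python
import numpy as np

def quartic_real_roots(coeffs):
    """Real roots of a quartic a4 x^4 + ... + a0. The closed form goes
    back to Ferrari/Cardano; numerically, numpy's companion-matrix
    solver returns the same roots and serves as a reference."""
    roots = np.roots(coeffs)                   # all four complex roots
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

coeffs = np.poly([-2, -1, 1, 3])               # quartic with known roots
print(quartic_real_roots(coeffs))              # [-2. -1.  1.  3.]
```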
|
The Lipschitz constant of neural networks plays an important role in several
contexts of deep learning ranging from robustness certification and
regularization to stability analysis of systems with neural network
controllers. Obtaining tight bounds on the Lipschitz constant is therefore
important. We introduce LipBaB, a branch and bound framework to compute
certified bounds of the local Lipschitz constant of deep neural networks with
ReLU activation functions up to any desired precision. We achieve this by
bounding the norm of the Jacobians, corresponding to different activation
patterns of the network caused within the input domain. Our algorithm can
provide provably exact computation of the Lipschitz constant for any p-norm.
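For context, a sketch of the loose global bound that branch-and-bound methods such as LipBaB tighten: the product of layer spectral norms upper-bounds the 2-norm Lipschitz constant of a ReLU network:

```python
import numpy as np

def spectral_upper_bound(weights):
    """Crude global Lipschitz upper bound in the 2-norm: the product of
    layer spectral norms (largest singular values). LipBaB instead
    tightens this locally by branching over activation patterns."""
    return np.prod([np.linalg.norm(W, 2) for W in weights])

rng = np.random.default_rng(8)
weights = [rng.normal(0, 0.5, (16, 8)), rng.normal(0, 0.5, (8, 4))]
print(spectral_upper_bound(weights))
```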
|
Understanding the fluctuations of observables is one of the main goals in
science, be it theoretical or experimental, quantum or classical. We
investigate such fluctuations when only a subregion of the full system can be
observed, focusing on geometries with sharp corners. We report that the
dependence on the opening angle is super-universal: up to a numerical
prefactor, this function does not depend on anything, provided the system under
study is uniform, isotropic, and correlations do not decay too slowly. The
prefactor contains important physical information: we show in particular that
it gives access to the long-wavelength limit of the structure factor. We
illustrate our findings with several examples, including fractional quantum
Hall states, scale invariant quantum critical theories, and metals. Finally, we
discuss connections with quantum entanglement, extensions to three dimensions,
as well as experiments to probe the geometry of fluctuations.
|
Interior permanent magnet synchronous machine drives are widely employed in
electric traction systems and various industrial processes. However, prolonged
exposure to high temperatures during operation can degrade the permanent
magnets to the point of irreversible demagnetization. In addition, direct
measurements with infrared sensors or contact-type sensors with wireless
communication can be expensive and intrusive to the motor drive systems. This
paper thus proposes a nonintrusive thermal monitoring scheme for the permanent
magnets inside the direct-torque-controlled interior permanent magnet
synchronous machines. By applying an external high-frequency rotating flux or
torque signal to the hysteresis torque controller in the motor drive, the
high-frequency currents can be injected into the stator windings. The permanent
magnet temperature can thus be monitored based on the induced high-frequency
resistance. The nonintrusive nature of the method is indicated by the
elimination of the extra sensors and no hardware change to the existing system.
Finally, the effectiveness of the proposed method is validated with
experimental results.
|
In domains where users tend to develop long-term preferences that do not
change too frequently, the stability of recommendations is an important factor
of the perceived quality of a recommender system. In such cases, unstable
recommendations may lead to poor personalization experience and distrust,
driving users away from a recommendation service. We propose an incremental
learning scheme that mitigates such problems through the dynamic modeling
approach. It incorporates a generalized matrix form of a partial differential
equation integrator that yields a dynamic low-rank approximation of
time-dependent matrices representing user preferences. The scheme allows
extending the famous PureSVD approach to time-aware settings and significantly
improves its stability without sacrificing the accuracy in standard top-$n$
recommendation tasks.
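For orientation, a sketch of the static PureSVD scoring rule that the proposed dynamic low-rank integrator keeps updated over time (toy data; the integrator itself is not reproduced here):

```python
import numpy as np

def puresvd_scores(R, rank):
    """PureSVD: factor the user-item matrix and score items by the
    rank-`rank` reconstruction, i.e., projection onto the leading item
    subspace."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    V = Vt[:rank].T                        # item factors
    return R @ V @ V.T                     # predicted relevance scores

R = np.random.default_rng(10).integers(0, 2, (100, 50)).astype(float)
scores = puresvd_scores(R, rank=10)        # rank top-n items per user row
```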
|
In a recent paper, the last three authors showed that a game-theoretic
$p$-harmonic function $v$ is characterized by an asymptotic mean value property
with respect to a kind of mean value $\nu_p^r[v](x)$ defined variationally on
balls $B_r(x)$. In this paper, in a domain $\Omega\subset\mathbb{R}^N$, $N\ge
2$, we consider the operator $\mu_p^\varepsilon$, acting on continuous
functions on $\overline{\Omega}$, defined by the formula
$\mu_p^\varepsilon[v](x)=\nu^{r_\varepsilon(x)}_p[v](x)$, where
$r_\varepsilon(x)=\min[\varepsilon,\mathrm{dist}(x,\Gamma)]$ and $\Gamma$
denotes the boundary of $\Omega$. We first derive various properties of
$\mu^\varepsilon_p$, such as continuity and monotonicity. Then, we prove the
existence and uniqueness of a function $u^\varepsilon\in C(\overline{\Omega})$
satisfying the Dirichlet-type problem: $$ u(x)=\mu_p^\varepsilon[u](x) \
\mbox{ for every } \ x\in\Omega,\quad u=g \ \mbox{ on } \ \Gamma, $$ for any
given function $g\in C(\Gamma)$. This result holds if we assume the existence
of a suitable notion of barrier for all points of $\Gamma$. The function
$u^\varepsilon$ is what we call the \textit{variational} $p$-harmonious
function with Dirichlet boundary data $g$, and is obtained by means of a
Perron-type method based on a comparison principle. We then show that the
family $\{u^\varepsilon\}_{\varepsilon>0}$ gives an approximation scheme for
the viscosity solution $u\in C(\overline{\Omega})$ of $$ \Delta_p^G u=0 \
\mbox{ in }\Omega, \quad u=g \ \mbox{ on } \ \Gamma, $$ where $\Delta_p^G$ is
the so-called game-theoretic (or homogeneous) $p$-Laplace operator. In fact,
we prove that $u^\varepsilon$ converges to $u$, uniformly on
$\overline{\Omega}$, as $\varepsilon\to 0$.
|
We carry out a detailed large-scale data analysis of price response functions
in the spot foreign exchange market for different years and different time
scales. Such response functions provide quantitative information on the
deviation from Markovian behavior. The price response functions show an
increase to a maximum followed by a slow decrease as the time lag grows, both
on the trade time scale and on the physical time scale, for all analyzed years.
Furthermore, we use a price increment point (pip) bid-ask spread definition to
group different foreign exchange pairs and analyze the impact of the spread in
the price response functions. We find that large pip spreads have a stronger
impact on the response. This is similar to what has been found in stock
markets.
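A standard estimator for this kind of response function, sketched on toy order-flow data (the definition matches common usage; the paper's exact conventions may differ):

```python
import numpy as np

def price_response(midprice, trade_sign, max_lag):
    """Empirical response R(tau) = < sign_t * (m_{t+tau} - m_t) >."""
    return np.array([
        np.mean(trade_sign[:-tau] * (midprice[tau:] - midprice[:-tau]))
        for tau in range(1, max_lag + 1)
    ])

# toy series: trade signs with weak autocorrelation plus price impact
rng = np.random.default_rng(9)
signs = np.sign(rng.normal(size=10_000)
                + 0.3 * np.sin(np.arange(10_000) / 50))
mid = np.cumsum(0.5 * signs + rng.normal(0, 1, signs.size))
R = price_response(mid, signs, max_lag=100)
```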
|
We consider the asymmetric simple exclusion process (ASEP) on $\mathbb{Z}$
started from step initial data and obtain the exact Lyapunov exponents for
$H_0(t)$, the integrated current of ASEP. As a corollary, we derive an explicit
formula for the upper-tail large deviation rate function for $-H_0(t)$. Our
result matches with the rate function for the integrated current of the totally
asymmetric simple exclusion process (TASEP) obtained in [Johansson
00](arXiv:math/9903134).
|