In this work, we consider the nonlocal obstacle problem with a given obstacle
$\psi$ in a bounded Lipschitz domain $\Omega$ in $\mathbb{R}^{d}$, such that
$\mathbb{K}_\psi^s=\{v\in H^s_0(\Omega):v\geq\psi \text{ a.e. in
}\Omega\}\neq\emptyset$, given by
\[u\in\mathbb{K}_\psi^s:\langle\mathcal{L}_au,v-u\rangle\geq\langle
F,v-u\rangle\quad\forall v\in\mathbb{K}^s_\psi,\] for $F\in H^{-s}(\Omega)$,
the dual space of $H^s_0(\Omega)$, $0<s<1$. The nonlocal operator
$\mathcal{L}_a:H^s_0(\Omega)\to H^{-s}(\Omega)$ is defined with a measurable,
bounded, strictly positive singular kernel $a(x,y)$, possibly not symmetric, by
\[\langle\mathcal{L}_au,v\rangle=P.V.\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}v(x)(u(x)-u(y))a(x,y)dydx=\mathcal{E}_a(u,v),\]
with $\mathcal{E}_a$ being a Dirichlet form. Also, the fractional operator
$\tilde{\mathcal{L}}_A=-D^s\cdot AD^s$, defined with the distributional Riesz
$s$-fractional derivative and a bounded matrix $A(x)$, corresponds to a
well-defined singular integral kernel. The corresponding $s$-fractional obstacle problem
converges as $s\nearrow1$ to the obstacle problem in $H^1_0(\Omega)$ with the
operator $-D\cdot AD$ given with the gradient $D$.
We mainly consider problems involving the bilinear form $\mathcal{E}_a$ with
one or two obstacles, and the N-membranes problem, deriving a weak maximum
principle, comparison properties, approximation by bounded penalization, and
the Lewy-Stampacchia inequalities. This provides regularity of the solutions,
including a global estimate in $L^\infty(\Omega)$, local H\"older regularity
when $a$ is symmetric, and local regularity in $W^{2s,p}_{loc}(\Omega)$ and
$C^1(\Omega)$ for fractional $s$-Laplacian obstacle-type problems. These novel
results are complemented with the extension of the Lewy-Stampacchia
inequalities to the order dual of $H^s_0(\Omega)$ and some remarks on the
associated $s$-capacity for general $\mathcal{L}_a$.
|
The application of reliable structural health monitoring (SHM) technologies
to operational wind turbine blades is a challenging task, due to the uncertain
nature of the environments they operate in. In this paper, a novel SHM
methodology, which uses Gaussian Processes (GPs), is proposed. The methodology
takes advantage of the fact that the blades on a turbine are nominally
identical in structural properties and encounter the same environmental and
operational variables (EOVs). The properties of interest are the first edgewise
frequencies of the blades. The GPs are used to predict the edge frequencies of
one blade given those of another, after the relationships between pairs of
blades have been learned while the blades are in a healthy state. In using this
approach, the proposed SHM methodology is able to identify when the blades
start behaving differently from one another over time. To validate the concept,
the proposed SHM system is applied to real onshore wind turbine blade data,
where some form of damage was known to have taken place. X-bar control chart
analysis of the residual errors between the GP predictions and actual
frequencies shows that the system successfully identified the onset of damage
as early as six months before it was found and remedied.
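As a minimal illustration of this pairwise idea (not the authors' exact pipeline; the synthetic data, kernel choice, and 3-sigma control limits below are invented assumptions), one can regress one blade's edgewise frequency on another's with a GP and monitor the residuals:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Synthetic healthy-state data: first edgewise frequencies of blades A and B [Hz]
f_A = rng.normal(0.65, 0.01, 300)
f_B = f_A + rng.normal(0.0, 0.002, 300)

# Learn the healthy blade-to-blade relationship on the first 200 samples
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(f_A[:200, None], f_B[:200])

# Monitoring phase: residuals between predicted and measured blade-B frequencies
residuals = f_B[200:] - gp.predict(f_A[200:, None])

# X-bar style control limits from the healthy residuals
train_res = f_B[:200] - gp.predict(f_A[:200, None])
ucl = train_res.mean() + 3 * train_res.std()
lcl = train_res.mean() - 3 * train_res.std()
alarms = (residuals > ucl) | (residuals < lcl)   # out-of-control -> possible damage
print(f"{alarms.sum()} out-of-control points")
```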
|
Multi-robot systems are an efficient method to explore and map an unknown
environment. The simultaneous localization and mapping (SLAM) algorithm is
common for single-robot systems; however, multiple robots can share their
respective map data in order to merge a larger global map. This thesis contributes to the
multi-robot mapping problem by considering cases in which robots have
communication range limitations. The architecture coordinates a team of robots
and a central server to explore an unknown environment by exploiting a
hierarchical choice structure. The coordination algorithms ensure that the
robots in the hierarchy choose frontier points that provide maximum information
gain, while maintaining viable communication amongst themselves and the central
computer through an ad-hoc relay network. In addition, the robots employ a
backup choice algorithm in cases when no valid frontier points remain by
arranging the communication relay network as a fireline back to the source.
This work contributes a scalable, efficient, and robust architecture towards
hybrid multi-robot mapping systems that take into account communication range
limitations. The architecture is tested in a simulation environment using
various maps.
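A minimal sketch of such a hierarchical, communication-constrained frontier choice (the data structures, information-gain model, and distance penalty are invented for illustration; the thesis's coordination and fireline-fallback algorithms are more elaborate):

```python
import math

def assign_frontiers(robots, frontiers, base_pos, comm_range):
    """Greedy hierarchical assignment: each robot, in hierarchy order, picks the
    unclaimed frontier with the highest information gain that keeps it within
    relay range of the base station or an already-assigned robot."""
    assignments, relay_points = {}, [base_pos]
    for robot in robots:                       # robots sorted by hierarchy rank
        best, best_score = None, -math.inf
        for f in frontiers:
            if f["id"] in assignments.values():
                continue                       # frontier already claimed
            score = f["info_gain"] - 0.1 * math.dist(robot["pos"], f["pos"])
            reachable = any(math.dist(f["pos"], p) <= comm_range
                            for p in relay_points)
            if reachable and score > best_score:
                best, best_score = f, score
        if best is not None:                   # else: fall back to relay duty
            assignments[robot["id"]] = best["id"]
            relay_points.append(best["pos"])
    return assignments
```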
|
We study pluripotential complex Monge-Amp\`ere flows in big cohomology
classes on compact K{\"a}hler manifolds. We use the Perron method, considering
pluripotential subsolutions to the Cauchy problem. We prove that, under natural
assumptions on the data, the upper envelope of all subsolutions is continuous
in space and semi-concave in time, and provides a unique pluripotential
solution with such regularity. We apply this theory to study pluripotential
K{\"a}hler-Ricci flows on compact K{\"a}hler manifolds of general type as well
as on stable varieties.
|
We present the abundance analyses of seven carbon-enhanced metal-poor (CEMP)
stars to understand the origin of carbon in them. We used high-resolution
optical spectra to derive abundances of various elements. We also used
low-resolution Near-Infrared (NIR) spectra to derive the abundance of O and
12C/13C from the CO molecular band and compared their values with those derived
from high-resolution optical spectra. We identified a good agreement between
the values. Thus, in cool CEMP stars, the NIR observations complement the
high-resolution optical observations to derive the oxygen abundance and the
12C/13C ratio. This enables us to probe fainter cool CEMP stars using NIR
spectroscopy. The C, N, and O abundances of all the program stars in this
study are consistent with binary mass transfer from a low-mass,
low-metallicity Asymptotic Giant Branch (AGB) companion, which is further
supported by the enhancement of neutron-capture elements and the detection of
radial velocity variations. One of the stars shows an abundance pattern
similar to a CEMP-s star, whereas the abundance patterns of the rest of the
stars satisfy the criteria required to classify them as CEMP-r/s stars. The
sub-classification of some of the stars studied here is revisited. The
abundance of neutron-capture elements in these CEMP-r/s stars resembles that
of i-process models, where proton ingestion episodes in the companion low-mass
low-metallicity AGB stars produce the neutron density required for the onset
of the i-process.
|
We study the feasibility of reaching the ultrastrong (USC) and deep-strong
coupling (DSC) regimes of light-matter interaction, in particular at the
resonance condition, with a superconducting charge qubit, also known as a
Cooper-pair box (CPB). We show that by shunting the charge qubit with a
high-impedance LC circuit, one can reach both the USC and DSC regimes,
exceeding the classical upper bound $|g|\leq \sqrt{\omega_q\omega_r}/2$ between two harmonic
systems with frequencies $\omega_q$ and $\omega_r$. In our case, the
fundamental model corresponds to an enhanced quantum Rabi model, which contains
a displacement field operator that breaks its internal parity symmetry.
Furthermore, we consider a multipartite device consisting of two CPBs
ultrastrongly coupled to an oscillator as a mediator and study a quantum state
transfer protocol between a pair of transmon qubits, all of them subjected to
local incoherent noise channels with realistic parameters. This work opens the
door for studying light-matter interactions beyond the quantum Rabi model at
extreme coupling strengths, providing a new building block for applications
within quantum computation and quantum information processing.
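For intuition, a small numerical sketch (pure NumPy, invented parameters) of the standard quantum Rabi Hamiltonian underlying these regimes; the paper's enhanced model adds a parity-breaking displacement term on top of this:

```python
import numpy as np

def rabi_hamiltonian(wq, wr, g, n_fock=40):
    """Quantum Rabi model: H = wr a'a + (wq/2) sigma_z + g sigma_x (a + a')."""
    a = np.diag(np.sqrt(np.arange(1, n_fock)), 1)   # bosonic annihilation operator
    I_f, I_q = np.eye(n_fock), np.eye(2)
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    return (wr * np.kron(I_q, a.T @ a)
            + 0.5 * wq * np.kron(sz, I_f)
            + g * np.kron(sx, a + a.T))

wq = wr = 1.0   # resonance; classical bound sqrt(wq*wr)/2 = 0.5
for g in [0.1, 0.5, 1.0, 2.0]:   # g/wr >= 0.1: USC, g/wr >= 1: DSC
    E = np.linalg.eigvalsh(rabi_hamiltonian(wq, wr, g))
    print(f"g = {g}: lowest gap = {E[1] - E[0]:.4f}")
```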
|
The observation of beam spin asymmetries in two-pion production in
semi-inclusive deep inelastic scattering off an unpolarized proton target is
reported. The data presented here were taken in the fall of 2018 with the
CLAS12 spectrometer using a 10.6 GeV longitudinally spin-polarized electron
beam delivered by CEBAF at JLab. The measured asymmetries provide the first
opportunity to extract the parton distribution function $e(x)$, which provides
information about the interaction between gluons and quarks, in a collinear
framework that offers cleaner access than previous measurements. The
asymmetries also constitute the first ever signal sensitive to the
helicity-dependent two-pion fragmentation function $G_1^\perp$. A clear sign
change is observed around the $\rho$ mass that appears in model calculations
and is indicative of the dependence of the produced pions on the helicity of
the fragmenting quark.
|
There is a growing interest in the community in making an embodied AI agent
perform a complicated task while interacting with an environment following
natural language directives. Recent studies have tackled the problem using
ALFRED, a well-designed dataset for the task, but achieved only very low
accuracy. This paper proposes a new method, which outperforms the previous
methods by a large margin. It is based on a combination of several new ideas.
One is a two-stage interpretation of the provided instructions. The method
first selects and interprets an instruction without using visual information,
yielding a tentative action sequence prediction. It then integrates the
prediction with the visual information etc., yielding the final prediction of
an action and an object. As the class of the object to interact with is
identified in the first stage, the method can accurately select the correct
object from the input image.
Moreover, our method considers multiple egocentric views of the environment and
extracts essential information by applying hierarchical attention conditioned
on the current instruction. This contributes to the accurate prediction of
actions for navigation. A preliminary version of the method won the ALFRED
Challenge 2020. The current version achieves a success rate of 4.45% in unseen
environments with a single view, which is further improved to 8.37% with
multiple views.
|
The behaviour of the generalised Riesz function defined by
\[S_{m,p}(x)=\sum_{k=0}^\infty \frac{(-1)^{k-1}x^k}{k!\,\zeta(mk+p)}\qquad (m\geq
1,\ p\geq 1)\] is considered for large positive values of $x$. A numerical
scheme is given to compute this function which enables the visualisation of its
asymptotic form. The two cases $m=2$, $p=1$ and $m=p=2$ (introduced
respectively by Hardy and Littlewood in 1918 and Riesz in 1915) are examined in
detail. It is found on numerical evidence that these functions appear to
exhibit the $x^{-1/4}$ and $x^{-3/4}$ decay, superimposed on an oscillatory
structure, required for the truth of the Riemann hypothesis.
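A minimal high-precision sketch of such a computation (direct summation with mpmath; the paper's actual scheme for visualising the asymptotics is more sophisticated, and the precision and truncation below are illustrative assumptions):

```python
from mpmath import mp, zeta, factorial, mpf

mp.dps = 60   # the alternating series suffers heavy cancellation for large x

def S(m, p, x, kmax=400):
    """Generalised Riesz function S_{m,p}(x) by direct summation."""
    total = mpf(0)
    for k in range(kmax):
        s = m * k + p
        if s == 1:
            continue      # zeta has a pole at 1, so the 1/zeta(1) term vanishes
        total += (-1) ** (k - 1) * mpf(x) ** k / (factorial(k) * zeta(s))
    return total

# Hardy-Littlewood case (m=2, p=1) and Riesz case (m=p=2)
for x in [10, 100, 1000]:
    print(x, S(2, 1, x), S(2, 2, x))
```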
|
We find stationary thin-brane geometries that are dual to
far-from-equilibrium steady states of two-dimensional holographic interfaces.
The flow of heat at the boundary agrees with the result of CFT and the known
energy-transport coefficients of the thin-brane model. We argue that by
entangling outgoing excitations the interface produces coarse-grained entropy
at a maximal rate, and point out similarities and differences with double-sided
black funnels. The non-compact, non-Killing and far-from-equilibrium event
horizon of our solutions coincides with the local (apparent) horizon on the
colder side, but lies behind it on the hotter side of the interface. We also
show that the thermal conductivity of a pair of interfaces jumps at the
Hawking-Page phase transition from a regime described by classical scatterers
to a quantum regime in which heat flows unobstructed.
|
Existing work on automated hate speech classification assumes that the
dataset is fixed and the classes are pre-defined. However, the amount of data
in social media increases every day, and the hot topics change rapidly,
requiring the classifiers to be able to continuously adapt to new data without
forgetting the previously learned knowledge. This ability, referred to as
lifelong learning, is crucial for the real-world application of hate speech
classifiers in social media. In this work, we propose lifelong learning of hate
speech classification on social media. To alleviate catastrophic forgetting, we
propose to use Variational Representation Learning (VRL) along with a memory
module based on LB-SOINN (Load-Balancing Self-Organizing Incremental Neural
Network). Experimentally, we show that combining variational representation
learning and the LB-SOINN memory module achieves better performance than the
commonly-used lifelong learning techniques.
|
Q-learning, which seeks to learn the optimal Q-function of a Markov decision
process (MDP) in a model-free fashion, lies at the heart of reinforcement
learning. When it comes to the synchronous setting (such that independent
samples for all state-action pairs are drawn from a generative model in each
iteration), substantial progress has been made towards understanding the sample
efficiency of Q-learning. Consider a $\gamma$-discounted infinite-horizon MDP
with state space $\mathcal{S}$ and action space $\mathcal{A}$: to yield an
entrywise $\varepsilon$-approximation of the optimal Q-function,
state-of-the-art theory for Q-learning requires a sample size exceeding the
order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^5\varepsilon^{2}}$,
which fails to match existing minimax lower bounds. This gives rise to natural
questions: what is the sharp sample complexity of Q-learning? Is Q-learning
provably sub-optimal? This paper addresses these questions for the synchronous
setting: (1) when $|\mathcal{A}|=1$ (so that Q-learning reduces to TD
learning), we prove that the sample complexity of TD learning is minimax
optimal and scales as $\frac{|\mathcal{S}|}{(1-\gamma)^3\varepsilon^2}$ (up to
log factor); (2) when $|\mathcal{A}|\geq 2$, we settle the sample complexity of
Q-learning to be on the order of
$\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4\varepsilon^2}$ (up to log
factor). Our theory unveils the strict sub-optimality of Q-learning when
$|\mathcal{A}|\geq 2$, and rigorizes the negative impact of over-estimation in
Q-learning. Finally, we extend our analysis to accommodate asynchronous
Q-learning (i.e., the case with Markovian samples), sharpening the horizon
dependency of its sample complexity to be $\frac{1}{(1-\gamma)^4}$.
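A minimal sketch of the synchronous Q-learning update analyzed here (toy tabular setting; the transition tensor, constant learning rate, and iteration count are placeholders, and the paper's learning-rate schedules matter for the sharp bounds):

```python
import numpy as np

def synchronous_q_learning(P, r, gamma=0.9, eta=0.1, iters=1000, seed=0):
    """P: (S, A, S) transition probabilities of the generative model,
    r: (S, A) rewards. Every (s, a) pair is updated in each iteration
    with its own independent next-state sample."""
    S, A = r.shape
    Q = np.zeros((S, A))
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        Q_new = Q.copy()
        for s in range(S):
            for a in range(A):
                s_next = rng.choice(S, p=P[s, a])           # generative sample
                target = r[s, a] + gamma * Q[s_next].max()  # empirical Bellman target
                Q_new[s, a] = (1 - eta) * Q[s, a] + eta * target
        Q = Q_new
    return Q   # with A == 1 this reduces to TD learning for policy evaluation
```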
|
We examine the bound state solutions of the Dirac equation under the spin and
pseudospin symmetries for a newly suggested combined potential, the Hulth\'en
plus a class of Yukawa potentials, including a Coulomb-like tensor interaction.
improved scheme is employed to deal with the centrifugal (pseudo-centrifugal)
term. Using the Nikiforov-Uvarov and SUSYQM methods, we analytically develop
the relativistic energy eigenvalues and associated Dirac spinor components of
wave functions. We find that both methods give entirely the same results.
Reductions of our results to some particular potential cases, useful for
other physical systems, are also discussed. We obtain complete agreement with
the findings of previous works. The spin and pseudospin bound state energy
spectra for various levels are presented in the absence as well as the presence
of tensor coupling. Both energy spectra are sensitive to the
quantum numbers $\kappa$ and $n$, as well as the parameter $\delta$. We also
notice that the degeneracies between Dirac spin and pseudospin doublet
eigenstate partners are completely removed by the tensor interaction. Finally,
we present the parameter space of allowable bound state regions of potential
strength $V_0$ with constants for both considered symmetry limits $C_S$ and
$C_{PS}$.
|
Heterogeneous graphs are a kind of data structure widely found in real life.
Nowadays, research on graph neural networks for heterogeneous graphs has
become more and more popular. Existing heterogeneous graph neural network
algorithms mainly follow two ideas: one is based on meta-paths and the other is
not. Meta-path-based approaches often require substantial manual preprocessing
and are difficult to extend to large-scale graphs. In this paper, we propose a
general heterogeneous message passing paradigm and design R-GSN, which does not
need meta-paths and considerably improves on the R-GCN baseline. Experiments
show that our R-GSN algorithm achieves state-of-the-art performance on the
ogbn-mag large-scale heterogeneous graph dataset.
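A minimal sketch of relation-wise heterogeneous message passing (an R-GCN-style layer shown only to fix ideas; R-GSN and the paper's general paradigm add aggregation and normalization choices beyond this, and all names below are illustrative):

```python
import torch

def hetero_layer(x, edges, W):
    """x: dict node_type -> (num_nodes, d) feature tensor.
    edges: dict (src_type, relation, dst_type) -> (src_idx, dst_idx) index tensors.
    W: dict relation -> (d, d) weight matrix, one per relation."""
    out = {nt: torch.zeros_like(h) for nt, h in x.items()}
    for (src_t, rel, dst_t), (src_idx, dst_idx) in edges.items():
        msg = x[src_t][src_idx] @ W[rel]        # transform messages per relation
        out[dst_t].index_add_(0, dst_idx, msg)  # sum-aggregate at destinations
    return {nt: torch.relu(h) for nt, h in out.items()}
```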
|
Since model quantization helps to reduce the model size and computation
latency, it has been successfully applied in many applications of mobile
phones, embedded devices and smart chips. The mixed-precision quantization
model can match different quantization bit-precisions according to the
sensitivity of different layers to achieve great performance. However, it is a
difficult problem to quickly determine the quantization bit-precision of each
layer in deep neural networks according to some constraints (e.g., hardware
resources, energy consumption, model size and computation latency). To address
this issue, we propose a novel sequential single path search (SSPS) method for
mixed-precision quantization, in which the given constraints are introduced into
its loss function to guide the search process. A single-path search cell is used
to build a fully differentiable supernet, which can be optimized by
gradient-based algorithms. Moreover, we sequentially determine the candidate
precisions according to the selection certainties to exponentially reduce the
search space and speed up the convergence of the search process. Experiments
show that our method can efficiently search the mixed-precision models for
different architectures (e.g., ResNet-20, 18, 34, 50 and MobileNet-V2) and
datasets (e.g., CIFAR-10, ImageNet and COCO) under given constraints, and our
experimental results verify that the models found by SSPS significantly
outperform their uniform-precision counterparts.
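A sketch of the differentiable-supernet idea behind such searches (a generic weighted-candidate layer; the fake quantizer, candidate bit set, and the sequential pruning by selection certainty are simplified stand-ins, not SSPS itself):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quant(w, bits):
    """Uniform symmetric fake quantization (a straight-through estimator
    would be used for gradients in a real implementation)."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

class MixedPrecisionConv(nn.Module):
    def __init__(self, conv, candidate_bits=(2, 4, 8)):
        super().__init__()
        self.conv, self.bits = conv, candidate_bits
        self.alpha = nn.Parameter(torch.zeros(len(candidate_bits)))  # arch params

    def forward(self, x):
        p = torch.softmax(self.alpha, dim=0)   # selection certainties
        w = sum(pi * fake_quant(self.conv.weight, b)
                for pi, b in zip(p, self.bits))
        return F.conv2d(x, w, self.conv.bias,
                        self.conv.stride, self.conv.padding)
```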
|
This paper studies the distributed resource allocation problem in multi-agent
systems, where all the agents cooperatively minimize the sum of their cost
functions with global resource constraints over stochastic communication
networks. This problem arises from many practical domains such as economic
dispatch in smart grid, task assignment, and power allocation in robotic
control. Most existing works cannot converge to the optimal solution if
states deviate from the feasible region due to disturbances caused by
environmental noise, misoperation, malicious attacks, etc. To solve this
problem, we propose a
distributed deviation-tracking resource allocation algorithm and prove that it
linearly converges to the optimal solution with constant stepsizes. We further
explore the resilience properties of the proposed algorithm. Most importantly,
the algorithm still converges to the optimal solution under disturbance
injection and random communication failures. In order to improve the convergence
rate, the optimal stepsizes for the fastest convergence rate are established.
We also prove the algorithm converges linearly to the optimal solution in mean
square even with uncoordinated stepsizes, i.e., agents are allowed to employ
different stepsizes. Simulations are provided to verify the theoretical
results.
|
Methods based on convolutional neural networks have improved the performance
of biomedical image segmentation. However, most of these methods cannot
efficiently segment objects of variable sizes and train on small and biased
datasets, which are common in biomedical use cases. While methods exist that
incorporate multi-scale fusion approaches to address the challenges arising
with variable sizes, they usually use complex models that are more suitable for
general semantic segmentation computer vision problems. In this paper, we
propose a novel architecture called MSRF-Net, which is specially designed for
medical image segmentation tasks. The proposed MSRF-Net is able to exchange
multi-scale features of varying receptive fields using a dual-scale dense
fusion block (DSDF). Our DSDF block can exchange information rigorously across
two different resolution scales, and our MSRF sub-network uses multiple DSDF
blocks in sequence to perform multi-scale fusion. This allows the preservation
of resolution, improved information flow, and propagation of both high- and
low-level features to obtain accurate segmentation maps. The proposed MSRF-Net
is able to capture object variability and provides improved results on
different biomedical datasets. Extensive experiments demonstrate that MSRF-Net
outperforms most cutting-edge medical image segmentation methods. It advances
the performance on four publicly available datasets and is also more
generalizable than state-of-the-art methods.
|
Course estimation is a key component for the development of autonomous
navigation systems for robots. While state-of-the-art methods widely use
visual-based algorithms, it is worth noting that they all fail to deal with the
complexity of the real world by being computationally greedy and sometimes too
slow. They often require obstacles to be highly textured to improve the overall
performance, particularly when the obstacle is located within the focus of
expansion (FOE) where the optic flow (OF) is almost null. This study proposes
the FAst ITerative Half-plane (FAITH) method to determine the course of a micro
air vehicle (MAV). This is achieved by means of an event-based camera, along
with a fast RANSAC-based algorithm that uses event-based OF to determine the
FOE. The performance is validated by means of a benchmark on a simulated
environment and then tested on a dataset collected for indoor obstacle
avoidance. Our results show that the computational efficiency of our solution
outperforms state-of-the-art methods while keeping a high level of accuracy.
This has been further demonstrated onboard an MAV equipped with an event-based
camera, showing that our event-based FOE estimation can be achieved online
onboard tiny drones, thus opening the path towards fully neuromorphic solutions
for autonomous obstacle avoidance and navigation onboard MAVs.
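A minimal sketch of FOE estimation from optic-flow vectors via RANSAC (a generic line-intersection formulation with invented tolerances; FAITH's half-plane iteration is a faster refinement of this basic idea):

```python
import numpy as np

def foe_ransac(pts, flow, iters=200, tol=5.0, seed=0):
    """Each flow vector defines a line through its pixel; under pure translation
    all such lines pass through the focus of expansion (FOE). RANSAC picks the
    intersection point consistent with the most flow vectors."""
    rng = np.random.default_rng(seed)
    n = np.stack([flow[:, 1], -flow[:, 0]], axis=1)      # line normals (perp. to flow)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    c = np.sum(n * pts, axis=1)                          # line offsets: n . p = c
    best_foe, best_count = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        A, b = n[[i, j]], c[[i, j]]
        if abs(np.linalg.det(A)) < 1e-9:
            continue                                     # near-parallel sample
        p = np.linalg.solve(A, b)                        # candidate FOE
        count = int((np.abs(n @ p - c) < tol).sum())     # inliers by line distance
        if count > best_count:
            best_foe, best_count = p, count
    return best_foe, best_count
```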
|
We investigate the asymptotic fluctuation of three interacting particle
systems: the geometric q-TASEP, the geometric q-PushTASEP and the q-PushASEP.
We prove that the rescaled particle position converges to the GUE Tracy-Widom
distribution in the homogeneous case. If the jump rates of the first finitely
many particles are perturbed in the first two models, we obtain Baik-Ben
Arous-P\'ech\'e and Gaussian limiting fluctuations.
|
The names of variables and functions serve as implicit documentation and are
instrumental for program comprehension. But choosing good meaningful names is
hard. We perform a sequence of experiments in which a total of 334 subjects are
required to choose names in given programming scenarios. The first experiment
shows that the probability that two developers would select the same name is
low: in the 47 instances in our experiments the median probability was only
6.9%. At the same time, given that a specific name is chosen, it is usually
understood by the majority of developers. Analysis of the names given in the
experiment suggests a model where naming is a (not necessarily cognizant or
serial) three-step process: (1) selecting the concepts to include in the name,
(2) choosing the words to represent each concept, and (3) constructing a name
using these words. A followup experiment, using the same experimental setup,
then checked whether using this model explicitly can improve the quality of
names. The results were that names selected by subjects using the model were
judged by two independent judges to be superior to names chosen in the original
experiment by a ratio of two-to-one. Using the model appears to encourage the
use of more concepts and longer names.
|
We consider the Navier-Stokes system in three dimensions perturbed by a
transport noise which is sufficiently smooth in space and rough in time. The
existence of a weak solution was proved recently, however, as in the
deterministic setting the question of uniqueness remains a major open problem.
An important feature of systems with uniqueness is the semigroup property
satisfied by their solutions. Without uniqueness, this property cannot hold
generally. We select a system of solutions satisfying the semigroup property
with appropriately shifted rough path. In addition, the selected solutions
respect the well-accepted admissibility criterion for physical solutions,
namely, maximization of the energy dissipation. Finally, under suitable
assumptions on the driving rough path, we show that the Navier-Stokes system
generates a measurable random dynamical system. To the best of our knowledge,
this is the first construction of a measurable single-valued random dynamical
system in the state space for an SPDE without uniqueness.
|
This paper empirically examines how the opening of K-12 schools and colleges
is associated with the spread of COVID-19 using county-level panel data in the
United States. Using data on foot traffic and K-12 school opening plans, we
analyze how an increase in visits to schools and the opening of schools with
different teaching methods (in-person, hybrid, and remote) are related to the
two-week forward growth rate of confirmed COVID-19 cases. Our debiased panel data
regression analysis with a set of county dummies, interactions of state and
week dummies, and other controls shows that an increase in visits to both K-12
schools and colleges is associated with a subsequent increase in case growth
rates. The estimates indicate that fully opening K-12 schools with in-person
learning is associated with a 5 (SE = 2) percentage-point increase in the
growth rate of cases. We also find that the positive association of K-12 school
visits or in-person school openings with case growth is stronger for counties
that do not require staff to wear masks at schools. These results have a causal
interpretation in a structural model with unobserved county and time
confounders. Sensitivity analysis shows that the baseline results are robust to
timing assumptions and alternative specifications.
|
In cell-free massive multiple-input multiple-output (MIMO) the fluctuations
of the channel gain from the access points to a user are large due to the
distributed topology of the system. Because of these fluctuations, data
decoding schemes that treat the channel as deterministic perform inefficiently.
A way to reduce the channel fluctuations is to design a precoding scheme that
equalizes the effective channel gain seen by the users. Conjugate beamforming
(CB) contributes little to hardening the effective channel at the users. In this
work, we propose a variant of CB dubbed enhanced normalized CB (ECB), in which
the precoding vector consists of the conjugate of the channel estimate
normalized by its squared norm. For this scheme, we derive an exact closed-form
expression for an achievable downlink spectral efficiency (SE), accounting for
channel estimation errors, pilot reuse and the users' lack of channel state
information (CSI), assuming independent Rayleigh fading channels. We also
devise an optimal max-min fairness power allocation based only on large-scale
fading quantities. ECB greatly boosts the channel hardening enabling the users
to reliably decode data relying only on statistical CSI. As the provided
effective channel is nearly deterministic, acquiring CSI at the users does not
yield a significant gain.
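A toy NumPy illustration of the ECB precoder and the channel hardening it provides (the dimensions and i.i.d. Rayleigh channels are invented; with perfect CSI the effective gain is exactly one for every user):

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 64, 8   # service antennas, users
H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)

# Conjugate beamforming vs. enhanced normalized CB (ECB)
W_cb = H.conj()
W_ecb = H.conj() / np.linalg.norm(H, axis=0) ** 2   # conj(h_k) / ||h_k||^2

# Effective downlink channel gain of user k: h_k^T w_k
g_cb = np.einsum('mk,mk->k', H, W_cb)    # fluctuates with ||h_k||^2
g_ecb = np.einsum('mk,mk->k', H, W_ecb)  # identically 1: hardened channel
print(np.std(g_cb.real), np.std(g_ecb.real))
```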
|
The healthcare industry has witnessed significant transformations in e-health
services where Electronic Health Records (EHRs) are transferred to mobile edge
clouds to facilitate healthcare. Many edge cloud-based system designs have been
proposed, but some technical challenges still remain, such as low quality of
service (QoS), data privacy and system security due to centralized healthcare
architectures. In this paper, we propose a novel hybrid approach of data
offloading and data sharing for healthcare using edge cloud and blockchain.
First, an efficient data offloading scheme is proposed where IoT health data
can be offloaded to nearby edge servers for data processing with privacy
awareness. Then, a data sharing scheme is integrated to enable data exchange
among healthcare users via blockchain. Particularly, a trustworthy access
control mechanism is developed using smart contracts for access authentication
to achieve secure EHRs sharing. Implementation results from extensive
real-world experiments show the superior advantages of the proposal over the
existing schemes in terms of improved QoS, enhanced data privacy and security,
and low smart contract costs.
|
Skew completable unimodular rows of odd length are completable over a
polynomial extension of a local ring if the dimension of the local ring and the
length of the unimodular rows are the same.
|
In this paper, we design receivers for filter bank multicarrier-based
(FBMC-based) massive MIMO considering practical aspects such as channel
estimation and equalization. In particular, we propose a spectrally efficient
pilot structure and a channel estimation technique in the uplink to jointly
estimate all the users' channel impulse responses. We mathematically analyze
our proposed channel estimator and find the statistics of the channel
estimation errors. These statistics are incorporated into our proposed
equalizers to deal with the imperfect channel state information (CSI) effect.
We revisit the channel equalization problem for FBMC-based massive MIMO,
address the shortcomings of the existing equalizers in the literature, and make
them more applicable to practical scenarios. The proposed receiver in this
paper consists of two stages. In the first stage, a linear combining of the
received signals at the base station (BS) antennas provides a coarse channel
equalization and removes any multiuser interference. In the second stage, a per
subcarrier fractionally spaced equalizer (FSE) takes care of any residual
distortion of the channel for the user of interest. We propose an FSE design
based on the equivalent channel at the linear combiner output. This enables the
applicability of our proposed technique to small and/or distributed antenna
setups such as cell-free massive MIMO. Finally, the efficacy of the proposed
techniques is corroborated through numerical analysis.
|
We study the band structure of self-adjoint elliptic operators $\mathbb{A}_g=
-\nabla \cdot \sigma_{g} \nabla$, where $\sigma_g$ has the symmetries of a
honeycomb tiling of $\mathbb{R}^2$. We focus on the case where $\sigma_{g}$ is
a real-valued scalar: $\sigma_{g}=1$ within identical, disjoint "inclusions",
centered at vertices of a honeycomb lattice, and $\sigma_{g}=g \gg1 $ (high
contrast) in the complement of the inclusion set (bulk). Such operators govern,
e.g. transverse electric (TE) modes in photonic crystal media consisting of
high dielectric constant inclusions (semi-conductor pillars) within a
homogeneous lower contrast bulk (air), a configuration used in many physical
studies. Our approach, which is based on monotonicity properties of the
associated energy form, extends to a class of high contrast elliptic operators
that model heterogeneous and anisotropic honeycomb media.
Our results concern the global behavior of dispersion surfaces, and the
existence of conical crossings (Dirac points) occurring in the lowest two
energy bands as well as in bands arbitrarily high in the spectrum. Dirac points
are the source of important phenomena in fundamental and applied physics, e.g.
graphene and its artificial analogues, and topological insulators. The key
hypotheses are the non-vanishing of the Dirac (Fermi) velocity $v_D(g)$,
verified numerically, and a spectral isolation condition, verified analytically
in many configurations. Asymptotic expansions, to any order in $g^{-1}$, of
Dirac point eigenpairs and $v_D(g)$ are derived with error bounds.
Our study illuminates differences between the high contrast behavior of
$\mathbb{A}_g$ and the corresponding strong binding regime for Schr\"odinger
operators.
|
Effective theories describing black hole exteriors contain many open-system
features due to the large number of gapless degrees of freedom that lie beyond
reach across the horizon. A simple solvable Caldeira-Leggett type model of a
quantum field interacting within a small area with many unmeasured thermal
degrees of freedom was recently proposed in arXiv:2106.09854 to provide a toy
model of this kind of dynamics against which more complete black hole
calculations might be compared. We here compute the response of a simple
Unruh-DeWitt detector (or qubit) interacting with a massless quantum field
$\phi$ coupled to such a hotspot. Our treatment differs from traditional
treatments of Unruh-DeWitt detectors by using Open-EFT tools to reliably
calculate the qubit's late-time behaviour. We use these tools to determine the
efficiency with which the qubit thermalizes as a function of its proximity to
the hotspot. We identify a Markovian regime in which thermalization does occur,
though only for qubits closer to the hotspot than a characteristic distance
scale set by the $\phi$-hotspot coupling. We compute the thermalization time,
and find that it varies inversely with the $\phi$-qubit coupling strength in
the standard way.
|
The realization of an efficient quantum optical interface for multi-qubit
systems is an outstanding challenge in science and engineering. Using two atoms
in individually-controlled optical tweezers coupled to a nanofabricated
photonic crystal cavity, we demonstrate entanglement generation, fast
non-destructive readout, and full quantum control of atomic qubits. The
entangled state is verified in free space after being transported away from the
cavity by encoding the qubits into long-lived states and using dynamical
decoupling. Our approach bridges quantum operations at an optical link and in
free space with a coherent one-way transport, potentially enabling an
integrated optical interface for atomic quantum processors.
|
This paper provides a general overview of different perspectives and studies
on trust, offers a definition of trust, and provides factors that play a
substantial role in developing social trust, and shows from which perspectives
it can be fostered. The results showed that trust plays an important role in
the success of organizations involved in cross-national strategic partnerships.
Trust can reduce transaction costs, promote inter-organizational
relationships, and improve relationships between managers and subordinates.
|
Artificial Intelligence (AI) technologies have long been positioned as a tool
to provide crucial data-driven decision support to people. In this survey
paper, we look at how AI in general, and collaboration assistants (CAs or
chatbots for short) in particular, have been used during a true global exigency
- the COVID-19 pandemic. The key observation is that chatbots missed their
"Apollo moment" when they could have really provided contextual, personalized,
reliable decision support at scale that the state-of-the-art makes possible. We
review the capabilities and methods that are currently feasible, identify the
potential that chatbots could have fulfilled, the use-cases they were deployed
on, the challenges they faced and the gaps that persisted, and draw lessons that, if
implemented, would make them more relevant in future health emergencies.
|
Knowledge distillation~(KD) is an effective learning paradigm for improving
the performance of lightweight student networks by utilizing additional
supervision knowledge distilled from teacher networks. Most pioneering studies
either learn from only a single teacher in their distillation learning methods,
neglecting the potential that a student can learn from multiple teachers
simultaneously, or simply treat each teacher to be equally important, unable to
reveal the different importance of teachers for specific examples. To bridge
this gap, we propose a novel adaptive multi-teacher multi-level knowledge
distillation learning framework~(AMTML-KD), which consists of two novel insights:
(i) associating each teacher with a latent representation to adaptively learn
instance-level teacher importance weights which are leveraged for acquiring
integrated soft-targets~(high-level knowledge) and (ii) enabling the
intermediate-level hints~(intermediate-level knowledge) to be gathered from
multiple teachers by the proposed multi-group hint strategy. As such, a student
model can learn multi-level knowledge from multiple teachers through AMTML-KD.
Extensive results on publicly available datasets demonstrate that the proposed
learning framework enables the student to achieve better performance than
strong competitors.
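A minimal sketch of the instance-level weighted soft-target part of such a framework (the gating scores standing in for the latent-representation weighting are assumed to come from a small learned network; the multi-group hint strategy is omitted):

```python
import torch
import torch.nn.functional as F

def weighted_multi_teacher_kd(student_logits, teacher_logits, gate_scores, T=4.0):
    """student_logits: (B, C); teacher_logits: (N, B, C) from N teachers;
    gate_scores: (B, N) per-instance teacher importance scores."""
    w = torch.softmax(gate_scores, dim=1)                      # instance-level weights
    teacher_probs = torch.softmax(teacher_logits / T, dim=-1)  # per-teacher soft targets
    soft_target = torch.einsum('bn,nbc->bc', w, teacher_probs) # integrated soft target
    log_p = F.log_softmax(student_logits / T, dim=-1)
    return -(soft_target * log_p).sum(dim=-1).mean() * T * T   # distillation loss
```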
|
The vibrational quenching cross sections and corresponding low-temperature
rate constants for the v = 1 and v = 2 states of CN- colliding with He and Ar
atoms have been computed ab initio using new three dimensional potential energy
surfaces. Little work has so far been carried out on low-energy vibrationally
inelastic collisions for anions with neutral atoms. The cross sections and
rates, calculated at energies and temperatures relevant for both ion traps and
astrochemical modelling, are found to be even smaller than those of the similar
C2-/He and C2-/Ar systems, which are in turn of the order of those existing for
collisions involving neutral diatom-atom systems. The implications of the
rather small rate constants computed in the present case are discussed with
regard to their possible role in the dynamics of molecular cooling and in the
evolution of astrochemical modelling networks.
|
Non-convex optimization is ubiquitous in modern machine learning. Researchers
devise non-convex objective functions and optimize them using off-the-shelf
optimizers such as stochastic gradient descent and its variants, which leverage
the local geometry and update iteratively. Even though solving non-convex
functions is NP-hard in the worst case, the optimization quality in practice is
often not an issue -- optimizers are largely believed to find approximate
global minima. Researchers hypothesize a unified explanation for this
intriguing phenomenon: most of the local minima of the practically-used
objectives are approximately global minima. We rigorously formalize it for
concrete instances of machine learning problems.
|
Nowadays, analysis of Transparent Environmental Microorganism Images (T-EM
images) in the field of computer vision has gradually become a new and
interesting research hotspot. This paper compares the classification performance
of different deep learning models on T-EM images, which are challenging to analyze. We
crop the T-EM images into 8 * 8 and 224 * 224 pixel patches in the same
proportion and then divide the two different pixel patches into foreground and
background according to ground truth. We also use four convolutional neural
networks and a novel ViT network model to compare the foreground and background
classification experiments. We conclude that ViT performs the worst in
classifying 8 * 8 pixel patches, but it outperforms most convolutional neural
networks in classifying 224 * 224 pixel patches.
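A small sketch of the patch preparation step described above (non-overlapping tiles labelled foreground/background by the ground-truth mask; the majority threshold is an assumption):

```python
import numpy as np

def label_patches(image, mask, patch=8, fg_thresh=0.5):
    """Crop an image into non-overlapping patch x patch tiles and label each
    tile foreground (1) or background (0) by its ground-truth mask coverage."""
    H, W = mask.shape
    tiles, labels = [], []
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            tiles.append(image[y:y + patch, x:x + patch])
            labels.append(int(mask[y:y + patch, x:x + patch].mean() >= fg_thresh))
    return np.stack(tiles), np.array(labels)

# e.g. tiles_8, y_8 = label_patches(img, gt, patch=8)
#      tiles_224, y_224 = label_patches(img, gt, patch=224)
```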
|
Given finite configurations $P_1, \dots, P_n \subset \mathbb{R}^d$, let us
denote by $\mathbf{m}_{\mathbb{R}^d}(P_1, \dots, P_n)$ the maximum density a
set $A \subseteq \mathbb{R}^d$ can have without containing congruent copies of
any $P_i$. We will initiate the study of this geometrical parameter, called the
independence density of the considered configurations, and give several results
we believe are interesting. For instance we show that, under suitable size and
non-degeneracy conditions, $\mathbf{m}_{\mathbb{R}^d}(t_1 P_1, t_2 P_2, \dots,
t_n P_n)$ progressively `untangles' and tends to $\prod_{i=1}^n
\mathbf{m}_{\mathbb{R}^d}(P_i)$ as the ratios $t_{i+1}/t_i$ between consecutive
dilation parameters grow large; this shows an exponential decay of the density
when forbidding multiple dilates of a given configuration, and gives a common
generalization of theorems by Bourgain and by Bukh in geometric Ramsey theory.
We also consider the analogous parameter $\mathbf{m}_{S^d}(P_1, \dots, P_n)$ on
the more complicated framework of sets on the unit sphere $S^d$, obtaining the
corresponding results in this setting.
|
In this paper, we present a distributed resource allocation mechanism in
cognitive radio networks, based on a new coopetition methodology, which
combines the advantages of node competition and cooperation. We postulate that
this new method allows for fully distributed resource management between
cognitive radio devices. The presented framework is generic; however, we
consider its application in OFDMA networks. Coopetition takes the best
from cooperative and competitive problem formulation and provides the
opportunity to control the balance between fairness and spectral efficiency
(SE) of resource allocation. Simulation results confirm that coopetition allows
for efficient resource utilization, and may be used practically in wireless
cognitive networks.
|
We advocate profunctors between posets as compared to order-preserving maps. We
introduce the graph and ascent of such profunctors. We apply this in
commutative algebra where these give classes of Alexander dual square-free
monomial ideals giving the full and natural generalized setting of isotonian
ideals and letterplace ideals for posets. We study the poset of profunctors
from ${\mathbb N}$ to ${\mathbb N}$. Such profunctors identify as
order-preserving maps $f : {\mathbb N} \rightarrow {\mathbb N} \cup \{\infty \}$. For
our applications to infinite posets we also introduce a topology on the set of
profunctors between two posets and study its properties.
|
The quasi-one-dimensional organic conductors (TMTTF)$_2X$ with
non-centrosymmetric anions commonly undergo charge- and anion-order transitions
upon cooling. While for compounds with tetrahedral anions ($X$ = BF$_4^-$,
ReO$_4^-$, and ClO$_4^-$) the charge-ordered phase is rather well understood,
the situation is less clear in the case of planar triangular anions, such as
(TMTTF)$_2$NO$_3$. Here we explore the electronic and structural transitions by
transport experiments, optical and magnetic spectroscopy. This way we analyze
the temperature dependence of the charge imbalance 2$\delta$ and an activated
behavior of $\rho(T)$ with $\Delta_{\rm CO}\approx 530$~K below $T_{\rm CO} =
250$~K. Since (TMTTF)$_2$NO$_3$ follows the universal relation between charge
imbalance 2$\delta$ and size of the gap $\Delta_{\rm CO}$, our findings suggest
that charge order is determined by TMTTF stacks with little influence of the
anions. Clear signatures of anion ordering are detected at $T_{\rm AO}=50$~K.
The tetramerization affects the dc transport, the vibrational features of
donors and acceptors, and leads to formation of spin singlets.
|
Improving our knowledge of global Milky Way (MW) properties is critical for
connecting the detailed measurements only possible from within our Galaxy to
our understanding of the broader galaxy population. We train Gaussian Process
Regression (GPR) models on SDSS galaxies to map from galaxy properties (stellar
mass, apparent axis ratio, star formation rate, bulge-to-total ratio, disk
scale length, and bar vote fraction) to UV (GALEX $FUV/NUV$), optical (SDSS
$ugriz$) and IR (2MASS $JHKs$ and WISE $W1/W2/W3/W4$) fluxes and uncertainties.
With these models we estimate the photometric properties of the MW, resulting
in a full UV-to-IR spectral energy distribution (SED) as it would be measured
externally, viewed face-on. We confirm that the Milky Way lies in the green
valley in optical diagnostic diagrams, but show for the first time that the MW
is in the star-forming region in standard UV and IR diagnostics --
characteristic of the population of red spiral galaxies. Although our GPR
method predicts one band at a time, the resulting MW UV--IR SED is consistent
with SEDs of local spirals with characteristics broadly similar to the MW,
suggesting that these independent predictions can be combined reliably. Our
UV--IR SED will be invaluable for reconstructing the MW's star formation
history using the same tools employed for external galaxies, allowing
comparisons of results from \textit{in situ} measurements to those from the
methods used for extra-galactic objects.
|
Preserving privacy is a growing concern in our society where sensors and
cameras are ubiquitous. In this work, for the first time, we propose a
trainable image acquisition method that removes the sensitive identity
revealing information in the optical domain before it reaches the image sensor.
The method benefits from a trainable optical convolution kernel which transmits
the desired information while filtering out the sensitive content. As the
sensitive content is suppressed before it reaches the image sensor, it does not
enter the digital domain and is therefore unretrievable by any sort of privacy
attack. This is in contrast with the current digital privacy-preserving methods
that are all vulnerable to direct access attack. Also, in contrast with the
previous optical privacy-preserving methods that cannot be trained, our method
is data-driven and optimized for the specific application at hand. Moreover,
there is no additional computation, memory, or power burden on the acquisition
system since this processing happens passively in the optical domain and can
even be used together and on top of the fully digital privacy-preserving
systems. The proposed approach is adaptable to different digital neural
networks and content. We demonstrate it for several scenarios such as smile
detection as the desired attribute while the gender is filtered out as the
sensitive content. We trained the optical kernel in conjunction with two
adversarial neural networks where the analysis network tries to detect the
desired attribute and the adversarial network tries to detect the sensitive
content. We show that this method can reduce 65.1% of sensitive content when it
is selected to be the gender and it only loses 7.3% of the desired content.
Moreover, we reconstruct the original faces using a deep reconstruction
method, which confirms the ineffectiveness of reconstruction attacks in
obtaining the sensitive content.
|
Sixth-generation wireless communication (6G) will be an integrated
architecture of "space, air, ground and sea". One of the most difficult part of
this architecture is the underwater information acquisition which need to
transmitt information cross the interface between water and air.In this
senario, ocean of things (OoT) will play an important role, because it can
serve as a hub connecting Internet of things (IoT) and Internet of underwater
things (IoUT). OoT device not only can collect data through underwater methods,
but also can utilize radio frequence over the air. For underwater
communications, underwater acoustic communications (UWA COMMs) is the most
effective way for OoT devices to exchange information, but it is always
tormented by doppler shift and synchronization errors. In this paper, in order
to overcome UWA tough conditions, a deep neural networks based receiver for
underwater acoustic chirp communication, called C-DNN, is proposed. Moreover,
to improve the performance of DL-model and solve the problem of model
generalization, we also proposed a novel federated meta learning (FML) enhanced
acoustic radio cooperative (ARC) framework, dubbed ARC/FML, to do transfer.
Particularly, tractable expressions are derived for the convergence rate of FML
in a wireless setting, accounting for effects from both scheduling ratio, local
epoch and the data amount on a single node.From our analysis and simulation
results, it is shown that, the proposed C-DNN can provide a better BER
performance and lower complexity than classical matched filter (MF) in
underwater acoustic communications scenario. The ARC/FML framework has good
convergence under a variety of channels than federated learning (FL). In
summary, the proposed ARC/FML for OoT is a promising scheme for information
exchange across water and air.
|
The U.S. Food & Drug Administration (FDA) requires that e-cigarette
advertisements include a prominent warning label that reminds consumers that
nicotine is addictive. However, the high volume of vaping-related posts on
social media makes compliance auditing expensive and time-consuming, suggesting
that an automated, scalable method is needed. We sought to develop and evaluate
a deep learning system designed to automatically determine if an Instagram post
promotes vaping, and if so, if an FDA-compliant warning label was included or
if a non-compliant warning label was visible in the image. We compiled and
labeled a dataset of 4,363 Instagram images, of which 44% were vaping-related,
3% contained FDA-compliant warning labels, and 4% contained non-compliant
labels. Using a 20% test set for evaluation, we tested multiple neural network
variations: image processing backbone model (Inceptionv3, ResNet50,
EfficientNet), data augmentation, progressive layer unfreezing, output bias
initialization designed for class imbalance, and multitask learning. Our final
model achieved an area under the curve (AUC) and [accuracy] of 0.97 [92%] on
vaping classification, 0.99 [99%] on FDA-compliant warning labels, and 0.94
[97%] on non-compliant warning labels. We conclude that deep learning models
can effectively identify vaping posts on Instagram and track compliance with
FDA warning label requirements.
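One of the listed variations, output-bias initialization for class imbalance, can be sketched briefly (a generic Keras setup, not the authors' exact model; the class counts are derived from the dataset percentages quoted above):

```python
import numpy as np
import tensorflow as tf

# ~3% of the 4,363 images carry FDA-compliant warning labels
pos = int(0.03 * 4363)
neg = 4363 - pos
b0 = np.log(pos / neg)   # start the sigmoid output at the observed base rate

base = tf.keras.applications.ResNet50(include_top=False, pooling='avg',
                                      input_shape=(224, 224, 3))
head = tf.keras.layers.Dense(
    1, activation='sigmoid',
    bias_initializer=tf.keras.initializers.Constant(b0))
model = tf.keras.Sequential([base, head])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=[tf.keras.metrics.AUC(name='auc')])
```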
|
We describe the deployment and first tests on Sky of CONCERTO, a large
field-of-view (18.6arc-min) spectral-imaging instrument. The instrument
operates in the range 130-310GHz from the APEX 12-meters telescope located at
5100m a.s.l. on the Chajnantor plateau. Spectra with R=1-300 are obtained using
a fast (2.5Hz mechanical frequency) Fourier Transform Spectrometer (FTS),
coupled to a continuous dilution cryostat with a base temperature of 60mK. Two
2152-pixels arrays of Lumped Element Kinetic Inductance Detectors (LEKID) are
installed in the cryostat that also contains the cold optics and the front-end
electronics. CONCERTO, installed in April 2021, generates more than 20k spectra
per second during observations. We describe the final development phases, the
installation and the first results obtained on Sky.
|
Discovery of quantized electric conductance by the group of van Wees in 1988
was a major breakthrough in physics. Later, the group of Schwab has proven the
existence of quantized thermal conductance. Advancing one step further, we
show that a quantized entropy current can be interpreted, and it eases the
description of a transferred quantized energy package. This might yield a
universal transport behavior of the microscopic world. During the transfer of a
single energy quantum, $h \nu$ between two neighboring domains the minimum
entropy increment is calculated. Furthermore, the possible existence of the
minimum entropy production can be formulated.
|
For the one-dimensional mass-critical/supercritical pseudo-relativistic
nonlinear Schr\"odinger equation, a stationary solution can be constructed as an
energy minimizer under an additional kinetic energy constraint and the set of
energy minimizers is orbitally stable in \cite{BGV}. In this study, we proved
the local uniqueness and established the orbital stability of the solitary wave
by improving that of the energy minimizer set. A key aspect thereof is the
reformulation of the variational problem in the non-relativistic regime, which
we consider to be more natural because the proof extensively relies on the
subcritical nature of the limiting model. Thus, the role of the additional
constraint is clarified, a more suitable Gagliardo-Nirenberg inequality is
introduced, and the non-relativistic limit is proved. Subsequently, this limit
is employed to derive the local uniqueness and orbital stability.
|
This paper proposes an ecological adaptive cruise control (EACC) concept with
the primary goal to minimize the fuel consumption in a city bus with an
internal combustion engine (ICE). A hybrid model predictive control (HMPC) is
implemented in this work to control both continuous and discrete-time
variables. Moreover, a multi-objective optimization problem for EACC is
formulated in time-domain as a mixed-integer quadratically constrained
quadratic programming (MIQCQP) problem. The proposed HMPC-EACC performs robust
vehicle-following while tracking a leading vehicle and plans fuel-efficient
acceleration and deceleration maneuvers for the host vehicle. Additionally, it
uses the signal phase and timing (SPaT) information to compute a green wave
reference speed for the host vehicle to cross the signalized intersections at a
green phase. Moreover, the proposed controller performs pulse and glide (PnG)
to optimally control the engine ON and OFF states and save additional fuel.
Furthermore, the performance of the proposed strategy is evaluated on a
real-world driving profile and compared against a baseline controller from the
literature. Finally, the influence of different prediction horizons on the fuel
savings and computation times are studied. The results reveal significant
reduction in fuel consumption with HMPC-EACC and demonstrate that the proposed
controller is real-time capable.
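A toy cvxpy sketch of the hybrid (mixed-integer) MPC flavor described here, with invented dynamics, bounds, and reference values; the actual HMPC-EACC formulation (fuel model, vehicle-following constraints, SPaT green-wave logic, MIQCQP structure) is far richer:

```python
import cvxpy as cp

N, dt = 20, 1.0                    # horizon length, time step [s]
v = cp.Variable(N + 1)             # host vehicle speed [m/s]
u = cp.Variable(N)                 # acceleration command [m/s^2]
eng = cp.Variable(N, boolean=True) # engine ON/OFF state (pulse and glide)

v_ref = 10.0                       # SPaT-derived green-wave reference speed

cost = (cp.sum_squares(v[1:] - v_ref)   # track the green-wave reference
        + 0.5 * cp.sum_squares(u)       # comfort
        + 0.05 * cp.sum(eng))           # crude stand-in for fuel while engine is ON
constraints = [v[0] == 12.0,
               v >= 0, v <= 15,
               v[1:] == v[:-1] + dt * u,   # longitudinal dynamics
               u <= 1.5 * eng,             # traction only when engine is ON
               u >= -3.0]                  # braking always available
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()   # requires a mixed-integer-capable solver (e.g., SCIP, GUROBI)
print(prob.status, v.value)
```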
|
Learning to model how the world changes as time elapses has proven a
challenging problem for the computer vision community. We propose a
self-supervised solution to this problem using temporal cycle consistency
jointly in vision and language, training on narrated video. Our model learns
modality-agnostic functions to predict forward and backward in time, which must
undo each other when composed. This constraint leads to the discovery of
high-level transitions between moments in time, since such transitions are
easily inverted and shared across modalities. We justify the design of our
model with an ablation study on different configurations of the cycle
consistency problem. We then show qualitatively and quantitatively that our
approach yields a meaningful, high-level model of the future and past. We apply
the learned dynamics model without further training to various tasks, such as
predicting future action and temporally ordering sets of images. Project page:
https://dave.ml/mmcc
|
We present two Dialectica-like constructions for models of intensional
Martin-L\"of type theory based on G\"odel's original Dialectica interpretation
and the Diller-Nahm variant, bringing dependent types to categorical proof
theory. We set both constructions within a logical predicates style theory for
display map categories where we show that 'quasifibred' versions of dependent
products and universes suffice to construct their standard counterparts. To
support the logic required for dependent products in the first construction, we
propose a new semantic notion of finite sum for dependent types, generalizing
finitely-complete extensive categories. The second avoids extensivity
assumptions using biproducts in a Kleisli category for a fibred additive monad.
|
In this paper, we propose a novel multi-color balance adjustment for color
constancy. The proposed method, called "n-color balancing," allows us not only
to perfectly correct n target colors on the basis of corresponding ground truth
colors but also to correct colors other than the n colors. In contrast,
although white-balancing can perfectly adjust white, colors other than white
are not considered in the framework of white-balancing in general. In an
experiment, the proposed multi-color balancing is demonstrated to outperform
both conventional white and multi-color balance adjustments including
Bradford's model.
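A toy NumPy illustration of the multi-color idea for the special case n = 3 (a plain least-squares 3x3 matrix fit; the colors are invented, and the paper's n-color balancing is a more general method, not this matrix fit):

```python
import numpy as np

def fit_color_balance(src, tgt):
    """Least-squares 3x3 matrix M with M @ src[i] ~ tgt[i] for each color i.
    With n = 3 linearly independent colors the correction is exact."""
    # src, tgt: (n, 3) RGB triplets; solve src @ X = tgt, then M = X.T
    X, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return X.T

src = np.array([[0.8, 0.7, 0.6],   # measured colors under the source illuminant
                [0.4, 0.5, 0.3],
                [0.2, 0.2, 0.5]])
tgt = np.array([[0.9, 0.9, 0.9],   # corresponding ground-truth colors
                [0.4, 0.6, 0.3],
                [0.2, 0.2, 0.6]])
M = fit_color_balance(src, tgt)
print(np.allclose(src @ M.T, tgt))  # True: all three targets map exactly
```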
|
Peer-to-peer (P2P) systems have emerged in recent years as the major
technology for accessing various resources on the Internet. These systems build
a cluster which contains a very large number of peers. As a result, the
selection of peers that can answer a given query is a very difficult
problem. The efficiency of the selection algorithms can be improved by
introducing semantics into the process of query routing. We present in
this paper a novel improved version of our semantic routing algorithm
LearningPeerSelection (LPS) presented in CORIA 2009, an incremental strategy of
updating knowledge bases and an advanced experimental study. To test the
proposed algorithm, we defined a routing layer on the PeerSim simulator.
|
Transit surveys have revealed a significant population of compact
multi-planet systems, containing several sub-Neptune-mass planets on close-in,
tightly-packed orbits. These systems are thought to have formed through a final
phase of giant impacts, which would tend to leave systems close to the edge of
stability. Here, we assess this hypothesis, comparing observed eccentricities
in systems exhibiting transit-timing variations (TTVs), with the maximum
eccentricities compatible with long-term stability. We use the machine-learning
classifier SPOCK (Tamayo et al. 2020) to rapidly classify the stability of
numerous initial configurations and hence determine these stability limits.
While previous studies have argued that multi-planet systems are often
maximally packed, in the sense that they could not host any additional planets,
we find that the existing planets in these systems have measured eccentricities
below the limits allowed by stability by a factor of 2--10. We compare these
results against predictions from the giant impact theory of planet formation,
derived from both $N$-body integrations and theoretical expectations that in
the absence of dissipation, the orbits of such planets should be distributed
uniformly throughout the phase space volume allowed by stability. We find that
the observed systems have systematically lower eccentricities than this
scenario predicts, with a median eccentricity about 4 times lower than
predicted. These findings suggest that if such systems formed through giant
impacts, then some dissipation must occur to damp their eccentricities. This
may take place during formation, perhaps through interactions with the natal
gas disk or a leftover population of planetesimals, or over longer timescales
through the coupling of tidal and secular processes.
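A schematic version of the stability scan might look as follows; `is_stable` is
a hypothetical stand-in for a fast classifier such as SPOCK, and the grids,
thresholds, and trial counts are illustrative assumptions only.

    import numpy as np

    rng = np.random.default_rng(0)

    def is_stable(eccentricities):
        # Hypothetical placeholder: real usage would build an N-body
        # configuration and query a trained stability classifier.
        return eccentricities.max() < 0.1 + 0.05 * rng.random()

    def max_stable_eccentricity(n_planets=3, n_trials=200):
        # Raise the trial eccentricity until too few configurations survive.
        for e in np.linspace(0.0, 0.3, 61):
            ecc = np.full(n_planets, e)
            stable_frac = np.mean([is_stable(ecc) for _ in range(n_trials)])
            if stable_frac < 0.9:          # stability threshold (assumed)
                return e
        return 0.3

    print(max_stable_eccentricity())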
|
Let $\mathcal{O}$ be a discrete valuation ring with unique maximal ideal
$\mathfrak{p}$ and with finite residue field $\mathbb{F}_{q}$, the field with
$q$ elements where $q$ is a power of a prime $p$. For $r \ge 1$, we write
$\mathcal{O}_r$ for the reduction of $\mathcal{O}$ modulo the ideal
$\mathfrak{p}^r$. An irreducible representation of the finite group
$G_r=\mathrm{GL}_{N}(\mathcal{O}_{r})$ is called stable if its restriction to
the principal congruence kernel
$K^l=1+\mathfrak{p}^{l}\mathrm{M}_{N}(\mathcal{O}_r)$, where
$l=[\frac{r+1}{2}]$, consists of representations whose stabilisers modulo
$K^{l'}$ are centralisers of a Jordan canonical matrix in
$\mathfrak{g}_{l'}=\mathrm{M}_{N}(\mathcal{O}_{l'})$, where $l'=r-l$.
Their study is motivated by the construction of strongly semisimple
representations, introduced in the work of Hill, which are a special case of
stable representations. In this paper, we explore the construction of stable
(ordinary) irreducible representations of the finite group
$G_r=\mathrm{GL}_{N}(\mathcal{O}_{r})$ for $N \ge 2$.
|
Current voice conversion (VC) methods can successfully convert the timbre of
the audio. Since effectively modeling the prosody of the source audio is
challenging, transferring the source style to the converted speech remains
limited. This study proposes a source style transfer method based on a
recognition-synthesis framework. In previous speech generation work, prosody
has been modeled either explicitly with prosodic features or implicitly with a
latent prosody extractor. In this paper, taking advantage of both, we model the
prosody in a hybrid manner that effectively combines the explicit and implicit
methods in a proposed prosody module. Specifically, prosodic features are used
to model prosody explicitly, while a VAE and a reference encoder, which take
the Mel spectrum and the bottleneck feature as input respectively, are used to
model prosody implicitly. Furthermore, adversarial training is introduced to remove
speaker-related information from the VAE outputs, avoiding leaking source
speaker information while transferring style. Finally, we use a modified
self-attention based encoder to extract sentential context from bottleneck
features, which also implicitly aggregates the prosodic aspects of source
speech from the layered representations. Experiments show that our approach is
superior to the baseline and a competitive system in terms of style transfer;
meanwhile, the speech quality and speaker similarity are well maintained.
|
Safety-critical software systems are in many cases designed and implemented
as families of products, usually referred to as Software Product Lines (SPLs).
Products within an SPL vary from each other in terms of which features they
include. Applying existing analysis techniques to SPLs and their safety cases
is usually challenging because of the potentially exponential number of
products with respect to the number of supported features. In this paper, we
present a methodology and infrastructure for certified \emph{lifting} of
existing single-product safety analyses to product lines. To ensure certified
safety of our infrastructure, we implement it in an interactive theorem prover,
including formal definitions, lemmas, correctness criteria theorems, and
proofs. We apply this infrastructure to formalize and lift a Change Impact
Assessment (CIA) algorithm. We present a formal definition of the lifted
algorithm, outline its correctness proof (with the full machine-checked proof
available online), and discuss its implementation within a model management
framework.
|
The functional form of Coulomb interactions in the transition metal
dichalcogenides and other van der Waals solids is critical to many of their
unique properties, e.g. strongly-correlated electron states, superconductivity
and emergent ferromagnetism. This paper presents measurements of key excitonic
energy levels in MoSe$_2$/WSe$_2$ heterostructures. These measurements are obtained
from resonance Raman experiments on specific Raman peaks only observed at
excited states of the excitons. This data is used to validate a model of the
Coulomb potential in these structures which predicts the exciton energies to
within ~5 meV / 2.5%. This model is used to determine the effect of
heterostructure formation on the single-particle band gaps of the layers and
will have a wide applicability in designing the next generation of more complex
transition metal dichalcogenide structures.
|
This paper deals with the fully parabolic chemotaxis system of local sensing
in higher dimensions. Despite the striking similarity between this system and
the Keller--Segel system, we prove the absence of the finite-time blow-up
phenomenon in this system even in the supercritical case. This means that for any
regular initial data, independently of the magnitude of mass, the classical
solution exists globally in time in the higher dimensional setting. Moreover,
for the exponential decaying motility case, it is established that solutions
may blow up at infinite time for any magnitude of mass. In order to prove our
theorem, we treat an auxiliary identity as an evolution equation with a
time-dependent operator. From this new perspective, the direct consequences of
the abstract theory are rich enough to establish global existence for the
system.
|
The Internet of Medical Things (IoMT) paradigm is becoming mainstream in
multiple clinical trials and healthcare procedures. Cardiovascular diseases
monitoring, usually involving electrocardiogram (ECG) traces analysis, is one
of the most promising and high-impact applications. Nevertheless, to fully
exploit the potential of IoMT in this domain, some steps forward are needed.
First, the edge-computing paradigm must be added to the picture. A certain
level of near-sensor processing has to be enabled to improve the scalability,
portability, reliability, and responsiveness of the IoMT nodes. Second, novel,
increasingly accurate, data analysis algorithms, such as those based on
artificial intelligence and Deep Learning, must be exploited. To reach these
objectives, designers and programmers of IoMT nodes have to face challenging
optimization tasks in order to execute fairly complex computations on
low-power wearable and portable processing systems, with tight power and
battery lifetime budgets. In this work, we explore the implementation of a
cognitive data analysis algorithm, based on a convolutional neural network
trained to classify ECG waveforms, on a resource-constrained
microcontroller-based computing platform. To minimize power consumption, we add
an adaptivity layer that dynamically manages the hardware and software
configuration of the device to adapt it at runtime to the required operating
mode. Our experimental results show that adapting the node setup to the
workload at runtime can save up to 50% power consumption. Our optimized and
quantized neural network reaches an accuracy value higher than 97% for
arrhythmia disorders detection on MIT-BIH Arrhythmia dataset.
|
We present the first BVR photometry, period variation, and photometric
light-curve analysis of two poorly studied eclipsing binaries V1321 Cyg and CR
Tau. Observations were carried out from November 2017 to January 2020 at the
observatory of Uzhhorod National University. Period variations were studied
using all available previously published minima times as well as our own. We
used the newly developed ELISa code for the light-curve analysis and the
determination of the photometric parameters of both systems. We found that
V1321 Cyg is a close
detached eclipsing system with a low photometric mass ratio of $q=0.28$ which
suggests that the binary is a post mass transfer system. No significant period
changes in this system are detected. CR Tau is, on the other hand, a
semi-detached system where the secondary component almost fills its Roche lobe.
We detected a long-term period increase at a rate of $1.49 \times 10^{-7} d/y$,
which supports mass transfer from the lower-mass secondary component to the
more massive primary.
|
Environmental Sound Classification (ESC) is a challenging field of research
in non-speech audio processing. Most of current research in ESC focuses on
designing deep models with special architectures tailored for specific audio
datasets, which usually cannot exploit the intrinsic patterns in the data.
However, recent studies have surprisingly shown that transfer learning from
models trained on ImageNet is a very effective technique in ESC. Herein, we
propose SoundCLR, a supervised contrastive learning method for effective
environment sound classification with state-of-the-art performance, which works
by learning representations that disentangle the samples of each class from
those of other classes. Our deep network models are trained by combining a
contrastive loss that contributes to a better probability output by the
classification layer with a cross-entropy loss on the output of the classifier
layer to map the samples to their respective 1-hot encoded labels. Due to the
comparatively small sizes of the available environmental sound datasets, we
propose and exploit a transfer learning and strong data augmentation pipeline
and apply the augmentations on both the sound signals and their log-mel
spectrograms before inputting them to the model. Our experiments show that our
masking based augmentation technique on the log-mel spectrograms can
significantly improve the recognition performance. Our extensive benchmark
experiments show that our hybrid deep network models trained with combined
contrastive and cross-entropy loss achieved the state-of-the-art performance on
three benchmark datasets ESC-10, ESC-50, and US8K with validation accuracies of
99.75\%, 93.4\%, and 86.49\% respectively. The ensemble version of our models
also outperforms other top ensemble methods. The code is available at
https://github.com/alireza-nasiri/SoundCLR.
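For illustration, a minimal sketch of the combined objective is given below,
assuming a standard supervised contrastive loss formulation (the exact loss
used by SoundCLR may differ in details such as augmented-view handling); the
temperature and tensor shapes are placeholders.

    import torch
    import torch.nn.functional as F

    def supervised_contrastive_loss(z, labels, tau=0.1):
        z = F.normalize(z, dim=1)
        sim = (z @ z.t()) / tau                          # pairwise similarities
        self_mask = torch.eye(len(z), dtype=torch.bool)
        sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-pairs
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        log_prob = log_prob.masked_fill(self_mask, 0.0)  # avoid -inf * 0 = nan
        pos = ((labels[:, None] == labels[None, :]) & ~self_mask).float()
        per_anchor = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
        return per_anchor[pos.sum(1) > 0].mean()

    z = torch.randn(16, 64, requires_grad=True)          # embeddings
    logits = torch.randn(16, 10, requires_grad=True)     # classifier outputs
    labels = torch.randint(0, 10, (16,))
    total = supervised_contrastive_loss(z, labels) + F.cross_entropy(logits, labels)
    total.backward()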
|
We study the Morse index of minimal surfaces with free boundary in a
half-space. We improve previous estimates relating the Neumann index to the
Dirichlet index and use this to answer a question of Ambrozio, Buzano,
Carlotto, and Sharp concerning the non-existence of index two embedded minimal
surfaces with free boundary in a half-space. We also give a simplified proof of
a result of Chodosh and Maximo concerning lower bounds for the index of the
Costa deformation family.
|
Deep Neural Networks (DNN) are increasingly commonly used in software
engineering and code intelligence tasks. These are powerful tools that are
capable of learning highly generalizable patterns from large datasets through
millions of parameters. At the same time, training DNNs means walking a knife's
edge, because their large capacity also renders them prone to memorizing data
points. While traditionally thought of as an aspect of over-training, recent
work suggests that the memorization risk manifests especially strongly when the
training datasets are noisy and memorization is the only recourse.
Unfortunately, most code intelligence tasks rely on rather noise-prone and
repetitive data sources, such as GitHub, which, due to their sheer size, cannot
be manually inspected and evaluated. We evaluate the memorization and
generalization tendencies in neural code intelligence models through a case
study across several benchmarks and model families by leveraging established
approaches from other fields that use DNNs, such as introducing targeted noise
into the training dataset. In addition to reinforcing prior general findings
about the extent of memorization in DNNs, our results shed light on the impact
of noisy datasets on training.
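As a toy illustration of the targeted-noise probe (not our experimental setup),
one can corrupt a fraction of training labels of a simple scikit-learn
classifier and compare its fit on the corrupted points with its clean held-out
accuracy; the noise rate and dataset here are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    noisy = rng.random(len(y_tr)) < 0.2           # corrupt 20% of labels
    y_noisy = np.where(noisy, 1 - y_tr, y_tr)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_noisy)
    print("fit to noisy labels:", model.score(X_tr[noisy], y_noisy[noisy]))
    print("clean test accuracy:", model.score(X_te, y_te))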
|
We study the question of which Boolean algebras have the property that for
every generating set there is an ultrafilter selecting a maximal number of its
elements. We call this the ultrafilter selection property. For cardinality
aleph-one the property is equivalent to the fact that the space of ultrafilters
is not Corson compact. We also consider the pointwise topology on a Boolean
algebra, proving a result on the Lindel\"of number in the context of the
ultrafilter selection property. Finally, we discuss poset Boolean algebras,
interval algebras, and semilattices in the context of ultrafilter selection
properties.
|
Non-intrusive load monitoring (NILM) helps disaggregate the household's main
electricity consumption to energy usages of individual appliances, thus greatly
cutting down the cost of fine-grained household load monitoring. To address the
privacy concerns arising in NILM applications, federated learning (FL) could be
leveraged for NILM model training and sharing. When applying the FL paradigm in
real-world NILM applications, however, we are faced with the challenges of edge
resource restriction, edge model personalization and edge training data
scarcity.
In this paper we present FedNILM, a practical FL paradigm for NILM
applications at the edge client. Specifically, FedNILM is designed to deliver
privacy-preserving and personalized NILM services to large-scale edge clients,
by leveraging i) secure data aggregation through federated learning, ii)
efficient cloud model compression via filter pruning and multi-task learning,
and iii) personalized edge model building with unsupervised transfer learning.
Our experiments on real-world energy data show that FedNILM is able to achieve
personalized energy disaggregation with state-of-the-art accuracy while
ensuring privacy preservation at the edge client.
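As a sketch of the federated ingredient only (model compression and transfer
learning omitted), the following toy federated-averaging loop illustrates how
parameters, rather than raw load data, are exchanged; the local least-squares
objective is an illustrative assumption.

    import numpy as np

    def local_update(weights, data, labels, lr=0.1, epochs=5):
        w = weights.copy()
        for _ in range(epochs):                  # plain gradient steps
            grad = data.T @ (data @ w - labels) / len(labels)
            w -= lr * grad
        return w

    rng = np.random.default_rng(0)
    global_w = np.zeros(3)
    clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

    for round_ in range(10):
        updates = [local_update(global_w, X, y) for X, y in clients]
        global_w = np.mean(updates, axis=0)      # server-side aggregation
    print(global_w)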
|
Let $q$ be a power of the prime number $p$, let $K={\mathbb F}_q(t)$, and let
$r\ge 2$ be an integer. For points ${\mathbf a}, {\mathbf b}\in K$ which are
$\mathbb{F}_q$-linearly independent, we show that there exist positive
constants $N_0$ and $c_0$ such that for each integer $\ell\ge N_0$ and for each
generator $\tau$ of ${\mathbb F}_{q^\ell}/{\mathbb F}_q$, we have that for all
except $N_0$ values $\lambda\in{\overline{\mathbb{F}_q}}$, the corresponding
specializations ${\mathbf a}(\tau)$ and ${\mathbf b}(\tau)$ cannot
have orders of degree less than $c_0\log\log\ell$ as torsion points for the
Drinfeld module $\Phi^{(\tau,\lambda)}:\mathbb{F}_q[T] {\longrightarrow}
{\mathrm{End}}_{\overline{\mathbb{F}_q}}({\mathbb G}_a)$ (where ${\mathbb G}_a$
is the additive group scheme), given by $\Phi^{(\tau,\lambda)}_T(x)=\tau
x+\lambda x^q + x^{q^r}$.
|
We consider a system of static spin qubits embedded in a one-dimensional spin
coherent channel and develop a scheme to readout the state of one and two
qubits separately. We use unpolarized flying qubits for this purpose that
scatter off the static qubits due to the Heisenberg exchange interaction.
Analysing the transmission coefficient as a function of density matrix elements
along with additional unitary gates we reconstruct the state of static qubits.
|
We propose a novel and effective purification based adversarial defense
method against pre-processor blind white- and black-box attacks. Our method is
computationally efficient and trained only with self-supervised learning on
general images, without requiring any adversarial training or retraining of the
classification model. We first show, through an empirical analysis, that the
adversarial noise, defined as the residual between an original image and its
adversarial example, has an almost zero-mean, symmetric distribution. Based on this
observation, we propose a very simple iterative Gaussian Smoothing (GS) which
can effectively smooth out adversarial noise and achieve substantially high
robust accuracy. To further improve it, we propose Neural Contextual Iterative
Smoothing (NCIS), which trains a blind-spot network (BSN) in a self-supervised
manner to reconstruct the discriminative features of the original image that is
also smoothed out by GS. From our extensive experiments on the large-scale
ImageNet using four classification models, we show that our method achieves
both competitive standard accuracy and state-of-the-art robust accuracy against
most strong purifier-blind white- and black-box attacks. Also, we propose a new
benchmark for evaluating a purification method based on commercial image
classification APIs, such as AWS, Azure, Clarifai and Google. We generate
adversarial examples by ensemble transfer-based black-box attack, which can
induce complete misclassification of APIs, and demonstrate that our method can
be used to increase adversarial robustness of APIs.
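A minimal sketch of the iterative GS step, assuming a per-channel Gaussian blur
with a small sigma (the exact sigma and iteration count may differ from those
used in our experiments):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def iterative_gaussian_smoothing(image, sigma=0.5, n_iters=10):
        out = image.astype(float)
        for _ in range(n_iters):
            # smooth only the spatial axes, keep color channels separate
            out = gaussian_filter(out, sigma=(sigma, sigma, 0))
        return out

    adv_image = np.random.rand(224, 224, 3)   # stand-in adversarial example
    purified = iterative_gaussian_smoothing(adv_image)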
|
We present a unified approach for constructing Slepian functions - also known
as prolate spheroidal wave functions - on the sphere for arbitrary tensor ranks
including scalar, vectorial, and rank 2 tensorial Slepian functions, using
spin-weighted spherical harmonics. For the special case of spherical cap
regions, we derive commuting operators, allowing for a numerically stable and
computationally efficient construction of the spin-weighted
spherical-harmonic-based Slepian functions. Linear relationships between the
spin-weighted and the classical scalar, vectorial, tensorial, and higher-rank
spherical harmonics allow the construction of classical
spherical-harmonic-based Slepian functions from their spin-weighted
counterparts, effectively rendering the construction of spherical-cap Slepian
functions for any tensorial rank a computationally fast and numerically stable
task.
|
A metastable cosmic-string network is a generic consequence of many grand
unified theories (GUTs) when combined with cosmic inflation. Metastable cosmic
strings are not topologically stable, but decay on cosmic time scales due to
pair production of GUT monopoles. This leads to a network consisting of
metastable long strings on superhorizon scales as well as of string loops and
segments on subhorizon scales. We compute for the first time the complete
stochastic gravitational-wave background (SGWB) arising from all these network
constituents, including several technical improvements to both the derivation
of the loop and segment contributions. We find that the gravitational waves
emitted by string loops provide the main contribution to the gravitational-wave
spectrum in the relevant parameter space. The resulting spectrum is consistent
with the tentative signal observed by the NANOGrav and Parkes pulsar timing
collaborations for a string tension of $G\mu \sim 10^{-11}\dots10^{-7}$ and has ample
discovery space for ground- and space-based detectors. For GUT-scale string
tensions, $G\mu \sim 10^{-8}\dots10^{-7}$, metastable strings predict an SGWB in the
LIGO-Virgo-KAGRA band that could be discovered in the near future.
|
We propose a novel transformer-based styled handwritten text image generation
approach, HWT, that strives to learn both style-content entanglement as well as
global and local writing style patterns. The proposed HWT captures the long and
short range relationships within the style examples through a self-attention
mechanism, thereby encoding both global and local style patterns. Further, the
proposed transformer-based HWT comprises an encoder-decoder attention that
enables style-content entanglement by gathering the style representation of
each query character. To the best of our knowledge, we are the first to
introduce a transformer-based generative network for styled handwritten text
generation. Our proposed HWT generates realistic styled handwritten text images
and significantly outperforms the state-of-the-art demonstrated through
extensive qualitative, quantitative and human-based evaluations. The proposed
HWT can handle arbitrary length of text and any desired writing style in a
few-shot setting. Further, our HWT generalizes well to the challenging scenario
where both words and writing style are unseen during training, generating
realistic styled handwritten text images.
|
This paper extends Bayesian mortality projection models for multiple
populations considering the stochastic structure and the effect of spatial
autocorrelation among the observations. We explain high levels of
overdispersion according to adjacent locations based on the conditional
autoregressive model. In an empirical study, we compare different hierarchical
projection models for the analysis of geographical diversity in mortality
between the Japanese counties in multiple years, according to age. By a Markov
chain Monte Carlo (MCMC) computation, the results demonstrate the flexibility
and predictive performance of our proposed model.
|
The knowledge on attacks contained in Cyber Threat Intelligence (CTI) reports
is very important to effectively identify and quickly respond to cyber threats.
However, this knowledge is often embedded in large amounts of text, and
therefore difficult to use effectively. To address this challenge, we propose a
novel approach and tool called EXTRACTOR that allows precise automatic
extraction of concise attack behaviors from CTI reports. EXTRACTOR makes no
strong assumptions about the text and is capable of extracting attack behaviors
as provenance graphs from unstructured text. We evaluate EXTRACTOR using
real-world incident reports from various sources as well as reports of DARPA
adversarial engagements that involve several attack campaigns on various OS
platforms of Windows, Linux, and FreeBSD. Our evaluation results show that
EXTRACTOR can extract concise provenance graphs from CTI reports and show that
these graphs can successfully be used by cyber-analytics tools in
threat-hunting.
|
The current tension between the direct and the early Universe measurements of
the Hubble Constant, $H_0$, requires detailed scrutiny of all the data and
methods used in the studies on both sides of the debate. The Cepheids in the
type Ia supernova (SNIa) host galaxy NGC 5584 played a key role in the local
measurement of $H_0$. The SH0ES project used the observations of this galaxy to
derive a relation between Cepheids' periods and ratios of their amplitudes in
different optical bands of the Hubble Space Telescope (HST), and used these
relations to analyse the light curves of the Cepheids in around half of the
current sample of local SNIa host galaxies. In this work, we present an
independent detailed analysis of the Cepheids in NGC 5584. We employ different
tools for our photometric analysis and a completely different method for our
light curve analysis, and we do not find a systematic difference between our
period and mean magnitude measurements compared to those reported by SH0ES. By
adopting a period-luminosity relation calibrated by the Cepheids in the Milky
Way, we measure a distance modulus $\mu=31.810\pm0.047$ (mag) which is in
agreement with $\mu=31.786\pm0.046$ (mag) measured by SH0ES. In addition, the
relations we find between periods and amplitude ratios of the Cepheids in NGC
5584 are significantly tighter than those of SH0ES and their potential impact
on the direct $H_0$ measurement will be investigated in future studies.
|
A commonly cited inefficiency of neural network training using
back-propagation is the update locking problem: each layer must wait for the
signal to propagate through the full network before updating. Several
alternatives that can alleviate this issue have been proposed. In this context,
we consider a simple alternative based on minimal feedback, which we call
Decoupled Greedy Learning (DGL). It is based on a classic greedy relaxation of
the joint training objective, recently shown to be effective in the context of
Convolutional Neural Networks (CNNs) on large-scale image classification. We
consider an optimization of this objective that permits us to decouple the
layer training, allowing for layers or modules in networks to be trained with a
potentially linear parallelization. With the use of a replay buffer we show
that this approach can be extended to asynchronous settings, where modules can
operate and continue to update with possibly large communication delays. To
address bandwidth and memory issues we propose an approach based on online
vector quantization. This allows us to drastically reduce the communication
bandwidth between modules and required memory for replay buffers. We show
theoretically and empirically that this approach converges and compare it to
the sequential solvers. We demonstrate the effectiveness of DGL against
alternative approaches on the CIFAR-10 dataset and on the large-scale ImageNet
dataset.
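To illustrate the decoupling, here is a minimal PyTorch sketch of greedy
layer-local training with auxiliary heads; replay buffers, vector quantization,
and asynchrony are omitted, and all sizes are placeholders.

    import torch
    import torch.nn as nn

    modules = nn.ModuleList([nn.Linear(32, 32), nn.Linear(32, 32)])
    heads = nn.ModuleList([nn.Linear(32, 10), nn.Linear(32, 10)])
    optims = [torch.optim.SGD(list(m.parameters()) + list(h.parameters()), lr=0.1)
              for m, h in zip(modules, heads)]
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(8, 32)
    y = torch.randint(0, 10, (8,))
    for module, head, opt in zip(modules, heads, optims):
        x = module(x.detach())       # detach: no gradient flows between modules
        loss = loss_fn(head(torch.relu(x)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()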
|
Precise control over the electronic and optical properties of defect centers
in solid-state materials is necessary for their applications as quantum
sensors, transducers, memories, and emitters. In this study, we demonstrate,
from first principles, how to tune these properties via the formation of defect
polaritons. Specifically, we investigate three defect types -- CHB, CB-CB, and
CB-VN -- in monolayer hexagonal boron nitride (hBN). The lowest-lying
electronic excitation of these systems is coupled to an optical cavity where we
explore the strong light-matter coupling regime. For all defect systems, we
show that the polaritonic splitting that shifts the absorption energy of the
lower polariton is much higher than can be expected from a Jaynes-Cummings
interaction. In addition, we find that the absorption intensity of the lower
polariton increases by several orders of magnitude, suggesting a possible route
toward overcoming phonon-limited single photon emission from defect centers.
Finally, we find that initially localized electronic transition densities can
become delocalized across the entire material under strong light-matter
coupling. These findings are a result of an effective continuum of electronic
transitions near the lowest-lying electronic transition for both pristine hBN
and hBN with defect centers that dramatically enhances the strength of the
light-matter interaction. We expect our findings to spur experimental
investigations of strong light-matter coupling between defect centers and
cavity photons for applications in quantum technologies.
|
We propose a method for the blind separation of sounds of musical instruments
in audio signals. We describe the individual tones via a parametric model,
training a dictionary to capture the relative amplitudes of the harmonics. The
model parameters are predicted via a U-Net, which is a type of deep neural
network. The network is trained without ground truth information, based on the
difference between the model prediction and the individual time frames of the
short-time Fourier transform. Since some of the model parameters do not yield a
useful backpropagation gradient, we model them stochastically and employ the
policy gradient instead. To provide phase information and account for
inaccuracies in the dictionary-based representation, we also let the network
output a direct prediction, which we then use to resynthesize the audio signals
for the individual instruments. Due to the flexibility of the neural network,
inharmonicity can be incorporated seamlessly and no preprocessing of the input
spectra is required. Our algorithm yields high-quality separation results with
particularly low interference on a variety of different audio samples, both
acoustic and synthetic, provided that the sample contains enough data for the
training and that the spectral characteristics of the musical instruments are
sufficiently stable to be approximated by the dictionary.
|
We propose a novel approach to dimensionality reduction combining techniques
of metric geometry and distributed persistent homology, in the form of a
gradient-descent based method called DIPOLE. DIPOLE is a
dimensionality-reduction post-processing step that corrects an initial
embedding by minimizing a loss functional with both a local, metric term and a
global, topological term. By fixing an initial embedding method (we use
Isomap), DIPOLE can also be viewed as a full dimensionality-reduction pipeline.
This framework is based on the strong theoretical and computational properties
of distributed persistent homology and comes with the guarantee of almost sure
convergence. We observe that DIPOLE outperforms popular methods like UMAP,
t-SNE, and Isomap on a number of popular datasets, both visually and in terms
of precise quantitative metrics.
|
Continual relation extraction is an important task that focuses on extracting
new facts incrementally from unstructured text. Given the sequential arrival
order of the relations, this task is prone to two serious challenges, namely
catastrophic forgetting and order-sensitivity. We propose a novel
curriculum-meta learning method to tackle the above two challenges in continual
relation extraction. We combine meta learning and curriculum learning to
quickly adapt model parameters to a new task and to reduce interference of
previously seen tasks on the current task. We design a novel relation
representation learning method through the distribution of domain and range
types of relations. Such representations are utilized to quantify the
difficulty of tasks for the construction of curricula. Moreover, we also
present novel difficulty-based metrics to quantitatively measure the extent of
order-sensitivity of a given model, suggesting new ways to evaluate model
robustness. Our comprehensive experiments on three benchmark datasets show that
our proposed method outperforms the state-of-the-art techniques. The code is
available at the anonymous GitHub repository:
https://github.com/wutong8023/AAAI_CML.
|
A methodology to generate sparse Galerkin models of chaotic/unsteady fluid
flows containing a minimal number of active triadic interactions is proposed.
The key idea is to find an appropriate set of basis functions for the
projection representing elementary flow structures that interact minimally with
one another and thus result in a triadic interaction coefficient tensor with
sparse structure. Interpretable and computationally efficient Galerkin models
can be thus obtained, since a reduced number of triadic interactions needs to
be computed to evaluate the right hand side of the model. To find the basis
functions, a subspace rotation technique is used, whereby a set of Proper
Orthogonal Decomposition (POD) modes is rotated into a POD subspace of larger
dimension using coordinates associated to low-energy dissipative scales to
alter energy paths and the structure of the triadic interaction coefficient
tensor. This rotation is obtained as the solution of a non-convex optimisation
problem that maximises the energy captured by the new basis, promotes sparsity
and ensures long-term temporal stability of the sparse Galerkin system. We
demonstrate the approach on two-dimensional lid-driven cavity flow at $Re = 2
\times 10^4$ where the motion is chaotic. We show that the procedure generates
Galerkin models with a reduced set of active triadic interactions, distributed
in modal space according to established knowledge of scale interactions in
two-dimensional flows. This property, however, is only observed if long-term
temporal stability is explicitly included in the formulation, indicating that a
dynamical constraint is necessary to obtain a physically consistent
sparsification.
|
This memo describes the NTR-TSU submission for the SIGTYP 2021 Shared Task on
predicting language IDs from speech.
Spoken Language Identification (LID) is an important step in a multilingual
Automated Speech Recognition (ASR) system pipeline. For many low-resource and
endangered languages, only single-speaker recordings may be available,
demanding a need for domain and speaker-invariant language ID systems. In this
memo, we show that a convolutional neural network with a Self-Attentive Pooling
layer shows promising results for the language identification task.
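For concreteness, a minimal PyTorch sketch of a Self-Attentive Pooling layer is
given below, assuming the common formulation in which a small network scores
each frame and the scores are softmax-normalized over time; dimensions are
placeholders.

    import torch
    import torch.nn as nn

    class SelfAttentivePooling(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.attention = nn.Sequential(
                nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

        def forward(self, frames):                # frames: (batch, time, dim)
            weights = torch.softmax(self.attention(frames), dim=1)
            return (weights * frames).sum(dim=1)  # (batch, dim) utterance vector

    pool = SelfAttentivePooling(dim=64)
    utterance = pool(torch.randn(4, 100, 64))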
|
Industrial cyber-physical systems (ICPSs) manage critical infrastructures by
controlling the processes based on the "physics" data gathered by edge sensor
networks. Recent innovations in ubiquitous computing and communication
technologies have prompted the rapid integration of highly interconnected
systems into ICPSs. Hence, the "security by obscurity" principle provided by
air-gapping is no longer followed. As the interconnectivity in ICPSs increases,
so does the attack surface. Industrial vulnerability assessment reports have
shown that a variety of new vulnerabilities have occurred due to this
transition while the most common ones are related to weak boundary protection.
Although there are existing surveys in this context, very little is mentioned
regarding these reports. This paper bridges this gap by defining and reviewing
ICPSs from a cybersecurity perspective. In particular, multi-dimensional
adaptive attack taxonomy is presented and utilized for evaluating real-life
ICPS cyber incidents. We also identify the general shortcomings and highlight
the points that cause a gap in existing literature while defining future
research directions.
|
Policy-based reinforcement learning methods suffer from the policy collapse
problem. We find that value-based reinforcement learning methods with an
$\epsilon$-greedy mechanism enjoy three characteristics, namely
Closed-form Diversity, Objective-invariant Exploration, and Adaptive Trade-off,
which help value-based methods avoid the policy collapse problem. However,
there does not exist a parallel mechanism for policy-based methods that
achieves all three characteristics. In this paper, we propose an entropy
regularization free mechanism that is designed for policy-based methods, which
achieves Closed-form Diversity, Objective-invariant Exploration and Adaptive
Trade-off. Our experiments show that our mechanism is highly sample-efficient
for policy-based methods and boosts a policy-based baseline to a new
state of the art on the Arcade Learning Environment.
|
Despite the recent successes of reinforcement learning in games and robotics,
it is yet to become broadly practical. Sample efficiency and unreliable
performance in rare but challenging scenarios are two of the major obstacles.
Drawing inspiration from the effectiveness of deliberate practice for achieving
expert-level human performance, we propose a new adversarial sampling approach
guided by a failure predictor named "CoachNet". CoachNet is trained online
along with the agent to predict the probability of failure. This probability is
then used in a stochastic sampling process to guide the agent to more
challenging episodes. This way, instead of wasting time on scenarios that the
agent has already mastered, training is focused on the agent's "weak spots". We
present the design of CoachNet, explain its underlying principles, and
empirically demonstrate its effectiveness in improving sample efficiency and
test-time robustness in common continuous control tasks.
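A schematic of the failure-guided sampling step might look as follows;
`predict_failure` is a hypothetical stand-in for CoachNet, and the scenario set
and probabilities are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    scenarios = np.arange(100)                    # candidate episode setups

    def predict_failure(scenario_ids):
        # Hypothetical stand-in for the online-trained failure predictor.
        return 1.0 / (1.0 + np.exp(-(scenario_ids - 50) / 10.0))

    p_fail = predict_failure(scenarios)
    probs = p_fail / p_fail.sum()                 # stochastic sampling weights
    batch = rng.choice(scenarios, size=16, p=probs)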
|
We propose the study of the inclusive hadroproduction of a heavy-flavored jet
in association with a light jet, as a probe channel of strong interactions at
high energies. We build up a hybrid factorization that encodes genuine
high-energy effects, provided by a partial next-to-leading BFKL resummation,
inside the standard collinear structure of the cross section. We present a
detailed analysis of different distributions, shaped on kinematic ranges
typical of experimental analyses at the Large Hadron Collider, and differential
in rapidity, azimuthal angle and transverse momentum. The fair stability that
these distributions exhibit under higher-order corrections motivates our
interest toward future studies. Here, the hybrid factorization could help to
deepen our understanding of heavy-flavor physics in wider kinematic ranges,
like the ones accessible at the Electron-Ion Collider.
|
Random samples of quantum states are an important resource for various tasks
in quantum information science, and samples in accordance with a
problem-specific distribution can be indispensable ingredients. Some algorithms
generate random samples by a lottery that follows certain rules and yield
samples from the set of distributions that the lottery can access. Other
algorithms, which use random walks in the state space, can be tailored to any
distribution, at the price of autocorrelations in the sample and with
restrictions to low-dimensional systems in practical implementations. In this
paper, we present a two-step algorithm for sampling from the quantum state
space that overcomes some of these limitations.
We first produce a CPU-cheap, large proposal sample of uncorrelated entries
by drawing from the family of complex Wishart distributions, and then reject or
accept the entries in the proposal sample such that the accepted sample is
strictly in accordance with the target distribution. We establish the explicit
form of the induced Wishart distribution for quantum states. This enables us to
generate a proposal sample that mimics the target distribution and, therefore,
the efficiency of the algorithm, measured by the acceptance rate, can be many
orders of magnitude larger than that for a uniform sample as the proposal.
We demonstrate that this sampling algorithm is very efficient for one-qubit
and two-qubit states, and reasonably efficient for three-qubit states, while it
suffers from the "curse of dimensionality" when sampling from structured
distributions of four-qubit states.
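The two-step structure can be sketched as follows; the target weight here is a
toy stand-in, since the paper's explicit induced-Wishart density is what makes
the real acceptance rate high.

    import numpy as np

    rng = np.random.default_rng(0)

    def wishart_state(d, n):
        a = rng.normal(size=(d, n)) + 1j * rng.normal(size=(d, n))
        rho = a @ a.conj().T
        return rho / np.trace(rho).real           # random density matrix

    def target_weight(rho):                       # toy target: favor purity
        return np.trace(rho @ rho).real           # in [1/d, 1]

    accepted = []
    for _ in range(10000):
        rho = wishart_state(d=2, n=2)
        if rng.random() < target_weight(rho):     # accept-reject thinning
            accepted.append(rho)
    print(len(accepted), "states accepted")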
|
Recently, in topological insulators (TIs) the phenomenon of planar Hall
effect (PHE), wherein a current driven in the presence of an in-plane magnetic
field generates a transverse voltage, has been experimentally witnessed. There have
been a couple of theoretical explanations of this phenomenon. We investigate
this phenomenon based on scattering theory on a normal metal-TI-normal metal
hybrid structure and calculate the conductances in longitudinal and transverse
directions to the applied bias. The transverse conductance depends on the
spatial location between the two NM-TI junctions where it is calculated. It is
zero in the drain electrode when the chemical potentials of the top and the
bottom TI surfaces ($\mu_t$ and $\mu_b$ respectively) are equal. The
longitudinal conductance is $\pi$-periodic in $\phi$, the angle between the bias
direction and the direction of the in-plane magnetic field. The transverse
conductance is $\pi$-periodic in $\phi$ when $\mu_t=\mu_b$ whereas it is
$2\pi$-periodic in $\phi$ when $\mu_t\neq\mu_b$. As a function of the magnetic
field, the magnitude of transverse conductance increases initially and peaks.
At higher magnetic fields, it decays for angles $\phi$ close to $0$ or $\pi$,
whereas it oscillates for angles $\phi$ close to $\pi/2$. The conductances
oscillate with the length of the TI region. A finite width of the system makes
the transport separate into finitely many channels. The features of the
conductances are similar to those in the limit of an infinitely wide system except
when the width is so small that only one channel participates in the transport.
When only one channel participates in transport, the transverse conductance in
the region $0<x<L$ is zero for $\mu_t=\mu_b$ and the transverse conductance in
the region $x>L$ is zero even for the case $\mu_t\neq\mu_b$. We explain the
features observed in the obtained results.
|
In this paper we evaluate the generalized Beukers integral
$I_{m}(a_{1},...,a_{n})$ using methods of partial fraction decomposition,
thus obtaining an explicit expression for the generalized Beukers integral.
Further, we estimate the rational denominator of $I$. In the second section
of this paper, we provide some estimates of the upper and lower bounds of the
value $J_{3}$, which involves the generalized Beukers integral and is related
to $\zeta(5)$.
|
Discoveries of ordered quantum states of matter are of great fundamental
interest, and often lead to unique applications. The most well known example --
superconductivity -- is caused by the formation and condensation of pairs of
electrons. A key property of superconductors is diamagnetism: magnetic fields
are screened by dissipationless currents. Fundamentally, what distinguishes
superconducting states from normal states is a spontaneously broken symmetry
corresponding to long-range coherence of fermion pairs. Here we report a set of
experimental observations in hole doped Ba$_{1-x}$K$_x$Fe$_2$As$_2$ which are
not consistent with conventional superconducting behavior. Our specific-heat
measurements indicate the formation of fermionic bound states when the
temperature is lowered from the normal state. However, for $x \sim 0.8$,
instead of the standard for superconductors, zero resistance and diamagnetic
screening, for a range of temperatures, we observe the opposite effect: the
generation of self-induced magnetic fields measured by spontaneous Nernst
effect and muon spin rotation experiments. The finite resistance and the lack
of any detectable diamagnetic screening in this state exclude the spontaneously
broken symmetry associated with superconducting two-fermion correlations.
Instead, combined evidence from transport and thermodynamic measurements
indicates that the formation of fermionic bound states leads to spontaneous
breaking of time-reversal symmetry above the superconducting transition
temperature. These results demonstrate the existence of a
broken-time-reversal-symmetry bosonic metal state. In the framework of a
multiband theory, such a state is characterized by quartic correlations: the
long-range order exists only for {\it pairs} of fermion pairs.
|
Deep Neural Networks (NNs) have been widely utilized in contact-rich
manipulation tasks to model the complicated contact dynamics. However, NN-based
models are often difficult to decipher, which can lead to seemingly inexplicable
behaviors and unidentifiable failure cases. In this work, we address the
interpretability of NN-based models by introducing the kinodynamic images. We
propose a methodology that creates images from the kinematic and dynamic data
of a contact-rich manipulation task. Our formulation visually reflects the
task's state by encoding its kinodynamic variations and temporal evolution. By
using images as the state representation, we enable the application of
interpretability modules that were previously limited to vision-based tasks. We
use this representation to train Convolution-based Networks and we extract
interpretations of the model's decisions with Grad-CAM, a technique that
produces visual explanations. Our method is versatile and can be applied to any
classification problem using synchronous features in manipulation to visually
interpret which parts of the input drive the model's decisions and distinguish
its failure modes. We evaluate this approach on two examples of real-world
contact-rich manipulation: pushing and cutting, with known and unknown objects.
Finally, we demonstrate that our method enables both detailed visual
inspections of sequences in a task, as well as high-level evaluations of a
model's behavior and tendencies. Data and code for this work are available at
https://github.com/imitsioni/interpretable_manipulation.
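A minimal sketch of assembling such an image from synchronized signals (the
exact encoding in the released code may differ) could be:

    import numpy as np

    def kinodynamic_image(signals, window=64):
        # signals: dict name -> 1-D time series (positions, forces, ...)
        rows = [np.asarray(v[-window:], dtype=float) for v in signals.values()]
        img = np.stack(rows)                            # (n_signals, window)
        lo = img.min(axis=1, keepdims=True)             # row-wise normalization
        span = img.max(axis=1, keepdims=True) - lo
        return (img - lo) / np.where(span > 0, span, 1.0)

    t = np.linspace(0, 1, 200)
    img = kinodynamic_image({"position": np.sin(6 * t),
                             "velocity": np.cos(6 * t),
                             "force": 0.5 * np.sin(12 * t)})
    print(img.shape)                                    # (3, 64)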
|
The goal of Question Answering over Knowledge Graphs (KGQA) is to find
answers for natural language questions over a knowledge graph. Recent KGQA
approaches adopt a neural machine translation (NMT) approach, where the natural
language question is translated into a structured query language. However, NMT
suffers from the out-of-vocabulary problem, where terms in a question may not
have been seen during training, impeding their translation. This issue is
particularly problematic for the millions of entities that large knowledge
graphs describe. We rather propose a KGQA approach that delegates the
processing of entities to entity linking (EL) systems. NMT is then used to
create a query template with placeholders that are filled by entities
identified in an EL phase. Slot filling is used to decide which entity fills
which placeholder. Experiments for QA over Wikidata show that our approach
outperforms pure NMT: while there remains a strong dependence on having seen
similar query templates during training, errors relating to entities are
greatly reduced.
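Schematically, the slot-filling step pairs template placeholders with linked
entities; the components below are hypothetical stubs built around real
Wikidata identifiers (wd:Q76 is Barack Obama, wdt:P26 is "spouse").

    template = "SELECT ?x WHERE { <E0> wdt:P26 ?x . }"   # from the NMT model
    linked_entities = {"Barack Obama": "wd:Q76"}          # from the EL system

    def fill_slots(template, entities):
        query = template
        for slot, qid in zip(["<E0>", "<E1>"], entities.values()):
            query = query.replace(slot, qid)
        return query

    print(fill_slots(template, linked_entities))
    # SELECT ?x WHERE { wd:Q76 wdt:P26 ?x . }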
|
Real world applications such as economics and policy making often involve
solving multi-agent games with two unique features: (1) The agents are
inherently asymmetric and partitioned into leaders and followers; (2) The
agents have different reward functions, thus the game is general-sum. The
majority of existing results in this field focuses on either symmetric solution
concepts (e.g. Nash equilibrium) or zero-sum games. It remains open how to
learn the Stackelberg equilibrium -- an asymmetric analog of the Nash
equilibrium -- in general-sum games efficiently from noisy samples.
This paper initiates the theoretical study of sample-efficient learning of
the Stackelberg equilibrium, in the bandit feedback setting where we only
observe noisy samples of the reward. We consider three representative
two-player general-sum games: bandit games, bandit-reinforcement learning
(bandit-RL) games, and linear bandit games. In all these games, we identify a
fundamental gap between the exact value of the Stackelberg equilibrium and its
estimated version using finitely many noisy samples, which cannot be closed
information-theoretically regardless of the algorithm. We then establish sharp
positive results on sample-efficient learning of Stackelberg equilibrium with
value optimal up to the gap identified above, with matching lower bounds in the
dependency on the gap, error tolerance, and the size of the action spaces.
Overall, our results unveil unique challenges in learning Stackelberg
equilibria under noisy bandit feedback, which we hope could shed light on
future research on this topic.
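For intuition, with exact (noiseless) rewards the Stackelberg equilibrium of a
finite game is directly computable, as in this sketch; the difficulty studied
here concerns the setting where rewards are only observed through noisy bandit
samples.

    import numpy as np

    rng = np.random.default_rng(0)
    leader_reward = rng.random((3, 4))     # rows: leader actions, cols: follower
    follower_reward = rng.random((3, 4))

    follower_br = follower_reward.argmax(axis=1)          # best response per row
    leader_values = leader_reward[np.arange(3), follower_br]
    leader_action = leader_values.argmax()                # leader commits first
    print("leader plays", leader_action,
          "follower replies", follower_br[leader_action])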
|
With increasing data and model complexities, the time required to train
neural networks has become prohibitively large. To address the exponential rise
in training time, users are turning to data parallel neural networks (DPNN) to
utilize large-scale distributed resources on computer clusters. Current DPNN
approaches implement the network parameter updates by synchronizing and
averaging gradients across all processes with blocking communication
operations. This synchronization is the central algorithmic bottleneck. To
combat this, we introduce the Distributed Asynchronous and Selective
Optimization (DASO) method which leverages multi-GPU compute node architectures
to accelerate network training. DASO uses a hierarchical and asynchronous
communication scheme comprised of node-local and global networks while
adjusting the global synchronization rate during the learning process. We show
that DASO yields a reduction in training time of up to 34% on classical and
state-of-the-art networks, as compared to other existing data parallel training
methods.
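A toy simulation of the hierarchical schedule (ignoring real communication and
gradients) might look like this; the node counts and the global synchronization
interval are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, gpus_per_node = 4, 2
    params = rng.normal(size=(n_nodes, gpus_per_node, 10))

    global_every = 8
    for step in range(1, 33):
        params += 0.01 * rng.normal(size=params.shape)     # stand-in updates
        params[:] = params.mean(axis=1, keepdims=True)     # cheap node-local sync
        if step % global_every == 0:                       # rare global sync
            params[:] = params.mean(axis=(0, 1), keepdims=True)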
|
We introduce GEM, a living benchmark for natural language Generation (NLG),
its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly
evolving ecosystem of automated metrics, datasets, and human evaluation
standards. Due to this moving target, new models often still evaluate on
divergent anglo-centric corpora with well-established, but flawed, metrics.
This disconnect makes it challenging to identify the limitations of current
models and opportunities for progress. Addressing this limitation, GEM provides
an environment in which models can easily be applied to a wide set of tasks and
in which evaluation strategies can be tested. Regular updates to the benchmark
will help NLG research become more multilingual and evolve the challenge
alongside models. This paper serves as the description of the data for which we
are organizing a shared task at our ACL 2021 Workshop and to which we invite
the entire NLG community to participate.
|
We present an improved scheme for absorption imaging of alkali atoms at
moderate magnetic fields, where the excited state is well in the Paschen-Back
regime but the ground state hyperfine manifold is not. It utilizes four atomic
levels to obtain an approximately closed optical cycle. With the resulting
absorption of the corresponding two laser frequencies we extract the atomic
column density of a $^{39}$K Bose-Einstein condensate. The scheme can be
readily applied to all other alkali-like species.
|
Aims: We present the first measurements of the solar-wind angular-momentum
(AM) flux recorded by the Solar Orbiter spacecraft. Our aim is the validation
of these measurements to support future studies of the Sun's AM loss. Methods:
We combine 60-minute averages of the proton bulk moments and the magnetic field
measured by the Solar Wind Analyser (SWA) and the magnetometer (MAG) onboard
Solar Orbiter. We calculate the AM flux per solid-angle element using data from
the first orbit of the mission's cruise phase during 2020. We separate the
contributions from protons and from magnetic stresses to the total AM flux.
Results: The AM flux varies significantly over time. The particle contribution
typically dominates over the magnetic-field contribution during our measurement
interval. The total AM flux shows the largest variation and is typically
anti-correlated with the radial solar-wind speed. We identify a compression
region, potentially associated with a co-rotating interaction region or a
coronal mass ejection, that leads to a significant localised increase in the AM
flux, yet without a significant increase in the AM per unit mass. We repeat our
analysis using the density estimate from the Radio and Plasma Waves (RPW)
instrument. Using this independent method, we find a decrease in the peaks of
positive AM flux but otherwise consistent results. Conclusions: Our results
largely agree with previous measurements of the solar-wind AM flux in terms of
amplitude, variability, and dependence on radial solar-wind bulk speed. Our
analysis highlights the potential for future, more detailed, studies of the
solar wind's AM and its other large-scale properties with data from Solar
Orbiter. We emphasise the need to study the radial evolution and latitudinal
dependence of the AM flux in combination with data from Parker Solar Probe and
assets at heliocentric distances of 1 au and beyond.
|
In this paper, we present a semi-supervised training technique using
pseudo-labeling for end-to-end neural diarization (EEND). The EEND system has
shown promising performance compared with traditional clustering-based methods,
especially in the case of overlapping speech. However, to get a well-tuned
model, EEND requires labeled data for all the joint speech activities of every
speaker at each time frame in a recording. In this paper, we explore a
pseudo-labeling approach that employs unlabeled data. First, we propose an
iterative pseudo-label method for EEND, which trains the model using unlabeled
data of a target condition. Then, we also propose a committee-based training
method to improve the performance of EEND. To evaluate our proposed method, we
conduct experiments on model adaptation using labeled and unlabeled data.
Experimental results on the CALLHOME dataset show that our proposed
pseudo-labeling method achieved a 37.4% relative diarization error rate
reduction compared to a seed model. Moreover, we analyzed the results of semi-supervised
adaptation with pseudo-labeling. We also show the effectiveness of our approach
on the third DIHARD dataset.
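The iterative loop can be illustrated on a toy 1-D detector, where refitting on
pseudo-labels moves a seed threshold toward the target-domain optimum; the
actual method applies the same loop to EEND speech-activity labels.

    import numpy as np

    rng = np.random.default_rng(0)
    # unlabeled target-domain data: two shifted clusters
    data = np.concatenate([rng.normal(-1.5, 1, 500), rng.normal(2.5, 1, 500)])

    threshold = 0.0                       # seed model from the source domain
    for _ in range(5):
        pseudo = data > threshold         # label with the current model
        # refit: midpoint between the two pseudo-class means
        threshold = (data[pseudo].mean() + data[~pseudo].mean()) / 2
    print(round(threshold, 3))            # drifts toward the true boundary ~0.5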
|
We investigate Nagaoka ferromagnetism in the two-dimensional Hubbard model
with one hole using the spin-adapted ($SU(2)$ conserving) full configuration
interaction quantum Monte Carlo method. This methodology gives us access to the
ground state energies of all possible spin states $S$ of finite Hubbard
lattices, here obtained for lattices up to 24 sites, for various interaction
strengths ($U$). The critical interaction strength, $U_c$, at which the Nagaoka
transition occurs is determined for each lattice and is found to be
proportional to the lattice size for the larger lattices. Below $U_c$ the
overall ground states are found to favour the minimal total spin ($S=\frac 1
2$), and no intermediate spin state is found to be the overall ground state on
lattices larger than 16 sites. However, at $U_c$, the energies of all the spin
states are found to be nearly degenerate, implying that large fluctuations in
total spin can be expected in the vicinity of the Nagaoka transition.
|
Lepton flavor violating processes are strongly suppressed in the Standard Model
due to the very small neutrino masses, but can be sizable in some extended
models. The current experimental bounds on the decay modes
$\ell^{'\pm} \to a \ell^{\pm}$ are much weaker than those on other flavor
violating processes because of the huge irreducible backgrounds
$\ell' \to \ell \bar{\nu}_{\ell} \nu_{\ell'}$. In this paper, we give the full
helicity density matrices of both the signal and the backgrounds, and then
study polarization effects. In particular, we treat the two missing neutrinos
in the background inclusively, and we find that both longitudinal and
transverse polarization effects survive even when the relative kinematical
degrees of freedom of the two neutrinos are integrated out. Furthermore, we
show that the signal and backgrounds have distinctive polarization effects
which can be measured using the energy fractions of the charged decay products.
This is particularly useful because kinematical reconstruction is not required.
In the case that the decaying lepton is not polarized, we show that
correlations between the polarizations of lepton pairs generated at colliders
are still useful for searching for the signals. More interestingly, the
polarization correlations depend on the product of the scalar and pseudo-scalar
ALP couplings, and hence are sensitive to their relative sign. We demonstrate
how the polarization correlation effects can be used to investigate the flavor
violating decays $\tau^{\pm} \to a \ell^{\pm}$ at the Belle II experiment.
|
We show that the resource theory of contextuality does not admit catalysts,
i.e., there are no correlations that can enable an otherwise impossible
resource conversion and still be recovered afterward. As a corollary, we
observe that the same holds for nonlocality. As entanglement allows for
catalysts, this adds a further example to the list of "anomalies of
entanglement," showing that nonlocality and entanglement behave differently as
resources. We also show that catalysis remains impossible even if, instead of
classical randomness, we allow some more powerful behaviors to be used freely
in the free transformations of the resource theory.
|
Off-policy sampling and experience replay are key for improving sample
efficiency and scaling model-free temporal difference learning methods. When
combined with function approximation, such as neural networks, this combination
is known as the deadly triad and is potentially unstable. Recently, it has been
shown that stability and good performance at scale can be achieved by combining
emphatic weightings and multi-step updates. This approach, however, is
generally limited to sampling complete trajectories in order, to compute the
required emphatic weighting. In this paper we investigate how to combine
emphatic weightings with non-sequential, off-line data sampled from a replay
buffer. We develop a multi-step emphatic weighting that can be combined with
replay, and a time-reversed $n$-step TD learning algorithm to learn the
required emphatic weighting. We show that these state weightings reduce
variance compared with prior approaches, while providing convergence
guarantees. We tested the approach at scale on Atari 2600 video games, and
observed that the new X-ETD($n$) agent improved over baseline agents,
highlighting both the scalability and broad applicability of our approach.
|