Specific topological excitations of energetically stable "core-and-mantle"
configurations of trapped two-component immiscible Bose-Einstein condensates
are studied numerically within the coupled Gross-Pitaevskii equations.
Non-stationary, long-lived coherent structures, which consist of several quantum
vortex filaments penetrating the "mantle" from outside to inside and vice versa
and which demonstrate quite nontrivial dynamics, are observed in simulations for
the first time. The ends of the filaments can remain attached to the interface between
the "mantle" and the "core" if the latter is large enough while the surface
tension is not small. The shapes of such "bubbles" are strongly affected by the
vortices and sometimes are far from being spherical.
|
We extend previous work concerning rest-frame partial-wave mixing in
Hamiltonian effective field theory to both elongated and moving systems, where
two particles are in a periodic elongated cube or have nonzero total momentum,
respectively. We also consider the combination of the two systems, where the
directions of the elongation and of the total momentum are aligned. This
extension should also be applicable in any Hamiltonian formalism. As a
demonstration, we analyze lattice QCD results for the spectrum of an isospin-2
$\pi\pi$ scattering system and determine the $s$, $d$, and $g$ partial-wave
scattering information. The inclusion of lattice simulation results from moving
frames significantly reduces the uncertainty in the scattering information.
|
We study Smith-Purcell radiation from a conducting grating generated by a
vortex electron with an orbital angular momentum $\ell \hbar$, described as a
generalized Laguerre-Gaussian packet, which has an intrinsic magnetic dipole
moment and an electric quadrupole moment. By using a multipole expansion of the
electromagnetic field of such an electron, we employ a generalized
surface-current method, applicable for a wide range of parameters. The radiated
energy contains contributions from the charge, from the magnetic moment, and
from the electric quadrupole moment, as well as from their interference. The
quadrupole contribution grows as the packet spreads while propagating, and it
is enhanced for large $\ell$. In contrast to the linear growth of the radiation
intensity from the charge with the number of strips $N$, the quadrupole
contribution reveals an $N^3$ dependence, which puts a limit on the maximal
grating length for which the radiation losses stay small. We study
spectral-angular distributions of the Smith-Purcell radiation both analytically
and numerically and demonstrate that the electron's vorticity can give rise to
detectable effects for non-relativistic and moderately relativistic electrons.
On the practical side, preparing the incoming electron's state in the form of a
non-Gaussian packet with a quadrupole moment -- such as the vortex electron, an
Airy beam, a Schr\"odinger cat state, and so on -- one can achieve quantum
enhancement of the radiation power compared to the classical linear regime.
Such an enhancement would be a hallmark of a previously unexplored quantum
regime of radiation, in which non-Gaussianity of the packet influences the
radiation properties much more strongly than the quantum recoil does.
|
The QED initial state corrections are calculated to the forward-backward
asymmetry for $e^+e^- \rightarrow \gamma^*/{Z^{0}}^*$ in the leading
logarithmic approximation to $O(\alpha^6 L^6)$, extending the known corrections
up to $O(\alpha^2 L^2)$ in analytic form. We use the method of massive on-shell
operator matrix elements and present the radiators both in Mellin-$N$ and
momentum fraction $z$-space. Numerical results are presented for various
energies around the $Z$-peak by also including energy cuts. These corrections
are of relevance for the precision measurements at the FCC-ee.
|
Based on a recent work on traveling waves in spatially nonlocal
reaction-diffusion equations, we investigate the existence of traveling fronts
in reaction-diffusion equations with a memory term. We will explain how such
memory terms can arise from reduction of reaction-diffusion systems if the
diffusion constants of the other species can be neglected. In particular, we
show that two-scale homogenization of spatially periodic systems can induce
spatially homogeneous systems with temporal memory.
The existence of fronts is proved using comparison principles as well as a
reformulation trick involving an auxiliary speed that allows us to transform
memory terms into spatially nonlocal terms. Deriving explicit bounds and
monotonicity properties of the wave speed of the arising traveling front, we are
able to establish the existence of true traveling fronts for the original
problem with memory. Our results are supplemented by numerical simulations.
|
In recent years, constant applied potential molecular dynamics has made it
possible to study the structure and dynamics of the electrochemical double layer
in a large variety of nanoscale capacitors. Nevertheless, it remained impossible
to simulate polarized electrodes at fixed total charge. Here we show that
combining a constant potential electrode with a finite electric displacement
fills this gap by allowing the simulation of open-circuit conditions. The method can
be extended by applying an electric displacement ramp to perform computational
amperometry experiments at different current intensities. As in experiments,
the full capacitance of the system is obtained at low intensity, but this
quantity decreases when the applied ramp becomes too fast with respect to the
microscopic dynamics of the liquid.
|
The main difficulty that arises in the analysis of most machine learning
algorithms is to handle, analytically and numerically, a large number of
interacting random variables. In this Ph.D. manuscript, we revisit an approach
based on the tools of statistical physics of disordered systems. Developed
through a rich literature, they have been precisely designed to infer the
macroscopic behavior of a large number of particles from their microscopic
interactions. At the heart of this work, we strongly capitalize on the deep
connection between the replica method and message passing algorithms in order
to shed light on the phase diagrams of various theoretical models, with an
emphasis on the potential differences between statistical and algorithmic
thresholds. We essentially focus on synthetic tasks and data generated in the
teacher-student paradigm. In particular, we apply these mean-field methods to
the Bayes-optimal analysis of committee machines, to the worst-case analysis of
Rademacher generalization bounds for perceptrons, and to empirical risk
minimization in the context of generalized linear models. Finally, we develop a
framework to analyze estimation models with structured prior information,
produced for instance by deep neural network-based generative models with
random weights.
|
Influence competition finds its significance in many applications, such as
marketing, politics and public events like COVID-19. Existing work tends to
believe that the stronger influence will always win and dominate nearly the
whole network, i.e., "winner takes all". However, this finding somewhat
contradicts our common-sense observation that many competing products actually
coexist, e.g., Android vs. iOS. This contradiction naturally raises the
question: will the winner take all?
To answer this question, we make a comprehensive study into influence
competition by identifying two factors frequently overlooked by prior art: (1)
the incomplete observation of real diffusion networks; (2) the existence of
information overload and its impact on user behaviors. To this end, we attempt
to recover possible diffusion links based on user similarities, which are
extracted by embedding users into a latent space. Following this, we further
derive the condition under which users will be overloaded, and formulate the
competing processes where users' behaviors differ before and after information
overload. By establishing the explicit expressions of competing dynamics, we
disclose that information overload acts as the critical "boundary line", before
which the "winner takes all" phenomenon will definitively occur, whereas after
information overload the share of influences gradually stabilizes and is
jointly affected by their initial spreading conditions, influence powers and
the advent of overload. Numerous experiments are conducted to validate our
theoretical results, and favorable agreement is found. Our work sheds light on
the intrinsic driving forces behind real-world dynamics, thus providing useful
insights into effective information engineering.
|
With the increasing adoption of private blockchain platforms, consortia
operating in various sectors such as trade, finance, logistics, etc., are
becoming common. Despite having the benefits of a completely decentralized
architecture that supports transparency and distributed control, existing
private blockchains confine data, assets, and processes within their closed
boundaries, which restricts secure and verifiable service provisioning to
end-consumers. Thus, platforms such as e-commerce with multiple sellers or
cloud federation with a collection of cloud service providers cannot be
decentralized with the existing blockchain platforms. This paper proposes a
decentralized gateway architecture interfacing private blockchain with
end-users by leveraging the unique combination of public and private blockchain
platforms through interoperation. Through the use case of decentralized cloud
federations, we have demonstrated the viability of the solution. Our testbed
implementation with Ethereum and Hyperledger Fabric, with three service
providers, shows that such a consortium can operate within an acceptable response
latency while scaling up to 64 parallel requests per second for cloud
infrastructure provisioning. Further analysis over the Mininet emulation
platform indicates that the platform can scale well with minimal impact over
the latency as the number of participating service providers increases.
|
While Machine Comprehension (MC) has attracted extensive research interests
in recent years, existing approaches mainly belong to the category of Machine
Reading Comprehension task which mines textual inputs (paragraphs and
questions) to predict the answers (choices or text spans). However, there are a
lot of MC tasks that accept audio input in addition to the textual input, e.g.,
the English listening comprehension test. In this paper, we target the problem of
Audio-Oriented Multimodal Machine Comprehension, whose goal is to answer
questions based on the given audio and textual information. To solve this
problem, we propose a Dynamic Inter- and Intra-modality Attention (DIIA) model
to effectively fuse the two modalities (audio and textual). DIIA can work as an
independent component and thus be easily integrated into existing MC models.
Moreover, we further develop a Multimodal Knowledge Distillation (MKD) module
to enable our multimodal MC model to accurately predict the answers based only
on either the text or the audio. As a result, the proposed approach can handle
various tasks including: Audio-Oriented Multimodal Machine Comprehension,
Machine Reading Comprehension and Machine Listening Comprehension, in a single
model, making fair comparisons possible between our model and the existing
unimodal MC models. Experimental results and analysis prove the effectiveness
of the proposed approaches. First, the proposed DIIA boosts the baseline models
by up to 21.08% in terms of accuracy. Second, under the unimodal scenarios, the
MKD module allows our multimodal MC model to significantly outperform, by up to
18.87%, the unimodal models, which are trained and tested with only audio or
textual data.
|
Geometry problem solving has attracted much attention in the NLP community
recently. The task is challenging as it requires abstract problem understanding
and symbolic reasoning with axiomatic knowledge. However, current datasets are
either small in scale or not publicly available. Thus, we construct a new
large-scale benchmark, Geometry3K, consisting of 3,002 geometry problems with
dense annotation in formal language. We further propose a novel geometry
solving approach with formal language and symbolic reasoning, called
Interpretable Geometry Problem Solver (Inter-GPS). Inter-GPS first parses the
problem text and diagram into formal language automatically via rule-based text
parsing and neural object detection, respectively. Unlike the implicit learning in
existing methods, Inter-GPS incorporates theorem knowledge as conditional rules
and performs symbolic reasoning step by step. Also, a theorem predictor is
designed to infer the theorem application sequence fed to the symbolic solver,
yielding a more efficient and reasonable search path. Extensive experiments on
the Geometry3K and GEOS datasets demonstrate that Inter-GPS achieves
significant improvements over existing methods. The project with code and data
is available at https://lupantech.github.io/inter-gps.
|
Heterogeneous multi-task learning (HMTL) is an important topic in multi-task
learning (MTL). Most existing HMTL methods address either the scenario where
all tasks reside in the same input (feature) space but not necessarily the same
output (label) space, or the scenario where their input (feature) spaces are
heterogeneous while the output (label) space is consistent. However, to the
best of our knowledge, there is limited study on the twofold heterogeneous MTL
(THMTL) scenario, where the input and output spaces are both inconsistent or
heterogeneous. In order to handle this complicated scenario, in this paper, we
design a simple and effective multi-task adaptive learning (MTAL) network to
learn multiple tasks in such THMTL setting. Specifically, we explore and
utilize the inherent relationship between tasks for knowledge sharing from
similar convolution kernels in individual layers of the MTAL network. Then, in
order to realize the sharing, we perform a weighted aggregation of any pair of
convolutional kernels whose similarity is greater than some threshold $\rho$;
consequently, our model effectively performs cross-task learning while
suppressing the intra-redundancy of the entire network. Finally, we conduct end-to-end
training. Our experimental results demonstrate the effectiveness of our method
in comparison with the state-of-the-art counterparts.
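A minimal sketch of how such a similarity-gated weighted aggregation of
convolution kernels could look; the cosine-similarity measure, the row
normalization, and the function name are our own assumptions, not the
authors' implementation:

```python
import torch
import torch.nn.functional as F

def aggregate_similar_kernels(kernels, rho=0.8):
    """Mix knowledge across tasks by averaging convolution kernels whose
    pairwise cosine similarity exceeds the threshold rho (hypothetical
    helper; kernels: (n_tasks, c_in, k, k))."""
    n = kernels.shape[0]
    flat = kernels.reshape(n, -1)
    # pairwise cosine similarities between flattened kernels, shape (n, n)
    sim = F.cosine_similarity(flat.unsqueeze(1), flat.unsqueeze(0), dim=-1)
    weights = torch.where(sim > rho, sim, torch.zeros_like(sim))
    weights.fill_diagonal_(1.0)                 # each kernel keeps itself
    weights = weights / weights.sum(dim=1, keepdim=True)
    return (weights @ flat).reshape_as(kernels)
```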
|
eBPF is a new technology which allows dynamically loading pieces of code into
the Linux kernel. It can greatly speed up networking since it enables the
kernel to process certain packets without the involvement of a userspace
program. So far eBPF has been used for simple packet filtering applications
such as firewalls or Denial of Service protection. We show that it is possible
to develop a flow-based network intrusion detection system based on machine
learning entirely in eBPF. Our solution uses a decision tree and decides for
each packet whether it is malicious or not, considering the entire previous
context of the network flow. We achieve a performance increase of over 20\%
compared to the same solution implemented as a userspace program.
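Since the in-kernel part is plain branching, one plausible offline workflow is
to fit the tree in Python and transcribe it into eBPF-friendly C (bounded
depth, no loops). The flow-feature names and the toy data below are
hypothetical placeholders, not the authors' pipeline:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["pkt_count", "mean_pkt_len", "mean_iat", "tcp_flags"]  # hypothetical

def export_as_c(tree, feature_names, node=0, depth=1):
    """Print the fitted tree as nested C if/else suitable for an eBPF body."""
    t = tree.tree_
    pad = "    " * depth
    if t.children_left[node] == -1:                  # leaf node
        print(f"{pad}return {int(np.argmax(t.value[node]))}; /* 1 = malicious */")
        return
    name = feature_names[t.feature[node]]
    print(f"{pad}if (f->{name} <= {t.threshold[node]:.0f}) {{")
    export_as_c(tree, feature_names, t.children_left[node], depth + 1)
    print(f"{pad}}} else {{")
    export_as_c(tree, feature_names, t.children_right[node], depth + 1)
    print(f"{pad}}}")

X = np.random.rand(1000, 4) * 100                    # toy flow features
y = np.random.randint(0, 2, 1000)                    # toy labels
clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
export_as_c(clf, FEATURES)
```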
|
We explore the interplay of electron-electron correlations and surface
effects in the prototypical correlated insulating material, NiO. In particular,
we compute the electronic structure, magnetic properties, and surface energies
of the $(001)$ and $(110)$ surfaces of paramagnetic NiO using a fully charge
self-consistent DFT+DMFT method. Our results reveal a complex interplay between
electronic correlations and surface effects in NiO, with the electronic
structure of the $(001)$ and $(110)$ NiO surfaces being significantly different
from that in bulk NiO. We obtain a sizeable reduction of the band gap at the
surface of NiO, which is most significant for the $(110)$ NiO surface. This
suggests a higher catalytic activity of the $(110)$ NiO surface than that of
the $(001)$ NiO one. Our results reveal a charge-transfer character of the
$(001)$ and $(110)$ surfaces of NiO. Most notably, for the $(110)$ NiO surface
we observe a remarkable electronic state characterized by an alternating
charge-transfer and Mott-Hubbard character of the band gap in the surface and
subsurface NiO layers, respectively. This novel form of electronic order
stabilized by strong correlations is not driven by lattice reconstructions but
is of purely electronic origin. We note the importance of orbital
differentiation of the Ni $e_g$ states in characterizing the Mott-Hubbard
insulating state of the $(001)$ and $(110)$ NiO surfaces. The unoccupied Ni
$e_g$ surface states are seen to split from the lower edge of the conduction
band to form strongly localized states in the fundamental gap of bulk NiO. Our
results for the surface energies of the $(001)$ and $(110)$ NiO surfaces show
that the $(001)$ facet of NiO has significantly lower energy. This implies that
the relative stability of different surfaces, at least from a purely energetic
point of view, does not depend on the presence or absence of magnetic order in
NiO.
|
The foundation for research on summarization in the Czech language was
laid by the work of Straka et al. (2018). They published SumeCzech, a large
Czech news-based summarization dataset, and proposed several baseline
approaches. However, it is clear from the achieved results that there is
substantial room for improvement. In our work, we focus on the impact of named
entities on the summarization of Czech news articles. First, we annotate
SumeCzech with named entities. We propose a new metric ROUGE_NE that measures
the overlap of named entities between the true and generated summaries, and we
show that it is still challenging for summarization systems to reach a high
score in it. We propose an extractive summarization approach, Named Entity
Density, that selects as the summary of the article the sentence with the
highest ratio between the number of named entities and the length of the
sentence. The experiments show that the proposed approach reaches results close
to the solid baseline for the news-article domain of selecting the first
sentence. Moreover, we demonstrate that the selected sentence reflects the style
of news reports, concisely identifying who was involved and what, when, and
where it happened. We propose that
such a summary is beneficial in combination with the first sentence of an
article in voice applications presenting news articles. We propose two
abstractive summarization approaches based on the Seq2Seq architecture. The first
approach uses the tokens of the article. The second approach has access to the
named entity annotations. The experiments show that both approaches exceed
state-of-the-art results previously reported by Straka et al. (2018), with the
latter achieving slightly better results on SumeCzech's out-of-domain testing
set.
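The Named Entity Density selection rule is simple enough to state as code. A
minimal sketch, assuming sentences are pre-split and named-entity annotations
are available per sentence (function and variable names are ours; length is
measured in tokens here):

```python
def named_entity_density_summary(sentences, entities_per_sentence):
    """Return the sentence with the highest (#named entities)/(length) ratio,
    i.e. the Named Entity Density selection rule."""
    def density(i):
        return len(entities_per_sentence[i]) / max(len(sentences[i].split()), 1)
    return sentences[max(range(len(sentences)), key=density)]

article = ["The match took place in Prague on Monday.", "It was rainy."]
ents = [["Prague", "Monday"], []]
print(named_entity_density_summary(article, ents))
```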
|
Lattice reconstruction in twisted transition-metal dichalcogenide (TMD)
bilayers gives rise to piezo- and ferroelectric moir\'e potentials for
electrons and holes, as well as a modulation of the hybridisation across the
bilayer. Here, we develop hybrid $\mathbf{k}\cdot \mathbf{p}$ tight-binding
models to describe electrons and holes in the relevant valleys of twisted TMD
homobilayers with parallel (P) and anti-parallel (AP) orientations of the
monolayer unit cells. We apply these models to describe moir\'e superlattice
effects in twisted WSe${}_2$ bilayers, in conjunction with microscopic \emph{ab
initio} calculations, and considering the influence of encapsulation, pressure
and an electric displacement field. Our analysis takes into account mesoscale
lattice relaxation, interlayer hybridisation, piezopotentials, and a weak
ferroelectric charge transfer between the layers, and describes a multitude of
possibilities offered by this system, depending on the choices of P or AP
orientation, twist angle magnitude, and electron/hole valley.
|
We consider the following time-independent nonlinear $L^2$-critical
Schr\"{o}dinger equation \[ -\Delta
u(x)+V(x)u(x)-a|x|^{-b}|u|^{1+\frac{4-2b}{N}}=\mu u(x)\,\ \hbox{in}\,\
\mathbb{R}^N, \] where $\mu\in\mathbb{R}$, $a>0$, $N\geq 1$, $0<b<\min\{2,N\}$,
and $V(x)$ is an external potential. It is shown that ground states of the
above equation can be equivalently described by minimizers of the corresponding
minimization problem. In this paper, we prove that there is a threshold $a^*>0$
such that a minimizer exists for $0<a<a^*$ and no minimizer exists for any
$a>a^*$. However, if $a=a^*$, we prove that whether a minimizer exists depends
sensitively on the value of $V(0)$. Moreover, when there is no minimizer at the
threshold $a^*$, we give a detailed description of the concentration behavior
of minimizers as $a\nearrow a^*$, based on which we finally prove that the
minimizer is unique as $a\nearrow a^*$.
|
We study the compactness of composition operators on the Bergman spaces of
certain bounded convex domains in $\mathbb{C}^n$ with non-trivial analytic
discs contained in the boundary. As a consequence, we characterize the
compactness of composition operators with symbols continuous up to the closure
on the Bergman space of the polydisc.
|
We examine the possibility of "soft cosmology", namely small deviations from
the usual framework due to the effective appearance of soft-matter properties
in the Universe sectors. One effect of such a case would be that dark energy
exhibits a different equation-of-state parameter at large scales (which
determine the universe expansion) and at intermediate scales (which determine
the sub-horizon clustering and the large-scale structure formation). Concerning
soft dark matter, we show that it can effectively arise due to the dark-energy
clustering, even if dark energy is not soft. We propose a novel parametrization
introducing the "softness parameters" of the dark sectors. As we show, although
the background evolution remains unaffected, the extreme sensitivity of the
global properties means that even a slightly non-trivial softness parameter can
improve the clustering behavior and alleviate, e.g., the $f\sigma_8$ tension.
Lastly, an extension of the cosmological perturbation
theory and a detailed statistical mechanical analysis, in order to incorporate
complexity and estimate the scale-dependent behavior from first principles, is
necessary and would provide a robust argumentation in favour of soft cosmology.
|
Recently, several universal methods have been proposed for online convex
optimization, and attain minimax rates for multiple types of convex functions
simultaneously. However, they need to design and optimize one surrogate loss
for each type of function, which makes it difficult to exploit the structure
of the problem and utilize the vast amount of existing algorithms. In this
paper, we propose a simple strategy for universal online convex optimization,
which avoids these limitations. The key idea is to construct a set of experts
to process the original online functions, and deploy a meta-algorithm over the
\emph{linearized} losses to aggregate predictions from experts. Specifically,
we choose Adapt-ML-Prod to track the best expert, because it has a second-order
bound and can be used to leverage strong convexity and exponential concavity.
In this way, we can plug in off-the-shelf online solvers as black-box experts
to deliver problem-dependent regret bounds. Furthermore, our strategy inherits
the theoretical guarantee of any expert designed for strongly convex functions
and exponentially concave functions, up to a double logarithmic factor. For
general convex functions, it maintains the minimax optimality and also achieves
a small-loss bound.
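The expert/meta construction can be sketched compactly. Below, plain Hedge
stands in for Adapt-ML-Prod purely for brevity (so the second-order bound is
lost), and the toy OGD expert is our own illustration:

```python
import numpy as np

class OGDExpert:
    """Toy online-gradient-descent expert over the Euclidean unit ball."""
    def __init__(self, d, lr):
        self.x, self.lr = np.zeros(d), lr
    def predict(self):
        return self.x
    def update(self, g):
        self.x -= self.lr * g
        n = np.linalg.norm(self.x)
        if n > 1.0:
            self.x /= n                    # project back onto the unit ball

def meta_aggregate(experts, grad_f, T, eta=0.1):
    """Meta-algorithm over the linearized losses <g_t, x_i> of the experts'
    iterates (Hedge update; the paper uses Adapt-ML-Prod instead)."""
    log_w = np.zeros(len(experts))
    for _ in range(T):
        X = np.stack([e.predict() for e in experts])   # (k, d) expert iterates
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        x = w @ X                                      # aggregated decision
        g = grad_f(x)                                  # gradient of f_t at x
        log_w -= eta * (X @ g)                         # linearized-loss update
        for e in experts:
            e.update(g)
    return x

b = np.array([0.3, -0.2])                  # minimize f(x) = ||x - b||^2 online
x = meta_aggregate([OGDExpert(2, 0.1), OGDExpert(2, 0.01)],
                   lambda x: 2 * (x - b), T=500)
```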
|
An important limitation of standard multiple testing procedures is that the
null distribution should be known. Here, we consider a
null distribution-free approach for multiple testing in the following
semi-supervised setting: the user does not know the null distribution, but has
at hand a sample drawn from this null distribution. In practical situations,
this null training sample (NTS) can come from previous experiments, from a part
of the data under test, from specific simulations, or from a sampling process.
In this work, we present theoretical results that handle such a framework, with
a focus on the false discovery rate (FDR) control and the Benjamini-Hochberg
(BH) procedure. First, we provide upper and lower bounds for the FDR of the BH
procedure based on empirical $p$-values. These bounds match when $\alpha
(n+1)/m$ is an integer, where $n$ is the NTS sample size and $m$ is the number
of tests. Second, we give a power analysis for that procedure, suggesting that
the price to pay for ignoring the null distribution is low when $n$ is
sufficiently large compared to $m$; namely $n\gtrsim m/(\max(1,k))$, where $k$
denotes the number of "detectable" alternatives. Third, to complete the
picture, we also present a negative result that demonstrates an intrinsic
phase transition for the general semi-supervised multiple testing problem and
shows that the empirical BH method is optimal in the sense that its performance
boundary follows this phase transition. Our theoretical properties are
supported by numerical experiments, which also show that the delineated
boundary is of correct order without further tuning any constant. Finally, we
demonstrate that our work provides a theoretical ground for standard practice
in astronomical data analysis, and in particular for the procedure proposed in
\cite{Origin2020} for galaxy detection.
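A minimal sketch of the empirical-$p$-value BH procedure studied here, assuming
large statistics are evidence against the null; the toy Gaussian data are ours:

```python
import numpy as np

def empirical_bh(test_stats, null_sample, alpha=0.1):
    """Benjamini-Hochberg on empirical p-values computed from a null
    training sample (NTS); returns a boolean rejection mask."""
    n = len(null_sample)
    # empirical p-value: rank of each test statistic within the NTS
    p = (1 + np.sum(null_sample[None, :] >= test_stats[:, None], axis=1)) / (n + 1)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

rng = np.random.default_rng(0)
nts = rng.normal(size=10_000)                        # null training sample
stats = np.concatenate([rng.normal(size=900),        # true nulls
                        rng.normal(3.0, size=100)])  # alternatives
print(empirical_bh(stats, nts).sum(), "rejections")
```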
|
Classically, data interpolation with a parametrized model class is possible
as long as the number of parameters is larger than the number of equations to
be satisfied. A puzzling phenomenon in deep learning is that models are trained
with many more parameters than what this classical theory would suggest. We
propose a theoretical explanation for this phenomenon. We prove that for a
broad class of data distributions and model classes, overparametrization is
necessary if one wants to interpolate the data smoothly. Namely, we show that
smooth interpolation requires $d$ times more parameters than mere
interpolation, where $d$ is the ambient data dimension. We prove this universal
law of robustness for any smoothly parametrized function class with polynomial
size weights, and any covariate distribution verifying isoperimetry. In the
case of two-layers neural networks and Gaussian covariates, this law was
conjectured in prior work by Bubeck, Li and Nagaraj. We also give an
interpretation of our result as an improved generalization bound for model
classes consisting of smooth functions.
|
Motivated by the recent interest in cyber-physical and autonomous robotic
systems, we study the problem of dynamically coupled multi-agent systems under
a set of signal temporal logic tasks. In particular, the satisfaction of each
of these signal temporal logic tasks depends on the behavior of a distinct set
of agents. Instead of abstracting the agent dynamics and the temporal logic
tasks into a discrete domain and solving the problem therein or using
optimization-based methods, we derive collaborative feedback control laws.
These control laws are based on a decentralized control barrier function
condition that results in discontinuous control laws, as opposed to a
centralized condition resembling the single-agent case. The benefits of our
approach are inherent robustness properties typically present in feedback
control as well as satisfaction guarantees for continuous-time multi-agent
systems. More specifically, time-varying control barrier functions are used
that account for the semantics of the signal temporal logic tasks at hand. For
a certain fragment of signal temporal logic tasks, we further propose a
systematic way to construct such control barrier functions. Finally, we show
the efficacy and robustness of our framework in an experiment including a group
of three omnidirectional robots.
|
We present the first searches for gravitational waves from r-modes of the
Crab pulsar, coherently and separately integrating data from three stretches of
the first two observing runs of Advanced LIGO using the F-statistic. The second
run was divided into two stretches by a glitch of the pulsar roughly halfway through. The
frequencies and derivatives searched were based on radio measurements of the
pulsar's spin-down parameters as described in Caride et al., Phys. Rev. D 100,
064013 (2019). We did not find any evidence of gravitational waves. Our best
90% confidence upper limits on gravitational wave intrinsic strain were
$1.5\times 10^{-25}$ for the first run, $1.3\times 10^{-25}$ for the first
stretch of the second run, and $1.1\times 10^{-25}$ for the second stretch of
the second run. These are the first upper limits on
gravitational waves from r-modes of a known pulsar to beat its spin-down limit,
and they do so by more than an order of magnitude in amplitude or two orders of
magnitude in luminosity.
|
Probabilistic models such as Gaussian processes (GPs) are powerful tools to
learn unknown dynamical systems from data for subsequent use in control design.
While learning-based control has the potential to yield superior performance in
demanding applications, robustness to uncertainty remains an important
challenge. Since Bayesian methods quantify uncertainty of the learning results,
it is natural to incorporate these uncertainties into a robust design. In
contrast to most state-of-the-art approaches that consider worst-case
estimates, we leverage the learning method's posterior distribution in the
controller synthesis. The result is a more informed and, thus, more efficient
trade-off between performance and robustness. We present a novel controller
synthesis for linearized GP dynamics that yields robust controllers with
respect to a probabilistic stability margin. The formulation is based on a
recently proposed algorithm for linear quadratic control synthesis, which we
extend by giving probabilistic robustness guarantees in the form of credibility
bounds for the system's stability. Comparisons to existing methods based on
worst-case and certainty-equivalence designs reveal superior performance and
robustness properties of the proposed method.
|
Competitive board games have provided a rich and diverse testbed for
artificial intelligence. This paper contends that collaborative board games
pose a different challenge to artificial intelligence as it must balance
short-term risk mitigation with long-term winning strategies. Collaborative
board games task all players to coordinate their different powers or pool their
resources to overcome an escalating challenge posed by the board and a
stochastic ruleset. This paper focuses on the exemplary collaborative board
game Pandemic and presents a rolling horizon evolutionary algorithm designed
specifically for this game. The complex way in which the Pandemic game state
changes, stochastically yet predictably, required a number of specially
designed forward models, macro-action representations for decision-making, and
repair functions for the genetic operations of the evolutionary algorithm.
Variants of the algorithm which explore optimistic versus pessimistic game
state evaluations, different mutation rates and event horizons are compared
against a baseline hierarchical policy agent. Results show that an evolutionary
approach via short-horizon rollouts can better account for the future dangers
that the board may introduce, and guard against them. Results highlight the
types of challenges that collaborative board games pose to artificial
intelligence, especially for handling multi-player collaboration interactions.
|
Hazy images are often subject to color distortion, blurring, and other
visible quality degradation. Some existing CNN-based methods perform well at
removing homogeneous haze, but they are not robust in the non-homogeneous case.
The reasons are mainly twofold. Firstly, due to the complicated haze
distribution, texture details are easily lost during the dehazing process.
Secondly, since training pairs are hard to collect, training on limited data
can easily lead to over-fitting. To tackle
these two issues, we introduce a novel dehazing network using 2D discrete
wavelet transform, namely DW-GAN. Specifically, we propose a two-branch network
to deal with the aforementioned problems. By utilizing wavelet transform in DWT
branch, our proposed method can retain more high-frequency knowledge in feature
maps. In order to prevent over-fitting, ImageNet pre-trained Res2Net is adopted
in the knowledge adaptation branch. Owing to the robust feature representations
of ImageNet pre-training, the generalization ability of our network is improved
dramatically. Finally, a patch-based discriminator is used to reduce artifacts
of the restored images. Extensive experimental results demonstrate that the
proposed method outperforms the state of the art quantitatively and
qualitatively.
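As an illustration of the DWT branch's key ingredient, here is a one-level
orthonormal 2D Haar transform (a minimal stand-in; the actual network may use
a different wavelet or library):

```python
import torch

def haar_dwt2d(x):
    """One level of the orthonormal 2D Haar DWT (H and W assumed even).
    x: (B, C, H, W) -> four sub-bands of shape (B, C, H/2, W/2)."""
    a = x[..., 0::2, 0::2]
    b = x[..., 0::2, 1::2]
    c = x[..., 1::2, 0::2]
    d = x[..., 1::2, 1::2]
    ll = (a + b + c + d) / 2   # low-frequency approximation
    lh = (a - b + c - d) / 2   # one high-frequency detail band
    hl = (a + b - c - d) / 2   # second detail band
    hh = (a - b - c + d) / 2   # diagonal detail band
    return ll, lh, hl, hh

x = torch.randn(1, 3, 256, 256)
ll, lh, hl, hh = haar_dwt2d(x)  # detail bands carry the texture information
```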
|
This expository survey is based on my online talk at the ICCM 2020. It aims
to sketch key steps of the recent proof of the uniform Mordell-Lang conjecture
for curves embedded into Jacobians (a question of Mazur). The full version of
this conjecture is proved by combining Dimitrov-Gao-Habegger
(https://annals.math.princeton.edu/articles/17715) and K\"{u}hne
(arXiv:2101.10272). We include in this survey a detailed proof on how to
combine these two results, which was implicitly done in another short paper of
Dimitrov-Gao-Habegger (arXiv:2009.08505) but not explicitly written in existing
literature. At the end of the survey we discuss some future directions.
|
In this paper, we study the long-time behavior of global solutions to the
Schr\"odinger-Choquard equation $$i\partial_tu+\Delta
u=-(I_\alpha\ast|\cdot|^b|u|^{p})|\cdot|^b|u|^{p-2}u.$$
Inspired by Murphy, who gave a simple proof of scattering for the non-radial
inhomogeneous NLS, we prove scattering theory below the ground state for the
intercritical case in the energy space without a radial assumption.
|
We prove that the free energy of the directed polymer in a Bernoulli
environment converges to the growth rate for the number of open paths in
super-critical oriented percolation as the temperature tends to zero. Our proof
is based on rate-of-convergence results which hold uniformly in the temperature. We also
prove that the convergence rate is locally uniform in the percolation parameter
inside the super-critical phase, which implies that the growth rate depends
continuously on the percolation parameter.
|
We propose a new method for the visual quality assessment of 360-degree
(omnidirectional) videos. The proposed method is based on computing multiple
spatio-temporal objective quality features on viewports extracted from
360-degree videos. A new model is learnt to properly combine these features
into a metric that closely matches subjective quality scores. The main
motivations for the proposed approach are that: 1) quality metrics computed on
viewports better capture the user experience than metrics computed in the
projection domain; 2) the use of viewports easily supports different projection
methods being used in current 360-degree video systems; and 3) no individual
objective image quality metric always performs the best for all types of visual
distortions, while a learned combination of them is able to adapt to different
conditions. Experimental results, based on both the largest available
360-degree video quality dataset and a cross-dataset validation, demonstrate
that the proposed metric outperforms state-of-the-art 360-degree and 2D video
quality metrics.
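The "learned combination" step can be illustrated with a generic regressor
mapping per-video viewport features to subjective scores. Everything below
(feature set, random-forest choice, synthetic data) is a hypothetical stand-in
for the paper's model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_videos = 120
# columns: e.g. mean/worst-case values of several objective metrics
# (PSNR, SSIM, temporal variation, ...) computed over extracted viewports
features = rng.random((n_videos, 6))
mos = features @ np.array([0.3, 0.1, 0.25, 0.05, 0.2, 0.1]) \
      + 0.05 * rng.standard_normal(n_videos)        # synthetic MOS labels

model = RandomForestRegressor(n_estimators=200, random_state=0)
print(cross_val_score(model, features, mos, scoring="r2", cv=5).mean())
```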
|
In the context of scalar-tensor theories, the inclusion of new degrees of
freedom coupled non-minimally to the gravitational sector might produce
additional imprints on cosmological observables. We investigate this premise by
adding a canonical SU(2) Yang-Mills field to the total energy budget of the
universe, coupled to the standard quintessential field by a disformal
transformation. The background dynamics are studied via a dynamical systems
analysis, from which novel anisotropic scaling solutions with a
non-vanishing gauge field, supporting a preferred spatial direction, are
obtained. After establishing the dynamical character of the fixed points, the
phenomenological consequences of the model on the background evolution of the
universe are assessed by means of numerical analysis. As an interesting result,
the disformal coupling changes the equation of state of the gauge field from
radiation to matter at some stages of the evolution of the universe, so that
the gauge field can contribute some fraction of the total dark matter. We
have also quantified the redshift-dependent contribution of the gauge field in
the form of dark radiation during the radiation era to the effective number of
relativistic species. This depends essentially on the initial conditions and,
more importantly, on the disformal coupling function. Phenomenological
couplings and the Abelian version of the model are discussed in order to check
the generality of our results. Finally, the phenomenological advantages of this
model are discussed in the light of the current tensions in the $\Lambda$CDM
model.
|
Let $\mathbb{Z}^{ab}$ be the ring of integers of $\mathbb{Q}^{ab}$, the
maximal abelian extension of $\mathbb{Q}$. We show that there exists an
algorithm to decide whether a system of equations and inequations, with integer
coefficients, has a solution in $\mathbb{Z}^{ab}$ modulo every rational prime.
|
High temperature superconductivity in cuprates arises from doping a parent
Mott insulator by electrons or holes. A central issue is how the Mott gap
evolves and the low-energy states emerge with doping. Here we report
angle-resolved photoemission spectroscopy measurements on a cuprate parent
compound by sequential in situ electron doping. The chemical potential jumps to
the bottom of the upper Hubbard band upon a slight electron doping, making it
possible to directly visualize the charge transfer band and the full Mott gap
region. With increasing doping, the Mott gap rapidly collapses due to the
spectral weight transfer from the charge transfer band to the gapped region and
the induced low-energy states emerge in a wide energy range inside the Mott
gap. These results provide key information on the electronic evolution in
doping a Mott insulator and establish a basis for developing microscopic
theories for cuprate superconductivity.
|
A $(d,k)$-set is a subset of $\mathbb{R}^d$ containing a $k$-dimensional unit
ball in every possible orientation. Using an approach of D.~Oberlin, we prove
various Fourier dimension estimates for compact $(d,k)$-sets. Our main interest
is in restricted $(d,k)$-sets, where the set only contains unit balls with a
restricted set of possible orientations $\Gamma$. In this setting our estimates
depend on the Hausdorff dimension of $\Gamma$ and can sometimes be improved if
additional geometric properties of $\Gamma$ are assumed. We are led to consider
cones and prove that the cone in $\mathbb{R}^{d+1}$ has Fourier dimension
$d-1$, which may be of interest in its own right.
|
We give an explicit formula for the Waring rank of every binary binomial form
with complex coefficients. We give several examples to illustrate this, and
compare the Waring rank and the real Waring rank for binary binomial forms.
|
Open Information Extraction (OIE) systems seek to compress the factual
propositions of a sentence into a series of n-ary tuples. These tuples are
useful for downstream tasks in natural language processing like knowledge base
creation, textual entailment, and natural language understanding. However,
current OIE datasets are limited in both size and diversity. We introduce a new
dataset by converting the QA-SRL 2.0 dataset to a large-scale OIE dataset
(LSOIE). Our LSOIE dataset is 20 times larger than the next largest
human-annotated OIE dataset. We construct and evaluate several benchmark OIE
models on LSOIE, providing baselines for future improvements on the task. Our
LSOIE data, models, and code are made publicly available.
|
Software products have become an integral part of human lives, and therefore
need to account for human values such as privacy, fairness, and equality.
Ignoring human values in software development leads to biases and violations of
human values: racial biases in recidivism assessment and facial recognition
software are well-known examples of such issues. One of the most critical steps
in software development is Software Release Planning (SRP), where decisions are
made about the presence or absence of the requirements (features) in the
software. Such decisions are primarily guided by the economic value of the
requirements, ignoring their impacts on a broader range of human values. That
may result in ignoring (selecting) requirements that positively (negatively)
impact human values, increasing the risk of value breaches in the software. To
address this, we have proposed an Integer Programming approach to considering
human values in software release planning. In this regard, an Integer Linear
Programming (ILP) model has been proposed that explicitly accounts for human
values in finding an "optimal" subset of the requirements. The ILP model
exploits the algebraic structure of fuzzy graphs to capture dependencies and
conflicts among the values of the requirements.
|
Without contamination from the final state interactions, the calculation of
the branching ratios of the semileptonic decays $\Xi^{(')}_{c}\to\Xi+e^+\nu_e$
may provide us with more information about the inner structure of charmed baryons.
Moreover, by studying those processes, one can better determine the form
factors of $\Xi_c\to\Xi$ which can be further applied to relevant estimates. In
this work, we use the light-front quark model to carry out the computations
where the three-body vertex functions for $\Xi_c$ and $\Xi$ are employed. To
fit the new data from Belle II, we re-adjust the model parameters and obtain
$\beta_{s[sq]}=1.07$ GeV which is 2.9 times larger than $\beta_{s\bar s}=0.366$
GeV. This value may imply that the $ss$ pair in $\Xi$ constitutes a more
compact subsystem. Furthermore, we also investigate the non-leptonic decays of
$\Xi^{(')}_c\to \Xi$, which will be experimentally measured soon, so our model
can be tested for consistency with the new data.
|
Node influence metrics have been applied in many applications, including
ranking web pages on the internet or locations in spatial networks. PageRank is
a popular and effective algorithm for estimating node influence. However, the
conventional PageRank method considers neither the heterogeneity of network
structures nor additional network information, which impedes performance
improvement and leads to an underestimation of non-hub nodes' importance. As
these problems are only partially studied, existing solutions are still not
satisfying. This paper addresses the problems by presenting a general
PageRank-based model framework, dubbed Hetero-NodeRank, that accounts for
heterogeneous network topology and incorporates node attribute information to
capture both link- and node-based effects in measuring node influence.
Moreover, the framework enables the calibration of the proposed model against
empirical data, which transforms the original deductive approach into an
inductive one that could be useful for hypothesis testing and causal-effect
analysis. As the original unsupervised task becomes a supervised one,
optimization methods can be leveraged for model calibrations. Experiments on
real data from the national city network of China demonstrated that the
proposed model outperforms several widely used algorithms.
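One way node attributes can enter a PageRank-style iteration is through an
attribute-biased teleportation vector. This is a hedged sketch in the spirit of
the description, not the actual Hetero-NodeRank model:

```python
import numpy as np

def hetero_noderank(A, node_attr, alpha=0.85, beta=0.5, tol=1e-10):
    """PageRank-style power iteration whose teleportation vector mixes
    uniform jumps with normalized node-attribute scores (weight beta),
    so node information enters alongside link structure."""
    n = A.shape[0]
    out_deg = A.sum(axis=1, keepdims=True)
    # row-stochastic transition matrix; dangling rows jump uniformly
    P = np.divide(A, out_deg, out=np.full_like(A, 1.0 / n), where=out_deg > 0)
    v = (1 - beta) / n + beta * node_attr / node_attr.sum()
    r = np.full(n, 1.0 / n)
    while True:
        r_new = alpha * (r @ P) + (1 - alpha) * v
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

A = np.array([[0, 1, 1], [1, 0, 0], [0, 1, 0]], dtype=float)
print(hetero_noderank(A, node_attr=np.array([1.0, 5.0, 1.0])))
```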
|
This review summarizes more than 100 years of research on spinel compounds,
mainly focusing on the progress in understanding their magnetic, electronic,
and polar properties during the last two decades. Many spinel compounds are
magnetic insulators or semiconductors; however, a number of spinel-type metals
exists including superconductors and some rare examples of d-derived
heavy-fermion compounds. In the early days, they gained importance as
ferrimagnetic or even ferromagnetic insulators with relatively high saturation
magnetization and high ordering temperatures, with magnetite being the first
magnetic mineral known to mankind. However, spinels played an outstanding role
in the development of concepts of magnetism, in testing and verifying the
fundamentals of magnetic exchange, in understanding orbital-ordering and
charge-ordering phenomena. In addition, the A-site as well as the B-site
cations in the spinel structure form lattices prone to strong frustration
effects resulting in exotic ground-state properties. In case the A-site cation
is Jahn-Teller active, additional entanglements of spin and orbital degrees of
freedom appear, which can give rise to a spin-orbital liquid or an orbital
glass state. The B-site cations form a pyrochlore lattice, one of the strongest
contenders of frustration in three dimensions. In addition, in spinels with
both cation lattices carrying magnetic moments, competing magnetic exchange
interactions become important, yielding ground states like the time-honoured
triangular Yafet-Kittel structure. Finally, yet importantly, there exists a
long-standing dispute about the possibility of a polar ground state in spinels,
despite their reported overall cubic symmetry. Indeed, over the years a number
of multiferroic spinels were identified.
|
We study a class of gauge groups that can automatically yield a
perturbatively exact Peccei-Quinn symmetry, and we outline a model in which the
axion quality problem is solved at all operator dimensions. Gauge groups
belonging to this class can also enforce and protect accidental symmetries of
the clockwork type, and we present a toy model where an `invisible' axion
arises from a single breaking of the gauge and global symmetries.
|
I briefly review several important formal theory developments in quantum
field theory and string theory that were reported at ICHEP conferences in past
decades, and explain how they underlie a new research area referred to as
physical or quantum mathematics. To illustrate these ideas in some specific
context, I discuss certain aspects of topological string theory and a recently
discovered knots-quivers correspondence.
|
Convolution is one of the basic building blocks of CNN architectures. Despite
its common use, standard convolution has two main shortcomings: it is
content-agnostic and computation-heavy. Dynamic filters are content-adaptive
but further increase the computational overhead. Depth-wise convolution is
a lightweight variant, but it usually leads to a drop in CNN performance or
requires a larger number of channels. In this work, we propose the Decoupled
Dynamic Filter (DDF) that can simultaneously tackle both of these shortcomings.
Inspired by recent advances in attention, DDF decouples a depth-wise dynamic
filter into spatial and channel dynamic filters. This decomposition
considerably reduces the number of parameters and limits computational costs to
the same level as depth-wise convolution. Meanwhile, we observe a significant
boost in performance when replacing standard convolution with DDF in
classification networks. ResNet50/101 improve by 1.9% and 1.3% in top-1
accuracy, while their computational costs are reduced by nearly half.
Experiments on the detection and joint upsampling networks also demonstrate the
superior performance of the DDF upsampling variant (DDF-Up) in comparison with
standard convolution and specialized content-adaptive layers.
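The decoupling itself can be shown in a few lines: the effective k x k filter
at each location is the product of a per-pixel spatial filter and a per-channel
filter. A minimal PyTorch sketch, assuming the two filters have already been
predicted by their respective branches:

```python
import torch
import torch.nn.functional as F

def ddf(x, spatial_filter, channel_filter, k=3):
    """Apply a decoupled dynamic filter.
    x:              (B, C, H, W)
    spatial_filter: (B, k*k, H, W)  -- predicted per pixel, shared over channels
    channel_filter: (B, C, k*k)     -- predicted per channel, shared over pixels
    """
    B, C, H, W = x.shape
    patches = F.unfold(x, k, padding=k // 2).view(B, C, k * k, H * W)
    combined = (spatial_filter.view(B, 1, k * k, H * W)
                * channel_filter.view(B, C, k * k, 1))
    return (patches * combined).sum(dim=2).view(B, C, H, W)

x = torch.randn(2, 8, 16, 16)
sf = torch.randn(2, 9, 16, 16)   # would come from a spatial filter branch
cf = torch.randn(2, 8, 9)        # would come from a channel filter branch
y = ddf(x, sf, cf)               # cost regime similar to depth-wise convolution
```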
|
School performance measures are published annually in England to hold schools
to account and to support parental school choice. This article reviews and
evaluates the Progress 8 secondary school accountability system for
state-funded schools. We assess the statistical strengths and weaknesses of
Progress 8 relating to: choice of pupil outcome attainment measures; potential
adjustments for pupil input attainment and background characteristics;
decisions around which schools and pupils are excluded from the measure;
presentation of Progress 8 to users, choice of statistical model, and
calculation of statistical uncertainty; and issues related to the volatility of
school performance over time, including scope for reporting multi-year
averages. We then discuss challenges for Progress 8 raised by the COVID-19
pandemic. Six simple recommendations follow to improve Progress 8 and school
accountability in England.
|
Designing broadband enhanced chirality is of strong interest to the emerging
fields of chiral chemistry and sensing, or to control the spin orbital momentum
of photons in recently introduced nanophotonic chiral quantum and classical
optical applications. However, chiral light-matter interactions are inherently
extremely weak, difficult to control and enhance, and cannot be made tunable or
broadband. In addition, planar ultrathin nanophotonic
structures to achieve strong, broadband, and tunable chirality at the
technologically important visible to ultraviolet spectrum still remain elusive.
Here, we tackle these important problems by experimentally demonstrating and
theoretically verifying spectrally tunable, extremely large, and broadband
chiroptical response by nanohelical metamaterials. The reported new designs of
all-dielectric and dielectric-metallic (hybrid) plasmonic metamaterials permit
the largest and broadest ever measured chiral Kuhn dissymmetry factor achieved
by a large-scale nanophotonic structure. In addition, the strong circular
dichroism of the presented bottom-up fabricated optical metamaterials can be
tuned by varying their dimensions and proportions between their dielectric and
plasmonic helical subsections. The currently demonstrated ultrathin optical
metamaterials are expected to provide a substantial boost to the developing
field of chiroptics leading to significantly enhanced and broadband chiral
light-matter interactions at the nanoscale.
|
Recent works find that AI algorithms learn biases from data. Therefore, it is
urgent and vital to identify biases in AI algorithms. However, the previous
bias identification pipeline overly relies on human experts to conjecture
potential biases (e.g., gender), which may neglect other underlying biases not
realized by humans. To help human experts better find the AI algorithms'
biases, we study a new problem in this work -- for a classifier that predicts a
target attribute of the input image, discover its unknown biased attribute.
To solve this challenging problem, we use a hyperplane in the generative
model's latent space to represent an image attribute; thus, the original
problem is transformed into optimizing the hyperplane's normal vector and offset.
We propose a novel total-variation loss within this framework as the objective
function and a new orthogonalization penalty as a constraint. The latter
prevents trivial solutions in which the discovered biased attribute is
identical with the target or one of the known-biased attributes. Extensive
experiments on both disentanglement datasets and real-world datasets show that
our method can discover biased attributes and achieve better disentanglement
w.r.t. target attributes. Furthermore, the qualitative results show that our
method can discover unnoticeable biased attributes for various object and scene
classifiers, proving our method's generalizability for detecting biased
attributes in diverse domains of images. The code is available at
https://git.io/J3kMh.
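The orthogonalization penalty, as described, discourages the discovered
hyperplane normal from aligning with the target's or any known-biased
attribute's normal. A minimal sketch (the squared-cosine form and all names
are our assumptions):

```python
import torch

def orthogonalization_penalty(w_new, known_ws):
    """Sum of squared cosine similarities between the hyperplane normal
    being optimized and the fixed normals of the target attribute and the
    known-biased attributes (shapes: (d,) and (m, d))."""
    w = w_new / w_new.norm()
    K = known_ws / known_ws.norm(dim=1, keepdim=True)
    return (K @ w).pow(2).sum()

w = torch.randn(512, requires_grad=True)   # candidate biased-attribute normal
known = torch.randn(3, 512)                # target + known-biased normals
orthogonalization_penalty(w, known).backward()   # gradients flow to w
```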
|
We study a fundamental problem in computational chemistry known as molecular
conformation generation, trying to predict stable 3D structures from 2D
molecular graphs. Existing machine learning approaches usually first predict
distances between atoms and then generate a 3D structure satisfying the
distances, where noise in predicted distances may induce extra errors during 3D
coordinate generation. Inspired by the traditional force field methods for
molecular dynamics simulation, in this paper, we propose a novel approach
called ConfGF by directly estimating the gradient fields of the log density of
atomic coordinates. The estimated gradient fields allow directly generating
stable conformations via Langevin dynamics. However, the problem is very
challenging, as the gradient fields must be roto-translation equivariant. We notice
that estimating the gradient fields of atomic coordinates can be translated to
estimating the gradient fields of interatomic distances, and hence develop a
novel algorithm based on recent score-based generative models to effectively
estimate these gradients. Experimental results across multiple tasks show that
ConfGF outperforms previous state-of-the-art baselines by a significant margin.
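To make the sampling step concrete, here is plain (unannealed) Langevin
dynamics driven by a score function; ConfGF itself works on interatomic
distances with an annealed noise schedule, so this is only a generic sketch:

```python
import numpy as np

def langevin_sample(score_fn, x0, step=1e-2, n_steps=5000, rng=None):
    """Plain Langevin dynamics: x <- x + (step/2)*s(x) + sqrt(step)*noise,
    where s(x) approximates the score (gradient of the log density) at x."""
    rng = rng or np.random.default_rng()
    x = x0.astype(float).copy()
    for _ in range(n_steps):
        x += 0.5 * step * score_fn(x) + np.sqrt(step) * rng.standard_normal(x.shape)
    return x

# toy check: the score of a standard Gaussian is -x, so samples drift to N(0, I)
samples = langevin_sample(lambda x: -x, np.full((10, 3), 5.0))
```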
|
This paper suggests a new finite element method for finding a $P^4$-velocity
and a $P^3$-pressure solving the Stokes equations. The method first solves the
decoupled equation for the $P^4$-velocity. Then, four kinds of local
$P^3$-pressures and one $P^0$-pressure are calculated successively. If we
superpose them, the resulting $P^3$-pressure shows the same optimal order of
convergence as a $P^3$-projection. The chief time cost of the new method is in
solving two linear systems, for the $P^4$-velocity and the $P^0$-pressure,
respectively.
|
Based on the fact that the constituent quark model reproduces the recent
lattice result on baryon-baryon repulsion at short distance and that it
includes the quark dynamics with confinement, we analyze to what extent the
quarkyonic modes appear in the phase space of baryons as one increases the
density, before only quark dynamics, and hence deconfinement, sets in. We find that
as one increases the baryon density, the initial quark mode that appears will
involve the $d(u)$-quark from a neutron (proton), which will leave the most
attractive ($ud$) diquark intact.
|
Since the outbreak of Coronavirus Disease 2019 (COVID-19), most of the
impacted patients have been diagnosed with high fever, dry cough, and sore
throat leading to severe pneumonia. Hence, to date, diagnosis of COVID-19 from
lung imaging has proved to be major evidence for early diagnosis of the
disease. Although nucleic acid detection using real-time reverse-transcriptase
polymerase chain reaction (rRT-PCR) remains a gold standard for the detection
of COVID-19, the proposed approach focuses on the automated diagnosis and
prognosis of the disease from a non-contrast chest computed tomography (CT) scan
for timely diagnosis and triage of the patient. The prognosis covers the
quantification and assessment of the disease to help hospitals with the
management and planning of crucial resources, such as medical staff,
ventilators and intensive care unit (ICU) capacity. The approach utilises
deep learning techniques for automated quantification of the severity of
COVID-19 disease via measuring the area of multiple rounded ground-glass
opacities (GGO) and consolidations in the periphery (CP) of the lungs and
accumulating them to form a severity score. The severity of the disease can be
correlated with the medicines prescribed during the triage to assess the
effectiveness of the treatment. The proposed approach shows promising results
where the classification model achieved 93% accuracy on hold-out data.
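The accumulation of lesion areas into a severity score can be sketched
directly, assuming upstream networks provide per-slice lung and
GGO/consolidation masks (the percent-involvement form is our assumption):

```python
import numpy as np

def severity_score(lesion_mask, lung_mask, pixel_area_mm2=1.0):
    """Accumulate segmented GGO/consolidation area relative to lung area
    for one CT slice; both inputs are boolean 2D arrays."""
    lesion_area = lesion_mask.sum() * pixel_area_mm2
    lung_area = lung_mask.sum() * pixel_area_mm2
    return 100.0 * lesion_area / max(lung_area, 1e-9)   # percent involvement

lung = np.zeros((512, 512), dtype=bool)
lung[100:400, 100:400] = True                # toy lung mask
lesion = np.zeros_like(lung)
lesion[150:200, 150:250] = True              # toy GGO region
print(f"slice severity: {severity_score(lesion, lung):.1f}%")
```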
|
We investigate errors in tangents and adjoints of implicit functions
resulting from errors in the primal solution due to approximations computed by
a numerical solver.
Adjoints of systems of linear equations turn out to be unconditionally
numerically stable. Tangents of systems of linear equations can become
unstable, as can both tangents and adjoints of systems of nonlinear equations;
this extends to optima of convex unconstrained objectives. Sufficient conditions for
numerical stability are derived.
|
Recently, it was argued in [Phys. Rev. Lett. {\bf 126}, 031102 (2021)] that
the weak cosmic censorship conjecture (WCCC) can serve as a constraint on
high-order effective field theories. However, we find a key error in their
approximate black hole solution. After correcting it, their calculation can no
longer show that the WCCC constrains such gravitational theories.
|
We present a dynamical description of (anti)proton number fluctuation
cumulants and correlation functions in central Au-Au collisions at
$\sqrt{s_{\rm NN}} = 7.7-200$ GeV by utilizing viscous hydrodynamics
simulations. The cumulants of proton and baryon number are calculated in a
given momentum acceptance analytically, via an appropriately extended
Cooper-Frye procedure describing particlization of an interacting hadron
resonance gas. The effects of global baryon number conservation are taken into
account using a generalized subensemble acceptance method. The experimental
data of the STAR collaboration are consistent at $\sqrt{s_{\rm NN}} \gtrsim 20$
GeV with simultaneous effects of global baryon number conservation and
repulsive interactions in the baryon sector, the latter being in line with the
behavior of baryon number susceptibilities observed in lattice QCD. The data at
lower collision energies show possible indications for sizable attractive
interactions among baryons. The data also indicate sizable negative
two-particle correlations between antiprotons that are not satisfactorily
described by baryon conservation and excluded volume effects. We also discuss
differences between cumulants and correlation functions (factorial cumulants)
of (anti)proton number distribution, proton versus baryon number fluctuations,
and effects of hadronic afterburner.
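To illustrate the distinction between cumulants and factorial cumulants mentioned above, the short sketch below uses the standard relations $\hat{C}_2 = C_2 - C_1$ and $\hat{C}_3 = C_3 - 3C_2 + 2C_1$, which vanish for an uncorrelated Poisson distribution; the sample size and distribution are illustrative assumptions.

```python
import numpy as np

def cumulants(samples: np.ndarray):
    """First three cumulants of an integer-valued sample."""
    c1 = samples.mean()
    c2 = samples.var()
    c3 = ((samples - c1) ** 3).mean()
    return c1, c2, c3

rng = np.random.default_rng(1)
N = rng.poisson(lam=10.0, size=1_000_000)   # stand-in for proton numbers

c1, c2, c3 = cumulants(N)
# Factorial cumulants (correlation functions) from ordinary cumulants:
chat2 = c2 - c1
chat3 = c3 - 3 * c2 + 2 * c1
print(chat2, chat3)  # both consistent with 0 for a Poisson distribution
```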
|
We consider the problem where $n$ clients transmit $d$-dimensional
real-valued vectors using $d(1+o(1))$ bits each, in a manner that allows the
receiver to approximately reconstruct their mean. Such compression problems
naturally arise in distributed and federated learning. We provide novel
mathematical results and derive computationally efficient algorithms that are
more accurate than previous compression techniques. We evaluate our methods on
a collection of distributed and federated learning tasks, using a variety of
datasets, and show a consistent improvement over the state of the art.
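As a point of reference for the problem setting (and explicitly not the paper's algorithm), the sketch below implements a crude baseline in which each client sends one sign bit per coordinate plus a single float scale, i.e. roughly $d(1+o(1))$ bits per client.

```python
import numpy as np

def encode(v: np.ndarray):
    """d sign bits plus one scalar scale: about d(1+o(1)) bits total."""
    scale = np.abs(v).mean()
    signs = np.sign(v).astype(np.int8)
    return scale, signs

def decode(scale: float, signs: np.ndarray) -> np.ndarray:
    return scale * signs

rng = np.random.default_rng(2)
n_clients, d = 100, 1000
vectors = rng.standard_normal((n_clients, d))

decoded = np.array([decode(*encode(v)) for v in vectors])
est_mean = decoded.mean(axis=0)
true_mean = vectors.mean(axis=0)
print("NMSE:", np.mean((est_mean - true_mean) ** 2) / np.mean(true_mean ** 2))
```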
|
Recently, the SARS-CoV-2 variants from the United Kingdom (UK), South Africa,
and Brazil have received much attention for their increased infectivity,
potentially high virulence, and possible threats to existing vaccines and
antibody therapies. The question remains whether there are other, more
infectious variants transmitted around the world. We carry out a large-scale
study of 252,874 SARS-CoV-2 genome isolates from patients to identify many
other rapidly growing mutations on the spike (S) protein receptor-binding
domain (RBD). We
reveal that 88 out of 95 significant mutations that were observed more than 10
times strengthen the binding between the RBD and the host
angiotensin-converting enzyme 2 (ACE2), indicating the virus evolves toward
more infectious variants. In particular, we discover new fast-growing RBD
mutations N439K, L452R, S477N, S477R, and N501T that also enhance the RBD and
ACE2 binding. We further unveil that mutation N501Y, involved in the UK,
South Africa, and Brazil variants, may moderately weaken the binding between
the RBD and many known antibodies, while mutations E484K and K417N, found in
the South Africa and Brazil variants, can potentially disrupt the binding
between the RBD and many known antibodies. Among three newly identified
fast-growing RBD mutations, L452R, which is now known as part of the California
variant B.1.427, and N501T are able to effectively weaken the binding of many
known antibodies with the RBD. Finally, we hypothesize that RBD mutations that
can simultaneously make SARS-CoV-2 more infectious and disrupt the existing
antibodies, called vaccine escape mutations, will pose an imminent threat to
the current crop of vaccines. A list of most likely vaccine escape mutations is
given, including N501Y, L452R, E484K, N501T, S494P, and K417N.
|
We relate the geometry of Schubert varieties in the twisted affine
Grassmannian to that of nilpotent varieties in symmetric spaces. This extends
some results of Achar-Henderson to the twisted setting. We also obtain some
applications to the geometry of the order-2 nilpotent varieties in certain
classical symmetric spaces.
|
The optomechanical character of molecules was discovered by Raman about one
century ago. Today, molecules are promising contenders for high-performance
quantum optomechanical platforms because their small size and large
energy-level separations make them intrinsically robust against thermal
agitations. Moreover, the precision and throughput of chemical synthesis can
ensure a viable route to quantum technological applications. The challenge,
however, is that the coupling of molecular vibrations to environmental phonons
limits their coherence to picosecond time scales. Here, we improve the
optomechanical quality of a molecule by several orders of magnitude through
phononic engineering of its surroundings. By dressing a molecule with long-lived
high-frequency phonon modes of its nanoscopic environment, we achieve storage
and retrieval of photons at millisecond time scales and allow for the emergence
of single-photon strong coupling in optomechanics. Our strategy can be extended
to the realization of molecular optomechanical networks.
|
In this paper we propose $\epsilon$-Consistent Mixup ($\epsilon$mu).
$\epsilon$mu is a data-based structural regularization technique that combines
Mixup's linear interpolation with consistency regularization in the Mixup
direction, enforcing a simple adaptive tradeoff between the two. This
learnable combination of consistency and interpolation induces a more flexible
structure on the evolution of the response across the feature space and is
shown to improve semi-supervised classification accuracy on the SVHN and
CIFAR10 benchmark datasets, yielding the largest gains in the most challenging
low label-availability scenarios. Empirical studies comparing $\epsilon$mu and
Mixup are presented and provide insight into the mechanisms behind
$\epsilon$mu's effectiveness. In particular, $\epsilon$mu is found to produce
more accurate synthetic labels and more confident predictions than Mixup.
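A hedged sketch of the general recipe, combining Mixup's interpolated-label loss with a consistency loss in the Mixup direction, is given below. The abstract does not specify the adaptive tradeoff, so the fixed eps and the KL-based consistency term are assumptions.

```python
import torch
import torch.nn.functional as F

def eps_consistent_mixup_loss(model, x, y, eps=0.5, alpha=1.0, num_classes=10):
    # Sample a Mixup coefficient and a pairing permutation of the batch.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]

    logits_mix = model(x_mix)
    y1 = F.one_hot(y, num_classes).float()
    y_mix = lam * y1 + (1 - lam) * y1[perm]

    # Mixup term: soft cross-entropy against the interpolated label.
    loss_mix = -(y_mix * logits_mix.log_softmax(-1)).sum(dim=1).mean()

    # Consistency term: the prediction at the mixed point should match the
    # interpolation of the endpoint predictions (targets detached).
    with torch.no_grad():
        p_interp = (lam * model(x).softmax(-1)
                    + (1 - lam) * model(x[perm]).softmax(-1))
    loss_cons = F.kl_div(logits_mix.log_softmax(-1), p_interp,
                         reduction="batchmean")

    # eps trades off the two terms; it is learnable/adaptive in the paper.
    return (1 - eps) * loss_mix + eps * loss_cons
```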
|
We present a novel phase locking scheme for the coherent combination of beam
arrays in the filled aperture configuration. Employing a phase dithering
mechanism for the different beams similar to LOCSET, dithering frequencies for
sequential combination steps are reused. Applying an additional
phase-alternating scheme then makes it possible to use standard synchronized
multichannel lock-in electronics for phase locking a large number of channels,
even when the frequency bandwidth of the employed phase actuators is limited.
|
Short text is a popular avenue of sharing feedback, opinions and reviews on
social media, e-commerce platforms, etc. Many companies need to extract
meaningful information (which may include thematic content as well as semantic
polarity) out of such short texts to understand users' behaviour. However,
obtaining high quality sentiment-associated and human interpretable themes
still remains a challenge for short texts. In this paper we develop ELJST, an
embedding enhanced generative joint sentiment-topic model that can discover
more coherent and diverse topics from short texts. It uses a Markov Random
Field regularizer that can be seen as a generalisation of skip-gram based
models. Further, it can leverage higher-order semantic information appearing
in word embeddings, such as self-attention weights in graphical models. Our
results show an average improvement of 10% in topic coherence and 5% in topic
diversification over baselines. Finally, ELJST helps understand users'
behaviour at more granular, explainable levels. All of this can bring
significant value to the service and healthcare industries, which often deal
with customers.
|
Using $980~\rm fb^{-1}$ of data on and around the $\Upsilon(nS)(n=1,2,3,4,5)$
resonances collected with the Belle detector at the KEKB asymmetric-energy
$e^+e^-$ collider, the two-photon process $\gamma\gamma\to \gamma\psi(2S)$ is
studied from $\sqrt{s} = 3.7$ to $4.2~{\rm GeV}$ for the first time. Evidence
is found for a structure in the invariant mass distribution of $\gamma\psi(2S)$
at $M_1 = 3921.3\pm 2.4 \pm 1.6~{\rm MeV}/c^2$ with a width of $\Gamma_1 =
0.0\pm 5.3 \pm 2.0~{\rm MeV}$ and a significance of $4.0\sigma$ including
systematic uncertainties, and another structure is seen at $M_2 = 4014.4\pm 4.1
\pm 0.5~{\rm MeV}/c^2$ with a width of $\Gamma_2 = 6\pm 16 \pm 6~{\rm MeV}$ and
a global significance of $2.8\sigma$ including the look-elsewhere effect, if
the mass spectrum is parametrized with the incoherent sum of two Breit-Wigner
functions. The upper limits of the widths are determined to be $\Gamma_1^{\rm
UL} = 11.5~{\rm MeV}$ and $\Gamma_2^{\rm UL} = 39.3~{\rm MeV}$ at 90\%
confidence level. The production rates are determined to be
$\Gamma_{\gamma\gamma}{\cal B}(R_1\to\gamma\psi(2S)) = 8.2\pm 2.3\pm 0.9~{\rm
eV}$ assuming $(J^{PC}, |\lambda|) =(0^{++}, 0)$ and $1.6\pm 0.5\pm 0.2~{\rm
eV}$ with $(2^{++}, 2)$ for the first structure and $\Gamma_{\gamma\gamma}{\cal
B}(R_2\to\gamma\psi(2S)) = 5.3\pm 2.7\pm 2.5~{\rm eV}$ with $(0^{++}, 0)$ and
$1.1\pm 0.5\pm 0.5~{\rm eV}$ with $(2^{++}, 2)$ for the second one. Here, the
first errors are statistical and the second systematic.
|
The influence of ligands on the low-frequency vibrations of cadmium selenide
colloidal nanoplatelets (NPLs) of different thicknesses is investigated using
resonant low-frequency Raman scattering. The strong vibration frequency shifts
induced by ligand modifications, as well as the sharp spectral linewidths,
make low-frequency Raman scattering a tool of choice to follow ligand exchange
as well as the nano-mechanical properties of the NPLs, as evidenced by a
carboxylate to thiolate exchange study. Apart from their molecular weight, the
nature of the ligands, such as the sulfur to metal bond of thiols, induces a
modification of the NPLs as a whole, increasing the thickness by one monolayer.
Moreover, as the weight of the ligands increases, the discrepancy between the
mass-load model and the experimental measurements increases. These effects are
all the more important when the number of layers is small and can only be
explained by a modification of the longitudinal sound velocity. This
modification originates in a change of the lattice structure of the NPLs,
which is reflected in their elastic properties. These nanobalances are finally
used to characterize ligand affinity with the surface using binary thiol
mixtures, illustrating the potential of low-frequency Raman scattering to
finely characterize nanocrystal surfaces.
|
Motivated by entropic optimal transport, we investigate an extended notion of
solution to the parabolic equation $(\partial_t + b\cdot\nabla + \Delta_a/2 +
V)g = 0$ with a final boundary condition. It is well-known that the viscosity
solution $g$ of this PDE is represented by the Feynman-Kac formula when the
drift $b$, the diffusion matrix $a$ and the scalar potential $V$ are regular
enough and not growing too fast. In this article, $b$ and $V$ are not assumed
to be regular and their growth is controlled by a finite entropy condition,
allowing for instance $V$ to belong to some Kato class. We show that the
Feynman-Kac formula represents a solution, in an extended sense, to the
parabolic equation. This notion of solution is trajectorial and expressed with
the semimartingale extension of the Markov generator $b\cdot\nabla +
\Delta_a/2$. Our probabilistic approach relies on stochastic derivatives,
semimartingales, Girsanov's theorem and the Hamilton-Jacobi-Bellman equation
satisfied by $\log g$.
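In the regular case, the Feynman-Kac representation reads $g(t,x) = \mathbb{E}\big[\exp\big(\int_t^T V(X_s)\,ds\big)\,\psi(X_T)\big]$ with $dX = b\,dt + \sigma\,dW$ and $a = \sigma\sigma^\top$. The Monte Carlo sketch below (smooth, bounded coefficients assumed) illustrates this formula, which the article extends to irregular $b$ and $V$.

```python
import numpy as np

def feynman_kac(x0, t, T, b, sigma, V, psi,
                n_paths=20_000, n_steps=200, seed=3):
    """Monte Carlo estimate of g(t, x0) = E[exp(int_t^T V(X_s) ds) psi(X_T)]
    with dX = b(X) dt + sigma dW, simulated by Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    dt = (T - t) / n_steps
    x = np.full(n_paths, x0, dtype=float)
    int_V = np.zeros(n_paths)
    for _ in range(n_steps):
        int_V += V(x) * dt
        x += b(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.mean(np.exp(int_V) * psi(x))

# Example with smooth, bounded data (the regular case):
g = feynman_kac(x0=0.0, t=0.0, T=1.0,
                b=lambda x: -x, sigma=1.0,
                V=lambda x: -0.1 * x**2, psi=lambda x: np.exp(-x**2))
print(g)
```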
|
In continuous-variable quantum information processing, quantum error
correction of Gaussian errors requires simultaneous estimation of both
quadrature components of displacements on phase space. However, quadrature
operators $x$ and $p$ are non-commutative conjugate observables, whose
simultaneous measurement is prohibited by the uncertainty principle.
Gottesman-Kitaev-Preskill (GKP) error correction deals with this problem using
complex non-Gaussian states called GKP states. On the other hand, simultaneous
estimation of displacement using experimentally feasible non-Gaussian states
has not been well studied. In this paper, we consider a multi-parameter
estimation problem of displacements assuming an isotropic Gaussian prior
distribution and allowing post-selection of measurement outcomes. We derive a
lower bound for the estimation error when only Gaussian operations are used,
and show that even simple non-Gaussian states such as single-photon states can
beat this bound. Based on Ghosh's bound, we also obtain a lower bound for the
estimation error when the maximum photon number of the input state is given.
Our results reveal the role of non-Gaussianity in the estimation of
displacements, and pave the way toward the error correction of Gaussian errors
using experimentally feasible non-Gaussian states.
|
In this paper we discuss two canonical transformations that turn St\"{a}ckel
separable Hamiltonians of Benenti type into polynomial form: transformation to
Vi\`ete coordinates and transformation to Newton coordinates. Transformation to
Newton coordinates has been applied to these systems only very recently and in
this paper we present a new proof that this transformation indeed leads to
the polynomial form of St\"{a}ckel Hamiltonians of Benenti type. Moreover, we
present all geometric ingredients of these Hamiltonians in both Vi\`ete and
Newton coordinates.
|
In this article we obtain the harmonic oscillator solution of quaternionic
quantum mechanics ($\mathbbm{H}$QM) in the real Hilbert space, by both the
analytic and the algebraic method. The quaternionic solutions have many
additional possibilities compared to complex quantum mechanics
($\mathbbm{C}$QM), and thus there are many possible applications of these
results in future research.
|
Recently, unpolarized and polarized $J/\psi \,(\Upsilon)$ production at the
Electron-Ion Collider (EIC) has been proposed as a new way to extract two
poorly known color-octet NRQCD long-distance matrix elements:
$\langle0\vert{\cal O}_{8}^{J/\psi}(^{1}S_{0})\vert0\rangle$ and
$\langle0\vert{\cal O}_{8}^{J/\psi}(^{3}P_{0})\vert0\rangle$. The proposed
method is based on a comparison to open heavy-quark pair production ideally
performed at the same kinematics. In this paper we analyze this proposal in
more detail and provide predictions for the EIC based on the available
determinations of the color-octet matrix elements. We also propose two
additional methods that do not require comparison to open heavy-quark pair
production.
|
Computational models based on the depth-averaged shallow water equations
(SWE) offer an efficient choice to analyse velocity fields around hydraulic
structures. Second-order finite volume (FV2) solvers have often been used for
this purpose subject to adding an eddy viscosity term at sub-meter resolution,
but have been shown to fall short of capturing small-scale field transients
emerging from wave-structure interactions. The second-order discontinuous
Galerkin (DG2) alternative is significantly more resistant to the growth of
numerical diffusion and leads to faster convergence rates. These properties
make the DG2 solver a promising modelling tool for detailed velocity field
predictions. This paper focuses on exploring this DG2 capability with reference
to an FV2 counterpart for a selection of test cases that require well-resolved
velocity field predictions. The findings of this work lead to identifying a
particular setting for the DG2 solver that allows for obtaining more accurate
and efficient depth-averaged velocity fields incorporating small-scale
transients.
|
We develop a variational framework to understand the properties of functions
learned by fitting deep neural networks with rectified linear unit activations
to data. We propose a new function space, which is reminiscent of classical
bounded variation-type spaces, that captures the compositional structure
associated with deep neural networks. We derive a representer theorem showing
that deep ReLU networks are solutions to regularized data fitting problems over
functions from this space. The function space consists of compositions of
functions from the Banach spaces of second-order bounded variation in the Radon
domain. These are Banach spaces with sparsity-promoting norms, giving insight
into the role of sparsity in deep neural networks. The neural network solutions
have skip connections and rank bounded weight matrices, providing new
theoretical support for these common architectural choices. The variational
problem we study can be recast as a finite-dimensional neural network training
problem with regularization schemes related to the notions of weight decay and
path-norm regularization. Finally, our analysis builds on techniques from
variational spline theory, providing new connections between deep neural
networks and splines.
|
The next wave of wireless technologies is proliferating in connecting things
among themselves as well as to humans. In the era of the Internet of things
(IoT), billions of sensors, machines, vehicles, drones, and robots will be
connected, making the world around us smarter. The IoT will encompass devices
that must wirelessly communicate a diverse set of data gathered from the
environment for myriad new applications. The ultimate goal is to extract
insights from this data and develop solutions that improve quality of life and
generate new revenue. Providing large-scale, long-lasting, reliable, and near
real-time connectivity is the major challenge in enabling a smart connected
world. This paper provides a comprehensive survey on existing and emerging
communication solutions for serving IoT applications in the context of
cellular, wide-area, as well as non-terrestrial networks. Specifically,
wireless technology enhancements for providing IoT access in fifth-generation
(5G) and beyond cellular networks, and communication networks over the
unlicensed spectrum are presented. Aligned with the main key performance
indicators of 5G and beyond 5G networks, we investigate solutions and standards
that enable energy efficiency, reliability, low latency, and scalability
(connection density) of current and future IoT networks. The solutions include
grant-free access and channel coding for short-packet communications,
non-orthogonal multiple access, and on-device intelligence. Further, a vision
of new paradigm shifts in communication networks in the 2030s is provided, and
the integration of the associated new technologies like artificial
intelligence, non-terrestrial networks, and new spectra is elaborated. Finally,
future research directions toward beyond 5G IoT networks are pointed out.
|
Human gesture recognition has drawn much attention in the area of computer
vision. However, the performance of gesture recognition is always influenced by
some gesture-irrelevant factors like the background and the clothes of
performers. Therefore, focusing on the regions of hand/arm is important to the
gesture recognition. Meanwhile, a more adaptive, architecture-searched network
structure can also perform better than block-fixed ones like ResNet, since it
increases the diversity of features in the different stages of the network.
In this paper, we propose a regional attention with
architecture-rebuilt 3D network (RAAR3DNet) for gesture recognition. We replace
the fixed Inception modules with the automatically rebuilt structure through
the network via Neural Architecture Search (NAS), owing to the different shape
and representation ability of features in the early, middle, and late stage of
the network. It enables the network to capture different levels of feature
representations at different layers more adaptively. Meanwhile, we also design
a stackable regional attention module called dynamic-static Attention (DSA),
which derives a Gaussian guidance heatmap and dynamic motion map to highlight
the hand/arm regions and the motion information in the spatial and temporal
domains, respectively. Extensive experiments on two recent large-scale RGB-D
gesture datasets validate the effectiveness of the proposed method and show it
outperforms state-of-the-art methods. The codes of our method are available at:
https://github.com/zhoubenjia/RAAR3DNet.
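As an illustration of the Gaussian guidance heatmap idea, the sketch below builds a heatmap from hand keypoints; the keypoint source, image size, and sigma are assumptions, not the paper's exact construction.

```python
import numpy as np

def gaussian_heatmap(h, w, keypoints, sigma=10.0):
    """Guidance heatmap: a sum of Gaussians centered on (x, y) keypoints,
    normalized to [0, 1], intended to highlight hand/arm regions."""
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w))
    for cx, cy in keypoints:
        heat += np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return heat / heat.max()

mask = gaussian_heatmap(112, 112, keypoints=[(40, 60), (80, 30)])
```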
|
Neural networks (NNs) and linear stochastic estimation (LSE) have widely been
utilized as powerful tools for fluid-flow regressions. We investigate
fundamental differences between them considering two canonical fluid-flow
problems: 1. the estimation of high-order proper orthogonal decomposition
coefficients from their low-order counterparts for the flow around a
two-dimensional cylinder, and 2. the state estimation from wall characteristics
in a turbulent channel flow. In the first problem, we compare the performance
of LSE to that of a multi-layer perceptron (MLP). With the channel flow
example, we capitalize on a convolutional neural network (CNN) as a nonlinear
model which can handle high-dimensional fluid flows. For both cases, the
nonlinear NNs outperform the linear methods thanks to nonlinear activation
functions. We also perform error-curve analyses regarding the estimation error
and the response of weights inside models. Our analysis visualizes the
robustness against noisy perturbation in the error-curve domain while revealing
the fundamental differences between the covered tools for fluid-flow
regressions.
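Since linear stochastic estimation amounts to a linear least-squares map from input features to target quantities, the comparison setup can be sketched as follows, with a synthetic, mildly nonlinear ground truth standing in for the flow data.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p_in, p_out = 2000, 4, 8

# Stand-in data: low-order "POD coefficients" (inputs) mapped to
# high-order ones (targets) through a mildly nonlinear relation.
X = rng.standard_normal((n, p_in))
W = rng.standard_normal((p_in, p_out))
Y = np.tanh(X @ W) + 0.05 * rng.standard_normal((n, p_out))

# Linear stochastic estimation = linear least squares with an intercept.
Xa = np.hstack([X, np.ones((n, 1))])
C, *_ = np.linalg.lstsq(Xa, Y, rcond=None)
resid = np.linalg.norm(Y - Xa @ C) / np.linalg.norm(Y)
print("LSE relative error:", resid)
# A nonlinear network (MLP/CNN) can reduce this residual precisely because
# the underlying map is nonlinear; that is the comparison the paper makes.
```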
|
In-volume ultrafast laser direct writing of silicon is generally limited by
strong nonlinear propagation effects preventing the initiation of
modifications. By employing a triple-optimization procedure in the spectral,
temporal and spatial domains, we demonstrate that modifications can be
repeatably produced inside silicon. Our approach relies on irradiation at
$\approx 2$-$\mu$m wavelength with temporally-distorted femtosecond pulses.
These pulses are focused in a way that spherical aberrations of different
origins counterbalance, as predicted by point spread function analyses and in
good agreement with nonlinear propagation simulations. We also establish the
laws governing modification growth on a pulse-to-pulse basis, which allows us
to demonstrate transverse inscription inside silicon with various line
morphologies depending on the irradiation conditions. We finally show that the
production of single-pulse repeatable modifications is a necessary condition
for reliable transverse inscription inside silicon.
|
In the past decade, remarkable progress has been achieved in deep learning
related systems and applications. In the post Moore's Law era, however, the
limit of semiconductor fabrication technology along with the increasing data
size have slowed down the development of learning algorithms. In parallel, the
fast development of quantum computing has pushed it to the new ear. Google
illustrates quantum supremacy by completing a specific task (random sampling
problem), in 200 seconds, which is impracticable for the largest classical
computers. Due to the limitless potential, quantum based learning is an area of
interest, in hopes that certain systems might offer a quantum speedup. In this
work, we propose a novel architecture QuClassi, a quantum neural network for
both binary and multi-class classification. Powered by a quantum
differentiation function along with a hybrid quantum-classic design, QuClassi
encodes the data with a reduced number of qubits and generates the quantum
circuit, pushing it to the quantum platform for the best states, iteratively.
We conduct intensive experiments on both the simulator and IBM-Q quantum
platform. The evaluation results demonstrate that QuClassi is able to
outperform the state-of-the-art quantum-based solutions, Tensorflow-Quantum and
QuantumFlow by up to 53.75% and 203.00% for binary and multi-class
classifications. Compared to traditional deep neural networks, QuClassi
achieves comparable performance with 97.37% fewer parameters.
|
We consider a linear minimum mean squared error (LMMSE) estimation framework
with model mismatch where the assumed model order is smaller than that of the
underlying linear system which generates the data used in the estimation
process. By modelling the regressors of the underlying system as random
variables, we analyze the average behaviour of the mean squared error (MSE).
Our results quantify how the MSE depends on the interplay between the number of
samples and the number of parameters in the underlying system and in the
assumed model. In particular, if the number of samples is not sufficiently
large, neither increasing the number of samples nor increasing the assumed
model complexity is sufficient to guarantee a performance improvement.
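A toy version of this mismatch experiment, fitting a least-squares model of order p_model < p_true to data generated by a larger linear system with random regressors, can be sketched as follows; the specific orders and noise level are illustrative assumptions.

```python
import numpy as np

def avg_mse(n_samples, p_true=20, p_model=5, noise=0.1, trials=200, seed=5):
    """Average test MSE of a least-squares fit whose assumed model order
    p_model is smaller than the true order p_true (model mismatch)."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(trials):
        w = rng.standard_normal(p_true)
        X = rng.standard_normal((n_samples, p_true))   # random regressors
        y = X @ w + noise * rng.standard_normal(n_samples)
        w_hat, *_ = np.linalg.lstsq(X[:, :p_model], y, rcond=None)
        Xt = rng.standard_normal((1000, p_true))       # fresh test data
        errs.append(np.mean((Xt[:, :p_model] @ w_hat - Xt @ w) ** 2))
    return float(np.mean(errs))

for n in (10, 30, 100, 1000):
    print(n, avg_mse(n))
```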
|
We discuss the anatomy of the $L_{VV}$ observable designed as a ratio of the
longitudinal components of $B_s \to VV$ versus $B_d \to VV$ decays. We focus on
the particular case of $B_{d,s} \to K^{*0} {\bar K^{*0}}$ where we find for the
SM prediction $L_{K^* \bar{K}^*}=19.5^{+9.3}_{-6.8}$ implying a 2.6$\sigma$
tension with respect to data. The interpretation of this tension in a model
independent way identifies two Wilson coefficients ${\cal C}_{4}$ and ${\cal
C}_{8g}$ as possible sources. The example of one simplified model including a
Kaluza-Klein (KK) gluon is discussed. This KK gluon embedded inside a
composite/extra-dimensional model combined with a $Z^\prime$ can also explain
the $b\to s\ell\ell$ anomalies, albeit with a significant amount of
fine-tuning.
|
We review the calculation of the solar axion flux from axion-photon and
axion-electron interactions and discuss the size of various effects neglected
in current calculations. For the Primakoff flux we then explicitly include the
partial degeneracy of electrons. We survey the available solar models and
opacity codes and develop a publicly available C++/Python code to quantify the
associated systematic differences and statistical uncertainties. The number of
axions emitted in helioseismological solar models is systematically larger by
about 5% compared to photospheric models, while the overall statistical
uncertainties in solar models are typically at the percent level in both
helioseismological and photospheric models. However, for specific energies, the
statistical fluctuations can reach up to about 5% as well. Taking these
uncertainties into account, we investigate the ability of the upcoming
helioscope IAXO to discriminate KSVZ axion models. Such a discrimination is
possible for a number of models, and a discovery of KSVZ axions with high $E/N$
ratios could potentially help to solve the solar abundance problem. We discuss
limitations of the axion emission calculations and identify potential
improvements, which would help to determine axion model parameters more
accurately.
|
In this paper, we prove existence and regularity of positive solutions for
singular quasilinear elliptic systems involving gradient terms. Our approach is
based on comparison properties, a priori estimates and Schauder's fixed point
theorem.
|
A novel and compact tri-band planar antenna for 2.4/5.2/5.8-GHz wireless
local area network (WLAN), 2.3/3.5/5.5-GHz Worldwide Interoperability for
Microwave Access (WiMAX) and Bluetooth applications is proposed and studied in
this paper. The antenna comprises an L-shaped element which is coupled with a
ground-shorted parasitic resonator to generate three resonant modes for
tri-band operation. The L-shaped element which is placed on top of the
substrate is fed by a 50$\Omega$ microstrip feed line and is responsible for
the generation of a wide band at 5.5 GHz. The parasitic resonator is placed on
the other side of the substrate and is directly connected to the ground plane.
The presence of the parasitic resonator gives rise to two additional resonant
bands at 2.3 GHz and 3.5 GHz. Thus, together the two elements generate three
resonant bands to cover WLAN, WiMAX and Bluetooth bands of operation. A
thorough parametric study has been performed on the antenna and it has been
found that the three bands can be tuned by varying certain dimensions of the
antenna. Hence, the same design can be used for frequencies in adjacent bands
as well with minor changes in its dimensions. Important antenna parameters such
as return loss, radiation pattern and peak gains in the operating bands have
been studied in detail to prove that the proposed design is a promising
candidate for the aforementioned wireless technologies.
|
We present a novel mapping approach for WENO schemes through the use of an
approximate constant mapping function which is constructed by employing an
approximation of the classic signum function. The new approximate constant
mapping function is designed to meet the overall criteria for a proper mapping
function required in the design of the WENO-PM6 scheme. The WENO-PM6 scheme was
proposed to overcome the potential loss of accuracy of the WENO-M scheme which
was developed to recover the optimal convergence order of the WENO-JS scheme at
critical points. Our new mapped WENO scheme, denoted as WENO-ACM, maintains
almost all advantages of the WENO-PM6 scheme, including low dissipation and
high resolution, while remarkably decreasing the number of mathematical
operations in every mapping process, leading to a significant improvement in
efficiency. The convergence rates of the WENO-ACM scheme have been
demonstrated using the one-dimensional linear advection equation with various
initial
conditions. Numerical results of one-dimensional Euler equations for the
Riemann problems, the Mach 3 shock-density wave interaction and the
Woodward-Colella interacting blastwaves are improved in comparison with the
results obtained by the WENO-JS, WENO-M and WENO-PM6 schemes. Numerical
experiments with two-dimensional problems, such as the 2D Riemann problem, the
shock-vortex interaction, the 2D explosion problem, the double Mach reflection
and the forward-facing step problem, modeled via the two-dimensional Euler
equations have been conducted to demonstrate the high resolution and the
effectiveness of the WENO-ACM scheme. The WENO-ACM scheme provides
significantly better resolution than the WENO-M scheme and slightly better
resolution than the WENO-PM6 scheme, and compared to the WENO-M and WENO-PM6
schemes, the extra computational cost is reduced by more than 83% and 93%,
respectively.
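For orientation, the sketch below shows the classic WENO-JS nonlinear weights and the WENO-M mapping of Henrick et al. on which mapped schemes such as WENO-PM6 and WENO-ACM build; the approximate-signum ACM mapping itself is not reproduced here.

```python
import numpy as np

def weno5_weights(f, eps=1e-6):
    """Fifth-order WENO-JS nonlinear weights for the 5-point stencil
    f = (f_{i-2}, ..., f_{i+2}) at the i+1/2 interface."""
    b0 = 13/12*(f[0]-2*f[1]+f[2])**2 + 0.25*(f[0]-4*f[1]+3*f[2])**2
    b1 = 13/12*(f[1]-2*f[2]+f[3])**2 + 0.25*(f[1]-f[3])**2
    b2 = 13/12*(f[2]-2*f[3]+f[4])**2 + 0.25*(3*f[2]-4*f[3]+f[4])**2
    d = np.array([0.1, 0.6, 0.3])                 # optimal linear weights
    alpha = d / (eps + np.array([b0, b1, b2]))**2
    return alpha / alpha.sum()

def weno_m_map(w, d):
    """Mapping g_k of Henrick et al. (WENO-M); mapped variants such as
    WENO-PM6 and WENO-ACM replace this function with flatter/cheaper maps."""
    return w * (d + d**2 - 3*d*w + w**2) / (d**2 + w*(1 - 2*d))

w = weno5_weights(np.array([1.0, 1.1, 1.3, 1.6, 2.0]))
d = np.array([0.1, 0.6, 0.3])
wm = weno_m_map(w, d)
wm /= wm.sum()
print(w, wm)
```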
|
Hundreds of millions of speakers of bidirectional (BiDi) languages rely on
writing systems that mix the native right-to-left script with left-to-right
strings. The global reach of interactive digital technologies requires special
attention to these people, whose perception of interfaces is affected by this
script mixture. However, empirical research on this topic is scarce. Although
leading software vendors provide guidelines for BiDi design, bidirectional
interfaces demonstrate inconsistent and incorrect directionality of UI
elements, which may cause user confusion and errors. Through a review of
websites, we identified problematic UI items and considered reasons for their
existence. In an online survey with 234 BiDi speakers, we observed that in many
cases, users' direction preferences were inconsistent with the guidelines. The
findings provide potential insights for design rules and empirical evidence for
the problem's complexity, suggesting the need for further empirical research
and greater attention by the HCI community to the BiDi design problem.
|
Grasping objects in cluttered scenarios is a challenging task in robotics.
Performing pre-grasp actions such as pushing and shifting to scatter objects is
a way to reduce clutter. Based on deep reinforcement learning, we propose a
Fast-Learning Grasping (FLG) framework that integrates pre-grasping actions
along with grasping to pick up objects from cluttered scenarios with reduced
real-world training time. We associate rewards for performing moving actions
with the change of environmental clutter and utilize a hybrid triggering
method, leading to data-efficient learning and synergy. Then we use the output
of an extended fully convolutional network as the value function of each pixel
point of the workspace and establish an accurate estimation of the grasp
probability for each action. We also introduce a mask function as prior
knowledge to enable the agents to focus on the accurate pose adjustment to
improve the effectiveness of collecting training data and, hence, to learn
efficiently. We carry out pre-training of the FLG in a simulated environment,
and then the learnt model is transferred to the real world with minimal
fine-tuning for further learning during actions. Experimental results
demonstrate a 94% grasp success rate and the ability to generalize to novel
objects. Compared to state-of-the-art approaches in the literature, the
proposed FLG framework can achieve a similar or higher grasp success rate with
less training in the real world. A supplementary video is available
at https://youtu.be/e04uDLsxfDg.
|
Structured matrices, such as those derived from Kronecker products (KP), are
effective at compressing neural networks, but can lead to unacceptable accuracy
loss when applied to large models. In this paper, we propose the notion of
doping -- addition of an extremely sparse matrix to a structured matrix. Doping
facilitates additional degrees of freedom for a small number of parameters,
allowing them to independently diverge from the fixed structure. To train LSTMs
with doped structured matrices, we introduce the additional parameter matrix
while slowly annealing its sparsity level. However, we find that performance
degrades as we slowly sparsify the doping matrix, due to co-matrix adaptation
(CMA) between the structured and the sparse matrices. We address this over
dependence on the sparse matrix using a co-matrix dropout regularization (CMR)
scheme. We provide empirical evidence to show that doping, CMA and CMR are
concepts generally applicable to multiple structured matrices (Kronecker
Product, LMF, Hybrid Matrix Decomposition). Additionally, results with doped
Kronecker product matrices demonstrate state-of-the-art accuracy at large
compression factors (10 - 25x) across 4 natural language processing
applications with minor loss in accuracy. The doped KP compression technique
outperforms previous state-of-the-art compression results by achieving a 1.3 -
2.4x higher compression factor at similar accuracy, while also beating strong
alternatives like pruning and low-rank methods by a large margin (8% or more).
Additionally, we show that doped KP can be deployed on commodity hardware using
the current software stack and achieve 2.5 - 5.5x inference run-time speed-up
over baseline.
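A minimal sketch of a doped structured weight, W = kron(A, B) + S with an extremely sparse S, illustrates the parameter accounting; the random sparsity support is an assumption, and the annealing and CMR regularization used in training are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

# Structured part: Kronecker product of two small factors.
A = rng.standard_normal((8, 8))
B = rng.standard_normal((32, 32))
W_struct = np.kron(A, B)           # 256 x 256 matrix from 8x8 + 32x32 params

# Doping: an extremely sparse additive matrix (0.5% nonzeros here).
density = 0.005
support = rng.random(W_struct.shape) < density
S = rng.standard_normal(W_struct.shape) * support

W = W_struct + S
n_params = A.size + B.size + int(support.sum())
print("doped params:", n_params, "vs dense:", W.size)
```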
|
Recently, deep neural network (DNN)-based speech enhancement (SE) systems
have been used with great success. During training, such systems require clean
speech data - ideally, in large quantity with a variety of acoustic conditions,
many different speaker characteristics and for a given sampling rate (e.g.,
48kHz for fullband SE). However, obtaining such clean speech data is not
straightforward - especially, if only considering publicly available datasets.
At the same time, a lot of material for automatic speech recognition (ASR) with
the desired acoustic/speaker/sampling rate characteristics is publicly
available except being clean, i.e., it also contains background noise as this
is even often desired in order to have ASR systems that are noise-robust.
Hence, using such data to train SE systems is not straightforward. In this
paper, we propose two improvements to train SE systems on noisy speech data.
First, we propose several modifications of the loss functions, which make them
robust against noisy speech targets. In particular, computing the median over
the sample axis before averaging over time-frequency bins makes it possible to
use such data. Furthermore, we propose a noise augmentation scheme for
mixture-invariant training (MixIT), which makes it applicable in such
scenarios as well. For our
experiments, we use the Mozilla Common Voice dataset and we show that using our
robust loss function improves PESQ by up to 0.19 compared to a system trained
in the traditional way. Similarly, for MixIT we can see an improvement of up to
0.27 in PESQ when using our proposed noise augmentation.
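The median-based robust loss can be sketched directly from the description above: compute a per-bin loss, take the median over the sample (batch) axis, then average over time-frequency bins. The spectrogram shapes are illustrative assumptions.

```python
import torch

def robust_spectral_loss(est: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Median over the sample (batch) axis, then mean over time-frequency
    bins, so utterances with noisy 'clean' targets act as outliers and are
    downweighted.

    est, target: spectrogram magnitudes of shape (batch, freq, time).
    """
    per_bin = (est - target).abs() ** 2      # (batch, freq, time)
    med = per_bin.median(dim=0).values       # median over the sample axis
    return med.mean()                        # average over T-F bins

est, target = torch.randn(8, 257, 100), torch.randn(8, 257, 100)
print(robust_spectral_loss(est, target))
```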
|
Temporal, spectral, and sample-to-sample fluctuations in coherence properties
of qubits form an outstanding challenge for the development of upscaled
fault-tolerant quantum computers. A ubiquitous source for these fluctuations in
superconducting qubits is a set of atomic-scale defects with a two-level
structure. Here we propose a way to mitigate these fluctuations and stabilize
the qubit performance. We show that frequency modulation of a qubit or,
alternatively, of the two-level defects, leads to averaging of the qubit
relaxation rate over a wide interval of frequencies.
|
In this work, we introduce NU-Wave, the first neural audio upsampling model
to produce waveforms of sampling rate 48kHz from coarse 16kHz or 24kHz inputs,
while prior works could generate only up to 16kHz. NU-Wave is the first
diffusion probabilistic model for audio super-resolution which is engineered
based on neural vocoders. NU-Wave generates high-quality audio that achieves
high performance in terms of signal-to-noise ratio (SNR), log-spectral distance
(LSD), and accuracy of the ABX test. In all cases, NU-Wave outperforms the
baseline models despite its substantially smaller model capacity (3.0M
parameters, 5.4-21% of the baselines'). The audio samples of our model are
available at https://mindslab-ai.github.io/nuwave, and the code will be made
available soon.
|
We obtain the asymptotic formula with an error term $O(X^{\frac{1}{2} +
\varepsilon})$ for the smoothed first moment of quadratic twists of modular
$L$-functions. We also give a similar result for the smoothed first moment of
the first derivative of quadratic twists of modular $L$-functions. The argument
is largely based on Young's recursive method [19,20].
|
Convolutional neural network training can suffer from diverse issues like
exploding or vanishing gradients, scaling-based weight-space symmetry and
covariate shift. In order to address these issues, researchers have developed
weight regularization methods and activation normalization methods. In this
work we
propose a weight soft-regularization method based on the Oblique manifold. The
proposed method uses a loss function which pushes each weight vector to have a
norm close to one, i.e. the weight matrix is smoothly steered toward the
so-called Oblique manifold. We evaluate our method on the very popular
CIFAR-10, CIFAR-100 and ImageNet 2012 datasets using two state-of-the-art
architectures, namely the ResNet and wide-ResNet. Our method introduces
negligible computational overhead and the results show that it is competitive
with the state-of-the-art and in some cases superior to it. Additionally, the
results are less sensitive to hyperparameter settings such as batch size and
regularization factor.
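A minimal sketch of such an Oblique-manifold soft regularizer, pushing each weight vector's norm toward one, might look as follows; the exact loss form used in the paper may differ.

```python
import torch

def oblique_soft_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Soft regularizer steering each row of a weight matrix toward unit
    norm, i.e. smoothly toward the Oblique manifold (unit-norm rows)."""
    row_norms = weight.flatten(1).norm(dim=1)
    return ((row_norms - 1.0) ** 2).mean()

# Usage: total_loss = task_loss + lam * oblique_soft_penalty(layer.weight)
w = torch.randn(64, 128, requires_grad=True)
penalty = oblique_soft_penalty(w)
penalty.backward()
```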
|
In the online balanced graph repartitioning problem, one has to maintain a
clustering of $n$ nodes into $\ell$ clusters, each having $k = n / \ell$ nodes.
During runtime, an online algorithm is given a stream of communication requests
between pairs of nodes: an inter-cluster communication costs one unit, while
the intra-cluster communication is free. An algorithm can change the
clustering, paying unit cost for each moved node.
This natural problem admits a simple $O(\ell^2 \cdot k^2)$-competitive
algorithm COMP, whose performance is far from the best known lower bound
of $\Omega(\ell \cdot k)$. One of the open questions is whether the dependency
on $\ell$ can be made linear; this question is of practical importance, as in
the typical datacenter application, where virtual machines are clustered on
physical servers, $\ell$ is several orders of magnitude larger than $k$. We
answer
this question affirmatively, proving that a simple modification of COMP is
$(\ell \cdot 2^{O(k)})$-competitive.
On the technical level, we achieve our bound by translating the problem to a
system of linear integer equations and using Graver bases to show the existence
of a ``small'' solution.
|
We prove a pair of sharp reverse isoperimetric inequalities for domains in
nonpositively curved surfaces: (1) metric disks centered at the vertex of a
Euclidean cone of angle at least $2\pi$ have minimal area among all
nonpositively curved disks of the same perimeter and the same total curvature;
(2) geodesic triangles in a Euclidean (resp. hyperbolic) cone of angle at least
$2\pi$ have minimal area among all nonpositively curved geodesic triangles
(resp. all geodesic triangles of curvature at most $-1$) with the same side
lengths and angles.
|
We derive bounds on the error, in high-order Sobolev norms, incurred in the
approximation of Sobolev-regular as well as analytic functions by neural
networks with the hyperbolic tangent activation function. These bounds provide
explicit estimates on the approximation error with respect to the size of the
neural networks. We show that tanh neural networks with only two hidden layers
suffice to approximate functions at comparable or better rates than much deeper
ReLU neural networks.
|
This paper addresses the problem of time-varying bearing formation control in
$d$ $(d\ge 2)$-dimensional Euclidean space by exploring Persistence of
Excitation (PE) of the desired bearing reference. A general concept of Bearing
Persistently Exciting (BPE) formation defined in $d$-dimensional space is here
fully developed. By providing a desired formation that is BPE, distributed
control laws for multi-agent systems under both single- and double-integrator
dynamics are proposed using bearing measurements (along with velocity
measurements when the agents are described by double-integrator dynamics),
which guarantee uniform exponential stabilization of the desired formation in
terms of shape and scale. A key contribution of this work is to show that the
classical bearing rigidity condition on the graph topology, required for
achieving the stabilization of a formation up to a scaling factor, is relaxed
and extended in a natural manner by exploring PE conditions imposed either on a
specific set of desired bearing vectors or on the whole desired formation.
Simulation results are provided to illustrate the performance of the proposed
control method.
|
By investigating the exam scores of introductory economics in a business
school in Taiwan between 2008 and 2019, we find three sets of results: First,
we find no significant difference between genders in the exam scores. Second,
students' majors are significantly associated with their exam scores, which
likely reflects their academic ability measured at college admission. Third,
the exam scores are strong predictors of students' future academic performance.
|
We explicitly determine the defining relations of all quantum symmetric pair
coideal subalgebras of quantized enveloping algebras of Kac-Moody type. Our
methods are based on star products on noncommutative $\mathbb{N}$-graded
algebras. The resulting defining relations are expressed in terms of continuous
$q$-Hermite polynomials and a new family of deformed Chebyshev polynomials.
|
The use of smartphone-collected respiratory sounds, combined with deep
learning models, for detecting and classifying COVID-19 has recently become
popular. It removes the need for in-person testing procedures, especially for
rural regions where related medical supplies, experienced workers, and
equipment are limited. However, existing sound-based diagnostic approaches are
trained in a fully supervised manner, which requires large-scale, well-labelled
data. It is critical to discover new methods to leverage unlabelled respiratory
data, which can be obtained more easily. In this paper, we propose a novel
self-supervised learning enabled framework for COVID-19 cough classification. A
contrastive pre-training phase is introduced to train a Transformer-based
feature encoder with unlabelled data. Specifically, we design a random masking
mechanism to learn robust representations of respiratory sounds. The
pre-trained feature encoder is then fine-tuned in the downstream phase to
perform cough classification. In addition, different ensembles with varied
random masking rates are also explored in the downstream phase. Through
extensive evaluations, we demonstrate that the proposed contrastive
pre-training, the random masking mechanism, and the ensemble architecture
contribute to improving cough classification performance.
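A hedged sketch of the random masking mechanism, producing two independently masked views of the same respiratory sound as a positive pair for contrastive pre-training, follows; the masking rate and frame-level granularity are assumptions.

```python
import torch

def random_mask(spec: torch.Tensor, mask_rate: float = 0.15) -> torch.Tensor:
    """Randomly zero out time frames of a (batch, freq, time) spectrogram."""
    b, _, t = spec.shape
    keep = (torch.rand(b, 1, t) > mask_rate).float()
    return spec * keep

spec = torch.randn(4, 64, 200)            # e.g., mel-spectrograms of coughs
view1, view2 = random_mask(spec), random_mask(spec)
# view1/view2 are fed to the Transformer encoder; a contrastive loss
# (e.g., NT-Xent) pulls their embeddings together.
```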
|
We report the discovery of a circumstellar debris disk viewed nearly edge-on
and associated with the young, K1 star BD+45$^{\circ}$598 using high-contrast
imaging at 2.2$\mu$m obtained at the W.M.~Keck Observatory. We detect the disk
in scattered light with a peak significance of $\sim$5$\sigma$ over three
epochs, and our best-fit model of the disk is an almost edge-on $\sim$70 AU
ring, with inclination angle $\sim$87$^\circ$. Using the NOEMA interferometer
at the Plateau de Bure Observatory operating at 1.3mm, we find resolved
continuum emission aligned with the ring structure seen in the 2.2$\mu$m
images. We estimate a fractional infrared luminosity of $L_{IR}/L_{tot}$
$\simeq6^{+2}_{-1}$$\times$$10^{-4}$, higher than that of the debris disk
around AU Mic. Several characteristics of BD+45$^{\circ}$598, such as its
galactic space motion, placement in a color-magnitude diagram, and strong
presence of lithium, are all consistent with its membership in the $\beta$
Pictoris Moving Group with an age of 23$\pm$3 Myr. However, the galactic
position for BD+45$^{\circ}$598 is slightly discrepant from previously-known
members of the $\beta$ Pictoris Moving Group, possibly indicating an extension
of members of this moving group to distances of at least 70 pc.
BD+45$^{\circ}$598 appears to be an example from a population of young
circumstellar debris systems associated with newly identified members of young
moving groups that can be imaged in scattered light, key objects for mapping
out the early evolution of planetary systems from $\sim$10-100 Myr. This target
will also be ideal for northern-hemisphere, high-contrast imaging platforms to
search for self-luminous, planetary mass companions residing in this system.
|
We study the fair division problem on divisible heterogeneous resources (the
cake cutting problem) with strategic agents, where each agent can manipulate
his/her private valuation in order to receive a better allocation. A
(direct-revelation) mechanism takes agents' reported valuations as input, and
outputs an allocation that satisfies a given fairness requirement. A natural
and fundamental open problem, first raised by [Chen et al., 2010] and
subsequently raised by [Procaccia, 2013] [Aziz and Ye, 2014] [Branzei and
Miltersen, 2015] [Menon and Larson, 2017] [Bei et al., 2017] [Bei et al.,
2020], etc., is whether there exists a deterministic, truthful and envy-free
(or even proportional) cake cutting mechanism. In this paper, we resolve this
open problem by proving that there does not exist a deterministic, truthful and
proportional cake cutting mechanism, even in the special case where all of the
following hold: 1. there are only two agents; 2. each agent's valuation is a
piecewise-constant function; 3. each agent is hungry: each agent has a strictly
positive value on any part of the cake. The impossibility result extends to the
case where the mechanism is allowed to leave some part of the cake unallocated.
To circumvent this impossibility result, we aim to design mechanisms that
possess a certain degree of truthfulness. Motivated by the kind of truthfulness
possessed by the classical I-cut-you-choose protocol, we define a weaker notion
of truthfulness: the risk-averse truthfulness. We show that the well-known
moving-knife procedure and Even-Paz algorithm do not have this truthful
property. We propose a mechanism that is risk-averse truthful and envy-free,
and a mechanism that is risk-averse truthful and proportional that always
outputs allocations with connected pieces.
|
The Deep Extragalactic VIsible Legacy Survey (DEVILS) is an ongoing
high-completeness, deep spectroscopic survey of $\sim$60,000 galaxies to
Y$<$21.2 mag, over $\sim$6 deg$^2$ in three well-studied deep extragalactic
fields: D10 (COSMOS), D02 (XMM-LSS) and D03 (ECDFS). Numerous DEVILS projects
all require consistent, uniformly-derived and state-of-the-art photometric data
with which to measure galaxy properties. Existing photometric catalogues in
these regions either use varied photometric measurement techniques for
different facilities/wavelengths leading to inconsistencies, older imaging data
and/or rely on source detection and photometry techniques with known problems.
Here we use the ProFound image analysis package and state-of-the-art imaging
datasets (including Subaru-HSC, VST-VOICE, VISTA-VIDEO and UltraVISTA-DR4) to
derive matched-source photometry in 22 bands from the FUV to 500$\mu$m. This
photometry is found to be consistent with, or better than, previous approaches
using fixed-size apertures (which are specifically tuned to derive colours) in
colour analysis, but produces superior total source photometry, essential for
the
derivation of stellar masses, star-formation rates, star-formation histories,
etc. Our photometric catalogue is described in detail and, after internal
DEVILS team projects, will be publicly released for use by the broader
scientific community.
|
The Kepler and TESS missions delivered high-precision, long-duration
photometric time series for hundreds of main-sequence stars with
gravito-inertial (g) pulsation modes. This high precision allows us to evaluate
increasingly detailed theoretical stellar models. Recent theoretical work
extended the traditional approximation of rotation (TAR), a framework to
evaluate the effect of the Coriolis acceleration on g-modes, to include the
effects of the centrifugal acceleration in the approximation of slightly
deformed stars, which so far had mostly been neglected in asteroseismology.
This extension of the TAR was conceived by rederiving the TAR in a
centrifugally deformed, spheroidal coordinate system. We explore the effect of
the centrifugal acceleration on g modes and assess its detectability in
space-based photometry. We implement the new framework to calculate the
centrifugal deformation of precomputed 1D spherical stellar structure models
and compute the corresponding g-mode frequencies, assuming uniform rotation.
The framework is evaluated for a grid of stellar structure models covering a
relevant parameter space for observed g-mode pulsators. The centrifugal
acceleration modifies the effect of the Coriolis acceleration on g modes,
narrowing the equatorial band in which they are trapped. Furthermore, the
centrifugal acceleration causes the pulsation periods and period spacings of
the most common g modes (prograde dipole modes and r modes) to increase with
values similar to the observational uncertainties in Kepler and TESS data. The
effect of the centrifugal acceleration on g modes is formally detectable in
modern space photometry. Implementation of the new theoretical framework in
stellar structure and pulsation codes will allow for more precise
asteroseismic modelling of centrifugally deformed stars, and for assessing its
effect on mode excitation, trapping and damping.
|