Motivated by the observation of the first hidden charm pentaquarks by the
LHCb collaboration in 2015 and the updated analysis with an order-of-magnitude
larger data set in 2019, we estimate the prompt production cross sections of
these states, as well as those of their heavy quark spin partners, in the
$\Sigma_c^{(*)}\bar{D}^{(*)}$ hadronic molecular picture, at a center-of-mass
energy of $7~\mathrm{TeV}$ in $pp$ collisions. Their cross sections are several
$\mathrm{nb}$, so we would expect several tens of hidden charm pentaquark
events at the LHC based on its current integrated luminosity. The cross
sections for hidden charm pentaquarks with third isospin component
$I_z=+\frac{1}{2}$ ($P_c^+$) deviate sizably from those with
$I_z=-\frac{1}{2}$ ($P_c^0$), and the cross sections decrease dramatically with
increasing transverse momentum. Our study also indicates where to search for
the missing hidden charm pentaquarks. The confirmation of the complete set of
hidden charm pentaquarks predicted by heavy quark symmetry would further verify their
$\Sigma_c^{(*)}\bar{D}^{(*)}$ molecular interpretation. In addition, the
relative strength among these cross sections for pentaquarks can help us to
identify the quantum numbers of the $P_c(4440)$ and $P_c(4457)$.
|
This work studies the quantitative stability of the quadratic optimal
transport map between a fixed probability density $\rho$ and a probability
measure $\mu$ on $\mathbb{R}^d$, which we denote $T_\mu$. Assuming that the
source density $\rho$ is bounded from above and below on a compact convex set,
we prove that the map $\mu \mapsto T_\mu$ is bi-H{\"o}lder continuous on large
families of probability measures, such as the set of probability measures whose
moment of order $p > d$ is bounded by some constant. These stability estimates
show that the linearized optimal transport metric $W_{2,\rho}(\mu, \nu) =
\|T_\mu - T_\nu\|_{L^2(\rho;\mathbb{R}^d)}$ is bi-H{\"o}lder equivalent to the
2-Wasserstein distance on such sets, justifying its use in applications.
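Spelled out, the bi-H{\"o}lder equivalence has the following schematic form; the lower bound holds with constant one because $(T_\mu, T_\nu)_\#\rho$ is a coupling of $\mu$ and $\nu$, while the constant $C$ and the exponent $\alpha$ come from the paper's estimates and are only indicated symbolically here:

```latex
\[
  W_2(\mu,\nu)
  \;\le\;
  W_{2,\rho}(\mu,\nu) = \| T_\mu - T_\nu \|_{L^2(\rho;\mathbb{R}^d)}
  \;\le\;
  C \, W_2(\mu,\nu)^{\alpha},
  \qquad \alpha \in (0,1].
\]
```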
|
Speedrunning in general means playing a video game fast, i.e., using all means
at one's disposal to achieve a given goal in the least amount of time possible.
To do so, a speedrun must be planned in advance, or routed, as it is referred
to by the community. This paper focuses on discovering challenges and defining
models needed when trying to approach the problem of routing algorithmically.
It provides an overview of relevant speedrunning literature, extracting vital
information and formulating criticism. Important categorizations are pointed
out and a nomenclature is built to support professional discussion. Different
concepts of graph representations are presented and their potential is
discussed with regard to solving the speedrun routing optimization problem.
Visions both for problem modeling as well as solving are presented and assessed
regarding suitability and expected challenges. This results in a vision of
potential solutions and what will be addressed in the future.
|
Many engineering problems involve the optimization of computationally
expensive models for which derivative information is not readily available. The
Bayesian optimization (BO) framework is a particularly promising approach for
solving these problems, which uses Gaussian process (GP) models and an expected
utility function to systematically trade off between exploitation and
exploration of the design space. BO, however, is fundamentally limited by the
black-box model assumption that does not take into account any underlying
problem structure. In this paper, we propose a new algorithm, COBALT, for
constrained grey-box optimization problems that combines multivariate GP models
with a novel constrained expected utility function whose structure can be
exploited by state-of-the-art nonlinear programming solvers. COBALT is compared
to traditional BO on seven test problems including the calibration of a
genome-scale bioreactor model to experimental data. Overall, COBALT shows very
promising performance on both unconstrained and constrained test problems.
|
Neural lexicalized PCFGs (L-PCFGs) have been shown effective in grammar
induction. However, to reduce computational complexity, they make a strong
independence assumption on the generation of the child word and thus bilexical
dependencies are ignored. In this paper, we propose an approach to parameterize
L-PCFGs without making implausible independence assumptions. Our approach
directly models bilexical dependencies and meanwhile reduces both learning and
representation complexities of L-PCFGs. Experimental results on the English WSJ
dataset confirm the effectiveness of our approach in improving both running
speed and unsupervised parsing performance.
|
A common practice in many auctions is to offer bidders an opportunity to
improve their bids, known as a Best and Final Offer (BAFO) stage. This final
bid can depend on new information provided about either the asset or the
competitors. This paper examines the effects of new information regarding
competitors, seeking to determine what information the auctioneer should
provide assuming the set of allowable bids is discrete. The rational strategy
profile that maximizes the revenue of the auctioneer is the one where each
bidder makes the highest possible bid that is lower than his valuation of the
item. This strategy profile is an equilibrium for a large enough number of
bidders, regardless of the information released. We compare the number of
bidders needed for this profile to be an equilibrium under different
information settings. We find that it becomes an equilibrium with fewer bidders
when less additional information is made available to the bidders regarding the
competition. It follows that when the number of bidders is a priori unknown,
there are advantages for the auctioneer in not revealing information.
|
Using isobaric Monte Carlo simulations, we map out the entire phase diagram
of a system of hard cylindrical particles of length $L$ and diameter $D$, using
an improved algorithm to identify the overlap condition between two cylinders.
Both the prolate ($L/D>1$) and the oblate ($L/D<1$) phase diagrams are reported,
with no gap between the two regimes. In the prolate $L/D>1$ case, we find
intermediate nematic \textrm{N} and smectic \textrm{SmA} phases in addition to
a low density isotropic \textrm{I} and a high density crystal \textrm{X} phase,
with \textrm{I-N-SmA} and \textrm{I-SmA-X} triple points. An apparent columnar
phase \textrm{C} is shown to be metastable as in the case of spherocylinders.
In the oblate $L/D<1$ case, we find stable intermediate cubatic \textrm{Cub},
nematic \textrm{N}, and columnar \textrm{C} phases with \textrm{I-N-Cub},
\textrm{N-Cub-C}, and \textrm{I-Cub-C} triple points. Comparison with previous
numerical and analytical studies is discussed. The present study, accounting
for the explicit cylindrical shape, paves the way to more sophisticated models
with important biological applications, such as viruses and nucleosomes.
|
Person re-identification (re-ID) in scenarios with large spatial and
temporal spans has not been fully explored. This is partly because
existing benchmark datasets were mainly collected over limited spatial and
temporal ranges, e.g., using videos recorded over a few days by cameras in a
specific region of a campus. Such limited spatial and temporal ranges make it
hard to simulate the difficulties of person re-ID in real scenarios. In this
work, we contribute a novel Large-scale Spatio-Temporal LaST person re-ID
dataset, including 10,862 identities with more than 228k images. Compared with
existing datasets, LaST presents more challenging and high-diversity re-ID
settings, and significantly larger spatial and temporal ranges. For instance,
each person can appear in different cities or countries, and in various time
slots from daytime to night, and in different seasons from spring to winter. To
the best of our knowledge, LaST is the person re-ID dataset with the largest
spatio-temporal ranges to date. Based on LaST, we verified its difficulty by
conducting a comprehensive performance evaluation of 14 re-ID algorithms. We
further propose an easy-to-implement baseline that works well in this
challenging re-ID setting. We also verified that models pre-trained on LaST can generalize well
on existing datasets with short-term and cloth-changing scenarios. We expect
LaST to inspire future works toward more realistic and challenging re-ID tasks.
More information about the dataset is available at
https://github.com/shuxjweb/last.git.
|
We construct a symmetric invertible binary pairing function $F(m,n)$ on the
set of positive integers with the property $F(m,n)=F(n,m)$. Then we provide a
complete proof of its symmetry and bijectivity, from which the construction of
symmetric invertible binary pairing functions on any custom set of integers
could be seen.
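For intuition, here is a minimal Python sketch of one classical symmetric pairing construction on the positive integers (a triangular-number scheme; the paper's own construction may differ): because $F(m,n)=F(n,m)$, inverting $F$ recovers the unordered pair $\{m,n\}$ as (min, max).

```python
def pair(m, n):
    """Symmetric pairing: map the unordered pair {m, n} of positive
    integers to a unique positive integer via triangular numbers."""
    a, b = min(m, n), max(m, n)
    return b * (b - 1) // 2 + a

def unpair(z):
    """Inverse: recover (min, max) from the code z."""
    b = 1
    while b * (b + 1) // 2 < z:   # find the largest b with b*(b-1)/2 < z
        b += 1
    return z - b * (b - 1) // 2, b

assert pair(3, 5) == pair(5, 3)
assert unpair(pair(3, 5)) == (3, 5)
```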
|
This paper presents OpenCV2X, the first publicly available, open-source
simulation model of the Third Generation Partnership Project (3GPP) Release 14
Cellular Vehicle to Everything (C-V2X) sidelink, which forms the basis for 5G
NR Mode 2 under later releases. This model is fully compliant with the existing
vehicular service and application layers, including messaging sets as defined
by the automotive and standards communities, providing a fully standardised,
cross-layer communication model. Using this model, we show how the current
sidelink scheduling mechanism performs poorly when scheduling applications with
highly aperiodic communication characteristics, such as ETSI Cooperative
Awareness Messages (CAMs). We then provide the first in-depth evaluation of
dedicated per-packet aperiodic scheduling mechanisms, in contrast to schemes
that parameterise the existing algorithm. This paper highlights that the level
of aperiodicity exhibited by the application model greatly impacts scheduling
performance. Finally, we analyse how such scheduling mechanisms might co-exist.
|
The variational quantum eigensolver (VQE) is one of the most representative
quantum algorithms in the noisy intermediate-scale quantum (NISQ) era, and is
generally speculated to deliver one of the first quantum advantages for the
ground-state simulations of some non-trivial Hamiltonians. However, short
quantum coherence time and limited availability of quantum hardware resources
in the NISQ hardware strongly restrain the capacity and expressiveness of VQEs.
In this Letter, we introduce the variational quantum-neural hybrid eigensolver
(VQNHE) in which the shallow-circuit quantum ansatz can be further enhanced by
classical post-processing with neural networks. We show that VQNHE consistently
and significantly outperforms VQE in simulating ground-state energies of
quantum spins and molecules given the same amount of quantum resources. More
importantly, we demonstrate that for arbitrary post-processing neural
functions, VQNHE only incurs a polynomial overhead of processing time and
represents the first scalable method to exponentially accelerate VQE with
non-unitary post-processing that can be efficiently implemented in the NISQ
era.
|
It is good practice to name test methods such that they are comprehensible to
developers; they must be written in such a way that their purpose and
functionality are clear to those who will maintain them. Unfortunately, there
is little automated support for writing or maintaining the names of test
methods. This can lead to inconsistent and low-quality test names and increase
the maintenance cost of supporting these methods. Due to this risk, it is
essential to help developers in maintaining their test method names over time.
In this paper, we use grammar patterns, and how they relate to test method
behavior, to understand test naming practices. This data will be used to
support an automated tool for maintaining test names.
|
This paper proposes a general destriping framework using flatness
constraints, where we can handle various regularization functions in a unified
manner. Removing stripe noise, i.e., destriping, from remote sensing images is
an essential task in terms of visual quality and subsequent processing. Most of
the existing methods are designed by combining a particular image
regularization with a stripe noise characterization that cooperates with the
regularization, which prevents us from examining different regularizations to
adapt to various target images. To resolve this, we formulate the destriping
problem as a convex optimization problem involving a general form of image
regularization and the flatness constraints, a newly introduced stripe noise
characterization. This strong characterization enables us to consistently
capture the nature of stripe noise, regardless of the choice of image
regularization. For solving the optimization problem, we also develop an
efficient algorithm based on a diagonally preconditioned primal-dual splitting
algorithm (DP-PDS), which can automatically adjust the stepsizes. The
effectiveness of our framework is demonstrated through destriping experiments,
where we comprehensively compare combinations of image regularizations and
stripe noise characterizations using hyperspectral images (HSI) and infrared
(IR) videos.
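As a toy illustration of the flatness idea, the following Python sketch assumes the stripe component is constant along each column (i.e., "flat" in the vertical direction); this is a naive moment-matching baseline of our own, not the DP-PDS algorithm developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.sin(np.linspace(0, 3, 64))[:, None] * np.ones((1, 64))  # varies by row only
stripes = rng.normal(0.0, 0.2, size=(1, 64))    # one additive offset per column
noisy = scene + stripes

# Flatness along columns means column means isolate the stripe component
# (up to a global constant) whenever the scene has no column structure.
col_mean = noisy.mean(axis=0, keepdims=True)
estimate = col_mean - col_mean.mean()
destriped = noisy - estimate
print(np.abs(destriped - scene).max())   # small (a global constant offset remains)
```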
|
Biology offers many examples of large-scale, complex, concurrent systems:
many processes take place in parallel, compete on resources and influence each
other's behavior. The scalable modeling of biological systems continues to be a
very active field of research. In this paper we introduce a new approach based
on Event-B, a state-based formal method with refinement as its central
ingredient, allowing us to check for model consistency step-by-step in an
automated way. Our approach based on functions leads to an elegant and concise
modeling method. We demonstrate this approach by constructing what is, to our
knowledge, the largest Event-B model ever built, describing the ErbB signaling
pathway, a key evolutionary pathway with a significant role in development and
in many types of cancer. The Event-B model for the ErbB pathway describes 1320
molecular reactions through 242 events.
|
Computer vision and image processing address many challenging applications.
While the last decade has seen deep neural network architectures
revolutionizing those fields, early methods relied on 'classic', i.e.,
non-learned approaches. In this study, we explore the differences between
classic and deep learning (DL) algorithms to gain new insight regarding which
is more suitable for a given application. The focus is on two challenging
ill-posed problems, namely faint edge detection and multispectral image
registration, studying recent state-of-the-art DL and classic solutions. While
those DL algorithms outperform classic methods in terms of accuracy and
development time, they tend to have higher resource requirements and are unable
to perform outside their training space. Moreover, classic algorithms are more
transparent, which facilitates their adoption for real-life applications. As
both classes of approaches have unique strengths and limitations, the choice of
a solution is clearly application dependent.
|
Recently, flow-based methods have achieved promising success in video frame
interpolation. However, electron microscopic (EM) images suffer from unstable
image quality, low PSNR, and disorderly deformation. Existing flow-based
interpolation methods cannot precisely compute optical flow for EM images since
they predict only a single offset for each position. To overcome these problems, we
propose a novel interpolation framework for EM images that progressively
synthesizes interpolated features in a coarse-to-fine manner. First, we extract
missing intermediate features by the proposed temporal spatial-adaptive (TSA)
interpolation module. The TSA interpolation module aggregates temporal contexts
and then adaptively samples the spatial-related features with the proposed
residual spatial adaptive block. Second, we introduce a stacked deformable
refinement block (SDRB) to further enhance the reconstruction quality, which is
aware of the matching positions and relevant features from input frames with
the feedback mechanism. Experimental results demonstrate the superior
performance of our approach compared to previous works, both quantitatively and
qualitatively.
|
The underlying theme of this paper is to explore the various facets of power
systems data through the lens of graph signal processing (GSP), laying down the
foundations of the Grid-GSP framework. Grid-GSP provides an interpretation for
the spatio-temporal properties of voltage phasor measurements, by showing how
the well-known power systems models support a generative low-pass graph
filter model for the state variables, namely the voltage phasors. Using the
model we formalize the empirical observation that voltage phasor measurement
data lie in a low-dimensional subspace and tie their spatio-temporal structure
to generator voltage dynamics. The Grid-GSP generative model is then
successfully employed to investigate problems pertaining to grid data:
sampling and interpolation, network inference, anomaly detection, and
data compression. Numerical results on a large synthetic grid that mimics the
real grid of the state of Texas (ACTIVSg2000) and on real-world measurements
from ISO-New England verify the efficacy of applying Grid-GSP methods to
electric grid data.
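As a schematic illustration of the generative low-pass graph filter idea (the graph, filter, and signal below are made up for illustration and are not the paper's power-system model), consider:

```python
import numpy as np

# Toy graph: a ring of 6 buses; Laplacian eigenvectors act as graph Fourier
# modes, with small eigenvalues corresponding to smooth ("low") frequencies.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Generative low-pass filter h(L) = (I + a L)^{-1}: white input w yields a
# state x whose graph-Fourier coefficients are damped by 1 / (1 + a*lambda).
a = 5.0
w = np.random.default_rng(0).normal(size=n)
x = np.linalg.solve(np.eye(n) + a * L, w)

lam, U = np.linalg.eigh(L)
print(np.round(np.abs(U.T @ x), 3))   # energy concentrates on low frequencies
```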
|
In this paper, we study both convergence and bounded variation properties of
a new fully discrete conservative Lagrangian--Eulerian scheme to the entropy
solution in the sense of Kruzhkov (scalar case) by using a weak asymptotic
analysis. We discuss theoretical developments on the conception of no-flow
curves for hyperbolic problems within scientific computing. The resulting
algorithms have proven effective for studying nonlinear wave formation and
rarefaction interactions. We present experiments based on the Wasserstein
distance to show the effectiveness of the no-flow curves approach in cases of
shock interaction with an entropy wave for the inviscid Burgers model problem
and for a $2\times 2$ nonlocal traffic flow symmetric system of
Keyfitz--Kranzer type.
|
Recent developments in graph theoretic analysis of complex networks have led
to deeper understanding of brain networks. Many complex networks show similar
macroscopic behaviors despite differences in the microscopic details. Probably
the two most often observed characteristics of complex networks are the
scale-free and small-world properties. In this paper, we explore whether brain
networks exhibit scale-free and small-world behavior, among other
graph-theoretic properties.
|
Motivated by previous works on a Floquet version of the PXP model [Mukherjee
{\it et al.} Phys. Rev. B 102, 075123 (2020), Mukherjee {\it et al.} Phys. Rev.
B 101, 245107 (2020)], we study a one-dimensional spin-$1/2$ lattice model with
three-spin interactions in the same constrained Hilbert space (where all
configurations with two adjacent $S^z=\uparrow$ spins are excluded). We show
that this model possesses an extensive fragmentation of the Hilbert space which
leads to a breakdown of thermalization upon unitary evolution starting from a
large class of simple initial states. Despite the non-integrable nature of the
Hamiltonian, many of its high-energy eigenstates admit a quasiparticle
description. A class of these, which we dub "bubble eigenstates", have
integer eigenvalues (including mid-spectrum zero modes) and strictly localized
quasiparticles while another class contains mobile quasiparticles leading to a
dispersion in momentum space. Other anomalous eigenstates that arise due to a
{\it secondary} fragmentation mechanism, including those that lead to flat
bands in momentum space due to destructive quantum interference, are also
discussed. The consequences of adding a (non-commuting) staggered magnetic
field and a PXP term respectively to this model, where the former preserves the
Hilbert space fragmentation while the latter destroys it, are discussed. A
Floquet version with time-dependent staggered field also evades thermalization
with additional features like freezing of exponentially many states at special
drive frequencies. Finally, we map the model to a $U(1)$ lattice gauge theory
coupled to dynamical fermions and discuss the interpretation of some of these
anomalous states in this language. A class of gauge-invariant states show
reduced mobility of the elementary charged excitations with only certain
charge-neutral objects being mobile suggesting a connection to fractons.
|
The recently discovered selection monad, T x = (x -> r) -> x, provides an
elegant way to find optimal strategies in sequential games. During this
thesis, a library was developed which provides a set of useful functions using
the selection monad to compute optimal games and AIs for sequential games. In
order to explore the selection monad's ability to support these AI
implementations, three example case studies were developed using Haskell: the
two-player game Connect Four, a Sudoku solver, and a simplified version of
Chess. These case studies show how to elegantly implement a game AI.
Furthermore, a performance analysis of these case studies was done, identifying
the major points where performance can be increased.
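The thesis library is written in Haskell; the following Python transliteration sketches the underlying idea (names such as argmax_select and sequence are ours, not the library's API): a selection function picks a move given a way to judge outcomes, and the product of selection functions plays out an optimal sequential game.

```python
def argmax_select(moves):
    """Selection function (x -> r) -> x: pick the move maximizing the judge."""
    return lambda judge: max(moves, key=judge)

def sequence(selections):
    """Product of selection functions: optimal play of a sequential game
    whose list of moves is scored by `judge` (exponential time; a sketch)."""
    def play(judge):
        if not selections:
            return []
        head, tail = selections[0], selections[1:]
        def best_rest(move):
            return sequence(tail)(lambda ms: judge([move] + ms))
        best = head(lambda move: judge([move] + best_rest(move)))
        return [best] + best_rest(best)
    return play

# Two rounds; the outcome is the sum of the chosen moves, both maximizing.
game = sequence([argmax_select([1, 2, 3]), argmax_select([4, 5])])
print(game(sum))   # [3, 5]
```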
|
Amenability is a notion of facial exposedness for convex cones that is
stronger than being facially dual complete (or "nice") which is, in turn,
stronger than merely being facially exposed. Hyperbolicity cones are a family
of algebraically structured closed convex cones that contain all spectrahedra
(linear sections of positive semidefinite cones) as special cases. It is known
that all spectrahedra are amenable. We establish that all hyperbolicity cones
are amenable. As part of the argument, we show that any face of a hyperbolicity
cone is a hyperbolicity cone. As a corollary, we show that the intersection of
two hyperbolicity cones, not necessarily sharing a common relative interior
point, is a hyperbolicity cone.
|
A two-component-two-dimensional coupled with one-component-three-dimensional
(2C2Dcw1C3D) flow may also be called a real Schur flow (RSF), as its velocity
gradient is uniformly of real Schur form, the latter being an intrinsic local
property of any general flow. The thermodynamic and `vortic' fine structures
of RSF are exposed and, in particular, the complete set of equations governing
a (viscous and/or driven) 2C2Dcw1C3D flow are derived. The Lie invariances of
the decomposed vorticity 2-forms of RSFs in $d$-dimensional Euclidean space
$\mathbb{E}^d$ for any integer $d\ge 3$ are also proven, and many
Lie-invariant fine results, such as those for combinations of the entropic
and vortic quantities, including the invariances of the decomposed Ertel
potential vorticity (and their multiplications by any integer powers of
entropy) 3-forms, then follow.
|
This article addresses extraction of physically meaningful information from
STEM EELS and EDX spectrum-images using methods of Multivariate Statistical
Analysis. The problem is interpreted in terms of data distribution in a
multi-dimensional factor space, which allows for a straightforward and
intuitively clear comparison of various approaches. A new computationally
efficient and robust method for finding physically meaningful endmembers in
spectrum-image datasets is presented. The method combines the geometrical
approach of Vertex Component Analysis with the statistical approach of Bayesian
inference. The algorithm is described in detail using the example of EELS
spectrum-imaging of a multi-compound CMOS transistor.
|
Protoplanetary disks are thought to evolve viscously, where the disk mass -
the reservoir available for planet formation - decreases over time as material
is accreted onto the central star. Observations show a correlation between dust
mass and the stellar accretion rate, as expected from viscous theory. However,
the gas mass inferred from 13CO and C18O line fluxes, which should be a more
direct measure, shows no such correlation. Using thermochemical DALI models, we
investigate how 13CO and C18O J=3-2 line fluxes change over time in a viscously
evolving disk. We also investigate if the chemical conversion of CO through
grain-surface chemistry combined with viscous evolution can explain the
observations of disks in Lupus. The 13CO and C18O 3-2 line fluxes increase over
time due to their optically thick emitting regions growing in size as the disk
expands viscously. The C18O 3-2 emission is optically thin throughout the disk
for only a subset of our models (Mdisk (t = 1 Myr) < 1e-3 Msun). For these
disks the integrated C18O flux decreases with time, similar to the disk mass.
The C18O 3-2 fluxes for the bulk of the disks in Lupus (with Mdust < 5e-5 Msun)
can be reproduced to within a factor of ~2 with viscously evolving disks in
which CO is converted into other species through grain-surface chemistry driven
by a cosmic-ray ionization rate zeta_cr ~ 5e-17 - 1e-16 s^-1. However,
explaining the stacked C18O upper limits requires a lower average abundance
than our models can produce, and the models cannot explain the observed 13CO
fluxes, which, for most disks, are more than an order of magnitude fainter
than our models predict. Reconciling the 13CO fluxes of viscously evolving disks
with the observations requires either a combination of efficient vertical
mixing and a high zeta_cr or low mass disks (Mdust < 3e-5 Msun) being much
thinner and/or smaller than their more massive counterparts.
|
Electrospinning has exhibited excellent benefits for treating trauma in
tissue engineering due to the micro/nano fibrous structure it produces. The
fibres can effectively adhere to the tissue surface for long-term continuous therapy. This
paper develops a robotic electrospinning platform for endoluminal therapy. The
platform consists of a continuum manipulator, the electrospinning device, and
the actuation unit. The continuum manipulator has two bending sections to
facilitate the steering of the tip needle for a controllable spinning
direction. Non-circular joint profile is carefully designed to enable a
constant length of the centreline of a continuum manipulator for stable fluid
transmission inside it. Experiments are performed on a bronchus phantom, and
the steering ability and bending limitation in each direction are also
investigated. Endoluminal electrospinning is also demonstrated through
trajectory-following and point-targeting experiments. The effective adhesive
area of the produced fibre is also illustrated. The proposed robotic
electrospinning shows its feasibility for precisely spreading therapeutic
agents to construct fibrous structures for potential endoluminal treatment.
|
Extended Affine (EA) equivalence is the equivalence relation between two
vectorial Boolean functions $F$ and $G$ such that there exist two affine
permutations $A$, $B$, and an affine function $C$ satisfying $G = A \circ F
\circ B + C$. While a priori simple, it is very difficult in practice to test
whether two functions are EA-equivalent. This problem has two variants:
EA-testing deals with figuring out whether the two functions can be
EA-equivalent, and EA-recovery is about recovering the tuple $(A,B,C)$ if it
exists.
In this paper, we present a new algorithm that efficiently solves
the EA-recovery problem for quadratic functions. Though its worst-case
complexity is obtained when dealing with APN functions, it supersedes all
previously known algorithms in terms of performance, even in this case. This
approach is based on the Jacobian matrix of the functions, a tool whose study
in this context can be of independent interest.
In order to tackle EA-testing efficiently, the best approach in practice
relies on class invariants. We provide an overview of the literature on said
invariants along with a new one based on the \emph{ortho-derivative} which is
applicable to quadratic APN functions, a specific type of functions that is of
great interest, and of which tens of thousands need to be sorted into distinct
EA-classes. Our ortho-derivative-based invariant is both very fast to compute,
and highly discriminating.
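For intuition, an EA-test can be done by exhaustive search when the dimension is tiny (this brute force is only feasible for very small $n$ and is emphatically not the Jacobian-based algorithm of the paper; all names here are ours):

```python
from itertools import product

# Functions F, G : F_2^2 -> F_2^2 given as lookup tables over {0,1,2,3}.
def apply_lin(M, x):
    """Multiply the 2x2 bit matrix M with the 2-bit integer x over F_2."""
    b0, b1 = (x >> 1) & 1, x & 1
    y0 = (M[0][0] & b0) ^ (M[0][1] & b1)
    y1 = (M[1][0] & b0) ^ (M[1][1] & b1)
    return (y0 << 1) | y1

MATS = list(product(product((0, 1), repeat=2), repeat=2))
INVERTIBLE = [M for M in MATS if (M[0][0] & M[1][1]) ^ (M[0][1] & M[1][0])]

def ea_equivalent(F, G):
    """Search all affine permutations A, B and affine functions C for
    G = A o F o B + C (addition is XOR); return None if no tuple works."""
    for LA, ca, LB, cb, LC, cc in product(INVERTIBLE, range(4),
                                          INVERTIBLE, range(4),
                                          MATS, range(4)):
        if all(G[x] == apply_lin(LA, F[apply_lin(LB, x) ^ cb]) ^ ca
                        ^ apply_lin(LC, x) ^ cc for x in range(4)):
            return LA, ca, LB, cb, LC, cc
    return None

F = [0, 1, 2, 3]                         # identity
G = [3, 2, 1, 0]                         # F translated by 3
print(ea_equivalent(F, G) is not None)   # True
```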
|
Cosmic shear estimation is an essential scientific goal for large galaxy
surveys. It refers to the coherent distortion of distant galaxy images due to
weak gravitational lensing along the line of sight. It can be used as a tracer
of the matter distribution in the Universe. The unbiased estimation of the
local value of the cosmic shear can be obtained via Bayesian analysis which
relies on robust estimation of the galaxies' ellipticity (shape) posterior
distribution. This is not a simple problem as, among other things, the images
may be corrupted with strong background noise. For current and coming surveys,
another central issue in galaxy shape determination is the treatment of
statistically dominant overlapping (blended) objects. We propose a Bayesian
Convolutional Neural Network based on Monte-Carlo Dropout to reliably estimate
the ellipticity of galaxies and the corresponding measurement uncertainties. We
show that while a convolutional network can be trained to correctly estimate
well-calibrated aleatoric uncertainty (the uncertainty due to the presence
of noise in the images), it is unable to generate a trustworthy ellipticity
distribution when exposed to previously unseen data (i.e. here, blended
scenes). By introducing a Bayesian Neural Network, we show how to reliably
estimate the posterior predictive distribution of ellipticities along with
robust estimation of epistemic uncertainties. Experiments also show that
epistemic uncertainty can detect inconsistent predictions due to unknown
blended scenes.
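A generic Monte-Carlo Dropout recipe looks as follows (a minimal sketch with made-up layer sizes, not the authors' architecture): dropout stays active at inference time, the network is sampled repeatedly, and the spread of the samples serves as an uncertainty estimate.

```python
import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    def __init__(self, n_in=16, n_out=2):   # e.g. two ellipticity components
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 64), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(64, n_out),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()                 # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)   # predictive mean and spread

model = DropoutRegressor()
features = torch.randn(8, 16)     # stand-in for image-derived features
mean, std = mc_dropout_predict(model, features)
print(mean.shape, std.shape)      # torch.Size([8, 2]) twice
```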
|
Binary stars are abundant in nearby galaxies, but are typically unaccounted
for in simulations of the high redshift Universe. Stellar population synthesis
models that include the effects of binary evolution result in greater relative
abundances of ionizing photons that could significantly affect the ambient
ionizing background during the epoch of hydrogen reionization, additionally
leading to differences in galaxy gas content and star formation. We use
hydrodynamic cosmological simulations including in situ multifrequency
radiative transfer to evaluate the effects of a high binary fraction in
reionization-era galaxies on traits of the early intergalactic medium and the
abundance of H I and He II ionizing photons. We further extend this to analyze
the traits of enriched gas. In comparing metrics generated using a fiducial
simulation assuming single stars with one incorporating a high binary fraction,
we find that binary stars cause H I reionization to complete earlier and at an
accelerated pace, while also increasing the abundances of high-ionization
metals (C IV and Si IV) in simulated absorption spectra while reducing the
abundance of low-ionization states (O I, Si II, and C II). However, through
increased photoheating of galactic and circumgalactic gas, they simultaneously
reduce the rate of star formation in low-mass galaxies, slowing the ongoing
process of enrichment and suppressing their own ionizing background. This
potentially contributes to a slower He II reionization process at $z\geq5$, and
further indicates that self-regulation of galaxies could be underestimated when
neglecting binary stellar evolution.
|
We present a concept for a machine-learning classification of hard X-ray
(HXR) emissions from solar flares observed by the Reuven Ramaty High Energy
Solar Spectroscopic Imager (RHESSI), identifying flares that are either
occulted by the solar limb or located on the solar disk. Although HXR
observations of occulted flares are important for particle-acceleration
studies, HXR data analyses for past observations were time consuming and
required specialized expertise. Machine-learning techniques are promising for
this situation, and we constructed a sample model to demonstrate the concept
using a deep-learning technique. Input data to the model are HXR spectrograms
that are easily produced from RHESSI data. The model can detect occulted flares
without the need for image reconstruction or visual inspection by experts.
A technique of convolutional neural networks was used in this model by
regarding the input data as images. Our model achieved a classification
accuracy better than 90%, and we successfully demonstrated the applicability
of the method to both event screening and event alerts for occulted flares.
|
The mathematical similarities between non-relativistic wavefunction
propagation in quantum mechanics and image propagation in scalar diffraction
theory are used to develop a novel understanding of time and paths through
spacetime as a whole. It is well known that Feynman's original derivation of
the path integral formulation of non-relativistic quantum mechanics uses
time-slicing to calculate amplitudes as sums over all possible paths through
space, but along a definite curve through time. Here, a 3+1D spacetime wave
distribution and its 4-momentum dual are formally developed which have no
external time parameter and therefore cannot change or evolve in the usual
sense. Time is thus seen "from the outside". A given 3+1D momentum
representation of a system encodes complete dynamical information, describing
the system's spacetime behavior as a whole. A comparison is made to the
mathematics of holograms, and properties of motion for simple systems are
derived.
|
Existing open-source modeling frameworks dedicated to energy systems
optimization typically utilize (mixed-integer) linear programming ((MI)LP)
formulations, which lack modeling freedom for technical system design and
operation. We present COMANDO, an open-source Python package for
component-oriented modeling and optimization for nonlinear design and operation
of integrated energy systems. COMANDO allows to assemble system models from
component models including nonlinear, dynamic and discrete characteristics.
Based on a single system model, different deterministic and stochastic problem
formulations can be obtained by varying objective function and underlying data,
and by applying automatic or manual reformulations. The flexible open-source
implementation allows for the integration of customized routines required to
solve challenging problems, e.g., initialization, problem decomposition, or
sequential solution strategies. We demonstrate features of COMANDO via case
studies, including automated linearization, dynamic optimization, stochastic
programming, and the use of nonlinear artificial neural networks as surrogate
models in a reduced-space formulation for deterministic global optimization.
|
The emerging large-scale and data-hungry algorithms require the computations
to be delegated from a central server to several worker nodes. One major
challenge in the distributed computations is to tackle delays and failures
caused by stragglers. To address this challenge, introducing an efficient
amount of redundant computation via distributed coded computation has received
significant attention. Recent approaches in this area have mainly focused on
introducing minimum computational redundancy to tolerate a certain number of
stragglers. To the best of our knowledge, the current literature lacks a
unified end-to-end design in a heterogeneous setting where the workers can vary
in their computation and communication capabilities. The contribution of this
paper is to devise a novel framework for joint scheduling-coding, in a setting
where the workers and the arrival of stream computational jobs are based on
stochastic models. In our initial joint scheme, we propose a systematic
framework that illustrates how to select a set of workers and how to split the
computational load among the selected workers based on their differences in
order to minimize the average in-order job execution delay. Through
simulations, we demonstrate that the performance of our framework is
dramatically better than the performance of the naive method that splits the
computational load uniformly among the workers, and it is close to the ideal
performance.
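For intuition, here is a classic MDS-coded computation sketch in Python (a textbook construction with hypothetical sizes, not the joint scheduling-coding framework of the paper): the job A @ x is split into k blocks, encoded into n coded tasks, and any k completed tasks suffice to recover the result, so up to n - k stragglers are tolerated.

```python
import numpy as np

def encode(blocks, n):
    """Encode k matrix blocks into n coded blocks with a Vandermonde
    generator; any k x k submatrix of G is invertible."""
    k = len(blocks)
    G = np.vander(np.arange(1.0, n + 1), k, increasing=True)
    return [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)], G

def decode(results, worker_ids, G):
    """Recover the k uncoded results from any k finished workers."""
    inv = np.linalg.inv(G[worker_ids, :])
    return [sum(inv[j, t] * results[t] for t in range(len(results)))
            for j in range(len(results))]

rng = np.random.default_rng(1)
A, x = rng.normal(size=(4, 3)), rng.normal(size=3)
coded, G = encode(np.split(A, 2), n=4)        # k = 2 splits, n = 4 workers
partial = [c @ x for c in coded]              # each worker's coded product
finished = [0, 3]                             # two workers straggle
recovered = decode([partial[i] for i in finished], finished, G)
assert np.allclose(np.concatenate(recovered), A @ x)
```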
|
We study the realisation of higher-form symmetries in the holographic dual of
gauge theories coupled to probe matter in the fundamental. We particularly
focus on the dual of U(N) gauge theory coupled to fundamental matter. We
demonstrate the existence of a continuous 1-form symmetry associated with the
conservation of magnetic flux and show that this symmetry is spontaneously
broken in the IR when the flavour degrees of freedom are gapped. We numerically
compute the spectral function of the 2-form current and demonstrate the
existence of the associated Goldstone mode. We compare to expectations at
weak coupling.
|
We consider the motion of an electron in an atom subjected to a strong
linearly polarized laser field. We identify the invariant structures organizing
a very specific subset of trajectories, namely recollisions. Recollisions are
trajectories which first escape the ionic core (i.e., ionize) and later return
to this ionic core, for instance, to transfer the energy gained during the
large excursion away from the core to bound electrons. We consider the role
played by the directions transverse to the polarization direction in the
recollision process. We compute the family of two-dimensional invariant tori
associated with a specific hyperbolic-elliptic periodic orbit and their stable
and unstable manifolds. We show that these manifolds organize recollisions in
phase space.
|
The Operating Room Scheduling (ORS) problem is the task of assigning patients
to operating rooms, taking into account different specialties, lengths and
priority scores of each planned surgery, operating room session durations, and
the availability of beds for the entire length of stay both in the Intensive
Care Unit and in the wards. A proper solution to the ORS problem is of primary
importance for the healthcare service quality and the satisfaction of patients
in hospital environments. In this paper we first present a solution to the
problem based on Answer Set Programming (ASP). The solution is tested on
benchmarks with realistic sizes and parameters, on three scenarios for the
target length on 5-day scheduling, common in small-medium sized hospitals, and
results show that ASP is a suitable solving methodology for the ORS problem in
such setting. Then, we also performed a scalability analysis on the schedule
length up to 15 days, which still shows the suitability of our solution also on
longer plan horizons. Moreover, we also present an ASP solution for the
rescheduling problem, i.e., when the off-line schedule cannot be completed for
some reason. Finally, we introduce a web framework for managing ORS problems
via ASP that allows a user to insert the main parameters of the problem, solve
a specific instance, and show results graphically in real-time. Under
consideration in Theory and Practice of Logic Programming (TPLP).
|
We report on the first results of the Noble and Alkali Spin Detectors for
Ultralight Coherent darK matter (NASDUCK) collaboration. We search for the
interactions of Axion-Like Particles (ALPs) with atomic spins using an
earth-based precision quantum detector as it traverses through the galactic
dark matter halo. The detector is composed of spin-polarized xenon gas which
can coherently interact with a background ALP dark matter field and an in-situ
rubidium Floquet optical-magnetometer. Conducting a five-month-long search, we
derive new constraints on ALP-proton and ALP-neutron interactions in the
$4\times 10^{-15}-4\times 10^{-12}{~\rm eV/c^2}$ mass range. Our limits on the
ALP-proton (ALP-neutron) couplings improve upon previous terrestrial bounds by
up to 3 orders of magnitude for masses above $4\times 10^{-14}{~\rm eV/c^2}$
($4\times 10^{-13}{~\rm eV/c^2}$). Moreover, barring the uncertain supernova
constraints, the ALP-proton bound improves on all existing terrestrial and
astrophysical limits, partially closing the unexplored region for couplings in
the range $10^{-6}~{\rm GeV^{-1}}$ to $2\times 10^{-5}~{\rm GeV^{-1}}$.
Finally, we also cast bounds on pseudo-scalar dark matter models in which dark
matter is quadratically-coupled to the nucleons.
|
In this paper, we investigate properties of orbits of Hermann actions as
submanifolds without assuming the commutability of involutions which define
Hermann actions. In particular, we compute the second fundamental form of
orbits of Hermann actions, and give a sufficient condition for orbits of
Hermann actions to be weakly reflective (resp. arid) submanifolds.
|
In this paper, we explore the impact of a galactic bar on the inspiral
time-scale of a massive perturber (MP) within a Milky Way-like galaxy. We
integrate the orbit of MPs in a multi-component galaxy model via a
semi-analytical approach including an accurate treatment for dynamical friction
generalized to rotationally supported backgrounds. We compare the MP evolution
in a galaxy featuring a Milky Way-like rotating bar to the evolution within an
analogous axisymmetric galaxy without the bar. We find that the bar presence
may significantly affect the inspiral, sometimes making it shorter by a factor
of a few, sometimes hindering it for a Hubble time, implying that dynamical
friction alone is greatly insufficient to fully characterize the orbital decay.
The effect of the bar is more prominent for initially in-plane, prograde MPs,
especially those crossing the bar co-rotation radius or outer Lindblad
resonance. In the barred galaxy, we find the sinking of the most massive MPs
(>~10^7.5 Msun) approaching the galaxy from large separations (>~8 kpc) to be
most efficiently hampered. Neglecting the effect of global torques associated
with the non-symmetric mass distribution is thus not advisable even within our
idealized, smooth Milky Way model, and it should be avoided when dealing with
more complex and realistic galaxy systems. This has important implications for
the orbital decay of massive black holes in late-type spirals, the natural
candidate sources to be detected with the Laser Interferometer Space Antenna
(LISA).
|
We present a combined theoretical and experimental study of X-ray optical
wave mixing. This class of nonlinear phenomena combines the strengths of
spectroscopic techniques from the optical domain, with the high-resolution
capabilities of X-rays. In particular, the spectroscopic sensitivity of these
phenomena can be exploited to selectively probe valence dynamics. Specifically,
we focus on the effect of X-ray parametric down-conversion. We present a
theoretical description of the process, from which we deduce the observable
nonlinear response of valence charges. Subsequently, we simulate scattering
patterns for realistic conditions and identify characteristic signatures of the
nonlinear conversion. For the observation of this signature, we present a
dedicated experimental setup and results of a detailed investigation. However,
we do not find evidence of the nonlinear effect. This finding stands in strong
contradiction to previous claims of proof-of-principle demonstrations.
Nevertheless, we are optimistic that related X-ray optical wave mixing
processes can be employed, on the basis of the methods presented here, to
probe valence dynamics in the future.
|
Estimation of prevalence of undocumented SARS-CoV-2 infections is critical
for understanding the overall impact of the Covid-19 disease. In fact,
unveiling uncounted cases has fundamental implications for public policy
intervention strategies. In the present work, we show a basic yet effective
approach to estimate the actual number of people infected by SARS-CoV-2, by
using epidemiological raw data reported by official health institutions in the
largest EU countries and USA.
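A common back-of-envelope version of such an estimate (all numbers below are hypothetical, and the paper's actual estimator may differ) scales reported deaths by an assumed infection-fatality rate:

```python
# Illustrative only: estimate total infections from deaths and an assumed
# infection-fatality rate (IFR), then compare with confirmed case counts.
deaths = 30_000          # hypothetical cumulative deaths in a country
ifr = 0.007              # assumed infection-fatality rate (0.7%)
confirmed = 1_500_000    # hypothetical confirmed cases

true_infections = deaths / ifr
print(f"estimated infections: {true_infections:,.0f}")
print(f"undocumented fraction: {1 - confirmed / true_infections:.1%}")
```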
|
We discuss the rejection-free event-chain Monte-Carlo algorithm and several
applications to dense soft matter systems. Event-chain Monte-Carlo is an
alternative to standard local Markov-chain Monte-Carlo schemes, which are based
on detailed balance, for example the well-known Metropolis-Hastings algorithm.
Event-chain Monte-Carlo is a Markov chain Monte-Carlo scheme that uses
so-called lifting moves to achieve global balance without rejections (maximal
global balance). It was originally developed for hard sphere systems but
is applicable to many soft matter systems and particularly suited for dense
soft matter systems with hard core interactions, where it gives significant
performance gains compared to a local Monte-Carlo simulation. The algorithm can
be generalized to deal with soft interactions and with three-particle
interactions, as they naturally arise, for example, in bead-spring models of
polymers with bending rigidity. We present results for polymer melts, where the
event-chain algorithm can be used for an efficient initialization. We then move
on to large systems of semiflexible polymers that form bundles by attractive
interactions and can serve as model systems for actin filaments in the
cytoskeleton. The event chain algorithm shows that these systems form networks
of bundles which coarsen similar to a foam. Finally, we present results on
liquid crystal systems, where the event-chain algorithm can equilibrate large
systems containing additional colloidal disks very efficiently, which reveals
the parallel chaining of disks.
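To make the lifting idea concrete, here is a minimal event-chain sketch for hard rods on a one-dimensional ring (our own toy setup, not the production algorithm used for the polymer and liquid-crystal systems above): the moving rod advances until it collides, and the motion is then lifted to the neighbour until the chain displacement is exhausted, with no rejections.

```python
import numpy as np

def event_chain_move(x, box, sigma, ell, rng):
    """One event-chain move for N hard rods of length sigma on a ring of
    length box; x holds rod positions in increasing order along the ring."""
    n = len(x)
    i = int(rng.integers(n))
    remaining = ell
    while remaining > 0.0:
        j = (i + 1) % n
        gap = (x[j] - x[i] - sigma) % box     # free space ahead of rod i
        step = min(gap, remaining)
        x[i] = (x[i] + step) % box
        remaining -= step
        i = j                                  # lifting: neighbour moves next
    return x

rng = np.random.default_rng(0)
n, box, sigma = 10, 20.0, 1.0
x = np.arange(n) * (box / n)                  # equally spaced initial state
for _ in range(1000):
    x = event_chain_move(x, box, sigma, ell=2.5, rng=rng)
```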
|
Advanced building control methods such as model predictive control (MPC)
offer significant potential benefits to both consumers and grid operators, but
the high computational requirements have acted as barriers to more widespread
adoption. Local control computation requires installation of expensive
computational hardware, while cloud computing introduces data security and
privacy concerns. In this paper, we drastically reduce the local computational
requirements of advanced building control through a reinforcement learning
(RL)-based approach called Behavioral Cloning, which represents the MPC policy
as a neural network that can be locally implemented and quickly computed on a
low-cost programmable logic controller. While previous RL and approximate MPC
methods must be specifically trained for each building, our key improvement is
that our controller can generalize to many buildings, electricity rates, and
thermostat setpoint schedules without additional, effort-intensive retraining.
To provide this versatility, we have adapted the traditional Behavioral Cloning
approach through (1) a constraint-informed parameter grouping (CIPG) method
that provides a more efficient representation of the training data; (2) an
MPC-Guided training data generation method using the DAgger algorithm that
improves stability and constraint satisfaction; and (3) a new deep learning
model structure called reverse-time recurrent neural networks (RT-RNN) that
allow future information to flow backward in time to more effectively
interpret the temporal information in disturbance predictions. The result is an
easy-to-deploy, generalized behavioral clone of MPC that can be implemented on
a programmable logic controller and requires little building-specific
controller tuning, reducing the effort and costs associated with implementing
smart residential heat pump control.
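A minimal sketch of the reverse-time idea (layer sizes and names such as ReverseTimeRNN are ours; the paper's exact architecture may differ): the disturbance forecast is processed from the end of the horizon backward, so the hidden state handed to the control head already "knows" the future.

```python
import torch
import torch.nn as nn

class ReverseTimeRNN(nn.Module):
    """Reverse-time RNN: read the disturbance forecast from the last step
    back to the first so future information flows backward in time."""
    def __init__(self, n_dist, n_state, n_hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_dist, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden + n_state, 1)  # e.g. a setpoint command

    def forward(self, state, forecast):
        # forecast: (batch, horizon, n_dist); flip the time axis
        out, _ = self.rnn(torch.flip(forecast, dims=[1]))
        summary = out[:, -1]      # hidden state after seeing the whole future
        return self.head(torch.cat([summary, state], dim=-1))

model = ReverseTimeRNN(n_dist=3, n_state=5)
u = model(torch.randn(4, 5), torch.randn(4, 24, 3))   # 24-step forecast
print(u.shape)   # torch.Size([4, 1])
```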
|
Let $\mathcal{G}^{(\lambda)}$ be a group scheme which deforms $\mathbb{G}_a$
to $\mathbb{G}_m$. We explicitly describe the Cartier dual of the $l$-th
Frobenius type kernel $N_l$ of the group scheme $\mathcal{E}^{(\lambda,\mu;D)}$
which is an extension of $\mathcal{G}^{(\lambda)}$ by $\mathcal{G}^{(\mu)}$.
Here we assume that the base ring $A$ is a $\mathbb{Z}_{(p)}$-algebra
containing some nilpotent elements. The obtained result generalizes a previous
result by N. Aki and M. Amano (Tsukuba J.Math. $\textbf{34}$ (2010)) which
assumes that $A$ is an $\mathbb{F}_p$-algebra.
|
The competition of depletion attractions and longer-ranged repulsions between
colloidal particles in colloid-polymer mixtures leads to the formation of
heterogeneous gel-like structures. For instance, gel networks, i.e., states
where the colloids arrange in thin strands that span the whole system occur at
low packing fractions for attractions that are stronger than those at the
binodal line of equilibrium liquid-fluid phase separation. By using Brownian
dynamics simulations we explore the formation, structure, and ageing dynamics
of gel networks. We determine reduced networks that focus on the essential
connections in a gel network. We compare the observed properties to those of
bulky gels or cluster fluids. Our results demonstrate that both the structure
as well as the (often slow) dynamics of the stable or meta-stable heterogeneous
states in colloid-polymer mixtures possess distinct features on various length
and time scales and are thus richly diverse.
|
Recent studies revealed that the electric multipole moments of insulators
result in fractional electric charges localized to the hinges and corners of
the sample. We here explore the magnetic analog of this relation. We show that
a collinear antiferromagnet with spin $S$ defined on a $d$-dimensional cubic
lattice features fractionally quantized magnetization $M_{\text{c}}^z=S/2^d$ at
the corners. We find that the quantization is robust even in the presence of
gapless excitations originating from the spontaneous formation of the N\'eel
order, although the localization length diverges, suggesting a power-law
localization of the corner magnetization. When the spin rotational symmetry
about the $z$ axis is explicitly broken, the corner magnetization is no longer
sharply quantized. Even in this case, we numerically find that the deviation
from the quantized value is negligibly small based on quantum Monte Carlo
simulations.
|
Video streaming platforms such as Youtube, Twitch, and DLive allow users to
live-stream video content for viewers who can optionally express their
appreciation through monetary donations. DLive is one of the smaller and
lesser-known streaming platforms, and historically has had fewer content
moderation practices. It has thus become a popular place for violent extremists
and other clandestine groups to earn money and propagandize. What is the
financial structure of the DLive streaming ecosystem and how much money is
changing hands? In the past it has been difficult to understand how far-right
extremists fundraise via podcasts and video streams because of the secretive
nature of the activity and because of the difficulty of getting data from
social media platforms. This paper describes a novel experiment to collect and
analyze data from DLive's publicly available ledgers of transactions in order
to understand the financial structure of the clandestine, extreme far-right
video streaming community. The main findings of this paper are, first, that the
majority of donors are using micropayments in varying frequencies, but a small
handful of donors spend large amounts of money to finance their favorite
streamers. Next, the timing of donations to high-profile far-right streamers
follows a fairly predictable pattern that is closely tied to a broadcast
schedule. Finally, the far-right video streaming financial landscape is divided
into separate cliques which exhibit very little crossover in terms of sizable
donations. This work will be important to technology companies, policymakers,
and researchers who are trying to understand how niche social media services,
including video platforms, are being exploited by extremists to propagandize
and fundraise.
|
Charts often contain visually prominent features that draw attention to
aspects of the data and include text captions that emphasize aspects of the
data. Through a crowdsourced study, we explore how readers gather takeaways
when considering charts and captions together. We first ask participants to
mark visually prominent regions in a set of line charts. We then generate text
captions based on the prominent features and ask participants to report their
takeaways after observing chart-caption pairs. We find that when both the chart
and caption describe a high-prominence feature, readers treat the doubly
emphasized high-prominence feature as the takeaway; when the caption describes
a low-prominence chart feature, readers rely on the chart and report a
higher-prominence feature as the takeaway. We also find that external
information that provides context helps further convey the caption's message
to the reader. We use these findings to provide guidelines for authoring
effective chart-caption pairs.
|
In the sorted range selection problem, the aim is to preprocess a given array
A[1:n] so as to answer queries of the following type: given two indices i, j
($1 \le i \le j \le n$) and an integer k, report the k smallest elements of
the sub-array A[i:j] in sorted order. Brodal et al. [2] have shown that the
problem can be solved in O(k) time after O(n log n) preprocessing in linear
space. In this paper we discuss another tradeoff: we reduce the preprocessing
time to O(n), while the query time becomes O(k log k), again using linear
space. Our method is very simple.
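For reference, the query semantics can be stated in a few lines of Python (this naive version costs O((j - i) log k) per query and does no preprocessing; it is the specification, not the O(k log k) structure of the paper):

```python
import heapq

def sorted_range_select(A, i, j, k):
    """k smallest elements of A[i..j] (1-indexed, inclusive), sorted."""
    return heapq.nsmallest(k, A[i - 1:j])   # nsmallest returns sorted output

A = [5, 1, 4, 2, 8, 3]
print(sorted_range_select(A, 2, 5, 2))      # [1, 2]
```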
|
Let n respondents rank order d items, and suppose that d << n. Our main task
is to uncover and display the structure of the observed rank data by an
exploratory riffle shuffling procedure which sequentially decomposes the n
voters into a finite number of coherent groups plus a noisy group, where the
noisy group represents the outlier voters and each coherent group is composed
of a finite number of coherent clusters. We consider exploratory riffle
shuffling of a set of items to be equivalent to optimal two blocks seriation of
the items with crossing of some scores between the two blocks. A riffle
shuffled coherent cluster of voters within its coherent group is essentially
characterized by the following facts: a) Voters have identical first TCA
factor score, where TCA designates taxicab correspondence analysis, an L1
variant of correspondence analysis; b) Any preference is easily interpreted as
riffle shuffling of its items ; c) The nature of different riffle shuffling of
items can be seen in the structure of the contingency table of the first-order
marginals constructed from the Borda scorings of the voters ; d) The first TCA
factor scores of the items of a coherent cluster are interpreted as Borda scale
of the items. We also introduce a crossing index, which measures the extent of
crossing of scores of voters between the two blocks seriation of the items. The
novel approach is explained on the benchmark SUSHI data set, where we show
that this data set has a very simple structure, which can also be communicated
in a tabular form.
|
This paper proposes a novel machine-learning approach for predicting AC-OPF
solutions that features fast and scalable training. It is motivated by
two critical considerations: (1) the fact that topology optimization and the
stochasticity induced by renewable energy sources may lead to fundamentally
different AC-OPF instances; and (2) the significant training time needed by
existing machine-learning approaches for predicting AC-OPF. The proposed
approach is a 2-stage methodology that exploits a spatial decomposition of the
power network that is viewed as a set of regions. The first stage learns to
predict the flows and voltages on the buses and lines coupling the regions, and
the second stage trains, in parallel, the machine-learning models for each
region. Experimental results on the French transmission system (up to 6,700
buses and 9,000 lines) demonstrate the potential of the approach. Within a
short training time, the approach predicts AC-OPF solutions with very high
fidelity and minor constraint violations, producing significant improvements
over the state-of-the-art. The results also show that the predictions can seed
a load flow optimization to return a feasible solution within 0.03% of the
AC-OPF objective, while reducing running times significantly.
|
With increasing automation in passenger vehicles, the study of safe and
smooth occupant-vehicle interaction and control transitions is key. In this
study, we focus on the development of contextual, semantically meaningful
representations of the driver state, which can then be used to determine the
appropriate timing and conditions for transfer of control between driver and
vehicle. To this end, we conduct a large-scale real-world controlled data study
where participants are instructed to take-over control from an autonomous agent
under different driving conditions while engaged in a variety of distracting
activities. These take-over events are captured using multiple driver-facing
cameras, which when labelled result in a dataset of control transitions and
their corresponding take-over times (TOTs). We then develop and train TOT
models that operate sequentially on mid to high-level features produced by
computer vision algorithms operating on different driver-facing camera views.
The proposed TOT model produces continuous predictions of take-over times
without delay, and shows promising qualitative and quantitative results in
complex real-world scenarios.
|
In this paper, we improve speech translation (ST) through effectively
leveraging large quantities of unlabeled speech and text data in different and
complementary ways. We explore both pretraining and self-training by using the
large Libri-Light speech audio corpus and language modeling with CommonCrawl.
Our experiments improve over the previous state of the art by 2.6 BLEU on
average on all four considered CoVoST 2 language pairs via a simple recipe of
combining wav2vec 2.0 pretraining, a single iteration of self-training and
decoding with a language model. Unlike existing work, our approach does not
leverage any supervision other than ST data. Code and models will be
publicly released.
|
Machine learning systems are often trained using data collected from
historical decisions. If past decisions were biased, then automated systems
that learn from historical data will also be biased. We propose a black-box
approach to identify and remove biased training data. Machine learning models
trained on such debiased data (a subset of the original training data) have low
individual discrimination, often 0%. These models also have greater accuracy
and lower statistical disparity than models trained on the full historical
data. We evaluated our methodology in experiments using six real-world datasets.
Our approach outperformed seven previous approaches in terms of individual
discrimination and accuracy.
|
Convolutional neural networks (CNNs) learn to extract representations of
complex features, such as object shapes and textures to solve image recognition
tasks. Recent work indicates that CNNs trained on ImageNet are biased towards
features that encode textures and that these alone are sufficient to generalize
to unseen test data from the same distribution as the training data but often
fail to generalize to out-of-distribution data. It has been shown that
augmenting the training data with different image styles decreases this texture
bias in favor of increased shape bias while at the same time improving
robustness to common corruptions, such as noise and blur. Commonly, this is
interpreted as shape bias increasing corruption robustness. However, this
relationship is only hypothesized. We perform a systematic study of different
ways of composing inputs based on natural images, explicit edge information,
and stylization. While stylization is essential for achieving high corruption
robustness, we do not find a clear correlation between shape bias and
robustness. We conclude that the data augmentation caused by style variation
accounts for the improved corruption robustness, and that the increased shape
bias is only a byproduct.
|
We investigate the photon pumping effect in a topological model consisting of
a periodically driven spin-1/2 coupled to a quantum cavity mode out of the
adiabatic limit. In the strong-drive adiabatic limit, a quantized frequency
conversion of photons is expected as the temporal analog of the Hall current.
We numerically establish a novel photon pumping phenomenon in the
experimentally accessible nonadiabatic driving regime for a broad region of the
parameter space. The photon frequency conversion efficiency exhibits strong
fluctuations and a high efficiency that can reach up to 80% of the quantized value
for commensurate frequency combinations. We link the pumping properties to the
delocalization of the corresponding Floquet states which display multifractal
behavior as the result of hybridization between localized and delocalized
sectors. Finally we demonstrate that the quantum coherence properties of the
initial state are preserved during the frequency conversion process in both the
strong- and ultra-weak-drive limits.
|
We study the optimal investment policy of a firm facing both technological
and cash-flow uncertainty. At any point in time, the firm can decide to invest
in a stand-alone technology or to wait for a technological breakthrough.
Breakthroughs occur when market conditions become favorable enough, exceeding a
certain threshold value that is ex-ante unknown to the firm. A microfoundation
for this assumption is that a breakthrough occurs when the share of the surplus
from the new technology accruing to its developer is high enough to cover her
privately observed cost. We show that the relevant Markov state variables for
the firm's optimal investment policy are the current market conditions and
their current historic maximum, and that the firm optimally invests in the
stand-alone technology only when market conditions deteriorate enough after
reaching a maximum. Empirically, investments in new technologies requiring the
active cooperation of developers should thus take place in booms, whereas
investments in state-of-the-art technologies should take place in busts.
Moreover, the required return for investing in the stand-alone technology is
always higher than if this were the only available technology and can take
arbitrarily large values following certain histories. Finally, a decrease in
development costs, or an increase in the value of the new technology, makes the
firm more prone to bear downside risk and to delay investment in the
stand-alone technology.
|
In this paper, we present the XMUSPEECH system for Task 1 of the 2020
Personalized Voice Trigger Challenge (PVTC2020). Task 1 is joint wake-up word
detection and speaker verification on close-talking data. The whole system
consists of a keyword spotting (KWS) sub-system and a speaker verification (SV)
sub-system. For the KWS system, we applied a Temporal Depthwise Separable
Convolution Residual Network (TDSC-ResNet) to improve the system's performance.
For the SV system, we proposed a multi-task learning network, where the
phonetic branch is trained with the character labels of the utterance and the
speaker branch is trained with the speaker labels. The phonetic branch is
optimized with the connectionist temporal classification (CTC) loss, which is
treated as an auxiliary module for the speaker branch. Experiments show that
our system achieves significant improvements over the baseline system.
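A minimal PyTorch sketch of this kind of multi-task objective (the encoder, layer sizes, and the auxiliary weight `alpha` are assumptions, not the authors' exact configuration):

```python
# Sketch: a shared encoder feeds a per-frame phonetic head trained with
# an auxiliary CTC loss and a pooled speaker head trained with
# cross-entropy over speaker labels.
import torch
import torch.nn as nn

class MultiTaskSV(nn.Module):
    def __init__(self, n_mels=80, hidden=256, n_chars=40, n_speakers=1000):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=2, batch_first=True)
        self.phonetic_head = nn.Linear(hidden, n_chars + 1)   # +1 for CTC blank
        self.speaker_head = nn.Linear(hidden, n_speakers)

    def forward(self, feats):                      # feats: (B, T, n_mels)
        h, _ = self.encoder(feats)                 # (B, T, hidden)
        char_logits = self.phonetic_head(h)        # per-frame, for CTC
        spk_logits = self.speaker_head(h.mean(dim=1))   # pooled embedding
        return char_logits, spk_logits

ctc = nn.CTCLoss(blank=0)
ce = nn.CrossEntropyLoss()

def loss_fn(char_logits, spk_logits, chars, char_lens, feat_lens, spk, alpha=0.3):
    # Speaker branch: cross-entropy; phonetic branch: auxiliary CTC loss.
    log_probs = char_logits.log_softmax(-1).transpose(0, 1)   # (T, B, C)
    return ce(spk_logits, spk) + alpha * ctc(log_probs, chars, feat_lens, char_lens)
```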
|
In order to transmit electrical energy continuously and with high quality, it
is necessary to control it from the point of production to the point of
consumption; protecting transmission and distribution lines is therefore
essential at every stage. The main function of the protection relays in
electrical installations is to clear short circuits in the system as quickly
as possible. The most important parts of the system are the energy
transmission lines and the distance protection relays that protect them, and
an accurate fault location technique is required for fast and efficient
operation. The grounding of the transformer neutral point in transmission
lines affects the behavior of the zero-sequence current during a single-phase-to-ground
short circuit fault, so an appropriate grounding choice should be
made in view of the relationship between the grounding and protection systems.
In this work, artificial neural networks (ANNs) are used to accurately locate
short circuit faults under different grounding systems in transmission lines,
and their performance is compared with support vector machines (SVMs). The
transmission line model was built in the PSCAD-EMTDC simulation program. Data
sets were created by recording images of the impedance change on the R-X
impedance diagram of the distance protection relay for short circuit faults
created under different grounding systems. The relevant focal points of these
images were extracted with image processing and feature extraction techniques
and given as inputs to different ANN models, and the ANN model with the
highest fault location estimation accuracy was chosen.
|
Two intermetallic FeAl compounds with Al content of 70.68 and 72.17 at.pct
were studied using M\"ossbauer spectroscopy (5 to 296 K) and X-ray diffraction
(15 to 300 K). The compounds were found to crystallize in the orthorhombic Cmcm
space group (eta-phase). The collected data revealed that the dynamics of the
Fe atoms (harmonic in the entire temperature range) is significantly different
from that of the Al atoms, for which strong anharmonicity was evidenced.
Moreover, it was found that partial filling of the different Al sites leads to
the occurrence of low- and high-symmetry coordinations of the Fe atoms, which
is reflected in the occurrence of two distinct doublets in the M\"ossbauer
spectra. All spectral parameters of the
doublets as well as the Debye temperature, force constant, kinetic and
potential energies of vibrations were determined. Those results revealed
significant differences between both alloys, likely originating from
approaching the stability boundary of the eta-phase for Fe-Al 72.17 at.pct
alloy.
|
In clinical practice, medical image interpretation often involves
multi-labeled classification, since the affected parts of a patient tend to
present multiple symptoms or comorbidities. Recently, deep learning based
frameworks have attained expert-level performance on medical image
interpretation, which can be attributed partially to large amounts of accurate
annotations. However, manually annotating massive amounts of medical images is
impractical, while automatic annotation is fast but imprecise (possibly
introducing corrupted labels). In this work, we propose a new regularization
approach, called Flow-Mixup, for multi-labeled medical image classification
with corrupted labels. Flow-Mixup guides the models to capture robust features
for each abnormality, thus helping handle corrupted labels effectively and
making it possible to apply automatic annotation. Specifically, Flow-Mixup
decouples the extracted features by adding constraints to the hidden states of
the models. Also, Flow-Mixup is more stable and effective compared to other
known regularization methods, as shown by theoretical and empirical analyses.
Experiments on two electrocardiogram datasets and a chest X-ray dataset
containing corrupted labels verify that Flow-Mixup is effective and insensitive
to corrupted labels.
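A hedged sketch of the core operation, mixing hidden states rather than inputs (the exact constraints Flow-Mixup imposes may differ from this minimal version):

```python
# Minimal sketch: mixup applied to hidden states h and multi-label
# targets y; here the mixing acts as a constraint on the model's
# hidden states rather than on the raw inputs.
import torch

def hidden_mixup(h, y, alpha=0.4):
    """h: (B, D) hidden states; y: (B, L) multi-hot abnormality labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(h.size(0))
    return lam * h + (1 - lam) * h[perm], lam * y + (1 - lam) * y[perm]
```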
|
Multi-scale biomedical knowledge networks are expanding with emerging
experimental technologies that generate multi-scale biomedical big data. Link
prediction is increasingly used, especially in bipartite biomedical networks,
to identify hidden biological interactions and relationships between key
entities such as compounds, targets, genes, and diseases. We propose a Graph
Neural Network (GNN) method, namely the Graph Pair based Link Prediction model
(GPLP), for predicting biomedical network links based solely on their
topological interaction information. In GPLP, 1-hop subgraphs extracted from
the known network interaction matrix are learned to predict missing links. To
evaluate our method, three heterogeneous biomedical networks were used, i.e.,
the Drug-Target Interaction network (DTI), the Compound-Protein Interaction
network (CPI) from NIH Tox21, and the Compound-Virus Inhibition network (CVI).
Our proposed GPLP method significantly outperforms state-of-the-art baselines.
In addition, varying degrees of network incompleteness are analysed with our
devised protocol, and we also design an effective approach to improve the
model's robustness towards incomplete networks. Our method demonstrates
potential for application to other biomedical networks.
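An illustrative sketch of the 1-hop subgraph extraction step (the bipartite encoding and the masking convention are assumptions, not the paper's exact procedure):

```python
# Sketch: extract the 1-hop enclosing subgraph of a (drug, target) pair
# from a binary interaction matrix A, masking the link being predicted.
import numpy as np

def one_hop_subgraph(A, i, j):
    """A: (n_drugs, n_targets) 0/1 interaction matrix."""
    drugs = np.union1d(np.flatnonzero(A[:, j]), [i])     # neighbours of target j
    targets = np.union1d(np.flatnonzero(A[i, :]), [j])   # neighbours of drug i
    sub = A[np.ix_(drugs, targets)].copy()
    # Hide the queried link so the model cannot see the answer.
    sub[np.searchsorted(drugs, i), np.searchsorted(targets, j)] = 0
    return sub
```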
|
Assuming Lehmer's conjecture, we estimate the degree of the trace field
$K(M_{p/q})$ of a hyperbolic Dehn-filling $M_{p/q}$ of a 1-cusped
hyperbolic 3-manifold $M$ by \begin{equation*}
\dfrac{1}{C}(\max\;\{|p|,|q|\})\leq \text{deg }K(M_{p/q})
\leq C(\max\;\{|p|,|q|\}) \end{equation*} where $C=C_M$ is a constant that
depends on $M$.
|
The calculation of the MP2 correlation energy for extended systems can be
viewed as a multi-dimensional integral in the thermodynamic limit, and the
standard method for evaluating the MP2 energy can be viewed as a trapezoidal
quadrature scheme. We demonstrate that existing analysis neglects certain
contributions due to the non-smoothness of the integrand, and may significantly
underestimate finite-size errors. We propose a new staggered mesh method, which
uses two staggered Monkhorst-Pack meshes for occupied and virtual orbitals,
respectively, to compute the MP2 energy. The staggered mesh method circumvents
a significant error source in the standard method, in which certain quadrature
nodes are always placed on points where the integrand is discontinuous. One
significant advantage of the proposed method is that there are no tunable
parameters, and the additional numerical effort needed can be negligible
compared to the standard MP2 calculation. Numerical results indicate that the
staggered mesh method can be particularly advantageous for quasi-1D systems, as
well as quasi-2D and 3D systems with certain symmetries.
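A minimal sketch of the staggered-mesh construction in one reciprocal dimension (the half-spacing shift is the essential ingredient; the details of a full 3D implementation are omitted):

```python
# Sketch: occupied orbitals sampled on a Monkhorst-Pack mesh, virtual
# orbitals on the same mesh shifted by half a grid spacing, so no
# quadrature node of the energy integrand lands on a discontinuity.
import numpy as np

def monkhorst_pack_1d(n):
    return (np.arange(n) - n // 2) / n          # fractional k in [-1/2, 1/2)

n = 4
k_occ = monkhorst_pack_1d(n)                    # occupied-orbital mesh
k_vir = k_occ + 1.0 / (2 * n)                   # staggered virtual-orbital mesh
print(k_occ)   # [-0.5  -0.25  0.    0.25]
print(k_vir)   # [-0.375 -0.125  0.125  0.375]
```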
|
Unsupervised concept identification through clustering, i.e., identification
of semantically related words and phrases, is a common approach to identify
contextual primitives employed in various use cases, e.g., text dimension
reduction (replacing words with concepts to reduce the vocabulary size),
summarization, and named entity resolution. We demonstrate the first results of
an unsupervised approach for the identification of groups of persons as actors
extracted from a set of related articles. Specifically, the approach clusters
mentions of groups of persons that act as non-named entity actors in the texts,
e.g., "migrant families" = "asylum-seekers." Compared to our baseline, the
approach keeps the mentions of the geopolitical entities separated, e.g., "Iran
leaders" != "European leaders," and clusters (in)directly related mentions with
diverse wording, e.g., "American officials" = "Trump Administration."
|
The electroweak symmetry breaking (EWSB) mechanism remains an open question in
particle physics. We propose to utilize the single top quark and Higgs
associated production ($th$) and $Zh$ production via gluon fusion at the LHC
to probe the couplings between the Higgs and the gauge bosons and further to
test the EWSB. We demonstrate that the $th$ and $gg\to Zh$ productions are
sensitive to the relative sign of couplings ($ht\bar{t}$, $hWW$) and
($ht\bar{t}$, $hZZ$), respectively. We find that the relative sign between
$hWW$ and $hZZ$ couplings could be fully determined after combining the present
measurements from $gg\to h$, $t\bar{t}h$ and the $th$, $Zh$ channels, as well
as $tZj$ and $Zt\bar{t}$ production at the 13 TeV LHC, and this conclusion is
not sensitive to the possible new physics contribution induced by $Zt\bar{t}$
couplings in the $gg\to Zh$ production.
|
A single-transverse-mode, high-pulse-energy VECSEL was developed. The
GaSb-based VECSEL emits at a wavelength of 2.04 microns with a peak power
exceeding 500 W while maintaining good beam quality. The cavity employs a
Pockels cell combined with a low-loss thin-film polarizer to selectively dump
the intracavity energy into a 10 ns pulse. The laser shows promise for
incoherent LiDAR, materials processing, gas sensing, and nonlinear optics.
|
Deep learning-based video manipulation methods have become widely accessible
to the masses. With little to no effort, people can quickly learn how to
generate deepfake (DF) videos. While deep learning-based detection methods have
been proposed to identify specific types of DFs, their performance suffers for
other types of deepfake methods, including real-world deepfakes, on which they
are not sufficiently trained. In other words, most of the proposed deep
learning-based detection methods lack transferability and generalizability.
Beyond detecting a single type of DF from benchmark deepfake datasets, we focus
on developing a generalized approach to detect multiple types of DFs, including
deepfakes from unknown generation methods such as DeepFake-in-the-Wild (DFW)
videos. To better cope with unknown and unseen deepfakes, we introduce a
Convolutional LSTM-based Residual Network (CLRNet), which adopts a unique model
training strategy and explores spatial as well as temporal information in
deepfakes. Through extensive experiments, we show that existing defense methods
are not ready for real-world deployment, whereas our defense method (CLRNet)
achieves far better generalization when detecting various benchmark deepfake
methods (97.57% on average). Furthermore, we evaluate our approach with a
high-quality DeepFake-in-the-Wild dataset, collected from the Internet
containing numerous videos and having more than 150,000 frames. Our CLRNet
model demonstrated that it generalizes well against high-quality DFW videos by
achieving 93.86% detection accuracy, outperforming existing state-of-the-art
defense methods by a considerable margin.
|
Colonies of bacterial cells endowed with a pili-based self-propulsion
machinery represent an ideal system for studying how active adhesion forces,
mediated by dynamic, bond-like interactions, affect structure and dynamics of
many-particle systems. We introduce a molecular-dynamics-simulation-based
approach to study Neisseria gonorrhoeae colonies. A generic, adaptable
simulation method for particle systems with fluctuating bond-like interactions
is devised. The simulations are employed to investigate growth of bacterial
colonies and the dependence of the colony structure on cell-cell interactions.
In colonies consisting only of wild-type cells, active pilus retraction is
found to enhance local ordering. For mixed colonies consisting of different
types of cells, the simulations show a segregation depending on the
pili-mediated interactions among different cells. Both results are in good
qualitative agreement with experimental observations. By representing an
experimental setup in silico, we study the power-spectral density of
colony-shape fluctuations and the fluctuation-response relation. Simulations
predict a strong violation of the equilibrium fluctuation-response relation
across the measurable frequency range. Furthermore, we show that active force
generation enables colonies to spread on adhesive surfaces and to invade narrow
channels. Our work presents a foundation for quantitative studies of the
physics of many-particle systems with active adhesion forces in complex
geometries.
|
We investigate in detail the spectrum of gravitational waves induced by a
peaked primordial curvature power spectrum generated in single field
inflationary models. We argue that the $f_{\rm NL}$ parameter can be inferred
by measuring the high frequency spectral tilt of the induced gravitational
waves. We also show that the intrinsically non-Gaussian impact of $f_{\rm NL}$
in $\Omega_{\rm GW}$ is to broaden its peak, although at a negligible level in
order not to overproduce primordial black holes. We discuss possible
degeneracies in the high frequency spectral tilt between $f_{\rm NL}$ and a
general equation of state of the universe $w$. Finally, we discuss the
constraints on the amplitude, peak and slope (or equivalently, $f_{\rm NL}$) of
the primordial power spectrum by combining current and future gravitational
wave experiments with limits on $\mu$ distortions from the cosmic microwave
background.
|
Using the Markovian master equation for quantum quasiparticles, we show that
convection in the stellar photosphere generates plasma waves by an irreversible
process akin to Zeldovich superradiance and sonic booms. In the Sun, this
mechanism is most efficient in quiet regions with magnetic fields of order one
gauss. Most energy is carried by Alfven waves with megahertz frequencies, which
travel upwards until they reach a height at which they dissipate via mode
conversion. This gives the right power flux for the observed energy transport
from the colder photosphere to the hotter corona.
|
We study high-dimensional Bayesian linear regression with product priors.
Using the nascent theory of non-linear large deviations (Chatterjee and
Dembo, 2016), we derive sufficient conditions for the leading-order correctness
of the naive mean-field approximation to the log-normalizing constant of the
posterior distribution. Subsequently, assuming a true linear model for the
observed data, we derive a limiting infinite dimensional variational formula
for the log normalizing constant of the posterior. Furthermore, we establish
that under an additional "separation" condition, the variational problem has a
unique optimizer, and this optimizer governs the probabilistic properties of
the posterior distribution. We provide intuitive sufficient conditions for the
validity of this "separation" condition. Finally, we illustrate our results on
concrete examples with specific design matrices.
|
Objective: Deep learning-based neural decoders have emerged as the prominent
approach to enable dexterous and intuitive control of neuroprosthetic hands.
Yet few studies have materialized the use of deep learning in clinical settings
due to its high computational requirements. Methods: Recent advancements of
edge computing devices bring the potential to alleviate this problem. Here we
present the implementation of a neuroprosthetic hand with embedded deep
learning-based control. The neural decoder is designed based on the recurrent
neural network (RNN) architecture and deployed on the NVIDIA Jetson Nano - a
compact yet powerful edge computing platform for deep learning inference.
This enables the implementation of the neuroprosthetic hand as a portable and
self-contained unit with real-time control of individual finger movements.
Results: The proposed system is evaluated on a transradial amputee using
peripheral nerve signals (ENG) with implanted intrafascicular microelectrodes.
The experiment results demonstrate the system's capabilities of providing
robust, high-accuracy (95-99%) and low-latency (50-120 msec) control of
individual finger movements in various laboratory and real-world environments.
Conclusion: Modern edge computing platforms enable the effective use of deep
learning-based neural decoders for neuroprosthesis control as an autonomous
system. Significance: This work helps pioneer the deployment of deep neural
networks in clinical applications underlying a new class of wearable biomedical
devices with embedded artificial intelligence.
|
For $G = \mathrm{GL}_2, \mathrm{SL}_2, \mathrm{PGL}_2$ we compute the
intersection E-polynomials and the intersection Poincar\'e polynomials of the
$G$-character variety of a compact Riemann surface $C$ and of the moduli space
of $G$-Higgs bundles on $C$ of degree zero. We derive several results
concerning the P=W conjectures for these singular moduli spaces.
|
Getting a robust time-series clustering with best choice of distance measure
and appropriate representation is always a challenge. We propose a novel
mechanism to identify clusters by combining a learned compact representation of
time-series, the Auto Encoded Compact Sequence (AECS), with a hierarchical
clustering approach. The proposed algorithm aims to address the large computing
time of hierarchical clustering, since the learned latent representation AECS
is much shorter than the original time-series, while at the same time enhancing
clustering performance. Our algorithm exploits an undercomplete
sequence-to-sequence (seq2seq) autoencoder based on recurrent neural networks
(RNNs) and agglomerative hierarchical clustering with a choice of the best
distance measure to recommend the best clustering. Our scheme selects the best
distance measure and corresponding clustering for both univariate and
multivariate time-series. We have experimented with real-world time-series from
the UCR and UCI archives, taken from diverse application domains such as
health, smart city, and manufacturing. Experimental results show that the
proposed method not only produces results close to the benchmark but in some
cases outperforms it.
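A hedged sketch of this pipeline: an undercomplete seq2seq autoencoder compresses each series, and the latent codes are then clustered under several candidate distance measures (the layer sizes and distance candidates are assumptions):

```python
# Sketch of AECS-style clustering: compress each time-series with an
# undercomplete seq2seq autoencoder, then run agglomerative clustering
# on the latent codes with different distance measures.
import torch
import torch.nn as nn
from sklearn.cluster import AgglomerativeClustering

class Seq2SeqAE(nn.Module):
    def __init__(self, n_feat=1, latent=8):
        super().__init__()
        self.enc = nn.GRU(n_feat, latent, batch_first=True)
        self.dec = nn.GRU(latent, n_feat, batch_first=True)

    def forward(self, x):                        # x: (B, T, n_feat)
        _, h = self.enc(x)                       # h: (1, B, latent)
        z = h.transpose(0, 1)                    # compact representation (AECS)
        rec, _ = self.dec(z.repeat(1, x.size(1), 1))
        return rec, z.squeeze(1)                 # train with MSE(rec, x)

# After training, with z the (n_series, latent) array of codes
# (the parameter is named `affinity` in older scikit-learn versions):
# for metric in ("euclidean", "manhattan", "cosine"):
#     labels = AgglomerativeClustering(n_clusters=3, metric=metric,
#                                      linkage="average").fit_predict(z)
```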
|
The all-optical synchronization systems used in various X-ray free-electron
lasers (XFEL) such as the European XFEL observe the transient fields of passing
electron bunches coupled into one or more pickups in the Bunch Arrival Time
Monitors (BAM). The extracted signal is then amplitude modulated on reference
laser pulses in a Mach-Zehnder type electro-optical modulator. With the
emerging demand for future experiments with ultra-short FEL shots, fs precision
is required for the synchronization systems even with 1 pC bunches. Since the
sensitivity of the BAM depends in particular on the slope of the bipolar signal
at the zero-crossing, and thus also on the bunch charge, a redesign aiming at a
significant increase in sensitivity through optimized geometry and bandwidth is
inevitable. In this contribution, the theoretical foundations of the pickup
signal are aggregated and treated with a focus on ultra-short bunches as well
as a general formulation. A possible new pickup concept is simulated and its
performance is compared to the previous concept. A significant improvement of
slope and voltage is found. The improvement is mainly achieved by the reduced
distance to the beam and a higher bandwidth.
|
Reconstructing under-sampled k-space measurements in Compressed Sensing MRI
(CS-MRI) is classically solved with regularized least-squares. Recently, deep
learning has been used to amortize this optimization by training reconstruction
networks on a dataset of under-sampled measurements. Here, a crucial design
choice is the regularization function(s) and corresponding weight(s). In this
paper, we explore a novel strategy of using a hypernetwork to generate the
parameters of a separate reconstruction network as a function of the
regularization weight(s), resulting in a regularization-agnostic reconstruction
model. At test time, for a given under-sampled image, our model can rapidly
compute reconstructions with different amounts of regularization. We analyze
the variability of these reconstructions, especially in situations when the
overall quality is similar. Finally, we propose and empirically demonstrate an
efficient and data-driven way of maximizing reconstruction performance given
limited hypernetwork capacity. Our code is publicly available at
https://github.com/alanqrwang/RegAgnosticCSMRI.
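A schematic sketch of the hypernetwork idea (the layer sizes are illustrative and do not reflect the released code at the URL above):

```python
# Sketch: a small hypernetwork maps the regularization weight lambda to
# the weights of a convolutional reconstruction layer, so reconstructions
# for many regularization settings come from a single trained model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperRecon(nn.Module):
    def __init__(self, out_ch=8, in_ch=1, k=3):
        super().__init__()
        self.shape = (out_ch, in_ch, k, k)
        self.hyper = nn.Sequential(
            nn.Linear(1, 64), nn.ReLU(),
            nn.Linear(64, out_ch * in_ch * k * k))

    def forward(self, lam, x):
        w = self.hyper(lam.view(1, 1)).view(self.shape)   # generated weights
        return F.conv2d(x, w, padding=1)

net = HyperRecon()
x = torch.randn(1, 1, 32, 32)                 # e.g., a zero-filled recon
for lam in (0.01, 0.1, 1.0):                  # sweep regularization at test time
    out = net(torch.tensor(lam), x)
```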
|
One of the challenges in a task oriented natural language application like
the Google Assistant, Siri, or Alexa is to localize the output to many
languages. This paper explores doing this by applying machine translation to
the English output. Using machine translation is very scalable, as it can work
with any English output and can handle dynamic text, but otherwise the problem
is a poor fit. The required quality bar is close to perfection, the range of
sentences is extremely narrow, and the sentences are often very different from
the ones in the machine translation training data. This combination of
requirements is novel in the field of domain adaptation for machine
translation. We are able to reach the required quality bar by building on
existing ideas and adding new ones: finetuning on in-domain translations,
adding sentences from the Web, adding semantic annotations, and using automatic
error detection. The paper shares our approach and results, together with a
distillation model to serve the translation models at scale.
|
For an ordinal $\alpha$, an $\alpha$-ITRM is a machine model of transfinite
computability that operates on finitely many registers, each of which can
contain an ordinal $\rho<\alpha$; they were introduced by Koepke in \cite{KM}.
In \cite{alpha itrms}, it was shown that the $\alpha$-ITRM-computable subsets
of $\alpha$ are exactly those in a level $L_{\beta(\alpha)}$ of the
constructible hierarchy. It was conjectured in \cite{alpha itrms} that
$\beta(\alpha)$ is the first limit of admissible ordinals above $\alpha$. Here,
we show that this is false; in particular, even the computational strength of
$\omega^{\omega}$-ITRMs goes far beyond $\omega_{\omega}^{\text{CK}}$.
|
Flow over a surface can be stratified by imposing a fixed mean vertical
temperature (density) gradient profile throughout or via cooling at the
surface. These distinct mechanisms can act simultaneously to establish a stable
stratification in a flow. Here, we perform a series of direct numerical
simulations of open-channel flows to study adaptation of a neutrally stratified
turbulent flow under the combined or independent action of the aforementioned
mechanisms. We force the fully developed flow with a constant mass flow rate.
This flow forcing technique enables us to keep the bulk Reynolds number
constant throughout our investigation and avoid complications arising from the
acceleration of the bulk flow if a constant pressure gradient approach were
adopted to force the flow instead. When both stratification mechanisms
are active, the dimensionless stratification perturbation number emerges as an
external flow control parameter, in addition to the Reynolds, Froude, and
Prandtl numbers. We demonstrate that significant deviations from the
Monin-Obukhov similarity formulation are possible when both types of
stratification mechanisms are active within an otherwise weakly stable flow,
even when the flux Richardson number is well below 0.2. An extended version of
the similarity theory due to Zilitinkevich and Calanca shows promise in
predicting the dimensionless shear for cases where both types of stratification
mechanisms are active, but the extended theory is less accurate for gradients
of the scalar. The degree of deviation from the neutral dimensionless shear as a
function of the vertical coordinate emerges as a qualitative measure of the
strength of stable stratification for all the cases investigated in this study.
|
Articulatory-to-acoustic (forward) mapping is a technique to predict speech
using various articulatory acquisition techniques as input (e.g. ultrasound
tongue imaging, MRI, lip video). The advantage of lip video is that it is
easily available and affordable: most modern smartphones have a front camera.
There are already a few solutions for lip-to-speech synthesis, but they mostly
concentrate on offline training and inference. In this paper, we propose a
system built from a backend for deep neural network training and inference and
a frontend in the form of a mobile application. Our initial evaluation shows
that the scenario is feasible: a top-5 classification accuracy of 74%, combined
with feedback from the mobile application user, suggests that the speech
impaired may be able to communicate with this solution.
|
We investigate the hypothesis that within a combination of a 'number concept'
plus a 'substantive concept', such as 'eleven animals,' the identity and
indistinguishability present on the level of the concepts, i.e., all eleven
animals are identical and indistinguishable, gives rise to a statistical
structure of the Bose-Einstein type similar to how Bose-Einstein statistics is
present for identical and indistinguishable quantum particles. We proceed by
identifying evidence for this hypothesis by extracting the statistical data
from the World-Wide-Web utilizing the Google Search tool. By using the
Kullback-Leibler divergence method, we then compare the obtained distribution
with both the Maxwell-Boltzmann and the Bose-Einstein distributions and show
that the Bose-Einstein distribution provides a better fit than the
Maxwell-Boltzmann one.
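A hedged sketch of this comparison step (the empirical frequencies below are random stand-ins for the Web-derived counts, and the fitted parameters are illustrative):

```python
# Sketch: compare Maxwell-Boltzmann and Bose-Einstein fits to empirical
# frequencies by the Kullback-Leibler divergence (smaller = better fit).
import numpy as np
from scipy.stats import entropy          # entropy(p, q) = KL(p || q)

levels = np.arange(1.0, 12.0)                    # e.g., 11 "animal" states
empirical = np.random.dirichlet(np.ones(11))     # stand-in for Web counts

def maxwell_boltzmann(E, T):
    p = np.exp(-E / T)
    return p / p.sum()

def bose_einstein(E, T, mu=0.5):
    p = 1.0 / (np.exp((E - mu) / T) - 1.0)
    return p / p.sum()

for name, model in [("MB", maxwell_boltzmann), ("BE", bose_einstein)]:
    kl = min(entropy(empirical, model(levels, T))
             for T in np.linspace(0.5, 10.0, 50))
    print(name, kl)
```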
|
We present an application of invariant polynomials in machine learning. Using
the methods developed in previous work, we obtain two types of generators of
the Lorentz- and permutation-invariant polynomials in particle momenta; minimal
algebra generators and Hironaka decompositions. We discuss and prove some
approximation theorems to make use of these invariant generators in machine
learning algorithms in general and in neural networks specifically. By
implementing these generators in neural networks applied to regression tasks,
we test the improvements in performance under a wide range of hyperparameter
choices and find a reduction of the loss on training data and a significant
reduction of the loss on validation data. As a different approach to
quantifying the performance of these neural networks, we treat the problem from
a Bayesian inference perspective and employ nested sampling techniques to
perform model comparison. Beyond a certain network size, we find that networks
utilising Hironaka decompositions perform the best.
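As a minimal illustration of Lorentz-invariant inputs, the sketch below computes pairwise Minkowski dot products of particle four-momenta; the paper's minimal algebra generators and Hironaka decompositions are richer sets than this.

```python
# Sketch: the pairwise Minkowski products p_i . p_j are simple
# Lorentz-invariant features one can feed to a regression network.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])    # metric, signature (+,-,-,-)

def invariant_features(momenta):
    """momenta: (n, 4) array of four-momenta (E, px, py, pz)."""
    gram = momenta @ eta @ momenta.T       # Gram matrix of invariants
    iu = np.triu_indices(len(momenta))
    return gram[iu]                        # upper triangle, incl. squared masses

p = np.array([[5.0, 1.0, 2.0, 3.0],
              [4.0, 0.5, 1.0, 2.0]])
print(invariant_features(p))
```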
|
Developing state-machine replication protocols for practical use is a complex
and labor-intensive process because of the myriad of essential tasks (e.g.,
deployment, communication, recovery) that need to be taken into account in an
implementation. In this paper, we show how this problem can be addressed with
stream-based replication, a novel approach that implements a replication
protocol as an application on top of a data-stream processing framework. With
such a framework already handling most essential tasks and furthermore
providing means for debugging and monitoring, this technique has the key benefit of
significantly minimizing overhead for both programmers as well as system
operators. Our first stream-based protocol Tara tolerates crashes and comprises
full-fledged mechanisms for request handling, checkpointing, and view changes.
Still, Tara's prototype implementation, which is based on Twitter's Heron
framework, consists of fewer than 1,500 lines of application-level code.
|
Evaluating software testability can assist software managers in optimizing
testing budgets and identifying opportunities for refactoring. In this paper,
we abandon the traditional approach of pursuing testability measurements based
on the correlation between software metrics and test characteristics observed
on past projects, e.g., the size, the organization or the code coverage of the
test cases. We propose a radically new approach that exploits automatic test
generation and mutation analysis to quantify the amount of evidence about the
relative hardness of identifying effective test cases. We introduce two novel
evidence-based testability metrics, describe a prototype to compute them, and
discuss initial findings on whether our measurements can reflect actual
testability issues.
|
The high reflect beamforming gain of the intelligent reflecting surface (IRS)
makes it appealing not only for wireless information transmission but also for
wireless power transfer. In this letter, we consider an IRS-assisted wireless
powered communication network, where a base station (BS) transmits energy to
multiple users grouped into multiple clusters in the downlink, and the
clustered users transmit information to the BS in the manner of hybrid
non-orthogonal multiple access and time division multiple access in the uplink.
We investigate optimizing the reflect beamforming of the IRS and the time
allocation among the BS's power transfer and different user clusters'
information transmission to maximize the throughput of the network, and we
propose an efficient algorithm based on the block coordinate ascent,
semidefinite relaxation, and sequential rank-one constraint relaxation
techniques to solve the resultant problem. Simulation results have verified the
effectiveness of the proposed algorithm and have shown the impact of user
clustering setup on the throughput performance of the network.
|
We prove normalization for (univalent, Cartesian) cubical type theory,
closing the last major open problem in the syntactic metatheory of cubical type
theory. Our normalization result is reduction-free, in the sense of yielding a
bijection between equivalence classes of terms in context and a tractable
language of $\beta/\eta$-normal forms. As corollaries we obtain both
decidability of judgmental equality and the injectivity of type constructors.
|
This paper presents a simple unsupervised visual representation learning
method with a pretext task of discriminating all images in a dataset using a
parametric, instance-level classifier. The overall framework is a replica of a
supervised classification model, where semantic classes (e.g., dog, bird, and
ship) are replaced by instance IDs. However, scaling up the classification task
from thousands of semantic labels to millions of instance labels brings
specific challenges including 1) the large-scale softmax computation; 2) the
slow convergence due to the infrequent visiting of instance samples; and 3) the
massive number of negative classes that can be noisy. This work presents
several novel techniques to handle these difficulties. First, we introduce a
hybrid parallel training framework to make large-scale training feasible.
Second, we present a raw-feature initialization mechanism for classification
weights, which we assume offers a contrastive prior for instance discrimination
and can clearly speed up convergence in our experiments. Finally, we propose to
smooth the labels of a few hardest classes to avoid optimizing over very
similar negative pairs. While being conceptually simple, our framework achieves
competitive or superior performance compared to state-of-the-art unsupervised
approaches, i.e., SimCLR, MoCoV2, and PIC under ImageNet linear evaluation
protocol and on several downstream visual tasks, verifying that full instance
classification is a strong pretraining technique for many semantic visual
tasks.
|
Oxygen-defective cerium oxides exhibit a non-classical giant
electromechanical response that is superior to lead-based electrostrictors. In
this work, we report the key role of acceptor dopants with different sizes and
valences (Mg2+, Sc3+, Gd3+, and La3+) in polycrystalline bulk ceria. Different
dopants tune the electrostrictive properties by changing the electrosteric
dopant-defect interactions. We find two distinct electromechanical behaviors:
when the interaction is weak (dopant-vacancy binding energy ~0.3 eV),
electrostriction displays a high coefficient, up to 10^-17 m^2 V^-2, with
strongly time-dependent effects. In contrast, we observe no time-dependent
effects when the interaction becomes strong (~0.6 eV).
|
We performed photometric and spectroscopic investigations of NSVS 5029961 for
the first time. The new BV(RI)$_c$-band light curves were obtained with the
1.0-m telescope at Weihai Observatory of Shandong University. Applying the
Wilson-Devinney program, we found that NSVS 5029961 is an A-subtype shallow
contact binary with an extremely low mass ratio (q = 0.1515, f = 19.1\%). Six
spectra have been obtained by LAMOST, and many chromospheric activity emission
line indicators were detected in the spectra, revealing that the target
exhibits strong chromospheric activity. We calculated the absolute parameters
with the photometric solutions and Gaia distance, and estimated the initial
masses of the two components and the age of the binary. The evolutionary status
was discussed using the mass-radius and mass-luminosity diagrams. The result
shows that the primary component is a slightly evolved star and the secondary
component has evolved away from the main sequence. The formation and evolution
investigations of NSVS 5029961 indicate that it may have evolved from a
detached binary with short period and low mass ratio by angular momentum loss
via magnetic braking and case A mass transfer, and is in a stable contact stage
at present.
|
Federated learning is an emerging research paradigm for collaboratively
training deep learning models without sharing patient data. However, the data
are usually heterogeneous across institutions, which may reduce the performance
of models trained using federated learning. In this study, we propose a novel
heterogeneity-aware federated learning method, SplitAVG, to overcome the
performance drops caused by data heterogeneity in federated learning. Unlike
previous federated methods that require complex heuristic training or
hyperparameter tuning, SplitAVG leverages simple network split and feature map
concatenation strategies to encourage the federated model to train an unbiased
estimator of the target data distribution. We compare SplitAVG with seven
state-of-the-art federated learning methods, using centrally hosted training
data as the baseline, on a suite of both synthetic and real-world federated
datasets. We find that the performance of models trained using all the
comparison federated learning methods degrades significantly with increasing
degrees of data heterogeneity. In contrast, SplitAVG achieves results
comparable to the baseline under all heterogeneous settings: it achieves 96.2%
of the accuracy and 110.4% of the mean absolute error obtained by the baseline
on a diabetic retinopathy binary classification dataset and a bone age
prediction dataset, respectively, on highly heterogeneous data partitions. We
conclude that SplitAVG can effectively overcome the performance drops caused by
variability in data distributions across institutions. Experimental results
also show that SplitAVG can be adapted to different base networks and
generalized to various types of medical imaging tasks.
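A conceptual PyTorch sketch of the network split and feature map concatenation idea (the split point, shapes, and aggregation details are assumptions for illustration, not the paper's architecture):

```python
# Conceptual sketch only: each institution runs the lower half of the
# network on its local batch; the server concatenates the resulting
# feature maps along the batch axis and trains the upper half on them.
import torch
import torch.nn as nn

lower = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())      # client side
upper = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))

# Feature maps from three institutions with heterogeneous local data:
client_feats = [lower(torch.randn(8, 1, 64, 64)) for _ in range(3)]
combined = torch.cat(client_feats, dim=0)   # batch-wise concatenation
logits = upper(combined)                    # trained on the combined features
```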
|
We investigate the use of programmable optical lattices for quantum
simulation of Hubbard models, determining analytic expressions for the hopping
and Hubbard U, finding that they are suitable for emulating strongly correlated
systems with arbitrary structures, including those with multiple site basis and
impurities. Programmable potentials are highly flexible, with the ability to
control the depth and shape of individual sites in the optical lattice
dynamically. Quantum simulators of Hubbard models with (1) arbitrary basis are
required to represent many real materials of contemporary interest, (2) broken
translational symmetry are needed to study impurity physics, and (3) dynamical
lattices are needed to investigate strong correlation out of equilibrium. We
derive analytic expressions for Hubbard Hamiltonians in programmable potential
systems. We find experimental parameters for quantum simulation of Hubbard
models with arbitrary basis, concluding that programmable optical lattices are
suitable for this purpose. We discuss how programmable optical lattices can be
used for quantum simulation of dynamical multi-band Hubbard models that
represent complicated compounds, impurities, and non-equilibrium physics.
|
The superconducting TMD 4Hb-TaS$_2$ consists of alternating layers of H and T
structures, which in their bulk form are metallic and Mott-insulating,
respectively. Recently, this compound has been proposed as a candidate chiral
superconductor, due to an observed enhancement of the muon spin relaxation at
$T_c$. 4Hb-TaS$_2$ also exhibits a puzzling $T$-linear specific heat at low
temperatures, which is unlikely to be caused by disorder. Elucidating the
origin of this behavior is an essential step in discerning the true nature of
the superconducting ground state. Here, we propose a simple model that
attributes the $T$-linear specific heat to the emergence of a robust multi-band
gapless superconducting state. We show that an extended regime of gapless
superconductivity naturally appears when the pair-breaking scattering rate on
distinct Fermi-surface pockets differs significantly, and the pairing
interaction is predominantly intra-pocket. Using a tight-binding model derived
from first-principle calculations, we show that the pair-breaking scattering
rate promoted by slow magnetic fluctuations on the T layers, which arise from
proximity to a Mott transition, can be significantly different in the various
H-layer dominated Fermi pockets depending on their hybridization with T-layer
states. Thus, our results suggest that the ground state of 4Hb-TaS$_2$ consists
of Fermi pockets displaying gapless superconductivity, which are shunted by
superconducting Fermi pockets that are nearly decoupled from the T-layers.
|
Boosting Trees are one of the most successful statistical learning approaches
that involve sequentially growing an ensemble of simple regression trees (i.e.,
"weak learners"). However, gradient boosted trees are not yet available for
spatially correlated data. This paper proposes a new gradient Boosted Trees
algorithm for Spatial Data (Boost-S) with covariate information. Boost-S
integrates the spatial correlation structure into the classical framework of
gradient boosted trees. Each tree is grown by solving a regularized
optimization problem, where the objective function involves two penalty terms
on tree complexity and takes into account the underlying spatial correlation. A
computationally-efficient algorithm is proposed to obtain the ensemble trees.
The proposed Boost-S is applied to the spatially-correlated FDG-PET
(fluorodeoxyglucose-positron emission tomography) imaging data collected during
cancer chemoradiotherapy. Our numerical investigations successfully demonstrate
the advantages of the proposed Boost-S over existing approaches for this
particular application.
|
We prove that strength and slice rank of homogeneous polynomials of degree $d
\geq 5$ over an algebraically closed field of characteristic zero coincide
generically. To show this, we establish a conjecture of Catalisano, Geramita,
Gimigliano, Harbourne, Migliore, Nagel and Shin concerning dimensions of secant
varieties of the varieties of reducible homogeneous polynomials. These
statements were already known in degrees $2\leq d\leq 7$ and $d=9$.
|
Burr and Erd\H{o}s in 1975 conjectured, and Chv\'atal, R\"odl, Szemer\'edi
and Trotter later proved, that the Ramsey number of any bounded degree graph is
linear in the number of vertices. In this paper, we disprove the natural
directed analogue of the Burr--Erd\H{o}s conjecture, answering a question of
Buci\'c, Letzter, and Sudakov. If $H$ is an acyclic digraph, the oriented
Ramsey number of $H$, denoted $\overrightarrow{r_{1}}(H)$, is the least $N$
such that every tournament on $N$ vertices contains a copy of $H$. We show that
for any $\Delta \geq 2$ and any sufficiently large $n$, there exists an acyclic
digraph $H$ with $n$ vertices and maximum degree $\Delta$ such that \[
\overrightarrow{r_{1}}(H)\ge n^{\Omega(\Delta^{2/3}/ \log^{5/3} \Delta)}. \]
This proves that $\overrightarrow{r_{1}}(H)$ is not always linear in the number
of vertices for bounded-degree $H$. On the other hand, we show that
$\overrightarrow{r_{1}}(H)$ is nearly linear in the number of vertices for
typical bounded-degree acyclic digraphs $H$, and obtain linear or nearly linear
bounds for several natural families of bounded-degree acyclic digraphs.
For multiple colors, we prove a quasipolynomial upper bound
$\overrightarrow{r_{k}}(H)=2^{(\log n)^{O_{k}(1)}}$ for all bounded-degree
acyclic digraphs $H$ on $n$ vertices, where $\overrightarrow{r_k}(H)$ is the
least $N$ such that every $k$-edge-colored tournament on $N$ vertices contains
a monochromatic copy of $H$. For $k\geq 2$ and $n\geq 4$, we exhibit an acyclic
digraph $H$ with $n$ vertices and maximum degree $3$ such that
$\overrightarrow{r_{k}}(H)\ge n^{\Omega(\log n/\log\log n)}$, showing that
these Ramsey numbers can grow faster than any polynomial in the number of
vertices.
|
We explore data reduction and correction steps and processed data
reproducibility in the emerging single crystal total scattering based technique
of three-dimensional differential atomic pair distribution function
(3D-$\Delta$PDF) analysis. All steps from sample measurement to data-processing
are outlined in detail using a CuIr$_2$S$_4$ example crystal studied in a setup
equipped with a high-energy x-ray beam and a flat panel area detector.
Computational overhead as it pertains to data-sampling and the associated data
processing steps is also discussed. Various aspects of the final 3D-$\Delta$PDF
reproducibility are explicitly tested by varying data-processing order and
included steps, and by carrying out a crystal-to-crystal data comparison. We
identify situations in which the 3D-$\Delta$PDF is robust, and caution against
a few particular cases which can lead to inconsistent 3D-$\Delta$PDFs. Although
not all the approaches applied herein will be valid across all systems, and a
more in-depth analysis of some of the effects of the data processing steps may
still be needed, the methods collected herein represent the start of a more
systematic discussion about data processing and corrections in this field.
|
Machine learning models that offer excellent predictive performance often
lack the interpretability necessary to support integrated human machine
decision-making. In clinical medicine and other high-risk settings, domain
experts may be unwilling to trust model predictions without explanations. Work
in explainable AI must balance competing objectives along two different axes:
1) Explanations must balance faithfulness to the model's decision-making with
their plausibility to a domain expert. 2) Domain experts desire local
explanations of individual predictions and global explanations of behavior in
aggregate. We propose to train a proxy model that mimics the behavior of the
trained model and provides fine-grained control over these trade-offs. We
evaluate our approach on the task of assigning ICD codes to clinical notes to
demonstrate that explanations from the proxy model are faithful and replicate
the trained model behavior.
|
We obtain a Fourier dimension estimate for sets of exact approximation order
introduced by Bugeaud for certain approximation functions $\psi$. This Fourier
dimension estimate implies that these sets of exact approximation order contain
normal numbers.
|
There is growing interest in ASR systems that can recognize phones in a
language-independent fashion. There is additionally interest in building
language technologies for low-resource and endangered languages. However, there
is a paucity of realistic data that can be used to test such systems and
technologies. This paper presents a publicly available, phonetically
transcribed corpus of 2255 utterances (words and short phrases) in the
endangered Tangkhulic language East Tusom (no ISO 639-3 code), a Tibeto-Burman
language variety spoken mostly in India. Because the dataset is transcribed in
terms of phones, rather than phonemes, it is a better match for universal phone
recognition systems than many larger (phonemically transcribed) datasets. This
paper describes the dataset and the methodology used to produce it. It further
presents basic benchmarks of state-of-the-art universal phone recognition
systems on the dataset as baselines for future experiments.
|