Confinement remains one of the most interesting and challenging nonperturbative
phenomena in non-Abelian gauge theories. Recent semiclassical (for SU(2)) and
lattice (for QCD) studies have suggested that confinement arises from
interactions of statistical ensembles of instanton-dyons with the Polyakov
loop. In this work, we extend studies of semiclassical ensembles of dyons to the
$SU(3)$ Yang-Mills theory. We find that such interactions do generate the
expected first-order deconfinement phase transition. The properties of the
ensemble, including correlations and topological susceptibility, are studied
over a range of temperatures above and below $T_c$. Additionally, the dyon
ensemble is studied in the Yang-Mills theory containing an extra
trace-deformation term. It is shown that such a term can cause the theory to
remain confined and even retain the same topological observables at high
temperatures.
|
We derive the first positivity bounds for low-energy Effective Field Theories
(EFTs) that are not invariant under Lorentz boosts. "Positivity bounds" are the
low-energy manifestation of certain fundamental properties in the UV -- to date
they have been used to constrain a wide variety of EFTs; however, since all of
the existing bounds require Lorentz invariance, they are not directly applicable
when this symmetry is broken, such as for most cosmological and condensed
matter systems. From the UV axioms of unitarity, causality and locality, we
derive an infinite family of bounds which (derivatives of) the $2\to2$ EFT
scattering amplitude must satisfy even when Lorentz boosts are broken (either
spontaneously or explicitly). We apply these bounds to the leading-order EFT of
both a superfluid and the scalar fluctuations produced during inflation,
comparing in the latter case with the current observational constraints on
primordial non-Gaussianity.
|
In this paper we consider the nature of the cosmological constant as arising
from quantum fluctuations. Quantum fluctuations are generated at Planckian scales by
noncommutative effects and watered down at larger scales up to a decoherence
scale $L_D$ where classicality is reached. In particular, we formally depict
the presence of the scale at $L_D$ by adopting a renormalization group
approach. As a result, an analogy arises between the expression for the
observed cosmological constant $\overline{\Lambda}$ generated by quantum
fluctuations and the one expected by a renormalization group approach, provided
that the renormalization scale $\mu$ is suitably chosen. In this framework, the
decoherence scale $L_D$ is naturally identified with the value ${\mu}_D$, with
$\hbar{\mu}_D$ representing the minimum allowed particle-momentum for our
visible universe. Finally, by mimicking the renormalization group approach, we
present a technique to formally obtain a non-trivial infrared (IR) fixed point
at $\mu=\mu_D$ in our model.
|
We find that, under certain conditions, protoplanetary disks may
spontaneously generate multiple, concentric gas rings without an embedded
planet through an eccentric cooling instability. Using both linear theory and
non-linear hydrodynamics simulations, we show that a variety of background
states may trap a slowly precessing, one-armed spiral mode that becomes
unstable when a gravitationally-stable disk rapidly cools. The angular momentum
required to excite this spiral comes at the expense of non-uniform mass
transport that generically results in multiple rings. For example, one
long-term hydrodynamics simulation exhibits four long-lived, axisymmetric gas
rings. We verify the instability evolution and ring formation mechanism from
first principles with our linear theory, which shows remarkable agreement with
the simulation results. Dust trapped in these rings may produce observable
features consistent with observed disks. Additionally, direct detection of the
eccentric gas motions may be possible when the instability saturates, and any
residual eccentricity left over in the rings at later times may also provide
direct observational evidence of this mechanism.
|
Let $(F_n)_{n\ge 1}$ be the Fibonacci sequence. Define $P(F_n): =
(\sum_{i=1}^n F_i)_{n\ge 1}$; that is, the function $P$ gives the sequence of
partial sums of $(F_n)$. In this paper, we first give an identity involving
$P^k(F_n)$, which is the sequence obtained by applying $P$ to $(F_n)$ $k$
times. Second, we provide a combinatorial interpretation of the numbers in
$P^k(F_n)$.
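As a small illustration of the notation (not taken from the paper), the iterated partial-sum operator can be computed directly; the helper names below are ours:

```python
from itertools import accumulate

def fibonacci(n):
    """Return the first n Fibonacci numbers F_1, ..., F_n (with F_1 = F_2 = 1)."""
    seq, a, b = [], 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

def P(seq):
    """Partial-sum operator: P(x)_n = x_1 + ... + x_n."""
    return list(accumulate(seq))

def P_iter(seq, k):
    """Apply P to the sequence k times, i.e. compute P^k."""
    for _ in range(k):
        seq = P(seq)
    return seq

# First terms of P^2(F_n): [1, 3, 7, 14, 26, 46, 79, 133]
print(P_iter(fibonacci(8), 2))
```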
|
We consider an interacting collective spin model known as coupled top (CT),
exhibiting a rich variety of phenomena related to quantum transitions and
ergodicity, which we explore and connect with the underlying dynamics. The
ferromagnetic interaction between the spins leads to a quantum phase transition
(QPT) as well as a dynamical transition at a critical coupling strength, and
both transitions are accompanied by excited-state quantum phase transitions at
critical energy densities. Above the QPT, the onset of chaos in the CT model
occurs at an intermediate coupling strength, which is analyzed
both classically and quantum mechanically. However, a detailed analysis reveals
the presence of non-ergodic multifractal eigenstates in the chaotic regime. We
quantify the degree of ergodicity of the eigenstates from the relative
entanglement entropy and multifractal dimensions, revealing its variation with
energy density across the energy band. We probe such energy dependent ergodic
behavior of states from non-equilibrium dynamics, which is also supplemented by
phase space mixing in classical dynamics. Moreover, we identify another source
of deviation from ergodicity due to the formation of `quantum scars' arising
from the unstable steady states and periodic orbits. Unlike the ergodic states,
the scarred eigenstates violate Berry's conjecture even in the chaotic regime,
leading to athermal, non-ergodic behavior. Finally, we discuss the detection
of non-ergodic behavior and the dynamical signature of quantum scars by using the
`out-of-time-order correlator', which is relevant to recent experiments.
|
In this paper, we present the Quantum Information Software Developer Kit
(Qiskit) for teaching quantum computing to undergraduate students with a basic
knowledge of the postulates of quantum mechanics. We focus on presenting the
construction of the programs on any common laptop or desktop computer and their
execution on real quantum processors through remote access to the quantum
hardware available on the IBM Quantum Experience platform. The codes are made
available throughout the text so that readers, even with little experience in
scientific computing, can reproduce them and adopt the methods discussed in
this paper to address their own quantum computing projects. The results
presented are in agreement with theoretical predictions and show the
effectiveness of the Qiskit package as a robust classroom working tool for the
introduction of applied concepts of quantum computing and quantum information
theory.
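The paper's codes themselves are not reproduced in this abstract. As a hedged sketch of the kind of program discussed, a minimal Bell-state circuit in Qiskit could look as follows; only circuit construction and the ideal statevector are shown, since execution on IBM Quantum hardware additionally requires account and backend setup:

```python
# Minimal illustrative Qiskit example (not code from the paper).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # put qubit 0 in an equal superposition
qc.cx(0, 1)  # entangle qubit 1 with qubit 0

# Inspect the ideal output state (|00> + |11>)/sqrt(2) classically.
state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # {'00': 0.5, '11': 0.5}
```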
|
Researchers and designers have incorporated social media affordances into
learning technologies to engage young people and support personally relevant
learning, but youth may reject these attempts because they do not meet user
expectations. Through in-depth case studies, we explore the sociotechnical
ecosystems of six teens (ages 15-18) working at a science center that had
recently introduced a digital badge system to track and recognize their
learning. By analyzing interviews, observations, ecological momentary
assessments, and system data, we examined tensions in how badges as connected
learning technologies operate in teens' sociotechnical ecosystems. We found
that, due to issues of unwanted context collapse and incongruent identity
representations, youth only used certain affordances of the system and did so
sporadically. Additionally, we noted that some features seemed to prioritize
values of adult stakeholders over youth. Using badges as a lens, we reveal
critical tensions and offer design recommendations for networked learning
technologies.
|
Overfitting is one of the critical problems in deep neural networks. Many
regularization schemes try to prevent overfitting blindly; however, they
decrease the convergence speed of training algorithms. Adaptive regularization
schemes can address overfitting more intelligently. They usually do not affect
the entire network weights. This paper detects a subset of the weighting layers
that cause overfitting, where overfitting is recognized via matrix and tensor
condition numbers. An adaptive regularization scheme entitled Adaptive Low-Rank
(ALR) is proposed that drives a subset of the weighting layers toward their
Low-Rank Factorization (LRF) by minimizing a new Tikhonov-based loss function.
ALR also encourages lazy weights to contribute to the regularization as the
number of epochs grows. It uses a damping sequence to increase the layer
selection likelihood in the final epochs. Thus, before the training accuracy
falls, ALR reduces the lazy weights and regularizes the network substantially.
The experimental results show that ALR regularizes deep networks well, with
high training speed and low resource usage.
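A minimal sketch of the idea described above, assuming a PyTorch setting; the function name `alr_penalty`, the rank, and the condition-number threshold are illustrative choices, not the paper's actual ALR implementation:

```python
import torch

def alr_penalty(weight: torch.Tensor, cond_threshold: float = 1e3,
                rank: int = 8, lam: float = 1e-4) -> torch.Tensor:
    """Penalize a layer only if its condition number suggests overfitting,
    pulling it toward its best rank-`rank` factorization (Tikhonov-style)."""
    W = weight.reshape(weight.shape[0], -1)
    if torch.linalg.cond(W) < cond_threshold:
        return W.new_zeros(())  # well-conditioned layer: no penalty
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    W_lr = (U[:, :rank] * S[:rank]) @ Vh[:rank]   # best rank-`rank` approximation
    return lam * torch.sum((W - W_lr) ** 2)       # distance to the low-rank factorization

# Hypothetical usage inside a training step, added to the task loss:
# loss = criterion(model(x), y) + sum(alr_penalty(p) for p in model.parameters() if p.ndim >= 2)
```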
|
A word equation with one variable in a free group is given as $U = V$, where
both $U$ and $V$ are words over the alphabet of generators of the free group
and $X, X^{-1}$, for a fixed variable $X$. An element of the free group is a
solution when substituting it for $X$ yields a true equality (interpreted in
the free group) of left- and right-hand sides. It is known that the set of all
solutions of a given word equation with one variable is a finite union of sets
of the form $\{\alpha w^i \beta \: : \: i \in \mathbb Z \}$, where $\alpha, w,
\beta$ are reduced words over the alphabet of generators, and a polynomial-time
algorithm (of a high degree) computing this set is known. We provide a cubic
time algorithm for this problem, which also shows that the set of solutions
consists of at most a quadratic number of the above-mentioned sets. The
algorithm uses only simple tools of word combinatorics and group theory and is
simple to state. Its analysis is involved and focuses on the combinatorics of
occurrences of powers of a word within a larger word.
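As a simple illustration (not drawn from the paper): in the free group on generators $a, b$, the equation $Xa = aX$ has solution set $\{a^i \: : \: i \in \mathbb Z\}$, a single set of the above form with $\alpha, \beta$ the empty word and $w = a$.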
|
Salient human detection (SHD) in dynamic 360{\deg} immersive videos is of
great importance for various applications such as robotics, inter-human and
human-object interaction in augmented reality. However, 360{\deg} video SHD has
been seldom discussed in the computer vision community due to a lack of
datasets with large-scale omnidirectional videos and rich annotations. To this
end, we propose SHD360, the first 360{\deg} video SHD dataset which contains
various real-life daily scenes. Since no method has so far been proposed for
360{\deg} image/video SHD, we systematically benchmark 11 representative
state-of-the-art salient object detection (SOD) approaches on our SHD360, and
explore key issues derived from extensive experimental results. We hope our
proposed dataset and benchmark can serve as a good starting point for
advancing human-centric research towards 360{\deg} panoramic data. The
dataset is available at https://github.com/PanoAsh/SHD360.
|
Accurate and trustworthy epidemic forecasting is an important problem that
has impact on public health planning and disease mitigation. Most existing
epidemic forecasting models disregard uncertainty quantification, resulting in
mis-calibrated predictions. Recent works in deep neural models for
uncertainty-aware time-series forecasting also have several limitations; e.g.
it is difficult to specify meaningful priors in Bayesian NNs, while methods
like deep ensembling are computationally expensive in practice. In this paper,
we fill this important gap. We model the forecasting task as a probabilistic
generative process and propose a functional neural process model called EPIFNP,
which directly models the probability density of the forecast value. EPIFNP
leverages a dynamic stochastic correlation graph to model the correlations
between sequences in a non-parametric way, and designs different stochastic
latent variables to capture functional uncertainty from different perspectives.
Our extensive experiments in a real-time flu forecasting setting show that
EPIFNP significantly outperforms previous state-of-the-art models in both
accuracy and calibration metrics, up to 2.5x in accuracy and 2.4x in
calibration. Additionally, due to properties of its generative process, EPIFNP
learns the relations between the current season and similar patterns of
historical seasons, enabling interpretable forecasts. Beyond epidemic
forecasting, EPIFNP can be of independent interest for advancing principled
uncertainty quantification in deep sequential models for predictive analytics.
|
In fingerprint-based systems, the size of databases increases considerably
with population growth. In developing countries, because of the difficulty in
using a central system when enlisting voters, it often happens that several
regional voter databases are created and then merged to form a central
database. A process is used to remove duplicates and ensure uniqueness by
voter. Until now, companies specializing in biometrics use several costly
computing servers with algorithms to perform large-scale deduplication based on
fingerprints. These algorithms take considerable time because of their
$O(n^2)$ complexity, where n is the size of the database. This article
presents an algorithm that can perform this operation in $O(2n)$, with just a
single computer. It is based on an index obtained from a $5 \times 5$
matrix computed on each fingerprint. This index makes it possible to build
clusters of $O(1)$ size within which fingerprints are compared. This approach has
been evaluated using close to 114 000 fingerprints, and the results obtained
show that it allows a penetration rate of less than 1%, an almost $O(1)$
identification, and an $O(n)$ deduplication. A base of 10 000 000
fingerprints can be deduplicated with just one computer in less than two hours,
compared to several days and multiple servers for the usual tools.
Keywords: fingerprint, cluster, index, deduplication.
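A hedged sketch of the index-and-cluster idea described above; the concrete block-mean index and the caller-supplied `match` function are illustrative stand-ins, not the article's actual 5 x 5 matrix construction:

```python
from collections import defaultdict
import numpy as np

def coarse_index(image: np.ndarray) -> tuple:
    """Reduce a fingerprint image to a 5x5 grid of binarized block means,
    used only as a discrete key for forming candidate clusters."""
    h, w = image.shape
    blocks = image[:h - h % 5, :w - w % 5].reshape(5, h // 5, 5, w // 5).mean(axis=(1, 3))
    return tuple((blocks > blocks.mean()).astype(int).ravel())

def deduplicate(images, match):
    """One O(n) pass: bucket by index, then run the expensive matcher only within a bucket."""
    clusters, duplicates = defaultdict(list), []
    for i, img in enumerate(images):
        bucket = clusters[coarse_index(img)]
        if any(match(img, images[j]) for j in bucket):  # buckets stay small => ~O(1) work
            duplicates.append(i)
        else:
            bucket.append(i)
    return duplicates
```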
|
Content replication to many destinations is a common use case in the Internet
of Things (IoT). The deployment of IP multicast has proven inefficient, though,
due to its lack of layer-2 support by common IoT radio technologies and its
synchronous end-to-end transmission, which is highly susceptible to
interference. Information-centric networking (ICN) introduced hop-wise
multi-party dissemination of cacheable content, which has proven valuable in
particular for low-power lossy networking regimes. However, even NDN, the most
prominent ICN protocol, suffers from a lack of deployment.
In this paper, we explore how multiparty content distribution in an
information-centric Web of Things (WoT) can be built on CoAP. We augment the
CoAP proxy with request aggregation and response replication functions, which
together with proxy caches enable asynchronous group communication. In a
further step, we integrate content object security with OSCORE into the CoAP
multicast proxy system, which enables ubiquitous caching of certified authentic
content. In our evaluation, we compare NDN with different deployment models of
CoAP, including our data-centric approach in realistic testbed experiments. Our
findings indicate that multiparty content distribution based on CoAP proxies
performs equally well as NDN, while remaining fully compatible with the
established IoT protocol world of CoAP on the Internet.
|
Models for strongly interacting fermions in disordered clusters forming an
array, with electron hopping between sites, reproduce the linear dependence on
temperature of the resistivity, typical of the strange metal phase of High
Temperature Superconducting materials (Extended Sachdev-Ye-Kitaev (SYK)
models). Our hydrodynamical approach to the marginal Fermi liquid emerging out
of the interaction identifies the low energy collective excitations of the
system in its coherent phase. These neutral excitations diffuse in the lattice,
but the diffusion is heavily hindered by coupling to the pseudo Goldstone modes
of the broken-conformal-symmetry SYK phase, which are local in space. If
these excitations are assumed to mediate an attractive Cooper pairing, a
critical temperature for superconductivity, which is not BCS-like, arises in
the electron liquid.
|
We consider the problem of active and sequential beam tracking at mmWave
frequencies and above. We focus on the dynamic scenario of UAV-to-UAV
communication, where we formulate the problem as equivalent to tracking an
optimal beamforming vector along the line-of-sight path. In this setting, the
resulting beam ideally points in the direction of the angle of arrival (AoA) with
sufficiently high resolution. Existing solutions handle small random movements
with filtering strategies or account for predictable mobility, but must resort
to re-estimation protocols when tracking fails due to unpredictable movements.
We propose an algorithm for
active learning of the AoA through evolving a Bayesian posterior probability
belief which is utilized for a sequential selection of beamforming vectors. We
propose an adaptive pilot allocation strategy based on a trade-off of mutual
information versus spectral efficiency. Numerically, we analyze the performance
of our proposed algorithm and demonstrate significant improvements over
existing strategies.
|
In this paper, the necessity theory for commutators of multilinear singular
integral operators on weighted Lebesgue spaces is investigated. The results
relax the restriction of the weights class to the general multiple weights,
which can be regarded as an essential improvement of
\cite{ChafCruz2018,GLW2020}. Our approach elaborates on the common technique of
expanding the kernel locally in a Fourier series, recovering many known results
and also yielding numerous new ones. In particular, we answer the question about
the necessity theory of the iterated commutators of the multilinear singular
integral operators.
|
Wireless Sensor Networks (WSNs) are groups of spatially distributed and
dedicated autonomous sensors for monitoring (and recording) the physical
conditions of the environment (and organizing the collected data at a central
location). They have been a topic of interest due to their versatility and
diverse capabilities despite having simple sensors measuring local quantities
such as temperature, pH, or pressure. We delve into understanding how such
networks can be utilized for localization, and propose a technique for
improving the living conditions of animals and humans on the IIT Bombay campus.
|
Building an interactive artificial intelligence that can ask questions about
the real world is one of the biggest challenges for vision and language
problems. In particular, goal-oriented visual dialogue, where the aim of the
agent is to seek information by asking questions during a turn-taking dialogue,
has been gaining scholarly attention recently. While several existing models
based on the GuessWhat?! dataset have been proposed, the Questioner typically
asks simple category-based questions or absolute spatial questions. This might
be problematic for complex scenes where the objects share attributes or in
cases where descriptive questions are required to distinguish objects. In this
paper, we propose a novel Questioner architecture, called Unified Questioner
Transformer (UniQer), for descriptive question generation with referring
expressions. In addition, we build a goal-oriented visual dialogue task called
CLEVR Ask. It synthesizes complex scenes that require the Questioner to
generate descriptive questions. We train our model with two variants of CLEVR
Ask datasets. The results of the quantitative and qualitative evaluations show
that UniQer outperforms the baseline.
|
We revisit the problem of the gauge invariance in the Coleman-Weinberg model
in which a $U(1)$ gauge symmetry is driven spontaneously broken by radiative
corrections. It was noticed in previous work that masses in this model are not
gauge invariant at one-loop order. In our analysis, we use the dressed
propagators of scalars which include a resummation of the one-loop self-energy
correction to the tree-level propagator. We calculate the one-loop self-energy
correction to the vector meson using these dressed propagators. We find that
the pole mass of the vector meson calculated using the dressed propagator is
gauge invariant at the vacuum determined using the effective potential
calculated with a resummation of daisy diagrams.
|
We implement four algorithms for solving linear Diophantine equations in the
naturals: a lexicographic enumeration algorithm, a completion procedure, a
graph-based algorithm, and the Slopes algorithm. As already known, the
lexicographic enumeration algorithm and the completion procedure are slower
than the other two algorithms. We compare in more detail the graph-based
algorithm and the Slopes algorithm. In contrast to previous comparisons, our
work suggests that they are equally fast on small inputs, but the graph-based
algorithm gets much faster as the input grows. We conclude that implementations
of AC-unification algorithms should use the graph-based algorithm for maximum
efficiency.
|
We study orbital evolution of multi-planet systems that form a resonant
chain, with nearest neighbours close to first order commensurabilities,
incorporating orbital circularisation produced by tidal interaction with the
central star. We develop a semi-analytic model applicable when the relative
proximities to commensurability, though small, are large compared to
$\epsilon^{2/3}$, with $\epsilon$ being a measure of the characteristic planet to
central star mass ratio. This enables determination of forced eccentricities as
well as which resonant angles enter libration. When there are no active linked
three body Laplace resonances, the rate of evolution of the semi-major axes may
also be determined. We perform numerical simulations of the HD 158259 and EPIC
245950175 systems finding that the semi-analytic approach works well in the
former case but not so well in the latter case on account of the effects of
three active three body Laplace resonances which persist during the evolution.
For both systems we estimate that if the tidal parameter, $Q'$, significantly
exceeds 1000, tidal effects are unlikely to have influenced period ratios
significantly since formation. On the other hand, if $Q' \lesssim 100$, tidal effects
may have produced significant changes, including the formation of three body
Laplace resonances in the case of the EPIC 245950175 system.
|
Decades of research on Internet congestion control (CC) has produced a
plethora of algorithms that optimize for different performance objectives.
Applications face the challenge of choosing the most suitable algorithm based
on their needs, and it takes tremendous effort and expertise to customize CC
algorithms when new demands emerge. In this paper, we explore a basic question:
can we design a single CC algorithm to satisfy different objectives? We propose
MOCC, the first multi-objective congestion control algorithm that attempts to
address this challenge. The core of MOCC is a novel multi-objective
reinforcement learning framework for CC that can automatically learn the
correlations between different application requirements and the corresponding
optimal control policies. Under this framework, MOCC further applies transfer
learning to transfer the knowledge from past experience to new applications,
quickly adapting itself to a new objective even if it is unforeseen. We provide
both user-space and kernel-space implementation of MOCC. Real-world experiments
and extensive simulations show that MOCC supports multiple objectives well,
competing with or outperforming the best existing CC algorithms on individual
objectives, and quickly adapts to new applications (e.g., 14.2x faster than
prior work) without compromising old ones.
|
Analysis of the burden of underregistration in tuberculosis data in Brazil,
from 2012 to 2014. The approaches of Oliveira et al. (2020) and Stoner et al.
(2019) are applied. The main focus is to illustrate how the approach of
Oliveira et al. (2020) can be applied when the clustering structure is not
previously available.
|
The HERMES-TP/SP (High Energy Rapid Modular Ensemble of Satellites --
Technologic and Scientific Pathfinder) is an in-orbit demonstration of the
so-called distributed astronomy concept. Conceived as a mini-constellation of
six 3U nano-satellites hosting a new miniaturized detector, HERMES-TP/SP aims
at the detection and accurate localisation of bright high-energy transients
such as Gamma-Ray Bursts. The large energy band, the excellent temporal
resolution and the wide field of view that characterize the detectors of the
constellation represent the key features for the next generation high-energy
all-sky monitor with good localisation capabilities that will play a pivotal
role in the future of Multi-messenger Astronomy. In this work, we will describe
in detail the temporal techniques that allow the localisation of bright
transient events taking advantage of their almost simultaneous observation by
spatially spaced detectors. Moreover, we will quantitatively discuss the
all-sky monitor capabilities of the HERMES Pathfinder as well as its achievable
accuracies on the localisation of the detected Gamma-Ray Bursts.
|
Neural personalized recommendation models are used across a wide variety of
datacenter applications including search, social media, and entertainment.
State-of-the-art models comprise large embedding tables that have billions of
parameters requiring large memory capacities. Unfortunately, large and fast
DRAM-based memories levy high infrastructure costs. Conventional SSD-based
storage solutions offer an order of magnitude larger capacity, but have worse
read latency and bandwidth, degrading inference performance. RecSSD is a near
data processing based SSD memory system customized for neural recommendation
inference that reduces end-to-end model inference latency by 2X compared to
using COTS SSDs across eight industry-representative models.
|
Despite the accomplishments of Generative Adversarial Networks (GANs) in
modeling data distributions, training them remains a challenging task. A
contributing factor to this difficulty is the non-intuitive nature of the GAN
loss curves, which necessitates a subjective evaluation of the generated output
to infer training progress. Recently, motivated by game theory, duality gap has
been proposed as a domain agnostic measure to monitor GAN training. However, it
is restricted to the setting when the GAN converges to a Nash equilibrium. But
GANs need not always converge to a Nash equilibrium to model the data
distribution. In this work, we extend the notion of duality gap to proximal
duality gap that is applicable to the general context of training GANs where
Nash equilibria may not exist. We show theoretically that the proximal duality
gap is capable of monitoring the convergence of GANs to a wider spectrum of
equilibria that subsumes Nash equilibria. We also theoretically establish the
relationship between the proximal duality gap and the divergence between the
real and generated data distributions for different GAN formulations. Our
results provide new insights into the nature of GAN convergence. Finally, we
validate experimentally the usefulness of proximal duality gap for monitoring
and influencing GAN training.
|
In this work, the distance between a quark-antiquark pair is analyzed through
both the confinement potential as well as the hadronic total cross section.
Using the Helmholtz free energy, entropy is calculated near the minimum of the
total cross section through the confinement potential. A fitting procedure for
the proton-proton total cross section is performed, defining the fitting
parameters. Therefore, the only free parameter remaining in the model is the
mass scale $\kappa$ used to define the running coupling constant of the
light-front approach to QCD. The mass scale controls the distance $r$ between
the quark-antiquark pair and, under some conditions, it allows the occurrence
of free quarks even in the confinement regime of QCD.
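Schematically (our notation, not a formula reproduced from the paper), the entropy referred to here follows from the Helmholtz free energy $F(r,T)$ in the standard thermodynamic way,
$$ S(r,T) = -\left(\frac{\partial F(r,T)}{\partial T}\right)_{r}, $$
with $F$ constructed from the confinement potential at quark-antiquark separation $r$.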
|
There have been many studies of the instability of a flexible plate or flag
to flapping motions, and of large-amplitude flapping. Here we use inviscid
simulations and a linearized model to study more generally how key quantities
-- mode number (or wavenumber), frequency, and amplitude -- depend on the two
dimensionless parameters, flag mass and bending stiffness. In the limit of
small flag mass, flags perform traveling wave motions that move at nearly the
speed of the oncoming flow. The flag mode number scales as the -1/4 power of
bending stiffness. The flapping frequency has the same scaling, with an
additional slight increase with flag mass in the small-mass regime. The
flapping amplitude scales approximately as flag mass to the 1/2 power. For
large flag mass, the dominant mode number is low (0 or 1), the flapping
frequency tends to zero, and the amplitude saturates in the neighborhood of its
upper limit (the flag length). In a linearized model, the fastest growing modes
have somewhat different power law scalings for wavenumber and frequency. We
discuss how the numerical scalings are consistent with a weakly nonlinear
model.
|
In this paper, we propose FedChain, a novel framework for
federated-blockchain systems, to enable effective transferring of tokens
between different blockchain networks. Particularly, we first introduce a
federated-blockchain system together with a cross-chain transfer protocol to
facilitate the secure and decentralized transfer of tokens between chains. We
then develop a novel PoS-based consensus mechanism for FedChain, which can
satisfy strict security requirements, prevent various blockchain-specific
attacks, and achieve a more desirable performance compared to those of other
existing consensus mechanisms. Moreover, a Stackelberg game model is developed
to examine and address the problem of centralization in the FedChain system.
Furthermore, the game model can enhance the security and performance of
FedChain. By analyzing interactions between the stakeholders and chain
operators, we can prove the uniqueness of the Stackelberg equilibrium and find
the exact formula for this equilibrium. These results are especially important
for the stakeholders to determine their best investment strategies and for the
chain operators to design the optimal policy to maximize their benefits and
security protection for FedChain. Simulation results then clearly show that
the FedChain framework can help stakeholders to maximize their profits and the
chain operators to design appropriate parameters to enhance FedChain's security
and performance.
|
The Hauser-Feshbach Fission Fragment Decay (HF$^3$D) model is extended to
calculate the prompt fission neutron spectrum (PFNS) for the thermal neutron
induced fission on $^{235}$U, where the evaporated neutrons from all possible
fission fragment pairs are aggregated. By studying model parameter
sensitivities on the calculated PFNS, as well as non-statistical behavior of
low-lying discrete level spin distribution, we conclude that discrepancies
between the aggregation calculation and the experimental PFNS seen at higher
neutron emission energies can be attributed to both the primary fission
fragment yield distribution and the possible high spin states that are not
predicted by the statistical theory of nuclear structure.
|
Weight sharing has become a de facto standard in neural architecture search
because it enables the search to be done on commodity hardware. However, recent
works have empirically shown a ranking disorder between the performance of
stand-alone architectures and that of the corresponding shared-weight networks.
This violates the main assumption of weight-sharing NAS algorithms, thus
limiting their effectiveness. We tackle this issue by proposing a
regularization term that aims to maximize the correlation between the
performance rankings of the shared-weight network and that of the standalone
architectures using a small set of landmark architectures. We incorporate our
regularization term into three different NAS algorithms and show that it
consistently improves performance across algorithms, search-spaces, and tasks.
|
There is no unique way to encode a quantum algorithm into a quantum circuit.
With limited qubit counts, connectivities, and coherence times, circuit
optimization is essential to make the best use of near-term quantum devices. We
introduce two separate ideas for circuit optimization and combine them in a
multi-tiered quantum circuit optimization protocol called AQCEL. The first
ingredient is a technique to recognize repeated patterns of quantum gates,
opening up the possibility of future hardware co-optimization. The second
ingredient is an approach to reduce circuit complexity by identifying zero- or
low-amplitude computational basis states and redundant gates. As a
demonstration, AQCEL is deployed on an iterative and efficient quantum
algorithm designed to model final state radiation in high energy physics. For
this algorithm, our optimization scheme brings a significant reduction in the
gate count without losing any accuracy compared to the original circuit.
Additionally, we have investigated whether this can be demonstrated on a
quantum computer using polynomial resources. Our technique is generic and can
be useful for a wide variety of quantum algorithms.
|
Modern scattering-type scanning near-field optical microscopy (s-SNOM) has
become an indispensable tool in material research. However, as the s-SNOM
technique marches into the far-infrared (IR) and terahertz (THz) regimes,
emerging experiments sometimes produce puzzling results. For example, anomalies
in the near-field optical contrast have been widely reported. In this Letter,
we systematically investigate a series of extreme subwavelength metallic
nanostructures via s-SNOM near-field imaging in the GHz to THz frequency range.
We find that the near-field material contrast is greatly impacted by the
lateral size of the nanostructure, while the spatial resolution is practically
independent of it. The contrast is also strongly affected by the connectivity
of the metallic structures to a larger metallic ground plane. The observed
effect can be largely explained by a quasi-electrostatic analysis. We also
compare the THz s-SNOM results to those of the mid-IR regime, where the
size-dependence becomes significant only for smaller structures. Our results
reveal that the quantitative analysis of the near-field optical material
contrasts in the long-wavelength regime requires a careful assessment of the
size and configuration of metallic (optically conductive) structures.
|
Audio and vision are two main modalities in video data. Multimodal learning,
especially for audiovisual learning, has drawn considerable attention recently,
which can boost the performance of various computer vision tasks. However, in
video summarization, existing approaches exploit only the visual information
while neglecting the audio information. In this paper, we argue that the audio
modality can assist vision modality to better understand the video content and
structure, and further benefit the summarization process. Motivated by this, we
propose to jointly exploit the audio and visual information for the video
summarization task, and develop an AudioVisual Recurrent Network (AVRN) to
achieve this. Specifically, the proposed AVRN can be separated into three
parts: 1) the two-stream LSTM is utilized to encode the audio and visual
feature sequentially by capturing their temporal dependency. 2) the audiovisual
fusion LSTM is employed to fuse the two modalities by exploring the latent
consistency between them. 3) the self-attention video encoder is adopted to
capture the global dependency in the video. Finally, the fused audiovisual
information, and the integrated temporal and global dependencies are jointly
used to predict the video summary. Practically, the experimental results on the
two benchmarks, \emph{i.e.,} SumMe and TVsum, demonstrate the effectiveness of
each part and the superiority of AVRN over approaches that exploit only visual
information for video summarization.
|
We present results of three wide-band directed searches for continuous
gravitational waves from 15 young supernova remnants in the first half of the
third Advanced LIGO and Virgo observing run. We use three search pipelines with
distinct signal models and methods of identifying noise artifacts. Without
ephemerides of these sources, the searches are conducted over a frequency band
spanning from 10~Hz to 2~kHz. We find no evidence of continuous gravitational
radiation from these sources. We set upper limits on the intrinsic signal
strain at 95\% confidence level in sample sub-bands, estimate the sensitivity
in the full band, and derive the corresponding constraints on the fiducial
neutron star ellipticity and $r$-mode amplitude. The best 95\% confidence
constraints placed on the signal strain are $7.7\times 10^{-26}$ and $7.8\times
10^{-26}$ near 200~Hz for the supernova remnants G39.2--0.3 and G65.7+1.2,
respectively. The most stringent constraints on the ellipticity and $r$-mode
amplitude reach $\lesssim 10^{-7}$ and $ \lesssim 10^{-5}$, respectively, at
frequencies above $\sim 400$~Hz for the closest supernova remnant
G266.2--1.2/Vela Jr.
|
We investigate the numerical artifact known as a carbuncle in the solution
of the shallow water equations. We propose a new Riemann solver that is based
on a local measure of the entropy residual and aims to avoid carbuncles while
maintaining high accuracy. We propose a new challenging test problem for
shallow water codes, consisting of a steady circular hydraulic jump that can be
physically unstable. We show that numerical methods are prone to either
suppress the instability completely or form carbuncles. We test existing cures
for the carbuncle. In our experiments, only the proposed method is able to
avoid unphysical carbuncles without suppressing the physical instability.
|
Universal gate sets for quantum computing have been known for decades, yet no
universal gate set has been proposed for particle-conserving unitaries, which
are the operations of interest in quantum chemistry. In this work, we show that
controlled single-excitation gates in the form of Givens rotations are
universal for particle-conserving unitaries. Single-excitation gates describe
an arbitrary $U(2)$ rotation on the two-qubit subspace spanned by the states
$|01\rangle, |10\rangle$, while leaving other states unchanged -- a
transformation that is analogous to a single-qubit rotation on a dual-rail
qubit. The proof is constructive, so our result also provides an explicit
method for compiling arbitrary particle-conserving unitaries. Additionally, we
describe a method for using controlled single-excitation gates to prepare an
arbitrary state of a fixed number of particles. We derive analytical gradient
formulas for Givens rotations as well as decompositions into single-qubit and
CNOT gates. Our results offer a unifying framework for quantum computational
chemistry where every algorithm is a unique recipe built from the same
universal ingredients: Givens rotations.
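As a minimal numerical sketch (not code from the paper), the single-excitation Givens rotation can be written as a 4 x 4 unitary that rotates the subspace spanned by $|01\rangle, |10\rangle$ and leaves $|00\rangle$ and $|11\rangle$ untouched:

```python
import numpy as np

def givens_single_excitation(theta: float) -> np.ndarray:
    """4x4 unitary in the basis (|00>, |01>, |10>, |11>)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    G = np.eye(4)
    G[1, 1], G[1, 2] = c, -s
    G[2, 1], G[2, 2] = s, c
    return G

G = givens_single_excitation(np.pi / 3)
state = np.zeros(4); state[1] = 1.0            # start in |01>: one particle in two modes
print(np.round(G @ state, 3))                  # amplitude rotates between |01> and |10>
print(np.allclose(G.conj().T @ G, np.eye(4)))  # unitary, and particle number is conserved
```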
|
The widespread significance of Android IoT devices is due to their flexibility
and hardware support features, which revolutionized the digital world by
introducing exciting applications in almost all walks of daily life, such as
healthcare, smart cities, smart environments, safety, remote sensing, and many
more. Such versatile applicability creates an incentive for more malware attacks. In
this paper, we propose a framework which continuously aggregates multiple
user-trained models on non-overlapping data into a single model. Specifically, for
the malware detection task, (i) we propose a novel user (local) neural network
(LNN) which trains on the local distribution, and (ii) to assure model
authenticity and quality, we propose a novel smart contract which enables the
aggregation process over a blockchain platform. The LNN model analyzes various
static and dynamic features of both malware and benign applications, whereas the smart
contract verifies malicious applications during both uploading and downloading
processes in the network using the stored aggregated features of local models. In
this way, the proposed model not only improves malware detection accuracy using a
decentralized model network but also improves model efficacy with blockchain. We
evaluate our approach against three state-of-the-art models and perform deep
analyses of the extracted features of the relative model.
|
Attempting to reconcile general relativity with quantum mechanics is one of
the great undertakings of contemporary physics. Here we present how the
incompatibility between the two theories arises in the simple thought
experiment of preparing a heavy object in a quantum superposition. Following
Penrose's analysis of the problem, we determine the requirements on physical
parameters to perform experiments where both theories potentially interplay. We
use these requirements to compare different systems, focusing on mechanical
oscillators which can be coupled to superconducting circuits.
|
Discriminative correlation filters (DCF) and siamese networks have achieved
promising performance on visual tracking tasks thanks to their superior
computational efficiency and reliable similarity metric learning, respectively.
However, how to effectively take advantage of powerful deep networks, while
maintaining the real-time response of DCF, remains a challenging problem.
Embedding the cross-correlation operator as a separate layer into siamese
networks is a popular choice to enhance the tracking accuracy. Being a key
component of such a network, the correlation layer is updated online together
with other parts of the network. Yet, when facing serious disturbance, fused
trackers may still drift away from the target completely due to accumulated
errors. To address these issues, we propose a coarse-to-fine tracking
framework, which roughly infers the target state via an online-updating DCF
module first and subsequently, finely locates the target through an
offline-training asymmetric siamese network (ASN). Benefitting from the
guidance of DCF and the learned channel weights obtained through exploiting the
given ground-truth template, ASN refines feature representation and implements
precise target localization. Systematic experiments on five popular tracking
datasets demonstrate that the proposed DCF-ASN achieves the state-of-the-art
performance while exhibiting good tracking efficiency.
|
The Fermilab Muon $g-2$ experiment recently reported its first measurement of
the anomalous magnetic moment $a_\mu^{\textrm{FNAL}}$, which is in full
agreement with the previous BNL measurement and pushes the world average
deviation $\Delta a_\mu^{2021}$ from the Standard Model to a significance of
$4.2\sigma$. Here we provide an extensive survey of its impact on beyond the
Standard Model physics. We use state-of-the-art calculations and a
sophisticated set of tools to make predictions for $a_\mu$, dark matter and LHC
searches in a wide range of simple models with up to three new fields, that
represent some of the few ways that large $\Delta a_\mu$ can be explained. In
addition, for the particularly well-motivated Minimal Supersymmetric Standard
Model, we exhaustively cover the scenarios where large $\Delta a_\mu$ can be
explained while simultaneously satisfying all relevant data from other
experiments. Generally, the $\Delta a_\mu$ result can only be explained by
rather small masses and/or large couplings and enhanced chirality flips, which
can lead to conflicts with limits from LHC and dark matter experiments. Our
results show that the new measurement excludes a large number of models and
provides crucial constraints on others. Two-Higgs doublet and leptoquark models
provide viable explanations of $a_\mu$ only in specific versions and in
specific parameter ranges. Among all models with up to three fields, only
models with chirality enhancements can accommodate $a_\mu$ and dark matter
simultaneously. The MSSM can simultaneously explain $a_\mu$ and dark matter for
Bino-like LSP in several coannihilation regions. Allowing for underabundance of
the dark matter relic density, the Higgsino- and particularly Wino-like LSP
scenarios become promising explanations of the $a_\mu$ result.
|
It is a classical theorem of Sarason that an analytic function of bounded
mean oscillation ($BMOA$) is of vanishing mean oscillation if and only if its
rotations converge in norm to the original function as the angle of the
rotation tends to zero. In a series of two papers Blasco et al. have raised the
problem of characterizing all semigroups of holomorphic functions $(\varphi_t)$
that can replace the semigroup of rotations in Sarason's Theorem. We give a
complete answer to this question, in terms of a logarithmic vanishing
oscillation condition on the infinitesimal generator of the semigroup
$(\varphi_t)$. In addition we confirm the conjecture of Blasco et al. that all
such semigroups are elliptic. We also investigate the analogous question for
the Bloch and the little Bloch space and surprisingly enough we find that the
semigroups for which the Bloch version of Sarason's Theorem holds are exactly
the same as in the $BMOA$ case.
|
Nested sampling is an important tool for conducting Bayesian analysis in
Astronomy and other fields, both for sampling complicated posterior
distributions for parameter inference, and for computing marginal likelihoods
for model comparison. One technical obstacle to using nested sampling in
practice is the requirement (for most common implementations) that prior
distributions be provided in the form of transformations from the unit
hyper-cube to the target prior density. For many applications - particularly
when using the posterior from one experiment as the prior for another - such a
transformation is not readily available. In this letter we show that parametric
bijectors trained on samples from a desired prior density provide a
general-purpose method for constructing transformations from the uniform base
density to a target prior, enabling the practical use of nested sampling under
arbitrary priors. We demonstrate the use of trained bijectors in conjunction
with nested sampling on a number of examples from cosmology.
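As an illustrative one-dimensional sketch of the interface involved (not the letter's trained parametric bijectors), an empirical quantile map plays the same role of transporting the uniform base density to a prior defined only through samples:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for samples from a desired prior, e.g. the posterior of a previous experiment.
prior_samples = np.sort(rng.normal(loc=2.0, scale=0.5, size=100_000))

def prior_transform(u):
    """Map unit-cube coordinates u in [0, 1] to draws from the target prior
    via the empirical quantile function (a trained bijector plays this role)."""
    return np.interp(u, np.linspace(0.0, 1.0, prior_samples.size), prior_samples)

# Nested samplers (e.g. dynesty-style) consume exactly this kind of prior transform.
print(prior_transform(rng.uniform(size=5)))
```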
|
We live in momentous times. The science community is empowered with an
arsenal of cosmic messengers to study the Universe in unprecedented detail.
Gravitational waves, electromagnetic waves, neutrinos and cosmic rays cover a
wide range of wavelengths and time scales. Combining and processing these
datasets that vary in volume, speed and dimensionality requires new modes of
instrument coordination, funding and international collaboration with a
specialized human and technological infrastructure. In tandem with the advent
of large-scale scientific facilities, the last decade has experienced an
unprecedented transformation in computing and signal processing algorithms. The
combination of graphics processing units, deep learning, and the availability
of open source, high-quality datasets, have powered the rise of artificial
intelligence. This digital revolution now powers a multi-billion dollar
industry, with far-reaching implications in technology and society. In this
chapter we describe pioneering efforts to adapt artificial intelligence
algorithms to address computational grand challenges in Multi-Messenger
Astrophysics. We review the rapid evolution of these disruptive algorithms,
from the first class of algorithms introduced in early 2017, to the
sophisticated algorithms that now incorporate domain expertise in their
architectural design and optimization schemes. We discuss the importance of
scientific visualization and extreme-scale computing in reducing
time-to-insight and obtaining new knowledge from the interplay between models
and data.
|
We study the relationship between the eluder dimension for a function class
and a generalized notion of rank, defined for any monotone "activation" $\sigma
: \mathbb{R} \to \mathbb{R}$, which corresponds to the minimal dimension
required to represent the class as a generalized linear model. When $\sigma$
has derivatives bounded away from $0$, it is known that $\sigma$-rank gives
rise to an upper bound on eluder dimension for any function class; we show
however that eluder dimension can be exponentially smaller than $\sigma$-rank.
We also show that the condition on the derivative is necessary; namely, when
$\sigma$ is the $\mathrm{relu}$ activation, we show that eluder dimension can
be exponentially larger than $\sigma$-rank.
|
Software needs to be secure, in particular, when deployed to critical
infrastructures. Secure coding guidelines capture practices in industrial
software engineering to ensure the security of code. This study aims to assess
the level of awareness of secure coding in industrial software engineering, the
skills of software developers to spot and avoid weaknesses in software code,
and the organizational support for adhering to coding guidelines. The approach
draws on well-established theories of policy compliance, neutralization theory,
and security-related stress and the authors' many years of experience in
industrial software engineering and on lessons identified from training secure
coding in the industry. The paper presents the questionnaire design for the
online survey and the first analysis of data from the pilot study.
|
Contextual word representations have become a standard in modern natural language
processing systems. These models use subword tokenization to handle large
vocabularies and unknown words. Word-level usage of such systems requires a way
of pooling multiple subwords that correspond to a single word. In this paper we
investigate how the choice of subword pooling affects the downstream
performance on three tasks: morphological probing, POS tagging and NER, in 9
typologically diverse languages. We compare these in two massively multilingual
models, mBERT and XLM-RoBERTa. For morphological tasks, the widely used `choose
the first subword' is the worst strategy and the best results are obtained by
using attention over the subwords. For POS tagging both of these strategies
perform poorly and the best choice is to use a small LSTM over the subwords.
The same strategy works best for NER and we show that mBERT is better than
XLM-RoBERTa in all 9 languages. We publicly release all code, data and the full
result tables at \url{https://github.com/juditacs/subword-choice}.
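A hedged sketch of three pooling strategies over one word's subword vectors (illustrative only; the attention and LSTM pooling studied in the paper have trainable parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
subword_vecs = rng.normal(size=(4, 768))   # 4 subword pieces of one word, 768-dim each

first_pool = subword_vecs[0]               # "choose the first subword"
mean_pool = subword_vecs.mean(axis=0)      # average pooling

# Attention pooling with a query vector q (random here, learned in practice).
q = rng.normal(size=768)
scores = subword_vecs @ q / np.sqrt(768)
weights = np.exp(scores - scores.max()); weights /= weights.sum()  # softmax
attn_pool = weights @ subword_vecs

print(first_pool.shape, mean_pool.shape, attn_pool.shape)  # all (768,)
```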
|
It is essential that software systems be tolerant to degradations in
components they rely on. There are patterns and techniques which software
engineers use to ensure their systems gracefully degrade. Despite these
techniques being available in practice, tuning and configuration is hard to get
right and it is expensive to explore possible changes to components and
techniques in complex systems. To fill these gaps, we propose Quartermaster to
model and simulate systems and fault-tolerant techniques. We anticipate that
Quartermaster will be useful to further research on graceful degradation and
help inform software engineers about techniques that are most appropriate for
their use cases.
|
The coherent coupling between a quartz electro-mechanical resonator at room
temperature and trapped ions in a 7-tesla Penning trap has been demonstrated
for the first time. The signals arising from the coupling remain for
integration times on the order of seconds. From the measurements carried out,
we demonstrate that the coupling allows detecting the reduced-cyclotron
frequency ($\nu_+$) within times below 10~ms and with an improved resolution
compared to conventional electronic detection schemes. A resolving power
$\nu_+/\Delta \nu_+=2.4\times10^{7}$ has been reached in single measurements.
In this publication we present the first results, emphasizing the novel
features of the quartz resonator as fast non-destructive ion-trap detector
together with different ways to analyze the data and considering aspects like
precision, resolution and sensitivity.
|
Evidence for broken time reversal symmetry (TRS) has been found in the
superconducting states of the $R_5$Rh$_6$Sn$_{18}$ (R = Sc, Y, Lu) compounds
with a centrosymmetric caged crystal structure, but the origin of this
phenomenon is unresolved. Here we report neutron diffraction measurements of
single crystals with $R$=Lu, as well as measurements of the temperature
dependence of the magnetic penetration depth using a self-induced tunnel
diode-oscillator (TDO) based technique, together with band structure
calculations using density functional theory. Neutron diffraction measurements
reveal that the system crystallizes in a tetragonal caged structure, and that
one of the nominal Lu sites in the Lu$_5$Rh$_6$Sn$_{18}$ structure is occupied by
Sn, yielding a composition Lu$_{5-x}$Rh$_6$Sn$_{18+x}$ ($x=1$). The low
temperature penetration depth shift $\Delta\lambda(T)$ exhibits an exponential
temperature dependence below around $0.3T_c$, giving clear evidence for fully
gapped superconductivity. The derived superfluid density is reasonably well
accounted for by a single gap $s$-wave model, whereas agreement cannot be found
for models of TRS breaking states with two-component order parameters.
Moreover, band structure calculations reveal multiple bands crossing the Fermi
level, and indicate that the aforementioned TRS breaking states would be
expected to have nodes on the Fermi surface, in contrast to the observations.
|
Interface science has become a key aspect for fundamental research questions
and for the understanding, design and optimization of urgently needed energy
and information technologies. As the interface properties change during
operation, e.g. under applied electrochemical stimulus, and because multiple
bulk and interface processes coexist and compete, detailed operando
characterization is needed. In this perspective, I present an overview of the
state-of-the-art and challenges in selected X-ray spectroscopic techniques,
concluding that among others, interface-sensitivity remains a major concern in
the available techniques. I propose and discuss a new method to extract
interface-information from nominally bulk sensitive techniques, and critically
evaluate the selection of X-ray energies for the recently developed meniscus
X-ray photoelectron spectroscopy, a promising operando tool to characterize the
solid-liquid interface. I expect that these advancements along with further
developments in time and spatial resolution will expand our ability to probe
the interface electronic and molecular structure with sub-nm depth and complete
our understanding of charge transfer processes during operation.
|
We introduce DeepCert, a tool-supported method for verifying the robustness
of deep neural network (DNN) image classifiers to contextually relevant
perturbations such as blur, haze, and changes in image contrast. While the
robustness of DNN classifiers has been the subject of intense research in
recent years, the solutions delivered by this research focus on verifying DNN
robustness to small perturbations in the images being classified, with
perturbation magnitude measured using established Lp norms. This is useful for
identifying potential adversarial attacks on DNN image classifiers, but cannot
verify DNN robustness to contextually relevant image perturbations, which are
typically not small when expressed with Lp norms. DeepCert addresses this
underexplored verification problem by supporting: (1) the encoding of real-world
image perturbations; (2) the systematic evaluation of contextually relevant DNN
robustness, using both testing and formal verification; (3) the generation of
contextually relevant counterexamples; and, through these, (4) the selection of
DNN image classifiers suitable for the operational context (i) envisaged when a
potentially safety-critical system is designed, or (ii) observed by a deployed
system. We demonstrate the effectiveness of DeepCert by showing how it can be
used to verify the robustness of DNN image classifiers built for two benchmark
datasets (`German Traffic Sign' and `CIFAR-10') to multiple contextually
relevant perturbations.
|
In our earlier publication we introduced the Spectrally Integrated Voigt
Function (SIVF) as an alternative to the traditional Voigt function for the
HITRAN-based applications [Quine & Abrarov, JQSRT 2013]. It was shown that
application of the SIVF enables us to reduce spectral resolution without loss
of accuracy in computation of the spectral radiance. As a further development,
in this study we present more efficient SIVF approximations derived by using a
new sampling method based on incomplete cosine expansion of the sinc function
[Abrarov & Quine, Appl. Math. Comput. 2015]. Since the SIVF mathematically
represents the mean value integral of the Voigt function, this method accounts
for the area under the curve of the Voigt function. Consequently, the total band
radiance, defined as the integrated spectral radiance within a given spectral
region, can also retain its accuracy even at low spectral resolution. Our
numerical results demonstrate that application of the SIVF may be promising for
more rapid line-by-line computation in atmospheric models utilizing the HITRAN
molecular spectroscopic database. Such an approach may be particularly
efficient for implementing a retrieval algorithm for greenhouse gases from the
NIR space data collected by Earth-orbiting micro-spectrometers like Argus 1000
operating in real-time mode. Real-time operation of these micro-spectrometers
can be advantageous for instant decision making during flight, allowing more
efficient data collection from space.
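To make the mean-value-integral idea concrete, the following is a minimal numerical sketch (not the authors' implementation): the Voigt function is evaluated via scipy's Faddeeva function, each bin value is the average of the Voigt function over that bin, and the coarse-grid sums illustrate how bin averages retain the area under the line while point sampling does not. The bin width h and the line-shape parameter y are illustrative assumptions.

```python
# Minimal sketch: the Spectrally Integrated Voigt Function as the mean-value
# integral of the Voigt function over one spectral-resolution bin.
import numpy as np
from scipy.special import wofz      # Faddeeva function w(z)
from scipy.integrate import quad

def voigt(x, y):
    """Voigt function K(x, y) = Re[w(x + i*y)]."""
    return wofz(x + 1j * y).real

def sivf(x0, y, h):
    """Mean value of the Voigt function over the bin [x0 - h/2, x0 + h/2]."""
    val, _ = quad(voigt, x0 - h / 2.0, x0 + h / 2.0, args=(y,))
    return val / h

# On a coarse grid the bin averages keep the area under the line close to the
# exact value sqrt(pi), while point sampling of the Voigt function overshoots.
h = 5.0
grid = np.arange(-25.0, 25.0 + h, h)
area_sivf = h * sum(sivf(x0, 0.5, h) for x0 in grid)
area_point = h * sum(voigt(x0, 0.5) for x0 in grid)
print(area_sivf, area_point, np.sqrt(np.pi))
```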
|
The unit distance graph $G_{\mathbb{R}^d}^1$ is the infinite graph whose
nodes are points in $\mathbb{R}^d$, with an edge between two points if the
Euclidean distance between these points is 1. The 2-dimensional version
$G_{\mathbb{R}^2}^1$ of this graph is typically studied for its chromatic
number, as in the Hadwiger-Nelson problem. However, other properties of unit
distance graphs are rarely studied. Here, we consider the restriction of
$G_{\mathbb{R}^d}^1$ to closed convex subsets $X$ of $\mathbb{R}^d$. We show
that the graph $G_{\mathbb{R}^d}^1[X]$ is connected precisely when the radius
$r(X)$ of $X$ is equal to 0, or when $r(X)\geq 1$ and the affine dimension
of $X$ is at least 2. For hyperrectangles, we give bounds for the graph
diameter in the critical case that the radius is exactly 1.
|
Humans are able to form a complex mental model of the environment they move
in. This mental model captures geometric and semantic aspects of the scene,
describes the environment at multiple levels of abstraction (e.g., objects,
rooms, buildings), and includes static and dynamic entities and their relations
(e.g., a person is in a room at a given time). In contrast, current robots'
internal representations still provide a partial and fragmented understanding
of the environment, either in the form of a sparse or dense set of geometric
primitives (e.g., points, lines, planes, voxels) or as a collection of objects.
This paper attempts to reduce the gap between robot and human perception by
introducing a novel representation, a 3D Dynamic Scene Graph (DSG), that
seamlessly captures metric and semantic aspects of a dynamic environment. A DSG
is a layered graph where nodes represent spatial concepts at different levels
of abstraction, and edges represent spatio-temporal relations among nodes. Our
second contribution is Kimera, the first fully automatic method to build a DSG
from visual-inertial data. Kimera includes state-of-the-art techniques for
visual-inertial SLAM, metric-semantic 3D reconstruction, object localization,
human pose and shape estimation, and scene parsing. Our third contribution is a
comprehensive evaluation of Kimera in real-life datasets and photo-realistic
simulations, including a newly released dataset, uHumans2, which simulates a
collection of crowded indoor and outdoor scenes. Our evaluation shows that
Kimera achieves state-of-the-art performance in visual-inertial SLAM, estimates
an accurate 3D metric-semantic mesh model in real-time, and builds a DSG of a
complex indoor environment with tens of objects and humans in minutes. Our
final contribution shows how to use a DSG for real-time hierarchical semantic
path-planning. The core modules in Kimera are open-source.
|
Motivated by a variety of online matching platforms, we consider demand and
supply units which are located i.i.d. in $[0,1]^d$, and each demand unit needs
to be matched with a supply unit. The goal is to minimize the expected average
distance between matched pairs (the "cost"). We model dynamic arrivals of one
or both of demand and supply with uncertain locations of future arrivals, and
characterize the scaling behavior of the achievable cost in terms of system
size (number of supply units), as a function of the dimension $d$. Our
achievability results are backed by concrete matching algorithms. Across cases,
we find that the platform can achieve cost (nearly) as low as that achievable
if the locations of future arrivals had been known beforehand. Furthermore, in
all cases except one, cost nearly as low as the expected distance to the
nearest neighboring supply unit is achievable, i.e., the matching constraint
does not cause an increase in cost either. The aberrant case is where only
demand arrivals are dynamic, and $d=1$; excess supply significantly reduces
cost in this case.
|
We prove that the uniqueness results obtained in \cite{urrea} for the
Benjamin equation cannot be extended to arbitrary pairs of non-vanishing
solutions. On the other hand, we study uniqueness results for solutions of the
Benjamin equation. With this purpose, we show that for any solutions $u$ and
$v$ defined in $\mathbb{R}\times [0,T]$, if there exists an open set
$I\subset \mathbb{R}$ such that $u(\cdot,0)$ and $v(\cdot,0)$ agree in $I$, and
$\partial_t u(\cdot,0)$ and $\partial_t v(\cdot,0)$ agree in $I$, then
$u\equiv v$. Finally, an improved version of this uniqueness result is also
established.
|
Contagion processes have been proven to fundamentally depend on the
structural properties of the interaction networks conveying them. Many real
networked systems are characterized by clustered substructures representing
either collections of all-to-all pair-wise interactions (cliques), or group
interactions involving many of their members at once, or both. In this work, focusing
on interaction structures represented as simplicial complexes, we present a
discrete-time microscopic model of complex contagion for a
susceptible-infected-susceptible dynamics. Introducing a particular edge clique
cover and a heuristic to find it, the model accounts for the higher-order
dynamical correlations among the members of the substructures
(cliques/simplices). The analytical computation of the critical point reveals
that higher-order correlations are responsible for its dependence on the
higher-order couplings. While such dependence eludes any mean-field model, the
possibility of a bi-stable region is extended to structured populations.
|
The tensor product of props was defined by Hackney and Robertson as an
extension of the Boardman-Vogt product of operads to more general monoidal
theories. Theories that factor as tensor products include the theory of
commutative monoids and the theory of bialgebras. We give a topological
interpretation (and vast generalisation) of this construction as a
low-dimensional projection of a "smash product of pointed directed spaces".
Here directed spaces are embodied by combinatorial structures called
diagrammatic sets, while Gray products replace cartesian products. The
correspondence is mediated by a web of adjunctions relating diagrammatic sets,
pros, probs, props, and Gray-categories. The smash product applies to
presentations of higher-dimensional theories and systematically produces
higher-dimensional coherence cells.
|
An extension of QPTL is considered where functional dependencies among the
quantified variables can be restricted in such a way that their current values
are independent of the future values of the other variables. This restriction
is tightly connected to the notion of behavioral strategies in game-theory and
allows the resulting logic to naturally express game-theoretic concepts. The
fragment where only restricted quantifications are considered, called
behavioral quantifications, can be decided, for both model checking and
satisfiability, in 2ExpTime and is expressively equivalent to QPTL, though
significantly less succinct.
|
Significant memory and computational requirements of large deep neural
networks restrict their application on edge devices. Knowledge distillation
(KD) is a prominent model compression technique for deep neural networks in
which the knowledge of a trained large teacher model is transferred to a
smaller student model. The success of knowledge distillation is mainly
attributed to its training objective function, which exploits the soft-target
information (also known as "dark knowledge") besides the given regular hard
labels in a training set. However, it is shown in the literature that the
larger the gap between the teacher and the student networks, the more difficult
is their training using knowledge distillation. To address this shortcoming, we
propose an improved knowledge distillation method (called Annealing-KD) by
feeding the rich information provided by the teacher's soft-targets
incrementally and more efficiently. Our Annealing-KD technique is based on a
gradual transition over annealed soft-targets generated by the teacher at
different temperatures in an iterative process, and therefore, the student is
trained to follow the annealed teacher output in a step-by-step manner. This
paper includes theoretical and empirical evidence as well as practical
experiments to support the effectiveness of our Annealing-KD method. We conducted a
comprehensive set of experiments on different tasks such as image
classification (CIFAR-10 and CIFAR-100) and NLP language inference with BERT-based
models on the GLUE benchmark, and consistently obtained superior results.
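As an illustration of what a gradual transition over annealed teacher targets could look like in code, here is a hedged PyTorch sketch of the teacher-following stage only; the linear annealing schedule, the MSE matching loss, and the maximum temperature are assumptions for illustration, not the authors' exact recipe.

```python
# Sketch: the student follows teacher outputs that are gradually "un-annealed"
# as training proceeds.  Schedule, loss and max_temp are illustrative choices.
import torch
import torch.nn.functional as F

def annealed_kd_epoch(student, teacher, loader, optimizer, epoch, max_temp=10):
    teacher.eval()
    # Annealing factor grows from 1/max_temp towards 1, so early epochs see a
    # softened (scaled-down) teacher signal and later epochs the full signal.
    phi = min(epoch + 1, max_temp) / max_temp
    for x, _ in loader:
        with torch.no_grad():
            target = phi * teacher(x)          # annealed soft targets
        loss = F.mse_loss(student(x), target)  # student tracks them step by step
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```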
|
We rectify an incorrect citation of a reference used in obtaining the Gaussian
upper bound for heat kernels of the Schr\"odinger-type operators
$(-\Delta)^2+V^2$.
|
Fundamental to the theory of continued fractions is the fact that every
infinite continued fraction with positive integer coefficients converges;
however, it is unknown precisely which continued fractions with integer
coefficients (not necessarily positive) converge. Here we present a simple test
that determines whether an integer continued fraction converges or diverges. In
addition, for convergent continued fractions the test specifies whether the
limit is rational or irrational.
An attractive way to visualise integer continued fractions is to model them
as paths on the Farey graph, which is a graph embedded in the hyperbolic plane
that induces a tessellation of the hyperbolic plane by ideal triangles. With
this geometric representation of continued fractions our test for convergence
can be interpreted in a particularly elegant manner, giving deeper insight into
the nature of continued fraction convergence.
|
Several deep learning methods for phase retrieval exist, but most of them
fail on realistic data without precise support information. We propose a novel
method based on single-instance deep generative prior that works well on
complex-valued crystal data.
|
Three-body calculations of the $K\bar{K}N$ system with quantum numbers $I=1/2$,
$J^{\pi}=(\frac{1}{2})^{+}$ were performed. Using separable potentials for
two-body interactions, we calculated the $\pi\Sigma$ mass spectra for the
$(\bar{K}N)_{I=0}+K^{+}\rightarrow(\pi\Sigma)^{0}K^{+}$ reaction on the basis
of three-body Alt-Grassberger-Sandhas equations in the momentum representation.
In this regard, different types of $\bar{K}N-\pi\Sigma$ potentials based on
phenomenological and chiral SU(3) approaches are used. The possibility of observing
the trace of the $\Lambda(1405)$ resonance in the $(\pi\Sigma)^{0}$ mass spectra was
studied. Using $\chi^{2}$ fitting, it was shown that the mass of the
$\Lambda(1405)$ resonance is about 1417 $\mathrm{MeV}/c^{2}$.
|
Multi-Agent Path Finding (MAPF) is a challenging combinatorial problem that
asks us to plan collision-free paths for a team of cooperative agents. In this
work, we show that one of the reasons why MAPF is so hard to solve is due to a
phenomenon called pairwise symmetry, which occurs when two agents have many
different paths to their target locations, all of which appear promising, but
every combination of them results in a collision. We identify several classes
of pairwise symmetries and show that each one arises commonly in practice and
can produce an exponential explosion in the space of possible collision
resolutions, leading to unacceptable runtimes for current state-of-the-art
(bounded-sub)optimal MAPF algorithms. We propose a variety of reasoning
techniques that detect the symmetries efficiently as they arise and resolve
them by using specialized constraints to eliminate all permutations of pairwise
colliding paths in a single branching step. We implement these ideas in the
context of the leading optimal MAPF algorithm CBS and show that the addition of
the symmetry reasoning techniques can have a dramatic positive effect on its
performance - we report a reduction in the number of node expansions by up to
four orders of magnitude and an increase in scalability by up to thirty times.
These gains allow us to solve to optimality a variety of challenging MAPF
instances previously considered out of reach for CBS.
|
Multiple instance learning (MIL) is a powerful tool to solve the weakly
supervised classification in whole slide image (WSI) based pathology diagnosis.
However, the current MIL methods are usually based on the independent and
identically distributed hypothesis, and thus neglect the correlation among different
instances. To address this problem, we proposed a new framework, called
correlated MIL, and provided a proof for convergence. Based on this framework,
we devised a Transformer based MIL (TransMIL), which explored both
morphological and spatial information. The proposed TransMIL can effectively
deal with unbalanced/balanced and binary/multiple classification with great
visualization and interpretability. We conducted various experiments for three
different computational pathology problems and achieved better performance and
faster convergence compared with state-of-the-art methods. The test AUC for the
binary tumor classification can be up to 93.09% on the CAMELYON16 dataset, and
the AUC for cancer subtype classification can be up to 96.03% and 98.82% on the
TCGA-NSCLC and TCGA-RCC datasets, respectively. Implementation is
available at: https://github.com/szc19990412/TransMIL.
|
Temporal language grounding (TLG) is a fundamental and challenging problem
for vision and language understanding. Existing methods mainly focus on fully
supervised setting with temporal boundary labels for training, which, however,
suffers from expensive annotation costs. In this work, we are dedicated to weakly
supervised TLG, where multiple description sentences are given to an untrimmed
video without temporal boundary labels. In this task, it is critical to learn a
strong cross-modal semantic alignment between sentence semantics and visual
content. To this end, we introduce a novel weakly supervised temporal adjacent
network (WSTAN) for temporal language grounding. Specifically, WSTAN learns
cross-modal semantic alignment by exploiting temporal adjacent network in a
multiple instance learning (MIL) paradigm, with a whole description paragraph
as input. Moreover, we integrate a complementary branch into the framework,
which explicitly refines the predictions with pseudo supervision from the MIL
stage. An additional self-discriminating loss is devised on both the MIL branch
and the complementary branch, aiming to enhance semantic discrimination by
self-supervising. Extensive experiments are conducted on three widely used
benchmark datasets, \emph{i.e.}, ActivityNet-Captions, Charades-STA, and
DiDeMo, and the results demonstrate the effectiveness of our approach.
|
Recent advances in implicit neural representations show great promise when it
comes to generating numerical solutions to partial differential equations.
Compared to conventional alternatives, such representations employ
parameterized neural networks to define, in a mesh-free manner, signals that
are highly-detailed, continuous, and fully differentiable. In this work, we
present a novel machine learning approach for topology optimization -- an
important class of inverse problems with high-dimensional parameter spaces and
highly nonlinear objective landscapes. To effectively leverage neural
representations in the context of mesh-free topology optimization, we use
multilayer perceptrons to parameterize both density and displacement fields.
Our experiments indicate that our method is highly competitive for minimizing
structural compliance objectives, and it enables self-supervised learning of
continuous solution spaces for topology optimization problems.
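A minimal sketch of the kind of coordinate-based multilayer perceptron that could parameterize a continuous density field in this mesh-free setting is shown below (assuming PyTorch); the layer widths and the sigmoid output squashing are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch: a coordinate-based MLP standing in for a mesh-free density field.
import torch
import torch.nn as nn

class DensityField(nn.Module):
    def __init__(self, dim=2, width=64, depth=4):
        super().__init__()
        layers, d_in = [], dim
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.ReLU()]
            d_in = width
        layers += [nn.Linear(d_in, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, xy):
        # Densities are squashed to (0, 1); gradients with respect to the MLP
        # weights flow through every queried point, so the field is fully
        # differentiable and can be queried anywhere without a mesh.
        return torch.sigmoid(self.net(xy))

rho = DensityField()
points = torch.rand(1024, 2)   # arbitrary query locations
print(rho(points).shape)       # torch.Size([1024, 1])
```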
|
Multi-Target Multi-Camera (MTMC) vehicle tracking is an essential task of
visual traffic monitoring, one of the main research fields of Intelligent
Transportation Systems. Several offline approaches have been proposed to
address this task; however, they are not compatible with real-world
applications due to their high latency and post-processing requirements. In
this paper, we present a new low-latency online approach for MTMC tracking in
scenarios with partially overlapping fields of view (FOVs), such as road
intersections. Firstly, the proposed approach detects vehicles at each camera.
Then, the detections are merged between cameras by applying cross-camera
clustering based on appearance and location. Lastly, the clusters containing
different detections of the same vehicle are temporally associated to compute
the tracks on a frame-by-frame basis. The experiments show promising
low-latency results while addressing real-world challenges such as the a priori
unknown and time-varying number of targets and their continuous state estimation,
without performing any post-processing of the trajectories.
|
We provide a systematic method to compute tree-level scattering amplitudes
with spinning external states from amplitudes with scalar external states in
arbitrary spacetime dimensions. We write down analytic answers for various
scattering amplitudes, including the four graviton amplitude due to the massive
spin $J$ exchange. We verify the results by computing angular distributions in
3 + 1 dimensions using various identities involving Jacobi polynomials.
|
We discuss a set of heterotic and type II string theory compactifications to
1+1 dimensions that are characterized by factorized internal worldsheet CFTs of
the form $V_1\otimes \bar V_2$, where $V_1, V_2$ are self-dual (super) vertex
operator algebras. In the cases with spacetime supersymmetry, we show that the
BPS states form a module for a Borcherds-Kac-Moody (BKM) (super)algebra, and we
prove that for each model the BKM (super)algebra is a symmetry of genus zero
BPS string amplitudes. We compute the supersymmetric indices of these models
using both Hamiltonian and path integral formalisms. The path integrals are
manifestly automorphic forms closely related to the Borcherds-Weyl-Kac
denominator. Along the way, we comment on various subtleties inherent to these
low-dimensional string compactifications.
|
Observations restrict the parameter space of Holographic Dark Energy (HDE) so
that a turning point in the Hubble parameter $H(z)$ is inevitable. Concretely,
cosmic microwave background (CMB), baryon acoustic oscillations (BAO) and Type
Ia supernovae (SNE) data put the turning point in the future, but removing SNE
results in an observational turning point at positive redshift. From the
perspective of theory, not only does the turning point violate the Null Energy
Condition (NEC), but as we argue, it may be interpreted as an evolution of the
Hubble constant $H_0$ with redshift, which is at odds with the very FLRW
framework within which data has been analysed. Tellingly, neither of these are
problems for the flat $\Lambda$CDM model, and a direct comparison of fits
further disfavours HDE relative to flat $\Lambda$CDM.
|
The conditional value-at-risk (CVaR) is a useful risk measure in fields such
as machine learning, finance, insurance, energy, etc. When measuring very
extreme risk, the commonly used CVaR estimation method of sample averaging does
not work well due to limited data above the value-at-risk (VaR), the quantile
corresponding to the CVaR level. To mitigate this problem, the CVaR can be
estimated by extrapolating above a lower threshold than the VaR using a
generalized Pareto distribution (GPD), which is often referred to as the
peaks-over-threshold (POT) approach. This method often requires a very high
threshold to fit well, leading to high variance in estimation, and can induce
significant bias if the threshold is chosen too low. In this paper, we derive a
new expression for the GPD approximation error of the CVaR, a bias term induced
by the choice of threshold, as well as a bias correction method for the
estimated GPD parameters. This leads to the derivation of a new estimator for
the CVaR that we prove to be asymptotically unbiased. In a practical setting,
we show through experiments that our estimator provides a significant
performance improvement compared with competing CVaR estimators in finite
samples. As a consequence of our bias correction method, it is also shown that
a much lower threshold can be selected without introducing significant bias.
This allows a larger portion of data to be used in CVaR estimation compared
with the typical POT approach, leading to more stable estimates. As secondary
results, a new estimator for a second-order parameter of heavy-tailed
distributions is derived, as well as a confidence interval for the CVaR which
enables quantifying the level of variability in our estimator.
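For context, the following sketch implements the plain peaks-over-threshold CVaR estimator that serves as the starting point here: a generalized Pareto distribution is fitted to exceedances above a threshold, and VaR and CVaR are read off from the fitted tail. It does not include the bias correction derived in the paper, and the threshold quantile and toy data are illustrative assumptions.

```python
# Sketch: standard POT estimator of VaR/CVaR via a GPD fit to tail exceedances.
import numpy as np
from scipy.stats import genpareto

def cvar_pot(samples, alpha=0.99, threshold_quantile=0.95):
    x = np.asarray(samples)
    u = np.quantile(x, threshold_quantile)             # threshold below the VaR
    exceedances = x[x > u] - u
    xi, _, sigma = genpareto.fit(exceedances, floc=0)   # GPD shape and scale
    p_u = exceedances.size / x.size                     # empirical tail probability
    # GPD-implied VaR and CVaR at level alpha (requires xi < 1)
    var = u + (sigma / xi) * (((1 - alpha) / p_u) ** (-xi) - 1)
    cvar = (var + sigma - xi * u) / (1 - xi)
    return var, cvar

rng = np.random.default_rng(0)
heavy = rng.pareto(3.0, size=100_000)                   # heavy-tailed toy data
print(cvar_pot(heavy, alpha=0.999))
```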
|
In this paper we have studied a subgrid multiscale stabilized formulation with
dynamic subscales for a non-Newtonian Casson fluid flow model tightly coupled
with a variable-coefficient ADR ($VADR$) equation. The Casson viscosity
coefficient is taken to be dependent upon the solute mass concentration. This paper
presents the stability and convergence analyses of the stabilized finite
element solution. The proposed expressions for the stabilization parameters
help in obtaining optimal orders of convergence. Appropriate numerical
experiments have been provided.
|
The mushroom body of the fruit fly brain is one of the best studied systems
in neuroscience. At its core it consists of a population of Kenyon cells, which
receive inputs from multiple sensory modalities. These cells are inhibited by
the anterior paired lateral neuron, thus creating a sparse high dimensional
representation of the inputs. In this work we study a mathematical
formalization of this network motif and apply it to learning the correlational
structure between words and their context in a corpus of unstructured text, a
common natural language processing (NLP) task. We show that this network can
learn semantic representations of words and can generate both static and
context-dependent word embeddings. Unlike conventional methods (e.g., BERT,
GloVe) that use dense representations for word embedding, our algorithm encodes
semantic meaning of words and their context in the form of sparse binary hash
codes. The quality of the learned representations is evaluated on word
similarity analysis, word-sense disambiguation, and document classification. It
is shown that not only can the fruit fly network motif achieve performance
comparable to existing methods in NLP, but, additionally, it uses only a
fraction of the computational resources (shorter training time and smaller
memory footprint).
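A schematic sketch of the underlying network motif, sparse binary hashing via a large expansion layer followed by a k-winner-take-all step standing in for the inhibitory neuron, is given below; the random weight matrix and dimensions are illustrative assumptions and do not reproduce the paper's learning rule.

```python
# Sketch: project an input onto a large Kenyon-cell layer and keep the top-k
# activations (APL-like global inhibition) to form a sparse binary hash code.
import numpy as np

rng = np.random.default_rng(0)
vocab, kenyon_cells, k_active = 5000, 400, 32

W = rng.random((kenyon_cells, vocab))   # synaptic weights (learned in the paper)

def sparse_hash(bag_of_words):
    """Map a word/context count vector to a sparse binary code."""
    activations = W @ bag_of_words
    top_k = np.argsort(activations)[-k_active:]   # winner-take-all step
    code = np.zeros(kenyon_cells, dtype=np.uint8)
    code[top_k] = 1
    return code

x = np.zeros(vocab); x[[10, 42, 137]] = 1.0       # toy context window
print(sparse_hash(x).sum())                        # exactly k_active ones
```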
|
Despite previous success in generating audio-driven talking heads, most of
the previous studies focus on the correlation between speech content and the
mouth shape. Facial emotion, which is one of the most important features on
natural human faces, is always neglected in their methods. In this work, we
present Emotional Video Portraits (EVP), a system for synthesizing high-quality
video portraits with vivid emotional dynamics driven by audios. Specifically,
we propose the Cross-Reconstructed Emotion Disentanglement technique to
decompose speech into two decoupled spaces, i.e., a duration-independent
emotion space and a duration dependent content space. With the disentangled
features, dynamic 2D emotional facial landmarks can be deduced. Then we propose
the Target-Adaptive Face Synthesis technique to generate the final high-quality
video portraits, by bridging the gap between the deduced landmarks and the
natural head poses of target videos. Extensive experiments demonstrate the
effectiveness of our method both qualitatively and quantitatively.
|
Unbiased random vectors, i.e. vectors distributed uniformly in n-dimensional
space, are widely applied, and the computational cost of generating a vector
increases only linearly with n. On the other hand, generating uniformly
distributed random vectors in subspaces of that space typically involves the
inefficiency of rejecting vectors falling outside, or of re-weighting a
non-uniformly distributed set of samples. Both approaches become severely
ineffective as n increases. We
present an efficient algorithm to generate uniformly distributed random
directions in n-dimensional cones, to aid searching and sampling tasks in high
dimensions.
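For reference, the rejection baseline criticized above can be written in a few lines; the sketch below (with an illustrative cone half-angle and sample count) draws uniform directions on the sphere and discards those outside the cone, making the rapidly shrinking acceptance rate in higher dimensions explicit.

```python
# Sketch: naive rejection sampling of uniform directions inside a cone.
import numpy as np

def rejection_cone_directions(n, theta, n_samples, rng=np.random.default_rng(0)):
    axis = np.zeros(n); axis[0] = 1.0
    accepted, tried = [], 0
    while len(accepted) < n_samples:
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)                    # uniform direction on S^{n-1}
        tried += 1
        if np.dot(v, axis) >= np.cos(theta):      # inside the cone?
            accepted.append(v)
    return np.array(accepted), n_samples / tried  # directions, acceptance rate

for n in (3, 10, 20):
    _, rate = rejection_cone_directions(n, theta=np.pi / 4, n_samples=10)
    print(n, rate)   # acceptance rate collapses as n grows
```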
|
By using an equivalent form of the uniform Lopatinski condition for 1-shocks,
we prove that the stability condition found by the energy method in [A.
Morando, Y. Trakhinin, P. Trebeschi, Structural stability of shock waves in 2D
compressible elastodynamics, Math. Ann. 378 (2020) 1471-1504] for the
rectilinear shock waves in two-dimensional flows of compressible isentropic
inviscid elastic materials is not only sufficient but also necessary for
uniform stability (implying structural nonlinear stability of corresponding
curved shock waves). The key point of our spectral analysis is a delicate study
of the transition between uniform and weak stability. Moreover, we prove that
the rectilinear shock waves are never violently unstable, i.e., they are always
either uniformly or weakly stable.
|
We consider the Grassmann manifold $G(E)$ as the set of all orthogonal
projections of a given Euclidean space $E$ and obtain some explicit formulas
concerning the differential geometry of $G(E)$ as a submanifold of $L(E,E)$
endowed with the Hilbert-Schmidt inner product. Most of these formulas can be
naturally extended to the infinite dimensional Hilbert space case.
|
Fair representation learning is an attractive approach that promises fairness
of downstream predictors by encoding sensitive data. Unfortunately, recent work
has shown that strong adversarial predictors can still exhibit unfairness by
recovering sensitive attributes from these representations. In this work, we
present Fair Normalizing Flows (FNF), a new approach offering more rigorous
fairness guarantees for learned representations. Specifically, we consider a
practical setting where we can estimate the probability density for sensitive
groups. The key idea is to model the encoder as a normalizing flow trained to
minimize the statistical distance between the latent representations of
different groups. The main advantage of FNF is that its exact likelihood
computation allows us to obtain guarantees on the maximum unfairness of any
potentially adversarial downstream predictor. We experimentally demonstrate the
effectiveness of FNF in enforcing various group fairness notions, as well as
other attractive properties such as interpretability and transfer learning, on
a variety of challenging real-world datasets.
|
Broader disclosive transparency, i.e. truth and clarity in communication
regarding the function of AI systems, is widely considered desirable.
Unfortunately, it is a nebulous concept, difficult to both define and quantify.
This is problematic, as previous work has demonstrated possible trade-offs and
negative consequences to disclosive transparency, such as a confusion effect,
where "too much information" clouds a reader's understanding of what a system
description means. Disclosive transparency's subjective nature has rendered
deep study into these problems and their remedies difficult. To improve this
state of affairs, we introduce neural language model-based probabilistic
metrics to directly model disclosive transparency, and demonstrate that they
correlate with user and expert opinions of system transparency, making them a
valid objective proxy. Finally, we demonstrate the use of these metrics in a
pilot study quantifying the relationships between transparency, confusion, and
user perceptions in a corpus of real NLP system descriptions.
|
We introduce a novel combination of Bayesian Models (BMs) and Neural Networks
(NNs) for making predictions with a minimum expected risk. Our approach
combines the best of both worlds, the data efficiency and interpretability of a
BM with the speed of a NN. For a BM, making predictions with the lowest
expected loss requires integrating over the posterior distribution. When exact
inference of the posterior predictive distribution is intractable,
approximation methods are typically applied, e.g. Monte Carlo (MC) simulation.
For MC, the variance of the estimator decreases with the number of samples -
but at the expense of increased computational cost. Our approach removes the
need for iterative MC simulation on the CPU at prediction time. In brief, it
works by fitting a NN to synthetic data generated using the BM. In a single
feed-forward pass, the NN gives a set of point-wise approximations to the BM's
posterior predictive distribution for a given observation. We achieve risk
minimized predictions significantly faster than standard methods with a
negligible loss on the test dataset. We combine this approach with Active
Learning to minimize the amount of data required for fitting the NN. This is
done by iteratively labeling more data in regions with high predictive
uncertainty of the NN.
|
We study the roots of a random polynomial over the field of $p$-adic numbers.
For a random monic polynomial with i.i.d. coefficients in $\mathbb{Z}_p$, we
obtain an estimate for the expected number of roots of this polynomial. In
particular, if the coefficients take the values $\pm1$ with equal probability,
the expected number of $p$-adic roots converges to
$\left(p-1\right)/\left(p+1\right)$ as the degree of the polynomial tends to
$\infty$.
|
Datasets are mathematical objects (e.g., point clouds, matrices, graphs,
images, fields/functions) that have shape. This shape encodes important
knowledge about the system under study. Topology is an area of mathematics that
provides diverse tools to characterize the shape of data objects. In this work,
we study a specific tool known as the Euler characteristic (EC). The EC is a
general, low-dimensional, and interpretable descriptor of topological spaces
defined by data objects. We revise the mathematical foundations of the EC and
highlight its connections with statistics, linear algebra, field theory, and
graph theory. We discuss advantages offered by the use of the EC in the
characterization of complex datasets; to do so, we illustrate its use in
different applications of interest in chemical engineering such as process
monitoring, flow cytometry, and microscopy. We show that the EC provides a
descriptor that effectively reduces complex datasets and that this reduction
facilitates tasks such as visualization, regression, classification, and
clustering.
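As a small worked example of the alternating-sum definition, the sketch below computes the EC of a graph (vertices minus edges) and of its clique complex truncated at dimension two (vertices minus edges plus triangles); the networkx example graph is an illustrative choice.

```python
# Sketch: Euler characteristic as an alternating sum of cell counts.
import networkx as nx
from itertools import combinations

G = nx.karate_club_graph()

# 1-dimensional complex (graph): chi = V - E
print("graph EC:", G.number_of_nodes() - G.number_of_edges())

# Flag (clique) complex truncated at dimension 2: chi = V - E + #triangles
triangles = sum(1 for u, v, w in combinations(G.nodes, 3)
                if G.has_edge(u, v) and G.has_edge(v, w) and G.has_edge(u, w))
print("2-complex EC:", G.number_of_nodes() - G.number_of_edges() + triangles)
```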
|
Purpose: To develop a deep learning method on a nonlinear manifold to explore
the temporal redundancy of dynamic signals to reconstruct cardiac MRI data from
highly undersampled measurements.
Methods: Cardiac MR image reconstruction is modeled as general compressed
sensing (CS) based optimization on a low-rank tensor manifold. The nonlinear
manifold is designed to characterize the temporal correlation of dynamic
signals. Iterative procedures can be obtained by solving the optimization model
on the manifold, including gradient calculation, projection of the gradient to
tangent space, and retraction of the tangent space to the manifold. The
iterative procedures on the manifold are unrolled into a neural network, dubbed
Manifold-Net. The Manifold-Net is trained using in vivo data with a
retrospective electrocardiogram (ECG)-gated segmented bSSFP sequence.
Results: Experimental results at high accelerations demonstrate that the
proposed method can obtain improved reconstruction compared with a compressed
sensing (CS) method k-t SLR and two state-of-the-art deep learning-based
methods, DC-CNN and CRNN.
Conclusion: This work represents the first study unrolling the optimization
on manifolds into neural networks. Specifically, the designed low-rank manifold
provides a new technical route for applying low-rank priors in dynamic MR
imaging.
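To illustrate the three manifold operations listed in the Methods (gradient calculation, projection onto the tangent space, and retraction), here is a compact NumPy sketch of one Riemannian gradient step on a fixed-rank matrix manifold; this is a simplified stand-in for the low-rank tensor manifold and the unrolled network used in the paper.

```python
# Sketch: gradient, tangent-space projection and SVD retraction on the
# fixed-rank matrix manifold, iterated to fit a low-rank target.
import numpy as np

def project_tangent(U, V, G):
    """Project a Euclidean gradient G onto the tangent space at X = U S V^T."""
    PU, PV = U @ U.T, V @ V.T
    return PU @ G + G @ PV - PU @ G @ PV

def retract(Y, rank):
    """Retract back to the rank-r manifold by truncated SVD."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
rank, step = 5, 0.5
target = rng.standard_normal((40, rank)) @ rng.standard_normal((rank, 30))
X = retract(rng.standard_normal((40, 30)), rank)       # initial rank-r iterate

for _ in range(100):
    G = X - target                                      # gradient of 0.5*||X - target||^2
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    xi = project_tangent(U[:, :rank], Vt[:rank].T, G)   # tangent-space direction
    X = retract(X - step * xi, rank)                    # retraction to the manifold

print(np.linalg.norm(X - target) / np.linalg.norm(target))  # relative error shrinks
```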
|
Surgical training in medical school residency programs has followed the
apprenticeship model. The learning and assessment process is inherently
subjective and time-consuming. Thus, there is a need for objective methods to
assess surgical skills. Here, we use the Preferred Reporting Items for
Systematic Reviews and Meta-Analyses (PRISMA) guidelines to systematically
survey the literature on the use of Deep Neural Networks for automated and
objective surgical skill assessment, with a focus on kinematic data as putative
markers of surgical competency. There is considerable recent interest in deep
neural networks (DNN) due to the availability of powerful algorithms, multiple
datasets, some of which are publicly available, as well as efficient
computational hardware to train and host them. We have reviewed 530 papers, of
which we selected 25 for this systematic review. Based on this review, we
concluded that DNNs are powerful tools for automated, objective surgical skill
assessment using both kinematic and video data. The field would benefit from
large, publicly available, annotated datasets that are representative of
surgical trainee and expert demographics, and from multimodal data beyond kinematics
and videos.
|
The constant changes in the software industry, practices, and methodologies
impose challenges to teaching and learning current software engineering
concepts and skills. DevOps is particularly challenging because it covers
technical concepts, such as pipeline automation, and non-technical ones, such
as team roles and project management. The present study investigates a course
setup to introduce these concepts to software engineering undergraduates. We
designed the course by employing coding to associate DevOps concepts to Agile,
Lean, and Open source practices and tools. We present the main aspects of this
project-oriented DevOps course, with 240 students enrolled in it since its
first offering in 2016. We conducted an empirical study, with both a
quantitative and qualitative analysis, to evaluate this project-oriented course
setup. We collected the data from the project repositories and students'
perceptions from a questionnaire. We mined 148 repositories (corresponding to
72 projects) and obtained 86 valid responses to the questionnaire. We also
mapped the concepts that are more challenging for students to learn from
experience. The results show that first-hand experience facilitates the
comprehension of DevOps concepts and enriches class discussions. We present a
set of lessons learned, which may help professors better design and conduct
project-oriented courses to cover DevOps concepts.
|
We examine the problem of generating temporally and spatially dense 4D human
body motion. On the one hand, generative modeling has been extensively studied
as a per time-frame static fitting problem for dense 3D models such as mesh
representations, where the temporal aspect is left out of the generative model.
On the other hand, temporal generative models exist for sparse human models
such as marker-based capture representations, but have not to our knowledge
been extended to dense 3D shapes. We propose to bridge this gap with a
generative auto-encoder-based framework, which encodes morphology, global
locomotion including translation and rotation, and multi-frame temporal motion
as a single latent space vector. To assess its generalization and factorization
abilities, we train our model on a cyclic locomotion subset of AMASS,
leveraging the dense surface models it provides for an extensive set of motion
captures. Our results validate the ability of the model to reconstruct 4D
sequences of human locomotions within a low error bound, and the meaningfulness
of latent space interpolation between latent vectors representing different
multi-frame sequences and locomotion types. We also illustrate the benefits of
the approach for 4D human motion prediction of future frames from initial human
locomotion frames, showing promising abilities of our model to learn realistic
spatio-temporal features of human motion. We show that our model allows for
data completion of both spatially and temporally sparse data.
|
The results of the analysis of the deviation of the force equilibrium for
ions from the neoclassical theory prediction, calculated using the direct
measurements of the radial electric field, in view of its possible local
and nonlocal correlation with the profiles of electron, Te, and ion, Ti,
temperatures in the T-10 tokamak are presented. Local correlations are analyzed
by means of the Pearson's correlation. Nonlocal correlations are treated with
an inverse problem under the assumption of an integral equation relationship
between the deviation and Te and Ti profiles. The discharges with zero, weak
and strong auxiliary heating (electron cyclotron resonance heating) are
analyzed. It is found that the electrons contribute substantially (not less than
the ions) to the deviation of the ion equilibrium from the neoclassical theory
prediction both in the local and nonlocal models.
|
Two-dimensional transition metal dichalcogenides (TMDs) can adopt one of
several possible structures, with the most common being the trigonal prismatic and
octahedral symmetry phases. Since the structure determines the electronic
properties, being able to predict phase-preferences of TMDs from just the
knowledge of the constituent atoms is highly desired, but has remained a
long-standing problem. In this study, we combine high-throughput quantum
mechanical computations with machine learning algorithms to discover novel TMDs
and study their chemical stability, as well as their phase preferences. Our
analysis provides insights into determining physiochemical factors that dictate
the phase-preference of a TMD, identifying and even going beyond the attributes
considered by earlier researchers in predicting crystal structures. We show
that the machine learning algorithms are powerful tools that can be used not
only to find new materials with targeted properties, but also to find
connections between elemental attributes and the target property/properties
that were not previously obvious.
|
Incorporating external knowledge into Named Entity Recognition (NER) systems
has been widely studied in the generic domain. In this paper, we focus on
clinical domain where only limited data is accessible and interpretability is
important. Recent advancements in technology and the acceleration of clinical
trials have resulted in the discovery of new drugs and procedures, as well as new
medical conditions. These factors motivate building robust zero-shot
NER systems which can quickly adapt to new medical terminology. We propose an
auxiliary gazetteer model and fuse it with an NER system, which results in
better robustness and interpretability across different clinical datasets. Our
gazetteer based fusion model is data efficient, achieving +1.7 micro-F1 gains
on the i2b2 dataset using 20% of the training data, and brings +4.7 micro-F1 gains on
novel entity mentions never seen during training. Moreover, our fusion
model is able to quickly adapt to new mentions in gazetteers without
re-training and the gains from the proposed fusion model are transferable to
related datasets.
|
Using a series of detector measurements taken at different locations to
localize a source of radiation is a well-studied problem. The source of
radiation is sometimes constrained to a single point-like source, in which case
the location of the point source can be found using techniques such as maximum
likelihood. Recent advancements have shown the ability to locate point sources
in 2D and even 3D, but few have studied the effect of intervening material on
the problem. In this work we examine gamma-ray data taken from a freely moving
system and develop voxelized 3-D models of the scene using data from the
onboard LiDAR. Ray casting is used to compute the distance each gamma ray
travels through the scene material, which is then used to calculate attenuation
assuming a single attenuation coefficient for solids within the geometry.
Parameter estimation using maximum likelihood is performed to simultaneously
find the attenuation coefficient, source activity, and source position that
best match the data. Using a simulation, we validate the ability of this method
to reconstruct the true location and activity of a source, along with the true
attenuation coefficient of the structure it is inside, and then we apply the
method to measured data with sources and find good agreement.
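A toy sketch of the forward model and joint maximum-likelihood fit is given below; the voxelized LiDAR scene is replaced by a single slab obstacle, and the detector geometry, efficiency factors, and optimizer settings are illustrative assumptions rather than the paper's pipeline.

```python
# Sketch: Poisson maximum likelihood over source position, activity and a single
# attenuation coefficient, with a slab obstacle standing in for the LiDAR scene.
import numpy as np
from scipy.optimize import minimize

detectors = np.array([[3.0, y, 0.0] for y in np.linspace(-2, 2, 15)])
true_src, true_act, true_mu = np.array([0.0, 0.5, 0.0]), 5.0e4, 0.8

def slab_path_length(src, det, x_lo=1.0, x_hi=2.0):
    """Length of the src->det segment lying inside the slab x in [x_lo, x_hi]."""
    seg = det - src
    if abs(seg[0]) < 1e-12:
        return 0.0
    t = np.clip(sorted([(x_lo - src[0]) / seg[0], (x_hi - src[0]) / seg[0]]), 0, 1)
    return (t[1] - t[0]) * np.linalg.norm(seg)

def expected_counts(src, activity, mu):
    r2 = np.sum((detectors - src) ** 2, axis=1)
    d = np.array([slab_path_length(src, det) for det in detectors])
    return activity * np.exp(-mu * d) / (4 * np.pi * r2)   # counts per unit eff.*time

rng = np.random.default_rng(1)
counts = rng.poisson(expected_counts(true_src, true_act, true_mu))

def neg_log_likelihood(params):
    src, log_act, log_mu = params[:3], params[3], params[4]
    lam = expected_counts(src, np.exp(log_act), np.exp(log_mu))
    return np.sum(lam - counts * np.log(lam + 1e-30))

fit = minimize(neg_log_likelihood, x0=[0.5, 0.0, 0.0, np.log(1e4), np.log(0.3)],
               method="Nelder-Mead", options={"maxiter": 5000})
# Fitted position, activity and mu (activity and mu are only weakly separable
# in this toy geometry, so agreement with the truth is approximate).
print(fit.x[:3], np.exp(fit.x[3]), np.exp(fit.x[4]))
```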
|
Quantum classification and hypothesis testing are two tightly related
subjects, the main difference being that the former is data driven: how to
assign to quantum states $\rho(x)$ the corresponding class $c$ (or hypothesis)
is learnt from examples during training, where $x$ can be either tunable
experimental parameters or classical data "embedded" into quantum states. Does
the model generalize? This is the main question in any data-driven strategy,
namely the ability to predict the correct class even for previously unseen
states. Here we establish a link between quantum machine learning
classification and quantum hypothesis testing (state and channel
discrimination) and then show that the accuracy and generalization capability
of quantum classifiers depend on the (R\'enyi) mutual informations $I(C{:}Q)$
and $I_2(X{:}Q)$ between the quantum state space $Q$ and the classical
parameter space $X$ or class space $C$. Based on the above characterization, we
then show how different properties of $Q$ affect classification accuracy and
generalization, such as the dimension of the Hilbert space, the amount of
noise, and the amount of neglected information from $X$ via, e.g., pooling
layers. Moreover, we introduce a quantum version of the Information Bottleneck
principle that allows us to explore the various tradeoffs between accuracy and
generalization. Finally, in order to check our theoretical predictions, we
study the classification of the quantum phases of an Ising spin chain, and we
propose the Variational Quantum Information Bottleneck (VQIB) method to
optimize quantum embeddings of classical data to favor generalization.
|
Data augmentation has been successfully used in many areas of deep-learning
to significantly improve model performance. Typically data augmentation
simulates realistic variations in data in order to increase the apparent
diversity of the training set. However, for opcode-based malware analysis,
where deep learning methods are already achieving state-of-the-art performance,
it is not immediately clear how to apply data augmentation. In this paper we
study different methods of data augmentation starting with basic methods using
fixed transformations and moving to methods that adapt to the data. We propose
a novel data augmentation method based on using an opcode embedding layer
within the network and its corresponding opcode embedding matrix to perform
adaptive data augmentation during training. To the best of our knowledge this
is the first paper to carry out a systematic study of different augmentation
methods applied to opcode sequence based malware classification.
|
We consider the problem of finding the matching map between two sets of $d$
dimensional vectors from noisy observations, where the second set contains
outliers. The matching map is then an injection, which can be consistently
estimated only if the vectors of the second set are well separated. The main
result shows that, in the high-dimensional setting, a detection region of
unknown injection can be characterized by the sets of vectors for which the
inlier-inlier distance is of order at least $d^{1/4}$ and the inlier-outlier
distance is of order at least $d^{1/2}$. These rates are achieved using the
estimated matching minimizing the sum of logarithms of distances between
matched pairs of points. We also prove lower bounds establishing optimality of
these rates. Finally, we report results of numerical experiments on both
synthetic and real world data that illustrate our theoretical results and
provide further insight into the properties of the estimators studied in this
work.
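The estimator named above, the injection minimizing the sum of logarithms of distances between matched pairs, can be computed as a rectangular assignment problem; the sketch below (with illustrative dimensions and noise levels) runs it on synthetic data with outliers and reports the fraction of correctly recovered matches.

```python
# Sketch: estimate the matching map by minimizing the sum of log-distances,
# solved as a rectangular linear assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
d, n, n_outliers, noise = 64, 30, 10, 0.1

signal = rng.standard_normal((n, d))
X = signal + noise * rng.standard_normal((n, d))          # first set (inliers only)
Y_inliers = signal + noise * rng.standard_normal((n, d))   # noisy copies of the signal
Y = np.vstack([Y_inliers, 4.0 * rng.standard_normal((n_outliers, d))])
perm = rng.permutation(len(Y)); Y = Y[perm]                 # shuffle, hiding the outliers

cost = np.log(cdist(X, Y) + 1e-12)        # sum-of-log-distances criterion
rows, cols = linear_sum_assignment(cost)  # optimal injection from X into Y

true_cols = np.argsort(perm)[:n]          # where each inlier landed after shuffling
print("correctly matched:", np.mean(cols == true_cols))
```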
|
Nonlinear Compton scattering is an inelastic scattering process where a
photon is emitted due to the interaction between an electron and an intense
laser field. With the development of X-ray free-electron lasers, the intensity
of X-ray lasers has been greatly enhanced, and the signal from X-ray nonlinear Compton
scattering is no longer weak. Although nonlinear Compton scattering by an
initially free electron has been thoroughly investigated, the mechanism of
nonrelativistic nonlinear Compton scattering of X-ray photons by bound
electrons remains unclear. Here, we present a frequency-domain formulation based
on nonperturbative quantum electrodynamics to study nonlinear Compton
scattering of two photons off a bound electron inside an atom in a strong X-ray
laser field. In contrast to previous theoretical works, our results clearly
reveal the existence of anomalous redshift phenomenon observed experimentally
by Fuchs et al. (Nat. Phys. 11, 964 (2015)) and suggest its origin as the
binding energy of the electron as well as the momentum transfer from incident
photons to the electron during the scattering process. Our work builds a bridge
between intense-laser atomic physics and Compton scattering process that can be
used to study atomic structure and dynamics at high laser intensities.
|
Two-dimensional (2D) ferromagnetic and ferroelectric materials attract
unprecedented attention due to the spontaneous-symmetry-breaking induced novel
properties and multifarious potential applications. Here we systematically
investigate a large family (148) of 2D MGeX3 (M = metal elements, X =
O/S/Se/Te) by means of the high-throughput first-principles calculations, and
focus on their possible ferroic properties including ferromagnetism,
ferroelectricity, and ferroelasticity. We discover eight stable 2D ferromagnets
including five semiconductors and three half-metals, 21 2D antiferromagnets,
and 11 stable 2D ferroelectric semiconductors including two multiferroic
materials. Particularly, MnGeSe3 and MnGeTe3 are predicted to be
room-temperature 2D ferromagnetic half metals with Tc of 490 and 308 K,
respectively. It is probably the first time that ferroelectricity has been
uncovered in the 2D MGeX3 family; it derives from the spontaneous symmetry
breaking induced by unexpected displacements of Ge-Ge atomic pairs. We also
reveal that the electric polarizations are proportional to the ratio of the
electronegativities of the X and M atoms, and that group-IVB metal elements are
highly favored for 2D ferroelectricity. Magnetic tunnel junctions and
water-splitting photocatalysts based on 2D ferroic MGeX3 are proposed as examples of wide
potential applications. The atlas of ferroicity in 2D MGeX3 materials will spur
great interest in experimental studies and would lead to diverse applications.
|
Understanding user dynamics in online communities has become an active
research topic and can provide valuable insights for human behavior analysis
and community management. In this work, we investigate the "bandwagon fan"
phenomenon, a special case of user dynamics, to provide a large-scale
characterization of online fan loyalty in the context of professional sports
teams. We leverage the existing structure of NBA-related discussion forums on
Reddit, investigate the general bandwagon patterns, and trace the behavior of
bandwagon fans to capture latent behavioral characteristics. We observe that
better teams attract more bandwagon fans, but they do not necessarily come from
weak teams. Our analysis of bandwagon fan flow also shows different trends for
different teams, as the playoff season progresses. Furthermore, we compare
bandwagon users with non-bandwagon users in terms of their activity and
language usage. We find that bandwagon users write shorter comments but receive
better feedback, and use words that show less attachment to their affiliated
teams. Our observations allow for more effective identification of bandwagon
users and prediction of users' future bandwagon behavior in a season, as
demonstrated by the significant improvement over the baseline method in our
evaluation results.
|