The observation by BESIII and LHCb of states with hidden charm and open
strangeness ($c\bar c q\bar s$) presents new opportunities for the development
of a global model of heavy-quark exotics. Here we extend the dynamical diquark
model to encompass such states, using the same values of Hamiltonian parameters
previously obtained from the nonstrange and hidden-strange sectors. The large
mass splitting between $Z_{cs}(4000)$ and $Z_{cs}(4220)$ suggests substantial
SU(3)$_{\rm flavor}$ mixing between all $J^P \! = \! 1^+$ states, while their
average mass compared to that of other sectors offers a direct probe of flavor
octet-singlet mixing among exotics. We also explore the inclusion of
$\eta$-like exchanges within the states, and find their effects to be quite
limited. In addition, using the same diquark-mass parameters, we find
$P_c(4312)$ and $P_{cs}(4459)$ to fit well as corresponding nonstrange and
open-strange pentaquarks.
|
Appearance-based detectors achieve remarkable performance on common scenes,
but tend to fail in scenarios that lack training data. Geometric motion
segmentation algorithms, however, generalize to novel scenes, but have yet to
achieve performance comparable to appearance-based ones, due to noisy motion
estimates and degenerate motion configurations. To combine the best of both
worlds, we propose a modular network, whose architecture is motivated by a
geometric analysis of what independent object motions can be recovered from an
egomotion field. It takes two consecutive frames as input and predicts
segmentation masks for the background and multiple rigidly moving objects,
which are then parameterized by 3D rigid transformations. Our method achieves
state-of-the-art performance for rigid motion segmentation on KITTI and Sintel.
The inferred rigid motions lead to a significant improvement for depth and
scene flow estimation. At the time of submission, our method ranked 1st on
KITTI scene flow leaderboard, out-performing the best published method (scene
flow error: 4.89% vs 6.31%).
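
As a side note on the final step mentioned above, fitting a 3D rigid transformation to an already-segmented object reduces to a least-squares (Kabsch/Procrustes) problem over corresponding 3D points. The sketch below is a generic illustration of that fit with invented variable names and toy data; it is not the paper's pipeline, whose details the abstract does not give.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid motion (R, t) with dst_i ~ R @ src_i + t.

    src, dst: (N, 3) arrays of corresponding 3D points of one segmented
    object in two consecutive frames (hypothetical input)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # keep det(R) = +1
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# toy check: a known rotation about z plus a translation is recovered
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
pts = np.random.default_rng(0).random((100, 3))
R_est, t_est = fit_rigid_transform(pts, pts @ R_true.T + t_true)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```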
|
The planning of whole-body motion and step time for bipedal locomotion is
formulated as a model predictive control (MPC) problem, in which a sequence of
optimization problems needs to be solved online. Since directly solving these
problems is extremely time-consuming, we propose a predictive gait synthesizer
to offer immediate solutions. Based on the full-dimensional model, a library of
gaits with different speeds and periods is first constructed offline. Then the
proposed gait synthesizer generates real-time gaits at 1kHz by synthesizing the
gait library based on the online prediction of centroidal dynamics. We prove
that the constructed MPC problem can ensure the uniform ultimate boundedness
(UUB) of the CoM states and show that our proposed gait synthesizer can provide
feasible solutions to the MPC optimization problems. Simulation and
experimental results on a bipedal robot with 8 degrees of freedom (DoF) are
provided to show the performance and robustness of this approach.
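
The abstract does not describe how the gait library is blended at run time; purely as a hypothetical illustration of library-based gait synthesis (the grid values, array shapes, function names, and the bilinear blending below are all assumptions, not the authors' implementation), such a lookup might look like:

```python
import numpy as np

# hypothetical offline gait library: joint trajectories indexed by
# (walking speed [m/s], step period [s]); each entry has shape (T, n_dof)
speeds = np.array([0.0, 0.3, 0.6, 0.9])
periods = np.array([0.3, 0.4, 0.5])
library = np.random.default_rng(0).random((len(speeds), len(periods), 50, 8))

def synthesize_gait(speed, period):
    """Bilinearly blend the four neighbouring library gaits (illustrative only)."""
    i = int(np.clip(np.searchsorted(speeds, speed) - 1, 0, len(speeds) - 2))
    j = int(np.clip(np.searchsorted(periods, period) - 1, 0, len(periods) - 2))
    a = np.clip((speed - speeds[i]) / (speeds[i + 1] - speeds[i]), 0.0, 1.0)
    b = np.clip((period - periods[j]) / (periods[j + 1] - periods[j]), 0.0, 1.0)
    return ((1 - a) * (1 - b) * library[i, j] + a * (1 - b) * library[i + 1, j]
            + (1 - a) * b * library[i, j + 1] + a * b * library[i + 1, j + 1])

gait = synthesize_gait(0.45, 0.42)    # blended (50, 8) joint-trajectory table
```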
|
Meshfree discretizations of state-based peridynamic models are attractive due
to their ability to naturally describe fracture of general materials. However,
two factors conspire to prevent meshfree discretizations of state-based
peridynamics from converging to corresponding local solutions as resolution is
increased: quadrature error prevents an accurate prediction of bulk mechanics,
and the lack of an explicit boundary representation presents challenges when
applying traction loads. In this paper, we develop a reformulation of the
linear peridynamic solid (LPS) model to address these shortcomings, using
improved meshfree quadrature, a reformulation of the nonlocal dilatation, and a
consistent handling of the nonlocal traction condition to construct a model
with rigorous accuracy guarantees. In particular, these improvements are
designed to enforce discrete consistency in the presence of evolving fractures,
whose {\it a priori} unknown locations render consistent treatment difficult. In
the absence of fracture, when a corresponding classical continuum mechanics
model exists, our improvements provide asymptotically compatible convergence to
corresponding local solutions, eliminating surface effects and issues with
traction loading which have historically plagued peridynamic discretizations.
When fracture occurs, our formulation automatically provides a sharp
representation of the fracture surface by breaking bonds, avoiding the loss of
mass. We provide rigorous error analysis and demonstrate convergence for a
number of benchmarks, including manufactured solutions, free-surface,
nonhomogeneous traction loading, and composite material problems. Finally, we
validate simulations of brittle fracture against a recent experiment of dynamic
crack branching in soda-lime glass, providing evidence that the scheme yields
accurate predictions for practical engineering problems.
|
Nowadays, target recognition techniques play an important role in many
fields. However, current methods based on target image information suffer from
the influence of image quality and the time cost of image reconstruction. In
this paper, we propose a novel imaging-free target recognition method combining
ghost imaging (GI) and generative adversarial networks (GAN). Based on the
mechanism of GI, a sequence of random speckle patterns is employed to illuminate
the target, and a bucket detector without spatial resolution is utilized to
receive the echo signal. The bucket signal sequence formed after continuous
detections is arranged into a bucket signal array, which serves as a sample for
the GAN. A conditional GAN is then used to map the bucket signal array to the
target category. In practical application, the speckle sequence used in the
training step is employed to illuminate the target, and the resulting bucket
signal array is fed into the GAN for recognition. The proposed method alleviates
the problems of conventional recognition methods based on target image
information, and provides a certain turbulence-free capability. Extensive
experiments show that the proposed method
achieves promising performance.
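
For readers unfamiliar with the ghost-imaging measurement model, the following minimal sketch shows how a bucket signal sequence can be formed from random speckle patterns and reshaped into an array; the shapes and data are toy stand-ins, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((32, 32))             # stand-in for the target's reflectivity
speckles = rng.random((1024, 32, 32))     # sequence of random illumination patterns

# each bucket value is the total light collected by a single-pixel detector
# (no spatial resolution) under one speckle pattern
bucket = (speckles * target).sum(axis=(1, 2))       # shape: (1024,)

# the detections are rearranged into a 2D "bucket signal array" that can be
# treated as one sample for a conditional GAN classifier
bucket_array = bucket.reshape(32, 32)
```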
|
The problem of discriminating between many quantum channels with certainty is
analyzed under the assumption of prior knowledge of algebraic relations among
possible channels. It is shown, by explicit construction of a novel family of
quantum algorithms, that when the set of possible channels faithfully
represents a finite subgroup of SU(2) (e.g., $C_n, D_{2n}, A_4, S_4, A_5$) the
recently-developed techniques of quantum signal processing can be modified to
constitute subroutines for quantum hypothesis testing. These algorithms, for
group quantum hypothesis testing (G-QHT), intuitively encode discrete
properties of the channel set in SU(2) and improve query complexity at least
quadratically in $n$, the size of the channel set and group, compared to
na\"ive repetition of binary hypothesis testing. Intriguingly, performance is
completely defined by explicit group homomorphisms; these in turn inform simple
constraints on polynomials embedded in unitary matrices. These constructions
demonstrate a flexible technique for mapping questions in quantum inference to
the well-understood subfields of functional approximation and discrete algebra.
Extensions to larger groups and noisy settings are discussed, as well as paths
by which improved protocols for quantum hypothesis testing against structured
channel sets have application in the transmission of reference frames, proofs
of security in quantum cryptography, and algorithms for property testing.
|
The set of associative and commutative hypercomplex numbers, called the
perfect hypercomplex algebra (PHA), is investigated. Necessary and sufficient
conditions for an algebra to be a PHA via the semi-tensor product (STP) of
matrices are reviewed. The zero set is defined for non-invertible hypercomplex
numbers in a given PHA, and a characteristic function is proposed for
calculating the zero set. Then PHAs of different dimensions are considered.
First, $2$-dimensional PHAs are treated as examples and their zero sets are
calculated. Second, all
the $3$-dimensional PHAs are obtained and the corresponding zero sets are
investigated. Third, $4$-dimensional or even higher dimensional PHAs are also
considered. Finally, matrices over a pre-assigned PHA, called perfect
hypercomplex matrices (PHMs), are considered. Their properties are also
investigated.
|
We prove a family of identities, expressing generating functions of powers of
characteristic polynomials of permutations as finite or infinite products.
These generalize formulae first obtained in a study of the geometry/topology of
symmetric products of real/algebraic tori. The proof uses formal power series
expansions of plethystic exponentials, and has been motivated by some recent
applications of these combinatorial tools in supersymmetric gauge and string
theories. Since the methods are elementary, we tried to be self-contained, and
relate the results to other topics such as the q-binomial theorem, and the cycle index and
Molien series for the symmetric group.
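
For reference, the plethystic exponential used in such expansions is, in its one-variable form for a formal power series $f$ with $f(0)=0$,
$$\mathrm{PE}[f](q)\;=\;\exp\!\left(\sum_{n\ge 1}\frac{f(q^{n})}{n}\right),
\qquad\text{so that}\qquad \mathrm{PE}[q^{k}](q)\;=\;\frac{1}{1-q^{k}},$$
which is how sums over powers of variables turn into the finite or infinite products mentioned above.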
|
The Complete Calibration of the Color-Redshift Relation (C3R2) survey is
obtaining spectroscopic redshifts in order to map the relation between galaxy
color and redshift to a depth of i ~ 24.5 (AB). The primary goal is to enable
sufficiently accurate photometric redshifts for Stage IV dark energy projects,
particularly Euclid and the Roman Space Telescope, which are designed to
constrain cosmological parameters through weak lensing. We present 676 new
high-confidence spectroscopic redshifts obtained by the C3R2 survey in the
2017B-2019B semesters using the DEIMOS, LRIS, and MOSFIRE multi-object
spectrographs on the Keck telescopes. Combined with the 4454 redshifts
previously published by this project, the C3R2 survey has now obtained and
published 5130 high-quality galaxy spectra and redshifts. If we restrict
consideration to only the 0.2 < z(phot) < 2.6 range of interest for the Euclid
cosmological goals, then with the current data release C3R2 has increased the
spectroscopic redshift coverage of the Euclid color space from 51% (as reported
by Masters et al. 2015) to the current 91%. Once completed and combined with
extensive data collected by other spectroscopic surveys, C3R2 should provide
the spectroscopic calibration set needed to enable photometric redshifts to
meet the cosmology requirements for Euclid, and make significant headway toward
solving the problem for Roman.
|
Connectivity maintenance is crucial for the real-world deployment of
multi-robot systems, as it ultimately allows the robots to communicate,
coordinate and perform tasks in a collaborative way. A connectivity maintenance
controller must keep the multi-robot system connected independently from the
system's mission and in the presence of undesired real-world effects such as
communication delays, model errors, and computational time delays, among
others. In this paper we present the implementation, on a real robotic setup,
of a connectivity maintenance control strategy based on Control Barrier
Functions. During experimentation, we found that the presence of communication
delays has a significant impact on the performance of the controlled system,
with respect to the ideal case. We propose a heuristic to counteract the
effects of communication delays, and we verify its efficacy both in simulation
and with physical robot experiments.
|
Both children and adults have been shown to benefit from the integration of
multisensory and sensorimotor enrichment into pedagogy. For example,
integrating pictures or gestures into foreign language (L2) vocabulary learning
can improve learning outcomes relative to unisensory learning. However, whereas
adults seem to benefit to a greater extent from sensorimotor enrichment such as
the performance of gestures in contrast to multisensory enrichment with
pictures, this is not the case in elementary school children. Here, we compared
multisensory- and sensorimotor-enriched learning in an intermediate age group
that falls between the age groups tested in previous studies (elementary school
children and young adults), in an attempt to determine the developmental time
point at which children's responses to enrichment mature from a child-like
pattern into an adult-like pattern. Twelve-year-old and fourteen-year-old
German children were trained over 5 consecutive days on auditorily-presented,
concrete and abstract, Spanish vocabulary. The vocabulary was learned under
picture-enriched, gesture-enriched, and non-enriched (auditory-only)
conditions. The children performed vocabulary recall and translation tests at 3
days, 2 months, and 6 months post-learning. Both picture and gesture enrichment
interventions were found to benefit children's L2 learning relative to
non-enriched learning up to 6 months post-training. Interestingly,
gesture-enriched learning was even more beneficial than picture-enriched
learning for the fourteen-year-olds, while the twelve-year-olds benefitted
equivalently from learning enriched with pictures and gestures. These findings
provide evidence for opting to integrate gestures rather than pictures into L2
pedagogy starting at fourteen years of age.
|
A classical approach to the restricted three-body problem is to analyze the
dynamics of the massless body in the synodic reference frame. A different
approach is represented by the perturbative treatment: in particular the
averaged problem of a mean-motion resonance allows one to investigate the long-term
behavior of the solutions through a suitable approximation that focuses on a
particular region of the phase space. In this paper, we intend to bridge a gap
between the two approaches in the specific case of mean-motion resonant
dynamics, establish the limit of validity of the averaged problem, and take
advantage of its results in order to compute trajectories in the synodic
reference frame. After the description of each approach, we develop a rigorous
treatment of the averaging process, estimate the size of the transformation and
prove that the averaged problem is a suitable approximation of the restricted
three-body problem as long as the solutions are located outside the Hill's
sphere of the secondary. In such a case, a rigorous theorem of stability over
finite but large timescales can be proven. We establish that a solution of the
averaged problem provides an accurate approximation of the trajectories in the
synodic reference frame within a finite time that depends on the minimal
distance to the Hill's sphere of the secondary. The last part of this work is
devoted to the co-orbital motion (i.e., the dynamics in 1:1 mean-motion
resonance) in the circular-planar case. In this case, an interpretation of the
solutions of the averaged problem in the synodic reference frame is detailed
and a method that allows one to compute co-orbital trajectories is presented.
|
In a graph G, the cardinality of the smallest ordered set of vertices that
distinguishes every element of V (G) (resp. E(G)) is called the vertex (resp.
edge) metric dimension of G. In [16] it was shown that both vertex and edge
metric dimension of a unicyclic graph G always take values from just two
explicitly given consecutive integers that are derived from the structure of
the graph. A natural problem that arises is to determine under what conditions
these dimensions take each of the two possible values. In this paper, for each
of these two metric dimensions, we characterize three graph configurations and
prove that the dimension takes the greater of the two possible values if and only if the
graph contains at least one of these configurations. One of these
configurations is the same for both dimensions, while the other two are
specific for each of them. This enables us to establish the exact value of the
metric dimensions for a unicyclic graph and also to characterize when each of
these two dimensions is greater than the other one.
|
In this work we consider the generalized zeta function method to obtain
temperature corrections to the vacuum (Casimir) energy density, at zero
temperature, associated with quantum vacuum fluctuations of a scalar field
subjected to a helix boundary condition and whose modes propagate in
(3+1)-dimensional Euclidean spacetime. We find closed and analytical
expressions for both the two-point heat kernel function and free energy density
in the massive and massless scalar field cases. In particular, for the massless
scalar field case, we also calculate the thermodynamic quantities of internal
energy density and entropy density, with their corresponding high- and
low-temperature limits. We show that the temperature correction term in the
free energy density must undergo a finite renormalization, by subtracting the
scalar thermal blackbody radiation contribution, in order to provide the
correct classical limit at high temperatures. We check that, at low
temperature, the entropy density vanishes as the temperature goes to zero, in
accordance with the third law of thermodynamics. We also point out that, at low
temperatures, the dominant term in the free energy and internal energy
densities is the vacuum energy density at zero temperature. Finally, we also
show that the pressure obeys an equation of state.
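
For orientation, the thermodynamic quantities mentioned follow from the free energy density $\mathcal{F}(T)$ in the usual way,
$$ s \;=\; -\,\frac{\partial \mathcal{F}}{\partial T}, \qquad u \;=\; \mathcal{F} + T s, $$
so the statement that the entropy density vanishes as $T \to 0$ is read off directly from the low-temperature behavior of $\mathcal{F}$.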
|
The COVID-19 pandemic has been damaging to the lives of people all around the
world. Accompanied by the pandemic is an infodemic, an abundant and
uncontrolled spreading of potentially harmful misinformation. The infodemic may
severely change the pandemic's course by interfering with public health
interventions such as wearing masks, social distancing, and vaccination. In
particular, the impact of the infodemic on vaccination is critical because it
holds the key to reverting to pre-pandemic normalcy. This paper presents
findings from a global survey on the extent of worldwide exposure to the
COVID-19 infodemic, assesses different populations' susceptibility to false
claims, and analyzes its association with vaccine acceptance. Based on
responses gathered from over 18,400 individuals from 40 countries, we find a
strong association between perceived believability of misinformation and
vaccination hesitancy. Additionally, our study shows that only half of the
online users exposed to rumors might have seen the fact-checked information.
Moreover, depending on the country, between 6% and 37% of individuals
considered these rumors believable. Our survey also shows that poorer regions
are more susceptible to encountering and believing COVID-19 misinformation. We
discuss implications of our findings on public campaigns that proactively
spread accurate information to countries that are more susceptible to the
infodemic. We also highlight fact-checking platforms' role in better
identifying and prioritizing claims that are perceived to be believable and
have wide exposure. Our findings give insights into better handling of risk
communication during the initial phase of a future pandemic.
|
We consider numerical solutions for the Allen-Cahn equation with standard
double well potential and periodic boundary conditions. Surprisingly, it is
found that, using standard numerical discretizations with high precision,
computational solutions may converge to completely incorrect steady states.
This happens for very smooth initial data and state-of-the-art algorithms. We
analyze this phenomenon and showcase the resolution of this problem by a new
symmetry-preserving filter technique. We develop a new theoretical framework
and rigorously prove the convergence to steady states for the filtered
solutions.
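
For context, a generic explicit finite-difference discretization of the Allen-Cahn equation with the standard double-well potential, in the common scaling $u_t = \varepsilon^2 \Delta u + u - u^3$, on a periodic domain is sketched below. This is a plain textbook scheme of the type discussed, not the authors' method or their symmetry-preserving filter.

```python
import numpy as np

def allen_cahn_1d(u0, eps=0.01, dt=1e-4, steps=50000, length=2 * np.pi):
    """Explicit finite differences for u_t = eps^2 u_xx + u - u^3 on a
    periodic 1D domain (a generic scheme, not the paper's method)."""
    u = u0.copy()
    dx = length / u.size
    for _ in range(steps):
        lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2   # periodic Laplacian
        u = u + dt * (eps**2 * lap + u - u**3)
    return u

x = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
u_steady = allen_cahn_1d(np.sin(x))   # smooth initial data relaxing toward a steady state
```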
|
Strong coupling between light and matter is the foundation of promising
quantum photonic devices such as deterministic single photon sources, single
atom lasers and photonic quantum gates, which consist of an atom and a photonic
cavity. Unlike atom-based systems, a strong coupling unit based on an
emitter-plasmonic nanocavity system has the potential to bring these devices to
the microchip scale at ambient conditions. However, efficiently and precisely
positioning a single or a few emitters into a plasmonic nanocavity is
challenging. In addition, placing a strong coupling unit on a designated
substrate location is a demanding task. Here, fluorophore-modified DNA strands
are utilized to drive the formation of particle-on-film plasmonic nanocavities
and simultaneously integrate the fluorophores into the high field region of the
nanocavities. High cavity yield and fluorophore coupling yield are
demonstrated. This method is then combined with e-beam lithography to position
the strong coupling units on designated locations of a substrate. Furthermore,
a high correlation between the electronic transition of the fluorophore and the
cavity resonance is observed, implying that more vibrational modes may be involved.
Our system makes strong coupling units more practical on the microchip scale
and at ambient conditions and provides a stable platform for investigating
fluorophore-plasmonic nanocavity interaction.
|
Cloud-based services are surging into popularity in recent years. However,
outages, i.e., severe incidents that always impact multiple services, can
dramatically affect user experience and incur severe economic losses. Locating
the root-cause service, i.e., the service that contains the root cause of the
outage, is a crucial step to mitigate the impact of the outage. In current
industrial practice, this is generally performed in a bootstrap manner and
largely depends on human efforts: the service that directly causes the outage
is identified first, and the suspected root cause is traced back manually from
service to service during diagnosis until the actual root cause is found.
Unfortunately, production cloud systems typically contain a large number of
interdependent services. Such a manual root cause analysis is often
time-consuming and labor-intensive. In this work, we propose COT, the first
outage triage approach that considers the global view of service correlations.
COT mines the correlations among services from outage diagnosis data. After
learning from historical outages, COT can infer the root cause of emerging ones
accurately. We implement COT and evaluate it on a real-world dataset containing
one year of data collected from Microsoft Azure, one of the representative
cloud computing platforms in the world. Our experimental results show that COT
can reach a triage accuracy of 82.1%~83.5%, which outperforms the
state-of-the-art triage approach by 28.0%~29.7%.
|
Voice Conversion (VC) emerged as a significant domain of research in the
field of speech synthesis in recent years due to its emerging application in
voice-assisting technology, automated movie dubbing, and speech-to-singing
conversion to name a few. VC basically deals with the conversion of vocal style
of one speaker to another speaker while keeping the linguistic contents
unchanged. The VC task is performed through a three-stage pipeline consisting of
speech analysis, speech feature mapping, and speech reconstruction. Nowadays,
Generative Adversarial Network (GAN) models are widely used for speech
feature mapping from source to target speaker. In this paper, we propose an
adaptive learning-based GAN model called ALGAN-VC for efficient one-to-one
VC of speakers. Our ALGAN-VC framework combines several approaches to improve
the speech quality and voice similarity between source and target speakers. The
model incorporates a Dense Residual Network (DRN)-like architecture into the
generator network for efficient speech feature learning and source-to-target
speech feature conversion. We also integrate an adaptive learning mechanism to
compute the loss function for the proposed model. Moreover, we use a boosted
learning rate approach to enhance the learning capability of the proposed
model. The model is trained by using both forward and inverse mapping
simultaneously for a one-to-one VC. The proposed model is tested on Voice
Conversion Challenge (VCC) 2016, 2018, and 2020 datasets as well as on our
self-prepared speech dataset, which has been recorded in Indian regional
languages and in English. A subjective and objective evaluation of the
generated speech samples indicated that the proposed model elegantly performed
the voice conversion task by achieving high speaker similarity and adequate
speech quality.
|
Software systems have been continuously evolved and delivered with high
quality due to the widespread adoption of automated tests. A recurring issue
hurting this scenario is the presence of flaky tests, i.e., test cases that may pass
or fail non-deterministically. A promising approach, though one still lacking
empirical evidence, is to collect static data about automated tests and use it to
predict their flakiness. In this paper, we conducted an empirical study to
assess the use of code identifiers to predict test flakiness. To do so, we
first replicate most parts of the previous study of Pinto~et~al.~(MSR~2020).
This replication was extended by using a different ML Python platform
(Scikit-learn) and adding different learning algorithms in the analyses. Then,
we validated the performance of trained models using datasets with other flaky
tests and from different projects. We successfully replicated the results of
Pinto~et~al.~(2020), with minor differences using Scikit-learn; different
algorithms had performance similar to the ones used previously. Concerning the
validation, we noticed that the recall of the trained models was smaller, and
classifiers presented a varying range of decreases. This was observed in both
intra-project and inter-projects test flakiness prediction.
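
As a hedged illustration of the kind of identifier-based prediction pipeline described above (the identifiers, labels, and model choice below are invented stand-ins, not the replication's actual features or code), a Scikit-learn classifier over test-code identifiers can be set up as follows:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# toy stand-in data: whitespace-joined code identifiers per test, plus a flaky label
tests = [
    "assertTimeout sleep retryCount pollInterval",
    "assertEquals sum add listOfItems",
    "awaitTermination threadPool futureGet timeoutMillis",
    "toString builder append formatDate",
]
labels = [1, 0, 1, 0]  # 1 = flaky, 0 = not flaky

# vectorize the identifiers as token counts and train a classifier on them
model = make_pipeline(CountVectorizer(lowercase=False),
                      RandomForestClassifier(n_estimators=100, random_state=0))
recall = cross_val_score(model, tests, labels, cv=2, scoring="recall")
```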
|
We demonstrate a method to double the collection efficiency in Laser Tweezers
Raman Spectroscopy (LTRS) by collecting both the forward and back-scattered
light in a single-shot multitrack measurement. Our method can collect signals
at different sample volumes, granting both the pinpoint spatial selectivity of
confocal Raman and the bulk sensitivity of non-confocal Raman simultaneously.
Further, we show that our approach allows for reduced detector integration
time and laser power. Thus, our method will enable the monitoring of biological
samples sensitive to high intensities for longer times. Additionally, we
demonstrate that by a simple modification, we can add polarization sensitivity
and retrieve extra biochemical information.
|
Although there are several proposals of relativistic spin in the literature,
the recognition of intrinsicality as a key characteristic for the definition of
this concept is responsible for selecting a single tensor operator that
adequately describes such a quantity. This intrinsic definition does not
correspond to Wigner's spin operator, which is the definition that is widely
adopted in the relativistic quantum information theory literature. Here, the
differences between the predictions obtained considering the intrinsic spin and
Wigner's spin are investigated. The measurements involving the intrinsic spin
are modeled by means of the interaction with an electromagnetic field in a
relativistic Stern-Gerlach setup.
|
This study uses an innovative measure, the Semantic Brand Score, to assess
the interest of stakeholders in different company core values. Among others, we
focus on corporate social responsibility (CSR) core value statements, and on
the attention they receive from five categories of stakeholders (customers,
company communication teams, employees, associations and media). Combining big
data methods and tools of Social Network Analysis and Text Mining, we analyzed
about 58,000 Italian tweets and found that different stakeholders have
different prevailing interests. CSR gets much less attention than expected.
Core values related to customers and employees are in the foreground.
|
Given $E_0, E_1, F_0, F_1, E$ rearrangement invariant function spaces, $a_0$,
$a_1$, $b_0$, $b_1$, $b$ slowly varying functions and $0< \theta_0<\theta_1<1$,
we characterize the interpolation spaces $$(\overline{X}^{\mathcal
R}_{\theta_0,b_0,E_0,a_0,F_0}, \overline{X}^{\mathcal R}_{\theta_1,
b_1,E_1,a_1,F_1})_{\theta,b,E},\quad (\overline{X}^{\mathcal L}_{\theta_0,
b_0,E_0,a_0,F_0}, \overline{X}^{\mathcal
L}_{\theta_1,b_1,E_1,a_1,F_1})_{\theta,b,E}$$ and $$(\overline{X}^{\mathcal
R}_{\theta_0,b_0,E_0,a_0,F_0}, \overline{X}^{\mathcal L}_{\theta_1,
b_1,E_1,a_1,F_1})_{\theta,b,E},\quad (\overline{X}^{\mathcal L}_{\theta_0,
b_0,E_0,a_0,F_0}, \overline{X}^{\mathcal
R}_{\theta_1,b_1,E_1,a_1,F_1})_{\theta,b,E},$$ for all possible values of
$\theta\in[0,1]$. Applications to interpolation identities for grand and small
Lebesgue spaces, Gamma spaces and $A$ and $B$-type spaces are given.
|
Given the long timeline for developing good-quality quantum processing
units, it is time to rethink the approach to advancing quantum computing
research. Rather than waiting for quantum hardware technologies to mature, we
need to start assessing in tandem the impact of the occurrence of quantum
computing in various scientific fields. However, to this purpose, we need to
use a complementary but quite different approach from the one proposed by the NISQ
vision, which is heavily focused on and burdened by the engineering challenges.
That is why we propose and advocate the PISQ approach: Perfect Intermediate
Scale Quantum computing based on the already known concept of perfect qubits.
This will allow researchers to focus much more on the development of new
applications by defining algorithms in terms of perfect qubits and evaluating
them on quantum computing simulators executed on supercomputers. It is
not a long-term solution, but it already allows universities to carry out research on
quantum logic and algorithms, and companies to start developing their
internal know-how on quantum solutions.
|
The main goal of this paper is to prove $L^1$-comparison and contraction
principles for weak solutions (in the sense of distributions) of Hele-Shaw flow
with a linear drift. The flow is considered with a general reaction term,
including the Lipschitz continuous case, and subject to mixed homogeneous
boundary conditions: Dirichlet and Neumann. Our approach combines
DiPerna-Lions renormalization-type arguments with Kruzhkov's device of doubling and
de-doubling variables. The $L^1$-contraction principle afterwards allows us to
handle the problem in the general framework of nonlinear semigroup theory in
$L^1$, thus taking advantage of this strong theory to study existence,
uniqueness, comparison of weak solutions, $L^1$-stability as well as many
further questions.
|
This paper presents a novel task together with a new benchmark for detecting
generic, taxonomy-free event boundaries that segment a whole video into chunks.
Conventional work in temporal video segmentation and action detection focuses
on localizing pre-defined action categories and thus does not scale to generic
videos. Cognitive Science has known since the last century that humans consistently
segment videos into meaningful temporal chunks. This segmentation happens
naturally, without pre-defined event categories and without being explicitly
asked to do so. Here, we repeat these cognitive experiments on mainstream CV
datasets; with our novel annotation guideline which addresses the complexities
of taxonomy-free event boundary annotation, we introduce the task of Generic
Event Boundary Detection (GEBD) and the new benchmark Kinetics-GEBD. Our
Kinetics-GEBD has the largest number of boundaries (e.g., 32 times that of
ActivityNet and 8 times that of EPIC-Kitchens-100), which are in-the-wild,
taxonomy-free, cover generic event
change, and respect human perception diversity. We view GEBD as an important
stepping stone towards understanding the video as a whole, and believe it has
been previously neglected due to a lack of proper task definition and
annotations. Through experiment and human study we demonstrate the value of the
annotations. Further, we benchmark supervised and un-supervised GEBD approaches
on the TAPOS dataset and our Kinetics-GEBD. We release our annotations and
baseline codes at CVPR'21 LOVEU Challenge:
https://sites.google.com/view/loveucvpr21.
|
Results. We illustrate our profile-fitting technique and present the K\,{\sc
i} velocity structure of the dense ISM along the paths to all targets. As a
validation test of the dust map, we show comparisons of the distances to
several reconstructed clouds with recent distance assignments based on
different techniques. Target star extinctions estimated by integration in the
3D map are compared with their K\,{\sc i} 7699 A absorptions and the degree of
correlation is found comparable to the one between the same K\,{\sc i} line and
the total hydrogen column for stars distributed over the sky that are part of a
published high resolution survey. We show images of the updated dust
distribution in a series of vertical planes in the Galactic longitude interval
150-182.5 deg and our estimated assignments of radial velocities to the opaque
regions. Most clearly defined K\,{\sc i} absorptions may be assigned to a dense
dust cloud between the Sun and the target star. It appeared relatively
straightforward to find a velocity pattern consistent with all absorptions and
ensuring coherence between adjacent lines of sight, with the exception of a few
weak lines. We compare our results with recent determinations of velocities of
several clouds and find good agreement. These results demonstrate that the
extinction-K\,{\sc i} relationship is tight enough to allow linking the radial
velocity of the K\,{\sc i} lines to the dust clouds seen in 3D, and that their
combination may be a valuable tool in building a 3D kinetic structure of the
dense ISM. We discuss limitations and perspectives for this technique.
|
Aluminum scandium nitride alloy (Al1-xScxN) is regarded as a promising
material for high-performance acoustic devices used in wireless communication
systems. Phonon scattering and heat conduction processes govern the energy
dissipation in acoustic resonators, ultimately determining their performance
quality. This work reports, for the first time, on phonon scattering processes
and thermal conductivity in Al1-xScxN alloys with the Sc content (x) up to
0.26. The thermal conductivity measured presents a descending trend with
increasing x. Temperature-dependent measurements show an increase in thermal
conductivity as the temperature increases at temperatures below 200K, followed
by a plateau at higher temperatures (T> 200K). Application of a virtual crystal
phonon conduction model allows us to elucidate the effects of boundary and
alloy scattering on the observed thermal conductivity behaviors. We further
demonstrate that the alloy scattering is caused mainly by strain-field
difference, and less by the atomic mass difference between ScN and AlN, which
is in contrast to the well-studied Al1-xGaxN and SixGe1-x alloy systems where
atomic mass difference dominates the alloy scattering. This work provides
quantitative knowledge of phonon scattering and thermal
conductivity in Al1-xScxN, paving the way for future investigation of materials
and design of acoustic devices.
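
For orientation, virtual-crystal treatments of alloy thermal conductivity commonly combine scattering rates by Matthiessen's rule, with boundary and alloy (point-defect) terms of the schematic form
$$ \tau^{-1}(\omega)\;=\;\tau_{U}^{-1}+\tau_{B}^{-1}+\tau_{A}^{-1},\qquad
\tau_{B}^{-1}\simeq\frac{v}{L},\qquad
\tau_{A}^{-1}\;\propto\;x(1-x)\,\omega^{4}, $$
where the alloy-scattering strength carries both mass-difference and strain-field contributions; the abstract's conclusion is that the strain-field term dominates in Al1-xScxN. The specific model parameters used in the work are not given here.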
|
We present two further classical novae, V906 Car and V5668 Sgr, that show
jets and accretion disc spectral signatures in their H-alpha complexes
throughout the first 1000 days following their eruptions. From extensive
densely time-sampled spectroscopy, we measure the appearance of the first
high-velocity absorption component in V906 Car, and the duration of the
commencement of the main H-alpha emission. We constrain the time taken for
V5668 Sgr to transition to the nebular phase using [N II] 6584\r{A}. We find
these timings to be consistent with the jet and accretion disc model for
explaining optical spectral line profile changes in classical novae, and
discuss the implications of this model for enrichment of the interstellar
medium.
|
We utilize transverse ac susceptibility measurements to characterize magnetic
anisotropy in archetypal exchange-bias bilayers of ferromagnet Permalloy (Py)
and antiferromagnet CoO. Unidirectional anisotropy is observed for thin Py, but
becomes negligible at larger Py thicknesses, even though the directional
asymmetry of the magnetic hysteresis loop remains significant. Additional
magnetoelectronic measurements, magneto-optical imaging, as well as
micromagnetic simulations show that these surprising behaviors are likely
associated with asymmetry of spin flop distribution created in CoO during Py
magnetization reversal, which facilitates the rotation of the latter back into
its field-cooled direction. Our findings suggest new possibilities for
efficient realization of multistable nanomagnetic systems for neuromorphic
applications.
|
High-resolution gamma-ray spectroscopy of 18N is performed with the Advanced
GAmma Tracking Array AGATA, following deep-inelastic processes induced by an
18O beam on a 181Ta target. Six states are newly identified, which together
with the three known excitations exhaust all negative-parity excited states
expected in 18N below the neutron threshold. Spin and parities are proposed for
all located states on the basis of decay branchings and comparison with
large-scale shell-model calculations performed in the p-sd space, with the YSOX
interaction. Of particular interest is the location of the 0^-_1 and 1^-_2
excitations, which provide strong constraints for cross-shell p-sd matrix
elements based on realistic interactions, and help to simultaneously reproduce
the ground and first-excited states in 16N and 18N, for the first time.
Understanding the 18N structure may also have significant impact on
neutron-capture cross-section calculations in r-process modeling including
light neutron-rich nuclei.
|
The Willmore Problem seeks the surface in $\mathbb S^3\subset\mathbb R^4$ of
a given topological type minimizing the squared-mean-curvature energy $W = \int
|\mathbf{H}_{\mathbb{R}^4}|^2 = \operatorname{area} + \int H_{\mathbb{S}^3}^2$.
The longstanding Willmore Conjecture that the Clifford torus minimizes $W$
among genus-$1$ surfaces is now a theorem of Marques and Neves [19], but the
general conjecture [10] that Lawson's [16] minimal surface
$\xi_{g,1}\subset\mathbb S^3$ minimizes $W$ among surfaces of genus $g>1$
remains open. Here we prove this conjecture under the additional assumption
that the competitor surfaces $M\subset\mathbb S^3$ share the ambient symmetries
of $\xi_{g,1}$. Specifically, we show that each Lawson surface $\xi_{m,k}$ satisfies
the analogous $W$-minimizing property under a somewhat smaller symmetry group
${G}_{m,k}<SO(4)$, using a local computation of the orbifold Euler number
$\chi_o(M/{G}_{m,k})$ to exclude certain intersection patterns of $M$ with the
great circles fixed by generators of ${G}_{m,k}$. We also describe a genus 2
example where the Willmore Problem may not be solvable among surfaces with its
symmetry.
|
Quantum error correcting codes (QECCs) are the means of choice whenever
quantum systems suffer errors, e.g., due to imperfect devices, environments, or
faulty channels. By now, a plethora of families of codes is known, but there is
no universal approach to finding new or optimal codes for a certain task and
subject to specific experimental constraints. In particular, once found, a QECC
is typically used in very diverse contexts, while its resilience against errors
is captured in a single figure of merit, the distance of the code. This does
not necessarily give rise to the most efficient protection possible given a
certain known error or a particular application for which the code is employed.
In this paper, we investigate the loss channel, which plays a key role in
quantum communication, and in particular in quantum key distribution over long
distances. We develop a numerical set of tools that allows us to optimize an
encoding specifically for recovering lost particles without the need for
backwards communication, where some knowledge about what was lost is available,
and demonstrate its capabilities. This allows us to arrive at new codes ideal
for the distribution of entangled states in this particular setting, and also
to investigate if encoding in qudits or allowing for non-deterministic
correction proves advantageous compared to known QECCs. While we here focus on
the case of losses, our methodology is applicable whenever the errors in a
system can be characterized by a known linear map.
|
The Facility for Antiproton and Ion Research (FAIR), an international
accelerator centre, is under construction in Darmstadt, Germany. FAIR will
provide high-intensity primary beams of protons and heavy-ions, and intense
secondary beams of antiprotons and of rare short-lived isotopes. These beams,
together with a variety of modern experimental setups, will make it possible to perform a
unique research program on nuclear astrophysics, including the exploration of
the nucleosynthesis in the universe, and the exploration of QCD matter at high
baryon densities, in order to shed light on the properties of neutron stars,
and the dynamics of neutron star mergers. The Compressed Baryonic Matter (CBM)
experiment at FAIR will investigate collisions between heavy nuclei, and
measure various diagnostic probes, which are sensitive to the high-density
equation-of-state (EOS), and to the microscopic degrees-of-freedom of
high-density matter. The CBM physics program will be discussed.
|
The phase diagram of cuprate high-temperature superconductors is investigated
on the basis of the three-band d-p model. We use the optimization variational
Monte Carlo method, where improved many-body wave functions have been proposed
to make the ground-state wave function more precise. We investigate the
stability of antiferromagnetic state by changing the band parameters such as
the hole number, level difference $\Delta_{dp}$ between $d$ and $p$ electrons
and transfer integrals. We show that the antiferromagnetic correlation weakens
when $\Delta_{dp}$ decreases and the pure $d$-wave superconducting phase may
exist in this region. We present phase diagrams including antiferromagnetic and
superconducting regions by varying the band parameters. The phase diagram
obtained by changing the doping rate $x$ contains antiferromagnetic,
superconducting and also phase-separated phases. We propose that
high-temperature superconductivity will occur near the antiferromagnetic
boundary in the space of band parameters.
|
Natural language contexts display logical regularities with respect to
substitutions of related concepts: these are captured in a functional
order-theoretic property called monotonicity. For a certain class of NLI
problems where the resulting entailment label depends only on the context
monotonicity and the relation between the substituted concepts, we build on
previous techniques that aim to improve the performance of NLI models for these
problems, as consistent performance across both upward and downward monotone
contexts still seems difficult to attain even for state-of-the-art models. To
this end, we reframe the problem of context monotonicity classification to make
it compatible with transformer-based pre-trained NLI models and add this task
to the training pipeline. Furthermore, we introduce a sound and complete
simplified monotonicity logic formalism which describes our treatment of
contexts as abstract units. Using the notions in our formalism, we adapt
targeted challenge sets to investigate whether an intermediate context
monotonicity classification task can aid NLI models' performance on examples
exhibiting monotonicity reasoning.
|
A renewal system divides the slotted timeline into back-to-back time periods
called renewal frames. At the beginning of each frame, it chooses a policy from
a set of options for that frame. The policy determines the duration of the
frame, the penalty incurred during the frame (such as energy expenditure), and
a vector of performance metrics (such as instantaneous number of jobs served).
The starting points of this line of research are Chapter 7 of the book
[Nee10a], the seminal work [Nee13a], and Chapter 5 of the PhD thesis of
Chih-ping Li [Li11]. These works consider stochastic optimization over a single
renewal system. By way of contrast, this thesis considers optimization over
multiple parallel renewal systems, which is computationally more challenging
and admits many more applications. The goal is to minimize the time average
overall penalty subject to time average overall constraints on the
corresponding performance metrics. The main difficulty, which is not present in
earlier works, is that these systems act asynchronously due to the fact that
the renewal frames of different renewal systems are not aligned. The goal of
the thesis is to resolve this difficulty head-on via a new asynchronous
algorithm and a novel supermartingale stopping time analysis which shows our
algorithms not only converge to the optimal solution but also enjoy fast
convergence rates. Based on this general theory, we further develop novel
algorithms for data center server provision problems with performance
guarantees as well as new heuristics for the multi-user file downloading
problems.
|
Selecting skilled mutual funds through the multiple testing framework has
received increasing attention from finance researchers and statisticians. The
intercept $\alpha$ of Carhart four-factor model is commonly used to measure the
true performance of mutual funds, and funds with positive $\alpha$'s are considered
skilled. We observe that the standardized OLS estimates of $\alpha$'s across
the funds possess strong dependence and nonnormality structures, indicating
that the conventional multiple testing methods are inadequate for selecting the
skilled funds. We start from a decision theoretic perspective, and propose an
optimal testing procedure to minimize a combination of false discovery rate and
false non-discovery rate. Our proposed testing procedure is constructed based
on the probability of each fund not being skilled conditional on the
information across all of the funds in our study. To model the distribution of
the information used for the testing procedure, we consider a mixture model
under dependence and propose a new method called ``approximate empirical Bayes"
to fit the parameters. Empirical studies show that our selected skilled funds
have superior long-term and short-term performance, e.g., our selection
strongly outperforms the S\&P 500 index during the same period.
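
For concreteness, the fund-level $\alpha$ referred to above is the intercept of the Carhart time-series regression of a fund's excess return on the market, size, value, and momentum factors. The sketch below runs that OLS on simulated data; the factor series, coefficients, and names are fabricated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 240                                       # months of simulated factor data
factors = rng.normal(0.0, 0.04, size=(T, 4))  # columns: MKT-RF, SMB, HML, MOM
true_alpha = 0.002
true_betas = np.array([0.9, 0.2, -0.1, 0.05])
excess_ret = true_alpha + factors @ true_betas + rng.normal(0.0, 0.02, T)

# OLS of the fund's excess returns on the four factors; the intercept is alpha,
# which would then be standardized by its standard error for multiple testing
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
alpha_hat, betas_hat = coef[0], coef[1:]
```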
|
Due to the increasing number of Internet-of-Things (IoT) devices, IoT networks are
getting overcrowded. Networks can be extended with more gateways, increasing
the number of supported devices. However, as investigated in this work, massive
MIMO has the potential to increase the number of simultaneous connections,
while also lowering the energy expenditure of these devices. We present a study
of the channel characteristics of massive MIMO in the unlicensed sub-GHz band.
The goal is to support IoT applications with strict requirements in terms of
number of devices, power consumption, and reliability. The assessment is based
on experimental measurements using both a uniform linear and a rectangular
array. Our study demonstrates and validates the advantages of deploying massive
MIMO gateways to serve IoT nodes. While the results are general, here we
specifically focus on static nodes. The array gain and channel hardening effect
yield opportunities to lower the transmit-power of IoT nodes while also
increasing reliability. The exploration confirms that exploiting large arrays
brings great opportunities to connect a massive number of IoT devices by
separating the nodes in the spatial domain. In addition, we give an outlook on
how static IoT nodes could be scheduled based on partial channel state
information.
|
Given the complexity of typical data science projects and the associated
demand for human expertise, automation has the potential to transform the data
science process.
Key insights:
* Automation in data science aims to facilitate and transform the work of
data scientists, not to replace them.
* Important parts of data science are already being automated, especially in
the modeling stages, where techniques such as automated machine learning
(AutoML) are gaining traction.
* Other aspects are harder to automate, not only because of technological
challenges, but because open-ended and context-dependent tasks require human
interaction.
|
We present PLONQ, a progressive neural image compression scheme which pushes
the boundary of variable bitrate compression by allowing quality scalable
coding with a single bitstream. In contrast to existing learned variable
bitrate solutions which produce separate bitstreams for each quality, it
enables easier rate-control and requires less storage. Leveraging the latent
scaling based variable bitrate solution, we introduce nested quantization, a
method that defines multiple quantization levels with nested quantization
grids, and progressively refines all latents from the coarsest to the finest
quantization level. To achieve finer progressiveness in between any two
quantization levels, latent elements are incrementally refined with an
importance ordering defined in the rate-distortion sense. To the best of our
knowledge, PLONQ is the first learning-based progressive image coding scheme
and it outperforms SPIHT, a well-known wavelet-based progressive image codec.
|
A silicon quantum photonic circuit was proposed and demonstrated as an
integrated quantum light source for telecom band polarization entangled Bell
state generation and dynamical manipulation. Biphoton states were first
generated in four silicon waveguides by spontaneous four wave mixing. They were
transformed to polarization entangled Bell states through on-chip quantum
interference and quantum superposition, and then coupled to optical fibers. The
property of polarization entanglement in generated photon pairs was
demonstrated by two-photon interferences under two non-orthogonal polarization
bases. The output state could be dynamically switched between two polarization
entangled Bell states, which was demonstrated by the experiment of simplified
Bell state measurement. The experimental results indicate that its manipulation
speed supports a modulation rate of several tens of kHz, showing its potential
for applications in quantum communication and quantum information processing
requiring dynamical quantum entangled Bell state control.
|
The independence equivalence class of a graph $G$ is the set of graphs that
have the same independence polynomial as $G$. Beaton, Brown and Cameron
(arXiv:1810.05317) found the independence equivalence classes of even cycles,
and raised the problem of finding the independence equivalence class of odd
cycles. The problem is completely solved in this paper.
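
Recall that the independence polynomial of a graph $G$ is
$$ I(G;x) \;=\; \sum_{k\ge 0} i_k(G)\,x^{k}, $$
where $i_k(G)$ is the number of independent sets of cardinality $k$ in $G$ (with $i_0(G)=1$); two graphs are independence equivalent precisely when these polynomials coincide.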
|
With the aid of both a semi-analytical and a numerically exact method we
investigate the charge dynamics in the vicinity of half-filling in the one- and
two-dimensional $t$-$J$ model derived from a Fermi-Hubbard model in the limit
of large interaction $U$ and hence small exchange coupling $J$. The spin
degrees of freedom are taken to be disordered. So we consider the limit $0 < J
\ll T \ll W$ where $W$ is the band width. We focus on evaluating the spectral
density of a single hole excitation and the charge gap which separates the
upper and the lower Hubbard band. One of the key findings is the evidence for
the absence of sharp edges of the Hubbard band; instead, Gaussian tails appear.
|
Natural numbers satisfying a certain unusual property are defined by the
author in a previous note. Later, the author called such numbers
$v$-palindromic numbers and proved a periodic phenomenon pertaining to such
numbers and repeated concatenations of the digits of a number. It was left as a
problem of further investigation to find the smallest period. In this paper, we
provide a method to find the smallest period. Some theorems from signal
processing are used, but we also supply our own proofs.
|
In this thesis, the properties of mixtures of Bose-Einstein condensates at $T
= 0$ have been investigated using quantum Monte Carlo (QMC) methods and Density
Functional Theory (DFT) with the aim of understanding physics beyond the
mean-field theory in Bose-Bose mixtures.
|
We consider the simple exclusion process on Z x {0, 1}, that is, a
"horizontal ladder" composed of 2 lanes. Particles can jump according to a
lane-dependent, translation-invariant nearest-neighbour jump kernel, i.e.
"horizontally" along each lane and "vertically" along the rungs of the
ladder. We prove that, generically, the set of extremal invariant measures
consists of (i) translation-invariant product Bernoulli measures; and, modulo
translations along Z: (ii) at most two shock measures (i.e. asymptotic to
Bernoulli measures at $\pm$$\infty$) with asymptotic densities 0 and 2; (iii)
at most three shock measures with a density jump of magnitude 1. We fully
determine this set for certain parameter values. In fact, outside degenerate
cases, there is at most one shock measure of type (iii). The result can be
partially generalized to vertically cyclic ladders with arbitrarily many lanes.
For the latter, we answer an open question of [5] about rotational invariance
of stationary measures.
|
This paper revisits the multi-agent epistemic logic presented in [10], where
agents and sets of agents are replaced by abstract, intensional "names". We
make three contributions. First, we study its model theory, providing adequate
notions of bisimulation and frame morphisms, and use them to study the logic's
expressive power and definability. Second, we show that the logic has a natural
neighborhood semantics, which in turn allows us to show that the axiomatization in
[10] does not rely on possibly controversial introspective properties of
knowledge. Finally, we extend the logic with common and distributed knowledge
operators, and provide a sound and complete axiomatization for each of these
extensions. Together, these results put the original epistemic logic with names
in a more modern context and open the door to a logical analysis of epistemic
phenomena where group membership is uncertain or variable.
|
We give a short introduction to the contact invariant in bordered Floer
homology defined by F\"oldv\'ari, Hendricks, and the authors. The construction
relies on a special class of foliated open books. We discuss a procedure to
obtain such a foliated open book and present a definition of the contact
invariant. We also provide a "local proof", through an explicit bordered
computation, of the vanishing of the contact invariant for overtwisted
structures.
|
The main purpose of our paper is a new approach to design of algorithms of
Kaczmarz type in the framework of operators in Hilbert space. Our applications
include a diverse list of optimization problems, new Karhunen-Lo\`eve
transforms, and Principal Component Analysis (PCA) for digital images. A key
feature of our algorithms is our use of recursive systems of projection
operators. Specifically, we apply our recursive projection algorithms for new
computations of PCA probabilities and of variance data. For this we also make
use of specific reproducing kernel Hilbert spaces, factorization for kernels,
and finite-dimensional approximations. Our projection algorithms are designed
with a view toward maximum likelihood solutions, minimization of "cost" problems,
identification of principal components, and data-dimension reduction.
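
For orientation, the classical Kaczmarz iteration that such operator-theoretic schemes generalize projects the current iterate onto one hyperplane $\langle a_i, x\rangle = b_i$ at a time. The sketch below is the textbook cyclic version for a consistent linear system, not the paper's operator-valued construction.

```python
import numpy as np

def kaczmarz(A, b, sweeps=400):
    """Cyclic Kaczmarz iteration for a consistent linear system A x = b.

    Each inner step projects the iterate onto the hyperplane <a_i, x> = b_i."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms_sq = (A * A).sum(axis=1)
    for _ in range(sweeps):
        for i in range(m):
            x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = rng.normal(size=20)
x_est = kaczmarz(A, A @ x_true)       # recovers x_true for this consistent system
assert np.allclose(x_est, x_true, atol=1e-6)
```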
|
The deformability of a compact object under the presence of a tidal
perturbation is encoded in the tidal Love numbers (TLNs), which vanish for
isolated black holes in vacuum. We show that the TLNs of black holes surrounded
by matter fields do not vanish and can be used to probe the environment around
binary black holes. In particular, we compute the TLNs for the case of a black
hole surrounded by a scalar condensate under the presence of scalar and vector
tidal perturbations, finding a strong power-law behavior of the TLN in terms of
the mass of the scalar field. Using this result as a proxy for gravitational
tidal perturbations, we show that future gravitational-wave detectors like the
Einstein Telescope and LISA can impose stringent constraints on the mass of
ultralight bosons that condensate around black holes due to accretion or
superradiance. Interestingly, LISA could measure the tidal deformability of
dressed black holes across the range from stellar-mass ($\approx 10^2 M_\odot$)
to supermassive ($\approx 10^7 M_\odot$) objects, providing a measurement of
the mass of ultralight bosons in the range $(10^{-17} - 10^{-13}) \, {\rm eV}$
with less than $10\%$ accuracy, thus filling the gap between other
superradiance-driven constraints coming from terrestrial and space
interferometers. Altogether, LISA and Einstein Telescope can probe tidal
effects from dressed black holes in the combined mass range $(10^{-17} -
10^{-11}) \, {\rm eV}$.
|
In this paper, we propose a generalized natural inflation (GNI) model to
study axion-like particle (ALP) inflation and dark matter (DM). GNI contains
two additional parameters $(n_1, n_2)$ in comparison with natural inflation,
which make GNI more general. The parameter $n_1$ builds the connection between
GNI and other ALP inflation models, while $n_2$ controls the inflaton mass.
After considering the cosmic microwave background and other cosmological
observational limits, the model can realize small-field inflation over a wide
mass range, and the ALP inflaton considered here can serve as the DM candidate
in certain regions of parameter space.
|
In this work, an $r$-linearly converging adaptive solver is constructed for
parabolic evolution equations in a simultaneous space-time variational
formulation. Exploiting the product structure of the space-time cylinder, the
family of trial spaces that we consider are given as the spans of
wavelets-in-time and (locally refined) finite element spaces-in-space.
Numerical results illustrate our theoretical findings.
|
Dynamical variational auto-encoders (DVAEs) are a class of deep generative
models with latent variables, dedicated to time series data modeling. DVAEs can
be considered as extensions of the variational autoencoder (VAE) that include
the modeling of temporal dependencies between successive observed and/or latent
vectors in data sequences. Previous work has shown the interest of DVAEs and
their superior performance over the VAE for speech signal (spectrogram)
modeling. Independently, the VAE has been successfully applied to speech
enhancement in noise, in an unsupervised noise-agnostic set-up that does not
require the use of a parallel dataset of clean and noisy speech samples for
training, but only requires clean speech signals. In this paper, we extend
those works to DVAE-based single-channel unsupervised speech enhancement, hence
exploiting both unsupervised representation learning and dynamics modeling of
speech signals. We propose an unsupervised speech enhancement algorithm based
on the most general form of DVAEs, which we then adapt to three specific DVAE
models to illustrate the versatility of the framework. More precisely, we
combine DVAE-based speech priors with a noise model based on nonnegative matrix
factorization, and we derive a variational expectation-maximization (VEM)
algorithm to perform speech enhancement. Experimental results show that the
proposed approach based on DVAEs outperforms its VAE counterpart and a
supervised speech enhancement baseline.
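As a hedged sketch of one ingredient mentioned above, the nonnegative-matrix-factorization noise model: below are the standard Euclidean multiplicative updates applied to a toy noise power spectrogram. This is generic background, not the authors' VEM updates, and all names are illustrative.

    import numpy as np

    def nmf(V, rank, iters=200, eps=1e-12):
        # Factor a nonnegative spectrogram V (freq x frames) as W @ H using
        # the classical multiplicative updates for the Euclidean cost.
        rng = np.random.default_rng(0)
        W = rng.random((V.shape[0], rank)) + eps
        H = rng.random((rank, V.shape[1])) + eps
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    V = np.abs(np.random.default_rng(1).standard_normal((64, 100))) ** 2
    W, H = nmf(V, rank=8)
    print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative fit error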
|
In this paper, we propose a new approach to pathological speech synthesis.
Instead of using healthy speech as a source, we customise an existing
pathological speech sample to a new speaker's voice characteristics. This
approach alleviates the evaluation problem one normally has when converting
typical speech to pathological speech, as in our approach, the voice conversion
(VC) model does not need to be optimised for speech degradation but only for
the speaker change. This change in the optimisation ensures that any
degradation found in naturalness is due to the conversion process and not due
to the model exaggerating characteristics of a speech pathology. To show a
proof of concept of this method, we convert dysarthric speech using the
UASpeech database and an autoencoder-based VC technique. Subjective evaluation
results show reasonable naturalness for high intelligibility dysarthric
speakers, though lower intelligibility seems to introduce a marginal
degradation in naturalness scores for mid and low intelligibility speakers
compared to ground truth. Conversion of speaker characteristics for low and
high intelligibility speakers is successful, but not for mid. Whether the
differences in the results for the different intelligibility levels is due to
the intelligibility levels or due to the speakers needs to be further
investigated.
|
We propose a novel deep neural network architecture to integrate imaging and
genetics data, as guided by diagnosis, that provides interpretable biomarkers.
Our model consists of an encoder, a decoder and a classifier. The encoder
learns a non-linear subspace shared between the input data modalities. The
classifier and the decoder act as regularizers to ensure that the
low-dimensional encoding captures predictive differences between patients and
controls. We use a learnable dropout layer to extract interpretable biomarkers
from the data, and our unique training strategy can easily accommodate missing
data modalities across subjects. We have evaluated our model on a population
study of schizophrenia that includes two functional MRI (fMRI) paradigms and
Single Nucleotide Polymorphism (SNP) data. Using 10-fold cross validation, we
demonstrate that our model achieves better classification accuracy than
baseline methods, and that this performance generalizes to a second dataset
collected at a different site. In an exploratory analysis we further show that
the biomarkers identified by our model are closely associated with the
well-documented deficits in schizophrenia.
|
In this paper, we study covert communications between a pair of legitimate
transmitter and receiver against a watchful warden over slow fading channels.
Multiple friendly helper nodes coexist that are willing to protect the covert
communication from being detected by the warden. We propose an uncoordinated
jammer selection scheme where those helpers whose instantaneous channel gains
to the legitimate receiver fall below a pre-established selection threshold
will be chosen as jammers radiating jamming signals to defeat the warden. By
doing so, the detection accuracy of the warden is expected to be severely
degraded while the desired covert communication is rarely affected. We then
jointly design the optimal selection threshold and message transmission rate
for maximizing covert throughput under the premise that the detection error of
the warden exceeds a certain level. Numerical results are presented to validate
our theoretical analyses. It is shown that the multi-jammer assisted covert
communication outperforms the conventional single-jammer method in terms of
covert throughput, and the maximal covert throughput improves significantly as
the total number of helpers increases, which demonstrates the validity and
superiority of our proposed scheme.
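A minimal sketch of the selection rule described above, in which helpers whose instantaneous channel gains to the legitimate receiver fall below a threshold become jammers; the fading model and the threshold value are illustrative assumptions, not the jointly optimized design of the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    num_helpers = 20
    threshold = 0.3   # illustrative selection threshold, not the optimized one

    # instantaneous channel power gains from each helper to the legitimate
    # receiver (exponential law, i.e. Rayleigh fading, a common assumption)
    gains_to_receiver = rng.exponential(scale=1.0, size=num_helpers)

    # a helper jams only if its gain to the receiver is weak, so the jamming
    # degrades the warden's detection while barely affecting the covert link
    jammers = np.flatnonzero(gains_to_receiver < threshold)
    print(f"{jammers.size} of {num_helpers} helpers act as jammers:", jammers)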
|
Loop closure detection is an essential component of Simultaneous Localization
and Mapping (SLAM) systems, which reduces the drift accumulated over time. Over
the years, several deep learning approaches have been proposed to address this
task; however, their performance has been subpar compared to handcrafted
techniques, especially while dealing with reverse loops. In this paper, we
introduce the novel LCDNet that effectively detects loop closures in LiDAR
point clouds by simultaneously identifying previously visited places and
estimating the 6-DoF relative transformation between the current scan and the
map. LCDNet is composed of a shared encoder, a place recognition head that
extracts global descriptors, and a relative pose head that estimates the
transformation between two point clouds. We introduce a novel relative pose
head based on the unbalanced optimal transport theory that we implement in a
differentiable manner to allow for end-to-end training. Extensive evaluations
of LCDNet on multiple real-world autonomous driving datasets show that our
approach outperforms state-of-the-art loop closure detection and point cloud
registration techniques by a large margin, especially while dealing with
reverse loops. Moreover, we integrate our proposed loop closure detection
approach into a LiDAR SLAM library to provide a complete mapping system and
demonstrate the generalization ability using a different sensor setup in an
unseen city.
|
An issue documents discussions around required changes in issue-tracking
systems, while a commit contains the change itself in the version control
systems. Recovering links between issues and commits can facilitate many
software evolution tasks such as bug localization and software documentation.
A previous study of over half a million issues from GitHub reports that only
about 42.2% of issues are manually linked by developers to their pertinent
commits. Automating the linking of commit-issue pairs can contribute to the
improvement of the said tasks. So far, current state-of-the-art approaches for
automated commit-issue linking suffer from low precision, leading to unreliable
results, sometimes to the point of requiring human supervision of the predicted
links. The low performance becomes even more severe when textual information is
lacking in either commits or issues. Current approaches have also proven
computationally expensive.
We propose Hybrid-Linker to overcome such limitations by exploiting two
information channels; (1) a non-textual-based component that operates on
non-textual, automatically recorded information of the commit-issue pairs to
predict a link, and (2) a textual-based one which does the same using textual
information of the commit-issue pairs. Then, combining the results from the two
classifiers, Hybrid-Linker makes the final prediction. Thus, every time one
component falls short in predicting a link, the other component fills the gap
and improves the results. We evaluate Hybrid-Linker against competing
approaches, namely FRLink and DeepLink on a dataset of 12 projects.
Hybrid-Linker achieves 90.1%, 87.8%, and 88.9% in terms of recall, precision,
and F-measure, respectively, and outperforms FRLink and DeepLink by 31.3% and
41.3% in F-measure. Moreover, Hybrid-Linker also delivers substantial
improvements in runtime performance.
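A minimal sketch of the combination idea, under the assumption that each channel outputs a link probability for every candidate commit-issue pair and the two scores are averaged before thresholding; the paper's actual combination rule may differ, and all names here are hypothetical.

    import numpy as np

    def hybrid_predict(p_nontextual, p_textual, weight=0.5):
        # Average the two channels' link probabilities; with a 0.5 threshold,
        # a channel stuck near 0.5 effectively leaves the decision to the other.
        p = weight * p_nontextual + (1.0 - weight) * p_textual
        return (p >= 0.5).astype(int), p

    p_nt = np.array([0.9, 0.2, 0.5, 0.6])   # non-textual channel scores
    p_tx = np.array([0.8, 0.3, 0.9, 0.4])   # textual channel scores
    labels, scores = hybrid_predict(p_nt, p_tx)
    print(labels, scores)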
|
We derive the torque on a spheroid of an arbitrary aspect ratio $\kappa$
sedimenting in a linearly stratified ambient. The analysis demarcates regions
in parameter space corresponding to broadside-on and edgewise (longside-on)
settling in the limit $Re, Ri_v \ll 1$, where $Re = \rho_0UL/\mu$ and $Ri_v
=\gamma L^3g/\mu U$, the Reynolds and viscous Richardson numbers, respectively,
are dimensionless measures of the importance of inertial and buoyancy forces
relative to viscous ones. Here, $L$ is the spheroid semi-major axis, $U$ an
appropriate settling velocity scale, $\mu$ the fluid viscosity, and
$\gamma\,(>0)$ the (constant) density gradient characterizing the stably
stratified ambient, with $\rho_0$ being the fluid density taken to be a
constant within the Boussinesq framework. A reciprocal theorem formulation
identifies three contributions to the torque: (1) an $O(Re)$ inertial
contribution that already exists in a homogeneous ambient, and orients the
spheroid broadside-on; (2) an $O(Ri_v)$ hydrostatic contribution due to the
ambient linear stratification that also orients the spheroid broadside-on; and
(3) a hydrodynamic contribution arising from the perturbation of the ambient
stratification by the spheroid whose nature depends on $Pe$; $Pe = UL/D$ being
the Peclet number with $D$ the diffusivity of the stratifying agent. For $Pe
\gg 1$, the hydrodynamic contribution is $O(Ri_v^{\frac{2}{3}}$) in the Stokes
stratification regime characterized by $Re \ll Ri_v^{\frac{1}{3}}$, and orients
the spheroid edgewise regardless of $\kappa$. The differing orientation
dependencies of the inertial and large-$Pe$ hydrodynamic stratification torques
imply that the broadside-on and edgewise settling regimes are separated by two
distinct $\kappa$-dependent critical curves in the
$Ri_v/Re^{\frac{3}{2}}-\kappa$ plane. The predictions are consistent with
recent experimental observations.
|
The Central Molecular Zone (CMZ; the central ~500 pc of the Milky Way) hosts
molecular clouds in an extreme environment of strong shear, high gas pressure
and density, and complex chemistry. G0.253+0.016, also known as `the Brick', is
the densest, most compact and quiescent of these clouds. High-resolution
observations with the Atacama Large Millimeter/submillimeter Array (ALMA) have
revealed its complex, hierarchical structure. In this paper we compare the
properties of recent hydrodynamical simulations of the Brick to those of the
ALMA observations. To facilitate the comparison, we post-process the simulation
and create synthetic ALMA maps of molecular line emission from eight molecules.
We correlate the line emission maps to each other and to the mass column
density, and find that HNCO is the best mass tracer of the eight emission
lines. Additionally, we characterise the spatial structure of the observed and
simulated cloud using the density probability distribution function (PDF),
spatial power spectrum, fractal dimension, and moments of inertia. While we
find good agreement between the observed and simulated data in terms of power
spectra and fractal dimensions, there are key differences in terms of the
density PDFs and moments of inertia, which we attribute to the omission of
magnetic fields in the simulations. Models that include the external
gravitational potential generated by the stars in the CMZ better reproduce the
observed structure, highlighting that cloud structure in the CMZ results from
the complex interplay between internal physics (turbulence, self-gravity,
magnetic fields) and the impact of the extreme environment.
|
Deep Neural Networks (DNNs) can be easily fooled by Adversarial Examples (AEs)
whose differences from the original samples are imperceptible to human eyes. To
keep the difference imperceptible, existing attacks bound the adversarial
perturbations by the $\ell_\infty$ norm, which then serves as the standard
to align different attacks for a fair comparison. However, when investigating
attack transferability, i.e., the capability of AEs crafted on one surrogate
DNN to fool other black-box DNNs, we find that only using the $\ell_\infty$
norm is not sufficient to measure the attack strength, according
to our comprehensive experiments concerning 7 transfer-based attacks, 4
white-box surrogate models, and 9 black-box victim models. Specifically, we
find that the $\ell_2$ norm greatly affects the transferability in
$\ell_\infty$ attacks. Since larger-perturbed AEs naturally bring about better
transferability, we advocate that the strength of all attacks should be
measured by both the widely used $\ell_\infty$ and also the $\ell_2$ norm.
Although our conclusion and advocacy may appear intuitive, they are necessary
for the community, because common evaluations (bounding only the
$\ell_\infty$ norm) allow tricky enhancements of the "attack transferability"
by increasing the "attack strength" ($\ell_2$ norm) as shown by our simple
counter-example method, and the good transferability of several existing
methods may be due to their large $\ell_2$ distances.
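A minimal sketch of the advocated measurement, reporting both norms of a perturbation batch; random perturbations stand in for actual adversarial examples, and the names and epsilon value are illustrative.

    import numpy as np

    def perturbation_norms(x_adv, x_clean):
        # Report, per image, the commonly bounded l_inf norm and the l_2 norm
        # that the abstract argues should be reported alongside it.
        delta = (x_adv - x_clean).reshape(len(x_clean), -1)
        return np.abs(delta).max(axis=1), np.linalg.norm(delta, axis=1)

    rng = np.random.default_rng(0)
    x = rng.random((4, 3, 32, 32))                   # a toy batch of clean images
    eps = 8 / 255
    x_adv = np.clip(x + rng.uniform(-eps, eps, x.shape), 0.0, 1.0)
    linf, l2 = perturbation_norms(x_adv, x)
    print("l_inf per image:", linf)
    print("l_2   per image:", l2)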
|
Traditional link adaptation (LA) schemes in cellular networks must be revised
for networks beyond the fifth generation (b5G), to guarantee the strict latency
and reliability requirements advocated by ultra reliable low latency
communications (URLLC). In particular, a poor error rate prediction potentially
increases retransmissions, which in turn increase latency and reduce
reliability. In this paper, we present an interference prediction method to
enhance LA for URLLC. To develop our prediction method, we propose a
kernel-based probability density estimation algorithm and provide an in-depth
analysis of its statistical performance. We also provide a low-complexity
version suitable for practical scenarios. The proposed scheme is compared with
state-of-the-art LA solutions over fully compliant 3rd generation partnership
project (3GPP) calibrated channels, showing the validity of our proposal.
|
Low magnetic field scanning tunneling spectroscopy of individual Abrikosov
vortices in heavily overdoped Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ unveils a
clear d-wave electronic structure of the vortex core, with a zero-bias
conductance peak at the vortex center that splits with increasing distance from
the core. We show that previously reported unconventional electronic
structures, including the low energy checkerboard charge order in the vortex
halo and the absence of a zero-bias conductance peak at the vortex center, are
direct consequences of short inter-vortex distance and consequent vortex-vortex
interactions prevailing in earlier experiments.
|
For a tree $T$ and a positive integer $n$, let $B_nT$ denote the $n$-strand
braid group on $T$. We use discrete Morse theory techniques to show that
$H^*(B_nT)$ is the exterior face ring determined by an explicit simplicial
complex that measures $n$-local interactions among essential vertices of $T$.
In this first version of the paper we work out proof details in the case of a
binary tree.
|
We investigate the monopole-antimonopole pair solution in the SU(2) x U(1)
Weinberg-Salam theory with $\phi$-winding number $n=3$ for bifurcation
phenomena. The magnetic monopole merges with the antimonopole to form a vortex
ring of finite diameter at $n=3$. Other than the fundamental solution, two new
bifurcating solution branches were found when the Higgs coupling constant
$\lambda$ reaches a critical value $\lambda_c$. The two new branches possess
higher energies than the fundamental solutions. These bifurcating solutions
behave differently from the vortex ring configuration in SU(2) Yang-Mills-Higgs
theory since they are full vortex rings. We investigate the total energy $E$,
vortex ring diameter $d_{\rho}$, and magnetic dipole moment $\mu_m$ for
$0 \leq \lambda \leq 49$.
|
This paper surveys the recent attempts at leveraging machine learning to
solve constrained optimization problems. It focuses on surveying the work on
integrating combinatorial solvers and optimization methods with machine
learning architectures. These approaches hold the promise of developing new
hybrid machine learning and optimization methods that predict fast, approximate
solutions to combinatorial problems and enable structural logical inference.
This paper presents a conceptual review of the recent advancements in this
emerging area.
|
Every polynomial $P(X)\in \mathbb Z[X]$ satisfies the congruences
$P(n+m)\equiv P(n) \mod m$ for all integers $n, m\ge 0$. An integer valued
sequence $(a_n)_{n\ge 0}$ is called a pseudo-polynomial when it satisfies these
congruences. Hall characterized pseudo-polynomials and proved that they are not
necessarily polynomials. A long-standing conjecture of Ruzsa says that a
pseudo-polynomial $a_n$ is a polynomial as soon as $\limsup_n \vert
a_n\vert^{1/n}<e$. Under this growth assumption, Perelli and Zannier proved
that the generating series $\sum_{n=0}^\infty a_n z^n$ is a $G$-function. A
primary pseudo-polynomial is an integer valued sequence $(a_n)_{n\ge 0}$ such
that $a_{n+p}\equiv a_n \mod p$ for all integers $n\ge 0$ and all prime numbers
$p$. The same conjecture has been formulated for them, which implies Ruzsa's,
and this paper revolves around this conjecture. We obtain a Hall type
characterization of primary pseudo-polynomials and draw various consequences
from it. We give a new proof and generalize a result due to Zannier that any
primary pseudo-polynomial with an algebraic generating series is a polynomial.
This leads us to formulate a conjecture on diagonals of rational fractions and
primary pseudo-polynomials, which is related to classic conjectures of Christol
and van der Poorten. We make the Perelli-Zannier Theorem effective. We prove a
P\'olya type result: if there exists a function $F$ analytic in a right-half
plane with not too large exponential growth (in a precise sense) and such that
for all large $n$ the primary pseudo-polynomial $a_n=F(n)$, then $a_n$ is a
polynomial. Finally, we show how to construct a non-polynomial primary
pseudo-polynomial starting from any primary pseudo-polynomial generated by a
$G$-function different from $1/(1-x)$.
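As a small, hedged illustration of the defining congruences, the brute-force check below tests $a_{n+p} \equiv a_n \pmod p$ over a finite range only (so it is merely a necessary test); all helper names are ours.

    def primes_up_to(m):
        # simple sieve of Eratosthenes
        sieve = [True] * (m + 1)
        sieve[:2] = [False, False]
        for i in range(2, int(m ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
        return [p for p, is_prime in enumerate(sieve) if is_prime]

    def looks_primary_pseudo_polynomial(a, n_max=60, p_max=60):
        # check a(n + p) == a(n) (mod p) for all n <= n_max and primes p <= p_max
        return all((a(n + p) - a(n)) % p == 0
                   for p in primes_up_to(p_max)
                   for n in range(n_max + 1))

    # every integer polynomial passes the test ...
    print(looks_primary_pseudo_polynomial(lambda n: n**3 - 2 * n + 7))   # True
    # ... while, e.g., a(n) = 2**n fails (n = 0, p = 2: a(2) - a(0) = 3)
    print(looks_primary_pseudo_polynomial(lambda n: 2**n))               # False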
|
A novel Bayesian approach to the problem of variable selection using Gaussian
process regression is proposed. The selection of the most relevant variables
for a problem at hand often results in increased interpretability and in many
cases is an essential step for model regularization. In detail, the proposed
method relies on so-called nearest neighbor Gaussian processes, which can be
considered highly scalable approximations of classical Gaussian processes. To
perform variable selection, the mean and the covariance function
of the process are conditioned on a random set $\mathcal{A}$. This set holds
the indices of variables that contribute to the model. While the specification
of a priori beliefs regarding $\mathcal{A}$ makes it possible to control the
number of selected variables, so-called reference priors are assigned to the
remaining model parameters. The application of the reference priors ensures
that the process covariance matrix is (numerically) robust. For model
inference, a Metropolis-within-Gibbs algorithm is proposed. Based on simulated data, an
approximation problem from computer experiments and two real-world datasets,
the performance of the new approach is evaluated.
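A hedged sketch of one building block, a Metropolis-within-Gibbs move that proposes toggling a single variable's membership in the set $\mathcal{A}$; the target `log_posterior` below is a toy stand-in, not the nearest-neighbor-Gaussian-process posterior of the paper.

    import numpy as np

    def toggle_move(active, log_posterior, rng):
        # Propose flipping the inclusion indicator of one random variable and
        # accept with the usual Metropolis ratio (the proposal is symmetric).
        proposal = active.copy()
        j = rng.integers(len(active))
        proposal[j] = not proposal[j]
        if np.log(rng.random()) < log_posterior(proposal) - log_posterior(active):
            return proposal
        return active

    def log_posterior(active):
        # toy target preferring exactly the first two of five variables
        target = np.array([True, True, False, False, False])
        return -5.0 * np.count_nonzero(active != target)

    rng = np.random.default_rng(0)
    state = np.zeros(5, dtype=bool)
    for _ in range(2000):
        state = toggle_move(state, log_posterior, rng)
    print(state)   # typically [ True  True False False False]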
|
The worldwide refugee crisis is a major current challenge, affecting the
health and education of millions of families with children due to displacement.
Despite the various challenges and risks of migration practices, numerous
refugee families have access to interactive technologies during these
processes. The aim of this ongoing study is to explore the role of technologies
in the transitions of refugee families in Scotland. Based on Tudge's
ecocultural theory, a qualitative case-study approach has been adopted.
Semi-structured interviews have been conducted with volunteers who work with
refugee families in a large city in Scotland, and proxy observations of young
children were facilitated remotely by their refugee parents. A preliminary
overview of the participants' insights into the use and role of technology for
transitioning into a new culture is provided here.
|
New physics increasing the expansion rate just prior to recombination is
among the least unlikely solutions to the Hubble tension, and would be expected
to leave an important signature in the early Integrated Sachs-Wolfe (eISW)
effect, a source of Cosmic Microwave Background (CMB) anisotropies arising from
the time-variation of gravitational potentials when the Universe was not
completely matter dominated. Why, then, is there no clear evidence for new
physics from the CMB alone, and why does the $\Lambda$CDM model fit CMB data so
well? These questions and the vastness of the Hubble tension theory model space
motivate general consistency tests of $\Lambda$CDM. I perform an eISW-based
consistency test of $\Lambda$CDM introducing the parameter $A_{\rm eISW}$,
which rescales the eISW contribution to the CMB power spectra. A fit to Planck
CMB data yields $A_{\rm eISW}=0.988 \pm 0.027$, in perfect agreement with the
$\Lambda$CDM expectation $A_{\rm eISW}=1$, and posing an important challenge
for early-time new physics, which I illustrate in a case study focused on early
dark energy (EDE). I explicitly show that the increase in $\omega_c$ needed for
EDE to preserve the fit to the CMB, which has been argued to worsen the fit to
weak lensing and galaxy clustering measurements, is specifically required to
lower the amplitude of the eISW effect, which would otherwise exceed
$\Lambda$CDM's prediction by $\approx 20\%$: this is a generic problem beyond
EDE and likely applying to most models enhancing the expansion rate around
recombination. Early-time new physics models invoked to address the Hubble
tension are therefore faced with the significant challenge of making a similar
prediction to $\Lambda$CDM for the eISW effect, while not degrading the fit to
other measurements in doing so.
|
Portfolio optimization approaches inevitably rely on multivariate modeling of
markets and the economy. In this paper, we address three sources of error
related to the modeling of these complex systems: 1. oversimplifying
hypotheses; 2. uncertainties resulting from parameters' sampling error; 3.
intrinsic non-stationarity of these systems. Concerning point 1, we propose an
L0-norm sparse elliptical modeling and show that sparsification is effective.
The effects of points 2 and 3 are quantified by studying the models' likelihood
in- and out-of-sample for parameters estimated over train sets of different
lengths. We show that models with larger out-of-sample
likelihoods lead to better performing portfolios up to when two to three years
of daily observations are included in the train set. For larger train sets, we
found that portfolio performances deteriorate and detach from the models'
likelihood, highlighting the role of non-stationarity. We further investigate
this phenomenon by studying the out-of-sample likelihood of individual
observations showing that the system changes significantly through time. Larger
estimation windows lead to stable likelihood in the long run, but at the cost
of lower likelihood in the short-term: the `optimal' fit in finance needs to be
defined in terms of the holding period. Lastly, we show that sparse models
outperform full models in that they deliver higher out-of-sample likelihood,
lower realized portfolio volatility and improved portfolios' stability,
avoiding typical pitfalls of the Mean-Variance optimization.
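A hedged sketch of the in- versus out-of-sample likelihood comparison over rolling windows, using a plain Gaussian model as a stand-in for the sparse elliptical models studied in the paper; the window lengths and synthetic returns are illustrative.

    import numpy as np
    from scipy.stats import multivariate_normal

    def rolling_likelihoods(returns, train_len, test_len):
        # Fit a Gaussian on each train window and compare its average
        # log-likelihood in-sample with that on the following test window.
        in_ll, out_ll = [], []
        for start in range(0, len(returns) - train_len - test_len, test_len):
            train = returns[start:start + train_len]
            test = returns[start + train_len:start + train_len + test_len]
            model = multivariate_normal(mean=train.mean(axis=0),
                                        cov=np.cov(train, rowvar=False),
                                        allow_singular=True)
            in_ll.append(model.logpdf(train).mean())
            out_ll.append(model.logpdf(test).mean())
        return np.mean(in_ll), np.mean(out_ll)

    rng = np.random.default_rng(0)
    rets = rng.multivariate_normal(np.zeros(3), 1e-4 * np.eye(3), size=1500)
    print(rolling_likelihoods(rets, train_len=500, test_len=250))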
|
Berry's phase, which is associated with the slow cyclic motion with a finite
period, looks like a Dirac monopole when seen from far away but smoothly
changes to a dipole near the level crossing point in the parameter space in an
exactly solvable model. This topology change of Berry's phase is visualized as
a result of a lensing effect; the monopole supposed to be located at the level
crossing point appears at the displaced point when the variables of the model
deviate from the precisely adiabatic movement. The effective magnetic field
generated by Berry's phase is determined by a simple geometrical consideration
of the magnetic flux coming from the displaced Dirac monopole.
|
The study of high-energy gamma rays from passive Giant Molecular Clouds
(GMCs) in our Galaxy is an indirect way to characterize and probe the paradigm
of the "sea" of cosmic rays in distant parts of the Galaxy. By using data from
the High Altitude Water Cherenkov (HAWC) observatory, we measure the gamma-ray
flux above 1 TeV of a set of these clouds to test the paradigm.
We selected high-galactic latitude clouds that are in HAWC's field-of-view
and which are within 1~kpc distance from the Sun. We find no significant excess
emission in the cloud regions, nor when we perform a stacked log-likelihood
analysis of GMCs. Using a Bayesian approach, we calculate 95\% credible-interval
upper limits on the gamma-ray flux and estimate limits on the
cosmic-ray energy density of these regions. These are the first limits to
constrain gamma-ray emission in the multi-TeV energy range ($>$1 TeV) using
passive high-galactic latitude GMCs. Assuming that the main gamma-ray
production mechanism is due to proton-proton interaction, the upper limits are
consistent with a cosmic-ray flux and energy density similar to that measured
at Earth.
|
In the Sun and Sun-like stars, it is believed that the cycles of the large-scale
magnetic field are produced due to the existence of differential rotation and
helicity in the plasma flows in their convection zones (CZs). Hence, it is
expected that for each star, there is a critical dynamo number for the
operation of a large-scale dynamo. As a star slows down, it is expected that
the large-scale dynamo ceases to operate above a critical rotation period. In
our study, we explore the possibility of the operation of the dynamo in the
subcritical region using the Babcock--Leighton type kinematic dynamo model. In
some parameter regimes, we find that the dynamo shows hysteresis behavior,
i.e., two dynamo solutions are possible depending on the initial parameters --
decaying solution if started with weak field and strong oscillatory solution
(subcritical dynamo) when started with a strong field. However, under large
fluctuations in the dynamo parameter, the subcritical dynamo mode is unstable
in some parameter regimes. Therefore, our study supports the possible existence
of subcritical dynamo in some stars which was previously shown in a mean-field
dynamo model with distributed $\alpha$ and MHD turbulent dynamo simulations.
|
Familiar laws of physics are applied to study human relations, modelled by
their world lines (worldlines, WLs) combined with social networks. We focus
upon the simplest, basic element of any society: a married couple, stable due
to the dynamic balance between attraction and repulsion. By building
worldlines/worldsheets, we arrive at a two-level coordinate system: one
describing the behaviour of a string-like binary system (here, a married
couple), the other one, external, corresponding to the motion of this couple in
the medium, in which the worldline is embedded, sweeping there a string-like
sheet or brane.
The approach is illustrated by simple examples (semi-quantitative toy models)
of worldlines/sheets, open to further extension, perfection and
generalization. World lines (WLs) are combined with social networks (SN). Our
innovation is in the application of basic physical laws, attraction and
repulsion to human behaviour. Simple illustrative examples with empirical
inputs taken from intuition and/or observation are appended. This is an initial
attempt, open to unlimited applications.
|
Model errors are increasingly seen as a fundamental performance limiter in
both Numerical Weather Prediction and Climate Prediction simulations run with
state-of-the-art Earth system digital twins. This has motivated recent efforts
aimed at estimating and correcting the systematic, predictable components of
model error in a consistent data assimilation framework. While encouraging
results have been obtained with a careful examination of the spatial aspects of
the model error estimates, less attention has been devoted to the time
correlation aspects of model errors and their impact on the assimilation cycle.
In this work we employ a Lagged Analysis Increment Covariance (LAIG) diagnostic
to gain insight into the temporal evolution of systematic model errors in the
ECMWF operational data assimilation system, evaluate the effectiveness of the
current weak constraint 4DVar algorithm in reducing these types of errors and,
based on these findings, start exploring new ideas for the development of model
error estimation and correction strategies in data assimilation.
|
Art and science are different ways of exploring the world, but together they
have the potential to be thought-provoking, facilitate a science-society
dialogue, raise public awareness of science, and develop an understanding of
art. For several years, we have been teaching an astro-animation class at the
Maryland Institute College of Art as a collaboration between students and NASA
scientists. Working in small groups, the students create short animations based
on the research of the scientists who are going to follow the projects as
mentors. By creating these animations, students bring the power of their
imagination to see the research of the scientists through a different lens.
Astro-animation is an undergraduate-level course jointly taught by an
astrophysicist and an animator. In this paper we present the motivation behind
the class, describe the details of how it is carried out, and discuss the
interactions between artists and scientists. We describe how such a program
offers an effective way for art students not only to learn about science but
also to have a glimpse of "science in action". The students have the opportunity to
become involved in the process of science as artists, as observers first and
potentially through their own art research. Every year, one or more internships
at NASA Goddard Space Flight Center have been available for our students in the
summer. Two students describe their experiences undertaking these internships
and how science affects their creation of animations for this program and in
general. We also explain the genesis of our astro-animation program, how it is
taught in our animation department, and we present the highlights of an
investigation of the effectiveness of this program we carried out with the
support of an NEA research grant. In conclusion we discuss how the program may
grow in new directions, such as contributing to informal STE(A)M learning.
|
In this paper, we give a proof of a statement in Perelman's paper on finite
extinction time of Ricci flow. Our proof draws on different techniques from the
one given in Morgan-Tian's exposition and is extrinsic in nature, which relies
on the co-area formula instead of the Gauss-Bonnet theorem, and is potentially
generalizable to higher dimensions.
|
Optical implementations of neural networks (ONNs) herald the next-generation
high-speed and energy-efficient deep learning computing by harnessing the
technical advantages of large bandwidth and high parallelism of optics.
However, due to the problems of incomplete numerical domain, limited hardware
scale, or inadequate numerical accuracy, the majority of existing ONNs were
studied for basic classification tasks. Given that regression is a fundamental
form of deep learning and accounts for a large part of current artificial
intelligence applications, it is necessary to master deep learning regression
for further development and deployment of ONNs. Here, we demonstrate a
silicon-based optical coherent dot-product chip (OCDC) capable of completing
deep learning regression tasks. The OCDC adopts optical fields to carry out
operations in the complete real-value domain instead of only the positive
domain. Via reuse, a single chip conducts matrix multiplications and
convolutions in neural networks of any complexity. Also, hardware deviations
are compensated via in-situ backpropagation control, owing to the simplicity of
the chip architecture. Therefore, the OCDC meets the requirements for sophisticated
regression tasks and we successfully demonstrate a representative neural
network, the AUTOMAP (a cutting-edge neural network model for image
reconstruction). The quality of reconstructed images by the OCDC and a 32-bit
digital computer is comparable. To the best of our knowledge, there is no
precedent of performing such state-of-the-art regression tasks on an ONN chip.
It is anticipated that the OCDC can promote novel accomplishments of ONNs in modern
AI applications including autonomous driving, natural language processing, and
scientific study.
|
The paper describes a number of simple but quite effective methods for
constructing exact solutions of PDEs, that involve a relatively small amount of
intermediate calculations. The methods employ two main ideas: (i) simple exact
solutions can serve to construct more complex solutions of the equations under
consideration and (ii) exact solutions of some equations can serve to construct
solutions of other, more complex equations. In particular, we propose a method
for constructing complex solutions from simple solutions using translation and
scaling. We show that in some cases, rather complex solutions can be obtained
by adding one or more terms to simpler solutions. There are situations where
nonlinear superposition allows us to construct a complex composite solution
using similar simple solutions. We also propose a few methods for constructing
complex exact solutions to linear and nonlinear PDEs by introducing
complex-valued parameters into simpler solutions. The effectiveness of the
methods is illustrated by a large number of specific examples (over 30 in
total). These include nonlinear heat/diffusion equations, wave type equations,
Klein--Gordon type equations, hydrodynamic boundary layer equations,
Navier--Stokes equations, and some other PDEs. Apart from exact solutions to
`ordinary' PDEs, we also describe some exact solutions to more complex
nonlinear delay PDEs. Along with the unknown function at the current time,
$u=u(x,t)$, these equations contain the same function at a past time,
$w=u(x,t-\tau)$, where $\tau>0$ is the delay time. Furthermore, we look at
nonlinear partial functional-differential equations of the pantograph type,
which in addition to the unknown $u=u(x,t)$, also contain the same functions
with dilated or contracted arguments, $w=u(px,qt)$, where $p$ and $q$ are
scaling parameters.
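As a minimal illustration of idea (i), stated here for an assumed simple case (the linear heat equation; the paper treats far more general nonlinear and delay equations): if $u(x,t)$ solves $u_t = a\,u_{xx}$, then for any constants $c_1$, $c_2$ and $\lambda \neq 0$ the translated and scaled function $\tilde u(x,t) = u(\lambda x + c_1,\, \lambda^2 t + c_2)$ is again a solution, since $\tilde u_t - a\,\tilde u_{xx} = \lambda^2 (u_t - a\,u_{xx}) = 0$. For example, the simple solution $u = x^2 + 2at$ generates the three-parameter family $\tilde u = (\lambda x + c_1)^2 + 2a(\lambda^2 t + c_2)$.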
|
Light's internal reflectivity near a critical angle is very sensitive to the
angle of incidence and the optical properties of the external medium near the
interface. Novel applications in biology and medicine of subcritical internal
reflection are being pursued. In many practical situations the refractive index
of the external medium may vary with respect to its bulk value due to different
physical phenomena at surfaces. Thus, there is a pressing need to understand
the effects of a refractive-index gradient at a surface for near-critical-angle
reflection. In this work we investigate theoretically the reflectivity near the
critical angle at an interface with glass assuming the external medium has a
continuous depth-dependent refractive index. We present graphs of the internal
reflectivity as a function of the angle of incidence, which exhibit the effects
of a refractive-index gradient at the interface. We analyse the behaviour of
the reflectivity curves before total internal reflection is achieved. Our
results provide insight into how one can recognise the existence of a
refractive-index gradient at the interface and shed light on the viability of
characterising it.
|
In this paper, we present difference of convex algorithms for solving bilevel
programs in which the upper level objective functions are difference of convex
functions, and the lower level programs are fully convex. This nontrivial class
of bilevel programs provides a powerful modelling framework for dealing with
applications arising from hyperparameter selection in machine learning. Thanks
to the full convexity of the lower level program, the value function of the
lower level program turns out to be convex and hence the bilevel program can be
reformulated as a difference of convex bilevel program. We propose two
algorithms for solving the reformulated difference of convex program and show
their convergence under very mild assumptions. Finally, we conduct numerical
experiments on a bilevel model of support vector machine classification.
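For background, a standard scheme is recalled here for illustration (it is an assumed reference point, not the specific algorithms proposed in the paper): the classical difference-of-convex algorithm for minimizing $f(x) = g(x) - h(x)$ with $g, h$ convex iterates
$$ y^k \in \partial h(x^k), \qquad x^{k+1} \in \arg\min_x \big\{ g(x) - \langle y^k, x \rangle \big\}, $$
so that each step linearizes the concave part $-h$ at the current iterate and solves the resulting convex subproblem.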
|
We propose a new method for unsupervised continual knowledge consolidation in
generative models that relies on the partitioning of Variational Autoencoder's
latent space. Acquiring knowledge about new data samples without forgetting
previous ones is a critical problem of continual learning. Currently proposed
methods achieve this goal by extending the existing model while constraining
its behavior not to degrade on the past data, which does not exploit the full
potential of relations within the entire training dataset. In this work, we
identify this limitation and posit the goal of continual learning as a
knowledge accumulation task. We solve it by continuously re-aligning latent
space partitions that we call bands which are representations of samples seen
in different tasks, driven by the similarity of the information they contain.
In addition, we introduce a simple yet effective method for controlled
forgetting of past data that improves the quality of reconstructions encoded in
latent bands and a latent space disentanglement technique that improves
knowledge consolidation. On top of the standard continual learning evaluation
benchmarks, we evaluate our method on a new knowledge consolidation scenario
and show that the proposed approach outperforms the state of the art by up to a
factor of two across all testing scenarios.
|
Registration is a transformation estimation problem between two point clouds,
which has a unique and critical role in numerous computer vision applications.
The developments of optimization-based methods and deep learning methods have
improved registration robustness and efficiency. Recently, the combinations of
optimization-based and deep learning methods have further improved performance.
However, the connections between optimization-based and deep learning methods
are still unclear. Moreover, with the recent development of 3D sensors and 3D
reconstruction techniques, a new research direction emerges to align
cross-source point clouds. This paper conducts a comprehensive survey covering
both same-source and cross-source registration methods, and summarizes the
connections between optimization-based and deep learning methods, to
provide further research insight. This survey also builds a new benchmark to
evaluate the state-of-the-art registration algorithms in solving cross-source
challenges. Besides, this survey summarizes the benchmark data sets and
discusses point cloud registration applications across various domains.
Finally, this survey proposes potential research directions in this rapidly
growing field.
|
We perform asymptotic analysis for the Euler--Riesz system posed in either
$\mathbb{T}^d$ or $\mathbb{R}^d$ in the high-force regime and establish a
quantified relaxation limit result from the Euler--Riesz system to the
fractional porous medium equation. We provide a unified approach for asymptotic
analysis regardless of the presence of pressure, based on the modulated energy
estimates, the Wasserstein distance of order $2$, and the bounded Lipschitz
distance.
|
Resistive random-access memory is one of the most promising candidates for
the next generation of non-volatile memory technology. However, its crossbar
structure causes severe "sneak-path" interference, which also leads to strong
inter-cell correlation. Recent works have mainly focused on sub-optimal data
detection schemes by ignoring inter-cell correlation and treating sneak-path
interference as independent noise. We propose a near-optimal data detection
scheme that can approach the performance bound of the optimal detection scheme.
Our detection scheme leverages joint data and sneak-path interference recovery
and exploits all inter-cell correlations. The scheme is appropriate for
data detection of large memory arrays with only linear operation complexity.
|
We present a robust version of the life-cycle optimal portfolio choice
problem in the presence of labor income, as introduced in Biffis, Gozzi and
Prosdocimi ("Optimal portfolio choice with path dependent labor income: the
infinite horizon case", SIAM Journal on Control and Optimization, 58(4),
1906-1938.) and Dybvig and Liu ("Lifetime consumption and investment:
retirement and constrained borrowing", Journal of Economic Theory, 145, pp.
885-907). In particular, in Biffis, Gozzi and Prosdocimi the influence of past
wages on the future ones is modelled linearly in the evolution equation of
labor income, through a given weight function. The optimization relies on the
resolution of an infinite dimensional HJB equation. We improve the state of the
art in three ways. First, we allow the weight to be a Radon measure. This
accommodates more realistic weighting of the sticky wages, e.g. on a discrete
temporal grid according to some periodic income. Second, there is a general
correlation structure between labor income and the stock market. This
naturally affects the optimal hedging demand, which may increase or decrease
according to the correlation sign. Third, we allow the weight to change with
time, possibly lacking perfect identification. The uncertainty is specified by
a given set of Radon measures $K$, in which the weight process takes values.
This captures the inevitable uncertainty about how the past affects the future, and
includes the standard case of error bounds on a specific estimate for the
weight. Under uncertainty averse preferences, the decision maker takes a maxmin
approach to the problem. Our analysis confirms the intuition: in the infinite
dimensional setting, the optimal policy remains the best investment strategy
under the worst case weight.
|
Conventional multi-agent path planners typically determine a path that
optimizes a single objective, such as path length. Many applications, however,
may require multiple objectives, say time-to-completion and fuel use, to be
simultaneously optimized in the planning process. Often, these criteria may not
be readily compared and sometimes lie in competition with each other. Simply
applying standard multi-objective search algorithms to multi-agent path finding
may prove to be inefficient because the size of the space of possible
solutions, i.e., the Pareto-optimal set, can grow exponentially with the number
of agents (the dimension of the search space). This paper presents an approach
that bypasses this so-called curse of dimensionality by leveraging our prior
multi-agent work with a framework called subdimensional expansion. One example
of subdimensional expansion, when applied to A*, is called M*, which was
limited to a single objective function. We combine principles of dominance and
subdimensional expansion to create a new algorithm named multi-objective M*
(MOM*), which dynamically couples agents for planning only when those agents
have to "interact" with each other. MOM* computes the complete Pareto-optimal
set for multiple agents efficiently and naturally trades off sub-optimal
approximations of the Pareto-optimal set and computational efficiency. Our
approach is able to find the complete Pareto-optimal set for problem instances
with hundreds of solutions which the standard multi-objective A* algorithms
could not find within a bounded time.
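A minimal sketch of the dominance principle mentioned above, i.e. the standard test used to prune candidate cost vectors when maintaining a Pareto-optimal set; this is illustrative only, not the MOM* implementation.

    def dominates(c1, c2):
        # c1 dominates c2: no worse in every objective, strictly better in one
        return (all(a <= b for a, b in zip(c1, c2))
                and any(a < b for a, b in zip(c1, c2)))

    def pareto_front(costs):
        # keep only the non-dominated cost vectors
        return [c for c in costs if not any(dominates(other, c) for other in costs)]

    candidates = [(10, 4), (8, 7), (9, 5), (12, 3), (9, 4)]
    print(pareto_front(candidates))   # [(8, 7), (12, 3), (9, 4)]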
|
The chromatic index of a cubic graph is either 3 or 4. Edge-Kempe switching,
which can be used to transform edge-colorings, is here considered for
3-edge-colorings of cubic graphs. Computational results for edge-Kempe
switching of cubic graphs up to order 30 and bipartite cubic graphs up to order
36 are tabulated. Families of cubic graphs of orders $4n+2$ and $4n+4$ with
$2^n$ edge-Kempe equivalence classes are presented; it is conjectured that
there are no cubic graphs with more edge-Kempe equivalence classes. New
families of nonplanar bipartite cubic graphs with exactly one edge-Kempe
equivalence class are also obtained. Edge-Kempe switching is further connected
to cycle switching of Steiner triple systems, for which an improvement of the
established classification algorithm is presented.
|
Type-B permutation tableaux are combinatorial objects introduced by Lam and
Williams that have an interesting connection with the partially asymmetric
simple exclusion process (PASEP). In this paper, we compute the expected value
of several statistics on these tableaux. Some of these computations are
motivated by a similar paper on permutation tableaux. Others are motivated by
the PASEP. In particular, we compute the expected number of rows, unrestricted
rows, diagonal ones, adjacent south steps, and adjacent west steps.
|
Adversarial training (AT) is currently one of the most successful methods to
obtain the adversarial robustness of deep neural networks. However, the
phenomenon of robust overfitting, i.e., the robustness starts to decrease
significantly during AT, has been problematic, not only making practitioners
consider a bag of tricks for successful training, e.g., early stopping, but
also incurring a significant generalization gap in the robustness. In this
paper, we propose an effective regularization technique that prevents robust
overfitting by optimizing an auxiliary `consistency' regularization loss during
AT. Specifically, we discover that data augmentation is a quite effective tool
to mitigate the overfitting in AT, and develop a regularization that forces the
predictive distributions obtained after attacking two different augmentations
of the same instance to be similar to each other. Our experimental results
demonstrate that such a simple regularization technique brings significant
improvements in the test robust accuracy of a wide range of AT methods. More
remarkably, we also show that our method could significantly help the model to
generalize its robustness against unseen adversaries, e.g., other types or
larger perturbations compared to those used during training. Code is available
at https://github.com/alinlab/consistency-adversarial.
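As an illustration only (a sketch under assumptions, not the exact loss in the repository above): one plausible instantiation of the consistency term is a Jensen-Shannon style divergence between the predictive distributions obtained after attacking two augmented views of the same batch.

    import numpy as np

    def kl(p, q, eps=1e-12):
        # row-wise KL divergence between categorical distributions
        p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
        return np.sum(p * (np.log(p) - np.log(q)), axis=1)

    def consistency_loss(p1, p2):
        # Jensen-Shannon style agreement between the predictions for the two
        # attacked augmentations of the same instances (lower = more consistent)
        m = 0.5 * (p1 + p2)
        return np.mean(0.5 * kl(p1, m) + 0.5 * kl(p2, m))

    p1 = np.array([[0.7, 0.1, 0.1, 0.1], [0.2, 0.5, 0.2, 0.1]])
    p2 = np.array([[0.6, 0.2, 0.1, 0.1], [0.1, 0.6, 0.2, 0.1]])
    print(consistency_loss(p1, p2))   # small value: the two views largely agree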
|
Deep learning techniques are increasingly being adopted for classification
tasks over the past decade, yet explaining how deep learning architectures can
achieve state-of-the-art performance is still an elusive goal. While all the
training information is embedded deeply in a trained model, we still do not
understand much about its performance by only analyzing the model. This paper
examines the neuron activation patterns of deep learning-based classification
models and explores whether the models' performances can be explained through
neurons' activation behavior. We propose two approaches: one that models
neurons' activation behavior as a graph and examines whether the neurons form
meaningful communities, and the other examines the predictability of neurons'
behavior using entropy. Our comprehensive experimental study reveals that both
the community quality and entropy can provide new insights into the deep
learning models' performances, thus paving a novel way of explaining deep
learning models directly from the neurons' activation patterns.
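A hedged sketch of the entropy idea: binarize each neuron's activation over a set of inputs and compute the Shannon entropy of its on/off behaviour. This is illustrative, not the paper's exact procedure; the threshold and toy activations are assumptions.

    import numpy as np

    def activation_entropy(activations, threshold=0.0):
        # Entropy (bits) of each neuron's binarized firing pattern across inputs;
        # low entropy means the neuron's behaviour is highly predictable.
        p_on = (activations > threshold).mean(axis=0)
        p = np.stack([p_on, 1.0 - p_on])
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = np.where(p > 0, p * np.log2(p), 0.0)
        return -terms.sum(axis=0)

    rng = np.random.default_rng(0)
    acts = rng.standard_normal((1000, 5))   # toy post-activation values, 5 neurons
    acts[:, 0] = 1.0                        # a neuron that always fires: entropy 0
    print(activation_entropy(acts))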
|
Cavity ring-down spectroscopy is a ubiquitous optical method used to study
light-matter interactions with high resolution, sensitivity and accuracy.
However, it has never been performed with the multiplexing advantages of direct
frequency comb spectroscopy without sacrificing orders of magnitude of
resolution. We present dual-comb cavity ring-down spectroscopy (DC-CRDS) based
on the parallel heterodyne detection of ring-down signals with a local
oscillator comb to yield absorption and dispersion spectra. These spectra are
obtained from widths and positions of cavity modes. We present two approaches
which leverage the dynamic cavity response to coherently or randomly driven
changes in the amplitude or frequency of the probe field. Both techniques yield
accurate spectra of methane - an important greenhouse gas and breath biomarker.
The high sensitivity and accuracy of broadband DC-CRDS show promise for
applications such as studies of the structure and dynamics of large molecules,
multispecies trace gas detection, and isotopic composition measurements.
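For orientation, a minimal sketch of the textbook ring-down relation that underlies any CRDS measurement, $\alpha = (1/c)\,(1/\tau - 1/\tau_0)$, applied to synthetic exponential decays; this is generic background, not the dual-comb heterodyne processing described above.

    import numpy as np

    C = 299_792_458.0   # speed of light (m/s)

    def ring_down_time(t, signal):
        # fit signal(t) = A * exp(-t / tau) via a linear fit to log(signal)
        slope, _ = np.polyfit(t, np.log(signal), 1)
        return -1.0 / slope

    def absorption_coefficient(tau, tau_empty):
        # standard CRDS relation: alpha = (1/c) * (1/tau - 1/tau_empty)
        return (1.0 / C) * (1.0 / tau - 1.0 / tau_empty)

    t = np.linspace(0.0, 50e-6, 500)
    trace_empty = np.exp(-t / 10e-6)    # empty cavity, tau0 = 10 us
    trace_sample = np.exp(-t / 8e-6)    # with absorber,  tau  =  8 us
    tau0, tau = ring_down_time(t, trace_empty), ring_down_time(t, trace_sample)
    print(absorption_coefficient(tau, tau0))   # ~8.3e-5 per metre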
|
Using direct numerical simulations of rotating Rayleigh-B\'enard convection,
we explore the transitions between turbulent states from a 3D flow state
towards a quasi-2D condensate known as the large-scale vortex (LSV). We vary
the Rayleigh number $Ra$ as control parameter and study the system response
(strength of the LSV) in terms of order parameters assessing the energetic
content in the flow and the upscale energy flux. By sensitively probing the
boundaries of the domain of existence of the LSV, we find discontinuous
transitions and we identify the presence of a hysteresis loop as well as
nucleation & growth type of dynamics, manifesting a remarkable correspondence
with first-order phase transitions in equilibrium statistical mechanics. We
show furthermore that the creation of the condensate state coincides with a
discontinuous transition of the energy transport into the largest mode of the
system.
|
Radial imaging techniques, such as projection-reconstruction (PR), are used
in magnetic resonance imaging (MRI) for dynamic imaging, angiography, and
short-T2 imaging. They are robust to flow and motion, have diffuse aliasing
patterns, and support short readouts and echo times. One drawback is that
standard implementations do not support anisotropic field-of-view (FOV) shapes,
which are used to match the imaging parameters to the object or
region-of-interest. A set of fast, simple algorithms for 2-D and 3-D PR, and
3-D cones acquisitions are introduced that match the sampling density in
frequency space to the desired FOV shape. Tailoring the acquisitions allows for
reduction of aliasing artifacts in undersampled applications or scan time
reductions without introducing aliasing in fully-sampled applications. It also
makes possible new radial imaging applications that were previously unsuitable,
such as imaging elongated regions or thin slabs. 2-D PR longitudinal leg images
and thin-slab, single breath-hold 3-D PR abdomen images, both with isotropic
resolution, demonstrate these new possibilities. No scan-time-to-volume
efficiency is lost by using anisotropic FOVs. The acquisition trajectories can
be computed on a scan-by-scan basis.
|
We develop a visual analytics system, NewsKaleidoscope, to investigate how
news reporting of events varies. NewsKaleidoscope combines several backend
text language processing techniques with a coordinated visualization interface
tailored for visualization non-expert users. To robustly evaluate
NewsKaleidoscope, we conduct a trio of user studies. (1) A usability study with
news novices assesses the overall system and the specific insights promoted for
journalism-agnostic users. (2) A follow-up study with news experts assesses the
insights promoted for journalism-savvy users. (3) Based on identified system
limitations in these two studies, we amend NewsKaleidoscope's design and
conduct a third study to validate these improvements. Results indicate that,
for both news novices and experts, NewsKaleidoscope supports an effective, task-driven
workflow for analyzing the diversity of news coverage about events, though
journalism expertise has a significant influence on the user insights and
takeaways. Our insights while developing and evaluating NewsKaleidoscope can
aid future interface designs that combine visualization with natural language
processing to analyze coverage diversity in news event reporting.
|
We investigate the possibility of simultaneously explaining inflation, the
neutrino masses and the baryon asymmetry through extending the Standard Model
by a triplet Higgs. The neutrino masses are generated by the vacuum expectation
value of the triplet Higgs, while a combination of the triplet and doublet
Higgs fields plays the role of the inflaton. Additionally, the dynamics of the
triplet, and its inherent lepton number violating interactions, lead to the
generation of a lepton asymmetry during inflation. The resultant baryon
asymmetry, inflationary predictions and neutrino masses are consistent with
current observational and experimental results.
|
Prosody plays an important role in characterizing the style of a speaker or
an emotion, but most non-parallel voice or emotion style transfer algorithms do
not convert any prosody information. Two major components of prosody are pitch
and rhythm. Disentangling the prosody information, particularly the rhythm
component, from the speech is challenging because it involves breaking the
synchrony between the input speech and the disentangled speech representation.
As a result, most existing prosody style transfer algorithms would need to rely
on some form of text transcriptions to identify the content information, which
confines their application to high-resource languages only. Recently,
SpeechSplit has made sizeable progress towards unsupervised prosody style
transfer, but it is unable to extract high-level global prosody style in an
unsupervised manner. In this paper, we propose AutoPST, which can disentangle
global prosody style from speech without relying on any text transcriptions.
AutoPST is an Autoencoder-based Prosody Style Transfer framework with a
thorough rhythm removal module guided by self-expressive representation
learning. Experiments on different style transfer tasks show that AutoPST can
effectively convert prosody that correctly reflects the styles of the target
domains.
|