3D mask face presentation attack detection (PAD) plays a vital role in
securing face recognition systems from emergent 3D mask attacks. Recently,
remote photoplethysmography (rPPG) has been developed as an intrinsic liveness
clue for 3D mask PAD without relying on the mask appearance. However, rPPG
features for 3D mask PAD still require manual design based on expert knowledge,
which limits further progress in the deep learning and big data era. In
this letter, we propose a pure rPPG transformer (TransRPPG) framework for
learning intrinsic liveness representations efficiently. First, rPPG-based
multi-scale spatial-temporal maps (MSTmap) are constructed from facial skin and
background regions. Then the transformer fully mines the global relationship
within MSTmaps for liveness representation, and gives a binary prediction for
3D mask detection. Comprehensive experiments are conducted on two benchmark
datasets to demonstrate the efficacy of TransRPPG in both intra- and
cross-dataset testing. Our TransRPPG is lightweight and efficient (with only
547K parameters and 763M FLOPs), which is promising for mobile-level
applications.
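A minimal sketch of the described pipeline (not the authors' exact architecture): an MSTmap-like input of shape (regions x frames) is split into temporal tokens and classified by a small transformer encoder. Token length, model width, and layer counts are illustrative assumptions.

```python
# Illustrative sketch only: MSTmap tokens -> transformer encoder -> binary
# liveness prediction. Dimensions and hyperparameters are assumptions.
import torch
import torch.nn as nn

class MSTmapTransformer(nn.Module):
    def __init__(self, n_rows=63, n_frames=300, token_len=30, d_model=64):
        super().__init__()
        assert n_frames % token_len == 0
        self.token_len = token_len
        n_tokens = n_rows * (n_frames // token_len)
        self.embed = nn.Linear(token_len, d_model)           # one token per (row, time chunk)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))   # classification token
        self.pos = nn.Parameter(torch.zeros(1, n_tokens + 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.head = nn.Linear(d_model, 2)                     # genuine face vs. 3D mask

    def forward(self, mstmap):                                # (B, n_rows, n_frames)
        b, r, t = mstmap.shape
        tokens = mstmap.reshape(b, r * (t // self.token_len), self.token_len)
        x = self.embed(tokens)
        x = torch.cat([self.cls.expand(b, -1, -1), x], dim=1) + self.pos
        x = self.encoder(x)
        return self.head(x[:, 0])                             # logits from the CLS token

logits = MSTmapTransformer()(torch.randn(2, 63, 300))
print(logits.shape)  # torch.Size([2, 2])
```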
|
We present an analytic model for the splashback mass function of dark matter
halos, which is parameterized by a single coefficient and constructed in the
framework of the generalized excursion set theory and the self-similar
spherical infall model. The value of the single coefficient that quantifies the
diffusive nature of the splashback boundary is determined at various redshifts
by comparing the model with the numerical results from the Erebos N-body
simulations for the Planck and the WMAP7 cosmologies. Showing that the analytic
model with the best-fit coefficient provides excellent matches to the numerical
results in the mass range of $5\le M/(10^{12}h^{-1}\,M_{\odot})< 10^{3}$, we
employ the Bayesian and Akaike Information Criterion tests to confirm that our
model is preferred by the numerical results over the previous models at
redshifts of $0.3\le z\le 3$ for both cosmologies. It is also found that
the diffusion coefficient decreases almost linearly with redshift, converging
to zero at a certain threshold redshift, $z_{c}$, whose value significantly
differs between the Planck and WMAP7 cosmologies. Our result implies that the
splashback mass function of dark matter halos at $z\ge z_{c}$ is well described
by a parameter-free analytic formula and that $z_{c}$ may have the potential to
independently constrain the initial conditions of the universe.
|
Spiking Neural Networks (SNNs) have the potential to achieve low energy
consumption due to their biologically inspired sparse computation. Several studies have
shown that the off-chip memory (DRAM) accesses are the most energy-consuming
operations in SNN processing. However, state-of-the-art SNN systems do not
optimize the DRAM energy-per-access, thereby hindering high energy efficiency.
To substantially minimize the DRAM energy-per-access, a key
knob is to reduce the DRAM supply voltage, but this may lead to DRAM errors
(i.e., the so-called approximate DRAM). Towards this, we propose SparkXD, a
novel framework that provides a comprehensive conjoint solution for resilient
and energy-efficient SNN inference using low-power DRAMs subjected to
voltage-induced errors. The key mechanisms of SparkXD are: (1) improving the
SNN error tolerance through fault-aware training that considers bit errors from
approximate DRAM, (2) analyzing the error tolerance of the improved SNN model
to find the maximum tolerable bit error rate (BER) that meets the targeted
accuracy constraint, and (3) energy-efficient DRAM data mapping for the
resilient SNN model that maps the weights in the appropriate DRAM location to
minimize the DRAM access energy. Through these mechanisms, SparkXD mitigates
the negative impact of DRAM (approximation) errors, and provides the required
accuracy. The experimental results show that, for a target accuracy within 1%
of the baseline design (i.e., SNN without DRAM errors), SparkXD reduces the
DRAM energy by ca. 40% on average across different network sizes.
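A minimal sketch of the fault-aware training idea (mechanism 1): weights are viewed as 8-bit integers and random bit flips are injected at a given bit-error rate during the forward pass. The quantization scheme and the BER value are illustrative assumptions, not SparkXD's exact configuration.

```python
# Sketch of approximate-DRAM bit-error injection for fault-aware training.
import numpy as np

def inject_bit_errors(weights, ber, rng, n_bits=8):
    """Flip each stored bit of the 8-bit quantized weights with probability `ber`."""
    scale = np.max(np.abs(weights)) / 127.0 + 1e-12
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    bits = np.unpackbits(q.view(np.uint8))                  # one entry per stored bit
    flips = rng.random(bits.shape) < ber
    faulty = np.packbits(np.bitwise_xor(bits, flips.astype(np.uint8)))
    return (faulty.view(np.int8).astype(np.float32) * scale).reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 64)).astype(np.float32)
w_faulty = inject_bit_errors(w, ber=1e-3, rng=rng)         # use w_faulty in the forward pass
print("mean absolute perturbation:", np.mean(np.abs(w_faulty - w)))
```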
|
Franson-type nonlocal quantum correlation based on the particle nature of
quantum mechanics has been intensively studied for both fundamental physics and
potential applications of quantum key distribution between remotely separated
parties over the last several decades. Recently, a coherence theory of
deterministic quantum features has been applied for Franson-type nonlocal
correlation [arXiv:2102.06463] to understand its quantumness in a purely
classical manner, where the resulting features are deterministic and
macroscopic. Here, nearly sub-Poisson distributed coherent photon pairs
obtained from an attenuated laser are used for the experimental demonstrations
of the coherence Franson-type nonlocal correlation. As an essential requirement
of quantum mechanics, quantum superposition is macroscopically provided using
polarization basis-randomness via a half-wave plate, satisfying fairness
compared with the original scheme based on phase bases. The observed coherence
quantum feature of the modified Franson correlation successfully demonstrates
the proposed wave nature of quantum mechanics, where the unveiled nonlocal
correlation relies on a definite phase relation between the paired coherent
photons.
|
For given integers $n$ and $d$, both at least 2, we consider a homogeneous
multivariate polynomial $f_d$ of degree $d$ in variables indexed by the edges
of the complete graph on $n$ vertices and coefficients depending on
cardinalities of certain unions of edges. Cardinaels, Borst and Van Leeuwaarden
(arXiv:2111.05777, 2021) asked whether $f_d$, which arises in a model of
job-occupancy in redundancy scheduling, attains its minimum over the standard
simplex at the uniform probability vector. Brosch, Laurent and Steenkamp [SIAM
J. Optim. 31 (2021), 2227--2254] proved that $f_d$ is convex over the standard
simplex if $d=2$ and $d=3$, implying the desired result for these $d$. We give
a symmetry reduction to show that for fixed $d$, the polynomial is convex over
the standard simplex (for all $n\geq 2$) if a constant number of constant
matrices (with size and coefficients independent of $n$) are positive
semidefinite. This result is then used in combination with a computer-assisted
verification to show that the polynomial $f_d$ is convex for $d\leq 9$.
|
Mitral valve repair is a surgery to restore the function of the mitral valve.
To achieve this, a prosthetic ring is sewn onto the mitral annulus. Analyzing
the sutures, which are punctured through the annulus for ring implantation, can
be useful in surgical skill assessment, for quantitative surgery and for
positioning a virtual prosthetic ring model in the scene via augmented reality.
This work presents a neural network approach which detects the sutures in
endoscopic images of mitral valve repair and therefore solves a landmark
detection problem with a varying number of landmarks, as opposed to most other
existing deep learning-based landmark detection approaches. The neural network
is trained separately on two data collections from different domains with the
same architecture and hyperparameter settings. The datasets consist of more
than 1,300 stereo frame pairs each, with a total of over 60,000 annotated
landmarks. The proposed heatmap-based neural network achieves a mean positive
predictive value (PPV) of 66.68$\pm$4.67% and a mean true positive rate (TPR)
of 24.45$\pm$5.06% on the intraoperative test dataset and a mean PPV of
81.50$\pm$5.77% and a mean TPR of 61.60$\pm$6.11% on a dataset recorded during
surgical simulation. The best detection results are achieved when the camera is
positioned above the mitral valve with good illumination. A detection from a
sideward view is also possible if the mitral valve is well perceptible.
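One way to obtain a variable number of landmarks from a predicted heatmap is to extract local maxima above a confidence threshold, sketched below. The threshold and window size are illustrative assumptions, not the paper's post-processing settings.

```python
# Illustrative heatmap post-processing for a varying number of suture landmarks.
import numpy as np
from scipy.ndimage import maximum_filter

def heatmap_to_landmarks(heatmap, threshold=0.5, window=7):
    """Return (row, col) coordinates of local maxima above `threshold`."""
    local_max = maximum_filter(heatmap, size=window) == heatmap
    peaks = np.argwhere(local_max & (heatmap >= threshold))
    return [tuple(p) for p in peaks]

# Toy heatmap with two blobs of different confidence.
h = np.zeros((64, 64))
h[20, 30], h[45, 10] = 0.9, 0.7
print(heatmap_to_landmarks(h))   # [(20, 30), (45, 10)]
```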
|
It has long been believed that the core-hole lifetime (CHL) of an atom is an
intrinsic physical property, and controlling it is important yet very
difficult. Here, the CHL of the 2p state of the W atom is manipulated experimentally through
adjusting the emission rate of a resonant fluorescence channel with the
assistance of an x-ray thin-film planar cavity. The emission rate is
accelerated by a factor linearly proportional to the cavity field amplitude,
which can be directly controlled by choosing different cavity modes or changing
the angle offset in experiment. This experimental observation is in good
agreement with theoretical predictions. It is found that the manipulated
resonant fluorescence channel can even dominate the CHL. The controllable CHL
realized here will facilitate nonlinear investigations and modern x-ray
scattering techniques in the hard x-ray region.
|
Hadronic cross sections are important ingredients in many of the ongoing
research methods in high-energy nuclear physics, and it is always important to
measure and/or calculate the probabilities of different types of reactions. In
heavy-ion transport simulations at a few GeV energies, these hadronic cross
sections are essential, and so far mostly the exclusive processes are used;
however, if one is interested in total production rates, the inclusive cross
sections must also be known. In this paper, we introduce a
statistics-based method, which is able to give good estimates of both exclusive and
inclusive cross sections in the energy range of a few GeV. The method
and its estimates for poorly known cross sections will be used in a
Boltzmann-Uehling-Uhlenbeck (BUU) type off-shell transport code to explain
charmonium and bottomonium mass shifts in heavy-ion collisions.
|
In machine learning (ML), it is in general challenging to provide a detailed
explanation on how a trained model arrives at its prediction. Thus, usually we
are left with a black-box, which from a scientific standpoint is not
satisfactory. Even though numerous methods have been recently proposed to
interpret ML models, somewhat surprisingly, interpretability in ML is far from
being a consensual concept, with diverse and sometimes contrasting motivations
for it. Reasonable candidate properties of interpretable models could be model
transparency (i.e. how does the model work?) and post hoc explanations (i.e.,
what else can the model tell me?). Here, I review the current debate on ML
interpretability and identify key challenges that are specific to ML applied to
materials science.
|
We propose a reliable scheme to recover the photon blockade effect in the
dispersive Jaynes-Cummings model, which describes a two-level atom coupled to a
single-mode cavity field in the large-detuning regime. This is achieved by
introducing a transverse driving of the atom, which induces an effective photonic
nonlinearity. The eigenenergy spectrum of the system is obtained analytically, and
the photon blockade effect is confirmed by numerically calculating the
photon-number distributions and the equal-time second-order correlation
function of the cavity field in the presence of system dissipations. We find
that the photon blockade effect can be recovered for appropriate atomic and
cavity-field drivings. This work provides a new method to generate photon
blockade in dispersively coupled quantum optical systems.
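The equal-time second-order correlation function can be computed numerically from the dissipative steady state; a sketch with QuTiP is given below. All parameter values, the drive amplitudes, and the use of QuTiP are illustrative assumptions, not the paper's settings.

```python
# Sketch (arbitrary parameters): steady-state g2(0) of a dissipative,
# dispersively detuned atom-cavity system with atomic and cavity drives.
# g2(0) << 1 signals photon blockade.
import numpy as np
from qutip import destroy, qeye, tensor, steadystate, expect

N = 10                                   # cavity Fock-space truncation
a = tensor(destroy(N), qeye(2))          # cavity annihilation operator
sm = tensor(qeye(N), destroy(2))         # atomic lowering operator

g, delta = 1.0, 20.0                     # coupling and (large) atom-cavity detuning
eps_c, eps_a = 0.01, 0.05                # cavity drive and transverse atom drive
kappa, gamma = 0.02, 0.02                # cavity and atom decay rates

H = (delta * sm.dag() * sm
     + g * (a.dag() * sm + a * sm.dag())
     + eps_c * (a + a.dag())
     + eps_a * (sm + sm.dag()))

rho = steadystate(H, [np.sqrt(kappa) * a, np.sqrt(gamma) * sm])
g2 = expect(a.dag() * a.dag() * a * a, rho) / expect(a.dag() * a, rho) ** 2
print("g2(0) =", g2)
```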
|
The reconstruction of the spectral function from the correlation function in
Euclidean space is a challenging task. In this paper, we employ machine
learning techniques, in the form of radial basis function networks, to
reconstruct the spectral function from a finite number of correlation data points.
To test our method, we first generate one type of correlation data using a mock
spectral function built by mixing several Breit-Wigner propagators. We find that,
compared with other traditional methods such as TSVD, Tikhonov, and MEM, our approach
gives a continuous and unified reconstruction for both positive-definite and
non-positive-definite spectral functions, which is especially useful for studying the QCD
phase transition. Moreover, our approach has considerably better performance in
the low frequency region. This has advantages for the extraction of transport
coefficients which are related to the zero frequency limit of the spectral
function. With mock data generated through a model spectral function of the
stress-energy tensor, we find that our method gives a precise and stable extraction
of the transport coefficients.
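A minimal sketch of the RBF idea: the spectral function is expanded in Gaussian basis functions and the coefficients are fitted to mock correlator data by regularized least squares. The kernel, the mock spectrum, and the regularization are generic stand-ins, not the paper's setup.

```python
# Illustrative RBF-based spectral reconstruction with a mock kernel and data.
import numpy as np

omega = np.linspace(0.0, 10.0, 400)
d_omega = omega[1] - omega[0]
p = np.linspace(0.1, 5.0, 30)                                     # Euclidean "frequencies"
K = omega / (np.pi * (omega[None, :] ** 2 + p[:, None] ** 2))     # assumed kernel

def breit_wigner(w, m, gam):                                      # mock spectral peaks
    return gam * w / ((w ** 2 - m ** 2) ** 2 + gam ** 2 * w ** 2)

rho_true = breit_wigner(omega, 2.0, 0.5) + 0.5 * breit_wigner(omega, 5.0, 1.0)
data = K @ rho_true * d_omega + 1e-5 * np.random.default_rng(0).normal(size=p.size)

centers, width = np.linspace(0.0, 10.0, 40), 0.5                  # Gaussian RBF basis
Phi = np.exp(-0.5 * ((omega[:, None] - centers[None, :]) / width) ** 2)
A = K @ Phi * d_omega                                             # maps coefficients to data

lam = 1e-6                                                        # small ridge term
coeff = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ data)
rho_rec = Phi @ coeff
print("relative L2 error:", np.linalg.norm(rho_rec - rho_true) / np.linalg.norm(rho_true))
```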
|
Non-singular weighted surface algebras satisfy the necessary condition found
in [6] for the existence of cluster tilting modules. We show that any such algebra
whose Gabriel quiver is bipartite has a module satisfying the necessary Ext
vanishing condition. We show that it is 3-cluster tilting precisely for the
non-singular triangular or spherical algebras but not for any other weighted
surface algebra with bipartite Gabriel quiver.
|
Field emission tips with apex radius of curvature below 100 nm are not
adequately described by the standard theoretical models based on the
Fowler-Nordheim and Murphy-Good formalisms. This is due to the breakdown of the
`constant electric field' assumption within the tunneling region leading to
substantial errors in current predictions. A uniformly applicable
curvature-corrected field emission theory requires that the tunneling potential
be approximately universal irrespective of the emitter shape. Using the line
charge model, it is established analytically that smooth generic emitter tips
approximately follow this universal trend when the anode is far away. This is
verified using COMSOL for various emitter shapes including the locally
non-parabolic `hemisphere on a cylindrical post'. It is also found numerically
that the curvature-corrected tunneling potential provides an adequate
approximation when the anode is in close proximity as well as in the presence
of other emitters.
|
The direct shooting method is a classic approach for the solution of Optimal
Control Problems (OCPs). It parameterizes the control variables and transforms
the OCP into a Nonlinear Programming (NLP) problem. This method is
easy to use, and it often introduces fewer parameters than all-variable
parameterization methods such as the pseudospectral (PS) method. However, it has
long been believed that its solution is not guaranteed to satisfy the optimality
conditions of the OCP and that the costates are not available when using this method.
In this paper, we show that the direct shooting method can also provide
costate information, and we prove that both the state and the costate
solutions converge to their optimal counterparts as long as the control variable tends
to the optimal one, while the parameterized control can approach the optimal control
under a reasonable parameterization. This justifies the use of the direct shooting
method for optimal control computation.
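A sketch of the direct shooting method on an illustrative double-integrator problem: the control is piecewise constant, the dynamics are integrated forward, and the resulting NLP is solved with scipy. The example problem and solver choices are assumptions for illustration only.

```python
# Direct shooting sketch: minimize the integral of u^2 for x'' = u,
# driving the state from (0, 0) to (1, 0) over unit time.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

N = 20                                    # number of control intervals
t_grid = np.linspace(0.0, 1.0, N + 1)

def simulate(u):
    x, cost = np.array([0.0, 0.0]), 0.0
    for k in range(N):                    # integrate one control interval at a time
        sol = solve_ivp(lambda t, x: [x[1], u[k]], (t_grid[k], t_grid[k + 1]), x)
        x = sol.y[:, -1]
        cost += u[k] ** 2 * (t_grid[k + 1] - t_grid[k])
    return cost, x

objective = lambda u: simulate(u)[0]
terminal_constraint = lambda u: simulate(u)[1] - np.array([1.0, 0.0])

res = minimize(objective, np.zeros(N), method="SLSQP",
               constraints={"type": "eq", "fun": terminal_constraint})
print("optimal cost:", res.fun)           # analytic optimum is 12 for this problem
# In line with the abstract, costate information can in principle be recovered
# from the multipliers of the NLP constraints.
```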
|
The breakthrough of achieving fully homomorphic encryption sparked enormous
studies on where and how to apply homomorphic encryption schemes so that
operations can be performed on encrypted data without the secret key while
still obtaining the correct outputs. Due to the computational cost, inflated
ciphertext size and limited arithmetic operations that are allowed in most
encryption schemes, feasible applications of homomorphic encryption are few.
While theorists are working on the mathematical and cryptographical foundations
of homomorphic encryption in order to overcome the current limitations,
practitioners are also re-designing queries and algorithms to adapt the
functionalities of the current encryption schemes. As an initial study on
working with homomorphically encrypted graphs, this paper provides an
easy-to-implement interactive algorithm to check whether or not a
homomorphically encrypted graph is chordal. This algorithm is simply a
refactoring of an existing method so that it runs on encrypted adjacency matrices.
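For reference, a plaintext version of a standard chordality test on an adjacency matrix (maximum cardinality search followed by a perfect-elimination check) is sketched below; the paper's contribution, the interactive refactoring that operates on encrypted adjacency matrices, is not shown here.

```python
# Plaintext chordality check: MCS ordering + simplicial (clique) verification.
import numpy as np

def is_chordal(adj):
    n = len(adj)
    visited, weight, order = [False] * n, [0] * n, []
    for _ in range(n):                                  # maximum cardinality search
        v = max((i for i in range(n) if not visited[i]), key=lambda i: weight[i])
        visited[v] = True
        order.append(v)
        for u in range(n):
            if adj[v][u] and not visited[u]:
                weight[u] += 1
    position = {v: i for i, v in enumerate(order)}
    for v in order:                                     # perfect elimination check
        earlier = [u for u in range(n) if adj[v][u] and position[u] < position[v]]
        for i in range(len(earlier)):                   # earlier neighbours must form a clique
            for j in range(i + 1, len(earlier)):
                if not adj[earlier[i]][earlier[j]]:
                    return False
    return True

cycle4 = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])        # C4: not chordal
chorded = np.array([[0,1,1,1],[1,0,1,0],[1,1,0,1],[1,0,1,0]])       # C4 plus a chord
print(is_chordal(cycle4), is_chordal(chorded))                       # False True
```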
|
This paper adopts parallel computing systems, on both CPU and GPU, for predictive
analysis, leveraging the Spark big data platform. A traffic dataset is used
to predict traffic jams in Los Angeles County. It is collected from a
popular platform in the USA for tracking road information using
device data and reports shared by users. The large-scale traffic
dataset can be stored and processed using both GPU and CPU in this scalable big
data system. The major contribution of this paper is to improve the
performance of machine learning in distributed parallel computing systems with
GPUs to predict traffic congestion. We show that parallel computing can
be achieved using both GPU and CPU with the existing Apache Spark platform. Our
method is applicable to other large-scale datasets in different domains.
The process modeling, as well as the results, is interpreted using computing time
and the metrics AUC, precision, and recall. This should help traffic management
in smart cities.
|
The Cauchy problem for the Hardy-H\'enon parabolic equation is studied in the
critical and subcritical regime in weighted Lebesgue spaces on the Euclidean
space $\mathbb{R}^d$. Well-posedness for singular initial data and the existence of
non-radial forward self-similar solutions of the problem were previously shown
only for the Hardy and Fujita cases ($\gamma\le 0$) in earlier works. The
weighted spaces enable us to treat the potential $|x|^{\gamma}$ as an increase
or decrease of the weight, whereby we can prove well-posedness of the problem
for all $\gamma$ with $-\min\{2,d\}<\gamma$ including the H\'enon case
($\gamma>0$). As a byproduct of the well-posedness, the self-similar solutions
to the problem are also constructed for all $\gamma$ without restrictions. A
non-existence result for local solutions with supercritical data is also shown.
Therefore, our critical exponent $s_c$ turns out to be optimal with regard to
solvability.
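For reference, the standard form of the Hardy-Hénon parabolic equation is restated below; this is the common notation, and the paper's exact exponent conventions may differ.

```latex
% Standard form of the Hardy-Henon parabolic equation (common notation):
\begin{align}
  \partial_t u - \Delta u &= |x|^{\gamma} |u|^{\alpha - 1} u,
  \qquad (t, x) \in (0, T) \times \mathbb{R}^d, \\
  u(0, x) &= u_0(x),
\end{align}
% gamma < 0: Hardy case,  gamma = 0: Fujita case,  gamma > 0: Henon case.
```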
|
We present inverted spin-valves fabricated from CVD-grown bilayer graphene
(BLG) that show more than a doubling in device performance at room temperature
compared to state-of-the-art bilayer graphene spin-valves. This is made
possible by a PDMS droplet-assisted full-dry transfer technique that
compensates for previous process drawbacks in device fabrication.
Gate-dependent Hanle measurements show spin lifetimes of up to 5.8 ns and a
spin diffusion length of up to 26 $\mu$m at room temperature combined with a
charge carrier mobility of $\approx$ 24 000 cm$^{2}$(Vs)$^{-1}$ for the best
device. Our results demonstrate that CVD-grown BLG exhibits room-temperature
spin transport properties as good as those of both CVD-grown and even exfoliated
single-layer graphene.
|
The intelligent reflecting surface (IRS) is a promising new paradigm in
wireless communications for meeting the growing connectivity demands in
next-generation mobile networks. The IRS, also known as a software-controlled
metasurface, consists of an array of adjustable radio wave reflectors, enabling
smart radio environments, e.g., for enhancing the signal-to-noise ratio (SNR)
and spatial diversity of wireless channels. Research on IRS to date has been
largely focused on constructive applications. In this work, we demonstrate for
the first time that the IRS provides a practical low-cost toolkit for attackers
to easily perform complex signal manipulation attacks on the physical layer in
real time. We introduce the environment reconfiguration attack (ERA) as a novel
class of jamming attacks in wireless radio networks. Here, an adversary
leverages the IRS to rapidly vary the electromagnetic propagation environment
to disturb legitimate receivers. The IRS gives the adversary a key advantage
over traditional jamming: It no longer has to actively emit jamming signals,
instead, the IRS reflects existing legitimate signals. In addition, the
adversary does not need any knowledge about the legitimate channel. We
thoroughly investigate the ERA in wireless systems based on the widely employed
orthogonal frequency division multiplexing (OFDM) modulation. We present
insights into the attack through analysis, simulations, and
experiments. Our results show that the ERA allows an attacker to severely degrade the
available data rates even with reasonably small IRS sizes. Finally, we
implement an attacker setup and demonstrate a practical ERA to slow down an
entire Wi-Fi network.
|
Epsilon-near-zero and epsilon-near-pole materials enable reflective systems
supporting a class of symmetry-protected and accidental embedded eigenstates
(EE) characterized by a diverging phase-resonance. Here we show that pairs of
topologically protected scattering singularities necessarily emerge from EEs
when a non-Hermitian parameter is introduced, lifting the degeneracy between
oppositely charged singularities. The underlying topological charges are
characterized by an integer winding number and appear as phase vortices of the
complex reflection coefficient. By creating and annihilating them, we show that
these singularities obey charge conservation, and provide versatile control of
amplitude, phase and polarization in reflection, with potential applications
for polarization control and sensing.
|
This paper develops a novel control-theoretic framework to analyze the
non-asymptotic convergence of Q-learning. We show that the dynamics of
asynchronous Q-learning with a constant step-size can be naturally formulated
as a discrete-time stochastic affine switching system. Moreover, the evolution
of the Q-learning estimation error is over- and underestimated by trajectories
of two simpler dynamical systems. Based on these two systems, we derive a new
finite-time error bound of asynchronous Q-learning when a constant step-size is
used. Our analysis also sheds light on the overestimation phenomenon of
Q-learning. We further illustrate and validate the analysis through numerical
simulations.
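A toy sketch of the setting analyzed: asynchronous Q-learning with a constant step size on a small random MDP, tracking the max-norm estimation error that the control-theoretic bounds describe. The MDP, step size, and exploration policy are arbitrary illustrative choices, not the paper's.

```python
# Asynchronous Q-learning with constant step size on a random 2-state, 2-action MDP.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma, alpha = 2, 2, 0.9, 0.1
P = rng.dirichlet(np.ones(nS), size=(nS, nA))      # P[s, a] = next-state distribution
R = rng.random((nS, nA))

Q_star = np.zeros((nS, nA))                        # ground-truth Q* via value iteration
for _ in range(2000):
    Q_star = R + gamma * P @ Q_star.max(axis=1)

Q, s = np.zeros((nS, nA)), 0
for t in range(20000):                             # asynchronous: one (s, a) updated per step
    a = rng.integers(nA)                           # behavior policy: uniform exploration
    s_next = rng.choice(nS, p=P[s, a])
    td_target = R[s, a] + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    s = s_next
print("max-norm error:", np.abs(Q - Q_star).max())
```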
|
We study the shape reconstruction of an inclusion from faraway
measurements of the associated electric field. This is an inverse problem of
practical importance in biomedical imaging and is known to be notoriously
ill-posed. By incorporating Drude's model of the permittivity parameter, we
propose a novel reconstruction scheme by using the plasmon resonance with a
significantly enhanced resonant field. We conduct a delicate sensitivity
analysis to establish a sharp relationship between the sensitivity of the
reconstruction and the plasmon resonance. It is shown that when plasmon
resonance occurs, the sensitivity functional blows up and hence ensures a more
robust and effective reconstruction. Then we combine Tikhonov regularization
with the Laplace approximation to solve the inverse problem, which is an
organic hybridization of deterministic and stochastic methods and can
quickly calculate the minimizer while capturing the uncertainty of the solution.
We conduct extensive numerical experiments to illustrate the promising features
of the proposed reconstruction scheme.
|
Image deblurring is relevant in many fields of science and engineering. To
solve this problem, many different approaches have been proposed and among the
various methods, variational ones are extremely popular. These approaches are
characterized by substituting the original problem with a minimization one
where the functional is composed of two terms, a data fidelity term and a
regularization term. In this paper we propose, in the classical $\ell^2-\ell^1$
minimization with the non-negativity constraint of the solution, the use of the
graph Laplacian as regularization operator. Firstly, we describe how to
construct the graph Laplacian from the observed noisy and blurred image. Once
the graph Laplacian has been built, we efficiently solve the proposed
minimization problem by splitting the convolution operator and the graph Laplacian
via the alternating direction method of multipliers (ADMM). Selected
numerical examples show the good performance of the proposed algorithm.
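A sketch of one common way to build a graph Laplacian from an image, with pixels as nodes, edges between 4-neighbours, and weights decaying with intensity difference; the paper's exact edge and weight construction may differ, and the resulting L = D - W can then serve as the regularization operator.

```python
# Illustrative sparse graph Laplacian built from an observed image.
import numpy as np
import scipy.sparse as sp

def image_graph_laplacian(img, sigma=0.1):
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    for dr, dc in [(0, 1), (1, 0)]:                  # right and down neighbours
        a = idx[: h - dr, : w - dc].ravel()
        b = idx[dr:, dc:].ravel()
        diff = (img.ravel()[a] - img.ravel()[b]) ** 2
        wgt = np.exp(-diff / (2 * sigma ** 2))       # similar intensities -> strong edge
        rows += [a, b]
        cols += [b, a]
        vals += [wgt, wgt]
    W = sp.coo_matrix((np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
                      shape=(h * w, h * w)).tocsr()
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
    return D - W

img = np.random.default_rng(0).random((32, 32))
L = image_graph_laplacian(img)
print(L.shape, "row sums ~ 0:", np.allclose(np.asarray(L.sum(axis=1)).ravel(), 0))
```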
|
A toy statistical model which mimics localized-to-itinerant electron
transitions is introduced. We consider a system of two types of charge
carriers: (1) localized ones, and (2) itinerant ones. There is chemical
equilibrium between these two. The solution of the model shows that there is a
crossover, at a specific value of a local interaction energy parameter $J$, at
which carriers transfer from being itinerant to a localized/mixed-valence
state. The model bears a very crude analogy to the observed $\gamma\to\alpha$
transition in metallic cerium.
|
We study isogenies between K3 surfaces in positive characteristic. Our main
result is a characterization of K3 surfaces isogenous to a given K3 surface $X$
in terms of certain integral sublattices of the second rational $\ell$-adic and
crystalline cohomology groups of $X$. This is a positive characteristic analog
of a result of Huybrechts, and extends results of the second author. We give
applications to the reduction types of K3 surfaces and to the surjectivity of
the period morphism. To prove these results we describe a theory of B-fields
and Mukai lattices in positive characteristic, which may be of independent
interest. We also prove some results on lifting twisted Fourier--Mukai
equivalences to characteristic 0, generalizing results of Lieblich and Olsson.
|
Although often overlooked, identifiers in the source code of a software system play a vital
role in determining the quality of the system. Ambiguous and confusing
identifier names lead developers not only to misunderstand the behavior of the
code but also increase comprehension time, thereby causing a loss in
productivity. Even though correcting poor names through rename operations is a
viable option for solving this problem, renaming itself is an act of rework and
is not immune to defect injection.
In this study, we aim to understand the motivations that drive developers to
name and rename identifiers and the decisions they make in determining the
name. Using our results, we propose the development of a linguistic model that
determines identifier names based on the behavior of the identifier. As a
prerequisite to constructing the model, we conduct multiple studies to
determine the features that should feed into the model. In this paper, we
discuss findings from our completed studies and justify the continuation of
research on this topic through further studies.
|
Chirped dynamically assisted pair production in spatially inhomogeneous
electric fields is studied using the Dirac-Heisenberg-Wigner formalism. The
effects of the chirp parameter on the reduced momentum spectrum and the reduced
total yield of the created pairs are investigated in detail for either a low- or
high-frequency one-color field and for two-color dynamically assisted combined
fields. The enhancement factor is also obtained in the latter two-color
field case. It is found that for the low-frequency field, no matter whether it
is accompanied by the other high-frequency field, its chirping has little
effect on pair production. For the one-color high-frequency field and/or
two-color fields, the momentum spectrum exhibits incomplete interference and
the interference effect becomes more and more remarkable as chirp increases. We
also find that in the chirped dynamically assisted field, the reduced total
yield is enhanced significantly when the chirps act on both fields,
compared with the case where the chirp acts only on the low-frequency strong field.
Specifically, through chirping, the reduced pair number is found to increase
by more than one order of magnitude for a field with a relatively narrow spatial
scale, while it is enhanced by at least a factor of two for fields with
larger spatial scales or even in the quasi-homogeneous region. We also obtain
some optimal chirp parameters and spatial scales for the total yield and
enhancement factor in different scenarios of the studied external field. These
results may provide a theoretical basis for possible experiments in the future.
|
Cytoarchitecture describes the spatial organization of neuronal cells in the
brain, including their arrangement into layers and columns with respect to cell
density, orientation, or the presence of certain cell types. It makes it possible to
segregate the brain into cortical areas and subcortical nuclei, links structure with
connectivity and function, and provides a microstructural reference for human
brain atlases. Mapping boundaries between areas requires scanning histological
sections at microscopic resolution. While recent high-throughput scanners make it
possible to scan a complete human brain on the order of a year, it is practically
impossible to delineate regions at the same pace using the established gold
standard method. Researchers have recently addressed cytoarchitectonic mapping
of cortical regions with deep neural networks, relying on image patches from
individual 2D sections for classification. However, the 3D context, which is
needed to disambiguate complex or obliquely cut brain regions, is not taken
into account. In this work, we combine 2D histology with 3D topology by
reformulating the mapping task as a node classification problem on an
approximate 3D midsurface mesh through the isocortex. We extract deep features
from cortical patches in 2D histological sections which are descriptive of
cytoarchitecture, and assign them to the corresponding nodes on the 3D mesh to
construct a large attributed graph. By solving the brain mapping problem on
this graph using graph neural networks, we obtain significantly improved
classification results. The proposed framework lends itself nicely to
integration of additional neuroanatomical priors for mapping.
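A sketch of node classification on the attributed mesh graph using a two-layer GCN in PyTorch Geometric. The library, feature dimension, number of areas, and random graph are assumptions used only to illustrate the described formulation, not the paper's exact model.

```python
# Illustrative GNN node classification on an attributed surface-mesh graph.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

num_nodes, feat_dim, num_areas = 500, 128, 10
x = torch.randn(num_nodes, feat_dim)                    # deep features from 2D patches
edge_index = torch.randint(0, num_nodes, (2, 4000))     # mesh edges (random stand-in)
y = torch.randint(0, num_areas, (num_nodes,))           # cytoarchitectonic area labels
train_mask = torch.rand(num_nodes) < 0.3                # nodes with ground-truth labels
data = Data(x=x, edge_index=edge_index, y=y)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(feat_dim, 64)
        self.conv2 = GCNConv(64, num_areas)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = GCN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(model(data)[train_mask], y[train_mask])
    loss.backward()
    opt.step()
print("final training loss:", loss.item())
```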
|
In sharp contrast to its classical counterpart, quantum measurement plays a
fundamental role in quantum mechanics and blurs the essential distinction
between the measurement apparatus and the objects under investigation. An
appealing phenomenon in quantum measurements, termed the quantum Zeno effect,
can be observed in particular subspaces selected by the measurement Hamiltonian.
Here we apply the top-down Zeno mechanism to particle physics. We
develop an alternative insight into the properties of fundamental particles,
but do not intend to challenge the Standard Model (SM). In a unified and simple
manner, our effective model allows us to merge the origin of the neutrino's small mass
and oscillations, the hierarchy pattern for the masses of electrically charged
fermions, color confinement, and the discretization of quantum numbers,
using a perturbative theory for the dynamical quantum Zeno effect. Under
various conditions for vanishing transition amplitudes among particle
eigenstates in the effective model, we obtain results that are
somewhat reminiscent of the SM, including: (i) neutrino oscillations with big-angle
mixing and small masses emerge from the energy-momentum conservation, (ii)
electrically-charged fermions hold masses in a hierarchy pattern due to the
electric-charge conservation, (iii) color confinement and the associated
asymptotic freedom can be deduced from the color-charge conservation. We make
several anticipations about the basic properties for fundamental particles: (i)
the total mass of neutrinos and the existence of a nearly massless neutrino (of
any generation), (ii) the discretization of quantum numbers for
newly discovered electrically charged fermions, (iii) the confinement and the
associated asymptotic freedom for any particle containing more than two
conserved charges.
|
Purpose: Ureteroscopy is an efficient endoscopic minimally invasive technique
for the diagnosis and treatment of upper tract urothelial carcinoma (UTUC).
During ureteroscopy, the automatic segmentation of the hollow lumen is of
primary importance, since it indicates the path that the endoscope should
follow. In order to obtain an accurate segmentation of the hollow lumen, this
paper presents an automatic method based on Convolutional Neural Networks
(CNNs).
Methods: The proposed method is based on an ensemble of 4 parallel CNNs to
simultaneously process single and multi-frame information. Of these, two
architectures are taken as core-models, namely U-Net based in residual
blocks($m_1$) and Mask-RCNN($m_2$), which are fed with single still-frames
$I(t)$. The other two models ($M_1$, $M_2$) are modifications of the former
ones consisting on the addition of a stage which makes use of 3D Convolutions
to process temporal information. $M_1$, $M_2$ are fed with triplets of frames
($I(t-1)$, $I(t)$, $I(t+1)$) to produce the segmentation for $I(t)$.
Results: The proposed method was evaluated using a custom dataset of 11
videos (2,673 frames) which were collected and manually annotated from 6
patients. We obtain a Dice similarity coefficient of 0.80, outperforming
previous state-of-the-art methods.
Conclusion: The obtained results show that spatial-temporal information can
be effectively exploited by the ensemble model to improve hollow lumen
segmentation in ureteroscopic images. The method is also effective in the presence
of poor visibility, occasional bleeding, or specular reflections.
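A toy sketch of one way to fuse an ensemble of segmentation models, by averaging probability maps and evaluating with the Dice similarity coefficient; simple averaging is an assumption, and the paper's exact fusion of the four CNNs may differ.

```python
# Ensemble fusion by probability averaging, evaluated with the Dice coefficient.
import numpy as np

def dice(pred, gt, eps=1e-7):
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

rng = np.random.default_rng(0)
gt = np.zeros((128, 128), dtype=bool)
gt[40:90, 50:100] = True                               # toy ground-truth lumen
# Four models' probability maps: noisy versions of the ground truth.
probs = [np.clip(gt + 0.35 * rng.normal(size=gt.shape), 0, 1) for _ in range(4)]
fused = np.mean(probs, axis=0) > 0.5                   # average, then threshold
print("single-model Dice:", dice(probs[0] > 0.5, gt))
print("ensemble Dice    :", dice(fused, gt))
```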
|
The nearest stars provide a fundamental constraint for our understanding of
stellar physics and the Galaxy. The nearby sample serves as an anchor where all
objects can be seen and understood with precise data. This work is triggered by
the most recent data release of the astrometric space mission Gaia and uses its
unprecedented high precision parallax measurements to review the census of
objects within 10 pc. The first aim of this work was to compile all stars and
brown dwarfs within 10 pc observable by Gaia, and compare it with the Gaia
Catalogue of Nearby Stars as a quality assurance test. We complement the list
to get a full 10 pc census, including bright stars, brown dwarfs, and
exoplanets. We started our compilation from a query on all objects with a
parallax larger than 100 mas using SIMBAD. We completed the census by adding
companions, brown dwarfs with recent parallax measurements not in SIMBAD yet,
and vetted exoplanets. The compilation combines astrometry and photometry from
the recent Gaia Early Data Release 3 with literature magnitudes, spectral types
and line-of-sight velocities. We give a description of the astrophysical
content of the 10 pc sample. We find a multiplicity frequency of around 28%.
Among the stars and brown dwarfs, we estimate that around 61% are M stars and
more than half of the M stars are within the range M3.0 V to M5.0 V. We give an
overview of the brown dwarfs and exoplanets that should be detected in the next
Gaia data releases along with future developments. We provide a catalogue of
540 stars, brown dwarfs, and exoplanets in 339 systems, within 10 pc from the
Sun. This list is as volume-complete as possible from current knowledge and
provides benchmark stars that can be used, for instance, to define calibration
samples and to test the quality of the forthcoming Gaia releases. It also has a
strong outreach potential.
|
Relative weight analysis is a classic tool for detecting whether one variable
or interaction in a model is relevant. In this study, we focus on the
construction of relative weights for non-linear interactions using restricted
cubic splines. Our aim is to provide an accessible method to analyze a
multivariate model and identify one subset with the most representative set of
variables. Furthermore, we developed a procedure for treating control, fixed,
free and interaction terms simultaneously in the residual weight analysis. The
interactions are residualized properly against their main effects to maintain
their true effects in the model. We tested this method using two simulated
examples.
|
Thermal monopoles, identified after Abelian projection as magnetic currents
wrapping non-trivially around the thermal circle, are studied in $N_f = 2+1$
QCD at the physical point. The distribution in the number of wrappings, which
in pure gauge theories points to a condensation temperature coinciding with
deconfinement, points in this case to around 275 MeV, almost twice the QCD
crossover temperature $T_c$; similar indications emerge looking for the
formation of a percolating current cluster. The possible relation with other
non-perturbative phenomena observed above $T_c$ is discussed.
|
The one-phase and two-phase Muskat problems with arbitrary viscosity contrast
are studied in all dimensions. They are quasilinear parabolic equations for the
graph free boundary. We prove that small data in the scaling invariant
homogeneous Besov space $\dot B^1_{\infty, 1}$ lead to unique global solutions.
The proof exploits a new structure of the Dirichlet-Neumann operator which
allows us to implement a robust fixed-point argument. As a consequence of this
method, the initial data is only assumed to be in $\dot B^1_{\infty, 1}$ and
the solution map is Lipschitz continuous in the same topology. For the general
Muskat problem, the only known scaling invariant result was obtained in the
Wiener algebra (plus an $L^2$ assumption) which is strictly contained in $\dot
B^1_{\infty, 1}$.
|
Given a hypergraph with uncertain node weights following known probability
distributions, we study the problem of querying as few nodes as possible until
the identity of a node with minimum weight can be determined for each
hyperedge. Querying a node has a cost and reveals the precise weight of the
node, drawn from the given probability distribution. Using competitive
analysis, we compare the expected query cost of an algorithm with the expected
cost of an optimal query set for the given instance. For the general case, we
give a polynomial-time $f(\alpha)$-competitive algorithm, where $f(\alpha)\in
[1.618+\epsilon,2]$ depends on the approximation ratio $\alpha$ for an
underlying vertex cover problem. We also show that no algorithm using a similar
approach can be better than $1.5$-competitive. Furthermore, we give
polynomial-time $4/3$-competitive algorithms for bipartite graphs with
arbitrary query costs and for hypergraphs with a single hyperedge and uniform
query costs, with matching lower bounds.
|
During Multi-Agent Path Finding (MAPF) problems, agents can be delayed by
unexpected events. To address such situations, recent work describes k-Robust
Conflict-Based Search (k-CBS): an algorithm that produces coordinated and
collision-free plans that are robust for up to k delays. In this work we
introduce a variety of pairwise symmetry breaking constraints, specific to
k-robust planning, that can efficiently find compatible and optimal paths for
pairs of conflicting agents. We give a thorough description of the new
constraints and report large improvements in success rate in a range of domains
including: (i) classic MAPF benchmarks; (ii) automated warehouse domains; and
(iii) maps from the 2019 Flatland Challenge, a recently introduced railway
domain where k-robust planning can be fruitfully applied to schedule trains.
|
In the present work we demonstrate that C-doped Zr$_{5}$Pt$_{3}$ is an
electron-phonon superconductor (with critical temperature T$_\mathrm{C}$ =
3.7\,K) with a nonsymmorphic topological Dirac nodal-line semimetal state,
which we report here for the first time. The superconducting properties of
Zr$_{5}$Pt$_{3}$C$_{0.5}$ have been investigated by means of magnetization and
muon spin rotation and relaxation ($\mu$SR) measurements. We find that at low
temperatures the depolarization rate is almost constant and can be well
described by a single-band $s-$wave model with a superconducting gap of
$2\Delta(0)/k_\mathrm{B}T_\mathrm{C}$ = 3.84, close to the value of BCS theory.
From transverse field $\mu$SR analysis we estimate the London penetration depth
$\lambda_{L}$ = 469 nm, superconducting carrier density $n_{s}$ =
2$\times$10$^{26}$ $m^{-3}$, and effective mass m$^{*}$ = 1.584 $m_{e}$. Zero
field $\mu$SR confirms the absence of any spontaneous magnetic moment in the
superconducting ground state. To gain additional insights into the electronic
ground state of C-doped Zr$_5$Pt$_3$, we have also performed first-principles
calculations within the framework of density functional theory (DFT). The
observed homogeneous electronic character of the Fermi surface as well as the
mutual decrease of $T_\mathrm{C}$ and the density of states at the Fermi level are
consistent with the experimental findings. However, the band structure reveals
the presence of robust, gapless fourfold-degenerate nodal lines protected by
$6_{3}$ screw rotations and glide mirror planes. Therefore, Zr$_5$Pt$_3$
represents an unprecedented condensed matter system in which to investigate the
intricate interplay between superconductivity and topology.
|
After inflation the Universe presumably undergoes a phase of reheating which
in effect starts the thermal big bang cosmology. However, so far we have very
little direct experimental or observational evidence of this important phase of
the Universe. In this letter, we argue that measuring the spectrum of freely
propagating relativistic particles, i.e. dark radiation, produced during
reheating may provide us with powerful information on the reheating phase. To
demonstrate this possibility we consider a situation where the dark radiation
is produced in the decays of heavy, non-relativistic particles. We show that
the spectrum crucially depends on whether the heavy particle once dominated the
Universe or not. Characteristic features caused by the dependence on the number
of relativistic degrees of freedom may even make it possible to infer the
temperature at which the decay of the heavy particle occurred.
|
We prove some new results on reflected BSDEs and doubly reflected BSDEs
driven by a multi-dimensional RCLL martingale. The goal is to develop a general
multi-asset framework encompassing a wide spectrum of nonlinear financial
models, including as particular cases the setups studied by Peng and Xu
\cite{PX2009} and Dumitrescu et al. \cite{DGQS2018} who dealt with BSDEs driven
by a one-dimensional Brownian motion and a purely discontinuous martingale with
a single jump. Our results are not covered by existing literature on reflected
and doubly reflected BSDEs driven by a Brownian motion and a Poisson random
measure.
|
A quantum random access memory (qRAM) is considered an essential computing
unit to enable polynomial speedups in quantum information processing. Proposed
implementations include using neutral atoms and superconducting circuits to
construct a binary tree, but these systems still require demonstrations of the
elementary components. Here, we propose a photonic integrated circuit (PIC)
architecture integrated with solid-state memories as a viable platform for
constructing a qRAM. We also present an alternative scheme based on quantum
teleportation and extend it to the context of quantum networks. Both
implementations rely on already demonstrated components: electro-optic
modulators, a Mach-Zehnder interferometer (MZI) network, and nanocavities
coupled to artificial atoms for spin-based memory writing and retrieval. Our
approaches furthermore benefit from built-in error-detection based on photon
heralding. Detailed theoretical analysis of the qRAM efficiency and query
fidelity shows that our proposal presents viable near-term designs for a
general qRAM.
|
We derive new constraints on the spectrum of two-dimensional conformal field
theories with central charge $c>1.$ Employing the pillow representation of the
four point correlator of identical scalars with dimension
$\Delta_{\mathcal{O}}$ and positivity of the coefficients of its expansion in
the elliptic nome we place central charge dependent bounds on the dimension of
the first excited Virasoro primary the scalar couples to, in the form
$\Delta_1<f(c,\Delta_{\mathcal{O}}).$ We give an analytic expression for
$f(c,\Delta_{\mathcal{O}})$ and write down transcendental equations that
significantly improve the analytic bound. We numerically evaluate the stronger
bounds for arbitrary fixed values of $c$ and $\Delta_{\mathcal{O}}.$
|
A binary neutron star (BNS) merger can lead to various outcomes, from
indefinitely stable neutron stars, through supramassive (SMNS) or hypermassive
(HMNS) neutron stars supported only temporarily against gravity, to black holes
formed promptly after the merger. Up-to-date constraints on the BNS total mass
and the neutron star equation of state suggest that a long-lived SMNS may form
in $\sim 0.45-0.9$ of BNS mergers. A maximally rotating SMNS needs to lose
$\sim 3-6\times 10^{52}$ erg of its rotational energy before it collapses, on
a fraction of the spin-down timescale. SMNS formation leaves imprints on the
electromagnetic counterparts of the BNS merger. However, a comparison with
observations reveals tensions. First, the distribution of collapse times is too
wide and that of released energies too narrow (and the energy itself too large)
to explain the observed distributions of internal X-ray plateaus, invoked as
evidence for SMNS-powered energy injection. Secondly, the immense energy
injection into the blastwave should lead to extremely bright radio transients
which previous studies found to be inconsistent with deep radio observations of
short gamma-ray bursts. Furthermore, we show that upcoming all-sky radio
surveys will constrain the extracted energy distribution, independently of a
GRB jet formation. Our results can be self-consistently understood, provided
that most BNS merger remnants collapse shortly after formation (even if their
masses are low enough to allow for SMNS formation). This naturally occurs if
the remnant retains half or less of its initial energy by the time it enters
solid body rotation.
|
The derivation of nonlocal strong forms for many physical problems remains
cumbersome in traditional methods. In this paper, we apply the variational
principle/weighted residual method based on the nonlocal operator method for the
derivation of nonlocal forms for elasticity, thin plates, gradient elasticity,
electro-magneto-elasticity, and the phase field fracture method. The nonlocal
governing equations are expressed as integral form on support and dual-support.
The first example shows that the nonlocal elasticity has the same form as
dual-horizon non-ordinary state-based peridynamics. The derivation is simple
and general, and it can efficiently convert many local physical models into
their corresponding nonlocal forms. In addition, a criterion based on the
instability of the nonlocal gradient is proposed for fracture modelling in
linear elasticity. Several numerical examples are presented to validate the
nonlocal elasticity and the nonlocal thin plate formulations.
|
Let $K$ be an algebraically closed, complete, non-Archimedean valued field of
characteristic zero, and let $\mathscr{X}$ be a $K$-analytic space (in the
sense of Huber). In this work, we pursue a non-Archimedean characterization of
Campana's notion of specialness. We say $\mathscr{X}$ is $K$-analytically
special if there exists a connected, finite type algebraic group $G/K$, a dense
open subset $\mathscr{U}\subset G^{\text{an}}$ with
$\text{codim}(G^{\text{an}}\setminus \mathscr{U}) \geq 2$, and an analytic
morphism $\mathscr{U} \to \mathscr{X}$ which is Zariski dense.
With this definition, we prove several results which illustrate that this
definition correctly captures Campana's notion of specialness in the
non-Archimedean setting. These results inspire us to make non-Archimedean
counterparts to conjectures of Campana. As preparation for our proofs, we prove
auxiliary results concerning the indeterminacy locus of a meromorphic mapping
between $K$-analytic spaces, the notion of pseudo-$K$-analytically Brody
hyperbolic, and extensions of meromorphic maps from smooth, irreducible
$K$-analytic spaces to the analytification of a semi-abelian variety.
|
Herein, we have compared the performance of SVM and MLP in emotion
recognition using speech and song channels of the RAVDESS dataset. We have
undertaken a journey to extract various audio features and identify the optimal
scaling strategy and hyperparameters for our models. To increase the sample size, we
have performed audio data augmentation and addressed data imbalance using
SMOTE. Our data indicate that the optimised SVM outperforms the MLP with an accuracy of
82% compared to 75%. Following data augmentation, the performance of both
algorithms was identical at ~79%; however, overfitting was evident for the SVM.
Our final exploration indicated that the performance of SVM and MLP was
similar, with both resulting in lower accuracy for the speech channel
compared to the song channel. Our findings suggest that both SVM and MLP are
powerful classifiers for emotion recognition in a vocal-dependent manner.
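A sketch of the described pipeline with scikit-learn and imbalanced-learn: scaling, SMOTE oversampling of the training split, and an SVM vs. MLP comparison. The synthetic features and hyperparameter values are illustrative assumptions, not the paper's tuned settings.

```python
# Illustrative scale -> SMOTE -> SVM/MLP comparison on synthetic features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))                       # stand-in for extracted audio features
y = rng.choice(8, size=600, p=[0.3, 0.2, 0.1, 0.1, 0.1, 0.1, 0.05, 0.05])  # imbalanced emotions

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # balance the training set only

for name, clf in [("SVM", SVC(C=10, kernel="rbf")),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(128,), max_iter=500))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```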
|
Recent research on model interpretability in natural language processing
extensively uses feature scoring methods for identifying which parts of the
input are the most important for a model to make a prediction (i.e. explanation
or rationale). However, previous research has shown that there is no clear best
scoring method across various text classification tasks while practitioners
typically have to make several other ad-hoc choices regarding the length and
the type of the rationale (e.g. short or long, contiguous or not). Inspired by
this, we propose a simple yet effective and flexible method that allows
selecting optimally for each data instance: (1) a feature scoring method; (2)
the length; and (3) the type of the rationale. Our method is inspired by input
erasure approaches to interpretability which assume that the most faithful
rationale for a prediction should be the one with the highest difference
between the model's output distribution using the full text and the text after
removing the rationale as input respectively. Evaluation on four standard text
classification datasets shows that our proposed method provides more faithful,
comprehensive and highly sufficient explanations compared to using a fixed
feature scoring method, rationale length and type. More importantly, we
demonstrate that a practitioner is not required to make any ad-hoc choices in
order to extract faithful rationales using our approach.
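A toy sketch of the erasure-based selection criterion: among candidate rationales produced by different scorers, lengths, and types, keep the one that maximizes the difference between the model's prediction on the full text and on the text with the rationale removed. The stand-in model, the divergence measure, and the candidate sets are assumptions for illustration only.

```python
# Toy selection of the most faithful rationale via input erasure.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def toy_model(token_ids):
    """Stand-in classifier: logits depend on simple token statistics."""
    return softmax(np.array([np.sum(token_ids) % 7, len(token_ids) % 5], dtype=float))

def faithfulness(tokens, rationale_idx):
    full = toy_model(tokens)
    reduced = toy_model([t for i, t in enumerate(tokens) if i not in rationale_idx])
    return np.abs(full - reduced).sum()                 # total-variation-style difference

tokens = [12, 7, 3, 44, 9, 31, 5, 18]
candidates = {                                          # e.g. from different scorers/lengths/types
    "attention_top2_contiguous": {3, 4},
    "gradient_top3_noncontiguous": {0, 3, 6},
    "integrated_gradients_top1": {3},
}
best = max(candidates, key=lambda k: faithfulness(tokens, candidates[k]))
print("selected rationale:", best)
```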
|
Metamaterials and photonic/phononic crystals have been successfully developed
in recent years to achieve advanced wave manipulation and control, both in
electromagnetism and mechanics. However, the underlying concepts are yet to be
fully applied to the field of fluid dynamics and water waves. Here, we present
an example of the interaction of surface gravity waves with a mechanical
metamaterial, i.e. periodic underwater oscillating resonators. In particular,
we study a device composed by an array of periodic submerged harmonic
oscillators whose objective is to absorb wave energy and dissipate it inside
the fluid in the form of heat. The study is performed using a state of the art
direct numerical simulation of the Navier-Stokes equation in its
two-dimensional form with free boundary and moving bodies. We use a Volume of
Fluid interface technique for tracking the surface and an Immersed Boundary
method for the fluid-structure interaction. We first study the interaction of a
monochromatic wave with a single oscillator and then add up to four resonators
coupled only fluid-mechanically. We study the efficiency of the device in terms
of the total energy dissipation and find that by adding resonators, the
dissipation increases in a non-trivial way. As expected, a large energy
attenuation is achieved when the wave and resonators are characterised by
similar frequencies. As the number of resonators is increased, the range of
attenuated frequencies also increases. The concept and results presented herein
are of relevance for applications in coastal protection.
|
Regional data analysis is concerned with the analysis and modeling of
measurements that are spatially separated by specifically accounting for
typical features of such data. Namely, measurements in close proximity tend to
be more similar than ones further apart. This might also hold true for
cross-dependencies when multivariate spatial data are considered. Often,
scientists are interested in linear transformations of such data which are easy
to interpret and might be used for dimension reduction. Recently, for that
purpose spatial blind source separation (SBSS) was introduced which assumes
that the observed data are formed by a linear mixture of uncorrelated, weakly
stationary random fields. However, in practical applications, it is well-known
that when the spatial domain increases in size, the weak stationarity
assumptions can be violated in the sense that the second-order dependency
varies over the domain, which calls for a non-stationary analysis. In our work we
extend the SBSS model to adjust for these stationarity violations, present
three novel estimators and establish the identifiability and affine
equivariance property of the unmixing matrix functionals defining these
estimators. In an extensive simulation study, we investigate the performance of
our estimators and also show their use in the analysis of a geochemical dataset
which is derived from the GEMAS geochemical mapping project.
|
We analyze the electromagnetic field of a small bunch that moves uniformly in
a circular waveguide and traverses a boundary between an area filled with
cold magnetized electron plasma and a vacuum area. The magnetic field is
assumed to be strong but finite, so that the perturbation technique can be applied.
Two cases are studied in detail: the bunch flying out of the plasma into the
vacuum, and, inversely, the bunch flying into the plasma from the vacuum
area of the waveguide. The investigation of the waveguide mode components is
performed analytically with methods of the complex variable function theory.
The main peculiarities of the bunch radiation in such situations are revealed.
|
This paper proposes a control method for allowing aggregates of
thermostatically controlled loads to provide synthetic inertia and primary
frequency regulation services to the grid. The proposed control framework is
fully distributed and basically consists of modifying the thermostat
logic as a function of the grid frequency. Three strategies are considered: in
the first one, the load aggregate provides synthetic inertia by varying its
active power demand proportionally to the frequency rate of change; in the
second one, the load aggregate provides primary frequency regulation by varying
its power demand proportionally to frequency; in the third one, the two
services are combined. The performances of the proposed control solutions are
analyzed in the forecasted scenario of the electric power system of Sardinia in
2030, characterized by a large installed capacity of wind and photovoltaic generation
and no coal- or fuel-oil-fired power plants. The considered load aggregate is
composed of domestic refrigerators and water heaters. Results prove the
effectiveness of the proposed approach and show that, in the particular case of
refrigerators and water heaters, the contribution to the frequency regulation
is more significant in the case of positive frequency variations. Finally, the
correlation between the regulation performances and the level of penetration of
the load aggregate with respect to the system total load is evaluated.
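A toy sketch of the core idea behind the frequency-responsive thermostat logic for a single refrigerator: the switching deadband is shifted proportionally to the frequency deviation and its rate of change (combining the primary-regulation and synthetic-inertia strategies). All gains, thermal parameters, and the frequency trace are arbitrary illustrative values, not the paper's.

```python
# Frequency-responsive thermostat logic for one refrigerator (illustrative values).
import numpy as np

T_set, half_band = 4.0, 1.0        # degC set-point and half deadband width
k_f, k_rocof = 2.0, 1.0            # regulation gains (assumed)
a, b = 0.01, 0.3                   # thermal leakage and cooling rates per step (assumed)
T_amb, dt = 20.0, 1.0

def deadband(f, dfdt, f_nom=50.0):
    # Under-frequency raises the band, postponing cooling and reducing demand.
    shift = -k_f * (f - f_nom) - k_rocof * dfdt
    return T_set - half_band + shift, T_set + half_band + shift

T, on = 5.0, False
f_trace = 50.0 - 0.2 * (1 - np.exp(-np.arange(600) / 60.0))   # toy under-frequency event
power = []
for k, f in enumerate(f_trace):
    dfdt = (f - f_trace[k - 1]) / dt if k else 0.0
    low, high = deadband(f, dfdt)
    if T > high:
        on = True                  # start cooling
    elif T < low:
        on = False                 # stop cooling
    T += dt * (a * (T_amb - T) - (b if on else 0.0))
    power.append(1.0 if on else 0.0)
print("duty cycle during the event:", np.mean(power))
```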
|
In this paper we give some sufficient conditions for the nonnegativity of
immanants of square submatrices of Catalan-Stieltjes matrices and their
corresponding Hankel matrices. To obtain these sufficient conditions, we
construct new planar networks with a recursive nature for Catalan-Stieltjes
matrices. As applications, we provide a unified way to produce inequalities for
many combinatorial polynomials, such as the Eulerian polynomials, Schr\"{o}der
polynomials and Narayana polynomials.
|
In this paper, we propose a first-order distributed optimization algorithm
that is provably robust to Byzantine failures, i.e., arbitrary and potentially
adversarial behavior, where all the participating agents are prone to failure.
We model each agent's state over time as a two-state Markov chain that
indicates Byzantine or trustworthy behaviors at different time instants. We set
no restrictions on the maximum number of Byzantine agents at any given time. We
design our method based on three layers of defense: 1) Temporal gradient
averaging, 2) robust aggregation, and 3) gradient normalization. We study two
settings for stochastic optimization, namely Sample Average Approximation and
Stochastic Approximation, and prove that for strongly convex and smooth
non-convex cost functions, our algorithm achieves order-optimal statistical
error and convergence rates.
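A toy sketch of the three defense layers on a simple quadratic problem: temporal averaging of each agent's recent gradients, robust aggregation across agents, and gradient normalization. Coordinate-wise median is used as a stand-in aggregator, and the two-state Markov-chain failure model is simplified to i.i.d. per-round corruption; both are assumptions, not the paper's exact design.

```python
# Three-layer defense sketch: temporal averaging, robust aggregation, normalization.
import numpy as np

rng = np.random.default_rng(0)
dim, n_agents, window, lr = 5, 20, 5, 0.1
x_star = rng.normal(size=dim)                       # common minimizer of f(x) = ||x - x*||^2 / 2
x = np.zeros(dim)
history = [[] for _ in range(n_agents)]

for t in range(300):
    grads = []
    for i in range(n_agents):
        g = (x - x_star) + 0.1 * rng.normal(size=dim)        # honest stochastic gradient
        if rng.random() < 0.3:                               # agent i is Byzantine this round
            g = 100.0 * rng.normal(size=dim)                 # arbitrary adversarial vector
        history[i].append(g)
        grads.append(np.mean(history[i][-window:], axis=0))  # 1) temporal gradient averaging
    agg = np.median(np.stack(grads), axis=0)                 # 2) robust aggregation (stand-in)
    agg = agg / max(np.linalg.norm(agg), 1e-12)              # 3) gradient normalization
    x -= lr * agg
print("distance to optimum:", np.linalg.norm(x - x_star))
```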
|
Robotic systems for retail have gained a lot of attention due to the
labor-intensive nature of such business environments. Many tasks have the
potential to be automated via intelligent robotic systems that have
manipulation capabilities. For example, empty shelves can be replenished, stray
products can be picked up, or new items can be delivered. However, several
obstacles make the realization of this vision challenging. In particular,
robots are still too expensive and do not work out of the box. In this paper,
we discuss a work-in-progress approach for enabling power-on-and-go robots in
retail environments through a combination of active, physical sensors and
passive, artificial sensors. In particular, we use low-cost hardware sensors in
conjunction with machine learning techniques in order to generate high-quality
environmental information. More specifically, we present a setup in which a
standard monocular camera and Bluetooth Low Energy (BLE) sensors yield a
reliable robot system that can immediately be used after placing a couple of
sensors in the
environment. The camera information is used to synthesize accurate 3D point
clouds, whereas the BLE data is used to integrate the data into a complex map
of the environment. The combination of active and passive sensing enables
high-quality sensing capabilities at a fraction of the costs traditionally
associated with such tasks.
|
Financial speculators often seek to increase their potential gains with
leverage. Debt is a popular form of leverage, and with over 39.88B USD of total
value locked (TVL), the Decentralized Finance (DeFi) lending markets are
thriving. Debts, however, entail the risks of liquidation, the process of
selling the debt collateral at a discount to liquidators. Nevertheless, few
quantitative insights are known about the existing liquidation mechanisms.
In this paper, to the best of our knowledge, we are the first to study the
breadth of the borrowing and lending markets of the Ethereum DeFi ecosystem. We
focus on Aave, Compound, MakerDAO, and dYdX, which collectively represent over
85% of the lending market on Ethereum. Given extensive liquidation data
measurements and insights, we systematize the prevalent liquidation mechanisms
and are the first to provide a methodology to compare them objectively. We find
that the existing liquidation designs incentivize liquidators well but sell
excessive amounts of discounted collateral at the borrowers' expense. We
measure various risks that liquidation participants are exposed to and quantify
the instabilities of existing lending protocols. Moreover, we propose an
optimal strategy that allows liquidators to increase their liquidation profit,
which may aggravate the loss of borrowers.
|
In this paper, we investigate a key problem of Narrowband-Internet of Things
(NB-IoT) in the context of 5G with Mobile Edge Computing (MEC). We address the
challenge that IoT devices may have different priorities when demanding
bandwidth for data transmission in specific applications and services. Due to
the scarcity of bandwidth in an MEC enabled IoT network, our objective is to
optimize bandwidth allocation for a group of NB-IoT devices in a way that the
group can work collaboratively to maximize their overall utility. To this end,
we design an optimal distributed algorithm and use simulations to demonstrate
its effectiveness in managing various IoT data streams in a fully distributed
framework.
|
Federated Learning is a distributed machine learning approach which enables
model training without data sharing. In this paper, we propose a new federated
learning algorithm, Federated Averaging with Client-level Momentum (FedCM), to
tackle problems of partial participation and client heterogeneity in real-world
federated learning applications. FedCM aggregates global gradient information
in previous communication rounds and modifies client gradient descent with a
momentum-like term, which can effectively correct the bias and improve the
stability of local SGD. We provide theoretical analysis to highlight the
benefits of FedCM. We also perform extensive empirical studies and demonstrate
that FedCM achieves superior performance in various tasks and is robust to
different numbers of clients, participation rates, and levels of client
heterogeneity.
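A minimal sketch of a client-level momentum correction in the spirit of FedCM, based only on the description above: each local SGD step mixes the local gradient with the globally aggregated gradient direction. The mixing coefficient, step size, and exact update form are assumptions.

```python
import numpy as np

def client_local_steps(x, local_grad_fn, global_momentum, alpha=0.1,
                       lr=0.01, num_steps=5):
    """Local SGD where each step mixes the local gradient with the global
    momentum (gradient information aggregated from previous rounds).

    `alpha`, `lr`, and the linear mixing form are illustrative assumptions.
    """
    for _ in range(num_steps):
        g = local_grad_fn(x)
        d = alpha * g + (1.0 - alpha) * global_momentum  # momentum-like term
        x = x - lr * d
    return x

# toy usage: quadratic objective 0.5 * ||x - target||^2 on one client
target = np.array([1.0, -2.0])
grad_fn = lambda x: x - target
x0 = np.zeros(2)
server_momentum = np.array([0.3, 0.1])   # aggregated from earlier rounds
print(client_local_steps(x0, grad_fn, server_momentum))
```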
|
We derive braided $C^*$-tensor categories from gapped ground states on
two-dimensional quantum spin systems satisfying some additional condition which
we call the approximate Haag duality.
|
The active motion of phoretic colloids leads them to accumulate at boundaries
and interfaces. Such an excess accumulation, with respect to their passive
counterparts, makes the dynamics of phoretic colloids particularly sensitive to
the presence of boundaries and paves new routes to externally control their
single-particle as well as collective behavior. Here we review some recent
theoretical results about the dynamics of phoretic colloids close to and
adsorbed at fluid interfaces, in particular highlighting similarities and
differences with respect to solid-fluid interfaces.
|
We propose an effective two-stage approach to tackle the problem of
language-based Human-centric Spatio-Temporal Video Grounding (HC-STVG) task. In
the first stage, we propose an Augmented 2D Temporal Adjacent Network
(Augmented 2D-TAN) to temporally ground the target moment corresponding to the
given description. Specifically, we improve the original 2D-TAN in two aspects:
First, a temporal context-aware Bi-LSTM Aggregation Module is developed to
aggregate clip-level representations, replacing the original max-pooling.
Second, we propose to employ a Random Concatenation Augmentation (RCA)
mechanism during the training phase. In the second stage, we use a pretrained
MDETR model
to generate per-frame bounding boxes via language query, and design a set of
hand-crafted rules to select the best matching bounding box outputted by MDETR
for each frame within the grounded moment.
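A small PyTorch-style sketch of replacing max-pooling over clip-level features with a bidirectional LSTM aggregator, as described for the first stage; the hidden size, mean pooling of the LSTM outputs, and the projection layer are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMAggregator(nn.Module):
    """Aggregate a sequence of clip-level features into one moment feature."""

    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, feat_dim)

    def forward(self, clip_feats):           # (batch, num_clips, feat_dim)
        ctx, _ = self.lstm(clip_feats)       # temporal context for every clip
        pooled = ctx.mean(dim=1)             # replaces the original max-pooling
        return self.proj(pooled)             # back to the original feature size

feats = torch.randn(2, 16, 512)              # 2 videos, 16 clips each
print(BiLSTMAggregator()(feats).shape)       # torch.Size([2, 512])
```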
|
In model selection, several types of cross-validation are commonly used and
many variants have been introduced. While consistency of some of these methods
has been proven, their rate of convergence to the oracle is generally still
unknown. Until now, an asymptotic analysis of cross-validation able to answer
this question has been lacking. Existing results focus on the ''pointwise''
estimation of the risk of a single estimator, whereas analysing model selection
requires understanding how the CV risk varies with the model. In this article,
we investigate the asymptotics of the CV risk in the neighbourhood of the
optimal model, for trigonometric series estimators in density estimation.
Asymptotically, simple validation and ''incomplete'' $V$-fold CV behave like
the sum of a convex function $f_n$ and a symmetrized Brownian motion changed in
time, $W_{g_n/V}$. We argue that this is the right asymptotic framework for
studying model
selection.
|
First and foremost, we show that a 4-dimensional conformally flat generalized
Ricci recurrent spacetime $(GR)_4$ is an Einstein manifold. We examine such a
spacetime as a solution of $f(R, G)$-gravity theory and it is shown that the
additional terms from the modification of the gravitational sector can be
expressed as a perfect fluid. Several energy conditions are investigated with
$f(R, G) = R +\sqrt{G}$ and $f(R, G) = R^2 + G\ln G$. For both models, the weak,
null, and dominant energy conditions are satisfied while the strong energy
condition is violated, which is in good agreement with recent observational
studies revealing that the current universe is in an accelerating phase.
|
Ice growth from liquid phase has been extensively investigated in various
conditions, especially for ice freely grown in undercooled water and aqueous
solutions. Although unidirectional ice growth plays a significant role in sea
ice and freeze casting, the detailed pattern formation of unidirectionally
grown ice in an aqueous solution remains elusive. For the first time, we prove
in situ a crossover from lamellar to spongy ice morphologies of a single
ice crystal via unidirectional freezing of an aqueous solution. The spongy ice
morphology originates from the intersection of tilted lamellar ice and is observed
in a single ice crystal, which is intrinsically different from the competitive
growth of a bi-crystal composed of two differently oriented grains in
directional solidification. These results provide a complete physical picture
of unidirectionally grown ice from aqueous solution and are believed to promote
our understanding of the various patterns of ice in many relevant domains where
pattern formation of ice crystals is vital.
|
The letter provides a geometrical interpretation of frequency in electric
circuits. According to this interpretation, the frequency is defined as a
multivector with symmetric and antisymmetric components. The conventional
definition of frequency is shown to be a special case of the proposed
theoretical framework. Several examples serve to show the features, generality,
and practical aspects of the proposed approach.
|
Fix a Calabi-Yau 3-fold $X$ satisfying the Bogomolov-Gieseker conjecture of
Bayer-Macr\`i-Toda, such as the quintic 3-fold. We express Joyce's generalised
DT invariants counting Gieseker semistable sheaves of any rank $r\ge1$ on $X$
in terms of those counting sheaves of rank 0 and pure dimension 2.
The basic technique is to reduce the ranks of sheaves by replacing them by
the cokernels of their Mochizuki/Joyce-Song pairs and then use wall crossing to
handle their stability.
|
Analyzing performance within asynchronous many-task-based runtime systems is
challenging because millions of tasks are launched concurrently. Especially for
long-term runs, the amount of data collected becomes overwhelming. We study
HPX and its performance-counter framework, together with APEX, to collect
performance data and energy consumption. We added HPX application-specific
performance counters to
the Octo-Tiger full 3D AMR astrophysics application. This enables the combined
visualization of physical and performance data to highlight bottlenecks with
respect to different solvers. We examine the overhead introduced by these
measurements, which is around 1%, with respect to the overall application
runtime. We perform a convergence study for four different levels of refinement
and analyze the application's performance with respect to adaptive grid
refinement. The measurements' overheads are small, enabling the combined use of
performance data and physical properties with the goal of improving the code's
performance. All of these measurements were obtained on NERSC's Cori, Louisiana
Optical Network Infrastructure's QueenBee2, and Indiana University's Big Red 3.
|
A parallelized three-dimensional (3D) boundary element method is used to
simulate the interaction between an incoming solitary wave and a 3D submerged
horizontal plate under the assumption of potential flow. The numerical setup
follows closely the setup of laboratory experiments recently performed at
Shanghai Jiao Tong University. The numerical results are compared with the
experimental results. An overall good agreement is found for the
two-dimensional wave elevation, the horizontal force and the vertical force
exerted on the plate, and the pitching moment. Even though there are some
discrepancies, the comparison shows that a model solving the fully nonlinear
potential flow equations with a free surface using a 3D boundary element method
can satisfactorily capture the main features of the interaction between
nonlinear waves and a submerged horizontal plate.
|
It is an important yet challenging setting to continually learn new tasks
from a few examples. Although numerous efforts have been devoted to either
continual learning or few-shot learning, little work has considered this new
setting of few-shot continual learning (FSCL), which needs to minimize the
catastrophic forgetting to the old tasks and gradually improve the ability of
few-shot generalization. In this paper, we provide a first systematic study on
FSCL and present an effective solution with deep neural networks. Our solution
is based on the observation that continual learning of a task sequence
inevitably interferes with few-shot generalization, which makes it highly nontrivial
to extend few-shot learning strategies to continual learning scenarios. We draw
inspiration from the robust brain system and develop a method that (1)
interdependently updates a pair of fast / slow weights for continual learning
and few-shot learning to disentangle their divergent objectives, inspired by
the biological model of meta-plasticity and fast / slow synapse; and (2)
applies a brain-inspired two-step consolidation strategy to learn a task
sequence without forgetting in the fast weights while improving generalization
without overfitting in the slow weights. Extensive results on various
benchmarks show that our method achieves a better performance than joint
training of all the tasks ever seen. The ability of few-shot generalization is
also substantially improved from incoming tasks and examples.
|
Computer voice is experiencing a renaissance through the growing popularity
of voice-based interfaces, agents, and environments. Yet, how to measure the
user experience (UX) of voice-based systems remains an open and urgent
question, especially given that their form factors and interaction styles tend
to be non-visual, intangible, and often considered disembodied or "body-less."
As a first step, we surveyed the ACM and IEEE literature to determine which
quantitative measures and measurements have been deemed important for voice UX.
Our findings show that there is little consensus, even with similar situations
and systems, as well as an overreliance on lab work and unvalidated scales. In
response, we offer two high-level descriptive frameworks for guiding future
research, developing standardized instruments, and informing ongoing review
work. Our work highlights the current strengths and weaknesses of voice UX
research and charts a path towards measuring voice UX in a more comprehensive
way.
|
Experimental measurements on commercial adaptive cruise control (ACC)
vehicles are becoming increasingly available from around the world,
providing an unprecedented opportunity to study the traffic flow
characteristics that arise from this technology. This paper adds new
experimental evidence to this knowledge base and presents a comprehensive
empirical study on the ACC equilibrium behaviors via the resulting fundamental
diagrams. We find that like human-driven vehicles, ACC systems display a linear
equilibrium spacing-speed relationship (within the range of available data) but
the key parameters of these relationships can differ significantly from
human-driven traffic depending on input settings: At the minimum headway
setting, equilibrium capacities in excess of 3500 vehicles per hour are
observed, together with an extremely fast equilibrium wave speed of 100
kilometers per hour on average. These fast waves are unfamiliar to human
drivers, and may pose a safety risk. The results also suggest that ACC jam
spacing can be much larger than in human traffic, which reduces the network
storage capacity.
|
In this paper we derive in QCD the BFKL linear, inhomogeneous equation for
the factorial moments of the multiplicity distribution ($M_k$) from the LMM
equation. In particular, the equation for the average multiplicity of the
color-singlet dipoles ($N$) turns out to be the homogeneous BFKL equation,
while $M_k \propto N^k$ at small $x$. Second, using the diffusion approximation
for the BFKL kernel, we show that the factorial moments are equal to
$M_k=k!\,N(N-1)^{k-1}$, which leads to the multiplicity distribution
$\frac{\sigma_n}{\sigma_{in}}=\frac{1}{N}\left(\frac{N-1}{N}\right)^{n-1}$. We
also suggest a procedure for finding corrections to this multiplicity
distribution, which will be useful for describing the experimental data.
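The two closed-form results quoted above are mutually consistent: the stated distribution is geometric in $n$, and its factorial moments reproduce $M_k=k!\,N(N-1)^{k-1}$. A short numerical check (illustrative, not part of the paper):

```python
import math
import numpy as np

N = 7.0                                     # average dipole multiplicity (illustrative)
n = np.arange(1, 4000)                      # truncate the infinite sum over n
p = (1.0 / N) * ((N - 1.0) / N) ** (n - 1)  # sigma_n / sigma_in

for k in range(1, 6):
    falling = np.ones_like(n, dtype=float)  # n (n-1) ... (n-k+1)
    for j in range(k):
        falling = falling * (n - j)
    numeric = np.sum(falling * p)           # factorial moment M_k from the distribution
    closed = math.factorial(k) * N * (N - 1.0) ** (k - 1)
    print(k, numeric, closed)               # both columns agree to high precision
```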
|
In this work we formalize the (pure observational) task of predicting node
attribute evolution in temporal graphs. We show that node representations of
temporal graphs can be cast into two distinct frameworks: (a) The de-facto
standard approach, which we denote {\em time-and-graph}, where equivariant
graph (e.g., GNN) and sequence (e.g., RNN) representations are intertwined to
represent the temporal evolution of the graph; and (b) an approach that we
denote {\em time-then-graph}, where the sequences describing the node and edge
dynamics are represented first (e.g., RNN), then fed as node and edge
attributes into a (static) equivariant graph representation that comes after
(e.g., GNN). In real-world datasets, we show that our {\em time-then-graph}
framework achieves the same prediction performance as state-of-the-art {\em
time-and-graph} methods. Interestingly, {\em time-then-graph} representations
have an expressiveness advantage over {\em time-and-graph} representations when
both use component GNNs that are not most-expressive (e.g., 1-Weisfeiler-Lehman
GNNs). We introduce a task where this expressiveness advantage allows {\em
time-then-graph} methods to succeed while state-of-the-art {\em time-and-graph}
methods fail.
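A compact sketch of the time-then-graph recipe described above: each node's attribute sequence is summarised by an RNN first, and the resulting embeddings are then passed through a single static graph layer. The GRU choice, the plain mean-aggregation layer, and all dimensions are assumptions for illustration; edge-attribute sequences are omitted.

```python
import torch
import torch.nn as nn

class TimeThenGraph(nn.Module):
    def __init__(self, in_dim=8, hid_dim=32, out_dim=4):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hid_dim, batch_first=True)
        self.lin = nn.Linear(hid_dim, out_dim)

    def forward(self, node_seqs, adj):
        # node_seqs: (num_nodes, T, in_dim) -- temporal attributes per node
        # adj:       (num_nodes, num_nodes) -- static 0/1 adjacency
        _, h = self.rnn(node_seqs)            # summarise each node's history
        h = h.squeeze(0)                      # (num_nodes, hid_dim)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = adj @ h / deg                     # one mean-aggregation graph layer
        return self.lin(h)                    # per-node predictions

num_nodes, T = 5, 10
x = torch.randn(num_nodes, T, 8)
a = (torch.rand(num_nodes, num_nodes) > 0.5).float()
print(TimeThenGraph()(x, a).shape)            # torch.Size([5, 4])
```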
|
We study a family of Siegel modular forms that are constructed using Jacobi
forms that arise in Umbral moonshine. All but one of them arise as the
Weyl-Kac-Borcherds denominator formula of some Borcherds-Kac-Moody (BKM) Lie
superalgebras. These Lie superalgebras have a $\widehat{sl(2)}$ subalgebra
which we use to study the Siegel modular forms. We show that the expansion of
the Umbral Jacobi forms in terms of $\widehat{sl(2)}$ characters leads to
vector-valued modular forms. We obtain closed formulae for these vector-valued
modular forms. In the Lie algebraic context, the Fourier coefficients of these
vector-valued modular forms are related to multiplicities of roots appearing on
the sum side of the Weyl-Kac-Borcherds denominator formulae.
|
The internet is filled with fake face images and videos synthesized by deep
generative models. These realistic DeepFakes pose a challenge to determine the
authenticity of multimedia content. As countermeasures, artifact-based
detection methods suffer from insufficiently fine-grained features that lead to
limited detection performance. DNN-based detection methods are not efficient
enough, given that a DeepFake can be created easily by mobile apps and
DNN-based models require high computational resources. For the first time, we
show that DeepFake faces have fewer feature points than real ones, especially
in certain facial regions. Inspired by feature point detector-descriptors to
extract discriminative features at the pixel level, we propose the Fused Facial
Region_Feature Descriptor (FFR_FD) for effective and fast DeepFake detection.
FFR_FD is only a vector extracted from the face, and it can be constructed from
any feature point detector-descriptors. We train a random forest classifier
with FFR_FD and conduct extensive experiments on six large-scale DeepFake
datasets, whose results demonstrate that our method is superior to most
state-of-the-art DNN-based models.
|
With the standard model working well in describing the collider data, the
focus is now on determining the standard model parameters as well as on
searching for any hint of deviation. In particular, the determination of the couplings of the
Higgs boson with itself and with other particles of the model is important to
better understand the electroweak symmetry breaking sector of the model. In
this letter, we look at the process $pp \to WWH$, in particular through the
fusion of bottom quarks. Due to the non-negligible coupling of the Higgs boson
with the bottom quarks, there is a dependence on the $WWHH$ coupling in this
process. This sub-process receives its largest contribution when the $W$ bosons are
longitudinally polarized. We compute one-loop QCD corrections to various final
states with polarized $W$ bosons. We find that the corrections to the final
state with the longitudinally polarized $W$ bosons are large. It is shown that
the measurement of the polarization of the $W$ bosons can be used as a tool to
probe the $WWHH$ coupling in this process. We also examine the effect of
varying $WWHH$ coupling in the $\kappa$-framework.
|
When a solution to an abstract inverse linear problem on Hilbert space is
approximable by finite linear combinations of vectors from the cyclic subspace
associated with the datum and with the linear operator of the problem, the
solution is said to be a Krylov solution, i.e., it belongs to the Krylov
subspace of the problem. Krylov solvability of the inverse problem allows for
solution approximations that, in applications, correspond to the very efficient
and popular Krylov subspace methods. We study here the possible behaviours of
persistence, gain, or loss of Krylov solvability under suitable small
perturbations of the inverse problem -- the underlying motivations being the
stability or instability of Krylov methods under small noise or uncertainties,
as well as the possibility to decide a priori whether an inverse problem is
Krylov solvable by investigating a potentially easier, perturbed problem. We
present a whole scenario of occurrences in the first part of the work. In the
second, we exploit the weak gap metric induced, in the sense of Hausdorff
distance, by the Hilbert weak topology, in order to conveniently monitor the
distance between perturbed and unperturbed Krylov subspaces.
|
In several domains of physics, including first principle simulations and
classical models for polarizable systems, the minimization of an energy
function with respect to a set of auxiliary variables must be performed to
define the dynamics of physical degrees of freedom. In this paper, we discuss a
recent algorithm proposed to efficiently and rigorously simulate this type of
systems: the Mass-Zero (MaZe) Constrained Dynamics. In MaZe the minimum
condition is imposed as a constraint on the auxiliary variables treated as
degrees of freedom of zero inertia driven by the physical system. The method is
formulated in the Lagrangian framework, enabling the properties of the approach
to emerge naturally from a fully consistent dynamical and statistical
viewpoint. We begin by presenting MaZe for typical minimization problems where
the imposed constraints are holonomic and summarizing its key formal
properties, notably the exact Born-Oppenheimer dynamics followed by the
physical variables and the exact sampling of the corresponding physical
probability density. We then generalize the approach to the case of conditions
on the auxiliary variables that linearly involve their velocities. Such
conditions occur, for example, when describing systems in an external magnetic
field, and they require adapting MaZe to integrate semiholonomic constraints.
The new development is presented in the second part of this paper and
illustrated via a proof-of-principle calculation of the charge transport
properties of a simple classical polarizable model of NaCl.
|
A quadratic dynamical system with practical applications is considered. This
system is transformed into a new bilinear system with Hadamard
products by means of the implicit matrix structure. The corresponding quadratic
bilinear equation is subsequently established via the Volterra series. Under
proper conditions the existence of the solution to the equation is proved by
using a fixed-point iteration.
|
This paper studies the informativity problem for reachability and
null-controllability of constrained systems. To be precise, we will focus on an
unknown linear system with convex conic constraints, from which we measure data
consisting of exact state trajectories of finite length. We are interested in
performing system analysis of such an unknown system on the basis of the
measured data. However, from such measurements it is only possible to obtain a
unique system explaining the data in very restrictive cases. This means that we
cannot approach this problem using system identification combined with
model-based analysis. As such, we will formulate conditions on the data under which
any such system consistent with the measurements is guaranteed to be reachable
or null-controllable. These conditions are stated in terms of spectral
conditions and subspace inclusions, and therefore they are easy to verify.
|
Generative adversarial networks (GANs) have been widely used in various
applications. Arguably, GANs are highly complex, and little is known about
their generalization. In this paper, we present a comprehensive analysis of the
generalization of GANs. We decompose the generalization error into an explicit
composition: generator error + discriminator error + optimization error. The
first two errors reflect the capacity of the players' families and are
irreducible and optimizer-independent. We then provide both uniform and
non-uniform generalization bounds in different scenarios, thanks to our new
bridge between Lipschitz continuity and generalization. Our bounds overcome
some major limitations of existing ones. In particular, our bounds show that
penalizing the zero- and first-order information of the GAN loss will improve
generalization, answering the long-standing mystery of why imposing a Lipschitz
constraint can help GANs perform better in practice. Finally, we show why data
augmentation penalizes the zero- and first-order information of the loss,
helping the players generalize better, and hence explaining the highly
successful use of data augmentation for GANs.
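One plausible reading of penalizing the zero- and first-order information of the loss is to regularise both the magnitude of the discriminator outputs and their gradient norm with respect to the inputs. The sketch below illustrates that reading on a discriminator loss; the penalty forms and weights are assumptions, not the paper's prescribed objective.

```python
import torch

def penalized_d_loss(discriminator, real, fake, lambda0=0.1, lambda1=1.0):
    """Non-saturating discriminator loss plus zero-order (output magnitude)
    and first-order (input-gradient norm) penalties; the penalty forms and
    weights are illustrative assumptions."""
    real = real.detach().clone().requires_grad_(True)
    d_real, d_fake = discriminator(real), discriminator(fake)
    base = torch.nn.functional.softplus(-d_real).mean() + \
           torch.nn.functional.softplus(d_fake).mean()
    zero_order = (d_real ** 2).mean() + (d_fake ** 2).mean()
    grads = torch.autograd.grad(d_real.sum(), real, create_graph=True)[0]
    first_order = grads.flatten(1).norm(dim=1).pow(2).mean()
    return base + lambda0 * zero_order + lambda1 * first_order

# toy usage with a linear discriminator on flattened 28x28 images
D = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 1))
loss = penalized_d_loss(D, torch.randn(4, 1, 28, 28), torch.randn(4, 1, 28, 28))
print(loss.item())
```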
|
Nigam et al. reported a genetic algorithm (GA) utilizing the SELFIES
representation and also proposed an adaptive, neural network-based penalty that
is supposed to improve the diversity of the generated molecules. The main
claims of the paper are that this GA outperforms other generative techniques
(as measured by the penalized logP) and that a neural network-based adaptive
penalty increases the diversity of the generated molecules. In this work, we
investigated the reproducibility of their claims. Overall, we were able to
reproduce comparable results using the SELFIES-based GA, but mostly by
exploiting deficiencies of the (easily optimizable) fitness function (i.e.,
generating long, sulfur-containing chains). In addition, we reproduce results
showing that the discriminator can be used to bias the generation of molecules
to ones that are similar to the reference set. Lastly, we attempted to quantify
the evolution of the diversity, understand the influence of some
hyperparameters, and propose improvements to the adaptive penalty.
|
Most of the state-of-the-art indirect visual SLAM methods are based on the
sparse point features. However, it is hard to find enough reliable point
features for state estimation in the case of low-textured scenes. Line features
are abundant in urban and indoor scenes. Recent studies have shown that the
combination of point and line features can provide better accuracy despite the
decrease in computational efficiency. In this paper, measurements of point and
line features are extracted from RGB-D data to create map features, and points
on a line are treated as keypoints. We propose an extended approach to make
more use of line observation information. We further prove that, in the local
bundle adjustment, the estimation uncertainty of keyframe poses can be reduced
when considering more landmarks with independent measurements in the
optimization process. Experimental results on two public RGB-D datasets
demonstrate that the proposed method has better robustness and accuracy in
challenging environments.
|
Background. Digital pathology has aroused widespread interest in modern
pathology. The key of digitalization is to scan the whole slide image (WSI) at
high magnification. The larger the magnification, the richer the details the
WSI provides, but the longer the scanning time and the larger the resulting
file size. Methods. We design a strategy to scan slides at low resolution (5X),
and a super-resolution method is proposed to restore the image details when
needed for
diagnosis. The method is based on a multi-scale generative adversarial network,
which sequentially generates three high-resolution images such as 10X, 20X and
40X. Results. The peak signal-to-noise ratios of the generated 10X, 20X and 40X
images are 24.16, 22.27 and 20.44, and the structural similarity indices are
0.845, 0.680 and 0.512, which are better than those of other super-resolution
networks. The average visual scores (± standard deviation) from three
pathologists are 3.63 ± 0.52, 3.70 ± 0.57 and 3.74 ± 0.56, and the p value
of analysis of variance is 0.37, indicating that generated images include
sufficient information for diagnosis. The average value of the Kappa test is 0.99,
meaning the diagnosis of generated images is highly consistent with that of the
real images. Conclusion. This proposed method can generate high-quality 10X,
20X, 40X images from 5X images at the same time, in which the time and storage
costs of digitalization can be effectively reduced up to 1/64 of the previous
costs. The proposed method provides a better alternative for low-cost storage
and faster image sharing in digital pathology. Keywords. Digital pathology;
Super-resolution; Low resolution scanning; Low cost
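For reference, the two image-quality metrics reported above can be computed directly with scikit-image (version 0.19 or later for the channel_axis argument); the random arrays below merely stand in for a generated patch and the corresponding real scan.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# `generated` and `reference` would be the synthesized 10X/20X/40X patch and
# the real scan at the same magnification (uint8 RGB arrays here).
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
generated = np.clip(reference.astype(int) + rng.integers(-10, 10, reference.shape),
                    0, 255).astype(np.uint8)

psnr = peak_signal_noise_ratio(reference, generated, data_range=255)
ssim = structural_similarity(reference, generated, channel_axis=-1, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```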
|
In this work, we are interested in the transient dynamics of a fluid
configuration consisting of three fixed cylinders, whose axes are arranged on
an equilateral triangle, in transverse flow (the "fluidic pinball"). As the Reynolds
number is increased on the route to chaos, its transient dynamics tell us about
the contribution of the elementary degrees of freedom of the system to the lift
and drag coefficients.
|
We consider the stochastically forced Burgers equation with an emphasis on
spatially rough driving noise. We show that the law of the process at a fixed
time $t$, conditioned on no explosions, is absolutely continuous with respect
to the stochastic heat equation obtained by removing the nonlinearity from the
equation. This establishes a form of ellipticity in this infinite dimensional
setting. The results follow from a recasting of the Girsanov Theorem to handle
less spatially regular solutions while only proving absolute continuity at a
fixed time and not on path-space. The results are proven by decomposing the
solution into the sum of auxiliary processes which are then shown to be
absolutely continuous in law to a stochastic heat equation. The number of
levels in this decomposition diverges to infinity as we move to the
stochastically forced Burgers equation associated to the KPZ equation, which we
conjecture is just beyond the validity of our results (and certainly the
current proof). The analysis provides insights into the structure of the
solution as we approach the regularity of KPZ. A number of techniques from
singular SPDEs are employed as we are beyond the regime of classical solutions
for much of the paper.
|
A CFD-driven deterministic symbolic identification algorithm for learning
explicit algebraic Reynolds-stress models (EARSM) from high-fidelity data is
developed building on the frozen-training SpaRTA algorithm of [1].
Corrections for the Reynolds stress tensor and the production of transported
turbulent quantities of a baseline linear eddy viscosity model (LEVM) are
expressed as functions of tensor polynomials selected from a library of
candidate functions. The CFD-driven training consists in solving a blackbox
optimization problem in which the fitness of candidate EARSM models is
evaluated by running RANS simulations. Unlike the frozen-training approach, the
proposed methodology is not restricted to data sets for which full fields of
high-fidelity data are available. However, the solution of a high-dimensional
expensive blackbox function optimization problem is required. Several steps are
then undertaken to reduce the associated computational burden. First, a
sensitivity analysis is used to identify the most influential terms and to
reduce the dimensionality of the search space. Afterwards, the Constrained
Optimization using Response Surface (CORS) algorithm, which approximates the
black-box cost function using a response surface constructed from a limited
number of CFD solves, is used to find the optimal model parameters. Model
discovery and cross-validation are performed for three configurations of 2D
turbulent separated flows in channels of variable section using different sets
of training data to show the flexibility of the method. The discovered models
are then applied to the prediction of an unseen 2D separated flow with higher
Reynolds number and different geometry. The predictions for the new case are
shown to be not only more accurate than those of the baseline LEVM, but also
than those of a multi-purpose EARSM model derived from purely physical arguments.
|
We provide a rigorous derivation of the precise late-time asymptotics for
solutions to the scalar wave equation on subextremal Kerr backgrounds,
including the asymptotics for projections to angular frequencies $\ell\geq 1$
and $\ell\geq 2$. The $\ell$-dependent asymptotics on Kerr spacetimes differ
significantly from the non-rotating Schwarzschild setting ("Price's law"). The
main differences with Schwarzschild are slower decay rates for higher angular
frequencies and oscillations along the null generators of the event horizon. We
introduce a physical space-based method that resolves the following two main
difficulties for establishing $\ell$-dependent asymptotics in the Kerr setting:
1) the coupling of angular modes and 2) a loss of ellipticity in the
ergoregion. Our mechanism identifies and exploits the existence of conserved
charges along null infinity via a time invertibility theory, which in turn
relies on new elliptic estimates in the full black hole exterior. This
framework is suitable for resolving the conflicting numerology in Kerr
late-time asymptotics that appears in the numerics literature.
|
We provide a universal characterization of the construction taking a scheme
$X$ to its stable $\infty$-category $\text{Mot}(X)$ of noncommutative motives,
patterned after the universal characterization of algebraic K-theory due to
Blumberg--Gepner--Tabuada. As a consequence, we obtain a corepresentability
theorem for secondary K-theory. We envision this as a fundamental tool for the
construction of trace maps from secondary K-theory.
Towards these main goals, we introduce a preliminary formalism of "stable
$(\infty, 2)$-categories"; notable examples of these include (quasicoherent or
constructible) sheaves of stable $\infty$-categories. We also develop the
rudiments of a theory of presentable enriched $\infty$-categories -- and in
particular, a theory of presentable $(\infty, n)$-categories -- which may be of
independent interest.
|
In this paper, we scrutinize the effectiveness of various clustering
techniques, investigating their applicability in Cultural Heritage monitoring
applications. In the context of this paper, we detect the level of
decomposition and corrosion on the walls of Saint Nicholas fort in Rhodes
utilizing hyperspectral images. A total of 6 different clustering approaches
have been evaluated over a set of 14 different orthorectified hyperspectral
images. The experimental setup in this study involves K-means, Spectral, Meanshift,
DBSCAN, Birch and Optics algorithms. For each of these techniques we evaluate
its performance using metrics such as the Calinski-Harabasz and Davies-Bouldin
indices and the Silhouette value. In this approach, we evaluate the
outcomes of the clustering methods by comparing them with a set of annotated
images which denotes the ground truth regarding the decomposition and/or
corrosion area of the original images. The results show that several clustering
techniques applied to the given dataset achieved decent accuracy, precision,
recall and F1 scores. Overall, it was observed that the deterioration was
detected quite accurately.
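The internal validity indices named above are available in scikit-learn; a minimal example on stand-in data is shown below, where the random matrix takes the place of the flattened hyperspectral pixels.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

# stand-in for the flattened hyperspectral image: n_pixels x n_bands
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Calinski-Harabasz:", calinski_harabasz_score(X, labels))
print("Davies-Bouldin:   ", davies_bouldin_score(X, labels))
print("Silhouette:       ", silhouette_score(X, labels))
```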
|
Hierarchical least-squares programs with linear constraints (HLSP) are a type
of optimization problem very common in robotics. Each priority level contains
an objective in least-squares form which is subject to the linear constraints
of the higher priority hierarchy levels. Active-set methods (ASM) are a popular
choice for solving them. However, they can perform poorly in terms of
computational time if there are large changes of the active set. We therefore
propose a computationally efficient primal-dual interior-point method (IPM) for
HLSPs, which is able to maintain a constant number of solver iterations in these
situations. We base our IPM on the null-space method which requires only a
single decomposition per Newton iteration instead of two as it is the case for
other IPM solvers. After a priority level has converged, we compose a set of
active constraints based on the dual variables and project lower priority levels into
their null-space. We show that the IPM-HLSP can be expressed in least-squares
form which avoids the formation of the quadratic Karush-Kuhn-Tucker (KKT)
Hessian. Due to our choice of the null-space basis the IPM-HLSP is as fast as
the state-of-the-art ASM-HLSP solver for equality-only problems.
|
Aggregation of heating, ventilation, and air conditioning (HVAC) loads can
provide reserves to absorb volatile renewable energy, especially solar
photovoltaic (PV) generation. However, the time-varying PV generation is not
perfectly known when the system operator decides the HVAC control schedules. To
consider the unknown uncertain PV generation, in this paper, we formulate a
distributionally robust chance-constrained (DRCC) building load control problem
under two typical ambiguity sets: the moment-based and Wasserstein ambiguity
sets. We derive mixed-integer linear programming (MILP) reformulations for DRCC
problems under both sets. Especially for the DRCC problem under the Wasserstein
ambiguity set, we utilize the right-hand side (RHS) uncertainty to derive a
more compact MILP reformulation than the commonly known MILP reformulations
with big-M constants. All the results also apply to general individual chance
constraints with RHS uncertainty. Furthermore, we propose an adjustable
chance-constrained variant to achieve a trade-off between the operational risk
and costs. We derive MILP reformulations under the Wasserstein ambiguity set
and second-order conic programming (SOCP) reformulations under the moment-based
set. Using real-world data, we conduct computational studies to demonstrate the
efficiency of the solution approaches and the effectiveness of the solutions.
|
The thermal conductivity and shear viscosity of dense nuclear matter, along
with the corresponding shear viscosity timescale of canonical neutron stars
(NSs), are investigated, where the effect of Fermi surface depletion (i.e., the
$Z$-factor effect) induced by the nucleon-nucleon correlation is taken into
account. The factors which are responsible for the transport coefficients,
including the equation of state for building the stellar structure, nucleon
effective masses, in-medium cross sections, and the $Z$-factor at Fermi
surfaces, are all calculated in the framework of the Brueckner theory. The
Fermi surface depletion is found to enhance the transport coefficients by
several times at high densities, which is more favorable to damping the
gravitational-wave-driven $r$-mode instability of NSs. Yet, the onset of the
$Z$-factor-quenched neutron triplet superfluidity provides the opposite
effects, which can be much more significant than the above-mentioned $Z$-factor
effect itself. Therefore, different from the previous understanding, the
nucleon shear viscosity is still smaller than the lepton one in the superfluid
NS matter at low temperatures. Accordingly, the shear viscosity cannot stabilize
canonical NSs against $r$-mode oscillations even at quite low core temperatures
$10^6$ K.
|
In robotics, ergodic control extends the tracking principle by specifying a
probability distribution over an area to cover instead of a trajectory to
track. The original problem is formulated as a spectral multiscale coverage
problem, typically requiring the spatial distribution to be decomposed as
Fourier series. This approach does not scale well to control problems requiring
exploration in search spaces of more than two dimensions. To address this issue,
we propose the use of tensor trains, a recent low-rank tensor decomposition
technique from the field of multilinear algebra. The proposed solution is
efficient, both computationally and storage-wise, hence making it suitable for
its online implementation in robotic systems. The approach is applied to a
peg-in-hole insertion task requiring full 6D end-effector poses, implemented
with a 7-axis Franka Emika Panda robot. In this experiment, ergodic exploration
allows the task to be achieved without requiring the use of force/torque
sensors.
|
Gait recognition is a promising video-based biometric for identifying
individual walking patterns from a long distance. At present, most gait
recognition methods use silhouette images to represent a person in each frame.
However, silhouette images can lose fine-grained spatial information, and most
papers do not consider how to obtain these silhouettes in complex scenes.
Furthermore, silhouette images contain not only gait features but also other
visual clues that can be recognized. Hence these approaches cannot be
considered strict gait recognition.
We leverage recent advances in human pose estimation to estimate robust
skeleton poses directly from RGB images to bring back model-based gait
recognition with a cleaner representation of gait. Thus, we propose GaitGraph
that combines skeleton poses with Graph Convolutional Network (GCN) to obtain a
modern model-based approach for gait recognition. The main advantages are a
cleaner, more elegant extraction of the gait features and the ability to
incorporate powerful spatio-temporal modeling using GCN. Experiments on the
popular CASIA-B gait dataset show that our method achieves state-of-the-art
performance in model-based gait recognition.
The code and models are publicly available.
|
We propose a method for constructing confidence intervals that account for
many forms of spatial correlation. The interval has the familiar `estimator
plus or minus a standard error times a critical value' form, but we propose
new methods for constructing the standard error and the critical value. The
standard error is constructed using population principal components from a
given `worst-case' spatial covariance model. The critical value is chosen to
ensure coverage in a benchmark parametric model for the spatial correlations.
The method is shown to control coverage in large samples whenever the spatial
correlation is weak, i.e., with average pairwise correlations that vanish as
the sample size gets large. We also provide results on correct coverage in a
restricted but nonparametric class of strong spatial correlations, as well as
on the efficiency of the method. In a design calibrated to match economic
activity in U.S. states the method outperforms previous suggestions for
spatially robust inference about the population mean.
|
In light of the increasing coupling between electricity and gas networks,
this paper introduces two novel iterative methods for efficiently solving the
multiperiod optimal electricity and gas flow (MOEGF) problem. The first is an
iterative MILP-based method and the second is an iterative LP-based method with
an elaborate procedure for ensuring an integral solution. The convergence of
the two approaches is founded on two key features. The first is a penalty term
with a single, automatically tuned, parameter for controlling the step size of
the gas network iterates. The second is a sequence of supporting hyperplanes
together with an increasing number of carefully constructed halfspaces for
controlling the convergence of the electricity network iterates. Moreover, the
two proposed algorithms use as a warm start the solution from a novel
polyhedral relaxation of the MOEGF problem, for a noticeable improvement in
computation time as compared to a cold start. Unlike the first method, which
invokes a branch-and-bound algorithm to find an integral solution, the second
method implements an elaborate steering procedure that guides the continuous
variables to take integral values at the solution. Numerical evaluation
demonstrates that the two proposed methods can converge to high-quality
feasible solutions in computation times at least two orders of magnitude faster
than both a state-of-the-art nonlinear branch-and-bound (NLBB) MINLP solver and
a mixed-integer convex programming (MICP) relaxation of the MOEGF problem. The
experimental setup consists of five test cases, three of which involve the real
electricity and gas transmission networks of the state of Victoria with actual
linepack and demand profiles.
|
Attention Mechanism is a widely used method for improving the performance of
convolutional neural networks (CNNs) on computer vision tasks. Despite its
pervasiveness, we have a poor understanding of what its effectiveness stems
from. It is popularly believed that its effectiveness stems from the visual
attention explanation, advocating focusing on the important part of input data
rather than ingesting the entire input. In this paper, we find that there is
only a weak consistency between the attention weights of features and their
importance. Instead, we verify the crucial role of feature map multiplication
in attention mechanism and uncover a fundamental impact of feature map
multiplication on the learned landscapes of CNNs: with the high-order
non-linearity brought by the feature map multiplication, it plays a
regularization role on CNNs, making them learn smoother and more stable
landscapes near real samples compared to vanilla CNNs. This smoothness and
stability induce a more predictive and stable behavior in-between real samples,
and make CNNs generalize better. Moreover, motivated by the proposed
effectiveness of feature map multiplication, we design feature map
multiplication network (FMMNet) by simply replacing the feature map addition in
ResNet with feature map multiplication. FMMNet outperforms ResNet on various
datasets, and this indicates that feature map multiplication plays a vital role
in improving the performance even without the finely designed attention
mechanisms used in existing methods.
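A minimal PyTorch sketch of swapping the residual addition for an element-wise feature map multiplication, in the spirit of the FMMNet idea above; the block layout is an assumption and may differ from the architecture used in the paper.

```python
import torch
import torch.nn as nn

class FMMBlock(nn.Module):
    """ResNet-style basic block with the skip connection merged by
    element-wise multiplication instead of addition (illustrative layout)."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out * x                 # feature map multiplication (was: out + x)
        return self.relu(out)

x = torch.randn(1, 64, 32, 32)
print(FMMBlock(64)(x).shape)          # torch.Size([1, 64, 32, 32])
```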
|
From crying to babbling and then to speech, an infant's vocal tract goes through
anatomical restructuring. In this paper, we propose a fast, non-invasive method of
using infant cry signals with convolutional neural network (CNN) based age
classification to diagnose the abnormality of the vocal tract development as
early as 4 months of age. We study F0, F1, F2, and spectrograms and relate them to
the postnatal development of infant vocalization. A novel CNN based age
classification is performed with binary age pairs to discover the pattern and
tendency of the vocal tract changes. The effectiveness of this approach is
evaluated on Baby2020 with healthy infant cries and Baby Chillanto database
with pathological infant cries. The results show that our approach yields
79.20% accuracy for healthy cries, 84.80% for asphyxiated cries, and 91.20% for
deaf cries. Our method first reveals that infants' vocal tract develops to a
certain level by 4 months of age and that infants can start controlling the vocal folds
to produce discontinuous cry sounds leading to babbling. Early diagnosis of
growth abnormalities of the vocal tract can help parents stay vigilant and adopt
medical treatment or training therapy for their infants as early as possible.
|
Second-order nonlinear optics is the base for a large variety of devices
aimed at the active manipulation of light. However, physical principles
restrict its occurrence to non-centrosymmetric, anisotropic matter. This
significantly limits the number of base materials exhibiting nonlinear optics.
Here, we show that embedding chromophores in an array of conical channels 13 nm
across in monolithic silica results in mesoscopic anisotropic matter and thus
in a hybrid material showing second-harmonic generation (SHG). This nonlinear
response is compared to the one achieved in corona-poled polymer films containing
the same chromophores. It originates in confinement-induced orientational
order of the elongated guest molecules in the nanochannels. This leads to a
non-centrosymmetric dipolar order and hence to a non-linear light-matter
interaction on the sub-wavelength, single-pore scale. Our study demonstrates
that the advent of large-scale, self-organised nanoporosity in monolithic
solids along with confinement-controllable orientational order of chromophores
at the single-pore scale provides a reliable and accessible tool to design
materials with a nonlinear meta-optics.
|
Structured light harnessing multiple degrees of freedom has become a powerful
approach to use complex states of light in fundamental studies and
applications. Here, we investigate the light field of an ultrafast laser beam
with a wavelength-dependent polarization state, a beam we term a spectral vector
beam. We present a simple technique to generate and tune such structured
beams and demonstrate their spectroscopic capabilities. By only measuring the
polarization state using fast photodetectors, it is possible to track
pulse-to-pulse changes in the frequency spectrum caused by, e.g., narrowband
transmission or absorption. In our experiments, we reach read-out rates of
around 6 MHz, which is limited by our technical ability to modulate the
spectrum and can in principle reach GHz read-out rates. In simulations we
extend the spectral range to more than 1000 nm by using a supercontinuum light
source, thereby paving the way to various applications requiring high-speed
spectroscopic measurements.
|
The complementarity and substitutability between products are essential
concepts in retail and marketing. Qualitatively, two products are said to be
substitutable if a customer can replace one product by the other, while they
are complementary if they tend to be bought together. In this article, we take
a network perspective to help automatically identify complements and
substitutes from sales transaction data. Starting from a bipartite
product-purchase network representation, with both transaction nodes and
product nodes, we develop appropriate null models to infer significant
relations, either complements or substitutes, between products, and design
measures based on random walks to quantify their importance. The resulting
unipartite networks between products are then analysed with community detection
methods, in order to find groups of similar products for the different types of
relationships. The results are validated by combining observations from a
real-world basket dataset with the existing product hierarchy, as well as a
large-scale flavour compound and recipe dataset.
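A small networkx sketch of the bipartite product-purchase representation and its weighted projection onto products, from which co-purchase (complement-like) relations can be read off. The toy baskets and the simple co-occurrence weighting stand in for the paper's null models and random-walk measures.

```python
import networkx as nx
from networkx.algorithms import bipartite

# toy transactions: each basket is one transaction node linked to product nodes
baskets = {"t1": ["pasta", "tomato sauce", "cheese"],
           "t2": ["pasta", "tomato sauce"],
           "t3": ["rice", "soy sauce"],
           "t4": ["pasta", "cheese"]}

B = nx.Graph()
B.add_nodes_from(baskets, bipartite=0)                       # transaction nodes
products = {p for items in baskets.values() for p in items}
B.add_nodes_from(products, bipartite=1)                      # product nodes
B.add_edges_from((t, p) for t, items in baskets.items() for p in items)

# weighted one-mode projection: edge weight = number of shared transactions
P = bipartite.weighted_projected_graph(B, products)
for u, v, d in sorted(P.edges(data=True), key=lambda e: -e[2]["weight"]):
    print(u, "--", v, "co-purchased", d["weight"], "times")
```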
|