We present a gauge theory of the conformal group in four spacetime dimensions
with a non-vanishing torsion. In particular, we allow for a completely
antisymmetric torsion, equivalent by Hodge duality to an axial vector whose
presence does not spoil the conformal invariance of the theory, in contrast
with claims of antecedent literature. The requirement of conformal invariance
implies a differential condition (in particular, a Killing equation) on the
aforementioned axial vector which leads to a Maxwell-like equation in a
four-dimensional curved background. We also give some preliminary results in
the context of $\mathcal{N}=1$ four-dimensional conformal supergravity in the
geometric approach, showing that if we only allow for the constraint of
vanishing supertorsion, all the other constraints imposed in the spacetime
approach are a consequence of the closure of the Bianchi identities in
superspace. This paves the way towards a future complete investigation of
conformal supergravity using the Bianchi identities in the presence of a
non-vanishing (super)torsion.
|
The Transformer architecture has been successful across many domains,
including natural language processing, computer vision and speech recognition.
In keyword spotting, self-attention has primarily been used on top of
convolutional or recurrent encoders. We investigate a range of ways to adapt
the Transformer architecture to keyword spotting and introduce the Keyword
Transformer (KWT), a fully self-attentional architecture that exceeds
state-of-the-art performance across multiple tasks without any pre-training or
additional data. Surprisingly, this simple architecture outperforms more
complex models that mix convolutional, recurrent and attentive layers. KWT can
be used as a drop-in replacement for these models, setting two new benchmark
records on the Google Speech Commands dataset with 98.6% and 97.7% accuracy on
the 12- and 35-command tasks, respectively.
|
Controlled breakdown has recently emerged as a highly appealing technique to
fabricate solid-state nanopores for a wide range of biosensing applications.
This technique relies on applying an electric field of approximately 0.6-1 V/nm
across the membrane to induce a current and, eventually, breakdown of the
dielectric. However, a detailed description of how electrical conduction
through the dielectric occurs during controlled breakdown has not yet been
reported. Here, we study electrical conduction and nanopore formation in
SiN$_x$ membranes during controlled breakdown. We show that depending on the
membrane stoichiometry, electrical conduction is limited by either oxidation
reactions that must occur at the membrane-electrolyte interface (Si-rich
SiN$_x$), or electron transport across the dielectric (stoichiometric
Si$_3$N$_4$). We discuss several important implications of understanding this
process, which will aid in further developing controlled breakdown in the
coming years, particularly for extending this technique to integrate nanopores
with on-chip nanostructures.
|
Let A be an idempotent algebra on a finite domain. By mediating between
results of Chen and Zhuk, we argue that if A satisfies the polynomially
generated powers property (PGP) and B is a constraint language invariant under
A (that is, in Inv(A)), then QCSP(B) is in NP. In doing this we study the
special forms of PGP, switchability and collapsibility, in detail, both
algebraically and logically, addressing various questions, such as
decidability, along the way.
We then prove a complexity-theoretic converse in the case of infinite
constraint languages encoded in propositional logic, that if Inv(A) satisfies
the exponentially generated powers property (EGP), then QCSP(Inv(A)) is
co-NP-hard. Since Zhuk proved that only PGP and EGP are possible, we derive a
full dichotomy for the QCSP, justifying what we term the Revised Chen
Conjecture. This result becomes more significant now that the original Chen
Conjecture is known to be false.
Switchability was introduced by Chen as a generalisation of the already-known
collapsibility. For three-element domain algebras A that are switchable and
omit a G-set, we prove that, for every finite subset D of Inv(A), Pol(D) is
collapsible. The significance of this is that, for QCSP on finite structures
(over a three-element domain), all QCSP tractability (in P) explained by
switchability is already explained by collapsibility.
|
We find a novel one-parameter family of integrable quadratic Cremona maps of
the plane preserving a pencil of curves of degree 6 and of genus 1. They turn
out to serve as Kahan-type discretizations of a novel family of quadratic
vector fields possessing a polynomial integral of degree 6 whose level curves
are of genus 1, as well. These vector fields are non-homogeneous
generalizations of reduced Nahm systems for magnetic monopoles with icosahedral
symmetry, introduced by Hitchin, Manton and Murray. The straightforward Kahan
discretization of these novel non-homogeneous systems is non-integrable.
However, this drawback is repaired by introducing adjustments of order
$O(\epsilon^2)$ in the coefficients of the discretization, where $\epsilon$ is
the stepsize.
|
We investigate here the final state of gravitational collapse of a
non-spherical and non-marginally bound dust cloud as modeled by the Szekeres
spacetime. We show that a directionally globally naked singularity can be
formed in this case near the collapsing cloud boundary, and not at its
geometric center as is typically the case for a spherical gravitational
collapse. This is a strong curvature naked singularity in the sense of Tipler
criterion on gravitational strength. Since the singularity forms close to the
boundary of the cloud in this scenario, the null geodesics escaping from it
would be less scattered in certain directions. The physical implications are
pointed out.
|
In this paper we demonstrate the capability of the method of Lagrangian
descriptors to unveil the phase space structures that characterize transport in
high-dimensional symplectic maps. In order to illustrate its use, we apply it
to a four-dimensional symplectic map model that is used in chemistry to explore
the nonlinear dynamics of van der Waals complexes. The advantage of this
technique is that it allows us to easily and effectively extract the invariant
manifolds that determine the dynamics of the system under study by means of
examining the intersections of the underlying phase space structures with
low-dimensional slices. With this approach, one can perform a full
computational phase space tomography from which three-dimensional
representations of the higher-dimensional phase space can be systematically
reconstructed. This analysis may be of much help for the visualization and
understanding of the nonlinear dynamical mechanisms that take place in
high-dimensional systems. In this context, we demonstrate how this tool can be
used to detect whether the stable and unstable manifolds of the system
intersect forming turnstile lobes that enclose a certain phase space volume,
and the nature of their intersection.
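A minimal numerical sketch of the discrete Lagrangian descriptor idea described above, using the 4D Froeschlé map (two coupled standard maps) as a stand-in for the paper's van der Waals map; the exponent p = 1/2, the parameter values, and the forward-only orbit sum are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def froeschle(state, K1=0.5, K2=0.5, b=0.05):
    """One step of the 4D Froeschle map (two coupled standard maps),
    a stand-in here for the van der Waals map studied in the paper."""
    x1, p1, x2, p2 = state
    p1n = p1 + K1 * np.sin(x1) + b * np.sin(x1 + x2)
    p2n = p2 + K2 * np.sin(x2) + b * np.sin(x1 + x2)
    return np.array([x1 + p1n, p1n, x2 + p2n, p2n])

def lagrangian_descriptor(state0, n_iter=50, p=0.5):
    """Discrete LD: sum of |displacement|^p along the forward orbit.
    (The backward contribution needs the inverse map and is omitted.)"""
    ld, s = 0.0, np.array(state0, dtype=float)
    for _ in range(n_iter):
        s_next = froeschle(s)
        ld += np.sum(np.abs(s_next - s) ** p)
        s = s_next
    return ld

# Evaluate the LD on a 2D slice (x1, p1) with (x2, p2) fixed -- the
# "low-dimensional slice" idea from the abstract; sharp ridges of the
# resulting field trace intersections of invariant manifolds with the slice.
grid = np.linspace(0.0, 2.0 * np.pi, 200)
LD = np.array([[lagrangian_descriptor([x, px, 1.0, 0.0]) for x in grid]
               for px in grid])
```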
|
Dialog systems enriched with external knowledge can handle user queries that
are outside the scope of the supporting databases/APIs. In this paper, we
follow the baseline provided in DSTC9 Track 1 and propose three subsystems,
KDEAK, KnowleDgEFactor, and Ens-GPT, which form the pipeline for a
task-oriented dialog system capable of accessing unstructured knowledge.
Specifically, KDEAK performs knowledge-seeking turn detection by formulating
the problem as natural language inference using knowledge from dialogs,
databases and FAQs. KnowleDgEFactor accomplishes the knowledge selection task
by formulating a factorized knowledge/document retrieval problem with three
modules performing domain, entity and knowledge level analyses. Ens-GPT
generates a response by first processing multiple knowledge snippets, followed
by an ensemble algorithm that decides if the response should be solely derived
from a GPT2-XL model, or regenerated in combination with the top-ranking
knowledge snippet. Experimental results demonstrate that the proposed pipeline
system outperforms the baseline and generates high-quality responses, achieving
at least 58.77% improvement on BLEU-4 score.
|
We develop efficient randomized algorithms to solve the black-box
reconstruction problem for polynomials over finite fields, computable by depth
three arithmetic circuits with alternating addition/multiplication gates, such
that the output gate is an addition gate with in-degree two. These circuits
compute polynomials of the form $G\times(T_1 + T_2)$, where $G,T_1,T_2$ are
products of affine forms, and the polynomials $T_1,T_2$ have no common factors.
The rank of such a circuit is defined as the dimension of the vector space
spanned by all affine factors of $T_1$ and $T_2$. For any polynomial $f$
computable by such a circuit, $rank(f)$ is defined to be the minimum rank of
any such circuit computing it.
Our work develops randomized reconstruction algorithms which take as input
black-box access to a polynomial $f$ (over finite field $\mathbb{F}$),
computable by such a circuit. Here are the results.
1 [Low rank]: When $5\leq rank(f) = O(\log^3 d)$, it runs in time
$(nd^{\log^3d}\log |\mathbb{F}|)^{O(1)}$, and, with high probability, outputs a
depth three circuit computing $f$, with top addition gate having in-degree
$\leq d^{rank(f)}$.
2 [High rank]: When $rank(f) = \Omega(\log^3 d)$, it runs in time $(nd\log
|\mathbb{F}|)^{O(1)}$, and, with high probability, outputs a depth three
circuit computing $f$, with top addition gate having in-degree two.
Ours is the first black-box reconstruction algorithm for this circuit class
that runs in time polynomial in $\log |\mathbb{F}|$. This problem was posed as
an open problem in [GKL12] (STOC 2012).
|
We exploit a two-dimensional model [7], [6] and [1] describing the elastic
behavior of the wall of a flexible blood vessel, which takes into account the
interaction with the surrounding muscle tissue and with the 3D fluid flow. We
study time-periodic flows in a cylinder with such compound boundary conditions.
The main result is that solutions of this problem do not depend on the period:
they are nothing other than the time-independent Poiseuille flow. By contrast,
the analogous solutions of the Stokes equations for a rigid wall (the no-slip
boundary condition) depend on the period, and their profiles depend on time.
|
Diffractive zone plate optics uses a thin micro-structure pattern to alter
the propagation direction of the incoming light wave. It has found important
applications in extreme-wavelength imaging where conventional refractive lenses
do not exist. The resolution limit of zone plate optics is determined by the
smallest width of the outermost zone. In order to improve the achievable
resolution, significant efforts have been devoted to the fabrication of very
small zone width with ultrahigh placement accuracy. Here, we report the use of
a diffractometer setup for bypassing the resolution limit of zone plate optics.
In our prototype, we mounted the sample on two rotation stages and used a
low-resolution binary zone plate to relay the sample plane to the detector. We
then performed both in-plane and out-of-plane sample rotations and captured the
corresponding raw images. The captured images were processed using a Fourier
ptychographic procedure for resolution improvement. The final achievable
resolution of the reported setup is not determined by the smallest width
structures of the employed binary zone plate; instead, it is determined by the
maximum angle of the out-of-plane rotation. In our experiment, we demonstrated
8-fold resolution improvement using both a resolution target and a titanium
dioxide sample. The reported approach may be able to bypass the fabrication
challenge of diffractive elements and open up new avenues for microscopy with
extreme wavelengths.
|
Grasping unseen objects in unconstrained, cluttered environments is an
essential skill for autonomous robotic manipulation. Despite recent progress in
full 6-DoF grasp learning, existing approaches often consist of complex
sequential pipelines that possess several potential failure points and
run-times unsuitable for closed-loop grasping. Therefore, we propose an
end-to-end network that efficiently generates a distribution of 6-DoF
parallel-jaw grasps directly from a depth recording of a scene. Our novel grasp
representation treats 3D points of the recorded point cloud as potential grasp
contacts. By rooting the full 6-DoF grasp pose and width in the observed point
cloud, we can reduce the dimensionality of our grasp representation to 4-DoF
which greatly facilitates the learning process. Our class-agnostic approach is
trained on 17 million simulated grasps and generalizes well to real-world
sensor data. In a robotic grasping study of unseen objects in structured
clutter we achieve over 90% success rate, cutting the failure rate in half
compared to a recent state-of-the-art method.
|
Chimeric Antigen Receptor (CAR) T-cell therapy is an immunotherapy that has
recently become highly instrumental in the fight against life-threatening
diseases. A variety of modeling and computational simulation efforts have
addressed different aspects of CAR T therapy, including T-cell activation, T-
and malignant cell population dynamics, therapeutic cost-effectiveness
strategies, and patient survival analyses. In this article, we present a
systematic review of those efforts, including mathematical, statistical, and
stochastic models employing a wide range of algorithms, from differential
equations to machine learning. To the best of our knowledge, this is the first
review of all such models studying CAR T therapy. In this review, we provide a
detailed summary of the strengths, limitations, methodology, data used, and
data lacking in current published models. This information may help in
designing and building better models for enhanced prediction and assessment of
the benefit-risk balance associated with novel CAR T therapies, as well as with
the data collection essential for building such models.
|
The nature of dark matter (DM) is one of the most fascinating unresolved
challenges of modern physics. One of the perspective hypotheses suggests that
DM consists of ultralight bosonic particles in the state of Bose-Einstein
condensate (BEC). The superfluid nature of a BEC would dramatically affect the
properties of DM, including quantization of the angular momentum. A quantum of
angular momentum in the form of a vortex line is expected to have a
considerable impact on the luminous matter in galaxies, including the density
distribution and rotation curves. We investigate the evolution of a spinning DM
cloud with typical galactic halo mass and radius. Stationary vortex soliton
states with different topological charges are analyzed analytically and
numerically. It is shown that while all multi-charged vortex states are
unstable, a single-charged vortex soliton is extremely robust and survives
over the lifetime of the Universe.
|
Real-world time series data often present recurrent or repetitive patterns and
are often generated in real time, as with transportation passenger volume,
network traffic, system resource consumption, energy usage, and human gait.
Detecting anomalous events in such time series data with machine learning
approaches has been an active research topic in many different areas. However,
most machine learning approaches require labeled datasets and offline training,
and may suffer from high computational complexity, consequently hindering their
applicability. Providing a lightweight self-adaptive approach
that does not need offline training in advance and meanwhile is able to detect
anomalies in real time could be highly beneficial. Such an approach could be
immediately applied and deployed on any commodity machine to provide timely
anomaly alerts. To facilitate such an approach, this paper introduces SALAD,
which is a Self-Adaptive Lightweight Anomaly Detection approach based on a
special type of recurrent neural networks called Long Short-Term Memory (LSTM).
Instead of using offline training, SALAD converts a target time series into a
series of average absolute relative error (AARE) values on the fly and predicts
an AARE value for every upcoming data point based on short-term historical AARE
values. If the difference between a calculated AARE value and its corresponding
forecast AARE value is higher than a self-adaptive detection threshold, the
corresponding data point is considered anomalous. Otherwise, the data point is
considered normal. Experiments based on two real-world open-source time series
datasets demonstrate that SALAD outperforms five other state-of-the-art anomaly
detection approaches in terms of detection accuracy. In addition, the results
also show that SALAD is lightweight and can be deployed on a commodity machine.
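A minimal sketch of the detection loop the abstract describes, with a moving-average forecaster standing in for SALAD's LSTM predictors; the window size, the value forecaster, and the mean-plus-k-sigma threshold rule are our assumptions for illustration, not the paper's settings:

```python
import numpy as np
from collections import deque

def salad_like_detector(series, w=10, k=3.0):
    """Sketch of SALAD's online loop: value forecast -> AARE on the fly ->
    AARE forecast -> self-adaptive threshold. A moving average stands in
    for the paper's LSTM predictors; w and k are illustrative choices."""
    errs, aares, flags = deque(maxlen=w), deque(maxlen=w), []
    for t, y in enumerate(series):
        y_hat = np.mean(series[max(0, t - w):t]) if t > 0 else y
        errs.append(abs(y - y_hat) / max(abs(y), 1e-8))
        aare = np.mean(errs)                          # AARE of recent points
        aare_hat = np.mean(aares) if aares else aare  # forecast the next AARE
        # Self-adaptive threshold from recent AARE variability (our assumption).
        thr = k * np.std(aares) if len(aares) > 1 else np.inf
        flags.append(abs(aare - aare_hat) > thr)      # True = anomalous point
        aares.append(aare)
    return flags
```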
|
Uncertainty is the only certainty there is. Modeling data uncertainty is
essential for regression, especially in unconstrained settings. Traditionally
the direct regression formulation is considered and the uncertainty is modeled
by modifying the output space to a certain family of probabilistic
distributions. On the other hand, classification-based regression and
ranking-based solutions are more popular in practice, while direct regression
methods suffer from limited performance. How to model the uncertainty within
present-day regression technologies remains an open issue. In this paper, we
propose to learn probabilistic ordinal embeddings, which represent each datum
as a multivariate Gaussian distribution rather than a deterministic point in
the latent space. An ordinal distribution constraint is
proposed to exploit the ordinal nature of regression. Our probabilistic ordinal
embeddings can be integrated into popular regression approaches and empower
them with the ability of uncertainty estimation. Experimental results show that
our approach achieves competitive performance. Code is available at
https://github.com/Li-Wanhua/POEs.
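A hedged PyTorch sketch of the core idea: mapping each input to a diagonal Gaussian and imposing an ordinal constraint on the distances between the distributions. The head architecture, the 2-Wasserstein distance, and the margin loss are illustrative choices, not necessarily the authors' exact ones; see the released code at the URL above for their implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbOrdinalHead(nn.Module):
    """Each input feature vector is mapped to a diagonal Gaussian
    N(mu, diag(sigma^2)) in the latent space (illustrative head)."""
    def __init__(self, feat_dim=512, latent_dim=64):
        super().__init__()
        self.mu = nn.Linear(feat_dim, latent_dim)
        self.log_var = nn.Linear(feat_dim, latent_dim)

    def forward(self, feats):
        return self.mu(feats), self.log_var(feats)

def w2_sq(mu1, lv1, mu2, lv2):
    """Squared 2-Wasserstein distance between diagonal Gaussians."""
    s1, s2 = torch.exp(0.5 * lv1), torch.exp(0.5 * lv2)
    return ((mu1 - mu2) ** 2).sum(-1) + ((s1 - s2) ** 2).sum(-1)

def ordinal_loss(d_near, d_far, margin=1.0):
    """Ordinal constraint: if |y_i - y_j| < |y_i - y_k| for the labels,
    the distances between the embedded distributions should be ordered
    the same way (a margin-based rendering of the constraint)."""
    return F.relu(d_near - d_far + margin).mean()
```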
|
BiVO4, a visible-light-responsive photocatalyst, has shown tremendous potential
because of its abundant raw material sources, good stability and low cost.
However, its poor capability to separate electron-hole pairs limits further
applications, and a single-component modification strategy is barely adequate
to obtain highly efficient photocatalytic performance. In this work, P
substituted some of the V atoms of the VO4 oxoanions, i.e., P was doped into
the V sites of the host lattice of BiVO4 by a hydrothermal route. Meanwhile,
Ag, an attractive and efficient electron cocatalyst, was selectively deposited
on the (010) facet of the BiVO4 nanosheets via facile photo-deposition. As a
result, the dually modified BiVO4 nanosheets exhibited enhanced photocatalytic
degradation of methylene blue (MB). In detail, the photocatalytic rate constant
(k) was 2.285 min$^{-1}$g$^{-1}$, 2.78 times that of pristine BiVO4 nanosheets.
P-doping favored the formation of O vacancies, led to more charge carriers, and
facilitated the photocatalytic reaction, while the metallic Ag loaded on the
(010) facet effectively transferred photogenerated electrons, which helped
electron-hole pair separation. The present work may inspire new approaches to
the smart design and controllable synthesis of highly efficient photocatalytic
materials.
|
The initial value problem for Hookean incompressible viscoelastic motion in
three space dimensions has global strong solutions with small displacements.
|
Medical imaging datasets usually exhibit domain shift due to the variations
of scanner vendors, imaging protocols, etc. This raises the concern about the
generalization capacity of machine learning models. Domain generalization (DG),
which aims to learn a model from multiple source domains such that it can be
directly generalized to unseen test domains, seems particularly promising to
the medical imaging community. To address DG, the recent model-agnostic
meta-learning
(MAML) has been introduced, which transfers the knowledge from previous
training tasks to facilitate the learning of novel testing tasks. However, in
clinical practice, there are usually only a few annotated source domains
available, which decreases the capacity of training task generation and thus
increases the risk of overfitting to training tasks in the paradigm. In this
paper, we propose a novel DG scheme of episodic training with task augmentation
on medical imaging classification. Based on meta-learning, we develop the
paradigm of episodic training to construct the knowledge transfer from episodic
training-task simulation to the real testing task of DG. Motivated by the
limited number of source domains in real-world medical deployment, we consider
the unique task-level overfitting and propose task augmentation, which enhances
the variety of the generated training tasks, to alleviate it. With the
established learning framework, we further exploit a novel meta-objective to
regularize the deep embedding of training domains. To validate the
effectiveness of the proposed method, we perform experiments on
histopathological images and abdominal CT images.
|
In this paper we consider the inhomogeneous nonlinear Schr\"odinger (INLS)
equation \begin{align}\label{inls} i \partial_t u +\Delta u +|x|^{-b}
|u|^{2\sigma}u = 0, \,\,\, x \in \mathbb{R}^N \end{align} with $N\geq 3$. We
focus on the intercritical case, where the scaling invariant Sobolev index
$s_c=\frac{N}{2}-\frac{2-b}{2\sigma}$ satisfies $0<s_c<1$. In a previous work,
for radial initial data in $\dot H^{s_c}\cap \dot H^1$, we proved the existence
of blow-up solutions and also a lower bound for the blow-up rate. Here we
extend these results to the non-radial case. We also prove an upper bound for
the blow-up rate and a concentration result for general finite time blow-up
solutions in $H^1$.
|
The COVID-19 pandemic has forced changes in production and especially in
human interaction, with "social distancing" a standard prescription for slowing
transmission of the disease. This paper examines the economic effects of social
distancing at the aggregate level, weighing both the benefits and the costs of
prolonged distancing. Specifically, we fashion a model of economic recovery
when the productive capacity of factors of production is restricted by social
distancing, building a system of equations where output growth and changes in
social distancing are interdependent. The model attempts to show the complex
interactions between output levels and social distancing, developing cyclical
paths for both variables. Ultimately, however, the model suggests that defying
gravity via prolonged social distancing makes a lower growth path inevitable.
|
Variations in the solar wind (SW) parameters with scales of several years are
an important characteristic of solar activity and the basis for a long-term
space weather forecast. We examine the behavior of interplanetary parameters
over 21-24 solar cycles (SCs) on the basis of OMNI database
(https://spdf.gsfc.nasa.gov/pub/data/omni). Since changes in parameters can be
associated both with changes in the number of different large-scale types of
SW, and with variations in the values of these parameters at different phases
of the solar cycle and during the transition from one cycle to another, we
select the entire study period in accordance with the Catalog of large-scale SW
types for 1976-2019 (See the site http://www.iki.rssi.ru/pub/omni, [Yermolaev
et al., 2009]), which covers the period from 21 to 24 SCs, and in accordance
with the phases of the cycles, and averaging the parameters at selected
intervals. In addition to a sharp drop in the number of ICMEs (and associated
Sheath types), there is a noticeable drop in the value (by 20-40%) of plasma
parameters and magnetic field in different types of solar wind at the end of
the 20th century and a continuation of the fall or persistence at a low level
in the 23-24 cycles. Such a drop in the solar wind is apparently associated
with a decrease in solar activity and manifests itself in a noticeable decrease
in space weather factors.
|
A new control approach is proposed for the grid insertion of Power Park
Modules (PPMs). It allows full participation of these modules in ancillary
services. This means not only that their control has a positive impact on the
grid frequency and voltage dynamics, but also that they can effectively
participate in the existing primary and secondary control loops together with
the classic thermal/inertia synchronous generators, and fulfill the same
specifications from both the control and the contractual points of view. To
achieve this level of performance, a system approach based on an innovative
control model is proposed. This control model drops the classic hypothesis of
separation between voltage and frequency dynamics, used until now, in order to
gather these dynamics into a small-size model. From the system point of view,
the dynamics are grouped in the proposed control model by the time scales of
the phenomena. This results in higher-performance controls in comparison with
classic approaches, which orient controls towards physical actuators (control
of the grid-side converter and of the generator-side converter). It also allows
coordination between the control of the converters and the generator or, in the
case of multi-machine specifications, among several PPMs. From the control
synthesis point of view, classic robust approaches are used (e.g., H-infinity
synthesis). Implementation and validation tests are presented for wind PPMs,
but the approach holds for any other type of PPM. These results will be further
used to control the units of the new Dynamic Virtual Power Plant concept
introduced in the H2020 POSYTYF project.
|
We introduce circular evolutes and involutes of framed curves in the
Euclidean space. Circular evolutes of framed curves stem from the curvature
circles of Bishop directions and singular value sets of normal surfaces of
Bishop directions. On the other hand, involutes of framed curves are direct
generalizations of involutes of regular space curves and frontals in the
Euclidean plane. We investigate properties of normal surfaces, circular
evolutes, and involutes of framed curves. We observe that taking circular
evolutes and taking involutes of framed curves are mutually inverse operations
under suitable assumptions, similarly to evolutes and involutes of fronts in
the Euclidean plane. Furthermore, we investigate the relations among
singularities of normal
surfaces, circular evolutes, and involutes of framed curves.
|
In this paper, a real-world transportation problem is addressed, concerning
the collection and the transportation of biological sample tubes from sampling
points to a main hospital. Blood and other biological samples are collected in
different centers during morning hours. Then, the samples are transported to
the main hospital, for their analysis, by a fleet of vehicles located in
geographically distributed depots. Each sample has a limited lifetime and must
arrive at the main hospital within that time. If a sample cannot reach the
hospital within its lifetime, it is either discarded or processed in dedicated
facilities called Spoke Centers. Two Mixed Integer Linear Programming
formulations and an Adaptive Large Neighborhood Search (ALNS) metaheuristic
algorithm have been developed for the problem. Computational experiments on
different sets of instances based on real-life data provided by the Local
Healthcare Authority of Bologna, Italy, are presented. A comparison on small
instances with the optimal solutions obtained by the formulations shows the
effectiveness of the proposed ALNS algorithm. On real-life instances, different
batching policies of the samples are evaluated. The results show that the ALNS
algorithm is able to find solutions in which all the samples are delivered on
time, while in the real case about 40% of the samples are delivered late [5].
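For readers unfamiliar with the metaheuristic, here is a generic ALNS skeleton in Python; the acceptance rule, reward values, and weight decay are illustrative placeholders, and the paper's destroy/repair operators for the sample-collection problem are not reproduced:

```python
import random

def alns(initial, destroy_ops, repair_ops, cost, n_iter=10_000, decay=0.8):
    """Generic ALNS skeleton. Operators are drawn with probability
    proportional to adaptive weights, which are reinforced whenever the
    chosen destroy/repair pair improves the current or best solution."""
    best = curr = initial
    wd = {op: 1.0 for op in destroy_ops}
    wr = {op: 1.0 for op in repair_ops}
    for _ in range(n_iter):
        d = random.choices(list(wd), weights=list(wd.values()))[0]
        r = random.choices(list(wr), weights=list(wr.values()))[0]
        cand = r(d(curr))                      # destroy, then repair
        reward = 0.0
        if cost(cand) < cost(best):
            best, curr, reward = cand, cand, 1.0
        elif cost(cand) < cost(curr):          # greedy acceptance here; an
            curr, reward = cand, 0.4           # annealing rule is also common
        wd[d] = decay * wd[d] + (1 - decay) * reward
        wr[r] = decay * wr[r] + (1 - decay) * reward
    return best
```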
|
In this work, we study the transfer learning problem under high-dimensional
generalized linear models (GLMs), which aim to improve the fit on target data
by borrowing information from useful source data. Given which sources to
transfer, we propose an oracle algorithm and derive its $\ell_2$-estimation
error bounds. The theoretical analysis shows that under certain conditions,
when the target and source are sufficiently close to each other, the estimation
error bound could be improved over that of the classical penalized estimator
using only target data. When we do not know which sources to transfer, an
algorithm-free transferable source detection approach is introduced to detect
informative sources. The detection consistency is proved under the
high-dimensional GLM transfer learning setting. Extensive simulations and a
real-data experiment verify the effectiveness of our algorithms.
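A hedged sketch of a pooling-then-correction transfer estimator for a logistic GLM, in the spirit of the oracle algorithm described above; the paper's exact penalties and theoretical tuning are omitted, and the statsmodels-based two-step fit below is our illustrative rendering:

```python
import numpy as np
import statsmodels.api as sm

def two_step_transfer(X_src, y_src, X_tgt, y_tgt, alpha1=0.01, alpha2=0.01):
    """Pooling-then-correction transfer for a logistic GLM (sketch).
    Step 1: l1-penalized fit on pooled source + target data.
    Step 2: sparse correction on target data only, with the pooled fit
    entering the linear predictor as a fixed offset."""
    X_pool = np.vstack([X_src, X_tgt])
    y_pool = np.concatenate([y_src, y_tgt])
    w_pool = sm.GLM(y_pool, X_pool, family=sm.families.Binomial()) \
               .fit_regularized(alpha=alpha1, L1_wt=1.0).params
    delta = sm.GLM(y_tgt, X_tgt, family=sm.families.Binomial(),
                   offset=X_tgt @ w_pool) \
              .fit_regularized(alpha=alpha2, L1_wt=1.0).params
    return w_pool + delta   # final target coefficient estimate
```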
|
We extend results of parametric geometry of numbers to a general diagonal
flow on the space of lattices. Moreover, we compute the Hausdorff dimension of
the set of trajectories with every given behavior, with respect to a
nonstandard metric and thereby attain bounds on the standard ones.
|
In the rectangle stabbing problem, we are given a set $\mathcal{R}$ of
axis-aligned rectangles in $\mathbb{R}^2$, and the objective is to find a
minimum-cardinality set of horizontal and/or vertical lines such that each
rectangle is intersected by one of these lines. The standard LP relaxation for
this problem is known to have an integrality gap of 2, while a better
integrality gap of approximately 1.58 is known for the special case when
$\mathcal{R}$ is a set of horizontal segments. In this paper, we consider two
more special cases: when $\mathcal{R}$ is a set of horizontal and vertical
segments, and when $\mathcal{R}$ is a set of unit squares. We show that the
integrality gap of the standard LP relaxation in both cases is strictly less
than $2$. Our rounding technique is based on a generalization of the {\it
threshold rounding} idea used by Kovaleva and Spieksma (SIAM J. Disc. Math.
2006), which may prove useful for rounding the LP relaxations of other
geometric covering problems.
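As background for the threshold-rounding discussion, here is a sketch of the classic LP-based 2-approximation whose integrality gap the paper improves on; the candidate-line discretization and the greedy 1-D stabbing subroutine are standard, but the code is an illustration, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import linprog

def greedy_stab(intervals):
    """Optimal greedy for 1-D interval stabbing: sweep by right endpoint,
    place a line there whenever the current interval is not yet stabbed."""
    lines, last = [], -np.inf
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if lo > last:
            last = hi
            lines.append(hi)
    return lines

def stab_rectangles_2approx(rects):
    """rects: list of (x1, x2, y1, y2). Solve the standard LP, send each
    rectangle to the axis carrying >= 1/2 of its fractional mass, then
    stab each 1-D projection optimally with the greedy above."""
    vs = sorted({r[0] for r in rects})   # candidate vertical lines (w.l.o.g.)
    hs = sorted({r[2] for r in rects})   # candidate horizontal lines
    n = len(vs) + len(hs)
    # One covering constraint per rectangle: sum of stabbing lines >= 1.
    A = [[-float(x1 <= v <= x2) for v in vs] +
         [-float(y1 <= h <= y2) for h in hs] for (x1, x2, y1, y2) in rects]
    x = linprog(np.ones(n), A_ub=A, b_ub=-np.ones(len(rects)),
                bounds=[(0, 1)] * n).x
    v_mass = lambda r: sum(x[i] for i, v in enumerate(vs) if r[0] <= v <= r[1])
    vert = [(r[0], r[1]) for r in rects if v_mass(r) >= 0.5]
    horz = [(r[2], r[3]) for r in rects if v_mass(r) < 0.5]
    return greedy_stab(vert), greedy_stab(horz)
```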
|
We study a quasi-two-dimensional macroscopic system of magnetic spherical
particles settled on a shallow concave dish under a temporally oscillating
magnetic field. The system reaches a stationary state where the energy losses
from collisions and friction with the concave dish surface are compensated by
the continuous energy input coming from the oscillating magnetic field. Random
particle motions show some similarities with the motions of atoms and molecules
in a glass or a crystal-forming fluid. Because of the curvature of the surface,
particles experience an additional force toward the center of the concave dish.
When decreasing the magnetic field, the effective temperature is decreased and
diffusive particle motion slows. For slow cooling rates we observe
crystallization, where the particles organize into a hexagonal lattice. We
study the birth of the crystalline nucleus and the subsequent growth of the
crystal. Our observations support non-classical theories of crystal formation.
Initially a dense amorphous aggregate of particles forms, and then in a second
stage this aggregate rearranges internally to form the crystalline nucleus. As
the aggregate grows, the crystal grows in its interior. After a certain size,
all the aggregated particles are part of the crystal and after that, crystal
growth follows the classical theory for crystal growth.
|
The most important direction in the development of fundamental and applied
physics is the study of the properties of optical systems at the nanoscale in
order to create optical and quantum computers, biosensors, single-photon
sources for quantum informatics, devices for DNA sequencing, sensors of various
fields, etc. In all these cases, nanoscale light sources - dye molecules,
quantum dots (epitaxial or colloidal), color centers in crystals, and
nanocontacts in metals - are of key importance. In the nanoenvironment, the
characteristics of these elementary quantum systems - pumping rates, radiative
and non-radiative decay rates, the local density of states, lifetimes, level
shifts - experience changes that can be used intentionally to create nanoscale
light sources with desired properties. This review presents an analysis of
actual theoretical and experimental works in the field of elementary quantum
systems radiation control using plasmonic and dielectric nanostructures,
metamaterials, and nanoparticles made from metamaterials.
|
A fusion boundary-plasma domain is defined by axisymmetric magnetic surfaces
where the geometry is often complicated by the presence of one or more
X-points, and modeling boundary plasmas usually relies on computational grids
that account for the magnetic field geometry. The new grid generator INGRID
(Interactive Grid Generator) presented here is a Python-based code for
calculating grids for fusion boundary plasma modeling, for a variety of
configurations with one or two X-points in the domain. Based on a given
geometry of the magnetic field, INGRID first calculates a skeleton grid which
consists of a small number of quadrilateral patches; then it puts a subgrid on
each of the patches, and joins them in a global grid. This domain partitioning
strategy makes possible a uniform treatment of various configurations with one
or two X-points in the domain. This includes single-null, double-null, and
other configurations with two X-points in the domain. The INGRID design allows
generating grids either interactively, via a parameter-file driven GUI, or
using a non-interactive script-controlled workflow. Results of testing
demonstrate that INGRID is a flexible, robust, and user-friendly
grid-generation tool for fusion boundary-plasma modeling.
|
The general Next-to-Minimal Supersymmetric Standard Model (NMSSM) describes
the singlino-dominated dark-matter (DM) property by four independent
parameters: singlet-doublet Higgs coupling coefficient $\lambda$, Higgsino mass
$\mu_{tot}$, DM mass $m_{\tilde{\chi}_1^0}$, and singlet Higgs self-coupling
coefficient $\kappa$. The first three parameters strongly influence the
DM-nucleon scattering rate, while $\kappa$ usually affects the scattering only
slightly. This characteristic implies that singlet-dominated particles may form
a secluded DM sector. Under such a theoretical structure, the DM achieves the
correct abundance by annihilating into a pair of singlet-dominated Higgs bosons
by adjusting $\kappa$'s value. Its scattering with nucleons is suppressed when
$\lambda v/\mu_{tot}$ is small. This speculation is verified by a sophisticated
scan of the theory's parameter space with various experimental constraints
considered. In addition, the Bayesian evidence of the general NMSSM and that of
the $Z_3$-NMSSM are computed. It is found that, at the cost of introducing one
additional parameter, the former is approximately $3.3 \times 10^3$ times the
latter. This result corresponds to 8.05 on the Jeffreys scale and implies that
the considered experiments strongly prefer the general NMSSM over the
$Z_3$-NMSSM.
|
First principles approaches have been successful in solving many-body
Hamiltonians for real materials to the extent that correlations are weak or
moderate. As the electronic correlations become stronger, embedding methods
based on first principles approaches are often used to treat the correlations
better, by solving a suitably chosen many-body Hamiltonian with a higher-level
theory. Such combined methods are often referred to as second principles
approaches. At this level of theory the self-energy, i.e., the functional that
embodies the stronger electronic correlations, is a function of energy, of
momentum, or of both. The success of such theories is commonly measured by the
quality of the self-energy functional. However, self-consistency in the
self-energy should, in principle, also change the real-space charge
distribution in a correlated material and be able to modify the electronic
eigenfunctions, an effect that is often overlooked in second principles
approaches. Here we study the impact of charge self-consistency in two example
cases, TiSe$_{2}$, a three-dimensional charge-density-wave candidate material,
and CrBr$_{3}$, a two-dimensional ferromagnet, and show how the real-space
charge redistribution due to correlation effects, taken into account within a
first principles Green's-function-based many-body perturbative approach, is key
in driving qualitative changes to the final electronic structure of these
materials.
|
The Lightning Network (LN) is a prominent payment channel network aimed at
addressing Bitcoin's scalability issues. Due to the privacy of channel
balances, senders cannot reliably choose sufficiently liquid payment paths and
resort to a trial-and-error approach, trying multiple paths until one succeeds.
This leaks private information and decreases payment reliability, which harms
the user experience. This work focuses on the reliability and privacy of LN
payments. We create a probabilistic model of the payment process in the LN,
accounting for the uncertainty of the channel balances. This enables us to
express payment success probabilities for a given payment amount and a path.
Applying negative Bernoulli trials for single- and multi-part payments allows
us to compute the expected number of payment attempts for a given amount,
sender, and receiver. As a consequence, we analytically derive the optimal
number of parts into which one should split a payment to minimize the expected
number of attempts. This methodology allows us to define service level
objectives and quantify how much private information leaks to the sender as a
side effect of payment attempts. We propose an optimized path selection
algorithm that does not require a protocol upgrade. Namely, we suggest that
nodes prioritize paths that are most likely to succeed while making payment
attempts. A simulation based on the real-world LN topology shows that this
method reduces the average number of payment attempts by 20% compared to a
baseline algorithm similar to the ones used in practice. This improvement will
increase to 48% if the LN protocol is upgraded to implement the channel
rebalancing proposal described in BOLT14.
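A toy rendering of the model's core calculation under simplifying assumptions (uniform balance priors, statistically identical candidate paths, independent per-part geometric attempts, equal splits), which the paper's negative-Bernoulli analysis refines:

```python
import numpy as np

def path_success_prob(amount, capacities):
    """P(payment succeeds on a path) assuming each channel balance is
    uniform on [0, capacity] -- the abstract's uncertainty model."""
    return float(np.prod([max(0.0, 1.0 - amount / c) for c in capacities]))

def expected_attempts(amount, capacities, k):
    """Expected total attempts when splitting into k equal parts, treating
    each part as independent geometric trials on an identical path
    (a simplification of the paper's analysis)."""
    p = path_success_prob(amount / k, capacities)
    return np.inf if p == 0 else k / p

def optimal_split(amount, capacities, k_max=20):
    """Number of parts minimizing the expected number of attempts."""
    return min(range(1, k_max + 1),
               key=lambda k: expected_attempts(amount, capacities, k))

# Example: 0.04 BTC over a 3-hop path of 0.1 BTC channels -> 2 parts win:
# k=1 needs ~4.6 expected attempts, k=2 only ~3.9.
print(optimal_split(0.04, [0.1, 0.1, 0.1]))
```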
|
The structural evolution of laser-excited systems of gold has previously been
measured through ultrafast MeV electron diffraction. However, atomistic
simulations have long been unable to provide a consistent picture of the
melting process, resulting in large discrepancies in the predicted threshold
energy density for complete melting, as well as in the transition between
heterogeneous and homogeneous melting. We make use of two-temperature
classical molecular dynamics simulations utilizing three highly successful
interatomic potentials and reproduce electron diffraction data presented by Mo
et al. We recreate the experimental electron diffraction data employing both a
constant and temperature-dependent electron-ion equilibration rate. In all
cases we are able to match time-resolved electron diffraction data, and find
consistency between atomistic simulations and experiments, only by allowing
laser energy to be transported away from the interaction region. This
additional energy-loss pathway, which scales strongly with laser fluence, we
attribute to hot electrons leaving the target on a timescale commensurate with
melting.
|
With the growing size of data sets, feature selection becomes increasingly
important. Taking interactions of original features into consideration will
lead to extremely high dimension, especially when the features are categorical
and one-hot encoding is applied. This makes it all the more worthwhile to mine
useful features as well as their interactions. Association rule mining aims to
extract interesting correlations between items, but the rules are difficult to
use as a qualified classifier by themselves. Drawing inspiration from
association rule mining, we propose a method that uses association rules to
select features and their interactions, and then modify the algorithm to
address several practical concerns. We analyze the computational complexity of
the proposed algorithm to show its efficiency, and the results of a series of
experiments verify its effectiveness.
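A minimal sketch of the rule-based selection step using mlxtend's apriori implementation; mining rules whose consequent is the class label and keeping their antecedents as features/interactions is our illustrative reading, and the paper's practical modifications are not reproduced:

```python
from mlxtend.frequent_patterns import apriori, association_rules

def select_interactions(df_onehot, label_col, min_support=0.05, min_conf=0.7):
    """Mine rules whose consequent is the (one-hot) class label and keep
    their antecedents: size-1 antecedents are single features, larger ones
    are candidate feature interactions. df_onehot is a boolean DataFrame."""
    frequent = apriori(df_onehot, min_support=min_support, use_colnames=True)
    rules = association_rules(frequent, metric="confidence",
                              min_threshold=min_conf)
    # Keep only rules that predict the label column.
    mask = rules["consequents"].apply(lambda c: c == frozenset([label_col]))
    return sorted(rules.loc[mask, "antecedents"], key=len)
```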
|
Satellite communication is experiencing a new dawn thanks to low earth orbit
mega constellations being deployed at an unprecedented speed. Fueled by the
renewed interest in non-terrestrial networks (NTN), the Third Generation
Partnership Project (3GPP) is preparing 5G NR, NB-IoT and LTE-M for NTN
operation. This article is focused on LTE-M and the essential adaptations
needed for supporting satellite communication. Specifically, the major
challenges facing LTE-M NTN at the physical and higher layers are discussed and
potential solutions are outlined.
|
Mainstream compilers perform a multitude of analyses and optimizations on the
given input program. Each analysis pass may generate a program-abstraction.
Each optimization pass is typically composed of multiple alternating phases of
inspection of program-abstractions and transformations of the program. Upon
transformation of a program, the program-abstractions generated by various
analysis passes may become inconsistent with the program's modified state.
Consequently, the downstream transformations may be considered unsafe until the
relevant program-abstractions are stabilized, i.e., the program-abstractions
are made consistent with the modified program. In general, the existing
compiler frameworks do not perform automated stabilization of the
program-abstractions and instead leave it to the optimization writer to deal
with the complex task of identifying the relevant program-abstractions to
stabilize, the points where the stabilization is to be performed, and the exact
procedure of stabilization. Similarly, adding new analyses becomes a challenge,
as one has to understand which of the existing optimizations may impact the
newly added program-abstractions. In this paper, we address these challenges by
providing the design and implementation of a novel generalized compiler-design
framework called Homeostasis.
Homeostasis can be used to guarantee the trigger of automated stabilization
of relevant program-abstractions under every possible transformation of the
program. Interestingly, Homeostasis provides such guarantees not only for the
existing optimization passes but also for any future optimizations that may be
added to the framework. We have implemented our proposed ideas in the IMOP
compiler framework, for OpenMP C programs. We present an evaluation which shows
that Homeostasis is efficient and easy to use.
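To make the stabilization idea concrete, here is a toy pass-manager sketch in Python that conservatively marks all program-abstractions stale after every transformation and recomputes them lazily; Homeostasis itself works inside the IMOP framework at a much finer granularity, so this is only a conceptual illustration:

```python
class PassManager:
    """Toy sketch of automated stabilization: transformations invalidate
    every registered abstraction, and a stale abstraction is recomputed
    from the current program the next time a pass asks for it."""
    def __init__(self, program):
        self.program = program
        self._analyses = {}    # name -> analysis function
        self._cache = {}       # name -> (abstraction, is_consistent)

    def register_analysis(self, name, fn):
        self._analyses[name] = fn

    def get(self, name):
        abstraction, consistent = self._cache.get(name, (None, False))
        if not consistent:     # stabilize: recompute from the program
            abstraction = self._analyses[name](self.program)
            self._cache[name] = (abstraction, True)
        return abstraction

    def transform(self, transformation):
        self.program = transformation(self.program)
        # Conservatively mark every abstraction inconsistent; downstream
        # passes only ever see stabilized abstractions via get().
        self._cache = {k: (v, False) for k, (v, _) in self._cache.items()}
```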
|
To properly counter the Deepfake phenomenon, new Deepfake detection algorithms
need to be designed: the misuse of this formidable A.I. technology has serious
consequences for the private life of every person involved. The state of the
art abounds with solutions that use deep neural networks to detect fake
multimedia content, but unfortunately these algorithms appear to be neither
generalizable nor explainable. However, traces left by Generative Adversarial
Network (GAN) engines during the creation of Deepfakes can be detected by
analyzing ad-hoc frequencies. For this reason, in this paper we propose a new
pipeline able to detect the so-called GAN Specific Frequencies (GSF), which
represent a unique fingerprint of the different generative architectures.
Anomalous frequencies are detected by employing the Discrete Cosine Transform
(DCT); the $\beta$ statistics inferred from the distribution of the AC
coefficients are the key to recognizing data generated by a GAN engine.
Robustness tests were also carried out to demonstrate the effectiveness of the
technique under different attacks on the images, such as JPEG compression,
mirroring, rotation, scaling, and the addition of randomly sized rectangles.
Experiments demonstrate that the method is innovative, exceeds the state of the
art, and also provides many insights in terms of explainability.
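A hedged sketch of the kind of block-DCT AC-coefficient analysis the pipeline builds on; the robust scale statistic below stands in for the paper's $\beta$ parameters, and the block size and summary choices are our assumptions:

```python
import numpy as np
from scipy.fft import dctn

def ac_band_statistics(gray, block=8):
    """Collect the 63 AC coefficients from each 8x8 block DCT of a
    grayscale image and summarize every AC band by a robust scale
    statistic (a stand-in for the paper's beta parameters); anomalous
    bands would flag GAN Specific Frequencies."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block      # crop to whole blocks
    blocks = [gray[i:i + block, j:j + block]
              for i in range(0, h, block) for j in range(0, w, block)]
    coeffs = np.stack([dctn(b, norm="ortho") for b in blocks])
    ac = coeffs.reshape(len(coeffs), -1)[:, 1:]   # drop the DC term
    return np.median(np.abs(ac), axis=0)          # length-63 feature vector
```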
|
We propose a self-supervised framework to learn scene representations from
video that are automatically delineated into background, characters, and their
animations. Our method capitalizes on moving characters being equivariant with
respect to their transformation across frames and the background being constant
with respect to that same transformation. After training, we can manipulate
image encodings in real time to create unseen combinations of the delineated
components. As far as we know, we are the first method to perform unsupervised
extraction and synthesis of interpretable background, character, and animation.
We demonstrate results on three datasets: Moving MNIST with backgrounds, 2D
video game sprites, and Fashion Modeling.
|
In a recent study [Phys. Rev. X 10, 021042 (2020)], we showed using
large-scale density matrix renormalization group (DMRG) simulations on infinite
cylinders that the triangular lattice Hubbard model has a chiral spin liquid
phase. In this work, we introduce hopping anisotropy in the model, making one
of the three distinct bonds on the lattice stronger or weaker compared with the
other two. We implement the anisotropy in two inequivalent ways, one which
respects the mirror symmetry of the cylinder and one which breaks this
symmetry. In the full range of anisotropy, from the square lattice to weakly
coupled one-dimensional chains, we find a variety of phases. Near the isotropic
limit we find the three phases identified in our previous work: metal, chiral
spin liquid, and 120$^\circ$ spiral order; we note that a recent paper suggests
the apparently metallic phase may actually be a Luther-Emery liquid, which
would also be in agreement with our results. When one bond is weakened by a
relatively small amount, the ground state quickly becomes the square lattice
N\'{e}el order. When one bond is strengthened, the story is much less clear,
with the phases that we find depending on the orientation of the anisotropy and
on the cylinder circumference. While our work is to our knowledge the first
DMRG study of the anisotropic triangular lattice Hubbard model, the overall
phase diagram we find is broadly consistent with that found previously using
other methods, such as variational Monte Carlo and dynamical mean field theory.
|
Efficient and low-complexity beamforming design is an important element of
satellite communication systems with mobile receivers equipped with phased
arrays. In this work, we apply the simultaneous perturbation stochastic
approximation (SPSA) method with successive sub-array selection for finding the
optimal antenna weights that maximize the received signal power at a uniform
planar array (UPA). The proposed algorithms are based on iterative gradient
approximation by injecting some carefully designed perturbations on the
parameters to be estimated. Additionally, the successive sub-array selection
technique enhances the performance of SPSA-based algorithms and makes them less
sensitive to the initial beam direction. Simulation results show that our
proposed algorithms can achieve efficient and reliable performance even when
the initial beam direction is not well aligned with the satellite direction.
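A minimal SPSA sketch for the weight-update step described above (without the successive sub-array selection); the gain schedules are the standard SPSA choices, and the real-valued parameterization of the antenna weights is an illustrative simplification:

```python
import numpy as np

def spsa_maximize(power, w0, a=0.1, c=0.05, n_iter=500, seed=0):
    """Plain SPSA ascent on the received-power objective power(w).
    The gradient is approximated from two measurements with a random
    +/-1 perturbation; gains follow the standard 0.602/0.101 schedules."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    for k in range(1, n_iter + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=w.shape)
        # Two-sided measurement of the objective along the perturbation.
        g_hat = (power(w + ck * delta) - power(w - ck * delta)) / (2 * ck * delta)
        w += ak * g_hat          # ascent step: maximize received power
    return w
```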
|
We present cosmological parameter measurements from the effective field
theory-based full-shape analysis of the power spectrum of emission line
galaxies (ELGs). First, we perform extensive tests on simulations and determine
appropriate scale cuts for the perturbative description of the ELG power
spectrum. We study in detail non-linear redshift-space distortions
(``fingers-of-God'') for this sample and show that they are somewhat weaker
than those of luminous red galaxies. This difference is not significant for
current data, but may become important for future surveys like Euclid/DESI.
Then we analyze recent measurements of the ELG power spectrum from the extended
Baryon Oscillation Spectroscopic Survey (eBOSS) within the
$\nu\Lambda$CDM model. Combined with the BBN baryon density prior, the ELG pre-
and post-reconstructed power spectra alone constrain the matter density
$\Omega_m=0.257_{-0.045}^{+0.031}$, the current mass fluctuation amplitude
$\sigma_8=0.571_{-0.076}^{+0.052}$, and the Hubble constant
$H_0=84.5_{-7}^{+5.8}$ km/s/Mpc (all at 68\% CL). Combining with other
full-shape and BAO data we measure $\Omega_m=0.327_{-0.016}^{+0.014}$,
$\sigma_8=0.69_{-0.045}^{+0.038}$, and $H_0=68.6_{-1.1}^{+1}$ km/s/Mpc. The
total neutrino mass is constrained to be $M_{\rm tot}<0.63$ eV (95\% CL) from
the BBN, full-shape and BAO data only. Finally, we discuss the apparent $\sim
3\sigma$ discrepancy in the inferred clustering amplitude between our full
shape analysis and the cosmic microwave background data.
|
As the killer application of blockchain technology, blockchain-based payments
have attracted extensive attention ranging from hobbyists to corporates to
regulatory bodies. Blockchain facilitates fast, secure, and cross-border
payments without the need for intermediaries such as banks. Because blockchain
technology is still emerging, systematically organised knowledge providing a
holistic and comprehensive view on designing payment applications that use
blockchain is yet to be established. If such knowledge could be established in
the form of a set of blockchain-specific patterns, architects could use those
patterns in designing a payment application that leverages blockchain.
Therefore, in this paper, we first identify a token's lifecycle and then
present 12 patterns that cover critical aspects in enabling the state
transitions of a token in blockchain-based payment applications. The lifecycle
and the annotated patterns provide a payment-focused systematic view of system
interactions and a guide to effective use of the patterns.
|
In the star formation process, the vital impact of environmental factors such
as feedback from massive stars and stellar density on the form of the initial
mass function (IMF) at the low-mass end is yet to be understood. Hence a
systematic, highly sensitive observational analysis of a sample of regions
under diverse environmental conditions is essential. We analyse the IMF of
eight young clusters ($<$5 Myr), namely IC1848-West, IC1848-East, NGC 1893, NGC
2244, NGC 2362, NGC 6611, Stock 8 and Cygnus OB2, which are located at the
Galactocentric distance ($R_g$) range $\sim$6-12 kpc along with nearby cluster
IC348 using deep near-IR photometry and Gaia DR2. These clusters are embedded
in massive stellar environments of radiation strength $log(L_{FUV}/L_{\odot})$
$\sim$2.6 to 6.8, $log(L_{EUV})$ $\sim$42.2 to 50.85 photons/s, with stellar
density in the range of $\sim$170 - 1220 stars/pc$^2$. After structural
analysis and field decontamination we obtain an unbiased, uniformly sensitive
sample of pre-main-sequence members of the clusters down to the brown-dwarf
regime.
The lognormal fit to the IMF of nine clusters gives the mean characteristic
mass ($m_c$) and $\sigma$ of 0.32$\pm$0.02 $M_\odot$ and 0.47$\pm$0.02,
respectively. We compare the IMF with that of low- and high-mass clusters
across the Milky Way. We also check for any systematic variation with respect
to the radiation field strength, stellar density as well with $R_g$. We
conclude that there is no strong evidence for environmental effect in the
underlying form of the IMF of these clusters.
|
A hypothetical pseudo-scalar particle axion, which is an immediate result of
the Peccei-Quinn solution to the strong CP problem, may couple to gluons and
lead to an oscillating electric dipole moment (EDM) of fundamental particles.
This paper proposes a novel method of probing the axion-induced oscillating EDM
in storage rings, using a radiofrequency (RF) Wien Filter. A Wien Filter
operating at the sideband frequencies of the axion and $g-2$ frequencies,
$f_\text{axion} \pm f_{g-2}$, generates a spin resonance in the presence of an
oscillating EDM, as confirmed both by an analytical estimation of the spin
equations and independently by simulation. A brief systematic study also shows
that this method is unlikely to be limited by Wien Filter misalignment issues.
|
Pre-trained representations are becoming crucial for many NLP and perception
tasks. While representation learning in NLP has transitioned to training on raw
text without human annotations, visual and vision-language representations
still rely heavily on curated training datasets that are expensive or require
expert knowledge. For vision applications, representations are mostly learned
using datasets with explicit class labels such as ImageNet or OpenImages. For
vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all
involve a non-trivial data collection (and cleaning) process. This costly
curation process limits the size of datasets and hence hinders the scaling of
trained models. In this paper, we leverage a noisy dataset of over one billion
image alt-text pairs, obtained without expensive filtering or post-processing
steps in the Conceptual Captions dataset. A simple dual-encoder architecture
learns to align visual and language representations of the image and text pairs
using a contrastive loss. We show that the scale of our corpus can make up for
its noise and leads to state-of-the-art representations even with such a simple
learning scheme. Our visual representation achieves strong performance when
transferred to classification tasks such as ImageNet and VTAB. The aligned
visual and language representations enable zero-shot image classification and
also set new state-of-the-art results on Flickr30K and MSCOCO image-text
retrieval benchmarks, even when compared with more sophisticated
cross-attention models. The representations also enable cross-modality search
with complex text and text + image queries.
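The dual-encoder contrastive alignment can be summarized in a few lines; the sketch below shows the symmetric InfoNCE-style objective over in-batch pairs, with the encoders and the paper's learnable-temperature handling omitted:

```python
import torch
import torch.nn.functional as F

def alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss for a dual encoder: matched
    image/alt-text pairs are positives, all other in-batch pairs are
    negatives. temperature=0.07 is an illustrative fixed value."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Average the image-to-text and text-to-image cross-entropies.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```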
|
We compute the differential yield for quark anti-quark dijet production in
high-energy electron-proton and electron-nucleus collisions at small $x$ as a
function of the relative momentum $\boldsymbol{P}_\perp$ and momentum imbalance
$\boldsymbol{k}_\perp$ of the dijet system for different photon virtualities
$Q^2$, and study the elliptic and quadrangular anisotropies in the relative
angle between $\boldsymbol{P}_\perp$ and $\boldsymbol{k}_\perp$. We review and
extend the analysis in [1], which compared the results of the Color Glass
Condensate (CGC) with those obtained using the transverse momentum dependent
(TMD) framework. In particular, we include in our comparison the improved TMD
(ITMD) framework, which resums kinematic power corrections of the ratio
$k_\perp$ over the hard scale $Q_\perp$. By comparing ITMD and CGC results we
are able to isolate genuine higher saturation contributions in the ratio
$Q_s/Q_\perp$ which are resummed only in the CGC. These saturation
contributions are in addition to those in the Weizs\"acker-Williams gluon TMD
that appear in powers of $Q_s/k_\perp$. We provide numerical estimates of these
contributions for inclusive dijet production at the future Electron-Ion
Collider, and identify kinematic windows where they can become relevant in the
measurement of dijet and dihadron azimuthal correlations. We argue that such
measurements will allow the detailed experimental study of both kinematic power
corrections and genuine gluon saturation effects.
|
With the increasing popularity of calcium imaging data in neuroscience
research, methods for analyzing calcium trace data are critical to address
various questions. The observed calcium traces are either analyzed directly or
deconvolved to spike trains to infer neuronal activities. When both approaches
are applicable, it is unclear whether deconvolving calcium traces is a
necessary step. In this article, we compare the performance of using calcium
traces or their deconvolved spike trains for three common analyses: clustering,
principal component analysis (PCA), and population decoding. Our simulations
and applications to real data suggest that the estimated spike data outperform
calcium trace data for both clustering and PCA. Although calcium trace data
show higher predictability than spike data at each time point, spike history or
cumulative spike counts are comparable to or better than calcium traces in
population decoding.
|
A cosmological model with an energy transfer between dark matter (DM) and
dark energy (DE) can give rise to comparable energy densities at the present
epoch. The present work deals with the perturbation analysis, parameter
estimation and Bayesian evidence calculation of interacting models with
dynamical coupling parameter that determines the strength of the interaction.
We have considered two cases, where the interaction is a more recent phenomenon
and where the interaction is a phenomenon in the distant past. Moreover, we
have considered the quintessence DE equation of state with
Chevallier-Polarski-Linder (CPL) parametrisation and energy flow from DM to DE.
Using the current observational datasets like the cosmic microwave background
(CMB), baryon acoustic oscillation (BAO), Type Ia Supernovae (SNe Ia) and
redshift-space distortions (RSD), we have estimated the mean values of the
parameters. Using the perturbation analysis and Bayesian evidence calculation,
we have shown that interaction present as a brief early phenomenon is preferred
over a recent interaction.
|
Besides the spirals induced by the Lindblad resonances, planets can generate
a family of tightly wound spirals through buoyancy resonances. The excitation
of buoyancy resonances depends on the thermal relaxation timescale of the gas.
By computing timescales of various processes associated with thermal
relaxation, namely, radiation, diffusion, and gas-dust collision, we show that
the thermal relaxation in protoplanetary disks' surface layers
($Z/R\gtrsim0.1$) and outer disks ($R\gtrsim100$ au) is limited by infrequent
gas-dust collisions. The use of an isothermal equation of state or rapid cooling,
common in protoplanetary disk simulations, is therefore not justified. Using
three-dimensional hydrodynamic simulations, we show that the collision-limited
slow thermal relaxation provides favorable conditions for buoyancy resonances
to develop. Buoyancy resonances produce predominantly vertical motions, whose
magnitude at the $^{12}$CO emission surface is of the order of $100~{\rm m~s}^{-1}$
for Jovian-mass planets, sufficiently large to be detected using molecular line
observations with ALMA. We generate synthetic observations and describe
characteristic features of buoyancy resonances in Keplerian-subtracted moment
maps and velocity channel maps. Based on the morphology and magnitude of the
perturbation, we propose that the tightly wound spirals observed in TW Hya
could be driven by a (sub-)Jovian-mass planet at 90 au. We discuss how
non-Keplerian motions driven by buoyancy resonances can be distinguished from
those driven by other origins. We argue that observations of multiple lines
tracing different heights, with sufficiently high spatial/spectral resolution
and sensitivity to separate the emission arising from the near and far sides of
the disk, will help constrain the origin of non-Keplerian motions.
|
Mt. Abu Faint Object Spectrograph and Camera - Pathfinder (MFOSC-P) is an
imager-spectrograph developed for the Physical Research Laboratory (PRL) 1.2m
telescope at Gurushikhar, Mt. Abu, India. MFOSC-P is based on a focal reducer
concept and provides seeing limited imaging (with a sampling of 3.3 pixels per
arc-second) in Bessell's B, V, R, I and narrow-band H-$\alpha$ filters. The
instrument uses three plane reflection gratings, covering the spectral range of
4500-8500$\AA$, with three different resolutions of 500, 1000, and 2000 around
their central wavelengths. MFOSC-P was conceived as a pathfinder instrument for
a next-generation instrument on the PRL's 2.5m telescope which is coming up at
Mt. Abu. The instrument was developed during 2015-2019 and successfully
commissioned on the PRL 1.2m telescope in February 2019. The designed
performance has been verified with laboratory characterization tests and on-sky
commissioning observations. Different science programs covering a range of
objects have been executed with MFOSC-P since then, e.g., spectroscopy of
M-dwarfs, novae $\&$ symbiotic systems, and detection of H-$\alpha$ emission in
star-forming regions. MFOSC-P presents a novel design and cost-effective way to
develop a FOSC (Faint Object Spectrograph and Camera) type of instrument on a
shorter development time-scale. The design and development methodology
presented here is most suitable in helping the small aperture telescope
community develop such a versatile instrument, thereby diversifying the science
programs of such observatories.
|
The size of drops generated by the capillary-driven disintegration of liquid
ligaments plays a fundamental role in several important natural phenomena,
ranging from heat and mass transfer at the ocean-atmosphere interface to
pathogen transmission. The inherent non-linearity of the equations governing
the ligament destabilization leads to significant differences in the resulting
drop sizes, owing to small fluctuations in the myriad initial conditions.
Previous experiments and simulations reveal a variety of drop size
distributions, corresponding to competing underlying physical interpretations.
Here, we perform numerical simulations of individual ligaments, the
deterministic breakup of which is triggered by random initial surface
corrugations. Stochasticity is incorporated by simulating a large ensemble of
such ligaments, each realization corresponding to a random but unique initial
configuration. The resulting probability distributions reveal three stable drop
sizes, generated via a sequence of two distinct stages of breakup. The
probability of the large sizes is described by volume-weighted Poisson and
Log-Normal distributions for the first and second breakup stages, respectively.
The study demonstrates a precisely controllable and reproducible framework,
which can be employed to investigate the mechanisms responsible for the
polydispersity in drop sizes found in complex fluid fragmentation scenarios.
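As a small illustration of fitting one of the candidate distributions named above, the sketch below fits a log-normal to a synthetic sample of drop diameters with SciPy; the data and parameter values are invented for demonstration only.

```python
# Illustrative log-normal fit to a synthetic sample of drop diameters,
# one of the candidate distributions discussed above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
drop_diameters = rng.lognormal(mean=np.log(50e-6), sigma=0.4, size=2000)

# Fix loc=0 so the fit is a standard two-parameter log-normal.
shape, loc, scale = stats.lognorm.fit(drop_diameters, floc=0)
print("fitted sigma:", shape, " median diameter (m):", scale)
```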
|
Single image super-resolution (SISR) deals with a fundamental problem of
upsampling a low-resolution (LR) image to its high-resolution (HR) version.
The last few years have witnessed impressive progress propelled by deep
learning methods. However, one critical challenge faced by existing methods is
to strike a sweet spot between deep model complexity and resulting SISR
quality. This paper
addresses this pain point by proposing a linearly-assembled pixel-adaptive
regression network (LAPAR), which casts the direct LR to HR mapping learning
into a linear coefficient regression task over a dictionary of multiple
predefined filter bases. Such a parametric representation renders our model
highly lightweight and easy to optimize while achieving state-of-the-art
results on SISR benchmarks. Moreover, based on the same idea, LAPAR is extended
to tackle other restoration tasks, e.g., image denoising and JPEG image
deblocking, and again, yields strong performance. The code is available at
https://github.com/dvlab-research/Simple-SR.
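The core idea, regressing per-pixel coefficients over a fixed dictionary of filter bases and linearly assembling the filtered images, can be sketched as follows. The bases and the coefficient maps here are random placeholders standing in for the learned components; this is not the released implementation.

```python
# Sketch of the linearly-assembled, pixel-adaptive idea: per-pixel
# coefficients over a fixed dictionary of filter bases weight the filtered
# images. Bases and coefficients below are placeholders.
import numpy as np
from scipy.ndimage import convolve

def assemble(img, bases, coeffs):
    # img: (H, W); bases: (K, k, k); coeffs: (K, H, W), e.g. from a CNN.
    responses = np.stack([convolve(img, b, mode="reflect") for b in bases])
    return (coeffs * responses).sum(axis=0)

H, W, K = 32, 32, 4
img = np.random.rand(H, W)               # stand-in for a bicubic-upsampled LR image
bases = np.random.randn(K, 5, 5)         # predefined filter dictionary
coeffs = np.full((K, H, W), 1.0 / K)     # stand-in for regressed coefficients
hr = assemble(img, bases, coeffs)
```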
|
In this paper, we examine the state of the art of quantum computing and
analyze its potential effects in scientific computing and cybersecurity.
Additionally, a non-technical description of the mechanics of this form of
computing is provided to give the reader a better understanding of the
arguments presented. The purpose of this study is not only to increase
awareness of this nascent technology, but also to serve as a general reference
guide for any individual wishing to study other applications of quantum
computing in areas that include finance, chemistry, and data science. Lastly,
an educated argument is provided in the discussion section that addresses the
implications this form of computing will have in the main areas examined.
|
Although pain is frequent in old age, older adults are often undertreated for
pain. This is especially the case for long-term care residents with moderate to
severe dementia who cannot report their pain because of cognitive impairments
that accompany dementia. Nursing staff acknowledge the challenges of
effectively recognizing and managing pain in long-term care facilities due to
a lack of human resources and, sometimes, the expertise to use validated pain
assessment approaches on a regular basis. Vision-based ambient monitoring will
allow for frequent automated assessments so care staff could be automatically
notified when signs of pain are displayed. However, existing computer vision
techniques for pain detection are not validated on faces of older adults or
people with dementia, and this population is not represented in existing facial
expression datasets of pain. We present the first fully automated vision-based
technique validated on a dementia cohort. Our contributions are threefold.
First, we develop a deep learning-based computer vision system for detecting
painful facial expressions on a video dataset that is collected unobtrusively
from older adult participants with and without dementia. Second, we introduce a
pairwise comparative inference method that calibrates to each person and is
sensitive to changes in facial expression while using training data more
efficiently than sequence models. Third, we introduce a fast contrastive
training method that improves cross-dataset performance. Our pain estimation
model outperforms baselines by a wide margin, especially when evaluated on
faces of people with dementia. The pre-trained model and demo code are available at
https://github.com/TaatiTeam/pain_detection_demo
|
Commercial electricity production from marine renewable sources is becoming a
necessity at a global scale. Offshore wind and solar resources can be combined
to reduce construction and maintenance costs. In this respect, the aim of this
study is two-fold: i) analyse offshore wind and solar resources and their
variability in the Mediterranean Sea at the annual and seasonal scales based on
the recently published ERA5 reanalysis dataset; and ii) perform a preliminary
assessment of some important features of complementarity, synergy, and
availability of the examined resources using an event-based probabilistic
approach. A robust coefficient of variation is introduced to examine the
variability of each resource and a joint coefficient of variation is
implemented for the first time to evaluate the joint variability of offshore
wind and solar potential. The association between the resources is examined by
introducing a robust measure of correlation, along with the Pearson's r and
Kendall's tau correlation coefficients, and the corresponding results are
compared. Several metrics are used to examine the degree of complementarity
affected by variability and intermittency issues. Areas with high potential and
low variability for both resources include the Aegean and Alboran seas, while
significant synergy (over 52%) is identified in the gulfs of Lion, Gabes and
Sidra, Aegean Sea and northern Cyprus Isl. The advantage of combining these two
resources is highlighted at selected locations in terms of the monthly energy
production.
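A minimal sketch of the variability and association metrics named above is given below; the median/MAD-based "robust" coefficient of variation and the joint measure applied to the combined output are illustrative choices, and the paper's exact definitions may differ.

```python
# Sketch of variability/association metrics for two toy resource series.
import numpy as np
from scipy.stats import pearsonr, kendalltau

rng = np.random.default_rng(1)
wind = rng.weibull(2.0, 1000) * 8.0      # toy wind-resource series
solar = rng.gamma(3.0, 60.0, 1000)       # toy solar-resource series

def robust_cv(x):
    # Median/MAD-based coefficient of variation (one possible robust choice).
    mad = np.median(np.abs(x - np.median(x)))
    return 1.4826 * mad / np.median(x)

print("robust CV (wind, solar):", robust_cv(wind), robust_cv(solar))
print("joint CV of combined output:", robust_cv(wind + solar))
print("Pearson r:", pearsonr(wind, solar)[0])
print("Kendall tau:", kendalltau(wind, solar)[0])
```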
|
In the classroom environment, search tools are the means for students to
access Web resources. The perspectives of students, researchers, and industry
practitioners lead the ongoing research debate in this area. In this article,
we argue in favor of incorporating a new voice into this debate: teachers. We
showcase the value of involving teachers in all aspects related to the design
of search tools for the classroom, from the beginning to the end. Driven by
our research experience designing, developing, and evaluating new tools to
support children's information discovery in the classroom, we share insights on
the role of the experts-in-the-loop, i.e., teachers who provide the connection
between search tools and students. And yes, in our case, always involving a
teacher as a research partner.
|
We present a model that jointly learns the denotations of words together with
their groundings using a truth-conditional semantics. Our model builds on the
neurosymbolic approach of Mao et al. (2019), learning to ground objects in the
CLEVR dataset (Johnson et al., 2017) using a novel parallel attention
mechanism. The model achieves state-of-the-art performance on visual question
answering, learning to detect and ground objects with question performance as
the only training signal. We also show that the model is able to learn flexible
non-canonical groundings just by adjusting answers to questions in the training
set.
|
In this paper, we deal with random attractors for dynamical systems forced by
a deterministic noise. These kinds of systems are modeled as skew products where
the dynamics of the forcing process are described by the base transformation.
Here, we consider skew products over the Bernoulli shift with the unit interval
fiber. We study the geometric structure of maximal attractors, the orbit
stability and stability of mixing of these skew products under random
perturbations of the fiber maps. We show that there exists an open set
$\mathcal{U}$ in the space of such skew products so that any skew product
belonging to this set admits an attractor which is either a continuous
invariant graph or a bony graph attractor. These skew products have negative
fiber Lyapunov exponents and their fiber maps are non-uniformly contracting,
hence the non-uniform contraction rates are measured by Lyapunov exponents.
Furthermore, each skew product of $\mathcal{U}$ admits an invariant ergodic
measure whose support is contained in that attractor. Additionally, we show
that the invariant measure for the perturbed system is continuous in the
Hutchinson metric.
|
In the three-dimensional anti-de Sitter spacetime/two-dimensional conformal
field theory correspondence, we derive the imaginary-time path-integral of a
non-relativistic particle in the anti-de Sitter bulk space, which is dual to
the ground state, from the holographic principle. This derivation is based on
(i) the author's previous argument that the holographic principle asserts that
the anti-de Sitter bulk space as a holographic tensor network after
classicalization has as many stochastic classicalized spin degrees of freedom
as there are sites and (ii) the reinterpretation of the Euclidean action of a
free particle as the action of classicalized spins.
|
Explainable deep learning models are advantageous in many situations. Prior
work mostly provides unimodal explanations through post-hoc approaches that
are not part of the original system design. Such explanation mechanisms also
ignore useful textual information present in images. In this paper, we propose
MTXNet, an
end-to-end trainable multimodal architecture to generate multimodal
explanations, which focuses on the text in the image. We curate a novel dataset
TextVQA-X, containing ground truth visual and multi-reference textual
explanations that can be leveraged during both training and evaluation. We then
quantitatively show that training with multimodal explanations complements
model performance and surpasses unimodal baselines by up to 7% in CIDEr scores
and 2% in IoU. More importantly, we demonstrate that the multimodal
explanations are consistent with human interpretations, help justify the
models' decision, and provide useful insights to help diagnose an incorrect
prediction. Finally, we describe a real-world e-commerce application for using
the generated multimodal explanations.
|
The fractional Dzherbashian-Nersesian operator is considered, and three
well-known fractional-order derivatives, namely the Riemann-Liouville, Caputo,
and Hilfer derivatives, are shown to be special cases of it. The expression
for the Laplace transform of the fractional Dzherbashian-Nersesian operator is
constructed. Inverse problems of recovering space-dependent and time-dependent
source terms of a time fractional diffusion equation with involution and
involving fractional Dzherbashian-Nersesian operator are considered. The
results on existence and uniqueness for the solutions of inverse problems are
established. The results obtained here generalize several known results.
|
Two-dimensional multilinked structures can benefit aerial robots in both
maneuvering and manipulation because of their deformation ability. However,
certain types of singular forms must be avoided during deformation. Hence, an
additional 1-Degree-of-Freedom (DoF) vectorable propeller is employed in this
work to overcome singular forms by properly changing the thrust direction. In
this paper, we first extend modeling and control methods from our previous
works for an under-actuated model whose thrust forces are not unidirectional.
We then propose a planning method for the vectoring angles to solve the
singularity by maximizing the controllability under arbitrary robot forms.
Finally, we demonstrate the feasibility of the proposed methods by experiments
where a quad-type model is used to perform trajectory tracking under
challenging forms, such as a line-shape form, and deformation passing through
these challenging forms.
|
This paper demonstrates the applicability of the combination of concurrent
learning as a tool for parameter estimation and non-parametric Gaussian Process
for online disturbance learning. A control law is developed by using both
techniques sequentially in the context of feedback linearization. The
concurrent learning algorithm estimates the system parameters of the
structured uncertainty without requiring persistent excitation; these
estimates are then used in the design of the feedback linearization law.
Next, a non-parametric Gaussian
Process learns unstructured uncertainty. The closed-loop system stability for
the nth-order system is proven using the Lyapunov stability theorem. The
simulation results show that the tracking error is minimized (i) when true
values of model parameters have not been provided, (ii) in the presence of
disturbances introduced once the parameters have converged to their true values
and (iii) when system parameters have not converged to their true values in the
presence of disturbances.
|
Five-dimensional $\mathcal{N}=1$ theories with gauge group $U(N)$, $SU(N)$,
$USp(2N)$ and $SO(N)$ are studied at large rank through localization on a large
sphere. The phase diagram of theories with fundamental hypermultiplets is
universal and characterized by third order phase transitions, with the
exception of $U(N)$, which shows both second and third order transitions. The
phase diagram of theories with adjoint or (anti-)symmetric hypermultiplets is
also determined and found to be universal. Moreover, Wilson loops in
fundamental and antisymmetric representations of any rank are analyzed in this
limit. Quiver theories are discussed as well. All the results substantiate the
$\mathcal{F}$-theorem.
|
We compute the bigraded homotopy ring of the Borel $C_2$-equivariant
$K(1)$-local sphere. This captures many of the patterns seen among
$\text{Im}~J$-type elements in $\mathbb{R}$-motivic and $C_2$-equivariant
stable stems. In addition, it provides a streamlined approach to understanding
the $K(1)$-localizations of stunted projective spaces.
|
Software debugging, and program repair are among the most time-consuming and
labor-intensive tasks in software engineering that would benefit a lot from
automation. In this paper, we propose a novel automated program repair approach
based on CodeBERT, which is a transformer-based neural architecture pre-trained
on large corpus of source code. We fine-tune our model on the ManySStuBs4J
small and large datasets to automatically generate the fix codes. The results
show that our technique accurately predicts the fixed codes implemented by the
developers in 19-72% of the cases, depending on the type of datasets, in less
than a second per bug. We also observe that our method can generate
varied-length fixes (short and long) and can fix different types of bugs, even
if only a few instances of those types of bugs exist in the training dataset.
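A hedged sketch of this kind of fine-tuning with the Hugging Face transformers library is shown below, framing CodeBERT as both encoder and decoder of a sequence-to-sequence model; the paper's exact architecture, data preparation, and hyperparameters may differ, and the buggy/fixed pair is invented.

```python
# Hedged sketch: fine-tuning CodeBERT for fix generation as an
# encoder-decoder; not necessarily the paper's exact setup.
from transformers import AutoTokenizer, EncoderDecoderModel

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "microsoft/codebert-base", "microsoft/codebert-base")
model.config.decoder_start_token_id = tok.cls_token_id
model.config.pad_token_id = tok.pad_token_id

buggy = "if (x = 1) { return y; }"      # illustrative buggy/fixed pair
fixed = "if (x == 1) { return y; }"
enc = tok(buggy, return_tensors="pt")
labels = tok(fixed, return_tensors="pt").input_ids

out = model(input_ids=enc.input_ids,
            attention_mask=enc.attention_mask, labels=labels)
out.loss.backward()                     # one training step (optimizer omitted)
```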
|
We propose a novel text-analytic approach for incorporating textual
information into structural economic models and apply this to study the effects
of tax news. We first develop a novel semi-supervised two-step topic model that
automatically extracts specific information regarding future tax policy changes
from text. We also propose an approach for transforming such textual
information into an economically meaningful time series to be included in a
structural econometric model as a variable of interest or an instrument. We apply
our method to study the effects of fiscal foresight, in particular the
informational content in speeches of the U.S. president about future tax
reforms, and find that our semi-supervised topic model can successfully extract
information about the direction of tax changes. The extracted information
predicts (exogenous) future tax changes and contains signals that are not
present in previously considered (narrative) measures of (exogenous) tax
changes. We find that tax news triggers a significant yet delayed response in
output.
|
Volkov states are exact solutions of the Dirac equation in the presence of an
arbitrary plane wave. Volkov states, as well as free photon states, are not
stable in the presence of the background plane-wave field but "decay" as
electrons/positrons can emit photons and photons can transform into
electron-positron pairs. By using the solutions of the corresponding
Schwinger-Dyson equations within the locally-constant field approximation, we
compute the probabilities of nonlinear single Compton scattering and nonlinear
Breit-Wheeler pair production by including the effects of the decay of
electron, positron, and photon states. As a result, we find that the
probabilities of these processes can be expressed as the integral over the
light-cone time of the known probabilities valid for stable states per unit of
light-cone time times a light-cone time-dependent exponential damping function
for each interacting particle. The exponential function for an incoming
(outgoing) either electron/positron or photon at each light-cone time
corresponds to the total probability that either the electron/positron emits a
photon via nonlinear Compton scattering or the photon transforms into an
electron-positron pair via nonlinear Breit-Wheeler pair production until that
light-cone time (from that light-cone time on). It is interesting that the
exponential damping terms depend not only on the particles' momenta but also on
their spin (for electrons/positrons) and polarization (for photons). This
additional dependence on the discrete quantum numbers prevents the application
of the electron/positron spin and photon polarization sum-rules, which
significantly simplify the computations in the perturbative regime.
|
General Relativity is an extremely successful theory, at least for weak
gravitational fields; however, it breaks down at very high energies, such as
at the initial singularity. Quantum Gravity is expected to
provide more physical insights concerning this open question. Indeed, one
alternative scenario to the Big Bang, that manages to completely avoid the
singularity, is offered by Loop Quantum Cosmology (LQC), which predicts that
the Universe transitions from a collapse to an expansion through a bounce. In this
work, we use metric $f(R)$ gravity to reproduce the modified Friedmann
equations which have been obtained in the context of modified loop quantum
cosmologies. To achieve this, we apply an order reduction method to the $f(R)$
field equations, and obtain covariant effective actions that lead to a bounce,
for specific models of modified LQC, considering matter as a scalar field.
|
Operations typically used in machine learning algorithms (e.g., adds and
softmax) can be implemented by compact analog circuits. Analog
Application-Specific Integrated Circuit (ASIC) designs that implement these
algorithms using techniques such as charge sharing circuits and subthreshold
transistors achieve very high power efficiencies. With the recent advances in
deep learning algorithms, focus has shifted to hardware digital accelerator
designs that implement the prevalent matrix-vector multiplication operations.
Power in these designs is usually dominated by the memory access power of
off-chip DRAM needed for storing the network weights and activations. Emerging
dense non-volatile memory technologies can help to provide on-chip memory, and
analog circuits are well suited to implement the needed matrix-vector
multiplication operations coupled with in-memory computing approaches. This
paper presents a brief review of analog designs that implement various machine
learning algorithms. It then presents an outlook for the use of analog
circuits in low-power deep network accelerators suitable for edge or tiny
machine learning applications.
|
We address the problem of tensor decomposition in application to
direction-of-arrival (DOA) estimation for transmit beamspace (TB)
multiple-input multiple-output (MIMO) radar. A general 4-order tensor model
that enables computationally efficient DOA estimation is designed. Whereas
other tensor decomposition-based methods treat all factor matrices as
arbitrary, the essence of the proposed DOA estimation method is to fully
exploit the Vandermonde structure of the factor matrices to take advantage of
the shift-invariance between and within different subarrays. Specifically, the
received signal of TB MIMO radar is expressed as a 4-order tensor. Depending on
the target Doppler shifts, the constructed tensor is reshaped into two distinct
3-order tensors. A computationally efficient tensor decomposition method is
proposed to decompose the Vandermonde factor matrices. The generators of the
Vandermonde factor matrices are computed to estimate the phase rotations
between subarrays, which can be utilized as a look-up table for finding target
DOA. It is further shown that our proposed method can be used in a more general
scenario where the subarray structures can be arbitrary but identical. The
proposed DOA estimation method requires no prior information about the tensor
rank and is guaranteed to achieve a precise decomposition result. Simulation
results illustrate the performance improvement of the proposed DOA estimation
method as compared to conventional DOA estimation techniques for TB MIMO Radar.
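The shift-invariance property exploited above can be illustrated on a single Vandermonde steering vector: the generator, and hence the phase rotation that indexes the DOA look-up, is recoverable from ratios of adjacent entries. This toy omits the tensor decomposition itself; the array geometry and noise level are assumed for demonstration.

```python
# Toy illustration of recovering the generator of a Vandermonde steering
# vector a(z) = [1, z, z^2, ...]^T via shift-invariance; not the full
# tensor-decomposition pipeline.
import numpy as np

M, d = 8, 0.5                        # sensors, spacing in wavelengths
theta_true = np.deg2rad(20.0)
z_true = np.exp(2j * np.pi * d * np.sin(theta_true))
a = z_true ** np.arange(M)           # Vandermonde steering vector
a_noisy = a + 0.05 * (np.random.randn(M) + 1j * np.random.randn(M))

# Shift-invariance between subvectors [0..M-2] and [1..M-1] gives z.
z_hat = np.mean(a_noisy[1:] / a_noisy[:-1])
theta_hat = np.arcsin(np.angle(z_hat) / (2 * np.pi * d))
print("estimated DOA (deg):", np.rad2deg(theta_hat))
```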
|
In this article, we prove a series of integral formulae for a codimension-one
foliated sub-Riemannian manifold, i.e., a Riemannian manifold $(M,g)$ equipped
with a distribution ${\mathcal D}=T{\mathcal F}\oplus\,{\rm span}(N)$, where
${\mathcal F}$ is a foliation of $M$ and $N$ a unit vector field $g$-orthogonal
to ${\mathcal F}$. Our integral formulas involve $r$th mean curvatures of
${\mathcal F}$, Newton transformations of the shape operator of ${\mathcal F}$
with respect to $N$ and the curvature tensor of induced connection on
${\mathcal D}$ and generalize some known integral formulas (due to
Brito-Langevin-Rosenberg, Andrzejewski-Walczak and the author) for
codimension-one foliations. We apply our formulas to sub-Riemannian manifolds
with restrictions on the curvature and extrinsic geometry of a foliation.
|
In this paper, we will prove a finite dimensional approximation scheme for
the Wiener measure on closed Riemannian manifolds, establishing a
generalization, for $L_{1}$-functionals, of the approach followed by Andersson
and Driver in [2]. This scheme is motivated by the measure theoretic techniques
of [15]. Moreover, we will embed the concept of the stochastic line integral in
this scheme. This concept enables some applications of path integration in
Riemannian manifolds that provide an alternative formulation of classical
geometric concepts, bringing an original point of view to them.
|
Transparency - the provision of information about what personal data is
collected for which purposes, how long it is stored, or to which parties it is
transferred - is one of the core privacy principles underlying regulations such
as the GDPR. Technical approaches for implementing transparency in practice
are, however, only rarely considered. In this paper, we present a novel
approach for doing so in current, RESTful application architectures and in line
with prevailing agile and DevOps-driven practices. For this purpose, we
introduce 1) a transparency-focused extension of OpenAPI specifications that
allows individual service descriptions to be enriched with transparency-related
annotations in a bottom-up fashion and 2) a set of higher-order tools for
aggregating respective information across multiple, interdependent services and
for coherently integrating our approach into automated CI/CD-pipelines.
Together, these building blocks pave the way for providing transparency
information that is more specific and at the same time better reflects the
actual implementation givens within complex service architectures than current,
overly broad privacy statements.
|
We present the open-source pyratbay framework for exoplanet atmospheric
modeling, spectral synthesis, and Bayesian retrieval. The modular design of the
code allows the users to generate atmospheric 1D parametric models of the
temperature, abundances (in thermochemical equilibrium or
constant-with-altitude), and altitude profiles in hydrostatic equilibrium;
sample ExoMol and HITRAN line-by-line cross sections with custom resolving
power and line-wing cutoff values; compute emission or transmission spectra
considering cross sections from molecular line transitions, collision-induced
absorption, Rayleigh scattering, gray clouds, and alkali resonance lines; and
perform Markov chain Monte Carlo atmospheric retrievals for a given transit or
eclipse dataset. We benchmarked the pyratbay framework by reproducing
line-by-line cross-section sampling of ExoMol cross sections, producing
transmission and emission spectra consistent with petitRADTRANS models,
accurately retrieving the atmospheric properties of simulated transmission and
emission observations generated with TauREx models, and closely reproducing
Aura retrieval analyses of the space-based transmission spectrum of HD 209458b.
Finally, we present a retrieval analysis of a population of transiting
exoplanets, focusing on those observed in transmission with the HST WFC3/G141
grism. We found that this instrument alone can confidently identify when a
dataset shows H2O-absorption features; however, it cannot distinguish whether a
muted H2O feature is caused by clouds, high atmospheric metallicity, or low H2O
abundance. Our results are consistent with previous retrieval analyses. The
pyratbay code is available at PyPI (pip install pyratbay) and conda. The code
is heavily documented (https://pyratbay.readthedocs.io) and tested to provide
maximum accessibility to the community and long-term development stability.
|
Volumetric imaging by fluorescence microscopy is often limited by anisotropic
spatial resolution, in which the axial resolution is inferior to the lateral
resolution. To address this problem, here we present a deep-learning-enabled
unsupervised super-resolution technique that enhances anisotropic images in
volumetric fluorescence microscopy. In contrast to existing deep learning
approaches that require matched high-resolution target volume images, our
method greatly reduces the effort required to put it into practice, as the
training of a
network requires as little as a single 3D image stack, without a priori
knowledge of the image formation process, registration of training data, or
separate acquisition of target data. This is achieved based on the optimal
transport driven cycle-consistent generative adversarial network that learns
from an unpaired matching between high-resolution 2D images in lateral image
plane and low-resolution 2D images in the other planes. Using fluorescence
confocal microscopy and light-sheet microscopy, we demonstrate that the trained
network not only enhances axial resolution, but also restores suppressed visual
details between the imaging planes and removes imaging artifacts.
|
A crucial question in galaxy formation is what role new accretion has in star
formation. Theoretical models have predicted a wide range of correlation
strengths between halo accretion and galaxy star formation. Previously, we
presented a technique to observationally constrain this correlation strength
for isolated Milky Way-mass galaxies at $z\sim 0.12$, based on the correlation
between halo accretion and the density profile of neighbouring galaxies. By
applying this technique to both observational data from the Sloan Digital Sky
Survey and simulation data from the UniverseMachine, where we can test
different correlation strengths, we ruled out positive correlations between
dark matter accretion and recent star formation activity. In this work, we
expand our analysis by (1) applying our technique separately to red and blue
neighbouring galaxies, which trace different infall populations, (2)
correlating dark matter accretion rates with $D_{n}4000$ measurements as a
longer-term quiescence indicator than instantaneous star-formation rates, and
(3) analyzing higher-mass isolated central galaxies with $10^{11.0} <
M_*/M_\odot < 10^{11.5}$ out to $z\sim 0.18$. In all cases, our results are
consistent with non-positive correlation strengths with $\gtrsim 85$ per cent
confidence, suggesting that processes such as gas recycling dominate star
formation in massive $z=0$ galaxies.
|
Discretization of the uniform norm of functions from a given finite
dimensional subspace of continuous functions is studied. We pay special
attention to the case of trigonometric polynomials with frequencies from an
arbitrary finite set with fixed cardinality. We give two different proofs of
the fact that for any $N$-dimensional subspace of the space of continuous
functions it is sufficient to use $e^{CN}$ sample points for an accurate upper
bound for the uniform norm. Previously known results show that one cannot improve
on the exponential growth of the number of sampling points for a good
discretization theorem in the uniform norm.
Also, we prove a general result, which connects the upper bound on the number
of sampling points in the discretization theorem for the uniform norm with the
best $m$-term bilinear approximation of the Dirichlet kernel associated with
the given subspace. We illustrate application of our technique on the example
of trigonometric polynomials.
|
This paper presents a multi-lead fusion method for the accurate and automated
detection of the QRS complex location in 12 lead ECG (Electrocardiogram)
signals. The proposed multi-lead fusion method accurately delineates the QRS
complex by fusing the QRS complexes detected in the individual 12 leads. The
proposed algorithm consists of two major stages. Firstly, the QRS complex
location of each lead is detected by the single lead QRS detection algorithm.
Secondly, the multi-lead fusion algorithm combines the information of the QRS
complex locations obtained in each of the 12 leads. The performance of the
proposed algorithm is improved in terms of Sensitivity and Positive
Predictivity by discarding the false positives. The proposed method is
validated on ECG signals with various artifacts and inter- and intra-subject
variations. The performance of the proposed method is validated on the long
duration recorded ECG signals of St. Petersburg INCART database with
Sensitivity of 99.87% and Positive Predictivity of 99.96% and on the short
duration recorded ECG signals of CSE (Common Standards for Electrocardiography)
multi-lead database with 100% Sensitivity and 99.13% Positive Predictivity.
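The two-stage structure, per-lead detection followed by cross-lead fusion, can be sketched as follows on surrogate data; the detector, thresholds, and majority-vote fusion rule here are illustrative stand-ins for the paper's algorithm.

```python
# Toy sketch: detect QRS candidates per lead, then fuse detections across
# leads by keeping locations supported by a majority of leads.
import numpy as np
from scipy.signal import find_peaks

fs = 360                                         # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
leads = [np.sin(2 * np.pi * 1.2 * t) ** 31 +     # spiky surrogate ECG
         0.05 * np.random.randn(t.size) for _ in range(12)]

per_lead = [find_peaks(x, height=0.5, distance=int(0.25 * fs))[0]
            for x in leads]

# Fuse: accept a candidate if >= 7 of 12 leads have a detection within
# a +/-40 ms window around it (illustrative rule).
tol = int(0.04 * fs)
fused = [p for p in per_lead[0]
         if sum(np.any(np.abs(q - p) <= tol) for q in per_lead) >= 7]
print(len(fused), "fused QRS locations")
```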
|
This work presents a hybrid modeling approach to data-driven learning and
representation of unknown physical processes and closure parameterizations.
These hybrid models are suitable for situations where the mechanistic
description of dynamics of some variables is unknown, but reasonably accurate
observational data can be obtained for the evolution of the state of the
system. In this work, we propose machine learning to account for missing
physics and then data assimilation to correct the prediction. In particular, we
devise an effective methodology based on a recurrent neural network to model
the unknown dynamics. A long short-term memory (LSTM) based correction term is
added to the predictive model in order to take into account hidden physics.
Since LSTM introduces a black-box approach for the unknown part of the model,
we investigate whether the proposed hybrid neural-physical model can be further
corrected through a sequential data assimilation step. We apply this framework
to the weakly nonlinear Lorenz model that displays quasiperiodic oscillations,
the highly nonlinear Lorenz model, and the two-scale Lorenz model. The hybrid
neural-physics model yields accurate results for the weakly nonlinear Lorenz
model with the predicted state close to the true Lorenz model trajectory. For
the highly nonlinear Lorenz model and the two-scale Lorenz model, the hybrid
neural-physics model deviates from the true state due to the accumulation of
prediction error from one time step to the next time step. The ensemble Kalman
filter approach takes into account the prediction error and updates the
diverged prediction using available observations in order to provide a more
accurate state estimate. The successful synergistic integration of neural
network and data assimilation for a low-dimensional system shows the potential
benefits of the proposed hybrid-neural physics model for complex systems.
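A minimal version of the ensemble Kalman filter analysis step used to correct the diverged forecast is sketched below; the state dimension, observation operator, and noise levels are generic placeholders rather than the paper's configuration.

```python
# Minimal stochastic ensemble Kalman filter (EnKF) analysis step.
import numpy as np

def enkf_update(X, y, H, R, rng):
    # X: (n, N) forecast ensemble; y: (m,) observation; H: (m, n); R: (m, m)
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)
    Pf = A @ A.T / (N - 1)                          # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
    # Perturbed observations for each member (stochastic EnKF).
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 20)) + np.array([[1.0], [0.0], [-1.0]])
H = np.eye(3)                                       # observe full state (toy)
R = 0.1 * np.eye(3)
Xa = enkf_update(X, np.array([1.2, 0.1, -0.9]), H, R, rng)
```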
|
The unique electronic and magnetic properties of lanthanide molecular
complexes place them at the forefront of the race towards high-temperature
single-ion magnets and magnetic quantum bits. The design of compounds of this
class has so far been almost exclusively driven by static crystal field
considerations, with emphasis on increasing the magnetic anisotropy barrier.
This guideline has now reached its maximum potential and new progress can only
come from a deeper understanding of spin-phonon relaxation mechanisms. In this
work we compute relaxation times fully ab initio and unveil the nature of all
spin-phonon relaxation mechanisms, namely Orbach and Raman pathways, in a
prototypical Dy single-ion magnet. Computational predictions are in agreement
with the experimental determination of spin relaxation time and crystal field
anisotropy, and show that Raman relaxation, dominating at low temperature, is
triggered by low-energy phonons and little affected by further engineering of
crystal field axiality. A comprehensive analysis of spin-phonon coupling
mechanism reveals that molecular vibrations beyond the ion's first coordination
shell can also assume a prominent role in spin relaxation through an
electrostatic polarization effect. Therefore, this work shows the way forward
in the field by delivering a novel and complete set of chemically-sound design
rules tackling every aspect of spin relaxation at any temperature.
|
The evaluation of nucleation rates from molecular dynamics trajectories is
hampered by the slow nucleation time scale and impact of finite size effects.
Here, we show that accurate nucleation rates can be obtained in a very general
fashion relying only on the free energy barrier, transition state theory (TST),
and a simple dynamical correction for diffusive recrossing. In this setup, the
time scale problem is overcome by using enhanced sampling methods, in casu
metadynamics, whereas the impact of finite size effects can be naturally
circumvented by reconstructing the free energy surface from an appropriate
ensemble. Approximations from classical nucleation theory are avoided. We
demonstrate the accuracy of the approach by calculating macroscopic rates of
droplet nucleation from argon vapor, spanning sixteen orders of magnitude and
in excellent agreement with literature results, all from simulations of very
small (512 atom) systems.
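The rate expression described above reduces to a barrier term times a dynamical correction; the schematic below shows the arithmetic with placeholder numbers (the attempt frequency and transmission coefficient are assumed, not the paper's values).

```python
# Schematic TST rate from a free energy barrier, with a transmission
# coefficient correcting for diffusive recrossing. Numbers are placeholders.
import numpy as np

kB_T = 1.0      # energies in units of kB*T
dF = 20.0       # free energy barrier from enhanced sampling (kB*T), assumed
nu0 = 1e12      # attempt-frequency prefactor (1/s), assumed
kappa = 0.1     # dynamical recrossing correction, assumed

rate = kappa * nu0 * np.exp(-dF / kB_T)
print(f"corrected nucleation rate: {rate:.3e} 1/s")
```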
|
We report the largest broadband terahertz (THz) polarizer based on a flexible
ultra-transparent cyclic olefin copolymer (COC). The COC polarizers were
fabricated by nanoimprint soft lithography with the lowest reported pitch of 2
or 3 micrometers, a depth of 3 micrometers, and a sub-wavelength Au bilayer
wire grid. Fourier Transform Infrared spectroscopy over the large range of
0.9-20 THz shows the transmittance of bulk materials such as doped and undoped
Si and polymers. COC polarizers present more than double the transmission
intensity and a larger transmitting band when compared to Si. COC polarizers
present superior performance when compared to Si polarizers, with extinction
ratios at least 4.4 dB higher, a performance supported by numerical
simulations. The fabricated Si and COC polarizers show a larger operation gap
when compared to a commercial polarizer. Fabrication of these polarizers can
be easily up-scaled, which meets functional requirements for many THz devices
and applications, such as high transparency, lower-cost fabrication, and
flexible material.
|
The Agda Universal Algebra Library (UALib) is a library of types and programs
(theorems and proofs) we developed to formalize the foundations of universal
algebra in dependent type theory using the Agda programming language and proof
assistant. The UALib includes a substantial collection of definitions,
theorems, and proofs from general algebra and equational logic, including many
examples that exhibit the power of inductive and dependent types for
representing and reasoning about relations, algebraic structures, and
equational theories. In this paper we discuss the logical foundations on which
the library is built, and describe the types defined in the first 13 modules of
the library. Special attention is given to aspects of the library that seem
most interesting or challenging from a type theory or mathematical foundations
perspective.
|
Accurate and explainable health event predictions are becoming crucial for
healthcare providers to develop care plans for patients. The availability of
electronic health records (EHR) has enabled machine learning advances in
providing these predictions. However, many deep learning based methods are not
satisfactory in solving several key challenges: 1) effectively utilizing
disease domain knowledge; 2) collaboratively learning representations of
patients and diseases; and 3) incorporating unstructured text. To address these
issues, we propose a collaborative graph learning model to explore
patient-disease interactions and medical domain knowledge. Our solution is able
to capture structural features of both patients and diseases. The proposed
model also utilizes unstructured text data by employing an attention regulation
strategy and then integrates attentive text features into a sequential learning
process. We conduct extensive experiments on two important healthcare problems
to show the competitive prediction performance of the proposed method compared
with various state-of-the-art models. We also confirm the effectiveness of
learned representations and model interpretability by a set of ablation and
case studies.
|
Trajectory planning is commonly used as part of a local planner in autonomous
driving. This paper considers the problem of planning a
continuous-curvature-rate trajectory between fixed start and goal states that
minimizes a tunable trade-off between passenger comfort and travel time. The
problem is an instance of infinite dimensional optimization over two continuous
functions: a path, and a velocity profile. We propose a simplification of this
problem that facilitates the discretization of both functions. This paper also
proposes a method to quickly generate minimal-length paths between start and
goal states based on a single tuning parameter: the second derivative of
curvature. Furthermore, we discretize the set of velocity profiles along a
given path into a selection of acceleration way-points along the path.
Gradient-descent is then employed to minimize cost over feasible choices of the
second derivative of curvature, and acceleration way-points, resulting in a
method that alternately solves for the path and the velocity profile in an iterative
fashion. Numerical examples are provided to illustrate the benefits of the
proposed methods.
|
Data sharing by researchers is a centerpiece of Open Science principles and
scientific progress. For a sample of 6019 researchers, we analyze the
extent/frequency of their data sharing. Specifically, we analyze its relationship with the
following four variables: how much they value data citations, the extent to
which their data-sharing activities are formally recognized, their perceptions
of whether sufficient credit is awarded for data sharing, and the reported
extent to which data citations motivate their data sharing. In addition, we
analyze the extent to which researchers have reused openly accessible data, as
well as how data sharing varies by professional age-cohort, and its
relationship to the value they place on data citations. Furthermore, we
consider most of the explanatory variables simultaneously by estimating a
multiple linear regression that predicts the extent/frequency of their data
sharing. We use the dataset of the State of Open Data Survey 2019 by Springer
Nature and Digital Science. The results allow us to conclude that a desire for
recognition/credit is a major incentive for data sharing. Thus, the possibility
of receiving data citations is highly valued when sharing data, especially
among younger researchers, irrespective of the frequency with which it is
practiced. Finally, the practice of data sharing was found to be more prevalent
at late research career stages, despite this being when citations are less
valued and have a lower motivational impact. This could be due to the fact that
later-career researchers may benefit less from keeping their data private.
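A hedged sketch of the kind of multiple linear regression described above is given below with statsmodels; the column names and the synthetic outcome are invented stand-ins for the State of Open Data survey variables.

```python
# Illustrative multiple linear regression predicting data-sharing frequency
# from survey-style covariates; data and names are synthetic stand-ins.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "values_citations": rng.integers(1, 6, n),     # Likert-type scales
    "formal_recognition": rng.integers(1, 6, n),
    "perceived_credit": rng.integers(1, 6, n),
    "citation_motivation": rng.integers(1, 6, n),
    "career_stage": rng.integers(1, 4, n),
})
y = (0.4 * df["values_citations"] + 0.3 * df["career_stage"]
     + rng.standard_normal(n))                     # synthetic outcome

model = sm.OLS(y, sm.add_constant(df)).fit()
print(model.summary())
```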
|
Visual object tracking, which represents a major interest in the image
processing field, has facilitated numerous real-world applications. Among them,
equipping unmanned aerial vehicles (UAVs) with real-time robust visual trackers
for all-day aerial maneuvers is currently attracting increasing attention and
has remarkably broadened the scope of applications of object tracking. However,
prior tracking methods have merely focused on robust tracking in
well-illuminated scenes, while ignoring trackers' capabilities to be deployed
in the dark. In darkness, the conditions can be more complex and harsh, easily
leading to inferior robustness or even tracking failure. To this end, this
work proposes a novel discriminative correlation filter-based tracker with
illumination-adaptive and anti-dark capability, namely ADTrack. ADTrack first
exploits image illuminance information to enable adaptability of the model to
the given light conditions. Then, by virtue of an efficient and effective image
enhancer, ADTrack carries out image pretreatment, where a target-aware mask is
generated. Benefiting from the mask, ADTrack solves a dual regression problem
where dual filters, i.e., the context filter and the target-focused filter,
are trained with mutual constraints. Thus, ADTrack is able to maintain
continuously favorable performance in all-day conditions. Besides, this work
also constructed a UAV nighttime tracking benchmark, UAVDark135, comprising
more than 125k manually annotated frames, which is the very first UAV
nighttime tracking benchmark. Exhaustive experiments are conducted on
authoritative daytime benchmarks, i.e., UAV123 10fps and DTB70, and the newly
built dark benchmark UAVDark135, which have validated the superiority of
ADTrack in both bright and dark conditions on a single CPU.
|
In this paper, we propose a novel framework to translate a portrait
photo-face into an anime appearance. Our aim is to synthesize anime-faces which
are style-consistent with a given reference anime-face. However, unlike typical
translation tasks, such anime-face translation is challenging due to complex
variations of appearances among anime-faces. Existing methods often fail to
transfer the styles of reference anime-faces, or introduce noticeable
artifacts/distortions in the local shapes of their generated faces. We propose
AniGAN, a novel GAN-based translator that synthesizes high-quality anime-faces.
Specifically, a new generator architecture is proposed to simultaneously
transfer color/texture styles and transform local facial shapes into anime-like
counterparts based on the style of a reference anime-face, while preserving the
global structure of the source photo-face. We propose a double-branch
discriminator to learn both domain-specific distributions and domain-shared
distributions, helping generate visually pleasing anime-faces and effectively
mitigate artifacts. Extensive experiments on selfie2anime and a new face2anime
dataset qualitatively and quantitatively demonstrate the superiority of our
method over state-of-the-art methods. The new dataset is available at
https://github.com/bing-li-ai/AniGAN .
|
Double-parton scattering is investigated using events with a Z boson and
jets. The Z boson is reconstructed using only the dimuon channel. The
measurements are performed with proton-proton collision data recorded by the
CMS experiment at the LHC at $\sqrt{s} =$ 13 TeV, corresponding to an
integrated luminosity of 35.9 fb$^{-1}$ collected in the year 2016.
Differential cross sections of Z + $\geq$ 1 jet and Z + $\geq$ 2 jets are
measured with transverse momentum of the jets above 20 GeV and pseudorapidity
$|\eta| < 2.4$. Several distributions with sensitivity to double-parton
scattering effects are measured as functions of the angle and the transverse
momentum imbalance between the Z boson and the jets. The measured distributions
are compared with predictions from several event generators with different
hadronization models and different parameter settings for multiparton
interactions. The measured distributions show a dependence on the hadronization
and multiparton interaction simulation parameters, and are important input for
future improvements of the simulations.
|
In this study, we investigate the metastable behavior of Metropolis-type
Glauber dynamics associated with the Blume-Capel model with zero chemical
potential and zero external field at very low temperatures. The corresponding
analyses for the same model with zero chemical potential and positive small
external field were performed in [Cirillo and Nardi, Journal of Statistical
Physics, 150: 1080-1114, 2013] and [Landim and Lemire, Journal of Statistical
Physics, 164: 346-376, 2016]. We obtain both large deviation-type and
potential-theoretic results on the metastable behavior in our setting. To this
end, we perform a highly thorough investigation of the energy landscape, where
it is revealed that no critical configurations exist; instead, a massive flat
plateau of saddle configurations resides therein.
|
A Multi-Phase Transport (AMPT) model is used to investigate the
longitudinal broadening of the transverse momentum two-particle correlator
$C_{2}\left(\Delta\eta,\Delta\varphi\right)$, and its utility to extract the
specific shear viscosity, $\eta/s$, of the quark-gluon plasma formed in
ultra-relativistic heavy ion collisions. The results from these model studies
indicate that the longitudinal broadening of
$C_{2}\left(\Delta\eta,\Delta\varphi\right)$ is sensitive to the magnitude of
$\eta/s$. However, reliable extraction of the longitudinal broadening of the
correlator requires the suppression of possible self-correlations associated
with the definition of the collision centrality.
|
We show that for $\Pi_2$-properties of second or third order arithmetic as
formalized in appropriate natural signatures the apparently weaker notion of
forcibility overlaps with the standard notion of consistency (assuming large
cardinal axioms).
Among such $\Pi_2$-properties we mention: the negation of the Continuum
hypothesis, Souslin Hypothesis, the negation of Whitehead's conjecture on free
groups, the non-existence of outer automorphisms for the Calkin algebra, etc.
In particular, this gives an a posteriori explanation of the success that
forcing (and forcing axioms) has met in producing models of such properties.
Our main results relate generic absoluteness theorems for second order
arithmetic, Woodin's axiom $(*)$ and forcing axioms to Robinson's notion of
model companionship (as applied to set theory). We also briefly outline in
which ways these results provide an argument to refute the Continuum
hypothesis.
|
In Part I of this study, we obtained the ray (group) velocity gradients and
Hessians with respect to the ray locations, directions and the anisotropic
model parameters, at nodal points along ray trajectories, considering general
anisotropic (triclinic) media and both quasi-compressional and quasi-shear
waves. Ray velocity derivatives for anisotropic media with higher symmetries
were considered as particular cases of general anisotropy. In this part, Part II,
we follow the computational workflow presented in Part I, formulating the ray
velocity derivatives directly for polar anisotropic (transverse isotropy with
tilted axis of symmetry, TTI) media for the coupled qP and qSV waves and for SH
waves. The acoustic approximation for qP waves is considered a special case.
The medium properties, normally specified at regular three-dimensional fine
grid points, are the five material parameters: the axial compressional and
shear velocities and the three Thomsen parameters, and two geometric
parameters: the polar angles defining the local direction of the medium
symmetry axis. All the parameters are assumed spatially (smoothly) varying,
where their gradients and Hessians can be reliably computed. Two case examples
are considered; the first represents compacted shale/sand rocks (with positive
anellipticity) and the second, unconsolidated sand rocks with strong negative
anellipticity (manifesting a qSV triplication). The ray velocity derivatives
obtained in this part are first tested by comparing them with the corresponding
numerical (finite difference) derivatives. Additionally, we show that exactly
the same results (ray velocity derivatives) can be obtained if we transform the
given polar anisotropic model parameters (five material and two geometric) into
the twenty-one stiffness tensor components of a general anisotropic (triclinic)
medium, and apply the theory derived in Part I.
|
We provide a simple analysis of the big-bang nucleosynthesis (BBN)
sensitivity to the light dark matter (DM) generated by the thermal freeze-in
mechanism. It is shown that the ratio of the effective neutrino number shift
$\Delta N_{\nu}$ over the DM relic density $\omega\equiv \Omega h^2$, denoted
by $R_\chi\equiv\Delta N_\nu/\omega$, cancels the decaying particle mass and
the feeble coupling, rendering therefore a simple visualization of $\Delta
N_{\nu}$ at the BBN epoch in terms of the DM mass. This property leads one to
conclude that the shift with a sensitivity of $\Delta N_{\nu}\simeq
\mathcal{O}(0.1)$ cannot originate from a single warm DM under the
Lyman-$\alpha$ forest constraints. For the cold-plus-warm DM scenarios where
the Lyman-$\alpha$ constraints are diluted, the ratio $R_\chi$ can be
potentially used to test the thermal freeze-in mechanism in generating a small
warm component of DM and a possible excess at the level of $\Delta
N_{\nu}\simeq \mathcal{O}(0.01)$.
|
We present a scheme for ground-state cooling of a mechanical resonator by
simultaneously coupling it to a superconducting qubit and a cavity field. The
Hamiltonian describing the hybrid system dynamics is systematically derived.
The cooling process is driven by a red-detuned ac drive on the qubit and a
laser drive on the optomechanical cavity. We have investigated cooling in the
weak and the strong coupling regimes for both individual systems, i.e.,
qubit-assisted cooling and optomechanical cooling, and compared them with the
effective hybrid cooling. It is shown that hybrid cooling is more effective
compared to the individual cooling mechanisms, and could be applied in both the
resolved and the unresolved sideband regimes.
|
Packet-traffic filtering and permit/deny rules for data packets entering
network nodes are provided by Access Control Lists (ACLs). This paper proposes
a procedure for adding a link load threshold value to the access control list
rule options, so that a rule acts on the basis of the threshold value. The
ultimate goal of this enhanced ACL is to avoid congestion in targeted
subnetworks. The link load threshold value allows the router to decide whether
packet traffic is rerouted to avoid congestion or packets are dropped on the
basis of their priorities. Rerouting packets under high traffic loads, based
on this new packet filtering procedure for congestion avoidance, will reduce
the overall packet drop ratio and over-subscription in congested subnetworks.
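A pseudocode-style sketch of the proposed threshold-augmented rule is given below; the threshold value, priority field, and action names are illustrative assumptions, not the paper's specification.

```python
# Sketch of a threshold-augmented ACL decision: forward normally under the
# load threshold; above it, reroute high-priority packets and drop the rest.
from dataclasses import dataclass

@dataclass
class Packet:
    priority: int   # higher value = more important (illustrative field)
    dest: str

LOAD_THRESHOLD = 0.8   # fraction of link capacity, assumed

def filter_packet(pkt: Packet, link_load: float, min_priority: int = 5) -> str:
    if link_load < LOAD_THRESHOLD:
        return "forward"                 # normal ACL permit path
    if pkt.priority >= min_priority:
        return "reroute"                 # divert high-priority traffic
    return "drop"                        # shed low-priority traffic

print(filter_packet(Packet(priority=7, dest="10.0.0.5"), link_load=0.9))
```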
|
The prompt emission of GRBs has been investigated for more than 50 years but
remains poorly understood. Commonly, spectral and temporal profiles of
$\gamma$-ray emission are analysed. However, they are insufficient for a
complete picture of GRB-related physics. The addition of polarization
measurements provides invaluable information towards the understanding of these
astrophysical sources. In recent years, dedicated polarimeters, such as POLAR
and GAP, were built. The former observed low levels of polarization as
well as a temporal evolution of the polarization angle. It was understood that
a larger sample of GRB polarization measurements and time resolved studies are
necessary to constrain theoretical models. The POLAR-2 mission aims to address
this by increasing the effective area by an order of magnitude compared to
POLAR. POLAR-2 is manifested for launch on board the China Space Station in
2024 and will operate for at least 2 years. Insight from POLAR will aid in the
improvement of the overall POLAR-2 design. Major improvements (compared to
POLAR) will include the replacement of multi-anode PMTs (MAPMTs) with SiPMs,
increase in sensitive volume and further technological upgrades. POLAR-2 is
projected to measure about 50 GRBs per year with equal or better quality
compared to the best seen by POLAR. The instrument design, preliminary results
and anticipated scientific potential of this mission will be discussed.
|