We introduce a new logarithmic-type model of a wave-like plate equation with a nonlocal logarithmic damping mechanism. We consider the Cauchy problem for this new model in the whole space, and study the asymptotic profile and optimal decay rates of solutions in the L^{2} sense as time goes to infinity. The operator L considered in this paper was first introduced to dissipate the solutions of the wave equation in a 2020 paper by Charao-Ikehata. We discuss the asymptotic behavior of solutions to this Cauchy problem as time goes to infinity, and in particular we classify the behavior of the solutions into three types according to the regularity of the initial data: diffusion-like, wave-like, and a combination of both.
|
This paper details speckle observations of binary stars taken at the Lowell
Discovery Telescope, the WIYN Telescope, and the Gemini telescopes between 2016
January and 2019 September. The observations taken at Gemini and Lowell were
done with the Differential Speckle Survey Instrument (DSSI), and those done at
WIYN were taken with the successor instrument to DSSI at that site, the
NN-EXPLORE Exoplanet Star and Speckle Imager (NESSI). In total, we present 378
observations of 178 systems, and we show that the measurement precision for the combined data set is ~2 mas in separation, ~1-2 degrees in position angle depending on the separation, and ~0.1 magnitudes in magnitude difference. Together with data already in the literature, these new
results permit 25 visual orbits and one spectroscopic-visual orbit to be
calculated for the first time. In the case of the spectroscopic-visual
analysis, which is done on the trinary star HD 173093, we calculate masses with
precision of better than 1% for all three stars in that system. Twenty-one of
the visual orbits calculated have a K dwarf as the primary star; we add these
to the known orbits of K dwarf primary stars and discuss the basic orbital
properties of these stars at this stage. Although incomplete, the data that
exist so far indicate that binaries with K dwarf primaries tend not to have
low-eccentricity orbits at separations of one to a few tens of AU, that is, on
solar-system scales.
|
For many real-world classification problems, e.g., sentiment classification,
most existing machine learning methods are biased towards the majority class
when the Imbalance Ratio (IR) is high. To address this problem, we propose a
set convolution (SetConv) operation and an episodic training strategy to
extract a single representative for each class, so that classifiers can later
be trained on a balanced class distribution. We prove that the proposed algorithm is invariant to the order of its inputs, and experiments on multiple large-scale benchmark text datasets show the superiority of our framework compared to other state-of-the-art (SOTA) methods.
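To make the permutation-invariance claim concrete, the following is a minimal PyTorch sketch of a set-aggregation layer that collapses one class's instances into a single representative. The weighting scheme and the names (SetAggregator, score, proj) are illustrative assumptions, not the authors' exact SetConv operator.

```python
import torch
import torch.nn as nn

class SetAggregator(nn.Module):
    """Illustrative permutation-invariant class-representative extractor
    (a stand-in for SetConv; the weighting scheme is an assumption)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # learned per-instance weights
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                          # x: (n_instances, dim), one class
        w = torch.softmax(self.score(x), dim=0)    # weights sum to 1 over the set
        return self.proj((w * x).sum(dim=0))       # weighted sum -> one representative

# Permutation-invariance check: shuffling the instances leaves the output unchanged.
agg = SetAggregator(dim=8)
x = torch.randn(5, 8)
perm = torch.randperm(5)
print(torch.allclose(agg(x), agg(x[perm]), atol=1e-6))
```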
|
In this paper we prove the convergence of solutions to discrete models for
binary waveguide arrays toward those of their formal continuum limit, for which
we also show the existence of localized standing waves. This work rigorously
justifies formal arguments and numerical simulations present in the Physics
literature.
|
What would be the effect of locally poking a static scene? We present an
approach that learns natural-looking global articulations caused by a local manipulation at the pixel level. Training requires only videos of moving objects but no information about the underlying manipulation of the physical scene. Our
generative model learns to infer natural object dynamics as a response to user
interaction and learns about the interrelations between different object body
regions. Given a static image of an object and a local poking of a pixel, the
approach then predicts how the object would deform over time. In contrast to
existing work on video prediction, we do not synthesize arbitrary realistic
videos but enable local interactive control of the deformation. Our model is
not restricted to particular object categories and can transfer dynamics onto
novel unseen object instances. Extensive experiments on diverse objects
demonstrate the effectiveness of our approach compared to common video
prediction frameworks. The project page is available at https://bit.ly/3cxfA2L .
|
We propose a straightforward implementation of the phenomenon of diffractive
focusing with uniform atomic Bose-Einstein condensates. Both analytical and numerical methods not only illustrate the influence of the atom-atom
interaction on the focusing factor and the focus time, but also allow us to
derive the optimal conditions for observing focusing of this type in the case
of interacting matter waves.
|
Consider the first order differential system given by
\begin{equation*}
\begin{array}{l}
\dot{x}= y, \qquad \dot{y}= -x+a(1-y^{2n})y, \end{array}
\end{equation*} where $a$ is a real parameter and the dots denote derivatives
with respect to the time $t$. This system is known as the generalized Rayleigh system and it appears, for instance, in the modeling of diabatic chemical processes through a constant-area duct, where the effect of adding or rejecting heat is considered. In this paper we characterize the global dynamics of this
generalized Rayleigh system. In particular we prove the existence of a unique
limit cycle when the parameter $a\ne 0$.
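As a quick numerical illustration of that statement (with illustrative values $a=0.5$ and $n=2$, not taken from the paper), the sketch below integrates the system from one initial condition inside and one outside the cycle; both settle onto the same late-time oscillation amplitude.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rayleigh(t, state, a=0.5, n=2):
    """Generalized Rayleigh system: x' = y, y' = -x + a(1 - y^(2n)) y."""
    x, y = state
    return [y, -x + a * (1.0 - y**(2 * n)) * y]

for x0 in ([0.05, 0.0], [3.0, 0.0]):
    sol = solve_ivp(rayleigh, (0.0, 100.0), x0, max_step=0.05)
    # the amplitude over the last quarter of the run approximates the limit cycle's
    print(np.max(np.abs(sol.y[1, 3 * sol.y.shape[1] // 4:])))
```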
|
The ALICE Collaboration reports the first fully-corrected measurements of the
$N$-subjettiness observable for track-based jets in heavy-ion collisions. This
study is performed using data recorded in pp and Pb$-$Pb collisions at
centre-of-mass energies of $\sqrt{s} = 7$ TeV and $\sqrt{s_{\rm NN}} = 2.76$
TeV, respectively. In particular, the ratio of 2-subjettiness to 1-subjettiness,
$\tau_{2}/\tau_{1}$, which is sensitive to the rate of two-pronged jet
substructure, is presented. Energy loss of jets traversing the strongly
interacting medium in heavy-ion collisions is expected to change the rate of
two-pronged substructure relative to vacuum. The results are presented for jets
with a resolution parameter of $R = 0.4$ and charged jet transverse momentum of
$40 \leq p_{\rm T,\rm jet} \leq 60$ GeV/$c$, which constitute a larger jet
resolution and lower jet transverse momentum interval than previous
measurements in heavy-ion collisions. This has been achieved by utilising a
semi-inclusive hadron-jet coincidence technique to suppress the larger jet
combinatorial background in this kinematic region. No significant modification
of the $\tau_{2}/\tau_{1}$ observable for track-based jets in Pb$-$Pb
collisions is observed relative to vacuum PYTHIA6 and PYTHIA8 references at the
same collision energy. The measurements of $\tau_{2}/\tau_{1}$, together with
the splitting aperture angle $\Delta R$, are also performed in pp collisions at
$\sqrt{s}=7$ TeV for inclusive jets. These results are compared with PYTHIA
calculations at $\sqrt{s}=7$ TeV, in order to validate the model as a vacuum
reference for the Pb$-$Pb centre-of-mass energy. The PYTHIA references for
$\tau_{2}/\tau_{1}$ are shifted to larger values compared to the measurement in
pp collisions. This hints at a reduction in the rate of two-pronged jets in
Pb$-$Pb collisions compared to pp collisions.
|
We present an ALMA 1.3 mm (Band 6) continuum survey of lensed submillimeter
galaxies (SMGs) at $z = 1.0$-$3.2$ with an angular resolution of $\sim0.2''$.
These galaxies were uncovered by the Herschel Lensing Survey (HLS), and feature
exceptionally bright far-infrared continuum emission ($S_\mathrm{peak} \gtrsim
90$ mJy) owing to their lensing magnification. We detect 29 sources in 20
fields of massive galaxy clusters with ALMA. Using both the Spitzer/IRAC
(3.6/4.5 $\mathrm{\mu m}$) and ALMA data, we have successfully modeled the
surface brightness profiles of 26 sources in the rest-frame near- and
far-infrared. Similar to previous studies, we find the median dust-to-stellar
continuum size ratio to be small ($R_\mathrm{e,dust}/R_\mathrm{e,star} =
0.38\pm0.14$) for the observed SMGs, indicating that star formation is
centrally concentrated. This is, however, not the case for two spatially
extended main-sequence SMGs with a low surface brightness at 1.3 mm ($\lesssim
0.1$ mJy arcsec$^{-2}$), in which the star formation is distributed over the
entire galaxy ($R_\mathrm{e,dust}/R_\mathrm{e,star}>1$). As a whole, our SMG
sample shows a tight anti-correlation between
($R_\mathrm{e,dust}/R_\mathrm{e,star}$) and far-infrared surface brightness
($\Sigma_\mathrm{IR}$) over a factor of $\simeq$ 1000 in $\Sigma_\mathrm{IR}$.
This indicates that SMGs with less vigorous star formation (i.e., lower
$\Sigma_\mathrm{IR}$) lack a central starburst and are likely to retain a broader spatial distribution of star formation over the whole galaxy (i.e., larger
$R_\mathrm{e,dust}/R_\mathrm{e,star}$). The same trend can be reproduced with
cosmological simulations as a result of central starburst and potentially
subsequent "inside-out" quenching, which likely accounts for the emergence of
compact quiescent galaxies at $z\sim2$.
|
Single layer Pb on top of (111) surfaces of group IV semiconductors hosts
charge density wave and superconductivity depending on the coverage and on the
substrate. These systems are normally considered to be experimental
realizations of single band Hubbard models and their properties are mostly
investigated using lattice models with frozen structural degrees of freedom,
although the reliability of this approximation is unclear. Here, we consider
the case of Pb/Ge(111) at 1/3 coverage, for which surface X-ray diffraction and
ARPES data are available. By performing first principles calculations, we
demonstrate that the non-local exchange between Pb and the substrate drives the
system into a $3\times 3$ charge density wave. The electronic structure of this
charge ordered phase is mainly determined by two effects: the magnitude of the
Pb distortion and the large spin-orbit coupling. Finally, we show that the
effect applies also to the $3\times 3$ phase of Pb/Si(111) where the
Pb-substrate exchange interaction increases the bandwidth by more than a factor of 1.5 with respect to DFT+U, in better agreement with STS data. The delicate interplay between substrate, structural, and electronic degrees of freedom invalidates the widespread interpretation in the literature that considers these compounds as physical realizations of single-band Hubbard models.
|
Unmanned aerial vehicle (UAV)-enabled wireless power transfer (WPT) has
recently emerged as a promising technique to provide sustainable energy supply
for widely distributed low-power ground devices (GDs) in large-scale wireless
networks. Compared with the energy transmitters (ETs) in conventional WPT
systems which are deployed at fixed locations, UAV-mounted aerial ETs can fly
flexibly in the three-dimensional (3D) space to charge nearby GDs more
efficiently. This paper provides a tutorial overview on UAV-enabled WPT and its
appealing applications, in particular focusing on how to exploit UAVs'
controllable mobility via their 3D trajectory design to maximize the amount of
energy transferred to all GDs in a wireless network with fairness. First, we
consider the single-UAV-enabled WPT scenario with one UAV wirelessly charging
multiple GDs at known locations. To solve the energy maximization problem in
this case, we present a general trajectory design framework consisting of three
innovative approaches to optimize the UAV trajectory, which are multi-location
hovering, successive-hover-and-fly, and time-quantization-based optimization,
respectively. Next, we consider the multi-UAV-enabled WPT scenario where
multiple UAVs cooperatively charge many GDs in a large area. Building upon the
single-UAV trajectory design, we propose two efficient schemes to jointly
optimize multiple UAVs' trajectories, based on the principles of UAV swarming
and GD clustering, respectively. Furthermore, we consider two important
extensions of UAV-enabled WPT, namely UAV-enabled wireless powered
communication networks (WPCN) and UAV-enabled wireless powered mobile edge
computing (MEC).
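As a toy illustration of the multi-location-hovering idea only (not the algorithm of this paper), one can fix a few candidate hover points, assume a simple linear energy-harvesting model, and split the hover time so as to maximize the minimum energy received across GDs; under these assumptions the allocation reduces to a small linear program. All names and numerical values below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical setup: 3 candidate hover locations, 4 ground devices (GDs).
# q[k, i] = harvested power at GD k while the UAV hovers at location i (W),
# generated here from a toy inverse-square model; values are illustrative only.
rng = np.random.default_rng(1)
dist = rng.uniform(5.0, 30.0, size=(4, 3))            # UAV-GD distances (m)
q = 1e-3 / dist**2                                     # toy harvested-power model
T = 60.0                                               # total charging period (s)

# Variables: hover times t_1..t_3 and the min-energy slack e.
# maximize e  s.t.  q @ t >= e * 1,  sum(t) = T,  t >= 0   (linprog minimizes -e)
c = np.r_[np.zeros(3), -1.0]
A_ub = np.c_[-q, np.ones(4)]                            # e - q @ t <= 0 for each GD
b_ub = np.zeros(4)
A_eq = np.r_[np.ones(3), 0.0].reshape(1, -1)
b_eq = [T]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3 + [(None, None)])
print("hover times (s):", res.x[:3], "max-min energy (J):", -res.fun)
```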
|
We explore the intrinsic dynamics of spherical shells immersed in a fluid in
the vicinity of their buckled state, through experiments and 3D axisymmetric
simulations. The results are supported by a theoretical model that accurately
describes the buckled shell as a two-variable-only oscillator. We quantify the
effective "softening" of shells above the buckling threshold, as observed in
recent experiments on interactions between encapsulated microbubbles and
acoustic waves. The main dissipation mechanism in the neighboring fluid is also
evidenced.
|
In this paper, we analyze the effect of transport infrastructure investments
in railways. As a testing ground, we use data from a new historical database
that includes annual panel data on approximately 2,400 Swedish rural
geographical areas during the period 1860-1917. We use a staggered event study
design that is robust to treatment effect heterogeneity. Importantly, we find
extremely large reduced-form effects of having access to railways. For real
nonagricultural income, the cumulative treatment effect is approximately 120%
after 30 years. Equally important, we also show that our reduced-form effect is
likely to reflect growth rather than a reorganization of existing economic
activity since we find no spillover effects between treated and untreated
regions. Specifically, our results are consistent with the big push hypothesis,
which argues that simultaneous/coordinated investment, such as large
infrastructure investment in railways, can generate economic growth if there
are strong aggregate demand externalities (e.g., Murphy et al. 1989). We use plant-level data to further corroborate this mechanism. Indeed, we find that investments in local railways dramatically, and independently of initial conditions, increase local industrial production and employment on the order of
100-300% across almost all industrial sectors.
|
The presence of relativistic electrons within the diffuse gas phase of galaxy
clusters is now well established, but their detailed origin remains unclear.
Cosmic ray protons are also expected to accumulate during the formation of
clusters and would lead to gamma-ray emission through hadronic interactions
within the thermal gas. Recently, the detection of gamma-ray emission has been
reported toward the Coma cluster with Fermi-LAT. Assuming that this gamma-ray
emission arises from hadronic interactions in the intracluster medium (ICM), we aim to explore the implications of this signal for the cosmic-ray content of the Coma cluster. We
use the MINOT software to build a physical model of the cluster and apply it to
the Fermi-LAT data. We also consider contamination from compact sources and the
impact of various systematic effects. We confirm that a significant gamma-ray
signal is observed within the characteristic radius $\theta_{500}$ of the Coma
cluster, with a test statistic TS~27 for our baseline model. The presence of a
possible point source may account for most of the observed signal. However,
this source could also correspond to the peak of the diffuse emission of the
cluster itself and extended models match the data better. We constrain the
cosmic ray to thermal energy ratio within $R_{500}$ to $X_{\rm
CRp}=1.79^{+1.11}_{-0.30}$\% and the slope of the energy spectrum of cosmic
rays to $\alpha=2.80^{+0.67}_{-0.13}$. Finally, we compute the synchrotron
emission associated with the secondary electrons produced in hadronic
interactions assuming steady state. This emission is about four times lower
than the overall observed radio signal, so that primary cosmic ray electrons or
reacceleration of secondary electrons is necessary to explain the total
emission. Assuming a hadronic origin of the signal, our results provide the first quantitative measurement of the cosmic-ray proton content in a cluster. [Abridged]
|
We study idempotent, model, and Toeplitz operators that attain the norm.
Notably, we prove that if $\mathcal{Q}$ is a backward shift invariant subspace
of the Hardy space $H^2(\mathbb{D})$, then the model operator $S_{\mathcal{Q}}$
attains its norm. Here $S_{\mathcal{Q}} = P_{\mathcal{Q}}M_z|_{\mathcal{Q}}$,
the compression of the shift $M_z$ on the Hardy space $H^2(\mathbb{D})$ to
$\mathcal{Q}$.
|
Power system simulations that extend over a time period of minutes, hours, or
even longer are called extended-term simulations. As power systems evolve into
complex systems with increasing interdependencies and richer dynamic behaviors
across a wide range of timescales, extended-term simulation is needed for many
power system analysis tasks (e.g., resilience analysis, renewable energy
integration, cascading failures), and there is an urgent need for efficient and
robust extended-term simulation approaches. The conventional approaches are
insufficient for dealing with the extended-term simulation of multi-timescale
processes. This paper proposes an extended-term simulation approach based on
the holomorphic embedding (HE) methodology. Its accuracy and computational
efficiency are backed by HE's high accuracy in event-driven simulation, larger
and adaptive time steps, and flexible switching between full-dynamic and
quasi-steady-state (QSS) models. We use the proposed extended-term simulation approach to evaluate bulk power system restoration plans, and it demonstrates satisfactory accuracy and efficiency in this complex simulation task.
|
The efficiency of the adiabatic demagnetization of the nuclear spin system (NSS) of a solid is limited if quadrupole effects are present. Nevertheless, despite a considerable quadrupole interaction, recent experiments validated the thermodynamic description of the NSS in GaAs. This suggests that the nuclear spin temperature can be used as a universal indicator of the NSS state in the presence of external perturbations. We implement this idea by analyzing the modification of the NSS temperature in response to an oscillating magnetic field at various frequencies, an approach termed warm-up spectroscopy. It is tested in an n-GaAs sample where both mechanical strain and built-in electric field may contribute to the quadrupole splitting, yielding the parameters of the electric field gradient tensors for $^{75}$As and both Ga isotopes, $^{69}$Ga and $^{71}$Ga.
|
Unmanned aerial vehicles (UAVs) play an increasingly important role in
military, public, and civilian applications, where providing connectivity to
UAVs is crucial for their real-time control, video streaming, and data
collection. Considering that cellular networks offer wide area, high speed, and
secure wireless connectivity, cellular-connected UAVs have been considered as
an appealing solution to provide UAV connectivity with enhanced reliability,
coverage, throughput, and security. Due to the nature of UAV mobility, the throughput, reliability, and end-to-end (E2E) delay of UAV communication under various flight heights, video resolutions, and transmission frequencies remain unknown. To evaluate these parameters, we develop a cellular-connected UAV testbed based on the Long Term Evolution (LTE) network with uplink video transmission and downlink control and command (CC) transmission. We also design algorithms for sending control signals and controlling the UAV. The indoor
experimental results provide fundamental insights for the cellular-connected
UAV system design from the perspective of transmission frequency, adaptability,
and link outage, respectively.
|
Tsetlin machines (TMs) are a pattern recognition approach that uses finite state machines for
learning and propositional logic to represent patterns. In addition to being
natively interpretable, they have provided competitive accuracy for various
tasks. In this paper, we increase the computing power of TMs by proposing a
first-order logic-based framework with Herbrand semantics. The resulting TM is
relational and can take advantage of logical structures appearing in natural
language, to learn rules that represent how actions and consequences are
related in the real world. The outcome is a logic program of Horn clauses,
bringing in a structured view of unstructured data. In closed-domain
question-answering, the first-order representation produces 10x more compact knowledge bases (KBs), along with an increase in answering accuracy from 94.83% to 99.48%. The approach is also robust to erroneous, missing, and superfluous
information, distilling the aspects of a text that are important for real-world
understanding.
|
Markerless motion capture and understanding of professional non-daily human
movements is an important yet unsolved task, which suffers from complex motion
patterns and severe self-occlusion, especially for the monocular setting. In
this paper, we propose SportsCap -- the first approach for simultaneously
capturing 3D human motions and understanding fine-grained actions from
challenging monocular sports video input. Our approach utilizes the semantic
and temporally structured sub-motion prior in the embedding space for motion
capture and understanding in a data-driven multi-task manner. To enable robust
capture under complex motion patterns, we propose an effective motion embedding
module to recover both the implicit motion embedding and explicit 3D motion
details via a corresponding mapping function as well as a sub-motion
classifier. Based on such hybrid motion information, we introduce a
multi-stream spatial-temporal Graph Convolutional Network (ST-GCN) to predict
the fine-grained semantic action attributes, and adopt a semantic attribute
mapping block to assemble various correlated action attributes into a
high-level action label for the overall detailed understanding of the whole
sequence, so as to enable various applications like action assessment or motion
scoring. Comprehensive experiments on both public and our proposed datasets
show that with a challenging monocular sports video input, our novel approach
not only significantly improves the accuracy of 3D human motion capture, but
also recovers accurate fine-grained semantic action attributes.
|
Methods for stochastic trace estimation often require the repeated evaluation
of expressions of the form $z^T p_n(A)z$, where $A$ is a symmetric matrix and
$p_n$ is a degree $n$ polynomial written in the standard or Chebyshev basis. We
show how to evaluate these expressions using only $\lceil n/2\rceil$
matrix-vector products, thus substantially reducing the cost of existing trace
estimation algorithms that use Chebyshev interpolation or Taylor series.
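In the standard (monomial) basis the saving comes from the identity $z^T A^{i+j} z = (A^i z)^T (A^j z)$, so only the Krylov vectors $A^k z$ for $k \le \lceil n/2\rceil$ are needed. A minimal NumPy sketch of this idea (illustrative, not the paper's code):

```python
import numpy as np

def quad_form_poly(A, z, coeffs):
    """Evaluate z^T p(A) z for p(x) = sum_k coeffs[k] x^k with a symmetric A,
    using only ceil(n/2) matrix-vector products via
    z^T A^(i+j) z = (A^i z)^T (A^j z)."""
    n = len(coeffs) - 1               # polynomial degree
    m = (n + 1) // 2                  # ceil(n/2) matrix-vector products
    V = [z]
    for _ in range(m):
        V.append(A @ V[-1])           # V[k] = A^k z
    mu = [V[min(k, m)] @ V[k - min(k, m)] for k in range(n + 1)]   # z^T A^k z
    return sum(c * m_k for c, m_k in zip(coeffs, mu))

# Check against direct evaluation on a small symmetric matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)); A = (A + A.T) / 2
z = rng.standard_normal(50)
coeffs = [1.0, -0.5, 0.25, 0.1, 0.05]                 # degree-4 polynomial
direct = sum(c * (z @ np.linalg.matrix_power(A, k) @ z) for k, c in enumerate(coeffs))
print(np.isclose(quad_form_poly(A, z, coeffs), direct))
```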
|
The inverse Higgs phenomenon, which plays an important r\^ole in physical
systems with Goldstone bosons (such as the phonons in a crystal), involves
nonholonomic mechanical constraints. By formulating field theories with
symmetries and constraints in a general way using the language of differential
geometry, we show that many examples of constraints in inverse Higgs phenomena
fall into a special class, which we call coholonomic constraints, that are dual
(in the sense of category theory) to holonomic constraints. Just as for
holonomic constraints, systems with coholonomic constraints are equivalent to
unconstrained systems (whose degrees of freedom are known as essential
Goldstone bosons), making it easier to study their consistency and dynamics.
The remaining examples of inverse Higgs phenomena in the literature require the
dual of a slight generalisation of a holonomic constraint, which we call
(co)meronomic. Our formalism simplifies and clarifies the many ad hoc
assumptions and constructions present in the literature. In particular, it
identifies which are necessary and which are merely convenient. It also opens
the way to studying much more general dynamical examples, including systems
which have no well-defined notion of a target space.
|
This article discusses a dark energy cosmological model in the standard
theory of gravity - general relativity with a broad scalar field as a source.
Exact solutions of Einstein's field equations are derived by considering a
particular form of deceleration parameter $q$, which shows a smooth transition
from decelerated to accelerated phase in the evolution of the universe. The
external datasets such as Hubble ($H(z)$) datasets, Supernovae (SN) datasets,
and Baryonic Acoustic Oscillation (BAO) datasets are used for constraining the model parameters appearing in the functional form of $q$. The transition redshift is obtained at $z_{t}=0.67_{-0.36}^{+0.26}$ for the combined data set ($H(z)+SN+BAO$), where the model shows signature-flipping and is consistent with recent observations. Moreover, the present value of the deceleration parameter comes out to be $q_{0}=-0.50_{-0.11}^{+0.12}$ and the jerk parameter $j_{0}=-0.98_{-0.02}^{+0.06}$ (close to 1) for the combined datasets, which is compatible with the Planck 2018 results. The analysis also constrains the matter density parameter, $\Omega _{m_{0}}\leq 0.269$, for the smooth evolution of the scalar field EoS parameter. The effective energy density of the matter field is found to be higher than the energy density in the presence of the scalar field. The evolution of the physical and geometrical parameters is discussed in some detail using the numerically constrained values of the model parameters. Moreover, we have performed a state-finder analysis to investigate the nature of dark energy.
|
Electrical energy consumption data accessibility for low voltage end users is
one of the pillars of smart grids. In some countries, despite the presence of
smart meters, fragmentary data availability and/or a lack of
standardization hinders the creation of post-metering value-added services and
confines such innovative solutions to the prototypal and experimental level. We
take inspiration from the technology adopted in Italy, where the national
regulatory authority actively supported the definition of a solution agreed
upon by all the involved stakeholders. In this context, smart meters are
enabled to convey data to low voltage end users through a power line
communication channel (CHAIN 2) in near real-time. The aim of this paper is
twofold. On the one hand, it describes the proof of concept that the channel
underwent and its subsequent validation (with performance nearing a 99% success rate). On the other hand, it defines a classification framework (I2MA) for
post-metering value-added services, in order to categorize each use case based
on both level of service and expected benefits, and understand its maturity
level. As an example, we apply the methodology to the 16 use cases defined in
Italy. The lessons learned from the regulatory, technological, and functional
approach of the Italian experience lead us to provide recommendations
for researchers and industry experts. In particular, we argue that a
well-functioning post-metering value-added services' market can flourish when:
i) distribution system operators certify the measurements coming from smart
meters; ii) national regulatory authorities support the technological
innovation needed for setting up this market; and iii) service providers create
customer-oriented solutions based on smart meters' data.
|
Robust edge transport can occur when particles in crystalline lattices
interact with an external magnetic field. This system is well described by
Bloch's theorem, with the spectrum being composed of bands of bulk states and
in-gap edge states. When the confining lattice geometry is altered to be
quasicrystalline, Bloch's theorem breaks down. However, we still expect to
observe the basic characteristics of bulk states and current carrying edge
states. Here, we show that for quasicrystals in magnetic fields, there is also
a third option: bulk localised transport states. These states share the
in-gap nature of the well-known edge states and can support transport along
them, but they are fully contained within the bulk of the system, with no
support along the edge. We consider both finite and infinite systems, using
rigorous error controlled computational techniques that are not prone to
finite-size effects. The bulk localised transport states are preserved for
infinite systems, in stark contrast to the normal edge states. This allows for
transport to be observed in infinite systems, without any perturbations,
defects, or boundaries being introduced. We confirm the in-gap topological
nature of the bulk localised transport states for finite and infinite systems
by computing common topological measures, namely the Bott index and local Chern
marker. The bulk localised transport states form due to a magnetic aperiodicity
arising from the interplay of length scales between the magnetic field and
quasiperiodic lattice. Bulk localised transport could have interesting
applications similar to those of the edge states on the boundary, but that
could now take advantage of the larger bulk of the lattice. The infinite size
techniques introduced here, especially the calculation of topological measures,
could also be widely applied to other crystalline, quasicrystalline, and
disordered models.
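For reference, a minimal NumPy sketch of one common convention for the Bott index on a finite sample (the Loring-Hastings form); the toy Hamiltonian below is a gapped, topologically trivial placeholder rather than the quasicrystal model studied here, so the index comes out approximately zero.

```python
import numpy as np
from scipy.linalg import logm

def bott_index(H, x, y, Lx, Ly, fermi=0.0):
    """Bott index of the states of H below `fermi` (one common convention):
    B = Im tr log(V~ U~ V~^dag U~^dag) / (2 pi), with U, V built from the
    site positions (x, y) on an Lx-by-Ly sample."""
    evals, evecs = np.linalg.eigh(H)
    occ = evecs[:, evals < fermi]
    P = occ @ occ.conj().T                           # projector onto occupied states
    U = np.diag(np.exp(2j * np.pi * np.asarray(x) / Lx))
    V = np.diag(np.exp(2j * np.pi * np.asarray(y) / Ly))
    I = np.eye(H.shape[0])
    Up = P @ U @ P + (I - P)                         # projected "unitaries"
    Vp = P @ V @ P + (I - P)
    return np.imag(np.trace(logm(Vp @ Up @ Vp.conj().T @ Up.conj().T))) / (2 * np.pi)

# Toy check: alternating on-site energies with weak hopping on a 4x4 patch.
L = 4
xs, ys = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
x, y = xs.ravel(), ys.ravel()
rng = np.random.default_rng(0)
hop = 0.1 * rng.standard_normal((L * L, L * L))
H = np.diag(np.where((x + y) % 2 == 0, 1.0, -1.0)) + (hop + hop.T) / 2
print(bott_index(H, x, y, L, L))                     # approximately zero
```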
|
It is well established that glassy materials can undergo aging, i.e., their
properties gradually change over time. There is rapidly growing evidence that
dense active and living systems also exhibit many features of glassy behavior,
but it is still largely unknown how physical aging is manifested in such active
glassy materials. Our goal is to explore whether active and passive thermal
glasses age in fundamentally different ways. To address this, we numerically
study the aging dynamics following a quench from high to low temperature for
two-dimensional passive and active Brownian model glass-formers. We find that
aging in active thermal glasses is governed by a time-dependent competition
between thermal and active effects, with an effective temperature that
explicitly evolves with the age of the material. Moreover, unlike passive aging
phenomenology, we find that the degree of dynamic heterogeneity in active aging
systems is relatively small and remarkably constant with age. We conclude that
the often-invoked mapping between an active system and a passive one with a
higher effective temperature rigorously breaks down upon aging, and that the
aging dynamics of thermal active glasses differs in several distinct ways from
both the passive and athermal active case.
|
The radio nebula W50 is a unique object interacting with the jets of the
microquasar SS433. The SS433/W50 system is a good target for investigating the
energy of cosmic-ray particles accelerated by galactic jets. We report
observations of the radio nebula W50 conducted with the NSF's Karl G. Jansky Very
Large Array (VLA) in the L band (1.0 -- 2.0 GHz). We investigate the secular
change of W50 on the basis of the observations in 1984, 1996, and 2017, and
find that most of its structures were stable for 33 years. We revise the upper limit on the velocity of the eastern terminal filament downward by half, to 0.023$c$, assuming a
distance of 5.5 kpc. We also analyze the observational data of the Arecibo
Observatory 305-m telescope and identify the HI cavity around W50 in the
velocity range 33.77 km s$^{-1}$ -- 55.85 km s$^{-1}$. From this result, we
estimate the maximum energy of the cosmic-ray protons accelerated by the jet
terminal region to be above 10$^{15.5}$ eV. We also use the luminosity of the
gamma-rays in the range 0.5 -- 10 GeV to estimate the total energy of the accelerated protons to be below 5.2 $\times$ 10$^{48}$ erg.
|
We present the design for a novel type of dual-band photodetector in the
thermal infrared spectral range, the Optically Controlled Dual-band quantum dot
Infrared Photodetector (OCDIP). This concept is based on a quantum dot ensemble
with a unimodal size distribution, whose absorption spectrum can be controlled
by optically-injected carriers. An external pumping laser varies the electron
density in the QDs, making it possible to control the available electronic transitions and thus the absorption spectrum. We grew a test sample, which we studied by AFM and photoluminescence. Based on the experimental data, we simulated the infrared absorption spectrum of the sample, which showed two absorption bands at 5.85 $\mu$m and 8.98 $\mu$m, depending on the excitation power.
|
We present a novel ultrastable superconducting radio-frequency (RF) ion trap
realized as a combination of an RF cavity and a linear Paul trap. Its RF
quadrupole mode at 34.52 MHz reaches a quality factor of $Q\approx2.3\times
10^5$ at a temperature of 4.1 K and is used to radially confine ions in an
ultralow-noise pseudopotential. This concept is expected to strongly suppress
motional heating rates and related frequency shifts which limit the ultimate
accuracy achieved in advanced ion traps for frequency metrology. Running with
its low-vibration cryogenic cooling system, electron beam ion trap and
deceleration beamline supplying highly charged ions (HCI), the superconducting
trap offers ideal conditions for optical frequency metrology with ionic
species. We report its proof-of-principle operation as a quadrupole mass filter
with HCI, and trapping of Doppler-cooled ${}^9\text{Be}^+$ Coulomb crystals.
|
With Regulation UNECE R157 on Automated Lane-Keeping Systems, the first
framework for the introduction of passenger cars with Level 3 systems became available in 2020. In line with recent research projects involving academia and the automotive industry, the Regulation utilizes scenario-based
testing for the safety assessment. The complexity of safety validation of
automated driving systems necessitates system-level simulations. The
Regulation, however, is missing the required parameterization necessary for
test case generation. To overcome this problem, we incorporate the exposure and
consider the heterogeneous behavior of the traffic participants by extracting
concrete scenarios according to the Regulation's scenario definitions from the
established naturalistic highway dataset highD. We present a methodology to
find the scenarios in real-world data, extract the parameters for modeling the
scenarios and transfer them to simulation. In this process, more than 340
scenarios were extracted. OpenSCENARIO files were generated to enable an
exemplary transfer of the scenarios to CARLA and esmini. We compare the
trajectories to examine the similarity of the scenarios in the simulation to
the recorded scenarios. In order to foster research, we publish the resulting
dataset called ConScenD together with instructions for usage with both
simulation tools. The dataset is available online at
https://www.levelXdata.com/scenarios.
|
In this paper, we characterize the asymptotic and large scale behavior of the
eigenvalues of wavelet random matrices in high dimensions. We assume that
possibly non-Gaussian, finite-variance $p$-variate measurements are made of a
low-dimensional $r$-variate ($r \ll p$) fractional stochastic process with
non-canonical scaling coordinates and in the presence of additive
high-dimensional noise. The measurements are correlated both time-wise and
between rows. We show that the $r$ largest eigenvalues of the wavelet random
matrices, when appropriately rescaled, converge to scale invariant functions in
the high-dimensional limit. By contrast, the remaining $p-r$ eigenvalues remain
bounded. Under additional assumptions, we show that, up to a log
transformation, the $r$ largest eigenvalues of wavelet random matrices exhibit
asymptotically Gaussian distributions. The results have direct consequences for
statistical inference.
|
Annually, a large number of injuries and deaths around the world are related
to motor vehicle accidents. This number has recently been reduced to some extent through the use of driver-assistance systems. Developing driver-assistance
systems (i.e., automated driving systems) can play a crucial role in reducing
this number. Estimating and predicting surrounding vehicles' movement is
essential for an automated vehicle and advanced safety systems. Moreover,
predicting the trajectory is influenced by numerous factors, such as drivers'
behavior during accidents, history of the vehicle's movement and the
surrounding vehicles, and their position on the traffic scene. The vehicle must
move over a safe path in traffic and react to other drivers' unpredictable
behaviors in the shortest time. Herein, to predict an automated vehicle's path, a model with low computational complexity is proposed, which is trained on aerial images of the road. Our method is based on an encoder-decoder
model that utilizes a social tensor to model the effect of the surrounding
vehicles' movement on the target vehicle. The proposed model can predict the
vehicle's future path in any freeway only by viewing the images related to the
history of the target vehicle's movement and its neighbors. Deep learning was
used as a tool for extracting the features of these images. Using the HighD
database, an image dataset of aerial road views was created, and the model's performance was evaluated on this new database. We achieved an RMSE of 1.91 for the next 5 seconds and found that the proposed method had less error
than the best path-prediction methods in previous studies.
|
This paper provides a version of the rational Hodge conjecture for dg categories. The noncommutative Hodge conjecture is equivalent to the version proposed in \cite{perry2020integral} for admissible subcategories. We obtain evidence for the Hodge conjecture in examples using techniques of noncommutative geometry. Finally, we show that the noncommutative Hodge conjecture for smooth proper connective dg algebras is true.
|
Let $(X, D)$ be a log smooth log canonical pair such that $K_X+D$ is ample.
Extending a theorem of Guenancia and building on his techniques, we show that
negatively curved K\"{a}hler-Einstein crossing edge metrics converge to
K\"{a}hler-Einstein mixed cusp and edge metrics smoothly away from the divisor
when some of the cone angles converge to $0$. We further show that near the
divisor such normalized K\"{a}hler-Einstein crossing edge metrics converge to a
mixed cylinder and edge metric in the pointed Gromov-Hausdorff sense when some
of the cone angles converge to $0$ at (possibly) different speeds.
|
Novice programmers face numerous barriers while attempting to learn how to
code that may deter them from pursuing a computer science degree or career in
software development. In this work, we propose a tool concept to address the
particularly challenging barrier of novice programmers holding misconceptions
about how their code behaves. Specifically, the concept involves an inquisitive
code editor that: (1) identifies misconceptions by periodically prompting the
novice programmer with questions about their program's behavior, (2) corrects
the misconceptions by generating explanations based on the program's actual
behavior, and (3) prevents further misconceptions by inserting test code and
utilizing other educational resources. We have implemented portions of the
concept as plugins for the Atom code editor and conducted informal surveys with
students and instructors. Next steps include deploying the tool prototype to
students enrolled in introductory programming courses.
|
We show that any Brauer tree algebra has precisely $\binom{2n}{n}$ two-term tilting complexes, where $n$ is the number of edges of the associated Brauer tree. More explicitly, for an external edge $e$ and an integer $j\neq0$, we show that the number of two-term tilting complexes $T$ with $g_e(T)=j$ is $\binom{2n-|j|-1}{n-1}$, where $g_e(T)$ denotes the $e$-th entry of the $g$-vector of $T$. To prove this, we use a geometric model of Brauer graph algebras on closed oriented marked surfaces and a classification of two-term tilting complexes due to Adachi-Aihara-Chan.
|
We measure the evolution of the rest-frame UV luminosity function (LF) and
the stellar mass function (SMF) of Lyman-alpha (Lya) emitters (LAEs) from z~2
to z~6 by exploring ~4000 LAEs from the SC4K sample. We find a correlation
between Lya luminosity (LLya) and rest-frame UV (M_UV), with best-fit
M_UV=-1.6+-0.2 log10(LLya/erg/s)+47+-12 and a shallower relation between LLya
and stellar mass (Mstar), with best-fit log10(Mstar/Msun)=0.9+-0.1
log10(LLya/erg/s)-28+-4.0. An increasing LLya cut predominantly lowers the
number density of faint M_UV and low Mstar LAEs. We estimate a proxy for the
full UV LFs and SMFs of LAEs with simple assumptions of the faint end slope.
For the UV LF, we find a brightening of the characteristic UV luminosity
(M_UV*) with increasing redshift and a decrease of the characteristic number
density (Phi*). For the SMF, we measure a characteristic stellar mass
(Mstar*/Msun) increase with increasing redshift, and a Phi* decline. However,
if we apply a uniform luminosity cut of log10(LLya/erg/s) >= 43.0, we find much milder to no evolution in the UV LF and SMF of LAEs. The UV luminosity
density (rho_UV) of the full sample of LAEs shows moderate evolution and the
stellar mass density (rho_M) decreases, with both being always lower than the
total rho_UV and rho_M of more typical galaxies but slowly approaching them
with increasing redshift. Overall, our results indicate that both rho_UV and
rho_M of LAEs slowly approach the measurements of continuum-selected galaxies
at z>6, which suggests a key role of LAEs in the epoch of reionisation.
|
Accurate numerical solutions for the Schr\"odinger equation are of utmost
importance in quantum chemistry. However, the computational cost of current
high-accuracy methods scales poorly with the number of interacting particles.
Combining Monte Carlo methods with unsupervised training of neural networks has
recently been proposed as a promising approach to overcome the curse of
dimensionality in this setting and to obtain accurate wavefunctions for
individual molecules at a moderately scaling computational cost. These methods
currently do not exploit the regularity exhibited by wavefunctions with respect
to their molecular geometries. Inspired by recent successful applications of
deep transfer learning in machine translation and computer vision tasks, we
attempt to leverage this regularity by introducing a weight-sharing constraint
when optimizing neural network-based models for different molecular geometries.
That is, we restrict the optimization process such that up to 95 percent of
weights in a neural network model are in fact equal across varying molecular
geometries. We find that this technique can accelerate optimization when
considering sets of nuclear geometries of the same molecule by an order of
magnitude and that it opens a promising route towards pre-trained neural
network wavefunctions that yield high accuracy even across different molecules.
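A minimal PyTorch sketch of the weight-sharing idea, assuming a hypothetical ansatz in which one trunk is shared across all geometries and only small per-geometry heads are free to differ; the architecture, sizes, and the placeholder loss are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class SharedAnsatz(nn.Module):
    """Shared trunk for every molecular geometry + one small head per geometry."""
    def __init__(self, n_geometries, in_dim=12, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(                  # weights shared across geometries
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.heads = nn.ModuleList(                  # the only geometry-specific weights
            nn.Linear(hidden, 1) for _ in range(n_geometries)
        )

    def forward(self, electron_coords, geometry_idx):
        return self.heads[geometry_idx](self.trunk(electron_coords))

# Joint optimization: gradients from all geometries accumulate in the shared trunk.
model = SharedAnsatz(n_geometries=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for g in range(4):
    coords = torch.randn(8, 12)                      # placeholder electron coordinates
    loss = model(coords, g).pow(2).mean()            # placeholder for a variational loss
    loss.backward()
opt.step()
```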
|
As a non-linear extension of the classic Linear Discriminant Analysis (LDA), Deep Linear Discriminant Analysis (DLDA) replaces the original Categorical Cross Entropy (CCE) loss function with an eigenvalue-based loss function to enable a deep neural network (DNN) to learn linearly separable hidden representations. In this paper, we first point out that DLDA focuses on training the cooperative discriminative ability of all the dimensions in the latent subspace, while putting less emphasis on training the separating capacity of each single dimension. To improve DLDA, a regularization method on the within-class scatter matrix is proposed to strengthen the discriminative ability of each dimension and to keep the dimensions complementary to one another. Experimental results on STL-10, CIFAR-10, and the Pediatric Pneumonic Chest X-ray Dataset showed that our proposed regularization method, Regularized Deep Linear Discriminant Analysis (RDLDA), outperformed DLDA and a conventional neural network with CCE as the objective. To further improve the discriminative ability of RDLDA in the local space, an algorithm named Subclass RDLDA is also proposed.
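For concreteness, here is a small NumPy sketch of the within-class scatter matrix that such eigenvalue-based losses build on, together with a simple shrinkage-toward-the-diagonal regularizer used purely as an illustrative stand-in; the exact regularizer employed by RDLDA may differ.

```python
import numpy as np

def within_class_scatter(X, y):
    """S_w = sum over classes c of sum over samples in c of (x - mean_c)(x - mean_c)^T."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        diff = X[y == c] - X[y == c].mean(axis=0)
        Sw += diff.T @ diff
    return Sw

def regularized_within_scatter(X, y, lam=0.1):
    """Illustrative regularization (an assumption, not necessarily RDLDA's form):
    shrink S_w toward its diagonal so no single latent dimension is neglected."""
    Sw = within_class_scatter(X, y)
    return (1 - lam) * Sw + lam * np.diag(np.diag(Sw))

# Toy usage on random two-class latent features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
y = np.r_[np.zeros(20), np.ones(20)]
print(regularized_within_scatter(X, y, lam=0.2).shape)
```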
|
A fog-radio access network (F-RAN) architecture is studied for an
Internet-of-Things (IoT) system in which wireless sensors monitor a number of
multi-valued events and transmit in the uplink using grant-free random access
to multiple edge nodes (ENs). Each EN is connected to a central processor (CP)
via a finite-capacity fronthaul link. In contrast to conventional
information-agnostic protocols based on separate source-channel (SSC) coding,
where each device uses a separate codebook, this paper considers an
information-centric approach based on joint source-channel (JSC) coding via a
non-orthogonal generalization of type-based multiple access (TBMA). By
leveraging the semantics of the observed signals, all sensors measuring the
same event share the same codebook (with non-orthogonal codewords), and all
such sensors making the same local estimate of the event transmit the same
codeword. The F-RAN architecture directly detects the event values without
first performing individual decoding for each device. Cloud and edge detection
schemes based on Bayesian message passing are designed and trade-offs between
cloud and edge processing are assessed.
|
We report parametric resonances (PRs) in the mean-field dynamics of a
one-dimensional dipolar Bose-Einstein condensate (DBEC) in widely varying
trapping geometries. The chief goal is to characterize the energy levels of
this system by analytical methods and the significance of this study arises
from the commonly known fact that in the presence of interactions the energy
levels of a trapped BEC are hard to calculate analytically. The latter
characterization is achieved by a matching of the PR energies to energy levels
of the confining trap using perturbative methods. Further, this work reveals
the role of the interplay between dipole-dipole interactions (DDI) and trapping
geometry in defining the energies and amplitudes of the PRs. The PRs are
induced by a negative Gaussian potential whose depth oscillates with time.
Moreover, the DDI play a role in this induction. The dynamics of this system is
modeled by the time-dependent Gross-Pitaevskii equation (TDGPE), which is numerically solved by the Crank-Nicolson method. The PRs are discussed on the basis of analytical methods: first, it is shown that the Lagrangian variational method reproduces PRs similar to the ones obtained from the TDGPE. Second, the energies at which the PRs arise are closely matched with the
energy levels of the corresponding trap calculated by time-independent
perturbation theory. Third, the most probable transitions between the trap
energy levels yielding PRs are determined by time-dependent perturbation
theory. The most significant result of this work is that we have been able to
characterize the above-mentioned energy levels of a DBEC in a complex trapping
potential.
|
The digital transformation has been underway, creating digital shadows of
(almost) all physical entities and moving them to the Internet. The era of
Internet of Everything has therefore started to come into play, giving rise to
unprecedented traffic growths. In this context, optical core networks forming
the backbone of Internet infrastructure have been under critical issues of
reaching the capacity limit of conventional fiber, a phenomenon widely referred
as capacity crunch. For many years, the many-fold increases in fiber capacity
is thanks to exploiting physical dimensions for multiplexing optical signals
such as wavelength, polarization, time and lately space-division multiplexing
using multi-core fibers and such route seems to come to an end as almost all
known ways have been exploited. This necessitates for a departure from
traditional approaches to use the fiber capacity more efficiently and thereby
improve economics of scale. This paper lays out a new perspective to integrate
network coding (NC) functions into optical networks to achieve greater capacity
efficiency by upgrading intermediate nodes functionalities. In addition to the
review of recent proposals on new research problems enabled by NC operation in
optical networks, we also report state-of-the-art findings in the literature in
an effort to renew interest in NC for optical networks, and discuss three critical points for pushing forward its applicability and practicality, including i) NC as a new dimension for multiplexing optical signals, ii) algorithmic aspects of NC-enabled optical network design, and iii) NC as an entirely fresh way of securing optical signals at the physical layer.
|
In this paper, we propose a novel lightweight relation extraction approach based on structural-block-driven convolutional neural learning. Specifically, we detect the essential sequential tokens associated with entities through dependency analysis, termed a structural block, and encode only the block, using both a block-wise and an inter-block-wise representation built with multi-scale CNNs. This serves to 1) eliminate the noise from the irrelevant parts of a sentence, and 2) enhance the relevant block representation with both block-wise and inter-block-wise semantically enriched representations. Our method has the advantage of being independent of long sentence context, since we only encode the sequential tokens within a block boundary. Experiments on two datasets, i.e., SemEval2010 and KBP37, demonstrate the significant advantages of our method. In particular, we achieve new state-of-the-art performance on the KBP37 dataset, and comparable performance with the state-of-the-art on the SemEval2010 dataset.
|
A model investigating the role of geometry on the alpha dose rate of spent
nuclear fuel has been developed. This novel approach utilises a new piecewise
function to describe the probability of alpha escape as a function of
particulate radius, decay range within the material, and position from the
surface. The alpha dose rates were produced for particulates of radii 1 $\mu$m
to 10 mm, showing considerable changes in the 1 $\mu$m to 50 $\mu$m range.
Results indicate that for decreasing particulate sizes, approaching radii equal
to or less than the range of the $\alpha$-particle within the fuel, there is a
significant increase in the rate of energy emitted per unit mass of fuel
material. The influence of geometry is more significant for smaller radii,
showing clear differences in dose rate curves below 50 $\mu$m. These
considerations are essential for any future accurate prediction of the
dissolution rates and hydrogen gas release, driven by the radiolytic yields of
particulate spent nuclear fuel.
|
We investigate the effectiveness of three different job-search and training
programmes for German long-term unemployed persons. On the basis of an
extensive administrative data set, we evaluated the effects of those programmes
on various levels of aggregation using Causal Machine Learning. We found
participants to benefit from the investigated programmes, with placement services being the most effective. Effects are realised quickly and are long-lasting for any programme. While the effects are rather homogeneous for men, we found differential effects for women across various characteristics. Women
benefit in particular when local labour market conditions improve. Regarding
the allocation mechanism of the unemployed to the different programmes, we
found the observed allocation to be as effective as a random allocation.
Therefore, we propose data-driven rules for the allocation of the unemployed to
the respective labour market programmes that would improve the status-quo.
|
Faceted summarization provides briefings of a document from different
perspectives. Readers can quickly comprehend the main points of a long document
with the help of a structured outline. However, little research has been
conducted on this subject, partially due to the lack of large-scale faceted
summarization datasets. In this study, we present FacetSum, a faceted
summarization benchmark built on Emerald journal articles, covering a diverse
range of domains. Different from traditional document-summary pairs, FacetSum
provides multiple summaries, each targeted at specific sections of a long
document, including the purpose, method, findings, and value. Analyses and
empirical results on our dataset reveal the importance of bringing structure
into summaries. We believe FacetSum will spur further advances in summarization
research and foster the development of NLP systems that can leverage the
structured information in both long texts and summaries.
|
Considering a double-headed Brownian motor moving with both translational and
rotational degrees of freedom, we investigate the directed transport properties
of the system in a traveling-wave potential. It is found that the traveling
wave provides the essential condition of the directed transport for the system,
and at an appropriate angular frequency, the positive current can be optimized.
A general current reversal appears by modulating the angular frequency of the
traveling wave, noise intensity, external driving force and the rod length. By
transforming the dynamical equation in traveling-wave potential into that in a
tilted potential, the mechanism of current reversal is analyzed. For both cases
of Gaussian and L\'evy noises, the currents show similar dependence on the
parameters. Moreover, the current in the tilted potential shows a typical
stochastic resonance effect. The external driving force also has a resonance-like effect on the current in the tilted potential, but the current in the traveling-wave potential exhibits the reverse behavior of that in the tilted potential. In addition, the currents clearly depend on the stability index of the L\'evy noise under certain conditions.
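As a worked illustration of that transformation for a single overdamped coordinate (a simplification of the full double-headed, translational-plus-rotational model), let $\gamma$ denote the friction coefficient, $F$ a constant external force, $u$ the wave speed, and $\xi(t)$ the noise. Starting from
\begin{equation*}
\gamma\dot{x} = -\partial_x V(x-ut) + F + \xi(t),
\end{equation*}
the co-moving coordinate $y=x-ut$ obeys
\begin{equation*}
\gamma\dot{y} = -\partial_y\big[V(y)-(F-\gamma u)\,y\big] + \xi(t),
\end{equation*}
i.e., motion in the static potential $V(y)$ tilted by the constant force $F-\gamma u$, which is precisely the tilted-potential picture invoked above.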
|
We demonstrate size selective optical trapping and transport for
nanoparticles near an optical nanofiber taper. Using a two-wavelength,
counter-propagating mode configuration, we show that 100 nm diameter and 150 nm
diameter gold nanospheres (GNSs) are trapped by the evanescent field in the
taper region at different optical powers. Conversely, when one nanoparticle
species is trapped the other may be transported, leading to a sieve-like
effect. Our results show that sophisticated optical manipulation can be
achieved in a passive configuration by taking advantage of mode behavior in
nanophotonic devices.
|
We investigated changes in the b value of the Gutenberg-Richter's law in and
around the focal areas of earthquakes on March 20 and on May 1, 2021, with
magnitude (M) 6.9 and 6.8, respectively, which occurred off the Pacific coast
of Miyagi prefecture, northeastern Japan. We showed that the b value in these
focal areas had been noticeably small, especially within a few years before the
occurrence of the M6.9 earthquake in its vicinity, indicating that differential
stress had been high in the focal areas. The coseismic slip of the 2011 Tohoku
earthquake seems to have stopped just short of the east side of the focus of
the M6.9 earthquake. Furthermore, the afterslip of the 2011 Tohoku earthquake
was relatively small in the focal areas of the M6.9 and M6.8 earthquakes,
compared to the surrounding regions. In addition, the focus of the M6.9
earthquake was situated close to the border point where the interplate slip in
the period from 2012 through 2021 has been considerably larger on the northern
side than on the southern side. The high-stress state inferred by the b-value
analysis is concordant with those characteristics of interplate slip events. We
found that the M6.8 earthquake on May 1 occurred near an area where the b value
remained small, even after the M6.9 quake. The ruptured areas by the two
earthquakes now seem to almost coincide with the small-b-value region that had
existed before their occurrence. The b value on the east side of the focal
areas of the M6.9 and M6.8 earthquakes, which corresponds to the eastern part of the source region of the 1978 off-Miyagi prefecture earthquake, was consistently
large, while the seismicity enhanced by the two earthquakes also shows a large
b value, implying that stress in the region has not been very high.
|
During the early history of unitary quantum theory, Kato's exceptional points (EPs, a.k.a. non-Hermitian degeneracies) of Hamiltonians $H(\lambda)$ did not play any significant role, mainly due to the Stone theorem, which firmly connected unitarity with Hermiticity. During the recent wave of optimism, people started believing that corridors of unitary access to the EPs could be opened, leading, say, to a new picture of quantum phase transitions via an {\it ad hoc} weakening of the Hermiticity (replaced by quasi-Hermiticity). Subsequently, pessimism prevailed (the paths of access appeared to be fragile). In a framework restricted to the quantum physics of closed systems, a return to optimism is advocated here: the apparent fragility of the corridors is claimed to follow from a misinterpretation of the theory in its quasi-Hermitian formulation. Several perturbed versions of the realistic many-body Bose-Hubbard model are chosen for illustration purposes.
|
Existing works on visual counting primarily focus on one specific category at
a time, such as people, animals, and cells. In this paper, we are interested in
counting everything, that is, counting objects from any category given only a
few annotated instances from that category. To this end, we pose counting as a
few-shot regression task. To tackle this task, we present a novel method that
takes a query image together with a few exemplar objects from the query image
and predicts a density map for the presence of all objects of interest in the
query image. We also present a novel adaptation strategy to adapt our network
to any novel visual category at test time, using only a few exemplar objects
from the novel category. We also introduce a dataset of 147 object categories
containing over 6000 images that are suitable for the few-shot counting task.
The images are annotated with two types of annotation, dots and bounding boxes,
and they can be used for developing few-shot counting models. Experiments on
this dataset show that our method outperforms several state-of-the-art object
detectors and few-shot counting approaches. Our code and dataset can be found
at https://github.com/cvlab-stonybrook/LearningToCountEverything.
|
Turbulent puffs are ubiquitous in everyday life phenomena. Understanding
their dynamics is important in a variety of situations ranging from industrial
processes to pure and applied science. In all these fields, a deep knowledge of
the statistical structure of temperature and velocity space/time fluctuations
is of paramount importance to construct models of chemical reaction (in
chemistry), of condensation of virus-containing droplets (in virology and/or
biophysics), and of optimal mixing strategies in industrial applications. As a
matter of fact, existing results on turbulence in a puff are confined to bulk
properties (i.e. average puff velocity and typical decay/growth time) and date
back to the second half of the 20th century. There is thus a huge gap to fill to pass
from bulk properties to two-point statistical observables. Here we fill this
gap exploiting theory and numerics in concert to predict and validate the
space/time scaling behaviors of both velocity and temperature structure
functions including intermittency corrections. Excellent agreement between
theory and simulations is found. Our results are expected to have a profound
impact on the development of evaporation models for virus-containing droplets
carried by a turbulent puff, benefiting the understanding of the airborne route
of virus contagion.
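For reference, the structure functions referred to above are conventionally defined as follows; the puff-adapted space/time versions analysed in the paper may differ in detail:

```latex
% Standard p-th order velocity and temperature structure functions with
% anomalous (intermittency-corrected) scaling exponents \zeta_p, \zeta_p^\theta:
S_p(r) \equiv \left\langle \big[(\mathbf{u}(\mathbf{x}+\mathbf{r})
        - \mathbf{u}(\mathbf{x}))\cdot\hat{\mathbf{r}}\big]^p \right\rangle
        \sim r^{\zeta_p},
\qquad
S_p^{\theta}(r) \equiv \left\langle \big[\theta(\mathbf{x}+\mathbf{r})
        - \theta(\mathbf{x})\big]^p \right\rangle
        \sim r^{\zeta_p^{\theta}} .
```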
|
The installation of electric vehicle charging stations (EVCS) will be
essential to promote user acceptance of electric vehicles (EVs).
However, if EVCS are exclusively supplied by the grid, negative impacts on its
stability, together with possible increases in CO2 emissions, could result.
Introduction of hybrid renewable energy systems (HRES) for EVCS can cope with
both drawbacks by reducing the load on the grid and generating clean
electricity. This paper develops a methodology based on a weighted
multicriteria process to design the most suitable configuration for HRES in
EVCS. This methodology determines the local renewable resources and the EVCS
electricity demand. Then, taking into account environmental, economic and
technical aspects, it deduces the most adequate HRES design for the EVCS.
Besides, an experimental stage to validate the design deduced from the
multicriteria process is included. Therefore, the final design for the HRES in
EVCS is supported not only by a complete numerical evaluation, but also by an
experimental verification of the demand being fully covered. Methodology
application to Valencia (Spain) shows that an off-grid HRES with solar PV,
wind resources, and battery support would be the most suitable configuration
for the system. This solution was also experimentally verified.
|
We use large deviation theory to obtain the free energy of the XY model on a
fully connected graph on each site of which there is a randomly oriented field
of magnitude $h$. The phase diagram is obtained for two symmetric distributions
of the random orientations: (a) a uniform distribution and (b) a distribution
with cubic symmetry. In both cases, the ordered state reflects the symmetry of
the underlying disorder distribution. The phase boundary has a multicritical
point which separates a locus of continuous transitions (for small values of
$h$) from a locus of first order transitions (for large $h$). The free energy
is a function of a single variable in case (a) and a function of two variables
in case (b), leading to different characters of the multicritical points in the
two cases.
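The abstract does not write down the Hamiltonian; a standard fully connected (mean-field) random-field XY Hamiltonian consistent with this description, up to normalization conventions, reads:

```latex
% Mean-field XY model on N sites in a randomly oriented field of magnitude h;
% the angles \phi_i are drawn from the chosen orientation distribution:
H = -\frac{J}{2N}\sum_{i,j=1}^{N}\cos(\theta_i-\theta_j)
    \;-\; h\sum_{i=1}^{N}\cos(\theta_i-\phi_i).
```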
|
Machine learning (ML) tools such as encoder-decoder deep convolutional neural
networks (CNN) are able to extract relationships between inputs and outputs of
large complex systems directly from raw data. For time-varying systems the
predictive capabilities of ML tools degrade as the systems are no longer
accurately represented by the data sets with which the ML models were trained.
Re-training is possible, but only if the changes are slow and if new
input-output training data measurements can be made online non-invasively. In
this work we present an approach to deep learning for time-varying systems in
which adaptive feedback based only on available system output measurements is
applied to encoded low-dimensional dense layers of encoder-decoder type CNNs.
We demonstrate our method in developing an inverse model of a complex charged
particle accelerator system, mapping output beam measurements to input beam
distributions while both the accelerator components and the unknown input beam
distribution quickly vary with time. We demonstrate our results using
experimental measurements of the input and output beam distributions of the
HiRES ultra-fast electron diffraction (UED) microscopy beam line at Lawrence
Berkeley National Laboratory. We show how our method can be used to aid both
physics and ML-based surrogate online models to provide non-invasive beam
diagnostics and we also demonstrate how our method can be used to automatically
track the time-varying quantum efficiency map of a particle accelerator's
photocathode.
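As a rough illustration of the idea of adaptive feedback acting only on the encoded low-dimensional layer, the sketch below tunes a latent vector by a simple random-direction finite-difference update on the output mismatch. The decode and measure_output callables are placeholders, not the HiRES surrogate models, and the update rule is a generic stand-in rather than the paper's specific feedback law:

```python
import numpy as np

def adapt_latent(z0, decode, measure_output, n_steps=200, gain=0.05, eps=1e-2):
    """Tune the latent vector z so that decode(z) matches the measured output,
    using only output measurements (no new input-beam measurements).

    decode:         placeholder mapping latent vector -> predicted output
    measure_output: placeholder returning the current (possibly drifting) measurement
    """
    z = np.array(z0, dtype=float)
    for _ in range(n_steps):
        y_meas = measure_output()
        cost = lambda zz: np.mean((decode(zz) - y_meas) ** 2)
        # random-direction finite-difference estimate of the cost gradient
        d = np.random.randn(z.size)
        d /= np.linalg.norm(d)
        g = (cost(z + eps * d) - cost(z - eps * d)) / (2 * eps)
        z -= gain * g * d
    return z
```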
|
The extremely large magnetoresistance (XMR) effect in nonmagnetic semimetals
has attracted intensive attention recently. Here we propose an XMR candidate
material SrPd based on first-principles electronic structure calculations in
combination with a semi-classical model. The calculated carrier densities in
SrPd indicate that there is a good electron-hole compensation, while the
calculated intrinsic carrier mobilities are as high as 10$^5$
cm$^2$V$^{-1}$s$^{-1}$. There are only two doubly degenerate bands crossing the
Fermi level for SrPd, thus a semi-classical two-band model is available for
describing its transport properties. Accordingly, the magnetoresistance of SrPd
under a magnetic field of $4$ Tesla is predicted to reach ${10^5} \%$ at low
temperature. Furthermore, the calculated topological invariant indicates that
SrPd is topologically trivial. Our theoretical studies suggest that SrPd can
serve as an ideal platform to examine the charge compensation mechanism of the
XMR effect.
|
Chemically peculiar stars in eclipsing binary systems are rare objects that
allow the derivation of fundamental stellar parameters and important
information on the evolutionary status and the origin of the observed chemical
peculiarities. Here we present an investigation of the known eclipsing binary
system BD+09 1467 = V680 Mon. Using spectra from the Large Sky Area
Multi-Object Fiber Spectroscopic Telescope (LAMOST) and our own observations, we
identify the primary component of the system as a mercury-manganese (HgMn/CP3)
star (spectral type kB9 hB8 HeB9 V HgMn). Furthermore, photometric time series
data from the Transiting Exoplanet Survey Satellite (TESS) indicate that the
system is a "heartbeat star", a rare class of eccentric binary stars with
short-period orbits that exhibit a characteristic signature near the time of
periastron in their light curves due to the tidal distortion of the components.
Using all available photometric observations, we present an updated ephemeris
and binary system parameters as derived from modelling of the system with the
ELISa code, which indicates that the secondary star has an effective
temperature of Teff = 8300 ± 200 K (spectral type of about A4). V680 Mon is only
the fifth known eclipsing CP3 star and the first one in a heartbeat binary.
Furthermore, our results indicate that the star is located on the zero-age main
sequence and a possible member of the open cluster NGC 2264. As such, it lends
itself perfectly to detailed studies and may turn out to be a keystone in the
understanding of the development of CP3 star peculiarities.
|
To achieve reliable mining results for massive vessel trajectories, one of
the most important challenges is how to efficiently compute the similarities
between different vessel trajectories. The computation of vessel trajectory
similarity has recently attracted increasing attention in the maritime data
mining research community. However, traditional shape- and warping-based
methods often suffer from several drawbacks, such as high computational cost and
sensitivity to unwanted artifacts and non-uniform sampling rates. To
eliminate these drawbacks, we propose an unsupervised learning method which
automatically extracts low-dimensional features through a convolutional
auto-encoder (CAE). In particular, we first generate the informative trajectory
images by remapping the raw vessel trajectories into two-dimensional matrices
while maintaining the spatio-temporal properties. Based on the massive vessel
trajectories collected, the CAE can learn the low-dimensional representations
of informative trajectory images in an unsupervised manner. Trajectory
similarity can then be computed efficiently as the similarity
between the learned low-dimensional features, which strongly correlate with the
raw vessel trajectories. Comprehensive experiments on realistic data sets have
demonstrated that the proposed method largely outperforms traditional
trajectory similarity computation methods in terms of efficiency and
effectiveness. High-quality trajectory clustering can also be achieved based on
the CAE-based trajectory similarity results.
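The CAE itself is not reproduced here; the sketch below only illustrates the first step described above, remapping a raw (lon, lat) trajectory into a fixed-size two-dimensional matrix that a convolutional auto-encoder could then compress. Grid size, bounding box, and the visit-count encoding are illustrative assumptions:

```python
import numpy as np

def trajectory_to_image(traj, bbox, size=64):
    """Remap a raw vessel trajectory into a 2D matrix ("informative
    trajectory image").  traj: array of shape (T, 2) with (lon, lat) points;
    bbox: (lon_min, lon_max, lat_min, lat_max) of the region of interest."""
    traj = np.asarray(traj, dtype=float)
    lon_min, lon_max, lat_min, lat_max = bbox
    img = np.zeros((size, size), dtype=np.float32)
    cols = ((traj[:, 0] - lon_min) / (lon_max - lon_min) * (size - 1)).astype(int)
    rows = ((lat_max - traj[:, 1]) / (lat_max - lat_min) * (size - 1)).astype(int)
    keep = (cols >= 0) & (cols < size) & (rows >= 0) & (rows < size)
    # accumulate visit counts so the image also reflects dwell time
    np.add.at(img, (rows[keep], cols[keep]), 1.0)
    return img / max(img.max(), 1.0)

# Trajectory similarity would then be, e.g., the Euclidean distance between
# the CAE's low-dimensional encodings of two such images.
```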
|
The gap generation in the dice model with local four-fermion interaction is
studied. Due to the presence of two valleys with degenerate electron states,
there are two main types of gaps. The intra- and intervalley gaps describe the
electron and hole pairing in the same and different valleys, respectively. We
found that while the generation of the intravalley gap takes place only in the
supercritical regime, the intervalley gap is generated for an arbitrarily small
coupling. The physical reason for the absence of a critical coupling is the
catalysis of the intervalley gap generation by the flat band in the electron
spectrum of the dice model. The completely quenched kinetic energy in the flat
band, when integrated over momentum in the gap equation, leads to an extremely
large intervalley gap proportional to the area of the Brillouin zone.
|
Automated defect inspection is critical for effective and efficient
maintenance, repair, and operations in advanced manufacturing. However, it
is often constrained by the lack of defect
samples, especially when deep neural networks are adopted for this task. This
paper presents Defect-GAN, an automated defect synthesis network that generates
realistic and diverse defect samples for training accurate and robust defect
inspection networks. Defect-GAN learns through defacement and restoration
processes, where the defacement generates defects on normal surface images
while the restoration removes defects to generate normal images. It employs a
novel compositional layer-based architecture for generating realistic defects
within various image backgrounds with different textures and appearances. It
can also mimic the stochastic variations of defects and offer flexible control
over the locations and categories of the generated defects within the image
background. Extensive experiments show that Defect-GAN is capable of
synthesizing various defects with superior diversity and fidelity. In addition,
the synthesized defect samples demonstrate their effectiveness in training
better defect inspection networks.
|
Let $\mathcal{G}$ denote the variety generated by infinite dimensional
Grassmann algebras; i.e., the collection of all unitary associative algebras
satisfying the identity $[[z_1,z_2],z_3]=0$, where $[z_i,z_j]=z_iz_j-z_jz_i$.
Consider the free algebra $F_3$ in $\mathcal{G}$ generated by
$X_3=\{x_1,x_2,x_3\}$. The commutator ideal $F_3'$ of the algebra $F_3$ has a
natural $K[X_3]$-module structure. We call an element $p\in F_3$ symmetric if
$p(x_1,x_2,x_3)=p(x_{\xi1},x_{\xi2},x_{\xi3})$ for each permutation $\xi\in
S_3$. Symmetric elements form the subalgebra $F_3^{S_3}$ of invariants of the
symmetric group $S_3$ in $F_3$. We give a free generating set for the
$K[X_3]^{S_3}$-module $(F_3')^{S_3}$.
|
We investigate the efficiency of two very different spoken term detection
approaches for transcription when the available data is insufficient to train a
robust ASR system. This work is grounded in a very low-resource language
documentation scenario where only a few minutes of recording have been
transcribed for a given language so far. Experiments on two oral languages show
that a pretrained universal phone recognizer, fine-tuned with only a few
minutes of target language speech, can be used for spoken term detection with a
better overall performance than a dynamic time warping approach. In addition,
we show that representing phoneme recognition ambiguity in a graph structure
can further boost the recall while maintaining high precision in the low
resource spoken term detection task.
|
Multilingual Transformer-based language models, usually pretrained on more
than 100 languages, have been shown to achieve outstanding results in a wide
range of cross-lingual transfer tasks. However, it remains unknown whether the
optimization for different languages conditions the capacity of the models to
generalize over syntactic structures, and how languages with syntactic
phenomena of different complexity are affected. In this work, we explore the
syntactic generalization capabilities of the monolingual and multilingual
versions of BERT and RoBERTa. More specifically, we evaluate the syntactic
generalization potential of the models on English and Spanish tests, comparing
the syntactic abilities of monolingual and multilingual models on the same
language (English), and of multilingual models on two different languages
(English and Spanish). For English, we use the available SyntaxGym test suite;
for Spanish, we introduce SyntaxGymES, a novel ensemble of targeted syntactic
tests in Spanish, designed to evaluate the syntactic generalization
capabilities of language models through the SyntaxGym online platform.
|
There has been much recent interest in two-sided markets and their dynamics.
In a rather general discrete-time feedback model, we establish
conditions that ensure that, for each agent, the long-run average allocation
of a resource to the agent converges to a limit that is independent of
any initial conditions. We call this property unique ergodicity.
Our model encompasses two-sided markets and more complicated interconnections
of workers and customers, such as in a supply chain. It allows for
non-linearity of the response functions of market participants. Finally, it
allows for uncertainty in the response of market participants by considering a
set of the possible responses to either price or other signals and a measure to
sample from these.
|
Social acceptability is an important consideration for HCI designers who
develop technologies for social contexts. However, the current theoretical
foundations of social acceptability research do not account for the complex
interactions among the actors in social situations and the specific role of
technology. In order to improve the understanding of how context shapes and is
shaped by situated technology interactions, we suggest reframing the social
space as a dynamic bundle of social practices and exploring it with simulation
studies using agent-based modeling. We outline possible research directions
that focus on specific interactions among practices as well as regularities in
emerging patterns.
|
In sharp contrast to the response of silica particles, we show that the
metal-dielectric Janus particles with boojum defects in a nematic liquid
crystal are self-propelled under the action of an electric field applied
perpendicular to the director. The particles can be transported along any
direction in the plane of the sample by selecting the appropriate orientation
of the Janus vector with respect to the director. The direction of motion of
the particles is controllable by varying the field amplitude and frequency. The
demonstrated command over the motility of the particles is promising for tunable
transport and microrobotic applications.
|
Sleep staging is fundamental for sleep assessment and disease diagnosis.
Although previous attempts to classify sleep stages have achieved high
classification performance, several challenges remain open: 1) How to
effectively extract salient waves in multimodal sleep data; 2) How to capture
the multi-scale transition rules among sleep stages; 3) How to adaptively seize
the key role of a specific modality for sleep staging. To address these
challenges, we propose SalientSleepNet, a multimodal salient wave detection
network for sleep staging. Specifically, SalientSleepNet is a temporal fully
convolutional network based on the $\rm U^2$-Net architecture that is
originally proposed for salient object detection in computer vision. It is
mainly composed of two independent $\rm U^2$-like streams to extract the
salient features from multimodal data, respectively. Meanwhile, the multi-scale
extraction module is designed to capture multi-scale transition rules among
sleep stages. Besides, the multimodal attention module is proposed to
adaptively capture valuable information from multimodal data for the specific
sleep stage. Experiments on two datasets demonstrate that SalientSleepNet
outperforms state-of-the-art baselines. It is worth noting that this model
has the fewest parameters among existing deep neural
network models.
|
Spin-phonon interaction is an important channel for spin and energy
relaxation in magnetic insulators. Understanding this interaction is critical
for developing magnetic insulator-based spintronic devices. Quantifying this
interaction in yttrium iron garnet (YIG), one of the most extensively
investigated magnetic insulators, remains challenging because of the large
number of atoms in a unit cell. Here, we report temperature-dependent and
polarization-resolved Raman measurements in a YIG bulk crystal. We first
classify the phonon modes based on their symmetry. We then develop a modified
mean-field theory and define a symmetry-adapted parameter to quantify
spin-phonon interaction in a phonon-mode specific way for the first time in
YIG. Based on this improved mean-field theory, we discover a positive
correlation between the spin-phonon interaction strength and the phonon
frequency.
|
We propose a method for the unsupervised reconstruction of a
temporally-coherent sequence of surfaces from a sequence of time-evolving point
clouds, yielding dense, semantically meaningful correspondences between all
keyframes. We represent the reconstructed surface as an atlas, using a neural
network. Using canonical correspondences defined via the atlas, we encourage
the reconstruction to be as isometric as possible across frames, leading to
semantically-meaningful reconstruction. Through experiments and comparisons, we
empirically show that our method achieves results that exceed the state of the
art in the accuracy of unsupervised correspondences and accuracy of surface
reconstruction.
|
Citrus segmentation is a key step of automatic citrus picking. While most
current image segmentation approaches achieve good segmentation results by
pixel-wise segmentation, these supervised learning-based methods require a
large amount of annotated data, and do not consider the continuous temporal
changes of citrus position in real-world applications. In this paper, we first
train a simple CNN with a small number of labelled citrus images in a
supervised manner, which can roughly predict the citrus location from each
frame. Then, we extend a state-of-the-art unsupervised learning approach to
pre-learn the citrus's potential movements between frames from unlabelled
citrus videos. To take advantage of both networks, we employ a multimodal
transformer to combine the supervised learned static information and the
unsupervised learned movement information. The experimental results show that
combining both networks raises the prediction accuracy to 88.3$\%$ IoU and
93.6$\%$ precision, outperforming the original supervised baseline by 1.2$\%$
and 2.4$\%$, respectively. Compared with most existing citrus segmentation
methods, our method uses a small amount of supervised data and a large amount
of unsupervised data, while learning the pixel-level location information and
the temporal information of citrus movement to enhance the segmentation
performance.
|
A bipartite experiment consists of one set of units being assigned treatments
and another set of units for which we measure outcomes. The two sets of units
are connected by a bipartite graph, governing how the treated units can affect
the outcome units. In this paper, we consider estimation of the average total
treatment effect in the bipartite experimental framework under a linear
exposure-response model. We introduce the Exposure Reweighted Linear (ERL)
estimator, and show that the estimator is unbiased, consistent and
asymptotically normal, provided that the bipartite graph is sufficiently
sparse. To facilitate inference, we introduce an unbiased and consistent
estimator of the variance of the ERL point estimator. In addition, we introduce
a cluster-based design, Exposure-Design, that uses heuristics to increase the
precision of the ERL estimator by realizing a desirable exposure distribution.
|
Given a stream of graph edges from a dynamic graph, how can we assign anomaly
scores to edges and subgraphs in an online manner, for the purpose of detecting
unusual behavior, using constant time and memory? For example, in intrusion
detection, existing work seeks to detect either anomalous edges or anomalous
subgraphs, but not both. In this paper, we first extend the count-min sketch
data structure to a higher-order sketch. This higher-order sketch has the
useful property of preserving the dense subgraph structure (dense subgraphs in
the input turn into dense submatrices in the data structure). We then propose
four online algorithms that utilize this enhanced data structure, which (a)
detect both edge and graph anomalies; (b) process each edge and graph in
constant memory and constant update time per newly arriving edge; and (c)
outperform state-of-the-art baselines on four real-world datasets. Our method
is the first streaming approach that incorporates dense subgraph search to
detect graph anomalies in constant memory and time.
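A minimal sketch of the general idea of lifting a count-min sketch to a higher-order (matrix-valued) sketch, so that a dense subgraph in the edge stream shows up as a dense submatrix: the hash family, table sizes, and update rule below are illustrative and not the paper's exact construction:

```python
import numpy as np

class HigherOrderSketch:
    """Count-min-style sketch over edges (u, v): each of the `depth` hash
    pairs maps an edge into a width x width count matrix, so a dense subgraph
    in the stream becomes a dense submatrix in the tables."""

    def __init__(self, depth=4, width=512, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.integers(1, 2**31 - 1, size=(depth, 2))
        self.b = rng.integers(0, 2**31 - 1, size=(depth, 2))
        self.width = width
        self.tables = np.zeros((depth, width, width), dtype=np.int64)

    def _h(self, x, d, k):
        # simple universal-style hash of a node id into [0, width)
        return (int(self.a[d, k]) * int(x) + int(self.b[d, k])) % (2**31 - 1) % self.width

    def update(self, u, v, c=1):
        for d in range(len(self.tables)):
            self.tables[d, self._h(u, d, 0), self._h(v, d, 1)] += c

    def estimate(self, u, v):
        # count-min guarantee: the minimum over tables upper-bounds the true count
        return min(self.tables[d, self._h(u, d, 0), self._h(v, d, 1)]
                   for d in range(len(self.tables)))
```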
|
Effectively modeling phenomena present in highly nonlinear dynamical systems
whilst also accurately quantifying uncertainty is a challenging task, which
often requires problem-specific techniques. We present a novel, domain-agnostic
approach to tackling this problem, using compositions of physics-informed
random features, derived from ordinary differential equations. The architecture
of our model leverages recent advances in approximate inference for deep
Gaussian processes, such as layer-wise weight-space approximations which allow
us to incorporate random Fourier features, and stochastic variational inference
for approximate Bayesian inference. We provide evidence that our model is
capable of capturing highly nonlinear behaviour in real-world multivariate time
series data. In addition, we find that our approach achieves comparable
performance to a number of other probabilistic models on benchmark regression
tasks.
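The layer-wise weight-space approximation mentioned above builds on random Fourier features; for concreteness, here is the classical RFF approximation to a squared-exponential kernel (not the paper's physics-informed features derived from ordinary differential equations):

```python
import numpy as np

def rff_features(X, n_features=500, lengthscale=1.0, seed=0):
    """Random Fourier features phi(x) such that
    phi(x) . phi(y) ~= exp(-||x - y||^2 / (2 * lengthscale^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Sanity check against the exact RBF kernel on random data:
X = np.random.default_rng(1).normal(size=(5, 3))
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
Phi = rff_features(X, n_features=20000)
K_rff = Phi @ Phi.T   # close to K_exact for large n_features
```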
|
Laser-induced ultrafast demagnetization has puzzled researchers around the
world for over two decades. Intrinsic complexity in electronic, magnetic, and
phononic subsystems is difficult to understand microscopically. So far it has
not been possible to explain demagnetization using a single mechanism, which
suggests that a crucial piece of information is still missing. In this paper, we return
to a fundamental aspect of physics: spin and its change within each band in the
entire Brillouin zone. We employ fcc Ni as an example and use an extremely
dense {\bf k} mesh to map out spin changes for every band close to the Fermi
level along all the high symmetry lines. To our surprise, spin angular momentum
at some special {\bf k} points abruptly changes from $\pm \hbar/2$ to $\mp
\hbar/2$ simply by moving from one crystal momentum point to the next. This
explains why intraband transitions, which the spin superdiffusion model is
based upon, can induce a sharp spin moment reduction, and why electric current
can change spin orientation in spintronics. These special {\bf k} points, which
are called spin Berry points, are not random and appear when several bands are
close to each other, so the Berry potential of spin majority states is
different from that of spin minority states. Although spin Berry points jump
within a single band, when we group several neighboring bands together they
form distinctive, smooth spin Berry lines. It is the band structure that
disrupts those lines. Spin Berry points are crucial to laser-induced ultrafast
demagnetization and spintronics.
|
Surface-response functions are one of the most promising routes for bridging
the gap between fully quantum-mechanical calculations and phenomenological
models in quantum nanoplasmonics. Among the currently available recipes
for obtaining such response functions, \emph{ab initio} calculations remain one
of the most predominant, wherein the surface-response functions are retrieved
via the metal's non-equilibrium response to an external perturbation. Here, we
present a complementary approach where one of the most appealing
surface-response functions, namely the Feibelman $d$-parameters, yield a finite
contribution even in the case where they are calculated directly from the
equilibrium properties described under the local-response approximation (LRA),
but with a spatially varying equilibrium electron density. Using model
calculations that mimic both spill-in and spill-out of the equilibrium electron
density, we show that the obtained $d$-parameters are in qualitative agreement
with more elaborate, but also more computationally demanding, \emph{ab initio}
methods. The analytical work presented here illustrates how microscopic
surface-response functions can emerge out of entirely local electrodynamic
considerations.
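For reference, the perpendicular Feibelman parameter is conventionally defined as the frequency-dependent centroid of the induced charge density; the paper's equilibrium, LRA-based route to this quantity is not reproduced here:

```latex
% Conventional definition of the perpendicular Feibelman d-parameter as the
% centroid of the induced surface charge density \rho_{\mathrm{ind}}(z,\omega):
d_\perp(\omega) \;=\;
\frac{\displaystyle\int \mathrm{d}z \; z\,\rho_{\mathrm{ind}}(z,\omega)}
     {\displaystyle\int \mathrm{d}z \; \rho_{\mathrm{ind}}(z,\omega)} .
```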
|
In this paper, a novel and robust algorithm is proposed for adaptive
beamforming based on the idea of reconstructing the autocorrelation sequence
(ACS) of a random process from a set of measured data. This is obtained from
the first column and the first row of the sample covariance matrix (SCM) after
averaging along its diagonals. Then, the power spectrum of the correlation
sequence is estimated using the discrete Fourier transform (DFT). The DFT
coefficients corresponding to the angles within the noise-plus-interference
region are used to reconstruct the noise-plus-interference covariance matrix
(NPICM), while the desired signal covariance matrix (DSCM) is estimated by
identifying and removing the noise-plus-interference component from the SCM. In
particular, the spatial power spectrum of the estimated received signal is
utilized to compute the correlation sequence corresponding to the
noise-plus-interference in which the dominant DFT coefficient of the
noise-plus-interference is captured. A key advantage of the proposed adaptive
beamforming is that very little prior information is required. Specifically, only
imprecise knowledge of the array geometry and of the angular sectors in which
the interferences are located is needed. Simulation results demonstrate that
compared with previous reconstruction-based beamformers, the proposed approach
can achieve better overall performance in the case of multiple mismatches over
a very large range of input signal-to-noise ratios.
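A minimal numpy sketch of the two steps named above, diagonal averaging of the SCM to estimate the autocorrelation sequence and a DFT-based spatial power spectrum, is given below. Array-geometry handling, the NPICM/DSCM reconstruction, and the final beamformer weights are omitted, and the function names are illustrative:

```python
import numpy as np

def acs_from_scm(R):
    """Average the sample covariance matrix along its diagonals to obtain a
    Toeplitz-consistent estimate of the autocorrelation sequence r[0..M-1]."""
    M = R.shape[0]
    return np.array([np.mean(np.diagonal(R, offset=-k)) for k in range(M)])

def spatial_power_spectrum(R, n_fft=256):
    """DFT-based power spectrum of the reconstructed correlation sequence."""
    r = acs_from_scm(R)
    # two-sided sequence r[-(M-1)..M-1] built via conjugate symmetry
    r_full = np.concatenate([np.conj(r[:0:-1]), r])
    return np.abs(np.fft.fft(r_full, n_fft)), np.fft.fftfreq(n_fft)

# R would typically be the SCM (1/N) * X @ X.conj().T from N array snapshots X.
```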
|
Neural models have transformed the fundamental information retrieval problem
of mapping a query to a giant set of items. However, the need for efficient and
low latency inference forces the community to reconsider efficient approximate
near-neighbor search in the item space. To this end, learning to index is
gaining much interest in recent times. Methods have to trade off obtaining
high accuracy against maintaining load balance and scalability in distributed
settings. We propose a novel approach called IRLI (pronounced `early'), which
iteratively partitions the items by learning the relevant buckets directly from
the query-item relevance data. Furthermore, IRLI employs a superior
power-of-$k$-choices based load balancing strategy. We mathematically show that
IRLI retrieves the correct item with high probability under very natural
assumptions and provides superior load balancing. IRLI surpasses the best
baseline's precision on multi-label classification while being $5x$ faster on
inference. For near-neighbor search tasks, the same method outperforms the
state-of-the-art learned hashing approach NeuralLSH by requiring only about
one-sixth of the candidates for the same recall. IRLI is both data and model
parallel, making it ideal for distributed GPU implementation. We demonstrate
this advantage by indexing 100 million dense vectors and surpassing the popular
FAISS library by >10% on recall.
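IRLI's load balancing is described as power-of-k-choices; a generic sketch of that rule follows, where the relevance scores are a placeholder for the learned bucket predictions rather than IRLI's actual model:

```python
import numpy as np

def assign_power_of_k(scores, loads, k=2):
    """Power-of-k-choices assignment: among the k buckets with the highest
    predicted relevance `scores`, insert into the least-loaded one."""
    top_k = np.argpartition(scores, -k)[-k:]      # k candidate buckets
    chosen = top_k[np.argmin(loads[top_k])]       # least-loaded candidate
    loads[chosen] += 1
    return chosen

# Example: 8 buckets, random relevance scores per item
rng = np.random.default_rng(0)
loads = np.zeros(8)
for _ in range(100):
    assign_power_of_k(rng.random(8), loads, k=2)
print(loads)   # load spread is tighter than always picking argmax(scores)
```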
|
The magnetic ground state of polycrystalline N\'eel skyrmion hosting material
GaV$_4$S$_8$ has been investigated using ac susceptibility and powder neutron
diffraction. In the absence of an applied magnetic field GaV$_4$S$_8$ undergoes
a transition from a paramagnetic to a cycloidal state below 13~K and then to a
ferromagnetic-like state below 6~K. With evidence from ac susceptibility and
powder neutron diffraction, we have identified the commensurate magnetic
structure at 1.5 K, with ordered magnetic moments of $0.23(2)~\mu_{\mathrm{B}}$
on the V1 sites and $0.22(1)~\mu_{\mathrm{B}}$ on the V2 sites. These moments
have ferromagnetic-like alignment but with a 39(8)$^{\circ}$ canting of the
magnetic moments on the V2 sites away from the V$_4$ cluster. In the
incommensurate magnetic phase that exists between 6 and 13 K, we provide a
thorough and careful analysis of the cycloidal magnetic structure exhibited by
this material using powder neutron diffraction.
|
Deep learning has advanced from fully connected architectures to structured
models organized into components, e.g., the transformer composed of positional
elements, modular architectures divided into slots, and graph neural nets made
up of nodes. In structured models, an interesting question is how to conduct
dynamic and possibly sparse communication among the separate components. Here,
we explore the hypothesis that restricting the transmitted information among
components to discrete representations is a beneficial bottleneck. The
motivating intuition is human language in which communication occurs through
discrete symbols. Even though individuals have different understandings of what
a "cat" is based on their specific experiences, the shared discrete token makes
it possible for communication among individuals to be unimpeded by individual
differences in internal representation. To discretize the values of concepts
dynamically communicated among specialist components, we extend the
quantization mechanism from the Vector-Quantized Variational Autoencoder to
multi-headed discretization with shared codebooks and use it for
discrete-valued neural communication (DVNC). Our experiments show that DVNC
substantially improves systematic generalization in a variety of architectures
-- transformers, modular architectures, and graph neural networks. We also show
that the DVNC is robust to the choice of hyperparameters, making the method
very useful in practice. Moreover, we establish a theoretical justification of
our discretization process, proving that it has the ability to increase noise
robustness and reduce the underlying dimensionality of the model.
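A minimal numpy sketch of the discretization step described above: split the communicated vector into heads, snap each head to its nearest entry of a codebook shared across heads, and concatenate. The straight-through gradient and codebook learning of the VQ-VAE are omitted, and all sizes are illustrative:

```python
import numpy as np

def dvnc_quantize(h, codebook, n_heads):
    """Multi-headed discretization with a shared codebook.
    h:        (batch, dim) continuous vectors to be communicated
    codebook: (n_codes, dim // n_heads) shared across all heads
    Returns the quantized vectors and the chosen code index per head."""
    batch, dim = h.shape
    head_dim = dim // n_heads
    heads = h.reshape(batch, n_heads, head_dim)
    # squared distance of every head to every codebook entry
    d = ((heads[:, :, None, :] - codebook[None, None]) ** 2).sum(-1)
    idx = d.argmin(-1)                               # (batch, n_heads)
    quantized = codebook[idx].reshape(batch, dim)    # concatenate heads back
    return quantized, idx

# Example: 4 heads communicating through a codebook of 16 discrete symbols
rng = np.random.default_rng(0)
h = rng.normal(size=(2, 32))
codebook = rng.normal(size=(16, 8))
q, idx = dvnc_quantize(h, codebook, n_heads=4)
```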
|
Optimal stopping is the problem of deciding the right time at which to take a
particular action in a stochastic system, in order to maximize an expected
reward. It has many applications in areas such as finance, healthcare, and
statistics. In this paper, we employ deep Reinforcement Learning (RL) to learn
optimal stopping policies in two financial engineering applications: namely
option pricing, and optimal option exercise. We present for the first time a
comprehensive empirical evaluation of the quality of optimal stopping policies
identified by three state-of-the-art deep RL algorithms: double deep Q-learning
(DDQN), categorical distributional RL (C51), and Implicit Quantile Networks
(IQN). In the case of option pricing, our findings indicate that in a
theoretical Black-Scholes environment, IQN successfully identifies nearly
optimal prices. On the other hand, it is slightly outperformed by C51 when
confronted with real stock data movements in a put option exercise problem that
involves assets from the S&P500 index. More importantly, the C51 algorithm is
able to identify an optimal stopping policy that achieves 8% more out-of-sample
returns than the best of four natural benchmark policies. We conclude with a
discussion of our findings which should pave the way for relevant future
research.
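Of the three algorithms evaluated, DDQN has the simplest target computation; a minimal sketch of the standard double-DQN target follows, where the q_online and q_target callables are placeholders and the stopping-specific state/action encoding used in the paper is not reproduced:

```python
import numpy as np

def ddqn_targets(rewards, next_states, dones, q_online, q_target, gamma=0.99):
    """Double DQN targets: the online network selects the next action,
    the target network evaluates it.
    q_online(s), q_target(s): placeholders returning an array of Q-values
    over actions (e.g. {continue, stop}) for a batch of states s."""
    a_star = np.argmax(q_online(next_states), axis=1)                 # selection
    q_next = q_target(next_states)[np.arange(len(rewards)), a_star]   # evaluation
    return rewards + gamma * (1.0 - dones) * q_next
```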
|
We present the sympathetic eruption of a standard and a blowout coronal jet
originating from two adjacent coronal bright points (CBP1 and CBP2) in a polar
coronal hole, using soft X-ray and extreme ultraviolet observations
respectively taken by the Hinode and the Solar Dynamic Observatory. In the
event, a collimated jet with obvious westward lateral motion first launched
from CBP1, during which a small bright point appeared around CBP1's east end,
and magnetic flux cancellation was observed within the eruption source region.
Based on these characteristics, we interpret the observed jet as a standard jet
associated with photospheric magnetic flux cancellation. About 15 minutes later,
the westward moving jet spire interacted with CBP2 and resulted in magnetic
reconnection between them, which caused the formation of the second jet above
CBP2 and the appearance of a bright loop system in-between the two CBPs. In
addition, we observed the writhing, kinking, and violent eruption of a small
kink structure close to CBP2's west end but inside the jet-base, which made the
second jet brighter and broader than the first one. These features suggest that
the second jet should be a blowout jet triggered by the magnetic reconnection
between CBP2 and the spire of the first jet. We conclude that the two
successive jets were physically connected to each other rather than a temporal
coincidence, and this observation also suggests that coronal jets can be
triggered by external eruptions or disturbances, besides internal magnetic
activities or magnetohydrodynamic instabilities.
|
This paper compares notions of double sliceness for links. The main result is
to show that a large family of 2-component Montesinos links are not strongly
doubly slice despite being weakly doubly slice and having doubly slice
components. Our principal obstruction to strong double slicing comes by
considering branched double covers. To this end we prove a result classifying
Seifert fibered spaces which admit smooth embeddings into integer homology
$S^1 \times S^3$s by maps inducing surjections on the first homology group. A
number of other results and examples pertaining to doubly slice links are also
given.
|
To boost the capacity of cellular systems, operators have in the past reused
the same licensed spectrum by deploying 4G LTE small cells (femtocells).
Over time, however, this licensed small-cell spectrum has become insufficient
to satisfy future applications such as augmented reality (AR) and virtual
reality (VR). Hence, cellular operators looked for alternate unlicensed
spectrum in the 5 GHz Wi-Fi band, which 3GPP later named LTE Licensed Assisted
Access (LAA). The recent and ongoing rollout of LAA deployments (in developed
nations such as the US) provides an opportunity to understand the ground truth
of coexistence in depth. This paper gives a high-level overview of my past,
present, and future research on the benefits of small cells. In the future, the
focus shifts to the latest unlicensed band, 6 GHz, where the latest Wi-Fi
version, 802.11ax, will coexist with the latest cellular technology, 5G New
Radio (NR), in unlicensed spectrum.
|
Controlling the activity of proteins with azobenzene photoswitches is a
potent tool for manipulating their biological function. With the help of light,
one can change, e.g., binding affinities, control allostery, or tamper with
complex biological processes. Additionally, due to their intrinsically fast
photoisomerisation, azobenzene photoswitches can serve as triggers to initiate
out-of-equilibrium processes. Such switching of the activity, therefore,
initiates a cascade of conformational events, which can only be accessed with
time-resolved methods. In this Review, we will show how combining the potency
of azobenzene photoswitching with transient spectroscopic techniques helps to
disclose the order of events and provide an experimental observation of
biomolecular interactions in real time. This will ultimately help us to
understand how proteins accommodate, adapt, and readjust their structure in
response to an incoming signal, and it will complete our knowledge of the
dynamical character of proteins.
|
Nonuniform structure of low-density nuclear matter, known as nuclear pasta,
is expected to appear not only in the inner crust of neutron stars but also in
core-collapse supernova explosions and neutron-star mergers. We perform fully
three-dimensional calculations of inhomogeneous nuclear matter and neutron-star
matter in the low-density region using the Thomas-Fermi approximation. The
nuclear interaction is described in the relativistic mean-field approach with
the point-coupling interaction, where the meson exchange in each channel is
replaced by the contact interaction between nucleons. We investigate the
influence of nuclear symmetry energy and its density dependence on pasta
structures by introducing a coupling term between the isoscalar-vector and
isovector-vector interactions. It is found that the properties of pasta phases
in the neutron-rich matter are strongly dependent on the symmetry energy and
its slope. In addition to typical shapes like droplets, rods, slabs, tubes, and
bubbles, some intermediate pasta structures are also observed in cold stellar
matter with a relatively large proton fraction. We find that nonspherical
shapes are unlikely to be formed in neutron-star crusts, since the proton
fraction obtained in $\beta$ equilibrium is rather small. The inner crust
properties may lead to a visible difference in the neutron-star radius.
|
This paper addresses the Mountain Pass Theorem for locally Lipschitz
functions on finite-dimensional vector spaces in terms of tangencies. Namely,
let $f \colon \mathbb R^n \to \mathbb R$ be a locally Lipschitz function with a
mountain pass geometry. Let $$c := \inf_{\gamma \in \mathcal
A}\max_{t\in[0,1]}f(\gamma(t)),$$ where $\mathcal{A}$ is the set of all
continuous paths joining $x^*$ to $y^*.$ We show that either $c$ is a critical
value of $f$ or $c$ is a tangency value at infinity of $f.$ This reduces to the
Mountain Pass Theorem of Ambrosetti and Rabinowitz in the case where the
function $f$ is definable (such as, semi-algebraic) in an o-minimal structure.
|
Recent studies on metamorphic petrology as well as microstructural
observations suggest the influence of mechanical effects upon chemically active
metamorphic minerals. Thus, the understanding of such a coupling is crucial to
describe the dynamics of geomaterials. In this effort, we derive a
thermodynamically-consistent framework to characterize the evolution of
chemically active minerals. We model the metamorphic mineral assemblages as a
solid-species solution where the species mass transport and chemical reaction
drive the stress generation process. The theoretical foundations of the
framework rely on modern continuum mechanics, thermodynamics far from
equilibrium, and the phase-field model. We treat the mineral solid solution as
a continuum body, and following the Larch\'e and Cahn network model, we define
displacement and strain fields. Consequently, we obtain a set of coupled
chemo-mechanical equations. We use the aforementioned framework to study single
minerals as solid solutions during metamorphism. Furthermore, we emphasise the
use of the phase-field framework as a promising tool to model complex
multi-physics processes in geoscience. Without loss of generality, we use
common physical and chemical parameters found in the geoscience literature to
portray a comprehensive view of the underlying physics. Thereby, we carry out
2D and 3D numerical simulations using material parameters for metamorphic
minerals to showcase and verify the chemo-mechanical interactions of mineral
solid solutions that undergo spinodal decomposition, chemical reactions, and
deformation.
|
Reversible covalent kinase inhibitors (RCKIs) are a class of novel kinase
inhibitors attracting increasing attention because they simultaneously show the
selectivity of covalent kinase inhibitors, yet avoid permanent
protein-modification-induced adverse effects. Over the last decade, RCKIs have
been reported to target different kinases, including atypical kinases.
Currently, three RCKIs are undergoing clinical trials to treat specific
diseases, for example, Pemphigus, an autoimmune disorder. In this perspective,
first, RCKIs are systematically summarized, including characteristics of
electrophilic groups, chemical scaffolds, nucleophilic residues, and binding
modes. Second, we provide insights into privileged electrophiles, the
distribution of nucleophiles and hence effective design strategies for RCKIs.
Finally, we provide a brief perspective on future design strategies for RCKIs,
including those that target proteins other than kinases.
|
We prove that the sublinearly Morse boundary of every known cubulated group
continuously injects in the Gromov boundary of a certain hyperbolic graph. We
also show that for all CAT(0) cube complexes, convergence to sublinearly Morse
geodesic rays has a simple combinatorial description using the hyperplanes
crossed by such sequences. As an application of this combinatorial description,
we show that a certain subspace of the Roller boundary continuously surjects onto
the subspace of the visual boundary consisting of sublinearly Morse geodesic
rays.
|
Depth completion aims to generate a dense depth map from the sparse depth map
and aligned RGB image. However, current depth completion methods use extremely
expensive 64-line LiDAR (about $100,000) to obtain sparse depth maps, which will
limit their application scenarios. Compared with the 64-line LiDAR, the
single-line LiDAR is much less expensive and much more robust. Therefore, we
propose a method to tackle the problem of single-line depth completion, in
which we aim to generate a dense depth map from the single-line LiDAR info and
the aligned RGB image. A single-line depth completion dataset is proposed based
on the existing 64-line depth completion dataset (KITTI). A network called
Semantic Guided Two-Branch Network (SGTBN), which contains global and local
branches to extract and fuse global and local information, is proposed for this task. A
Semantic guided depth upsampling module is used in our network to make full use
of the semantic information in RGB images. In addition to the usual MSE loss, we add the
virtual normal loss to increase the constraint of high-order 3D geometry in our
network. Our network outperforms the state-of-the-art in the single-line depth
completion task. Besides, compared with the monocular depth estimation, our
method also has significant advantages in precision and model size.
|
Spectral observations below Lyman-alpha are now obtained with the Cosmic
Origin Spectrograph (COS) on the Hubble Space Telescope (HST). It is therefore
necessary to provide an accurate treatment of the blue wing of the Lyman-alpha
line that enables correct calculations of radiative transport in DA and DBA
white dwarf stars. On the theoretical front, we recently developed highly
accurate H-He potential energies for the hydrogen 1s, 2s, and 2p states.
Nevertheless, an uncertainty remained about the asymptotic correlation of the
Sigma states and the electronic dipole transition moments. A similar difficulty
occurred in our first calculations for the resonance broadening of hydrogen
perturbed by collisions with neutral H atoms. The aim of this paper is twofold.
First, we clarify the question of the asymptotic correlation of the Sigma
states, and we show that relativistic contributions, even very tiny ones, may need
to be accounted for in a correct long-range and asymptotic description of the
states because of the specific 2s-2p Coulomb degeneracy in hydrogen. This
effect of relativistic corrections, inducing a small splitting of the 2s and 2p
states of H, is shown to be important for the Sigma-Sigma transition dipole
moments in H-He and is also discussed in H-H. Second, we use existing (H-H) and
newly determined (H-He) accurate potentials and properties to provide a
theoretical investigation of the collisional effects on the blue wing of the
Lyman-alpha line of H perturbed by He and H. We study the relative
contributions in the blue wing of the H and He atoms according to their
relative densities. Finally, we compare our results with recent COS
observations and propose an assignment for a feature centered at 1190 A.
|
We apply a recently developed unsupervised machine learning scheme for local
atomic environments to characterize large-scale, disordered aggregates formed
by sequence-defined macromolecules. This method provides new insight into the
structure of these disordered, dilute aggregates, which has proven difficult to
understand using collective variables manually derived from expert knowledge.
In contrast to such conventional order parameters, we are able to classify the
global aggregate structure directly using descriptions of the local
environments. The resulting characterization provides a deeper understanding of
the range of possible self-assembled structures and their relationships to each
other. We also provide a detailed analysis of the effects of finite system
size, stochasticity, and kinetics of these aggregates based on the learned
collective variables. Interestingly, we find that the spatiotemporal evolution
of systems in the learned latent space is smooth and continuous, despite being
derived from only a single snapshot from each of about 1000 monomer sequences.
These results demonstrate the insight which can be gained by applying
unsupervised machine learning to soft matter systems, especially when suitable
order parameters are not known.
|
Recently, significant progress has been made in single-view depth estimation
thanks to increasingly large and diverse depth datasets. However, these
datasets are largely limited to specific application domains (e.g. indoor,
autonomous driving) or static in-the-wild scenes due to hardware constraints or
technical limitations of 3D reconstruction. In this paper, we introduce the
first depth dataset DynOcc consisting of dynamic in-the-wild scenes. Our
approach leverages the occlusion cues in these dynamic scenes to infer depth
relationships between points of selected video frames. To achieve accurate
occlusion detection and depth order estimation, we employ a novel occlusion
boundary detection, filtering and thinning scheme followed by a robust
foreground/background classification method. In total, our DynOcc dataset
contains 22M depth pairs from 91K frames from a diverse set of videos. Using
our dataset we achieved state-of-the-art results measured in weighted human
disagreement rate (WHDR). We also show that the inferred depth maps trained
with DynOcc can preserve sharper depth boundaries.
|
Large global companies need to speed up their innovation activities to
increase competitive advantage. However, such companies' organizational
structures impede their ability to capture trends they are well aware of due to
bureaucracy, slow decision-making, distributed departments, and distributed
processes. One way to strengthen the innovation capability is through fostering
internal startups. We report findings from an embedded multiple-case study of
five internal startups in a globally distributed company to identify barriers
to software product innovation: late involvement of software developers, a
missing or unclear executive sponsor, yearly budgeting and planning,
unclear decision-making authority, and a lack of digital infrastructure for
experimentation and access to data from external actors. Drawing on the
framework of continuous software engineering proposed by Fitzgerald and Stol,
we discuss the role of BizDev in software product innovation. We suggest that
lack of continuity, rather than the lack of speed, is an ultimate challenge for
internal startups in large global companies.
|
We study the background (equilibrium), linear and nonlinear spin currents in
2D Rashba spin-orbit coupled systems with Zeeman splitting and in 3D
noncentrosymmetric metals using modified spin current operator by inclusion of
the anomalous velocity. The linear spin Hall current arises due to the
anomalous velocity of charge carriers induced by the Berry curvature. The
nonlinear spin current occurs due to the band velocity and/or the anomalous
velocity. For 2D Rashba systems, the background spin current saturates at high
Fermi energy (independent of the Zeeman coupling), linear spin current exhibits
a plateau at the Zeeman gap and nonlinear spin currents are peaked at the gap
edges. The magnitude of the nonlinear spin current peaks enhances with the
strength of Zeeman interaction. The linear spin current is polarized out of
plane, while the nonlinear ones are polarized in-plane. We observe a pure
anomalous nonlinear spin current with spin polarization along the direction of
propagation. In 3D noncentrosymmetric metals, background and linear spin
currents are monotonically increasing functions of Fermi energy, while
nonlinear spin currents vary non-monotonically as a function of Fermi energy
and are independent of the Berry curvature. These findings may provide useful
information to manipulate spin currents in Rashba spin-orbit coupled systems.
|
Using a navigation process with the datum $(F,V)$, in which $F$ is a Finsler
metric and the smooth tangent vector field $V$ satisfies $F(-V(x))>1$
everywhere, a Lorentz Finsler metric $\tilde{F}$ can be induced. Isoparametric
functions and isoparametric hypersurfaces with or without involving a smooth
measure can be defined for $\tilde{F}$. When the vector field $V$ in the
navigation datum is homothetic, we prove the local correspondences between
isoparametric functions and isoparametric hypersurfaces before and after this
navigation process. Using these correspondences, we provide some examples of
isoparametric functions and isoparametric hypersurfaces on a Funk space of
Lorentz Randers type.
|
We study the high frequency Hall conductivity in a two-dimensional (2D) model
of conduction electrons coupled to a background magnetic skyrmion texture via
an effective Hund's coupling term. For an ordered skyrmion crystal, a Kubo
formula calculation using the basis of skyrmion crystal Chern bands reveals a
resonant Hall response at a frequency set by the Hund's coupling:
$\hbar\omega_{\text{res}} \approx J_H$. A complementary real-space Kubo formula
calculation for an isolated skyrmion in a box reveals a similar resonant Hall
response. A linear relation between the area under the Hall resonant curve and
the skyrmion density is discovered numerically and is further elucidated using
a gradient expansion which is valid for smooth textures and a local
approximation based on a spin-trimer calculation. We point out the issue of
distinguishing this skyrmion contribution from a similar feature arising from
spin-orbit interactions, as demonstrated in a model for Rashba spin-orbit
coupled electrons in a collinear ferromagnet, which is analogous to the
difficulty of unambiguously separating the d.c. topological Hall effect from
the anomalous Hall effect. The resonant feature in the high frequency
topological Hall effect is proposed to provide a potentially useful local
optical signature of skyrmions via probes such as scanning magneto-optical Kerr
microscopy.
|
We report on observations of the active K2 dwarf $\epsilon$ Eridani based on
contemporaneous SPIRou, NARVAL, and TESS data obtained over two months in late
2018, when the activity of the star was reported to be in a non-cyclic phase.
We first recover the fundamental parameters of the target from both visible and
nIR spectral fitting. The large-scale magnetic field is investigated from
polarimetric data. From unpolarized spectra, we estimate the total magnetic
flux through Zeeman broadening of magnetically sensitive nIR lines and the
chromospheric emission using the CaII H & K lines. The TESS photometric
monitoring is modeled with pseudo-periodic Gaussian Process Regression.
Fundamental parameters of $\epsilon$ Eridani derived from visible and
near-infrared wavelengths provide us with consistent results, also in agreement
with published values. We report a progressive increase of macroturbulence
towards larger nIR wavelengths. Zeeman broadening of individual lines
highlights an unsigned surface magnetic field $B_{\rm mono} = 1.90 \pm 0.13$
kG, with a filling factor $f = 12.5 \pm 1.7$% (unsigned magnetic flux $Bf = 237
\pm 36$ G). The large-scale magnetic field geometry, chromospheric emission,
and broadband photometry display clear signs of non-rotational evolution over
the course of data collection. Characteristic decay times deduced from the
light curve and longitudinal field measurements fall in the range 30-40 d,
while the characteristic timescale of surface differential rotation, as derived
through the evolution of the magnetic geometry, is equal to $57 \pm 5$ d. The
large-scale magnetic field exhibits a combination of properties not observed
previously for $\epsilon$ Eridani, with a surface field among the weakest
previously reported, but also mostly axisymmetric, and dominated by a toroidal
component.
|
Maintenance of existing software requires a large amount of time for
comprehending the source code. The architecture of a software system, however, may not
be clear to maintainers if up-to-date documentation is not available.
Software clustering is often used as a remodularisation and architecture
recovery technique to help recover a semantic representation of the software
design. Due to the diverse domains, structure, and behaviour of software
systems, the suitability of different clustering algorithms for different
software systems has not been investigated thoroughly. Research that introduces new
clustering techniques usually validates the approaches on a specific domain,
which might limit its generalisability. If the chosen test subjects could only
represent a narrow perspective of the whole picture, researchers might risk not
being able to address the external validity of their findings. This work aims
to fill this gap by introducing a new approach, Explaining Software Clustering
for Remodularisation, to evaluate the effectiveness of different software
clustering approaches. This work focuses on hierarchical clustering and Bunch
clustering algorithms and provides information about their suitability
according to the features of the software, which as a consequence, enables the
selection of the most suitable algorithm and configuration from our existing
pool of choices for a particular software system. The proposed framework is
tested on 30 open source software systems with varying sizes and domains, and
demonstrates that it can characterise both the strengths and weaknesses of the
analysed software clustering algorithms using software features extracted from
the code. The proposed approach also provides a better understanding of the
algorithms' behaviour through the application of dimensionality reduction
techniques.
|
Cleaner analytic techniques for quantifying compounds in dense suspensions are
needed for wastewater and environmental analysis, chemical or bio-conversion
process monitoring, biomedical diagnostics, and food quality control, among others.
In this work, we introduce a green, fast, one-step method called nanoextraction
for extraction and detection of target analytes from sub-milliliter dense
suspensions using surface nanodroplets without toxic solvents and pre-removal
of the solid contents. With nanoextraction, we achieve a limit of detection
(LOD) of 10^(-9) M for a fluorescent model analyte obtained from a particle
suspension sample. This LOD is lower than that in water without particles
(10^(-8) M), potentially due to the interaction of the particles and the analyte.
The high particle concentration in the suspension sample thus does not reduce
the extraction efficiency, although the extraction process was slowed down by up
to 5 min. As proof of principle, we demonstrate the nanoextraction for
quantification of model compounds in wastewater slurry containing 30 wt% sands
and oily components (i.e. heavy oils). The nanoextraction and detection
technology developed in this work may be used as a fast analytic technology for
complex slurry samples in environmental and industrial waste, or in biomedical
diagnostics.
|