Fluctuations of the glacier calving front have an important influence on
the ice flow of whole glacier systems. It is therefore important to precisely
monitor the position of the calving front. However, the manual delineation of
SAR images is a difficult, laborious and subjective task. Convolutional neural
networks have previously shown promising results in automating glacier
segmentation in SAR images, making them worth exploring further. In this work,
we propose to compute uncertainty and use it
in an Uncertainty Optimization regime as a novel two-stage process. By using
dropout as a random sampling layer in a U-Net architecture, we create a
probabilistic Bayesian Neural Network. With several forward passes, we create a
sampling distribution, which can estimate the model uncertainty for each pixel
in the segmentation mask. The additional uncertainty map information can serve
as a guideline for the experts in the manual annotation of the data.
Furthermore, feeding the uncertainty map to the network leads to 95.24% Dice
similarity, which is an overall improvement in the segmentation performance
compared to the state-of-the-art deterministic U-Net-based glacier segmentation
pipelines.
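To make the sampling step concrete, here is a minimal sketch of Monte Carlo dropout inference (PyTorch, assuming a hypothetical `unet` module containing dropout layers; not the authors' exact pipeline):

```python
# Sketch: Monte Carlo dropout for per-pixel segmentation uncertainty.
import torch

def mc_dropout_segmentation(unet, image, n_samples=20):
    unet.eval()
    for m in unet.modules():  # keep only the dropout layers stochastic
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(unet(image))
                             for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)  # mask estimate, uncertainty map
```

The resulting uncertainty map can then be fed back to the network, e.g. as an extra input channel, in the second, uncertainty-optimization stage.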
|
We study nested variational inequalities, which are variational inequalities
whose feasible set is the solution set of another variational inequality. We
present a projected averaging Tikhonov algorithm requiring the weakest
conditions in the literature to guarantee convergence to solutions of the
nested variational inequality. Specifically, we only need monotonicity of the
upper- and the lower-level variational inequalities. Also, we provide the first
complexity analysis for nested variational inequalities considering optimality
of both the upper- and lower-level.
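A schematic form of such a projected Tikhonov iteration (an illustrative sketch only, with assumed step-size and regularization sequences; the paper's exact averaging weights may differ):

```python
# Sketch: projected averaging Tikhonov iteration for a nested VI with
# monotone lower-level operator G and upper-level operator F.
import numpy as np

def projected_averaging_tikhonov(G, F, proj_C, x0, n_iter=10_000):
    x = x0.copy()
    x_bar, weight = np.zeros_like(x0), 0.0
    for k in range(1, n_iter + 1):
        tau = 1.0 / np.sqrt(k)       # diminishing step size (assumed)
        eps = 1.0 / k ** 0.25        # vanishing Tikhonov weight (assumed)
        x = proj_C(x - tau * (G(x) + eps * F(x)))
        x_bar = (weight * x_bar + tau * eps * x) / (weight + tau * eps)
        weight += tau * eps
    return x_bar                     # weighted average of the iterates
```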
|
Evolving trees arise in many real-life scenarios, from computer file systems
and dynamic call graphs to fake news propagation and disease spread. Most
layout algorithms for static trees, however, do not work well in an evolving
setting (e.g., they are not designed to be stable between time steps). Dynamic
graph layout algorithms are better suited to this task, although they often
introduce unnecessary edge crossings. With this in mind, we propose two methods
for visualizing evolving trees that guarantee no edge crossings, while
optimizing (1) desired edge length realization, (2) layout compactness, and (3)
stability. We evaluate the two new methods, along with four prior approaches
(two static and two dynamic), on real-world datasets using quantitative
metrics: stress, desired edge length realization, layout compactness,
stability, and running time. The new methods are fully functional and available
on GitHub.
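For reference, the stress of a layout $X$ is commonly defined as follows (one standard graph-drawing convention; the evaluation may use a normalized variant):

```latex
\mathrm{stress}(X) = \sum_{i<j} w_{ij}\,\bigl(\lVert X_i - X_j\rVert - d_{ij}\bigr)^{2},
\qquad w_{ij} = d_{ij}^{-2},
```

where $d_{ij}$ is the graph-theoretic distance between nodes $i$ and $j$.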
|
We discuss a remarkable correspondence between the description of Black Holes
as highly occupied condensates of $N$ weakly interacting gravitons and that of
Color Glass Condensates (CGCs) as highly occupied gluon states. In both cases,
the dynamics of "wee partons" in Regge asymptotics is controlled by emergent
semi-hard scales that lead to perturbative unitarization and classicalization
of $2\rightarrow N$ particle amplitudes at weak coupling. In particular, they
attain a maximal entropy permitted by unitarity, bounded by the inverse
coupling $\alpha$ of the respective constituents. Strikingly, this entropy is
equal to the area measured in units of the Goldstone constant corresponding to
the spontaneous breaking of Poincar{\'{e}} symmetry by the corresponding
graviton or gluon condensate. In gravity, the Goldstone constant is the Planck
scale, and gives rise to the Bekenstein-Hawking entropy. Likewise, in the CGC,
the corresponding Goldstone scale is determined by the onset of gluon
screening. We point to further similarities between Black Hole formation,
thermalization and decay and those of the Glasma matter formed from colliding
CGCs in ultrarelativistic nuclear collisions, which decays into a Quark-Gluon
Plasma.
|
In this study, we present the ro-vibrationally resolved gas-phase spectrum of
the diatomic molecule TiO around 1000\,cm$^{-1}$. Molecules were produced in a
laser ablation source by vaporizing a pure titanium sample in the atmosphere of
gaseous nitrous oxide. Adiabatically expanded gas, containing TiO, formed a
supersonic jet and was probed perpendicularly to its propagation by infrared
radiation from quantum cascade lasers. Fundamental bands of $^{46-50}$TiO and
vibrational hotbands of $^{48}$TiO are identified and analyzed. In a
mass-independent fitting procedure combining the new infrared data with pure
rotational and electronic transitions from the literature, a Dunham-like
parameterization is obtained. From the present data set, the multi-isotopic
analysis allows us to determine the spin-rotation coupling constant $\gamma$ and
the Born-Oppenheimer correction coefficient $\Delta_{\rm U_{10}}^{\mathrm{Ti}}$
for the first time. The parameter set enables the calculation of the Born-Oppenheimer
correction coefficients $\Delta_{\rm U_{02}}^{\mathrm{Ti}}$ and $\Delta_{\rm
U_{02}}^{\mathrm{O}}$. In addition, the vibrational transition moments for the
observed vibrational transitions are reported.
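For context, a Dunham-type mass-independent parameterization conventionally has the form (a sketch of the usual conventions; the exact notation of the fit may differ):

```latex
E(v,J) = \sum_{k,l} Y_{k,l}\,\bigl(v+\tfrac{1}{2}\bigr)^{k}\,\bigl[J(J+1)\bigr]^{l},
\qquad
Y_{k,l} = \mu^{-(k+2l)/2}\,U_{k,l}
\Bigl(1+\frac{m_e\,\Delta_{k,l}^{\mathrm{Ti}}}{M_{\mathrm{Ti}}}
       +\frac{m_e\,\Delta_{k,l}^{\mathrm{O}}}{M_{\mathrm{O}}}\Bigr),
```

where $\mu$ is the reduced mass, the $U_{k,l}$ are isotopically invariant parameters, and the $\Delta$ terms encode the Born-Oppenheimer breakdown corrections.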
|
Most real world applications require dealing with stochasticity like sensor
noise or predictive uncertainty, where formal specifications of desired
behavior are inherently probabilistic. Despite the promise of formal
verification in ensuring the reliability of neural networks, progress in the
direction of probabilistic specifications has been limited. To address this,
we first introduce a general formulation of probabilistic specifications for
neural networks, which captures both probabilistic networks (e.g., Bayesian
neural networks, MC-Dropout networks) and uncertain inputs (distributions over
inputs arising from sensor noise or other perturbations). We then propose a
general technique to verify such specifications by generalizing the notion of
Lagrangian duality, replacing standard Lagrangian multipliers with "functional
multipliers" that can be arbitrary functions of the activations at a given
layer. We show that an optimal choice of functional multipliers leads to exact
verification (i.e., sound and complete verification), and for specific forms of
multipliers, we develop tractable practical verification algorithms.
We empirically validate our algorithms by applying them to Bayesian Neural
Networks (BNNs) and MC Dropout Networks, and certifying properties such as
adversarial robustness and robust detection of out-of-distribution (OOD) data.
On these tasks we are able to provide significantly stronger guarantees when
compared to prior work -- for instance, for a VGG-64 MC-Dropout CNN trained on
CIFAR-10, we improve the certified AUC (a verified lower bound on the true AUC)
for robust OOD detection (on CIFAR-100) from $0\% \rightarrow 29\%$. Similarly,
for a BNN trained on MNIST, we improve on the robust accuracy from $60.2\%
\rightarrow 74.6\%$. Further, on a novel specification -- distributionally
robust OOD detection -- we improve the certified AUC from $5\% \rightarrow
23\%$.
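To sketch the idea of functional multipliers in our own notation (for a feed-forward network $z_{k+1} = f_k(z_k)$, input set $S$ and specification objective $\phi$; the paper's formulation is more general), any choice of functions $\lambda_k$ yields the layer-wise bound

```latex
\sup_{\substack{z_0 \in S \\ z_{k+1} = f_k(z_k)}} \phi(z_K)
\;\le\;
\sup_{z_0 \in S} \lambda_1\bigl(f_0(z_0)\bigr)
+ \sum_{k=1}^{K-1} \sup_{z_k}\Bigl[\lambda_{k+1}\bigl(f_k(z_k)\bigr)-\lambda_k(z_k)\Bigr]
+ \sup_{z_K}\Bigl[\phi(z_K)-\lambda_K(z_K)\Bigr],
```

since the multiplier terms telescope along any feasible trajectory; linear $\lambda_k$ recover standard Lagrangian duality, while richer function classes tighten the bound.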
|
In-situ X-ray diffraction was used to investigate the structural
rearrangements during annealing from 77 K up to the crystallization temperature
of $\mathit{Cu_{44}Zr_{44}Al_8Hf_2Co_2}$ bulk metallic glass rejuvenated by
high pressure torsion performed at cryogenic temperatures and at room
temperature. The structural evolution was evaluated by dynamic mechanical
analysis as well as by differential scanning calorimetry to determine
relaxation dynamics and crystallization behaviour. Using a measure of the
configurational entropy calculated from the X-ray pair correlation function, the
structural footprint of the deformation-induced rejuvenation in bulk metallic
glass is revealed. With synchrotron radiation, temperature and time resolutions
comparable to those of calorimetric experiments are possible. This opens new
experimental possibilities, allowing changes in atomic configuration and
structure to be unambiguously correlated with calorimetrically observed signals
and attributed to changes of the dynamic and vibrational relaxations in glassy
materials. The results confirm that the structural footprint of the
$\mathit{\beta}$-transition is related to entropic relaxation with
characteristics of a first-order transition. The DMA data show that
non-reversible structural rearrangements are preferentially activated in the
range of the $\mathit{\beta}$-transition. The low-temperature
$\mathit{\gamma}$-transition mostly triggers reversible deformations and
shows a change of slope in the entropic footprint with second order
characteristics.
|
Consider the geometric range space $(X, \mathcal{H}_d)$ where $X \subset
\mathbb{R}^d$ and $\mathcal{H}_d$ is the set of ranges defined by
$d$-dimensional halfspaces. In this setting we consider that $X$ is the
disjoint union of a red and blue set. For each halfspace $h \in \mathcal{H}_d$
define a function $\Phi(h)$ that measures the "difference" between the fraction
of red and fraction of blue points which fall in the range $h$. In this context
the maximum discrepancy problem is to find $h^* = \arg \max_{h \in
\mathcal{H}_d} \Phi(h)$. We aim instead to find an $\hat{h}$ such that
$\Phi(h^*) - \Phi(\hat{h}) \le \varepsilon$. This is the central problem in
linear classification for machine learning, in spatial scan statistics for
spatial anomaly detection, and shows up in many other areas. We provide a
solution for this problem in $O(|X| + (1/\varepsilon^d) \log^4
(1/\varepsilon))$ time, which improves polynomially over the previous best
solutions. For $d=2$ we show that this is nearly tight through conditional
lower bounds. For different classes of $\Phi$ we can either provide a
$\Omega(|X|^{3/2 - o(1)})$ time lower bound for the exact solution with a
reduction to APSP, or an $\Omega(|X| + 1/\varepsilon^{2-o(1)})$ lower bound for
the approximate solution with a reduction to 3SUM.
A key technical result is an $\varepsilon$-approximate halfspace range
counting data structure of size $O(1/\varepsilon^d)$ with $O(\log
(1/\varepsilon))$ query time, which we can build in $O(|X| + (1/\varepsilon^d)
\log^4 (1/\varepsilon))$ time.
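For intuition, a brute-force baseline for $d=2$ (a sketch, not the algorithm of the paper: it runs in $O(|X|^3)$ time, using the standard fact that some maximizing halfspace can be assumed to have two input points on its boundary):

```python
# Sketch: exact maximum discrepancy over halfspaces in the plane, where
# Phi(h) = |fraction of red in h - fraction of blue in h|.
import numpy as np
from itertools import combinations

def max_discrepancy_2d(red, blue):          # red, blue: (n, 2) arrays
    pts = np.vstack([red, blue])
    best = 0.0
    for p, q in combinations(pts, 2):
        n = np.array([q[1] - p[1], p[0] - q[0]])  # normal of line through p, q
        if not n.any():
            continue                               # coincident points
        r_in = np.mean(red @ n >= p @ n)           # red fraction in halfspace
        b_in = np.mean(blue @ n >= p @ n)          # blue fraction in halfspace
        best = max(best, abs(r_in - b_in))
    return best
```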
|
In this paper we study the possibility of having a wormhole (WH) as a
candidate for the Sgr A$^\star$ central object and test this idea by
constraining its geometry using the motion of the S2 star and the reconstructed
shadow images. In particular, we consider three WH models, including WHs in
Einstein theory, brane-world gravity, and Einstein-Dirac-Maxwell theory. To
this end, we have constrained the WH throat using the motion of the S2 star and
shown that the flare-out condition is satisfied. We also consider the accretion
of infalling gas model and study the accretion rate and the intensity of the
electromagnetic radiation as well as the shadow images.
|
Motivated by experiments on colloidal membranes composed of chiral rod-like
viruses, we use Monte Carlo methods to determine the phase diagram for the
liquid crystalline order of the rods and the membrane shape. We generalize the
Lebwohl-Lasher model for a nematic with a chiral coupling to a curved surface
with edge tension and a resistance to bending, and include an energy cost for
tilting of the rods relative to the local membrane normal. The membrane is
represented by a triangular mesh of hard beads joined by bonds, where each bead
is decorated by a director. The beads can move, the bonds can reconnect and the
directors can rotate at each Monte Carlo step. When the cost of tilt is small,
the membrane tends to be flat, with the rods only twisting near the edge for
low chiral coupling, and remaining parallel to the normal in the interior of
the membrane. At high chiral coupling, the rods twist everywhere, forming a
cholesteric state. When the cost of tilt is large, the emergence of the
cholesteric state at high values of the chiral coupling is accompanied by the
bending of the membrane into a saddle shape. Increasing the edge tension tends
to flatten the membrane. These results illustrate the geometric frustration
arising from the inability of a surface normal to have twist.
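One common lattice discretization of such a chiral Lebwohl-Lasher energy (an illustrative sketch of the coupling structure only; the Hamiltonian of the paper, including bending and edge-tension terms, differs in detail) is

```latex
E = -\epsilon \sum_{\langle ij\rangle}
    \Bigl[P_2(\mathbf{n}_i\cdot\mathbf{n}_j)
          - q\,(\mathbf{n}_i\times\mathbf{n}_j)\cdot\hat{\mathbf{r}}_{ij}\Bigr]
  + C\sum_i \Bigl[1-(\mathbf{n}_i\cdot\mathbf{N}_i)^2\Bigr],
```

where $P_2$ is the second Legendre polynomial, $q$ sets the chiral coupling, $\mathbf{N}_i$ is the local membrane normal and $C$ is the tilt cost.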
|
Spectral gaps in the vibrational modes of disordered solids are key design
elements in the synthesis and control of phononic metamaterials that exhibit a
plethora of novel elastic and mechanical properties. However, reliably
producing these gaps often requires a high degree of network specificity through
complex control optimization procedures. In this work, we present, as an
additional tool to the existing repertoire, a numerical scheme that rapidly
generates sizeable spectral gaps in the absence of any fine tuning of the network
structure or elastic parameters. These gaps occur even in disordered
polydisperse systems consisting of relatively few particles ($N \sim
10^2-10^3$). Our proposed procedure exploits sticky potentials that have
recently been shown to suppress the formation of soft modes, thus effectively
recovering the linear elastic regime where band structures appear, at much
shorter length scales than in conventional models of disordered solids. Our
approach is relevant to the design and realization of gapped spectra in a variety
of physical setups ranging from colloidal suspensions to 3D-printed elastic
networks.
|
Quantum many-body systems are characterized by their correlations. While
equal-time correlators and unequal-time commutators between operators are
standard observables, the direct access to unequal-time anti-commutators poses
a formidable experimental challenge. Here, we propose a general technique for
measuring unequal-time anti-commutators using the linear response of a system
to a non-Hermitian perturbation. We illustrate the protocol at the example of a
Bose-Hubbard model, where the approach to thermal equilibrium in a closed
quantum system can be tracked by measuring both sides of the
fluctuation-dissipation relation. We relate the scheme to the quantum Zeno
effect and weak measurements, and illustrate possible implementations with the
example of a cold-atom system. Our proposal provides a way of characterizing
dynamical correlations in quantum many-body systems with potential applications
in understanding strongly correlated matter as well as for novel quantum
technologies.
|
This thesis is dedicated to the study of the thermodynamic properties of a
magnetized neutral vector boson gas at any temperature, with the aim of providing
equations of state that allow more general and precise descriptions of
astrophysical phenomena. The all-temperature analytical expressions for the
thermodynamic magnitudes, as well as their non-relativistic limits, are
obtained starting from the energy spectrum given by Proca's theory. With these
expressions, and considering the system under astrophysical conditions
(particle densities, temperatures and magnetic fields of the order of those
estimated for neutron stars), we investigate the Bose-Einstein condensation,
the magnetic properties and the equations of state of the gas, placing special
emphasis on the influence of antiparticles and the magnetic field. In all cases,
the results are compared with their analogues in the low-temperature and
non-relativistic limits. This allows us to establish the ranges of validity of
these approximations and to achieve a better understanding of their effects on
the studied system.
|
In this letter, we reanalyze the multi-component strongly interacting massive
particle (mSIMP) scenario using an effective operator approach. As in the
single-component SIMP case, the total relic abundance of mSIMP dark matter (DM)
is determined by the coupling strengths of $3 \to 2$ processes achieved by a
five-point effective operator. Intriguingly, we find that there is an
unavoidable $2 \to 2$ process induced by the corresponding five-point
interaction in the dark sector, which would reshuffle the mass densities of
SIMP DM after the chemical freeze-out. We dub this DM scenario reshuffled
SIMP (rSIMP). Given this observation, we then numerically solve the coupled
Boltzmann equations including the $3 \to 2$ and $2 \to 2$ processes to get the
correct yields of rSIMP DM. It turns out that the masses of rSIMP DM must be
nearly degenerate for them to contribute sizable abundances. On the other hand,
we also introduce effective operators to bridge the dark sector and visible
sector via a vector portal coupling. Since the signal strength for detecting DM
is proportional to the individual densities, obtaining the right amount of each
DM species is crucial in the rSIMP scenario. The cosmological and
theoretical constraints for rSIMP models are discussed as well.
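Schematically, the coupled yield equations with $3 \to 2$ annihilation and $2 \to 2$ reshuffling terms can be integrated as follows (a toy illustrative form only; the actual collision terms follow from the five-point operator and the thermally averaged cross sections):

```python
# Toy coupled Boltzmann equations for two reshuffled SIMP species (sketch).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, Y, c32, c22):
    # x = m/T; Y = comoving yields; Yeq = toy equilibrium yield
    Y1, Y2 = Y
    Yeq = 0.1 * np.exp(-x)
    dY1 = -c32 * (Y1**2 * Y2 - Y1 * Yeq**2) - c22 * (Y1**2 - Y2**2)
    dY2 = -c32 * (Y1 * Y2**2 - Y2 * Yeq**2) + c22 * (Y1**2 - Y2**2)
    return [dY1, dY2]

sol = solve_ivp(rhs, (1.0, 1e3), [0.1, 0.1], args=(1e4, 1e2),
                method="LSODA", rtol=1e-8)
Y1_inf, Y2_inf = sol.y[:, -1]   # relic yields after freeze-out
```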
|
We have derived the stellar atmospheric parameters, the effective temperature
T$_{eff}$, the microturbulent velocity $\zeta$, the surface gravity log g, and
the metallicity [Fe/H] for HE 0017+0055, HE 2144-1832, HE 2339-0837, HD 145777,
and CD-27 14351 from local thermodynamic equilibrium analyses using model
atmospheres. Elemental abundances of C, N, $\alpha$-elements, iron-peak
elements, and several neutron-capture elements are estimated using the
equivalent width measurement technique as well as spectrum synthesis
calculations in some cases. In the context of the double enhancement observed
in four of the programme stars, we have critically examined whether the
i-process model yields ([X/Fe]) of heavy elements from the literature can explain
the observed abundance distribution. The estimated metallicity [Fe/H] of the
programme stars ranges from -1.63 to -2.74. All five stars show enhanced
abundance for Ba, and four of them exhibit enhanced abundance for Eu. Based on
our analysis, HE 0017+0055, HE 2144-1832, and HE 2339-0837 are found to be
CEMP-r/s stars, whereas HD 145777 and CD-27 14351 show characteristic
properties of CEMP-s stars. From a detailed analysis of different classifiers
of CEMP stars, using a large sample of similar stars from the literature, we
have identified the one which best describes the CEMP-s and CEMP-r/s stars. We
have also examined if [hs/ls] alone can be used as a classifier, and if there
are any limiting values for [hs/ls] ratio that can be used to distinguish
CEMP-s and CEMP-r/s stars. In spite of peaking at different values of [hs/ls],
CEMP-s and CEMP-r/s stars show an overlap in the range 0.0 < [hs/ls] < 1.5 and
hence this ratio alone cannot be used to distinguish CEMP-s and CEMP-r/s
stars.
|
We present near-infrared [Fe II] images of four Class 0/I jets (HH 1/2, HH
34, HH 111, HH 46/47) observed with the Hubble Space Telescope Wide Field
Camera 3. The unprecedented angular resolution allows us to measure proper
motions, jet widths and trajectories, and extinction along the jets. In all
cases, we detect the counter-jet which was barely visible or invisible at
shorter wavelengths. We measure tangential velocities of a few hundred km/s,
consistent with previous HST measurements over 10 years ago. We measure the jet
width as close as a few tens of au from the star, revealing high collimations
of about 2 degrees for HH 1, HH 34, HH 111 and about 8 degrees for HH 46, all
of which are preserved up to large distances. For HH 34, we find evidence of a
larger initial opening angle of about 7 degrees. Measurement of knot positions
reveals deviations in trajectory of both the jet and counter-jet of all
sources. Analysis of asymmetries in the inner knot positions for HH 111
suggests the presence of a low mass stellar companion at separation 20-30 au.
Finally, we find extinction values of 15-20 mag near the source, which gradually
decrease moving downstream along the jet. These observations have allowed us
to study the counter-jet at unprecedented high angular resolution, and will be
a valuable reference for planning future JWST mid-infrared observations which
will peer even closer into the jet engine.
|
Hereditary hemolytic anemias are genetic disorders that affect the shape and
density of red blood cells. Genetic tests currently used to diagnose such
anemias are expensive and unavailable in the majority of clinical labs. Here,
we propose a method for identifying hereditary hemolytic anemias based on a
standard biochemistry method, called Percoll gradient, obtained by centrifuging
a patient's blood. Our hybrid approach combines spatial data-driven features,
extracted with a convolutional neural network, and spectral handcrafted
features obtained from the fast Fourier transform. We compare late and
early feature fusion with AlexNet and VGG16 architectures. AlexNet with late
fusion of spectral features performs better compared to other approaches. We
achieved an average F1-score of 88% across different classes, suggesting the
possibility of diagnosing hereditary hemolytic anemias from Percoll
gradients. Finally, we utilize Grad-CAM to explore the spatial features used
for classification.
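A minimal sketch of the late-fusion variant (PyTorch, with hypothetical feature sizes and class count; not the authors' exact architecture):

```python
# Sketch: AlexNet features fused late with FFT-based spectral features.
import torch
import torch.nn as nn
from torchvision import models

class LateFusionNet(nn.Module):
    def __init__(self, n_spectral=64, n_classes=3):
        super().__init__()
        self.cnn = models.alexnet(weights=None).features  # spatial branch
        self.pool = nn.AdaptiveAvgPool2d((6, 6))
        self.classifier = nn.Sequential(
            nn.Linear(256 * 6 * 6 + n_spectral, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, image, spectral):
        z = self.pool(self.cnn(image)).flatten(1)
        return self.classifier(torch.cat([z, spectral], dim=1))  # late fusion
```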
|
We prove existence and uniqueness of solutions of a class of abstract fully
nonlinear mean field game systems. We justify that such problems are related to
controlled local or nonlocal diffusions, or more specifically, controlled time
change rates of stochastic (L\'evy) processes. Both the system of equations and
the precise control interpretation seem to be new. We show that the results
apply for degenerate equations of low fractional order, and some nondegenerate
equations, local and nonlocal.
|
We obtain a sufficient condition for benign overfitting in the linear
regression problem. Our result does not rely on a concentration argument but on
a small-ball assumption, and thus can hold in the heavy-tailed case. The basic
idea is to establish a coordinate small-ball estimate in terms of the effective
rank so that we can calibrate the balance between the epsilon-net and the
exponential probability. Our result indicates that benign overfitting does not
depend on the concentration properties of the input vector. Finally, we discuss
potential difficulties for benign overfitting beyond the linear model, as well
as a benign overfitting result without truncated effective rank.
|
We present a simple short proof of the Fundamental Theorem of Algebra,
without complex analysis and with a minimal use of topology. It can be taught
in a first year calculus class.
|
Shaped by human movement, place connectivity is quantified by the strength of
spatial interactions among locations. For decades, spatial scientists have
researched place connectivity, applications, and metrics. The growing
popularity of social media provides a new data stream in which spatial social
interaction measures are largely devoid of privacy issues, easily accessible,
and harmonized. In this study, we introduce a global multi-scale place
connectivity index (PCI) based on spatial interactions among places revealed by
geotagged tweets as a spatiotemporal-continuous and easy-to-implement
measurement. The multi-scale PCI, demonstrated at the US county level, exhibits
a strong positive association with SafeGraph population movement records (10
percent penetration in the US population) and Facebook's social connectedness
index (SCI), a popular connectivity index based on social networks. We found
that PCI has a strong boundary effect and that it generally follows distance
decay, although this effect is weaker in more urbanized counties with denser
populations. Our investigation further suggests that PCI has great
potential in addressing real-world problems that require place connectivity
knowledge, exemplified with two applications: 1) modeling the spatial spread of
COVID-19 during the early stage of the pandemic and 2) modeling hurricane
evacuation destination choice. The methodological and contextual knowledge of
PCI, together with the launched visualization platform and open-sourced PCI
datasets at various geographic levels, are expected to support research fields
requiring knowledge in human spatial interactions.
|
The well known phenomenon of exponential contraction for solutions to the
viscous Hamilton-Jacobi equation in the space-periodic setting is based on the
Markov mechanism. However, the corresponding Lyapunov exponent $\lambda(\nu)$
characterizing the exponential rate of contraction depends on the viscosity
$\nu$. The Markov mechanism provides only a lower bound for $\lambda(\nu)$
which vanishes in the limit $\nu \to 0$. At the same time, in the inviscid case
$\nu=0$ one also has exponential contraction based on a completely different
dynamical mechanism. This mechanism is based on hyperbolicity of
action-minimizing orbits for the related Lagrangian variational problem.
In this paper we consider the discrete time case (kicked forcing), and
establish a uniform lower bound for $\lambda(\nu)$ which is valid for all
$\nu\geq 0$. The proof is based on a nontrivial interplay between the dynamical
and Markov mechanisms for exponential contraction. We combine PDE methods with
the ideas from the Weak KAM theory.
|
Let $U\not\equiv \pm\infty$ be a $\delta$-subharmonic function on a closed
disc of radius $R$ centered at zero. In the previous two parts of our paper, we
obtained general and explicit estimates of the integral of the positive part of
the radial maximum growth characteristic ${\mathsf M}_U(t):=
\sup\bigl\{U(z)\bigm| |z|=r\bigr\}$ over the increasing integration function
$m$ on the segment $[0, r]\subset [0,R)$ through the difference characteristic
of Nevanlinna and the quantities associated with the integration function $m$.
The third part of our paper contains estimates of these quantities in terms of
the Hausdorff $h$-measure and $h$-content of a compact subset $S\subset [0, r]$
such that the integration function $m$ is constant on each connected component
of the complement $[0, r]\setminus S$. The case of the $d$-dimensional
Hausdorff measure is highlighted separately.
|
Many software engineering tasks, such as testing and anomaly detection, can
benefit from the ability to infer a behavioral model of the software. Most
existing inference approaches assume access to code to collect execution
sequences. In this paper, we investigate a black-box scenario, where the system
under analysis cannot be instrumented in this granular fashion. This scenario
is particularly prevalent with control systems' log analysis in the form of
continuous signals. In this situation, an execution trace amounts to a
multivariate time series of input and output signals, where different states of
the system correspond to different 'phases' in the time series. The main
challenge is to detect when these phase changes take place. Unfortunately, most
existing solutions are either univariate, make assumptions on the data
distribution, or have limited learning power. Therefore, we propose a hybrid
deep neural network that accepts as input a multivariate time series and
applies a set of convolutional and recurrent layers to learn the non-linear
correlations between signals and the patterns over time. We show how this
approach can be used to accurately detect state changes, and how the inferred
models can be successfully applied to transfer-learning scenarios, to
accurately process traces from different products with similar execution
characteristics. Our experimental results on two UAV autopilot case studies
indicate that our approach is highly accurate (over 90% F1 score for state
classification) and significantly improves baselines (by up to 102% for change
point detection). Using transfer learning, we also show that up to 90% of the
maximum achievable F1 scores in the open-source case study can be achieved by
reusing the trained models from the industrial case and only fine-tuning them
using as few as 5 labeled samples, which reduces the manual labeling effort by
98%.
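A minimal sketch of such a hybrid network (PyTorch, with hypothetical layer sizes; not the exact model of the paper):

```python
# Sketch: convolutional-recurrent state classifier for multivariate traces.
import torch
import torch.nn as nn

class ConvRecurrentDetector(nn.Module):
    def __init__(self, n_signals, n_states, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(   # local cross-signal patterns
            nn.Conv1d(n_signals, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.LSTM(32, hidden, batch_first=True)  # temporal context
        self.head = nn.Linear(hidden, n_states)           # per-step state

    def forward(self, x):            # x: (batch, time, n_signals)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)
        z, _ = self.rnn(z)
        return self.head(z)          # change points = switches in the labels
```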
|
The simultaneous advancement of high resolution integral field unit
spectroscopy and robust full-spectral fitting codes now make it possible to
examine the spatially-resolved kinematics, chemical composition, and
star-formation history of nearby galaxies. We take new MUSE data from the Snapshot Optical
Spectroscopic Imaging of Mergers and Pairs for Legacy Exploration (SOSIMPLE)
survey to examine NGC 7135. With counter-rotation of gas, disrupted kinematics
and asymmetric chemical distribution, NGC 7135 is consistent with an ongoing
merger. Though well hidden by the current merger, we are able to distinguish
stars originating from an older merger, occurring 6-10 Gyr ago. We further find
a gradient in ex-situ material with galactocentric radius, with the accreted
fraction rising from 0% in the galaxy centre, to ~7% within 0.6 effective
radii.
|
Ultra Diffuse Galaxies (UDGs), a type of large Low Surface Brightness (LSB)
galaxies with particularly large effective radii (r_eff > 1.5 kpc), are now
routinely studied in the local (z<0.1) universe. While they are found to be
abundant in clusters, groups, and in the field, their formation mechanisms
remain elusive and an active topic of debate. New insights may be found by
studying their counterparts at higher redshifts (z>1.0), even though
cosmological surface brightness dimming makes them particularly difficult to
detect and study there. This work uses the deepest Hubble Space Telescope (HST)
imaging stacks of z > 1 clusters, namely: SPT-CL J2106-5844 and MOO J1014+0038.
These two clusters, at z=1.13 and z=1.23, were monitored as part of the HST
See-Change program. Compared to the Hubble Extreme Deep Field (XDF) as
reference field, we find statistical over-densities of large LSB galaxies in
both clusters. Based on stellar population modelling and assuming no size
evolution, we find that the faintest sources we can detect are about as bright
as expected for the progenitors of the brightest local UDGs. We find that the
LSBs we detect in SPT-CL J2106-5844 and MOO J1014+0038 already have old stellar
populations that place them on the red sequence. Correcting for incompleteness,
and based on an extrapolation of local scaling relations, we estimate that
distant UDGs are relatively under-abundant compared to local UDGs by a factor ~
3. A plausible explanation for the implied increase with time would be a
significant size growth of these galaxies in the last ~ 8 Gyr, as also
suggested by hydrodynamical simulations.
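To quantify the dimming mentioned above: bolometric surface brightness falls as $(1+z)^{-4}$, so in magnitudes

```latex
\Delta\mu = 2.5\,\log_{10}\bigl[(1+z)^{4}\bigr] = 10\,\log_{10}(1+z)
\approx 3.3\ \mathrm{mag\,arcsec^{-2}} \quad \text{at } z = 1.13,
```

before bandpass and evolutionary corrections.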
|
In this work, 1272 superflares on 311 stars are collected from 22,539
solar-type stars from the second-year observation of Transiting Exoplanet
Survey Satellite (TESS), which almost covered the northern hemisphere of the
sky. Three superflare stars contain hot Jupiter candidates or ultrashort-period
planet candidates. We obtain $\gamma = -1.76\pm 0.11$ of the correlation
between flare frequency and flare energy ($dN/dE\propto E^{-\gamma}$) for all
superflares and get $\beta=0.42\pm0.01$ of the correlation between superflare
duration and energy ($T_{\text {duration }} \propto E^{\beta}$), which supports
that a similar mechanism is shared by stellar superflares and solar flares.
Stellar photometric variability ($R_{\rm var}$) is estimated for all solar-type
stars, and the relation $E\propto {R_{\rm var}}^{3/2}$ is obtained. An
indicator of chromospheric activity ($S$-index) is obtained by using data from
the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) for 7454
solar-type stars. Distributions of these two properties indicate that the Sun
is generally less active than superflare stars. We find that the saturation-like
feature at $R_{\rm var}\sim 0.1$ may be the reason for superflare energy
saturating around $10^{36}$ erg. Object TIC 93277807 was captured by the TESS
first-year mission and generated the most energetic superflare. This superflare
is valuable and unique in that it can be treated as an extreme event, which may
be generated by a different mechanism than other superflares.
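As an illustration of the estimate of $\gamma$ in $dN/dE\propto E^{-\gamma}$, a simple cumulative log-log fit (a sketch; the paper may bin the data and fit differently, e.g. by maximum likelihood):

```python
# Sketch: power-law index from the cumulative flare-energy distribution,
# using N(>E) ~ E^(1 - gamma) for gamma > 1.
import numpy as np

def powerlaw_index(energies):
    E = np.sort(np.asarray(energies))
    N_gt = np.arange(len(E), 0, -1)           # cumulative counts N(>E)
    slope, _ = np.polyfit(np.log10(E), np.log10(N_gt), 1)
    return 1.0 - slope                         # gamma
```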
|
A number of challenges to the standard $\Lambda$CDM model have been emerging
during the past few years as the accuracy of cosmological observations
improves. In this review we discuss in a unified manner many existing signals
in cosmological and astrophysical data that appear to be in some tension
($2\sigma$ or larger) with the standard $\Lambda$CDM model as defined by the
Planck18 parameter values. In addition to the major well studied $5\sigma$
challenge of $\Lambda$CDM (the Hubble $H_0$ crisis) and other well known
tensions (the growth tension and the lensing amplitude $A_L$ anomaly), we
discuss a wide range of other less-discussed, less-standard signals which appear
at a lower statistical significance level than the $H_0$ tension (also known as
'curiosities' in the data) which may also constitute hints towards new physics.
For example such signals include cosmic dipoles (the fine structure constant
$\alpha$, velocity and quasar dipoles), CMB asymmetries, BAO Ly$\alpha$
tension, age of the Universe issues, the Lithium problem, small scale
curiosities like the core-cusp and missing satellite problems, quasars Hubble
diagram, oscillating short range gravity signals etc. The goal of this
pedagogical review is to collectively present the current status of these
signals and their level of significance, with emphasis on the Hubble crisis, and
to refer to recent resources where more details can be found for each signal. We
also briefly discuss possible theoretical approaches that can potentially
explain the non-standard nature of some of these signals.
|
We test the theoretical free energy surface (FES) for two-step nucleation
(TSN) proposed by Iwamatsu [J. Chem. Phys. 134, 164508 (2011)] by comparing the
predictions of the theory to numerical results for the FES recently reported
from Monte Carlo simulations of TSN in a simple lattice system [James, et al.,
J. Chem. Phys. 150, 074501 (2019)]. No adjustable parameters are used to make
this comparison. That is, all the parameters of the theory are evaluated
directly for the model system, yielding a predicted FES which we then compare
to the FES obtained from simulations. We find that the theoretical FES
successfully predicts the numerically-evaluated FES over a range of
thermodynamic conditions that spans distinct regimes of behavior associated
with TSN. All the qualitative features of the FES are captured by the theory
and the quantitative comparison is also very good. Our results demonstrate that
Iwamatsu's extension of classical nucleation theory provides an excellent
framework for understanding the thermodynamics of TSN.
|
For solving large-scale consistent linear systems, we combine two efficient
row index selection strategies with the Kaczmarz-type method with oblique
projection, and propose a greedy randomized Kaczmarz method with oblique
projection (GRKO) and a maximal weighted residual Kaczmarz method with
oblique projection (MWRKO). With these methods, the number of iteration
steps and the running time can be reduced considerably in finding the
least-norm solution, especially when the rows of the matrix $A$ are close to
being linearly dependent. Theoretical proofs and numerical results show that
the GRKO and MWRKO methods are more effective than the greedy randomized
Kaczmarz method and the maximal weighted residual Kaczmarz method, respectively.
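For reference, the greedy randomized row selection that GRKO builds on looks as follows (a sketch of the standard greedy randomized Kaczmarz iteration with orthogonal projections; the oblique-projection step of the paper is not reproduced here):

```python
# Sketch: greedy randomized Kaczmarz for a consistent system Ax = b.
import numpy as np

def greedy_randomized_kaczmarz(A, b, n_iter=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    row_norms2 = np.sum(A**2, axis=1)
    frob2 = row_norms2.sum()
    for _ in range(n_iter):
        r = b - A @ x
        rr = r @ r
        if rr < 1e-28:
            break
        w = r**2 / row_norms2
        eps = 0.5 * (w.max() / rr + 1.0 / frob2)   # greedy threshold
        idx = np.flatnonzero(w >= eps * rr)        # rows with large residuals
        p = r[idx]**2 / np.sum(r[idx]**2)
        i = rng.choice(idx, p=p)
        x += (r[i] / row_norms2[i]) * A[i]         # project onto row i
    return x
```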
|
We present the new software OpDiLib, a universal add-on for classical
operator overloading AD tools that enables the automatic differentiation (AD)
of OpenMP parallelized code. With it, we establish support for OpenMP features
in a reverse mode operator overloading AD tool to an extent that was previously
only reported on in source transformation tools. We achieve this with an
event-based implementation ansatz that is unprecedented in AD. Combined with
modern OpenMP features around OMPT, we demonstrate how it can be used to
achieve differentiation without any additional modifications of the source
code; neither do we impose a priori restrictions on the data access patterns,
which makes OpDiLib highly applicable. For further performance optimizations,
restrictions like atomic updates on the adjoint variables can be lifted in a
fine-grained manner for any parts of the code. OpDiLib can also be applied in a
semi-automatic fashion via a macro interface, which supports compilers that do
not implement OMPT. In a detailed performance study, we demonstrate the
applicability of OpDiLib for a pure operator overloading approach in a hybrid
parallel environment. We quantify the cost of atomic updates on the adjoint
vector and showcase the speedup and scaling that can be achieved with the
different configurations of OpDiLib in both the forward and the reverse pass.
|
We propose a new mechanism to communicate between fermion dark matter and the
Standard Model (SM) only through the four-form flux. The four-form couplings
are responsible for the relaxation of the Higgs mass to the correct value and
the initial displacement of the reheating pseudo-scalar field from the minimum.
We show that the simultaneous presence of the pseudo-scalar coupling to fermion
dark matter and the flux-induced Higgs mixing gives rise to unsuppressed
annihilations of dark matter into the SM particles at present, whereas the
direct detection bounds from XENON1T can be avoided. We suggest exploring the
interesting bulk parameter space of the model for which dark matter annihilates
dominantly into a pair of singlet-like scalars with mass similar to that of the
dark matter.
|
As Van Gogh said, great things are done by a series of small things
brought together. Aesthetic experience arises from the aggregation of
underlying visual components. However, most existing deep image aesthetic
assessment (IAA) methods over-simplify the IAA process by failing to model
image aesthetics with clearly-defined visual components as building blocks. As
a result, the connection between the resulting aesthetic predictions and the
underlying visual components is mostly invisible and hard to control explicitly,
which limits the model in both performance and interpretability. This work aims
to model image aesthetics from the level of visual components. Specifically,
object-level regions detected by a generic object detector are defined as
visual components, namely object-level visual components (OVCs). Then generic
features representing OVCs are aggregated for the aesthetic prediction based
upon the proposed object-level and graph attention mechanisms, which dynamically
determine the importance of individual OVCs and the relevance between OVC pairs,
respectively. Experimental results confirm the superiority of our framework
over previous relevant methods in terms of SRCC and PLCC on the aesthetic
rating distribution prediction. In addition, a quantitative analysis of model
interpretability is performed by observing how OVCs contribute to aesthetic
predictions; the results are found to be supported by the psychology of
aesthetics and photography rules. To the best of our knowledge, this is the first attempt at
the interpretation of a deep IAA model.
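A minimal sketch of attention-based OVC aggregation (PyTorch, with hypothetical dimensions and head design; the exact object-level and graph attention modules of the paper may differ):

```python
# Sketch: importance- and relation-weighted pooling of object features.
import torch
import torch.nn as nn

class OVCAggregator(nn.Module):
    def __init__(self, d=512, n_bins=10):
        super().__init__()
        self.obj_score = nn.Linear(d, 1)     # object-level importance
        self.pair = nn.Linear(2 * d, 1)      # relevance of OVC pairs
        self.out = nn.Linear(d, n_bins)      # aesthetic rating distribution

    def forward(self, f):                     # f: (n_objects, d)
        n = f.size(0)
        a = torch.softmax(self.obj_score(f), dim=0)                # (n, 1)
        pairs = torch.cat([f.unsqueeze(1).expand(n, n, -1),
                           f.unsqueeze(0).expand(n, n, -1)], dim=-1)
        rel = torch.softmax(self.pair(pairs).squeeze(-1), dim=-1)  # (n, n)
        g = rel @ f                           # relation-aware features
        pooled = (a * (f + g)).sum(dim=0)     # importance-weighted pooling
        return torch.softmax(self.out(pooled), dim=-1)
```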
|
A recent line of work has shown that end-to-end optimization of Bayesian
filters can be used to learn state estimators for systems whose underlying
models are difficult to hand-design or tune, while retaining the core
advantages of probabilistic state estimation. As an alternative approach for
state estimation in these settings, we present an end-to-end approach for
learning state estimators modeled as factor graph-based smoothers. By unrolling
the optimizer we use for maximum a posteriori inference in these probabilistic
graphical models, we can learn probabilistic system models in the full context
of an overall state estimator, while also taking advantage of the distinct
accuracy and runtime advantages that smoothers offer over recursive filters. We
study this approach using two fundamental state estimation problems, object
tracking and visual odometry, where we demonstrate a significant improvement
over existing baselines. Our work comes with an extensive code release, which
includes training and evaluation scripts, as well as Python libraries for Lie
theory and factor graph optimization:
https://sites.google.com/view/diffsmoothing/
|
Spatial relations between objects in an image have proved useful for
structural object recognition. Structural constraints can act as regularization
in neural network training, improving generalization capability with small
datasets. Several relations can be modeled as a morphological dilation of a
reference object with a structuring element representing the semantics of the
relation, from which the degree of satisfaction of the relation between another
object and the reference object can be derived. However, dilation is not
differentiable, requiring an approximation to be used in the context of
gradient-descent training of a network. We propose to approximate dilations
using convolutions based on a kernel equal to the structuring element. We show
that the proposed approximation, even if slightly less accurate than previous
approximations, is definitely faster to compute and therefore more suitable for
computationally intensive neural network applications.
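A minimal sketch of the approximation (PyTorch, assuming a binary structuring element and a saturating response; the exact normalization is a design choice):

```python
# Sketch: differentiable approximation of morphological dilation by
# convolving with the structuring element and clamping the response.
import torch
import torch.nn.functional as F

def approx_dilation(mask, struct_elem):
    # mask: (1, 1, H, W) in [0, 1]; struct_elem: (h, w) float tensor of 0/1
    kernel = struct_elem.flip(0).flip(1)[None, None]  # conv = flipped corr.
    resp = F.conv2d(mask, kernel,
                    padding=(kernel.shape[-2] // 2, kernel.shape[-1] // 2))
    return resp.clamp(max=1.0)   # saturating sum approximates the max
```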
|
The present study examines to what extent cultural background determines
sensorimotor synchronization in humans.
|
Understanding how the magnetic activity of low-mass stars depends on their
fundamental parameters is an important goal of stellar astrophysics. Previous
studies show that activity levels are largely determined by the stellar Rossby
number which is defined as the rotation period divided by the convective
turnover time. However, we currently have little information on the role that
chemical composition plays. In this work, we investigate how metallicity
affects magnetic activity using photometric variability as an activity proxy.
Similarly to other proxies, we demonstrate that the amplitude of photometric
variability is well parameterised by the Rossby number, although in a more
complex way. We also show that variability amplitude and metallicity are
generally positively correlated. This trend can be understood in terms of the
effect that metallicity has on stellar structure and, hence, the convective
turnover time (or, equivalently, the Rossby number). Lastly, we demonstrate
that the metallicity dependence of photometric variability results in a
rotation period detection bias whereby the periods of metal-rich stars are more
easily recovered for stars of a given mass.
|
Hadamard matrices in $\{0,1\}$ presentation are square $m\times m$ matrices
whose entries are zeros and ones and whose rows considered as vectors in $\Bbb
R^m$ produce the Gram matrix of a special form with respect to the standard
scalar product in $\Bbb R^m$. The concept of Hadamard matrices is extended in
the present paper. As a result pseudo-Hadamard matrices of the first generation
are defined and investigated. An algorithm for generating these pseudo-Hadamard
matrices is designed and is used for testing some conjectures.
|
In this paper we investigate a novel set of polarizing agents --
mixed-valence compounds -- by theoretical and experimental methods and
demonstrate their performance in high-field Dynamic Nuclear Polarization (DNP)
experiments in the solid state. Mixed-valence compounds constitute a group of
molecules, in which molecular mobility persists even in solids. Consequently,
such polarizing agents can be used to perform Overhauser-DNP experiments in the
solid state, with favorable conditions for dynamic nuclear polarization
formation at ultra-high magnetic fields.
|
Parafermions are a natural generalization of Majorana fermions. We consider a
breathing Kagome lattice with complex hoppings by imposing $\mathbb{Z}_{3}$
clock symmetry in the complex energy plane. It is a non-Hermitian
generalization of the second-order topological insulator characterized by the
emergence of topological corner states. We demonstrate that the topological
corner states are parafermions in the present $\mathbb{Z}_{3}$ clock-symmetric
model. It is also shown that the model can be realized in properly designed
electric circuits, where the parafermion corner states are observed through
impedance resonance. We also construct $\mathbb{Z}_{4}$ and $\mathbb{Z}_{6}$
parafermions on breathing square and honeycomb lattices, respectively.
|
Code review is a widely-used practice in software development companies to
identify defects. Hence, code review has been included in many software
engineering curricula at universities worldwide. However, teaching code review
is still a challenging task because the code review effectiveness depends on
the code reading and analytical skills of a reviewer. While several studies
have investigated the code reading techniques that students should use to find
defects during code review, little work has focused on a learning activity that
involves analytical skills. Indeed, developing a code review checklist should
stimulate students to develop their analytical skills to anticipate potential
issues (i.e., software defects). Yet, it is unclear whether students can
anticipate potential issues given their limited experience in software
development (programming, testing, etc.). We perform a qualitative analysis to
investigate whether students are capable of creating code review checklists,
and if the checklists can be used to guide reviewers to find defects. In
addition, we identify common mistakes that students make when developing a code
review checklist. Our results show that while there are some misconceptions
among students about the purpose of code review, students are able to
anticipate potential defects and create a relatively good code review
checklist. Hence, our results lead us to conclude that developing a code review
checklist can be a part of the learning activities for code review in order to
scaffold students' skills.
|
Level 5 autonomy for self-driving cars requires a robust visual perception
system that can parse input images under any visual condition. However,
existing semantic segmentation datasets are either dominated by images captured
under normal conditions or are small in scale. To address this, we introduce
ACDC, the Adverse Conditions Dataset with Correspondences for training and
testing semantic segmentation methods on adverse visual conditions. ACDC
consists of a large set of 4006 images which are equally distributed between
four common adverse conditions: fog, nighttime, rain, and snow. Each
adverse-condition image comes with a high-quality fine pixel-level semantic
annotation, a corresponding image of the same scene taken under normal
conditions, and a binary mask that distinguishes between intra-image regions of
clear and uncertain semantic content. Thus, ACDC supports both standard
semantic segmentation and the newly introduced uncertainty-aware semantic
segmentation. A detailed empirical study demonstrates the challenges that the
adverse domains of ACDC pose to state-of-the-art supervised and unsupervised
approaches and indicates the value of our dataset in steering future progress
in the field. Our dataset and benchmark are publicly available.
|
For the validation of safety-critical systems regarding safety and comfort,
e.g., in the context of automated driving, engineers often have to cope with
large (parametric) test spaces for which it is infeasible to test through all
possible parameter configurations. At the same time, critical behavior of a
well-engineered system with respect to prescribed safety and comfort
requirements tends to be extremely rare, with probabilities of order
$10^{-6}$ or less, but clearly has to be examined carefully for valid
argumentation. Hence, common approaches such as boundary value analysis are
insufficient while methods based on random sampling from the parameter space
(simple Monte Carlo) lack the ability to detect these rare critical events
efficiently, i.e., with appropriate simulation budget. For this reason, a more
sophisticated simulation-based approach is proposed which employs optimistic
optimization on an objective function called "criticality" in order to identify
effectively the set of critical parameter configurations. Within the scope of
the ITEA 3 TESTOMAT project (http://www.testomatproject.eu/) the collaboration
partners OFFIS e.V. and AKKA Germany GmbH conducted a case study on applying
criticality-based rare event simulation to the charging process of an
automotive battery management system given as a model. The present technical
report documents the industrial use case, the approach, application and
experimental results, as well as lessons learned from the case study.
|
Homonym identification is important for WSD tasks that require coarse-grained
partitions of senses. The goal of this project is to determine whether
contextual information is sufficient for identifying a homonymous word. To
capture the context, BERT embeddings are used as opposed to Word2Vec, which
conflates senses into one vector. SemCor is leveraged to retrieve the
embeddings. Various clustering algorithms are applied to the embeddings.
Finally, the embeddings are visualized in a lower-dimensional space to
understand the feasibility of the clustering process.
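A minimal sketch of this pipeline (the model choice and the toy two-sense example are illustrative; retrieval from SemCor is omitted):

```python
# Sketch: cluster contextual BERT embeddings of a homonymous target word.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed_target(sentences, target):
    vecs = []
    for s in sentences:
        enc = tok(s, return_tensors="pt")
        with torch.no_grad():
            hidden = bert(**enc).last_hidden_state[0]   # (tokens, 768)
        ids = [i for i, t in enumerate(tok.tokenize(s)) if t == target]
        vecs.append(hidden[1 + ids[0]])                 # +1 skips [CLS]
    return torch.stack(vecs).numpy()

sents = ["I deposited cash at the bank.", "We sat on the river bank."]
labels = KMeans(n_clusters=2, n_init=10).fit_predict(embed_target(sents, "bank"))
```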
|
Quantum computers are a leading platform for the simulation of many-body
physics. This task has been recently facilitated by the possibility to program
directly the time-dependent pulses sent to the computer. Here, we use this
feature to simulate quantum lattice models with long-range hopping. Our
approach is based on an exact mapping between periodically driven quantum
systems and one-dimensional lattices in the synthetic Floquet direction. By
engineering a periodic drive with a power-law spectrum, we simulate a lattice
with long-range hopping, whose decay exponent is freely tunable. We propose and
realize experimentally two protocols to probe the long tails of the Floquet
eigenfunctions and to identify a scaling transition between weak and strong
long-range couplings. Our work offers a useful benchmark of pulse engineering
and opens the route towards quantum simulations of rich nonequilibrium effects.
|
A technique is presented which creates MCB (mechanically controllable break)
junctions that can be pivoted to any desired angle. An MCB junction equipped
with a specific glass liquid cell can be used to produce an MCB junction whose
electrodes are covered with a microscopic layer of fluid, thus producing a
partially wet phase MCB junction.
|
We report the results of the first search for the decay $B_s^0 \rightarrow
\eta^\prime \eta$ using $121.4~\textrm{fb}^{-1}$ of data collected at the
$\Upsilon(5S)$ resonance with the Belle detector at the KEKB asymmetric-energy
$e^+e^-$ collider. We observe no significant signal and set a 90\%
confidence-level upper limit of $6.5 \times 10^{-5}$ on
the branching fraction of this decay.
|
Path planning is a key component in mobile robotics. A wide range of path
planning algorithms exist, but few attempts have been made to benchmark the
algorithms holistically or unify their interface. Moreover, with the recent
advances in deep neural networks, there is an urgent need to facilitate the
development and benchmarking of such learning-based planning algorithms. This
paper presents PathBench, a platform for developing, visualizing, training,
testing, and benchmarking of existing and future, classical and learned 2D and
3D path planning algorithms, while offering support for the Robot Operating
System (ROS). Many existing path planning algorithms are supported, e.g. A*,
wavefront, rapidly-exploring random tree, value iteration networks, gated path
planning networks; and integrating new algorithms is easy and clearly
specified. We demonstrate the benchmarking capability of PathBench by comparing
implemented classical and learned algorithms on metrics such as path length,
success rate, computational time and path deviation. These evaluations are done
on built-in PathBench maps and external path planning environments from video
games and real world databases. PathBench is open source.
|
This paper studies a dynamical system which contains two contours. There is
a cluster on each contour. Each cluster contains particles located in adjacent
cells. The clusters move under prescribed rules. The delays of the clusters are
due to the fact that the clusters cannot pass through the node simultaneously.
The dynamical system belongs to the class of contour networks introduced by
A. P. Buslaev.
|
We adapt and extend Yosida's parametrix method, originally introduced for the
construction of the fundamental solution to a parabolic operator on a
Riemannian manifold, to derive Varadhan-type asymptotic estimates for the
transition density of a degenerate diffusion under the weak H\"ormander
condition. This diffusion process, widely studied by Yor in a series of papers,
finds direct application in the study of a class of path-dependent financial
derivatives known as Asian options. We obtain the Varadhan formula
\begin{equation} \frac{-2 \log p(t,x;T,y) } { \Psi(t,x;T,y) } \to 1, \qquad
\text{as } \quad T-t \to 0^+, \end{equation} where $p$ denotes the transition
density and $\Psi$ denotes the optimal cost function of a deterministic control
problem associated to the diffusion. We provide a partial proof of this
formula, and present numerical evidence to support the validity of an
intermediate inequality that is required to complete the proof. We also derive
an asymptotic expansion of the cost function $\Psi$, expressed in terms of
elementary functions, which is useful in order to design efficient
approximation formulas for the transition density.
|
Investigating how the cutoff energy $E_{\rm cut}$ varies with X-ray flux and
photon index $\Gamma$ in individual AGNs opens a new window to probe the yet
unclear coronal physics. So far $E_{\rm cut}$ variations have only been
detected in several AGNs but different patterns have been reported. Here we
report new detections of $E_{\rm cut}$ variations in two Seyfert galaxies with
multiple NuSTAR exposures. While in NGC 3227 $E_{\rm cut}$ monotonically
increases with $\Gamma$, the $E_{\rm cut}$-$\Gamma$ relation exhibits a
$\Lambda$ shape in SWIFT J2127.4+5654 ($E_{\rm cut}$ increasing with $\Gamma$
at $\Gamma$ $\lesssim$ 2.05, but reversely decreasing at $\Gamma$ $\gtrsim$
2.05), indicating that more than a single underlying mechanism is involved.
Meanwhile both galaxies show softer spectra as they brighten in X-rays, a
common phenomenon in Seyfert galaxies. Plotting all 7 AGNs with $E_{\rm cut}$
variations ever reported with NuSTAR observations in the $E_{\rm cut}$-$\Gamma$
diagram, we find they could be unified with the $\Lambda$ pattern. Although the
sample is small and SWIFT J2127.4+5654 is the only source with $\Gamma$ varying
across the break point, and thus the only one exhibiting the complete $\Lambda$
pattern in a single source, these discoveries shed new light on the coronal
physics in AGNs. Possible underlying physical mechanisms are discussed.
|
We consider exact and averaged control problems for a system of quasi-linear
ODEs and SDEs with a non-negative definite symmetric system matrix. The
strategy of the proof is the standard linearization of the system by fixing the
function appearing in the nonlinear part of the system, and then applying the
Leray-Schauder fixed point theorem. We shall also need continuous induction
arguments to prolong the control to the final state, which is a novel approach
in the field. This enables us to obtain controllability for arbitrarily large
initial data (so-called global controllability).
|
A common challenge across all areas of machine learning is that training data
is not distributed like test data, due to natural shifts, "blind spots," or
adversarial examples; such test examples are referred to as out-of-distribution
(OOD) test examples. We consider a model where one may abstain from predicting,
at a fixed cost. In particular, our transductive abstention algorithm takes
labeled training examples and unlabeled test examples as input, and provides
predictions with optimal prediction loss guarantees. The loss bounds match
standard generalization bounds when test examples are i.i.d. from the training
distribution, but add an additional term that is the cost of abstaining times
the statistical distance between the train and test distribution (or the
fraction of adversarial examples). For linear regression, we give a
polynomial-time algorithm based on Celis-Dennis-Tapia optimization algorithms.
For binary classification, we show how to efficiently implement it using a
proper agnostic learner (i.e., an Empirical Risk Minimizer) for the class of
interest. Our work builds on a recent abstention algorithm of Goldwasser,
Kalai, and Montasser (2020) for transductive binary classification.
|
Helical symmetry of massive Dirac fermions is broken explicitly in the
presence of electric and magnetic fields. Here we present two equations for the
divergence of helical and axial-vector currents following the Jackiw-Johnson
approach to the anomaly of the neutral axial vector current. We discover the
contribution from the helical symmetry breaking is attributed to the occupancy
of the two states at the top of the valence band and the bottom of the
conduction band. The explicit symmetry breaking fully cancels the anomalous
correction from the quantum fluctuation in the band gap. The chiral anomaly can
be derived from the helical symmetry breaking. It provides an alternative route
to understand the chiral anomaly from the point of view of the helical symmetry
breaking. The pertinent physical consequences in condensed matter are the
helical magnetic effect which means a charge current circulating at the
direction of the magnetic field, and the mass-dependent positive longitudinal
magnetoconductivity as a transport signature. The discovery not only reflects
anomalous magneto-transport properties of massive Dirac materials but also
reveals the close relation between the helical symmetry breaking and the
physics of chiral anomaly in quantum field theory and high energy physics.
|
The study deals with methods and means of checking the reliability of
usernames in online communities on the basis of computer-linguistic analysis of
the results of their communicative interaction. The methodological basis of the
study is a combination of general scientific methods and special approaches to
studying data verification in online communities in the Ukrainian
segment of the global information environment. An algorithm for the operation of
the Verifier utility for online-community usernames is developed, and an
informational model of this automated username-checking tool is designed. The
Verifier utility's data-validation system is tested in an online community, and
an indicator of the verification system's effectiveness is determined.
|
Deep learning is a powerful approach with good performance on many different
tasks. However, these models often require massive computational resources, and
the trend of needing ever larger models for ever more complex problems is
worrying. In this paper, we propose and verify the effectiveness and
efficiency of SCNN, an innovative neural network inspired by the swarm concept.
In addition to introducing the relevant theories, our detailed experiments
suggest that models with fewer parameters may perform better than those with
more parameters. Moreover, our experiments show that SCNN needs less data than
traditional models, which could be an essential hint for problems where there is
not much data.
|
FO transductions, aperiodic deterministic two-way transducers, as well as
aperiodic streaming string transducers are all equivalent models for first
order definable functions. In this paper, we solve the long-standing open
problem of expressions capturing first order definable functions, thereby
generalizing the seminal SF=AP (star free expressions = aperiodic languages)
result of Sch\"utzenberger. Our result also generalizes a lesser known
characterization by Sch\"utzenberger of aperiodic languages by SD-regular
expressions (SD=AP). We show that every first order definable function over
finite words captured by an aperiodic deterministic two-way transducer can be
described with an SD-regular transducer expression (SDRTE). An SDRTE is a
regular expression where Kleene stars are used in a restricted way: they can
appear only on aperiodic languages which are prefix codes of bounded
synchronization delay. SDRTEs are constructed from simple functions using the
combinators unambiguous sum (deterministic choice), Hadamard product, and
unambiguous versions of the Cauchy product and the k-chained Kleene-star, where
the star is restricted as mentioned. In order to construct an SDRTE associated
with an aperiodic deterministic two-way transducer, (i) we concretize
Sch\"utzenberger's SD=AP result, by proving that aperiodic languages are
captured by SD-regular expressions which are unambiguous and stabilising; (ii)
by structural induction on the unambiguous, stabilising SD-regular expressions
describing the domain of the transducer, we construct SDRTEs. Finally, we also
look at various formalisms equivalent to SDRTEs which use function
composition, allowing one to trade the k-chained star for a 1-star.
|
Cassegrain designs can be used to build thin lenses. We analyze the
relationships between system thickness and aperture sizes of the two mirrors as
well as FoV size. Our analysis shows that a decrease in lens thickness imposes
tight constraints on the aperture and FoV size. To mitigate this limitation, we
propose to fill the gaps between the primary and the secondary with high-index
material. The Cassegrain optics cuts the track length in half, and the high-index
material reduces the ray angle and height; consequently, the incident ray angle
can be increased, i.e., the FoV angle is extended. Defining the telephoto ratio as the
ratio of lens thickness to focal length, we achieve telephoto ratios as small
as 0.43 for a visible Cassegrain thin lens and 1.20 for an infrared Cassegrain
thin lens. To achieve arbitrary FoV coverage, we present a strategy of
integrating multiple thin lenses on one plane, with each unit covering a
different FoV region. To avoid physically tilting each unit, we propose beam
steering with a metasurface. By image stitching, we obtain wide-FoV images.
|
Deep neural networks have been widely studied in autonomous driving
applications such as semantic segmentation or depth estimation. However,
training a neural network in a supervised manner requires a large amount of
annotated labels which are expensive and time-consuming to collect. Recent
studies leverage synthetic data collected from a virtual environment which are
much easier to acquire and more accurate compared to data from the real world,
but they usually suffer from poor generalization due to the inherent domain
shift problem. In this paper, we propose a Domain-Agnostic Contrastive Learning
(DACL) which is a two-stage unsupervised domain adaptation framework with
cyclic adversarial training and contrastive loss. DACL leads the neural network
to learn domain-agnostic representations to overcome performance degradation
when there is a difference between the training and test data distributions. Our
proposed approach achieves better performance in the monocular depth estimation
task compared to previous state-of-the-art methods and also shows effectiveness
in the semantic segmentation task.
|
Class imbalance is an inherent problem in many machine learning
classification tasks. This often leads to trained models that are unusable for
any practical purpose. In this study we explore an unsupervised approach to
address these imbalances by leveraging transfer learning from pre-trained image
classification models to an encoder-based Generative Adversarial Network (eGAN).
To the best of our knowledge, this is the first work to tackle this problem
using GANs without needing to augment with synthesized fake images.
In the proposed approach, the discriminator network outputs a negative or
positive score: we classify test samples with negative scores as minority and
those with positive scores as majority. Our approach eliminates
epistemic uncertainty in model predictions, as P(minority) + P(majority)
need not sum to 1. The impact of transfer learning and combinations of
different pre-trained image classification models at the generator and
discriminator is also explored. A best F1-score of 0.69 was obtained on the
CIFAR-10 classification task with an imbalance ratio of 1:2500.
Our approach also provides a mechanism for thresholding the specificity or
sensitivity of our machine learning system. Keywords: Class imbalance, Transfer
Learning, GAN, Nash equilibrium
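A minimal sketch of the described scoring rule follows; the default threshold of zero and the interface are our assumptions:
```python
import numpy as np

def classify_by_score(d_scores, threshold=0.0):
    """Assign test samples by the sign of the discriminator score:
    negative -> minority class, positive -> majority class. Shifting
    `threshold` trades sensitivity against specificity (an
    illustrative sketch, not the authors' code)."""
    d_scores = np.asarray(d_scores)
    return np.where(d_scores < threshold, "minority", "majority")
```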
|
Continuous-depth neural models, where the derivative of the model's hidden
state is defined by a neural network, have enabled strong sequential data
processing capabilities. However, these models rely on advanced numerical
differential equation (DE) solvers resulting in a significant overhead both in
terms of computational cost and model complexity. In this paper, we present a
new family of models, termed Closed-form Continuous-depth (CfC) networks, that
are simple to describe and at least one order of magnitude faster while
exhibiting equally strong modeling abilities compared to their ODE-based
counterparts. The models are hereby derived from the analytical closed-form
solution of an expressive subset of time-continuous models, thus alleviating
the need for complex DE solvers altogether. In our experimental evaluations,
we demonstrate that CfC networks outperform advanced, recurrent models over a
diverse set of time-series prediction tasks, including those with long-term
dependencies and irregularly sampled data. We believe our findings open new
opportunities to train and deploy rich, continuous neural models in
resource-constrained settings, which demand both performance and efficiency.
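For intuition only, a gated closed-form update in the spirit of CfC cells might look as follows; the exact parametrization and wiring here are illustrative assumptions, not the paper's definition:
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_step(x, u, t, Wf, Wg, Wh):
    """One closed-form hidden-state update: a time-dependent sigmoid
    gate blends two learned targets, so no numerical ODE solver is
    needed. A sketch in the spirit of CfC cells; the weights and
    wiring are illustrative placeholders."""
    z = np.concatenate([x, u])        # hidden state and current input
    gate = sigmoid(-(Wf @ z) * t)     # gate decays with elapsed time t
    return gate * np.tanh(Wg @ z) + (1.0 - gate) * np.tanh(Wh @ z)
```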
|
We examine the dynamics of electron beams that, in free space, are
self-accelerating, in the presence of an additional magnetic field. We focus
our attention in the case of Airy beams that follow parabolic trajectories and
in generalized classes of beams associated with power-law trajectories. We
study the interplay between beam self-acceleration and the circular motion
caused by the magnetic field. In the case of Airy beams, using an integral
representation, we find closed-form solutions for the electron wavefunction. We
also derive asymptotic formulas for the beam trajectories both for Airy beams
and for self-accelerating power-law beams. A ray optics description is rather
useful for the interpretation of the beam dynamics. Our results are in
excellent agreement with direct numerical simulations.
|
We report on a Python-toolbox for unbiased statistical analysis of
fluorescence intermittency properties of single emitters. Intermittency, i.e.,
step-wise temporal variations in the instantaneous emission intensity and
fluorescence decay rate properties are common to organic fluorophores, II-VI
quantum dots and perovskite quantum dots alike. Unbiased statistical analysis
of intermittency switching time distributions, involved levels and lifetimes is
important to avoid interpretation artefacts. This work provides an
implementation of Bayesian changepoint analysis and level clustering applicable
to time-tagged single-photon detection data of single emitters, which can be
applied to real experimental data and used as a tool to verify the ramifications of
hypothesized mechanistic intermittency models. We provide a detailed Monte
Carlo analysis to illustrate these statistical tools, and to benchmark the
extent to which conclusions can be drawn on the photophysics of highly complex
systems, such as perovskite quantum dots that switch between a plethora of
states instead of just two.
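As one concrete ingredient, the single-changepoint log-likelihood gain for a constant-intensity photon stream can be computed directly from arrival times; the sketch below shows that core statistic only (the Bayesian priors, multi-changepoint recursion, and level clustering are omitted):
```python
import numpy as np

def changepoint_gain(arrival_times):
    """Log-likelihood gain of splitting a photon arrival record into
    two constant-intensity segments at each interior photon index,
    relative to a single-rate model. A sketch of the classic
    changepoint statistic, not the full toolbox."""
    t = np.asarray(arrival_times, dtype=float)
    n = len(t)
    T = t[-1] - t[0]
    k = np.arange(1, n - 1)        # candidate changepoint indices
    T1 = t[k] - t[0]               # duration of first segment
    T2 = t[-1] - t[k]              # duration of second segment
    return (k * np.log(k / T1) + (n - k) * np.log((n - k) / T2)
            - n * np.log(n / T))
```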
|
Self-adaptive software systems continuously adapt in response to internal and
external changes in their execution environment, captured as contexts. The COP
paradigm posits a technique for the development of self-adaptive systems,
capturing their main characteristics with specialized programming language
constructs. COP adaptations are specified as independent modules composed in
and out of the base system as contexts are activated and deactivated in
response to sensed circumstances from the surrounding environment. However, the
definition of adaptations, their contexts and associated specialized behavior,
need to be specified at design time. In complex cyber-physical systems (CPS)
this is intractable due to new, unpredicted operating conditions. We propose
Auto-COP, a new technique to enable the generation of adaptations at run time.
Auto-COP uses reinforcement learning (RL) options to build action sequences,
based on previous instances of the system execution.
Options are explored in interaction with the environment, and the most suitable
options for each context are used to generate adaptations exploiting COP. To
validate Auto-COP, we present two case studies exhibiting different system
characteristics and application domains: a driving assistant and a robot
delivery system. We present examples of Auto-COP code generated at run time, to
illustrate the types of circumstances (contexts) requiring adaptation, and the
corresponding generated adaptations for each context. We confirm that the
generated adaptations exhibit correct system behavior measured by
domain-specific performance metrics, while reducing the number of required
execution/actuation steps by a factor of two, showing that the adaptations are
regularly selected by the running system, since adaptive behavior is more
appropriate than the execution of primitive actions.
|
We consider second order differential equations with real coefficients that
are in the limit circle case at infinity. Using the semiclassical Ansatz, we
construct solutions (the Jost solutions) of such equations with a prescribed
asymptotic behavior for $x\to\infty$. It turns out that in the limit circle
case, this Ansatz can be chosen common for all values of the spectral parameter
$z$. This leads to asymptotic formulas for all solutions of the considered
differential equations, both homogeneous and non-homogeneous. We also
efficiently describe all self-adjoint realizations of the corresponding
differential operators in terms of boundary conditions at infinity and
find a representation for their resolvents.
|
In order to detect unknown intrusions and runtime errors of computer
programs, the cyber-security community has developed various detection
techniques. Anomaly detection is an approach that is designed to profile the
normal runtime behavior of computer programs in order to detect intrusions and
errors as anomalous deviations from the observed normal. However, normal but
unobserved behavior can trigger false positives. This limitation has
significantly decreased the practical viability of anomaly detection
techniques. Reported approaches to this limitation range from simple
alert-threshold definitions to distribution models that approximate all normal
behavior based on the limited observations. However, each assumption or
approximation poses the potential for even greater false positive rates. This
paper presents our study on how to explain the presence of anomalies using a
neural network, particularly Long Short-Term Memory, independent of actual data
distributions. We present and compare three anomaly detection models, and
report on our experience running different types of attacks on an Apache
Hypertext Transfer Protocol server. We performed a comparative study, focusing
on each model's ability to detect the onset of each attack while avoiding false
positives resulting from unknown normal behavior. Our best-performing model
detected the true onset of every attack with zero false positives.
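A generic way to turn such a sequence model into an anomaly score, shown here as a hypothetical sketch rather than any of the paper's three models, is to penalize events the model deemed unlikely:
```python
import numpy as np

def anomaly_scores(predicted_probs, observed_events):
    """Negative log-probability the sequence model assigned to each
    observed event; sustained high scores suggest an attack onset.
    `predicted_probs[t]` maps each possible event at step t to its
    predicted probability (a generic sketch, names are placeholders)."""
    p = np.array([predicted_probs[t][e]
                  for t, e in enumerate(observed_events)])
    return -np.log(np.clip(p, 1e-12, None))
```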
|
Weakly coupled ferroelectric/dielectric superlattice thin film
heterostructures exhibit complex nanoscale polarization configurations that
arise from a balance of competing electrostatic, elastic, and domain-wall
contributions to the free energy. A key feature of these configurations is that
the polarization can locally have a significant component that is not along the
thin-film surface normal direction, while maintaining zero net in-plane
polarization. PbTiO3/SrTiO3 thin-film superlattice heterostructures on a
conducting SrRuO3 bottom electrode on SrTiO3 have a room-temperature stripe
nanodomain pattern with nanometer-scale lateral period. Ultrafast time-resolved
x-ray free electron laser diffraction and scattering experiments reveal that
above-bandgap optical pulses induce rapidly propagating acoustic pulses and a
perturbation of the domain diffuse scattering intensity arising from the
nanoscale stripe domain configuration. With 400 nm optical excitation, two
separate acoustic pulses are observed: a high-amplitude pulse resulting from
strong optical absorption in the bottom electrode and a weaker pulse arising
from the depolarization field screening effect due to absorption directly
within the superlattice. The picosecond scale variation of the nanodomain
diffuse scattering intensity is consistent with a larger polarization change
than would be expected due to the polarization-tetragonality coupling of
uniformly polarized ferroelectrics. The polarization change is consistent
instead with polarization rotation facilitated by the reorientation of the
in-plane component of the polarization at the domain boundaries of the striped
polarization structure. The complex steady-state configuration within these
ferroelectric heterostructures leads to polarization rotation phenomena that
have been previously available only through the selection of bulk crystal
composition.
|
We introduce a novel method for the implementation of shape optimization in
fluid dynamics applications, where we propose to use the shape derivative to
determine deformation fields with the help of the $p$-Laplacian for $p > 2$.
This approach is closely related to the computation of steepest descent
directions of the shape functional in the $W^{1,\infty}$ topology. Our
approach is demonstrated for shape optimization related to drag-minimal free
floating bodies. The method is validated against existing approaches with
respect to convergence of the optimization algorithm, the obtained shape, and
regarding the quality of the computational grid after large deformations. Our
numerical results strongly indicate that shape optimization related to the
$W^{1,\infty}$ topology -- though numerically more demanding -- is
superior to the classical approaches invoking Hilbert space methods in terms of
convergence, the obtained shapes, and the mesh quality after
large deformations, in particular when the optimal shape features sharp
corners.
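In this framework the deformation field $v$ is, up to sign conventions and boundary conditions that vary in the literature, obtained from a $p$-Laplacian problem of the form

$$\int_\Omega |\nabla v|^{p-2}\,\nabla v : \nabla w \,\mathrm{d}x \;=\; -\, J'(\Omega)[w] \qquad \text{for all admissible } w,$$

whose solutions approximate steepest-descent directions in the $W^{1,\infty}$ topology as $p \to \infty$.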
|
We consider a Nicholson's equation with multiple pairs of time-varying delays
and nonlinear terms given by mixed monotone functions. Sufficient conditions
for the permanence, local stability and global attractivity of its positive
equilibrium are established. Our criteria depend on the size of some delays,
improve results in recent literature and provide answers to some open problems.
|
Underactuated systems like sea vessels have degrees of motion that are
insufficiently matched by a set of independent actuation forces. In addition,
the underlying trajectory-tracking control problems grow in complexity when
deciding the optimal rudder and thrust control signals. This imposes several
difficult-to-solve constraints associated with the error dynamical
equations when classical optimal tracking and adaptive control approaches are used. An
online machine learning mechanism based on integral reinforcement learning is
proposed to find a solution for a class of nonlinear tracking problems with
partial prior knowledge of the system dynamics. The actuation forces are
decided using innovative forms of temporal difference equations relevant to the
vessel's surge and angular velocities. The solution is implemented using an
online value iteration process which is realized by employing means of the
adaptive critics and gradient descent approaches. The adaptive learning
mechanism exhibited robust and interactive performance in reacting to
different desired reference-tracking scenarios.
|
The inverse problem of finding the optimal network structure for a specific
type of dynamical process stands out as one of the most challenging problems in
network science. Focusing on the susceptible-infected-susceptible type of
dynamics on annealed networks whose structures are fully characterized by the
degree distribution, we develop an analytic framework to solve the inverse
problem. We find that, for relatively low or high infection rates, the optimal
degree distribution is unique, which consists of no more than two distinct
nodal degrees. For intermediate infection rates, the optimal degree
distribution is multitudinous and can have a broader support. We also find
that, in general, the heterogeneity of the optimal networks decreases with the
infection rate. A surprising phenomenon is the existence of a specific value of
the infection rate for which any degree distribution would be optimal in
generating maximum spreading prevalence. The analytic framework and the
findings provide insights into the interplay between network structure and
dynamical processes with practical implications.
|
Quantum entanglement between two or more bipartite entities is a core concept
in quantum information, an area limited to microscopic regimes directly governed
by the Heisenberg uncertainty principle via quantum superposition, resulting in
nondeterministic and probabilistic quantum features. Such quantum features
cannot be generated by classical means. Here, a purely classical method of
on-demand entangled light-pair generation is presented in a macroscopic regime
via basis randomness. This idea, which conflicts with conventional quantum
mechanics, raises a fundamental question about both classicality and
quantumness, where superposition is key to its resolution.
|
In this paper we consider the problem of transmission power allocation for
remote estimation of a dynamical system in the case where the estimator is able
to simultaneously receive packets from multiple interfering sensors, as it is
possible e.g. with the latest wireless technologies such as 5G and WiFi. To
this end we introduce a general model where packet arrival probabilities are
determined based on the received Signal-to-Interference-and-Noise Ratio and
with two different receiver design schemes, one implementing a standard
multi-packet reception technique and one implementing Successive Interference
Cancellation decoding algorithm in addition. Then we cast the power allocation
problem as an optimization task where the mean error covariance at the remote
estimator is minimized, while penalizing the mean transmission power
consumption. For the infinite-horizon problem we show the existence of a
stationary optimal policy, while for the finite-horizon case we derive some
structural properties under the special scenario where the overall system to be
estimated can be seen as a set of independent subsystems. Numerical simulations
illustrate the improvement given by the proposed receivers over orthogonal
schemes that schedule only one sensor transmission at a time in order to avoid
interference.
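For intuition, the mean error-covariance recursion under Bernoulli packet arrivals, the classical intermittent-Kalman building block, can be sketched as below; the paper's SINR-based model would supply `arrival_prob`, and the interface is our assumption:
```python
import numpy as np

def expected_covariance_step(P, A, Q, C, R, arrival_prob):
    """One step of the mean error-covariance recursion at the remote
    estimator when the packet arrives with probability `arrival_prob`.
    A classical intermittent-Kalman sketch; the paper derives the
    arrival probability from the received SINR."""
    P_pred = A @ P @ A.T + Q                 # time update
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    P_upd = P_pred - K @ C @ P_pred          # measurement update
    return arrival_prob * P_upd + (1 - arrival_prob) * P_pred
```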
|
Cloud-Radio Access Networks (Cloud-RANs) separate the mobile network
base station functions into three units; the connection between two of these
units is referred to as the fronthaul network. This work demonstrates the
transmission of user data transport blocks between the distributed Medium
Access Control (MAC) layer and the local Physical (PHY) layer in the radio unit over
a Passive Optical Network (PON). PON networks provide benefits in terms of
economy and flexibility when used for Cloud-RAN fronthaul transport. However,
PON upstream scheduling can introduce additional latency that might not
satisfy the requirements imposed by the Cloud-RAN functional split. In this work we
demonstrate how our virtual Dynamic Bandwidth Allocation (DBA) concept can be
used to communicate effectively with the mobile Long Term Evolution (LTE)
scheduler, adopting the well-known cooperative DBA mechanism, to reduce the PON
latency to satisfactory values. Our results thus show the feasibility of using
PON technology as the fronthaul transport medium for the MAC/PHY functional
split in a fully virtualised environment. Background traffic is further added
so that the measurements reflect a more realistic scenario. The obtained round-trip
times indicate that using PON at the fronthaul might be limited to a distance of
11 km for a synchronised scenario, with no compliance for a non-synchronised
scenario.
|
This paper studied the faint, diffuse extended X-ray emission associated with
the radio lobes and the hot gas in the intracluster medium (ICM) environment
for a sample of radio galaxies. We used shallow ($\sim 10$ ks) archival Chandra
observations for 60 radio galaxies (7 FR I and 53 FR II) with $0.0222 \le z \le
1.785$ selected from the 298 extragalactic radio sources identified in the 3CR
catalog. We used Bayesian statistics to look for any asymmetry in the extended
X-ray emission between regions that contain the radio lobes and regions that
contain the hot gas in the ICM. In the Chandra broadband ($0.5 - 7.0$ keV),
which has the highest detected X-ray flux and the highest signal-to-noise
ratio, we found that the non-thermal X-ray emission from the radio lobes
dominates the thermal X-ray emission from the environment for $\sim 77\%$ of
the sources in our sample. We also found that the relative amount of on-jet
axis non-thermal emission from the radio lobes tends to increase with redshift
compared to the off-jet axis thermal emission from the environment. This
suggests that the dominant X-ray mechanism for the non-thermal X-ray emission
in the radio lobes is due to the inverse Compton upscattering of cosmic
microwave background (CMB) seed photons by relativistic electrons in the radio
lobes, a process for which the observed flux is roughly redshift independent
due to the increasing CMB energy density with increasing redshift.
|
We investigate what can be learned about a population of distant KBOs by
studying the statistical properties of their light curves. Whereas others have
successfully inferred the properties of individual, highly variable KBOs, we
show that the fraction of KBOs with low amplitudes also provides fundamental
information about a population. Each light curve is primarily the result of two
factors: shape and orientation. We consider contact binaries and ellipsoidal
shapes, with and without flattening. After developing the mathematical
framework, we apply it to the existing body of KBO light curve data. Principal
conclusions are as follows. (1) When using absolute magnitude H as a proxy for
size, it is more accurate to use the maximum of the light curve rather than the
mean. (2) Previous investigators have noted that smaller KBOs have
higher-amplitude light curves, and have interpreted this as evidence that they
are systematically more irregular in shape than larger KBOs; we show that a
population of flattened bodies with uniform proportions could also explain this
result. (3) Our analysis indicates that prior assessments of the fraction of
contact binaries in the Kuiper Belt may be artificially low. (4) The pole
orientations of some KBOs can be inferred from observed changes in their light
curves; however, these KBOs constitute a biased sample, whose pole orientations
are not representative of the population overall. (5) Although surface
topography, albedo patterns, limb darkening, and other surface properties can
affect individual light curves, they do not have a strong influence on the
statistics overall. (6) Photometry from the OSSOS survey is incompatible with
previous results and its statistical properties defy easy interpretation. We
also discuss the promise of this approach for the analysis of future, much
larger data sets such as the one anticipated from the Rubin Observatory.
|
We study the relation (and differences) between stability and Property (S) in
the simple and stably finite framework. This leads us to characterize stable
elements in terms of their support, and to study these concepts from different
angles: hereditary subalgebras, projections in the multiplier algebra, and order
properties in the Cuntz semigroup. We use these approaches to show both that
cancellation at infinity in the Cuntz semigroup holds precisely when Cuntz
equivalence is given by isomorphism at the level of Hilbert right-modules, and
that different notions such as Regularity, $\omega$-comparison, the Corona
Factorization Property, property R, etc., are equivalent under mild assumptions.
|
Engineered dynamical maps that combine not only coherent, but also unital and
dissipative transformations of quantum states, have demonstrated a number of
technological applications, and promise to be a beneficial tool also in quantum
thermodynamic processes. Here, we exploit control of a spin qutrit to
investigate energy exchange fluctuations of an open quantum system. The qutrit
engineer dynamics can be understood as an autonomous feedback process, where
random measurement events condition the subsequent dissipative evolution. To
analyze this dynamical process, we introduce a generalization of the
Sagawa-Ueda-Tasaki relation for dissipative dynamics and verify it
experimentally. Not only we characterize the efficacy of the autonomous
feedback protocol, but also find that the characteristic function of energy
variations $G(\eta)$ becomes insensitive to the process details at a single
specific value of its argument. This allows us to demonstrate that a
fluctuation theorem of the Jarzynski type holds for this general dissipative
feedback dynamics, while previous relations were limited to unital dynamics.
Moreover, in addition to the feedback efficacy, we find a witness of unitality
associated with the fixed point of the dynamics.
|
We present high-pressure electrical transport measurements on the newly
discovered V-based superconductors $A$V$_3$Sb$_5$ ($A$ = Rb and K), which have
an ideal Kagome lattice of vanadium. Two superconducting domes under pressure
are observed in both compounds, as previously observed in their sister compound
CsV$_3$Sb$_5$. For RbV$_3$Sb$_5$, the $T_c$ increases from 0.93 K at ambient
pressure to the maximum of 4.15 K at 0.38 GPa in the first dome. The second
superconducting dome has the highest $T_c$ of 1.57 K at 28.8 GPa. KV$_3$Sb$_5$
displays a similar double-dome phase diagram; however, its two maximum $T_c$
values are lower, and the $T_c$ drops faster in the second dome than in RbV$_3$Sb$_5$. An
integrated temperature-pressure phase diagram of $A$V$_3$Sb$_5$ ($A$ = Cs, Rb
and K) is constructed, showing that the ionic radius of the intercalated
alkali-metal atoms has a significant effect. Our work demonstrates that
double-dome superconductivity under pressure is a common feature of these
V-based Kagome metals.
|
We present a new algorithm for efficiently computing the $N$-point
correlation functions (NPCFs) of a 3D density field for arbitrary $N$. This can
be applied both to a discrete spectroscopic galaxy survey and a continuous
field. By expanding the statistics in a separable basis of isotropic functions
built from spherical harmonics, the NPCFs can be estimated by counting pairs of
particles in space, leading to an algorithm with complexity $\mathcal{O}(N_{\rm
g}^2)$ for $N_{\rm g}$ particles, or $\mathcal{O}\left(N_\mathrm{FFT}\log
N_\mathrm{FFT}\right)$ when using a Fast Fourier Transform with
$N_\mathrm{FFT}$ grid-points. In practice, the rate-limiting step for $N>3$
will often be the summation of the histogrammed spherical harmonic
coefficients, particularly if the number of radial and angular bins is large.
In this case, the algorithm scales linearly with $N_{\rm g}$. The approach is
implemented in the ENCORE code, which can compute the 3PCF, 4PCF, 5PCF, and
6PCF of a BOSS-like galaxy survey in $\sim$ $100$ CPU-hours, including the
corrections necessary for non-uniform survey geometries. We discuss the
implementation in depth, along with its GPU acceleration, and provide a practical
demonstration on realistic galaxy catalogs. Our approach can be
straightforwardly applied to current and future datasets to unlock the
potential of constraining cosmology from the higher-point functions.
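To make the pair-counting idea concrete, a naive $\mathcal{O}(N_{\rm g}^2)$ sketch of the harmonic 3PCF estimator is given below; normalisation, edge corrections, and the higher-$N$ generalisation that ENCORE implements are omitted, and the interface is our own:
```python
import numpy as np
from scipy.special import sph_harm

def threepcf_multipoles(pos, weights, r_edges, lmax):
    """Naive O(N_g^2) harmonic 3PCF estimator: for each primary
    particle, accumulate spherical-harmonic coefficients of its
    neighbours in radial bins, then contract bin pairs over m.
    Normalisation and survey-geometry corrections omitted (sketch)."""
    nbins = len(r_edges) - 1
    zeta = np.zeros((lmax + 1, nbins, nbins))
    for p, w in zip(pos, weights):
        d = pos - p
        r = np.linalg.norm(d, axis=1)
        keep = (r > 0) & (r >= r_edges[0]) & (r < r_edges[-1])
        d, r, wk = d[keep], r[keep], weights[keep]
        theta = np.arccos(np.clip(d[:, 2] / r, -1.0, 1.0))  # polar angle
        phi = np.arctan2(d[:, 1], d[:, 0])                  # azimuth
        bins = np.digitize(r, r_edges) - 1
        for ell in range(lmax + 1):
            alm = np.zeros((2 * ell + 1, nbins), dtype=complex)
            for m in range(-ell, ell + 1):
                # SciPy convention: sph_harm(m, l, azimuth, polar)
                np.add.at(alm[m + ell], bins,
                          wk * sph_harm(m, ell, phi, theta))
            zeta[ell] += w * np.real(alm.T @ np.conj(alm))
    return zeta
```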
|
We unveil the microscopic origin of the topologically ordered counterpart of
the s-wave superconductor in this work. For this, we employ the recently
developed unitary renormalisation group (URG) method on a generalised model of
2D electrons with attractive interactions. The effective Hamiltonian obtained at the
stable low-energy fixed point of the RG flow corresponds to a gapped,
insulating state of quantum matter we call the Cooper pair insulator (CPI). We
show that the CPI ground state manifold displays several signatures of
topological order, including a four-fold degeneracy when placed on the torus.
Spectral flow arguments reveal the emergent gauge-theoretic structure of the
effective Hamiltonian, as it can be written entirely in terms of non-local
Wilson loops. It also contains a topological $\theta$-term whose coefficient is
quantised, in keeping with the requirement of invariance of the ground state
under large gauge transformations. We find that the long-ranged many-particle
entanglement content of the CPI ground state is driven by inter-helicity
two-particle scattering processes. Analysis of the passage from CPI to BCS
superconducting ground state shows the RG flow promotes fluctuations in the
number of condensed Cooper pairs and lowers those in the conjugate global
phase. Consequently, the distinct signatures of long-ranged entanglement in the
CPI are replaced by the well-known short-ranged entanglement of the BCS state.
Finally, we study the renormalisation of the entanglement in $k$-space for both
the CPI and BCS ground states. We show that the topologically ordered CPI state
possesses an emergent hierarchy of entanglement scales, and that this
hierarchy collapses in the BCS state. Our work offers clear evidence for the
microscopic origins of topological order in this prototypical system, and lays
the foundation for similar investigations in other systems of correlated
electrons.
|
In this paper, we develop a compositional vector-based semantics of positive
transitive sentences in quantum natural language processing for a non-English
language, i.e. Persian, to compare the parametrized quantum circuits of two
synonymous sentences in two languages, English and Persian. By considering the
grammar and meaning of a transitive sentence, we translate the DisCoCat diagram via
ZX-calculus into quantum circuit form. We also use a bigraph method to rewrite the
DisCoCat diagram and turn it into a quantum circuit on the semantic side.
|
This paper details the FPGA implementation methodology for Convolutional
Spiking Neural Networks (CSNN) and applies this methodology to low-power
radioisotope identification using high-resolution data. Power consumption of 75
mW has been achieved on an FPGA implementation of a CSNN, with an inference
accuracy of 90.62% on a synthetic dataset. The chip validation method is
presented. Prototyping was accelerated by evaluating SNN parameters using the
SpiNNaker neuromorphic platform.
|
We present a comprehensive review of the structural chemistry of hybrid lead
halides of stoichiometry APbX4, A2PbX4 or AA'PbX4, where A and A' are organic
ammonium cations and X = Cl, Br or I. These compounds may be considered as
layered perovskites, containing isolated, infinite layers of corner-sharing
PbX4 octahedra separated by the organic species. We first extract over 250
crystal structures from the CCDC and classify them in terms of unit cell
metrics and crystal symmetry. Symmetry mode analysis is then used to identify
the nature of key structural distortions of the [PbX4] layers. Two generic
types of distortion are prevalent in this family: tilting of the octahedral
units and shifts of the inorganic layers relative to each other. Although the
octahedral tilting modes are well-known in the crystallography of purely
inorganic perovskites, the additional layer shift modes are shown to enrich
enormously the structural options available in layered hybrid perovskites. Some
examples and trends are discussed in more detail in order to show how the
nature of the interlayer organic species can influence the overall structural
architecture, although the main aim of the paper is to encourage workers in the
field to make use of the systematic crystallographic methods used here to
further understand and rationalise their own compounds, and perhaps to be able
to design-in particular structural features in future work.
|
Many different types of fractional calculus have been proposed, which can be
organised into some general classes of operators. For a unified mathematical
theory, results should be proved in the most general possible setting. Two
important classes of fractional-calculus operators are the fractional integrals
and derivatives with respect to functions (dating back to the 1970s) and those
with general analytic kernels (introduced in 2019). To cover both of these
settings in a single study, we can consider fractional integrals and
derivatives with analytic kernels with respect to functions, which have never
been studied in detail before. Here we establish the basic properties of these
general operators, including series formulae, composition relations, function
spaces, and Laplace transforms. The tools of convergent series, from fractional
calculus with analytic kernels, and of operational calculus, from fractional
calculus with respect to functions, are essential ingredients in the analysis
of the general class that covers both.
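Concretely, combining the two classes yields operators of the following form (our rendering of the general definition; conventions for constants vary):

$$\left(I^{\alpha,\beta}_{a+;\,\psi,A} f\right)(x) \;=\; \int_a^x \psi'(t)\,\bigl(\psi(x)-\psi(t)\bigr)^{\alpha-1}\, A\!\Bigl(\bigl(\psi(x)-\psi(t)\bigr)^{\beta}\Bigr)\, f(t)\,\mathrm{d}t,$$

which reduces to the fractional integral with respect to $\psi$ when $A = 1/\Gamma(\alpha)$ (with $\beta$ immaterial), and to the general-analytic-kernel operator when $\psi(x) = x$.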
|
The twin group $TW_n$ on $n$ strands is the group generated by $t_1, \dots,
t_{n-1}$ with defining relations $t_i^2=1$, $t_it_j = t_jt_i$ if $|i-j|>1$. We
find a new instance of semisimple Schur--Weyl duality for tensor powers of a
natural $n$-dimensional reflection representation of $TW_n$, depending on a
parameter $q$. At $q=1$ the representation coincides with the natural
permutation representation of the symmetric group, so the new Schur--Weyl
duality may be regarded as a $q$-analogue of the one motivating the definition
of the partition algebra.
|
Developing theoretical models for nonequilibrium quantum systems poses
significant challenges. Here we develop and study a multimode model of a
driven-dissipative Josephson junction chain of atomic Bose-Einstein
condensates, as realised in the experiment of Labouvie et al. [Phys. Rev. Lett.
116, 235302 (2016)]. The model is based on c-field theory, a beyond-mean-field
approach to Bose-Einstein condensates that incorporates fluctuations due to
finite temperature and dissipation. We find the c-field model is capable of
capturing all key features of the nonequilibrium phase diagram, including
bistability and a critical slowing down in the lower branch of the bistable
region. Our model is closely related to the so-called Lugiato-Lefever equation,
and thus establishes new connections between the nonequilibrium dynamics of
ultracold atoms and nonlinear optics, exciton-polariton superfluids, and
driven damped sine-Gordon systems.
|
Motivated by applications in unsourced random access, this paper develops a
novel scheme for the problem of compressed sensing of binary signals. In this
problem, the goal is to design a sensing matrix $A$ and a recovery algorithm,
such that the sparse binary vector $\mathbf{x}$ can be recovered reliably from
the measurements $\mathbf{y}=A\mathbf{x}+\sigma\mathbf{z}$, where $\mathbf{z}$
is additive white Gaussian noise. We propose to design $A$ as a parity check
matrix of a low-density parity-check (LDPC) code, and to recover $\mathbf{x}$
from the measurements $\mathbf{y}$ using a Markov chain Monte Carlo algorithm,
which runs relatively fast due to the sparse structure of $A$. The performance
of our scheme is comparable to state-of-the-art schemes, which use dense
sensing matrices, while enjoying the advantages of using a sparse sensing
matrix.
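A minimal Gibbs-sampler sketch for this recovery problem follows, written with a dense matrix for readability; the paper's speed comes from exploiting the sparsity of the LDPC parity-check matrix $A$, and the interface here is our assumption:
```python
import numpy as np

def gibbs_binary_cs(A, y, sigma, p1, n_sweeps=200, seed=0):
    """Gibbs sampler for a sparse binary x with prior P(x_i = 1) = p1,
    observed via y = A x + sigma * z. Dense-matrix sketch for clarity;
    a sparse LDPC A makes each coordinate update much cheaper."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    r = y - A @ x                       # running residual
    prior_llr = np.log(p1 / (1.0 - p1))
    for _ in range(n_sweeps):
        for i in range(n):
            a = A[:, i]
            r_wo = r + a * x[i]         # residual with x_i removed
            # log-odds of x_i = 1 vs 0 given all other coordinates
            llr = prior_llr + (2.0 * a @ r_wo - a @ a) / (2.0 * sigma**2)
            x[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-llr)))
            r = r_wo - a * x[i]
    return x
```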
|
Few-shot class-incremental learning (FSCIL) aims to design machine learning
algorithms that can continually learn new concepts from a few data points,
without forgetting knowledge of old classes. The difficulty lies in the fact that
limited data from new classes not only lead to significant overfitting issues
but also exacerbate the notorious catastrophic forgetting problem. Moreover,
as training data come in sequence in FSCIL, the learned classifier can only
provide discriminative information in individual sessions, while FSCIL requires
all classes to be involved for evaluation. In this paper, we address the FSCIL
problem from two aspects. First, we adopt a simple but effective decoupled
learning strategy for representations and classifiers, in which only the classifiers
are updated in each incremental session; this avoids knowledge forgetting in
the representations. By doing so, we demonstrate that a pre-trained backbone
plus a non-parametric class mean classifier can beat state-of-the-art methods.
Second, to make the classifiers learned on individual sessions applicable to
all classes, we propose a Continually Evolved Classifier (CEC) that employs a
graph model to propagate context information between classifiers for
adaptation. To enable the learning of CEC, we design a pseudo incremental
learning paradigm that episodically constructs a pseudo incremental learning
task to optimize the graph parameters by sampling data from the base dataset.
Experiments on three popular benchmark datasets, including CIFAR100,
miniImageNet, and Caltech-USCD Birds-200-2011 (CUB200), show that our method
significantly outperforms the baselines and sets new state-of-the-art results
with remarkable advantages.
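The decoupled baseline described above can be sketched in a few lines; the cosine-similarity choice and the interface are our assumptions:
```python
import numpy as np

def class_mean_classifier(train_feats, train_labels, query_feats):
    """Non-parametric class-mean classifier on frozen backbone
    features: per incremental session only the class means are
    (re)computed, so the representation is never updated. Cosine
    similarity is an illustrative choice (sketch of the baseline)."""
    classes = np.unique(train_labels)
    means = np.stack([train_feats[train_labels == c].mean(axis=0)
                      for c in classes])
    means /= np.linalg.norm(means, axis=1, keepdims=True)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    return classes[np.argmax(q @ means.T, axis=1)]
```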
|
Instance contrast for unsupervised representation learning has achieved great
success in recent years. In this work, we explore the idea of instance
contrastive learning in unsupervised domain adaptation (UDA) and propose a
novel Category Contrast technique (CaCo) that introduces semantic priors on top
of instance discrimination for visual UDA tasks. By considering instance
contrastive learning as a dictionary look-up operation, we construct a
semantics-aware dictionary with samples from both source and target domains
where each target sample is assigned a (pseudo) category label based on the
category priors of source samples. This allows category contrastive learning
(between target queries and the category-level dictionary) for
category-discriminative yet domain-invariant feature representations: samples
of the same category (from either source or target domain) are pulled closer
while those of different categories are pushed apart simultaneously. Extensive
UDA experiments in multiple visual tasks ($e.g.$, segmentation, classification
and detection) show that the simple implementation of CaCo achieves superior
performance as compared with the highly-optimized state-of-the-art methods.
Analytically and empirically, the experiments also demonstrate that CaCo is
complementary to existing UDA methods and generalizable to other learning
setups such as semi-supervised learning, unsupervised model adaptation, etc.
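The category-contrast idea can be written as an InfoNCE-style loss where dictionary entries sharing the query's (pseudo) label act as positives; the sketch below is our own minimal rendering, not the authors' implementation:
```python
import numpy as np

def category_contrast_loss(query, dictionary, dict_labels, q_label,
                           tau=0.07):
    """InfoNCE-style category contrast for one target query: dictionary
    samples with the same (pseudo) category label are positives, all
    others negatives. Minimal illustrative sketch."""
    q = query / np.linalg.norm(query)
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    logits = d @ q / tau
    logits -= logits.max()                  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    pos = dict_labels == q_label
    return -np.log(probs[pos].sum())
```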
|
This paper presents the first wireless and programmable neural stimulator
leveraging magnetoelectric (ME) effects for power and data transfer. Thanks to
low tissue absorption, low misalignment sensitivity and high power transfer
efficiency, the ME effect enables safe delivery of high power levels (a few
milliwatts) at low resonant frequencies (~250 kHz) to mm-sized implants deep
inside the body (30-mm depth). The presented MagNI (Magnetoelectric Neural
Implant) consists of a 1.5-mm$^2$ 180-nm CMOS chip, an in-house built 4x2 mm ME
film, an energy storage capacitor, and on-board electrodes on a flexible
polyimide substrate with a total volume of 8.2 mm$^3$ . The chip with a power
consumption of 23.7 $\mu$W includes robust system control and data recovery
mechanisms under source amplitude variations (1-V variation tolerance). The
system delivers fully-programmable bi-phasic current-controlled stimulation
with patterns covering 0.05-to-1.5-mA amplitude, 64-to-512-$\mu$s pulse width,
and 0-to-200-Hz repetition frequency for neurostimulation.
|
Robot Assisted Therapy is a new paradigm in many therapies such as the
therapy of children with autism spectrum disorder. In this paper we present the
use of a parrot-like robot as an assistive tool in turn-taking therapy. The
therapy is designed in the form of a card game between a child with autism and
a therapist or the robot. The intervention was implemented in a single-subject
study format, and the effect sizes for different turn-taking variables are
calculated. The results show that the child-robot interaction had a larger
effect size than the child-trainer interaction for most of the turn-taking variables.
Furthermore, the therapist's point of view on the proposed Robot Assisted Therapy
is evaluated using a questionnaire. The therapist believes that the robot is
appealing to children, which may ease the therapy process. The therapist
suggested adding other functionalities and games to let children with autism
learn more turn-taking tasks and better generalize the learned tasks.
|
We present a new suite of atmosphere models with flexible cloud parameters to
investigate the effects of clouds on brown dwarfs across the L/T transition. We
fit these models to a sample of 13 objects with well-known masses, distances,
and spectral types spanning L3-T5. Our modelling is guided by
spatially-resolved photometry from the Hubble Space Telescope and the W. M.
Keck Telescopes covering visible to near-infrared wavelengths. We find that,
with appropriate cloud parameters, the data can be fit well by atmospheric
models with temperature and surface gravity in agreement with the predictions
of evolutionary models. We see a clear trend in the cloud parameters with
spectral type, with earlier-type objects exhibiting higher-altitude clouds with
smaller grains (0.25-0.50 micron) and later-type objects being better fit with
deeper clouds and larger grains ($\geq$1 micron). Our results confirm previous
work that suggests L dwarfs are dominated by submicron particles, whereas T
dwarfs have larger particle sizes.
|
Herein, design of false data injection attack on a distributed cyber-physical
system is considered. A stochastic process with linear dynamics and Gaussian
noise is measured by multiple agent nodes, each equipped with multiple sensors.
The agent nodes form a multi-hop network among themselves. Each agent node
computes an estimate of the process by using its sensor observation and
messages obtained from neighboring nodes, via Kalman-consensus filtering. An
external attacker, capable of arbitrarily manipulating the sensor observations
of some or all agent nodes, injects errors into those sensor observations. The
goal of the attacker is to steer the estimates at the agent nodes as close as
possible to a pre-specified value, while respecting a constraint on the attack
detection probability. To this end, a constrained optimization problem is
formulated to find the optimal parameter values of a certain class of linear
attacks. The parameters of the linear attack are learnt online via a combination
of a stochastic approximation based update of a Lagrange multiplier and an
optimization technique involving either the Karush-Kuhn-Tucker (KKT) conditions
or online stochastic gradient descent. The problem turns out to be convex for
some special cases. The desired convergence of the proposed algorithms is proved
by exploiting the convexity and properties of stochastic approximation
algorithms. Finally, numerical results demonstrate the efficacy of the attack.
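The dual update driving the learning is a standard projected stochastic-approximation step; the sketch below is generic primal-dual bookkeeping, with symbols that are illustrative rather than the paper's notation:
```python
def lagrange_multiplier_update(lmbda, observed_detection, alpha_bar, step):
    """Projected stochastic-approximation ascent on the Lagrange
    multiplier of the detection-probability constraint: increase the
    penalty when the observed detection indicator exceeds the budget
    alpha_bar, then project back onto the nonnegative reals."""
    return max(0.0, lmbda + step * (observed_detection - alpha_bar))
```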
|
Population protocols are a fundamental model in distributed computing, where
many nodes with bounded memory and computational power have random pairwise
interactions over time. This model has been studied in a rich body of
literature aiming to understand the tradeoffs between the memory and time
needed to perform computational tasks.
We study the population protocol model focusing on the communication
complexity needed to achieve consensus with high probability. When the number
of memory states is $s = O(\log \log{n})$, the best upper bound known was given
by a protocol with $O(n \log{n})$ communication, while the best lower bound was
$\Omega(n \log(n)/s)$ communication.
We design a protocol showing that the lower bound is sharp. When each agent has
$s=O(\log^{\theta} n)$ states of memory, with $\theta \in (0,1/2)$, consensus
can be reached in time $O(\log n)$ with $O(n \log(n)/s)$ communication with
high probability.
|
We show that nonparaxial polarized light beams propagating in a bulk
nonlinear Kerr medium naturally exhibit a coupling between the motional and the
polarization degrees of freedom, realizing a spin-orbit-coupled mixture of
fluids of light. We investigate the impact of this mechanism on the Bogoliubov
modes of the fluid, using a suitable density-phase formalism built upon a
linearization of the exact Helmholtz equation. The Bogoliubov spectrum is found
to be anisotropic, and features both low-frequency gapless branches and
high-frequency gapped ones. We compute the amplitudes of these modes and
propose a couple of experimental protocols to study their excitation
mechanisms. This allows us to highlight a phenomenon of hybridization between
density and spin modes, which is absent in the paraxial description and
represents a typical fingerprint of spin-orbit coupling.
|
Model-based reinforcement learning (RL) is more sample efficient than
model-free RL by using imaginary trajectories generated by the learned dynamics
model. When the model is inaccurate or biased, imaginary trajectories may be
deleterious for training the action-value and policy functions. To alleviate
this problem, this paper proposes to adaptively reweight the imaginary
transitions, so as to reduce the negative effects of poorly generated
trajectories. More specifically, we evaluate the effect of an imaginary
transition by calculating the change of the loss computed on the real samples
when we use the transition to train the action-value and policy functions.
Based on this evaluation criterion, we realize the reweighting of each
imaginary transition with a well-designed meta-gradient algorithm. Extensive
experimental results demonstrate that our method outperforms state-of-the-art
model-based and model-free RL algorithms on multiple tasks. Visualization of
our changing weights further validates the necessity of the reweighting
scheme.
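The evaluation criterion can be sketched as a one-step look-ahead on real data; the names and interface below are hypothetical placeholders, not the paper's API:
```python
def imaginary_transition_weight(params, one_step_update, real_loss,
                                imag_batch, lr):
    """Score an imaginary transition by how much a single training step
    that uses it reduces the loss measured on real samples (positive =
    helpful, clipped at zero). A meta-gradient-flavored sketch with
    placeholder function arguments."""
    loss_before = real_loss(params)
    trial_params = one_step_update(params, imag_batch, lr)  # tentative step
    gain = loss_before - real_loss(trial_params)
    return max(gain, 0.0)
```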
|
We present a learning-based planner that aims to robustly drive a vehicle by
mimicking human drivers' driving behavior. We leverage a mid-to-mid approach
that allows us to manipulate the input to our imitation learning network
freely. With that in mind, we propose a novel feedback synthesizer for data
augmentation. It allows our agent to gain more driving experience in various
previously unseen environments that it is likely to encounter, thus improving
overall performance. This is in contrast to prior works that rely purely on
random synthesizers. Furthermore, rather than committing completely to imitation,
we introduce task losses that penalize undesirable behaviors, such as
collision, off-road, and so on. Unlike prior works, this is done by introducing
a differentiable vehicle rasterizer that directly converts the waypoints output
by the network into images. This effectively avoids the usage of heavyweight
ConvLSTM networks and therefore yields a faster model inference time. Regarding
the network architecture, we exploit an attention mechanism that allows the network
to reason about critical objects in the scene and produce more interpretable
attention heatmaps. To further enhance the safety and robustness of the
network, we add an optional optimization-based post-processing planner
that improves driving comfort. We comprehensively validate our method's
effectiveness in different scenarios that are specifically created for
evaluating self-driving vehicles. Results demonstrate that our learning-based
planner achieves high intelligence and can handle complex situations. Detailed
ablation and visualization analysis are included to further demonstrate each of
our proposed modules' effectiveness in our method.
|
In numerous practical applications, especially in medical image
reconstruction, it is often infeasible to obtain a large ensemble of
ground-truth/measurement pairs for supervised learning. Therefore, it is
imperative to develop unsupervised learning protocols that are competitive with
supervised approaches in performance. Motivated by the maximum-likelihood
principle, we propose an unsupervised learning framework for solving ill-posed
inverse problems. Instead of seeking pixel-wise proximity between the
reconstructed and the ground-truth images, the proposed approach learns an
iterative reconstruction network whose output matches the ground-truth in
distribution. Considering tomographic reconstruction as an application, we
demonstrate that the proposed unsupervised approach not only performs on par
with its supervised variant in terms of objective quality measures but also
successfully circumvents the issue of over-smoothing that supervised approaches
tend to suffer from. The improvement in reconstruction quality comes at the
expense of higher training complexity, but, once trained, the reconstruction
time remains the same as its supervised counterpart.
|
Transmon qubits fabricated with tantalum metal have been shown to possess
energy relaxation times greater than 300 $\mu$s and, as such, present an
attractive platform for high precision, correlated noise studies across
multiple higher energy transitions. Tracking the multi-level fluctuating qudit
frequencies with a precision enabled by the high coherence of the device allows
us to extract the charge offset and quasi-particle dynamics. We observe
qualitatively different charge offset behavior in the tantalum device from
that measured in previous low-frequency charge noise studies. In particular,
we find the charge offset dynamics are dominated by rare, discrete jumps
between a finite number of quasi-stationary charge configurations, a previously
unobserved charge noise process in superconducting qubits.
|