We propose Scale-aware AutoAug to learn data augmentation policies for object
detection. We define a new scale-aware search space, where both image- and
box-level augmentations are designed for maintaining scale invariance. Upon
this search space, we propose a new search metric, termed Pareto Scale Balance,
to facilitate search with high efficiency. In experiments, Scale-aware AutoAug
yields significant and consistent improvement on various object detectors
(e.g., RetinaNet, Faster R-CNN, Mask R-CNN, and FCOS), even compared with
strong multi-scale training baselines. Our searched augmentation policies are
transferable to other datasets and box-level tasks beyond object detection
(e.g., instance segmentation and keypoint estimation) to improve performance.
The search cost is much less than previous automated augmentation approaches
for object detection. It is notable that our searched policies have meaningful
patterns, which intuitively provide valuable insight for human data
augmentation design. Code and models will be available at
https://github.com/Jia-Research-Lab/SA-AutoAug.
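To make the flavor of such a policy concrete, the sketch below shows a box-level, scale-conditioned augmentation in Python. It is a minimal illustration only: the area threshold, magnitudes, and zoom rule are invented here, whereas in Scale-aware AutoAug these choices are discovered by search.

```python
import random

def scale_aware_box_augment(image, boxes, small_area=32 * 32,
                            zoom_in_mag=0.3, zoom_out_mag=0.3):
    """Illustrative sketch of a scale-conditioned box-level policy.

    `image` is an H x W x C array and `boxes` a list of (x1, y1, x2, y2).
    Small boxes are gently enlarged and large boxes shrunk, mimicking the
    scale-invariance goal described above. All thresholds and magnitudes
    are hypothetical, not the searched policy.
    """
    out = []
    for (x1, y1, x2, y2) in boxes:
        area = (x2 - x1) * (y2 - y1)
        if area < small_area:
            mag = zoom_in_mag * random.random()    # enlarge small objects
        else:
            mag = -zoom_out_mag * random.random()  # shrink large objects
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        w, h = (x2 - x1) * (1 + mag), (y2 - y1) * (1 + mag)
        out.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return image, out
```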
|
With the incoming 5G network, ubiquitous Internet of Things (IoT) devices,
such as smart cameras and drones, can benefit our daily life. With the
introduction of the millimeter-wave band and the thriving number of IoT
devices, it is critical to design a new dynamic spectrum access (DSA) system to
coordinate spectrum allocation across massive numbers of devices in 5G. In this
paper, we present Hermes, the first decentralized DSA system for massive device
deployments. Specifically, we propose an efficient multi-agent reinforcement
learning algorithm and introduce a novel shuffle mechanism, addressing the
drawbacks of collision and fairness in existing decentralized systems. We
implement Hermes in 5G network via simulations. Extensive evaluations show that
Hermes significantly reduces collisions and improves fairness compared to the
state-of-the-art decentralized methods. Furthermore, Hermes is able to adapt to
environmental changes within 0.5 seconds, demonstrating its practical
deployability in the dynamic environment of 5G.
|
We report the first-ever calculation of the isovector flavor combination of
the chiral-odd twist-3 parton distribution $h_L(x)$ for the proton from lattice
QCD. We employ gauge configurations with two degenerate light quarks, a strange
and a charm quark ($N_f=2+1+1$) of maximally twisted mass fermions with a clover
improvement. The lattice has a spatial extent of 3 fm and lattice spacing of
0.093 fm. The values of the quark masses lead to a pion mass of $260$ MeV. We
use a source-sink time separation of 1.12 fm to control contamination from
excited states. Our calculation is based on the quasi-distribution approach,
with three values for the proton momentum: 0.83 GeV, 1.25 GeV, and 1.67 GeV.
The lattice data are renormalized non-perturbatively using the RI$'$ scheme,
and the final result for $h_L(x)$ is presented in the $\overline{\rm MS}$
scheme at the scale of 2 GeV. Furthermore, we compute in the same setup the
transversity distribution, $h_1(x)$, which allows us, in particular, to compare
$h_L(x)$ to its Wandzura-Wilczek approximation. We also combine results for the
isovector and isoscalar flavor combinations to disentangle the individual quark
contributions for $h_1(x)$ and $h_L(x)$, and address the Wandzura-Wilczek
approximation in that case as well.
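For reference, the Wandzura-Wilczek approximation mentioned above expresses the
twist-3 distribution entirely through the twist-2 transversity,
$$h_L^{\rm WW}(x) = 2x \int_x^1 \frac{dy}{y^2}\, h_1(y),$$
so comparing the lattice $h_L(x)$ with this expression built from the measured
$h_1(x)$ isolates genuine twist-3 contributions.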
|
We present an adaptation of the NPA hierarchy to the setting of synchronous
correlation matrices. Our adaptation improves upon the original NPA hierarchy
by using smaller certificates and fewer constraints, although it can only be
applied to certify synchronous correlations. We recover characterizations for
the sets of synchronous quantum commuting and synchronous quantum correlations.
For applications, we show that the existence of symmetric informationally
complete positive operator-valued measures and maximal sets of mutually
unbiased bases can be verified or invalidated with only two certificates of our
adapted NPA hierarchy.
|
Nonlinear phononics relies on the resonant optical excitation of
infrared-active lattice vibrations to coherently induce targeted structural
deformations in solids. This form of dynamical crystal-structure design has
been applied to control the functional properties of many interesting systems,
including magneto-resistive manganites, magnetic materials, superconductors,
and ferroelectrics. However, phononics has so far been restricted to protocols
in which structural deformations occur locally within the optically excited
volume, sometimes resulting in unwanted heating. Here, we extend nonlinear
phononics to propagating polaritons, effectively separating in space the
optical drive from the functional response. Mid-infrared optical pulses are
used to resonantly drive an 18 THz phonon at the surface of ferroelectric
LiNbO3. A time-resolved stimulated Raman scattering probe reveals that the
ferroelectric polarization is reduced over the entire 50 micron depth of the
sample, far beyond the micron-scale depth of the evanescent phonon field. We
attribute the bulk response of the ferroelectric polarization to the excitation
of a propagating 2.5 THz soft-mode phonon-polariton. For the highest excitation
amplitudes, we reach a regime in which the polarization is reversed. In this
non-perturbative regime, we expect the polariton model to evolve into that of a
solitonic domain wall that propagates from the surface into the material at
near the speed of light.
|
About 5-8% of individuals over the age of 60 have dementia. With our
ever-aging population, this number is likely to increase, making dementia one of
the most important threats to public health in the 21st century. Given the
phenotypic overlap of individual dementias, the diagnosis of dementia is a major
clinical challenge, even with current gold standard diagnostic approaches.
However, it has been shown that certain dementias show specific structural
characteristics in the brain. Progressive supranuclear palsy (PSP) and multiple
system atrophy (MSA) are prototypical examples of this phenomenon, as they
often present with characteristic brainstem atrophy. More detailed
characterization of brain atrophy due to individual diseases is urgently
required to select biomarkers and therapeutic targets that are meaningful to
each disease. Here we present a joint multi-atlas-segmentation and
deep-learning-based segmentation method for fast and robust parcellation of the
brainstem into its four sub-structures, i.e., the midbrain, pons, medulla, and
superior cerebellar peduncles (SCP), that in turn can provide detailed
volumetric information on the brainstem sub-structures affected in PSP and MSA.
The method may also benefit other neurodegenerative diseases, such as
Parkinson's disease; a condition which is often considered in the differential
diagnosis of PSP and MSA. Comparisons with state-of-the-art labeling techniques,
evaluated on ground-truth manual segmentations, demonstrate that our method is
significantly faster than prior methods while also improving the labeling of the
brainstem, indicating that this strategy may be a viable option for better
characterizing the brainstem atrophy seen in PSP and MSA.
|
It is shown that relativistic invariance plays a key role in the study of
integrable systems. Using the relativistically invariant sine-Gordon equation,
the Tzitzeica equation, the Toda fields and the second heavenly equation as
dual relations, some continuous and discrete integrable positive hierarchies
such as the potential modified Korteweg-de Vries hierarchy, the potential
Fordy-Gibbons hierarchies, the potential dispersionless
Kadomtsev-Petviashvili-like (dKPL) hierarchy, the differential-difference dKPL
hierarchy and the second heavenly hierarchies are converted to the integrable
negative hierarchies including the sG hierarchy and the Tzitzeica hierarchy,
the two-dimensional dispersionless Toda hierarchy, the two-dimensional Toda
hierarchies and negative heavenly hierarchy. In (1+1)-dimensional cases the
positive/negative hierarchy dualities are guaranteed by the dualities between
the recursion operators and their inverses. In (2+1)-dimensional cases, the
positive/negative hierarchy dualities are explicitly shown by using the formal
series symmetry approach, the mastersymmetry method and the relativistic
invariance of the duality relations. For the 4-dimensional heavenly system, the
duality problem is studied for the first time via the formal series symmetry
approach. Two elegant commuting recursion operators of the heavenly equation
arise naturally from the formal series symmetry approach, so that the duality
problem can also
be studied by means of the recursion operators.
|
Predictions of biodiversity trajectories under climate change are crucial in
order to act effectively in maintaining the diversity of species. In many
ecological applications, future predictions are made under various global
warming scenarios as described by a range of different climate models. The
outputs of these various predictions call for a reliable interpretation. We
propose an interpretable and flexible two-step methodology to measure the
similarity between predicted species range maps and to cluster the future scenario
predictions utilizing a spectral clustering technique. We find that clustering
based on ecological impact (predicted species range maps) is mainly driven by
the amount of warming. We contrast this with clustering based only on predicted
climate features, which is driven mainly by climate models. The differences
between these clusterings illustrate that it is crucial to incorporate
ecological information to understand the relevant differences between climate
models. The findings of this work can be used to better synthesize forecasts of
biodiversity loss across the wide spectrum of results that emerge when
considering potential future climate scenarios.
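A minimal sketch of such a two-step pipeline is given below; the Jaccard
similarity between binary range maps and scikit-learn's spectral clustering are
illustrative choices, not the authors' exact implementation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_scenarios(range_maps, n_clusters=3):
    """Step 1: pairwise Jaccard similarity between binary species range
    maps of shape (n_scenarios, n_cells). Step 2: spectral clustering on
    the precomputed similarity matrix. Both choices are illustrative."""
    maps = np.asarray(range_maps, dtype=bool)
    n = maps.shape[0]
    sim = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            inter = np.logical_and(maps[i], maps[j]).sum()
            union = np.logical_or(maps[i], maps[j]).sum()
            sim[i, j] = sim[j, i] = inter / union if union else 1.0
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return model.fit_predict(sim)
```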
|
Short-period sub-Neptunes with substantial volatile envelopes are among the
most common types of known exoplanets. However, recent studies of the Kepler
population have suggested a dearth of sub-Neptunes on highly irradiated orbits,
where they are vulnerable to atmospheric photoevaporation. Physically, we
expect this "photoevaporation desert" to depend on the total lifetime X-ray and
extreme ultraviolet flux, the main drivers of atmospheric escape. In this work,
we study the demographics of sub-Neptunes as a function of lifetime exposure to
high energy radiation and host star mass. We find that for a given present day
insolation, planets orbiting a 0.3 $M_{\odot}$ star experience $\sim$100 $\times$
more X-ray flux over their lifetimes than those orbiting a 1.2 $M_{\odot}$ star.
Defining the
photoevaporation desert as a region consistent with zero occurrence at 2
$\sigma$, the onset of the desert happens for integrated X-ray fluxes greater
than 1.43 $\times 10^{22}$ erg/cm$^2$ to 8.23 $\times 10^{20}$ erg/cm$^2$ as a
function of planetary radii for 1.8 -- 4 $R_{\oplus}$. We also compare the
location of the photoevaporation desert for different stellar types. We find
much greater variability in the desert onset in bolometric flux space compared
to integrated X-ray flux space, suggestive of photoevaporation driven by steady
state stellar X-ray emissions as the dominant control on desert location.
Finally, we report tentative evidence for the sub-Neptune valley, first seen
around Sun-like stars, for M and K dwarfs. The discovery of additional planets
around low-mass stars from surveys such as the TESS mission will enable
detailed exploration of these trends.
|
The theoretical maximum efficiency of a solar cell is typically characterized
by a detailed balance of optical absorption and emission for a semiconductor in
the limit of unity radiative efficiency and an ideal step-function response for
the density of states and absorbance at the semiconductor band edges, known as
the Shockley-Queisser limit. However, real materials have non-abrupt band
edges, which are typically characterized by an exponential distribution of
states, known as an Urbach tail. We develop here a modified detailed balance
limit of solar cells with imperfect band edges, using optoelectronic
reciprocity relations. We find that for semiconductors whose band edges are
broader than the thermal energy, kT, there is an effective renormalized bandgap
given by the quasi-Fermi level splitting within the solar cell. This
renormalized bandgap creates a Stokes shift between the onset of the absorption
and photoluminescence emission energies, which significantly reduces the
maximum achievable efficiency. The abruptness of the band edge density of
states therefore has important implications for the maximum achievable
photovoltaic efficiency.
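For orientation, two standard ingredients of such an analysis (textbook forms,
not equations quoted from this abstract) are an Urbach-tail absorptance and the
optoelectronic reciprocity relation,
$$a(E) \simeq e^{(E - E_g)/E_U} \ \ (E < E_g), \qquad
\phi_{\rm em}(E) = a(E)\, \phi_{\rm bb}(E) \left( e^{qV/kT} - 1 \right),$$
which together imply that a band-edge width $E_U$ exceeding $kT$ pushes the
emission onset below the absorption onset, producing the Stokes shift and
renormalized bandgap described above.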
|
We study relativistic hydrodynamics in the presence of a non-vanishing spin
chemical potential. Using a variety of techniques we carry out an exhaustive
analysis, and identify the constitutive relations for the stress tensor and
spin current in such a setup, allowing us to write the hydrodynamic equations
of motion to second order in derivatives. We then solve the equations of motion
in a perturbative setup and find surprisingly good agreement with measurements
of global $\Lambda$-hyperon polarization carried out at RHIC.
|
A monopolist wants to sell one item per period to a consumer with evolving
and persistent private information. The seller sets a price each period
depending on the history so far, but cannot commit to future prices. We show
that, regardless of the degree of persistence, any equilibrium under a D1-style
refinement gives the seller revenue no higher than what she would get from
posting all prices in advance.
|
Developing new artificial systems with unique characteristics is a very
challenging task. In this paper, the application of the hybrid
superintelligence concept with the object-process methodology to the
development of unique high-performance computational systems is considered. A
methodological approach for designing new intelligent components for existing
high-performance computing development systems is proposed, using the creation
of system requirements for the "MicroAI" and "Artificial Electronic" systems as
an example.
|
We define a class of discrete operators that, in particular, include the
delta and nabla fractional operators.
|
Existing image segmentation networks mainly leverage large-scale labeled
datasets to attain high accuracy. However, labeling medical images is very
expensive since it requires sophisticated expert knowledge. Thus, it is more
desirable to employ only a few labeled data in pursuing high segmentation
performance. In this paper, we develop a data augmentation method for one-shot
brain magnetic resonance imaging (MRI) segmentation, which exploits only
one labeled MRI image (termed the atlas) and a few unlabeled images. In particular,
we propose to learn the probability distributions of deformations (including
shapes and intensities) of different unlabeled MRI images with respect to the
atlas via 3D variational autoencoders (VAEs). In this manner, our method is
able to exploit the learned distributions of image deformations to generate new
authentic brain MRI images, and the number of generated samples will be
sufficient to train a deep segmentation network. Furthermore, we introduce a
new standard segmentation benchmark to evaluate the generalization performance
of a segmentation network through a cross-dataset setting (collected from
different sources). Extensive experiments demonstrate that our method
outperforms the state-of-the-art one-shot medical segmentation methods. Our
code has been released at
https://github.com/dyh127/Modeling-the-Probabilistic-Distribution-of-Unlabeled-Data.
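To make the modeling idea concrete, a minimal 3D VAE over deformation fields
might look like the PyTorch sketch below; the $32^3$ grid, channel counts, and
layer sizes are invented for illustration and do not reproduce the authors'
network.

```python
import torch
import torch.nn as nn

class DeformationVAE3D(nn.Module):
    """Toy 3D VAE: encodes a 3-channel deformation field on a 32^3 grid
    into a latent Gaussian; sampling the latent yields new deformations
    that could warp the atlas into synthetic training images."""

    def __init__(self, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # -> 16^3
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # -> 8^3
            nn.Flatten())
        self.fc_mu = nn.Linear(32 * 8 ** 3, latent_dim)
        self.fc_logvar = nn.Linear(32 * 8 ** 3, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 ** 3), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8, 8)),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 3, 4, stride=2, padding=1))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam.
        return self.dec(z), mu, logvar
```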
|
We study the homogenized energy densities of periodic ferromagnetic Ising
systems. We prove that, for finite range interactions, the homogenized energy
density, identifying the effective limit, is crystalline, i.e., its Wulff
crystal is a polytope, for which we can (exponentially) bound the number of
vertices. This is achieved by deriving a dual representation of the energy
density through a finite cell formula. This formula also allows easy numerical
computations: we show a few experiments where we compute periodic patterns
which minimize the anisotropy of the surface tension.
|
Classical two-sample permutation tests for equality of distributions have
exact size in finite samples, but they fail to control size for testing
equality of parameters that summarize each distribution. This paper proposes
permutation tests for equality of parameters that are estimated at root-n or
slower rates. Our general framework applies to both parametric and
nonparametric models, with two samples or one sample split into two subsamples.
Our tests have correct size asymptotically while preserving exact size in
finite samples when distributions are equal. They have no loss in
local-asymptotic power compared to tests that use asymptotic critical values.
We propose confidence sets with correct coverage in large samples that also
have exact coverage in finite samples if distributions are equal up to a
transformation. We apply our theory to four commonly-used hypothesis tests of
nonparametric functions evaluated at a point. Lastly, simulations show good
finite sample properties of our tests.
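For readers less familiar with the baseline construction the paper refines, a
generic (non-studentized) two-sample permutation test for a mean is sketched
below; the paper's studentized statistics, which deliver the asymptotic size
control, are not reproduced here.

```python
import numpy as np

def permutation_pvalue_mean(x, y, n_perm=9999, seed=0):
    """Generic two-sample permutation test for equality of means.
    Exact size holds when the two distributions are equal; the paper's
    contribution is a choice of statistic that also controls size when
    only the parameters (not the distributions) are equal."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    pooled = np.concatenate([x, y])
    observed = abs(x.mean() - y.mean())
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = abs(perm[:len(x)].mean() - perm[len(x):].mean())
        hits += stat >= observed
    return (hits + 1) / (n_perm + 1)  # standard exact p-value convention
```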
|
The private simultaneous messages model is a non-interactive version of
multiparty secure computation, which has been intensively studied to examine
the communication cost of secure computation. We consider its quantum
counterpart, the private simultaneous quantum messages (PSQM) model, and
examine the advantages of quantum communication and prior entanglement of this
model. In the PSQM model, $k$ parties $P_1,\ldots,P_k$ initially share a common
random string (or entangled states in a stronger setting), and they have
private classical inputs $x_1,\ldots, x_k$. Every $P_i$ generates a quantum
message from the private input $x_i$ and the shared random string (entangled
states), and then sends it to the referee $R$. Receiving the messages, $R$
computes $F(x_1,\ldots,x_k)$. The privacy condition requires that $R$ learn
nothing except $F(x_1,\ldots,x_k)$. We obtain the following results
for this PSQM model. (1) We demonstrate that the privacy condition inevitably
increases the communication cost in the two-party PSQM model as well as in the
classical case presented by Applebaum, Holenstein, Mishra, and Shayevitz. In
particular, we prove a lower bound $(3-o(1))n$ of the communication complexity
in PSQM protocols with a shared random string for random Boolean functions of
$2n$-bit input, which is larger than the trivial upper bound $2n$ of the
communication complexity without the privacy condition. (2) We demonstrate a
factor two gap between the communication complexity of PSQM protocols with
shared entangled states and with shared random strings by designing a
multiparty PSQM protocol with shared entangled states for a total function that
extends the two-party equality function. (3) We demonstrate an exponential gap
between the communication complexity of PSQM protocols with shared entangled
states and with shared random strings for a two-party partial function.
|
Low-Rank Parity-Check (LRPC) codes are a class of rank metric codes that have
many applications, notably in network coding and cryptography. Recently,
LRPC codes have been extended to Galois rings, which are a specific case of
finite rings. In this paper, we first define LRPC codes over finite commutative
local rings, which are the building blocks of finite rings, together with an
efficient decoder. We improve the theoretical bound on the failure probability
of the decoder. Then, we extend the work to arbitrary finite commutative rings.
Certain conditions are generally required to ensure the success of the decoder.
Over finite fields, one of these conditions is to choose a prime number as the
extension degree of the Galois field. We show that one can construct LRPC codes
without this condition on the degree of the Galois extension.
|
The isolated susceptibility $\chi_{\rm I}$ may be defined as a
(non-thermodynamic) average over the canonical ensemble, but while it has often
been discussed in the literature, it has not been clearly measured. Here, we
demonstrate an unambiguous measurement of $\chi_{\rm I}$ at avoided
nuclear-electronic level crossings in a dilute spin ice system, containing
well-separated holmium ions. We show that $\chi_{\rm I}$ quantifies the
superposition of quasi-classical spin states at these points, and is a direct
measure of state concurrence and populations.
|
The econometric literature on program evaluation and optimal treatment choice
takes functionals of outcome distributions as target welfare, and ignores
program impacts on unobserved utilities, including utilities of those whose
outcomes may be unaffected by the intervention. We show that in the practically
important setting of discrete choice, under general preference heterogeneity
and income effects, the distribution of indirect utility is nonparametrically
identified from average demand. This enables cost-benefit analysis and
treatment targeting based on social welfare and planners' distributional
preferences, while also allowing for general unobserved heterogeneity in
individual preferences. We demonstrate theoretical connections between
utilitarian social welfare and Hicksian compensation. An empirical application
illustrates our results.
|
We demonstrate that multiple higher-order topological transitions can be
triggered via the continuous change of the geometry in kagome photonic crystals
composed of three dielectric rods. By tuning a single geometry parameter, the
photonic corner and edge states emerge or disappear with the higher-order
topological transitions. Two distinct higher-order topological insulator phases
and a normal insulator phase are revealed. Their topological indices are
obtained from symmetry representations. A photonic analog of fractional corner
charge is introduced to distinguish the two higher-order topological insulator
phases. Our predictions can be readily realized and verified in configurable
dielectric photonic crystals.
|
A prototype neutron detector has been created by modifying a commercial
non-volatile flash memory device. Studies are being performed to develop this
prototype into a purpose-built device with greater performance and
functionality. This paper describes a demonstration of this technology using a
thermal neutron beam produced by a TRIGA research reactor. With a 4x4 array of
16 prototype devices, the full widths at half maximum of the beam dimensions
are measured to be 2.2 x 2.1 cm$^2$.
|
Mathematical modelling of ionic electrodiffusion and water movement is
emerging as a powerful avenue of investigation to provide new physiological
insight into brain homeostasis. However, in order to provide solid answers and
resolve controversies, the accuracy of the predictions is essential. Ionic
electrodiffusion models typically comprise non-trivial systems of non-linear
and highly coupled partial and ordinary differential equations that govern
phenomena on disparate time scales. Here, we study numerical challenges related
to approximating these systems. We consider a homogenized model for
electrodiffusion and osmosis in brain tissue and present and evaluate different
associated finite element-based splitting schemes in terms of their numerical
properties, including accuracy, convergence, and computational efficiency for
both idealized scenarios and for the physiologically relevant setting of
cortical spreading depression (CSD). We find that the schemes display optimal
convergence rates in space for problems with smooth manufactured solutions.
However, the physiological CSD setting is challenging: we find that the
accurate computation of CSD wave characteristics (wave speed and wave width)
requires very fine spatial and temporal resolution.
|
The Chermak-Delgado lattice of a finite group $G$ is a self-dual sublattice
of the subgroup lattice of $G$. In this paper, we focus on finite groups whose
Chermak-Delgado lattice is a subgroup lattice of an elementary abelian
$p$-group. We prove that such groups are nilpotent of class $2$. We also prove
that, for any elementary abelian $p$-group $E$, there exists a finite group $G$
such that the Chermak-Delgado lattice of $G$ is a subgroup lattice of $E$.
|
Let $(X, H)$ be a polarized smooth projective algebraic surface and let $E$ be a
globally generated, stable vector bundle on $X$. The syzygy bundle $M_E$
associated to $E$ is defined as the kernel bundle corresponding to the
evaluation map. In this article we study the stability of $M_E$
with respect to $H$.
|
Contagion arising from the clustering of multiple time series, like those among
stock market indicators, can further complicate the nature of volatility,
causing a parametric test (relying on an asymptotic distribution) to suffer
from issues of size and power. We propose a test of volatility based on the
bootstrap method for multiple time series, intended to account for the possible
presence of contagion effects. While the test is fairly robust to
distributional assumptions, it depends on the nature of volatility. The test is
correctly sized even in cases where the time series are nearly nonstationary.
It is also powerful, especially when the time series are stationary in mean and
volatility is contained in only a few clusters. We illustrate the method on
global stock price data.
|
The collateral choice option gives the collateral posting party the
opportunity to switch between different collateral currencies, which is well
known to impact the asset price. Quantification of the option's value is
of practical importance but remains challenging under the assumption of
stochastic rates, as it is determined by an intractable distribution which
requires involved approximations. Indeed, many practitioners still rely on
deterministic spreads between the rates for valuation. We develop a scalable
and stable stochastic model of the collateral spreads under the assumption of
conditional independence. This allows for a common factor approximation which
admits analytical results from which further estimators are obtained. We show
that in modelling the spreads between collateral rates, a second order model
yields accurate results for the value of the collateral choice option. The
model remains precise for a wide range of model parameters and is numerically
efficient even for a large number of collateral currencies.
|
In this work we investigate neutron stars (NS) in $f(\mathtt{R,L_m})$ theory
of gravity for the case $f(\mathtt{R,L_m}) = \mathtt{R} + \mathtt{L_m} +
\sigma\mathtt{R}\mathtt{L_m}$, where $\mathtt{R}$ is the Ricci scalar and
$\mathtt{L_m}$ the Lagrangian matter density. In the term
$\sigma\mathtt{R}\mathtt{L_m}$, $\sigma$ represents the coupling between the
gravitational and particle fields. For the first time, the hydrostatic
equilibrium equations of the theory are solved considering realistic equations
of state, and the NS masses and radii obtained are subjected to joint
constraints from massive pulsars, the gravitational wave event GW170817, and
the PSR J0030+0451 mass-radius measurement from NASA's Neutron Star Interior
Composition Explorer (${\it NICER}$). We show that in this theory of gravity,
the mass-radius results can accommodate massive pulsars, which the general
theory of relativity can hardly do. The theory can also explain the observed
NSs within the radius region constrained by the GW170817 and PSR J0030+0451
observations for masses around $1.4~M_{\odot}$.
|
Let $B\subset A$ be a left or right bounded extension of finite dimensional
algebras. We use the Jacobi-Zariski long nearly exact sequence to show that $B$
satisfies Han's conjecture if and only if $A$ does, regardless of whether the
extension splits. We provide conditions ensuring that an extension by arrows and
relations is left or right bounded. Finally we give a structure result for
extensions of an algebra given by a quiver and admissible relations, and
examples of non split left or right bounded extensions.
|
The genuine concurrence is a standard quantifier of multipartite
entanglement, whose detection and quantification remain a difficult
problem from both theoretical and experimental points of view. Although many
efforts have been devoted to the detection of multipartite entanglement
(e.g., using entanglement witnesses), measuring the degree of multipartite
entanglement, in general, requires some knowledge of the exact form of the
density matrix of the quantum state. An experimental reconstruction of such a
density matrix can be done by full state tomography, which amounts to having the
distant parties share a common reference frame and well-calibrated devices.
Although this assumption is typically made implicitly in theoretical works,
establishing a common reference frame, as well as aligning and calibrating
measurement devices in experimental situations are never trivial tasks. It is
therefore an interesting and important question whether the requirements of
having a shared reference frame and calibrated devices can be relaxed. In this
work we study both theoretically and experimentally the genuine concurrence for
the generalized Greenberger-Horne-Zeilinger states under randomly chosen
measurements on single qubits without a shared frame of reference and
calibrated devices. We present the relation between genuine concurrence and
so-called nonlocal volume, a recently introduced indicator of nonlocality.
|
This paper presents dEchorate: a new database of measured multichannel Room
Impulse Responses (RIRs) including annotations of early echo timings and 3D
positions of microphones, real sources and image sources under different wall
configurations in a cuboid room. These data provide a tool for benchmarking
recent methods in echo-aware speech enhancement, room geometry estimation, RIR
estimation, acoustic echo retrieval, microphone calibration, echo labeling, and
reflector estimation. The database is accompanied by software utilities to
easily access, manipulate, and visualize the data, as well as baseline methods
for echo-related tasks.
|
We present an X-ray analysis of the ejecta of the supernova remnant
G350.1$-$0.3 observed with Chandra and Suzaku, clarifying the ejecta's
kinematics over a decade and obtaining a new observational clue to
understanding the origin of the asymmetric explosion. Two images from the
Chandra X-ray Observatory taken in 2009 and 2018 are analyzed with several
methods, enabling us to measure the velocities in the plane of the sky. The
maximum velocity is 4640$\pm$290 km s$^{-1}$ (0.218$\pm$0.014 arcsec
yr$^{-1}$), found in the eastern region of the remnant. These findings motivate
us to scrutinize the Doppler effects in the spectra of the thermal emission,
and the velocities in the line-of-sight direction are estimated to be on the
order of a thousand km s$^{-1}$. The results are confirmed by analyzing the
Suzaku spectra. Combining the proper motions and line-of-sight velocities, the
ejecta's three-dimensional velocities are $\sim$3000-5000 km s$^{-1}$. The
center of the explosion is more stringently constrained by finding the optimal
time to reproduce the observed spatial expansion. Our findings that the age of
the SNR is at most 655 years and that the CCO appears as a point source against
the SNR strengthen the 'hydrodynamical kick' hypothesis for the origin of the
remnant.
|
We present a derivation-based Atiyah sequence for noncommutative principal
bundles. Along the way we treat the problem of deciding when a given
*-automorphism on the quantum base space lifts to a *-automorphism on the
quantum total space that commutes with the underlying structure group.
|
We formulate the Lagrangian of Newtonian cosmology in which a cosmological
constant is also introduced. Following the affine quantization procedure, the
Hamiltonian operator is derived. The wave functions of the Newtonian universe
and the corresponding eigenvalues for the case of matter dominated by a
negative cosmological constant are given.
|
Learning effective representations in image-based environments is crucial for
sample efficient Reinforcement Learning (RL). Unfortunately, in RL,
representation learning is confounded with the exploratory experience of the
agent -- learning a useful representation requires diverse data, while
effective exploration is only possible with coherent representations.
Furthermore, we would like to learn representations that not only generalize
across tasks but also accelerate downstream exploration for efficient
task-specific training. To address these challenges we propose Proto-RL, a
self-supervised framework that ties representation learning with exploration
through prototypical representations. These prototypes simultaneously serve as
a summarization of the exploratory experience of an agent as well as a basis
for representing observations. We pre-train these task-agnostic representations
and prototypes on environments without downstream task information. This
enables state-of-the-art downstream policy learning on a set of difficult
continuous control tasks.
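A hedged sketch of the central primitive, soft-assigning embeddings to
prototypes, is shown below; the actual Proto-RL objective, prototype updates,
and exploration bonus are not reproduced here.

```python
import torch
import torch.nn.functional as F

def prototype_probs(embeddings, prototypes, temperature=0.1):
    """Soft-assign encoder embeddings to a set of prototype vectors.
    `embeddings`: (batch, dim); `prototypes`: (num_protos, dim).
    The resulting distribution summarizes experience and provides a
    basis for representing observations; the temperature is illustrative."""
    z = F.normalize(embeddings, dim=-1)
    c = F.normalize(prototypes, dim=-1)
    return (z @ c.t() / temperature).softmax(dim=-1)
```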
|
The paper discusses how robots enable occupant-safe continuous protection for
students when schools reopen. Conventionally, fixed air filters are not used as
a key pandemic prevention method for public indoor spaces because they are
unable to trap the airborne pathogens in time in the entire room. However, by
combining the mobility of a robot with air filtration, the efficacy of cleaning
the air around multiple people is greatly increased. A disinfection co-robot
prototype is thus developed to provide continuous and occupant-friendly
protection to people gathering indoors, specifically for students in a
classroom scenario. In a static classroom with students sitting in a grid
pattern, the mobile robot is able to serve up to 14 students per cycle while
reducing the worst-case pathogen dosage by 20%, and with higher robustness
compared to a static filter. The extent of robot protection is optimized by
tuning the passing distance and speed, such that a robot is able to serve more
people given a threshold of worst-case dosage a person can receive.
|
We study orbit codes in the field extension ${\mathbb F}_{q^n}$. First we
show that the automorphism group of a cyclic orbit code is contained in the
normalizer of the Singer subgroup if the orbit is generated by a subspace that
is not contained in a proper subfield of ${\mathbb F}_{q^n}$. We then
generalize to orbits under the normalizer of the Singer subgroup. In that
situation some exceptional cases arise and some open cases remain. Finally we
characterize linear isometries between such codes.
|
Optimal design of distributed decision policies can be a difficult task,
illustrated by the famous Witsenhausen counterexample. In this paper we
characterize the optimal control designs for the vector-valued setting assuming
that it results in an internal state that can be described by a continuous
random variable which has a probability density function. More specifically, we
provide a genie-aided outer bound that relies on our previous results for
empirical coordination problems. This solution turns out to be not optimal in
general, since it consists of a time-sharing strategy between two linear
schemes of specific power. It follows that the optimal decision strategy for
the original scalar Witsenhausen problem must lead to an internal state that
cannot be described by a continuous random variable which has a probability
density function.
|
A user generates n independent and identically distributed data random
variables with a probability mass function that must be guarded from a querier.
The querier must recover, with a prescribed accuracy, a given function of the
data from each of n independent and identically distributed query responses
upon eliciting them from the user. The user chooses the data probability mass
function and devises the random query responses to maximize distribution
privacy as gauged by the (Kullback-Leibler) divergence between the former and
the querier's best estimate of it based on the n query responses. Considering
an arbitrary function, a basic achievable lower bound for distribution privacy
is provided that does not depend on n and corresponds to worst-case privacy.
Worst-case privacy equals the log-sum of the cardinalities of inverse atoms
under the given function, with the number of summands decreasing as the querier
recovers the function with improving accuracy. Next, upper (converse) and lower
(achievability) bounds for distribution privacy, dependent on n, are developed.
The former improves upon worst-case privacy and the latter does so under
suitable assumptions; both converge to it as n grows. The converse and
achievability proofs identify explicit strategies for the user and the querier.
|
We consider the problem of assigning agents to programs in the presence of
two-sided preferences, commonly known as the Hospital Residents problem. In the
standard setting each program has a rigid upper-quota which cannot be violated.
Motivated by applications where quotas are governed by resource availability,
we propose and study the problem of computing optimal matchings with
cost-controlled quotas -- denoted as the CCQ setting. In the CCQ setting we
have a cost associated with every program which denotes the cost of matching a
single agent to the program and these costs control the quotas. Our goal is to
compute a matching that matches all agents, respects the preference lists of
agents and programs and is optimal with respect to the cost criteria. We study
two optimization problems with respect to the costs -- minimize the total cost
(MINSUM) and minimize the maximum cost at a program (MINMAX). We show that
there is a sharp contrast in the complexity status of these two problems --
MINMAX is polynomial time solvable whereas MINSUM is NP-hard and hard to
approximate within a constant factor unless P = NP even under severe
restrictions. On the positive side, we present approximation algorithms for
MINSUM in the general case and in a special hard case. The special hard case is
theoretically challenging as well as practically motivated and we present a
Linear Programming based algorithm for this case. We also establish the
connection of our model with the stable extension problem in an apparently
different two-round setting of the stable matching problem [Gajulapalli et al.
FSTTCS 2020]. We show that our results in the CCQ setting generalize the stable
extension problem.
|
One-shot semantic image segmentation aims to segment the object regions for
the novel class with only one annotated image. Recent works adopt the episodic
training strategy to mimic the expected situation at testing time. However,
these existing approaches simulate the test conditions too strictly during the
training process, and thus cannot make full use of the given label information.
Besides, these approaches mainly focus on the foreground-background target
class segmentation setting. They only utilize binary mask labels for training.
In this paper, we propose to leverage the multi-class label information during
the episodic training. It will encourage the network to generate more
semantically meaningful features for each category. After integrating the
target class cues into the query features, we then propose a pyramid feature
fusion module to mine the fused features for the final classifier. Furthermore,
to take further advantage of the support image-mask pair, we propose a
self-prototype guidance branch to guide segmentation of the support image. It can constrain
the network for generating more compact features and a robust prototype for
each semantic class. For inference, we propose a fused prototype guidance
branch for the segmentation of the query image. Specifically, we leverage the
prediction of the query image to extract the pseudo-prototype and combine it
with the initial prototype. Then we utilize the fused prototype to guide the
final segmentation of the query image. Extensive experiments demonstrate the
superiority of our proposed approach.
|
Digital content has grown dramatically in recent years, leading to
increased attention to copyright. Image watermarking has been considered one of
the most popular methods for copyright protection. With the recent advancements
in applying deep neural networks in image processing, these networks have also
been used in image watermarking. Robustness and imperceptibility are two
challenging requirements of watermarking methods, between which a trade-off
must be struck. In this paper, we propose to use an end-to-end network for
watermarking. We use a convolutional neural network (CNN) to control the
embedding strength based on the image content. Dynamic embedding helps the
network to have the lowest effect on the visual quality of the watermarked
image. Different image processing attacks are simulated as a network layer to
improve the robustness of the model. Our method is a blind watermarking
approach that replicates the watermark string to create a matrix of the same
size as the input image. Instead of diffusing the watermark data into the input
image, we inject the data into the feature space and force the network to do
this in regions that increase the robustness against various attacks.
Experimental results show the superiority of the proposed method in terms of
imperceptibility and robustness compared to the state-of-the-art algorithms.
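The replication step described above is simple to state in code; the sketch
below shows only that step (the CNN embedding, dynamic strength control, and
attack layers are not reproduced).

```python
import numpy as np

def replicate_watermark(bits, image_shape):
    """Tile a binary watermark string cyclically into a matrix with the
    input image's height and width, as described in the abstract."""
    h, w = image_shape[:2]
    tiled = np.resize(np.asarray(bits, dtype=np.float32), h * w)
    return tiled.reshape(h, w)
```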
|
This paper examines a continuous time intertemporal consumption and portfolio
choice problem with a stochastic differential utility preference of Epstein-Zin
type for a robust investor, who worries about model misspecification and seeks
robust decision rules. We provide a verification theorem which formulates the
Hamilton-Jacobi-Bellman-Isaacs equation under a non-Lipschitz condition. Then,
with the verification theorem, the explicit closed-form optimal robust
consumption and portfolio solutions to a Heston model are given. We also
compare our robust solutions with the non-robust ones; the comparisons, shown
in several figures, accord with economic intuition.
|
We study transmission in a system consisting of a curved graphene surface as
an arc (ripple) of circle connected to two flat graphene sheets on the left and
right sides. We introduce a mass term in the curved part and study the effect
of a generated band gap in spectrum on transport properties for spin-up/-down.
The tunneling analysis allows us to find all transmission and reflections
channels modeled by the band gap. This later acts by decreasing the
transmissions with spin-up/-down but increasing with spin opposite, which
exhibit some behaviors look like bell-shaped curve. We find resonances
appearing in reflection with the same spin, thus backscattering with a
spin-up/-down is not null in ripple. We observe huge spatial shifts for the
total conduction in our model and the magnitudes of these shifts can be
efficiently controlled by adjusting the band gap. This high order tunability of
the tunneling effect can be used to design highly accurate devises based on
graphene.
|
The cells and their spatial patterns in the tumor microenvironment (TME) play
a key role in tumor evolution, and yet the latter remains an understudied topic
in computational pathology. This study, to the best of our knowledge, is among
the first to hybridize local and global graph methods to profile orchestration
and interaction of cellular components. To address the challenge in
hematolymphoid cancers, where the cell classes in TME may be unclear, we first
implemented cell-level unsupervised learning and identified two new cell
subtypes. Local cell graphs or supercells were built for each image by
considering the individual cell's geospatial location and classes. Then, we
applied supercell level clustering and identified two new cell communities. In
the end, we built global graphs to abstract spatial interaction patterns and
extract features for disease diagnosis. We evaluate the proposed algorithm on
H&E slides of 60 hematolymphoid neoplasms and further compared it with three
cell level graph-based algorithms, including the global cell graph, cluster
cell graph, and FLocK. The proposed algorithm achieved a mean diagnosis
accuracy of 0.703 with the repeated 5-fold cross-validation scheme. In
conclusion, our algorithm shows superior performance over the existing methods
and can be potentially applied to other cancer types.
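A minimal sketch of the local-graph step is given below (plain
k-nearest-neighbor connectivity over cell centroids; the class-aware supercell
construction and global-graph stage are omitted).

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def build_cell_graph(centroids, k=5):
    """Connect each cell to its k nearest neighbors by centroid
    distance; returns a sparse weighted adjacency matrix usable with
    scipy or networkx. k is an illustrative choice."""
    return kneighbors_graph(np.asarray(centroids), n_neighbors=k,
                            mode="distance", include_self=False)
```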
|
Research in the Vision and Language area encompasses challenging topics that
seek to connect visual and textual information. When the visual information is
related to videos, this takes us into Video-Text Research, which includes
several challenging tasks such as video question answering, video summarization
with natural language, and video-to-text and text-to-video conversion. This
paper reviews the video-to-text problem, in which the goal is to associate an
input video with its textual description. This association can be mainly made
by retrieving the most relevant descriptions from a corpus or generating a new
one given a context video. These two ways represent essential tasks for
Computer Vision and Natural Language Processing communities, called text
retrieval from video task and video captioning/description task. These two
tasks are substantially more complex than predicting or retrieving a single
sentence from an image. The spatiotemporal information present in videos
introduces diversity and complexity regarding the visual content and the
structure of associated language descriptions. This review categorizes and
describes the state-of-the-art techniques for the video-to-text problem. It
covers the main video-to-text methods and the ways to evaluate their
performance. We analyze twenty-six benchmark datasets, showing their drawbacks
and strengths for the problem requirements. We also show the progress that
researchers have made on each dataset, we cover the challenges in the field,
and we discuss future research directions.
|
We study static magnetic susceptibility $\chi(T, \mu)$ in $SU(2)$ lattice
gauge theory with $N_f = 2$ light flavours of dynamical fermions at finite
chemical potential $\mu$. Using linear response theory we find that $SU(2)$
gauge theory exhibits paramagnetic behavior in both the high-temperature
deconfined regime and the low-temperature confining regime. Paramagnetic
response becomes stronger at higher temperatures and larger values of the
chemical potential. For our range of temperatures $0.727 \leq T/T_c \leq 2.67$,
the first coefficient of the expansion of $\chi(T, \mu)$ in even powers of
$\mu/T$ around $\mu=0$ is close to that of free quarks and lies in the range
$(2 \ldots 5) \cdot 10^{-3}$. The strongest paramagnetic response is found in
the diquark condensation phase at $\mu > m_{\pi}/2$.
|
We consider the problem of estimation and structure learning of high
dimensional signals via a normal sequence model, where the underlying parameter
vector is piecewise constant, or has a block structure. We develop a Bayesian
fusion estimation method by using the Horseshoe prior to induce a strong
shrinkage effect on successive differences in the mean parameters,
simultaneously imposing sufficient prior concentration for non-zero values of
the same. The proposed method thus facilitates consistent estimation and
structure recovery of the signal pieces. We provide theoretical justifications
of our approach by deriving posterior convergence rates and establishing
selection consistency under suitable assumptions. We also extend our proposed
method to signal de-noising over arbitrary graphs and develop efficient
computational methods along with providing theoretical guarantees. We
demonstrate the superior performance of the Horseshoe based Bayesian fusion
estimation method through extensive simulations and two real-life examples on
signal de-noising in biological and geophysical applications. We also
demonstrate the estimation performance of our method on a real-world large
network for the graph signal de-noising problem.
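As a hedged sketch of the prior structure (the standard horseshoe form placed
on successive differences; the symbols here are chosen for illustration): for
mean parameters $\theta_1,\dots,\theta_n$,
$$\theta_{j+1} - \theta_j \mid \lambda_j, \tau \sim \mathcal{N}(0, \lambda_j^2 \tau^2),
\qquad \lambda_j \sim C^{+}(0,1), \qquad \tau \sim C^{+}(0,1),$$
where $C^{+}(0,1)$ denotes the standard half-Cauchy distribution. The heavy
tails permit occasional large jumps between blocks, while global-local
shrinkage pulls most successive differences toward zero.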
|
Let $G$ be a finite group and let ${\rm cd}(G)$ denote the set of the degrees of
the complex irreducible characters of $G$. Also, let ${\rm cod}(G)$ be the set
of codegrees of the irreducible characters of $G$. The Taketa problem
conjectures that if $G$ is solvable, then ${\rm dl}(G) \leq |{\rm cd}(G)|$, where
${\rm dl}(G)$ is the derived length of $G$. In this note, we show that ${\rm
dl}(G) \leq |{\rm cod}(G)|$ in some cases and we conjecture that this
inequality holds if $G$ is a finite solvable group.
|
In this article, we model the motion of small-scale eddies in Earth's lower
atmosphere as a compressible neutral fluid flow on a rotating sphere. To
justify the model, we carried out a numerical computation of the thermodynamic
and hydrodynamic properties of viscous atmospheric motion in two dimensions
using the Navier-Stokes equations, conservation of atmospheric energy, and the
continuity equation. The dynamics of the atmosphere are governed by partial
differential equations without any approximation and without considering a
latitude-dependent acceleration due to gravity. The governing equations were
solved numerically by the finite difference method, applying a horizontal
air-mass-density perturbation to the atmosphere at a longitude of
$5\Delta\lambda$. Based on this initial boundary condition, and taking
temperature-dependent transport coefficients into account, we obtain the
propagation of each atmospheric parameter and present it graphically as a
function of geometric position and time. All of the parameters oscillate with
respect to time and satisfy the characteristics of atmospheric waves.
Finally, the effect of the Coriolis force on the resultant velocity is also
discussed by plotting contour lines of the resultant velocity for different
magnitudes of the Coriolis force; we also obtain interesting wave phenomena for
the respective rotations of the Coriolis force.
Keywords: Navier-Stokes equations; finite difference method; viscous
atmospheric motion; viscous dissipation; convective motion.
|
Microbiota profiles measure the structure of microbial communities in a
defined environment (known as microbiomes). In the past decade, microbiome
research has focused on health applications as a result of which the gut
microbiome has been implicated in the development of a broad range of diseases
such as obesity, inflammatory bowel disease, and major depressive disorder. A
key goal of many microbiome experiments is to characterise or describe the
microbial community. High-throughput sequencing is used to generate microbiota
profiles, but data gathered via this method are extremely challenging to
analyse, as the data violate multiple strong assumptions of standard models.
Rough Set Theory (RST) has weak assumptions that are less likely to be
violated, and offers a range of attractive tools for extracting knowledge from
complex data. In this paper we present the first application of RST for
characterising microbiomes. We begin with a demonstrative benchmark microbiota
profile and extend the approach to gut microbiomes gathered from depressed
subjects to enable knowledge discovery. We find that RST is capable of
excellent characterisation of the gut microbiomes in depressed subjects and
identifying previously undescribed alterations to the microbiome-gut-brain
axis. An important aspect of the application of RST is that it provides a
possible solution to an open research question regarding the search for an
optimal normalisation approach for microbiome census data, as one does not
currently exist.
|
Multistability is a common phenomenon which naturally occurs in complex
networks. If coexisting attractors are numerous and their basins of attraction
are complexly interwoven, the long-term response to a perturbation can be
highly uncertain. We examine the uncertainty in the outcome of perturbations to
the synchronous state in a Kuramoto-like representation of the British power
grid. Based on local basin landscapes which correspond to single-node
perturbations, we demonstrate that the uncertainty shows strong spatial
variability. While perturbations at many nodes only allow for a few outcomes,
other local landscapes show extreme complexity with more than a hundred basins.
Particularly complex domains in the latter can be related to unstable invariant
chaotic sets of saddle type. Most importantly, we show that the characteristic
dynamics on these chaotic saddles can be associated with certain topological
structures of the network. We find that one particular tree-like substructure
allows for the chaotic response to perturbations at nodes in the north of Great
Britain. The interplay with other peripheral motifs increases the uncertainty
in the system response even further.
|
Ginzburg algebras associated to triangulated surfaces provide a means to
categorify the cluster algebras of these surfaces. As shown by Ivan Smith, the
finite derived category of such a Ginzburg algebra can be embedded into the
Fukaya category of the total space of a Lefschetz fibration over the surface.
Inspired by this perspective we provide a description of the full derived
category in terms of a perverse schober. The main novelty is a gluing formalism
describing the Ginzburg algebra as a colimit of certain local Ginzburg algebras
associated to discs. As a first application we give a new proof of the derived
invariance of these Ginzburg algebras under flips of an edge of the
triangulation. Finally, we note that the perverse schober as well as the
resulting gluing construction can also be defined over the sphere spectrum.
|
Fourth-order interference is an information processing primitive for photonic
quantum technologies. When used in conjunction with post-selection, it forms
the basis of photonic controlled logic gates, entangling measurements, and can
be used to produce quantum correlations. Here, using classical weak coherent
states as inputs, we study fourth-order interference in novel $4 \times 4$
multi-port beam splitters built within multi-core optical fibers. Using two
mutually incoherent weak laser pulses as inputs, we observe high-quality
fourth-order interference between photons from different cores, as well as
self-interference of a two-photon wavepacket. In addition, we show that quantum
correlations, in the form of quantum discord, can be maximized by controlling
the intensity ratio between the two input weak coherent states. This should
allow for the exploitation of quantum correlations in future telecommunication
networks.
|
Plant phenotyping, that is, the quantitative assessment of plant traits
including growth, morphology, physiology, and yield, is a critical aspect
towards efficient and effective crop management. Currently, plant phenotyping
is a manually intensive and time consuming process, which involves human
operators making measurements in the field, based on visual estimates or using
hand-held devices. In this work, methods for automated grapevine phenotyping
are developed, aimed at canopy volume estimation and bunch detection and
counting. It is demonstrated that both measurements can be effectively
performed in the field using a consumer-grade depth camera mounted onboard an
agricultural vehicle.
|
Unsupervised domain adaptation (UDA) methods for person re-identification
(re-ID) aim at transferring re-ID knowledge from labeled source data to
unlabeled target data. Although achieving great success, most of them only use
limited data from a single-source domain for model pre-training, making the
rich labeled data insufficiently exploited. To make full use of the valuable
labeled data, we introduce the multi-source concept into UDA person re-ID
field, where multiple source datasets are used during training. However,
because of domain gaps, simply combining different datasets only brings limited
improvement. In this paper, we try to address this problem from two
perspectives, i.e., a domain-specific view and a domain-fusion view. Two
constructive modules are proposed, and they are compatible with each other.
First, a rectification domain-specific batch normalization (RDSBN) module is
explored to simultaneously reduce domain-specific characteristics and increase
the distinctiveness of person features. Second, a graph convolutional network
(GCN) based multi-domain information fusion (MDIF) module is developed, which
minimizes domain distances by fusing features of different domains. The
proposed method outperforms state-of-the-art UDA person re-ID methods by a
large margin, and even achieves comparable performance to the supervised
approaches without any post-processing techniques.
|
We provide high-precision predictions for muon-pair and tau-pair production in photon-photon collisions by considering a complete set of one-loop-level
scattering amplitudes, i.e., electroweak (EW) corrections together with soft
and hard QED radiation. Accordingly, we present a detailed numerical discussion
with particular emphasis on the pure QED corrections as well as genuinely weak
corrections. The effects of angular and initial beam polarisation distributions
on production rates are also discussed. An improvement by a factor of two is observed with oppositely polarized photons. Our results indicate that the
one-loop EW radiative corrections enhance the Born cross section and the total
relative correction is typically about ten percent for both production
channels. It appears that the full EW corrections to $\gamma \gamma \to \ell^-
\ell^+$ are required to reach percent-level accuracy.
|
The program-over-monoid model of computation originates with Barrington's
proof that the model captures the complexity class $\mathsf{NC^1}$. Here we
make progress in understanding the subtleties of the model. First, we identify
a new tameness condition on a class of monoids that entails a natural
characterization of the regular languages recognizable by programs over monoids
from the class. Second, we prove that the class known as $\mathbf{DA}$
satisfies tameness and hence that the regular languages recognized by programs
over monoids in $\mathbf{DA}$ are precisely those recognizable in the classical
sense by morphisms from $\mathbf{QDA}$. Third, we show by contrast that the
well-studied class of monoids called $\mathbf{J}$ is not tame. Finally, we
exhibit a program-length-based hierarchy within the class of languages
recognized by programs over monoids from $\mathbf{DA}$.
|
We show that in analytic sub-Riemannian manifolds of rank 2 satisfying a
commutativity condition, spiral-like curves are not length minimizing near the
center of the spiral. The proof relies upon the delicate construction of a
competing curve.
|
We introduce a thermodynamically consistent, minimal stochastic model for
complementary logic gates built with field-effect transistors. We characterize
the performance of such gates with tools from information theory and study the
interplay between accuracy, speed, and dissipation of computations. With a few
universal building blocks, such as the NOT and NAND gates, we are able to model
arbitrary combinational and sequential logic circuits, which are modularized to
implement computing tasks. We find generically that high accuracy can be
achieved provided sufficient energy consumption and time to perform the
computation. However, for low-energy computing, accuracy and speed are coupled
in a way that depends on the device architecture and task. Our work bridges the
gap between the engineering of low dissipation digital devices and theoretical
developments in stochastic thermodynamics, and provides a platform to study
design principles for low dissipation digital devices.
|
A low-complexity frequency offset estimation algorithm based on the all-phase FFT for M-QAM is proposed. Compared with two-stage algorithms such as FFT+CZT and FFT+ZoomFFT, our algorithm lowers computational complexity by 73% and 30%, respectively, without loss of estimation accuracy.
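
As an illustration of the FFT-peak idea underlying such estimators, the sketch below implements the plain single-FFT baseline (not the all-phase variant, whose windowing differs): a 4-QAM signal is raised to the fourth power to strip the modulation, and the offset is read off the spectral peak. All parameters are hypothetical.

```python
import numpy as np

# Hypothetical illustration of FFT-based coarse frequency offset estimation
# for QAM; this is the plain-FFT baseline, not the all-phase FFT variant.
fs = 1e4                      # sample rate (Hz), assumed
f_off = 123.0                 # true frequency offset (Hz), assumed
n = 4096
t = np.arange(n) / fs
# random 4-QAM symbols with a frequency offset and a little noise
syms = (np.random.choice([1, -1], n) + 1j * np.random.choice([1, -1], n)) / np.sqrt(2)
rx = syms * np.exp(2j * np.pi * f_off * t) + 0.01 * (np.random.randn(n) + 1j * np.random.randn(n))

x4 = rx ** 4                  # 4th power wipes the QPSK-like modulation, leaves a tone at 4*f_off
spec = np.abs(np.fft.fft(x4))
freqs = np.fft.fftfreq(n, 1 / fs)
f_hat = freqs[np.argmax(spec)] / 4.0
print(f"estimated offset: {f_hat:.2f} Hz")
```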
|
While self-supervised pretraining has proven beneficial for many computer
vision tasks, it requires expensive and lengthy computation, large amounts of
data, and is sensitive to data augmentation. Prior work demonstrates that
models pretrained on datasets dissimilar to their target data, such as chest
X-ray models trained on ImageNet, underperform models trained from scratch.
Users who lack the resources to pretrain must use existing models with lower
performance. This paper explores Hierarchical PreTraining (HPT), which
decreases convergence time and improves accuracy by initializing the
pretraining process with an existing pretrained model. Through experimentation
on 16 diverse vision datasets, we show HPT converges up to 80x faster, improves
accuracy across tasks, and improves the robustness of the self-supervised
pretraining process to changes in the image augmentation policy or amount of
pretraining data. Taken together, HPT provides a simple framework for obtaining
better pretrained representations with fewer computational resources.
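
A minimal sketch of the HPT initialization step, assuming a torchvision ResNet-50 stands in for the existing pretrained model; the checkpoint name and the downstream self-supervised objective are illustrative, not the paper's exact setup.

```python
import torchvision

# Hypothetical sketch: start self-supervised pretraining from an existing
# generalist checkpoint instead of a random initialization (the HPT idea).
base = torchvision.models.resnet50(weights="IMAGENET1K_V2")  # existing pretrained model
model = torchvision.models.resnet50(weights=None)
model.load_state_dict(base.state_dict())                     # hierarchical initialization

# ...then continue with the usual self-supervised objective (e.g. SimCLR or
# MoCo) on the target-domain data, starting from these weights.
```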
|
Currently, 1 in 54 children has been diagnosed with Autism Spectrum Disorder (ASD), a rate 178% higher than in 2000. An early diagnosis
and treatment can significantly increase the chances of going off the spectrum
and making a full recovery. With a multitude of physical and behavioral tests
for neurological and communication skills, diagnosing ASD is very complex,
subjective, time-consuming, and expensive. We hypothesize that the use of
machine learning analysis on facial features and social behavior can speed up
the diagnosis of ASD without compromising real-world performance. We propose to
develop a hybrid architecture using both categorical data and image data to
automate traditional ASD pre-screening, which makes diagnosis a quicker and
easier process. We created and tested a Logistic Regression model and a Linear
Support Vector Machine for Module 1, which classifies ADOS categorical data. A
Convolutional Neural Network and a DenseNet network are used for Module 2,
which classifies video data. Finally, we combined the best performing models, a
Linear SVM and DenseNet, using three data averaging strategies: a standard average, an average weighted by the amount of training data, and an average weighted by the number of ASD patients in the training data, thereby increasing accuracy in clinical applications. The results we obtained
support our hypothesis. Our novel architecture is able to effectively automate
ASD pre-screening with a maximum weighted accuracy of 84%.
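
The three averaging strategies can be written down directly; the sketch below combines two hypothetical module outputs, with all counts and probabilities as placeholders rather than the paper's data.

```python
import numpy as np

# Illustrative sketch of the three fusion strategies described above; the
# probabilities, sample counts, and ASD counts are hypothetical placeholders.
p_svm, p_cnn = 0.72, 0.61          # P(ASD) from the tabular SVM and the video DenseNet
n_svm, n_cnn = 700, 300            # training-set sizes per module (assumed)
a_svm, a_cnn = 350, 180            # ASD patients in each training set (assumed)

simple  = (p_svm + p_cnn) / 2                                    # standard average
by_size = (n_svm * p_svm + n_cnn * p_cnn) / (n_svm + n_cnn)      # weighted by training data
by_asd  = (a_svm * p_svm + a_cnn * p_cnn) / (a_svm + a_cnn)      # weighted by ASD patients
print(simple, by_size, by_asd)
```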
|
With Deep Neural Networks (DNNs) as powerful function approximators, Deep Reinforcement Learning (DRL) has demonstrated excellent performance on robotic control tasks. Compared to DNNs with vanilla artificial neurons, the biologically plausible Spiking Neural Network (SNN) contains a diverse population of spiking neurons, making it naturally suited to representing states with spatial and temporal information. Based on a hybrid
learning framework, where a spike actor-network infers actions from states and
a deep critic network evaluates the actor, we propose a Population-coding and
Dynamic-neurons improved Spiking Actor Network (PDSAN) for efficient state
representation from two different scales: input coding and neuronal coding. For
input coding, we apply population coding with dynamically receptive fields to
directly encode each input state component. For neuronal coding, we propose
different types of dynamic-neurons (containing 1st-order and 2nd-order neuronal
dynamics) to describe much more complex neuronal dynamics. Finally, the PDSAN
is trained in conjunction with deep critic networks using the Twin Delayed Deep
Deterministic policy gradient algorithm (TD3-PDSAN). Extensive experimental
results show that our TD3-PDSAN model achieves better performance than
state-of-the-art models on four OpenAI Gym benchmark tasks. It is an important step towards improving RL with SNNs for effective, biologically plausible computation.
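
For intuition about the input-coding side, here is a sketch of plain population coding with fixed Gaussian receptive fields (the paper's receptive fields are dynamic); all sizes and ranges are illustrative.

```python
import numpy as np

# Sketch of population coding: one scalar state component is encoded by a
# small population of neurons with Gaussian receptive fields, whose graded
# activations then feed the spiking network. Parameters are illustrative.
def population_encode(x, n_neurons=10, lo=-1.0, hi=1.0):
    centers = np.linspace(lo, hi, n_neurons)            # receptive-field centers
    width = (hi - lo) / (n_neurons - 1)                 # shared receptive-field width
    return np.exp(-0.5 * ((x - centers) / width) ** 2)  # activations in [0, 1]

print(population_encode(0.3))
```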
|
Mode-locking operation and multimode instabilities in Terahertz (THz) quantum
cascade lasers (QCLs) have been intensively investigated during the last
decade. These studies have unveiled a rich phenomenology, owing to the unique
properties of these lasers, in particular their ultrafast gain medium. Thanks
to this, in QCLs a modulation of the intracavity field intensity gives rise to
a strong modulation of the population inversion, directly affecting the laser
current. In this work we show that this property can be used to study the
real-time dynamics of multimode THz QCLs, using a self-detection technique
combined with a 60 GHz real-time oscilloscope. To demonstrate the potential of this technique we investigate a free-running 4.2 THz QCL, and observe a
self-starting periodic modulation of the laser current, producing trains of
regularly spaced, ~100 ps-long pulses. Depending on the drive current we find
two regimes of oscillation with dramatically different properties: a first
regime at the fundamental repetition rate, characterised by large amplitude and
phase noise, with coherence times of a few tens of periods; and a much more regular second-harmonic-comb regime, with typical coherence times of ~10^5 oscillation
periods. We interpret these measurements using a set of effective semiconductor
Maxwell-Bloch equations that qualitatively reproduce the fundamental features
of the laser dynamics, indicating that the observed carrier-density and optical
pulses are in antiphase, and appear as a rather shallow modulation on top of a
continuous wave background. Thanks to its simplicity and versatility, the
demonstrated technique is a powerful tool for the study of ultrafast dynamics
in THz QCLs.
|
We study the renormalization of Entanglement Entropy in holographic CFTs dual
to Lovelock gravity. It is known that the holographic EE in Lovelock gravity is
given by the Jacobson-Myers (JM) functional. As usual, due to the divergent
Weyl factor in the Fefferman-Graham expansion of the boundary metric for
Asymptotically AdS spaces, this entropy functional is infinite. By considering
the Kounterterm renormalization procedure, which utilizes extrinsic boundary
counterterms in order to renormalize the on-shell Lovelock gravity action for
AAdS spacetimes, we propose a new renormalization prescription for the
Jacobson-Myers functional. We then explicitly show the cancellation of
divergences in the EE up to next-to-leading order in the holographic radial
coordinate, for the case of spherical entangling surfaces. Using this new
renormalization prescription, we directly find the $C$-function candidates for
odd and even dimensional CFTs dual to Lovelock gravity. Our results illustrate
the notable improvement that the Kounterterm method affords over other
approaches, as it is non-perturbative and does not require that the Lovelock
theory has limiting Einstein behavior.
|
We study identification of linear systems with multiplicative noise from
multiple trajectory data. A least-squares algorithm, based on exploratory
inputs, is proposed to simultaneously estimate the parameters of the nominal
system and the covariance matrix of the multiplicative noise. The algorithm
does not need prior knowledge of the noise or of system stability, but requires only mild conditions on the inputs and relatively short trajectories. Identifiability of the noise covariance matrix is studied, showing
that there exists an equivalent class of matrices that generate the same
second-moment dynamic of system states. It is demonstrated how to obtain the
equivalent class based on estimates of the noise covariance. Asymptotic
consistency of the algorithm is verified under sufficiently exciting inputs and
system controllability conditions. Non-asymptotic estimation performance is
also analyzed under the assumption that system states and noise are bounded,
providing vanishing high-probability bounds as the number of trajectories grows
to infinity. The results are illustrated by numerical simulations.
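
A minimal sketch of the nominal least-squares step on synthetic data, with additive rather than multiplicative noise for brevity and all system matrices invented for illustration; the covariance-estimation and identifiability parts of the paper are omitted.

```python
import numpy as np

# Sketch: stack one-step transitions x_{t+1} = A x_t + B u_t (+ noise) from
# several short trajectories and solve least squares for [A B].
rng = np.random.default_rng(0)
A_true, B_true = np.array([[0.8, 0.1], [0.0, 0.9]]), np.array([[1.0], [0.5]])

Z, Y = [], []
for _ in range(50):                         # 50 short trajectories
    x = rng.standard_normal(2)
    for _ in range(10):
        u = rng.standard_normal(1)          # exploratory input
        x_next = A_true @ x + B_true @ u + 0.05 * rng.standard_normal(2)
        Z.append(np.concatenate([x, u]))    # regressor [x_t, u_t]
        Y.append(x_next)
        x = x_next

theta, *_ = np.linalg.lstsq(np.array(Z), np.array(Y), rcond=None)
A_hat, B_hat = theta.T[:, :2], theta.T[:, 2:]
print(A_hat, B_hat)
```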
|
Yield farming has been an immensely popular activity for cryptocurrency
holders since the explosion of Decentralized Finance (DeFi) in the summer of
2020. In this Systematization of Knowledge (SoK), we study a general framework
for yield farming strategies with empirical analysis. First, we summarize the
fundamentals of yield farming by focusing on the protocols and tokens used by
aggregators. We then examine the sources of yield and translate those into
three example yield farming strategies, followed by simulations of yield farming performance based on these strategies. We further compare four major yield aggregators -- Idle, Pickle, Harvest and Yearn -- in the ecosystem,
along with brief introductions of others. We systematize their strategies and
revenue models, and conduct an empirical analysis with on-chain data from
example vaults, to find a plausible connection between data anomalies and
historical events. Finally, we discuss the benefits and risks of yield
aggregators.
|
We propose a Point-Voxel DeConvolution (PVDeConv) module for 3D data autoencoders. To demonstrate its efficiency, we learn to synthesize
high-resolution point clouds of 10k points that densely describe the underlying
geometry of Computer Aided Design (CAD) models. Scanning artifacts, such as
protrusions, missing parts, smoothed edges and holes, inevitably appear in real
3D scans of fabricated CAD objects. Learning the original CAD model
construction from a 3D scan requires a ground truth to be available together
with the corresponding 3D scan of an object. To bridge this gap, we introduce a
new dedicated dataset, the CC3D, containing 50k+ pairs of CAD models and their
corresponding 3D meshes. This dataset is used to learn a convolutional
autoencoder for point clouds sampled from the 3D scan-CAD model pairs.
The challenges of this new dataset are demonstrated in comparison with other
generative point cloud sampling models trained on ShapeNet. The CC3D
autoencoder is efficient with respect to memory consumption and training time
as compared to state-of-the-art models for 3D data generation.
|
We investigate quantitative aspects of the LEF property for subgroups of the
topological full group $[[ \sigma ]]$ of a two-sided minimal subshift over a
finite alphabet, measured via the LEF growth function. We show that the LEF
growth of $[[ \sigma ]]^{\prime}$ may be bounded from above and below in terms
of the recurrence function and the complexity function of the subshift,
respectively. As an application, we construct groups of previously unseen LEF
growth types, and exhibit a continuum of finitely generated LEF groups which
may be distinguished from one another by their LEF growth.
|
Riordan arrays, denoted by pairs of generating functions (g(z), f(z)), are
infinite lower-triangular matrices that are used as combinatorial tools. In
this paper, we present Riordan and stochastic Riordan arrays that have
connections to the Fibonacci and modified Lucas numbers. Then, we present some
pseudo-involutions in the Riordan group that are based on constructions
starting with a certain generating function g(z). We also present a theorem
that shows how to construct pseudo-involutions in the Riordan group starting
with a certain generating function f(z) whose additive inverse has
compositional order 2. The theorem is then used to construct more
pseudo-involutions in the Riordan group where some arrays have connections to
the Fibonacci and modified Lucas numbers. A MATLAB algorithm for constructing
the pseudo-involutions is also given.
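
As a small illustration of the underlying construction (in Python rather than the MATLAB of the paper), the sketch below builds the leading block of a Riordan array (g(z), f(z)) from truncated power series, using entry (n, k) = [z^n] g(z) f(z)^k; the Pascal-triangle example is standard.

```python
import numpy as np

# Sketch of constructing a Riordan array from truncated power-series
# coefficients: column k holds the coefficients of g(z) * f(z)^k.
def riordan(g, f, n):
    """g, f: length-n coefficient arrays, constant term first."""
    D = np.zeros((n, n))
    col = np.array(g, dtype=float)           # g(z) * f(z)^0
    for k in range(n):
        D[:, k] = col
        col = np.convolve(col, f)[:n]        # multiply by f(z), truncate to degree n-1
    return D

g = np.ones(6)                 # 1/(1-z) = 1 + z + z^2 + ...
f = np.ones(6); f[0] = 0.0     # z/(1-z) = z + z^2 + ...
print(riordan(g, f, 6))        # the Pascal triangle (1/(1-z), z/(1-z))
```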
|
The measurement of bias in machine learning often focuses on model
performance across identity subgroups (such as man and woman) with respect to
ground-truth labels. However, these methods do not directly measure the
associations that a model may have learned, for example between labels and
identity subgroups. Further, measuring a model's bias requires a fully
annotated evaluation dataset which may not be easily available in practice. We
present an elegant mathematical solution that tackles both issues
simultaneously, using image classification as a working example. By treating a
classification model's predictions for a given image as a set of labels
analogous to a bag of words, we rank the biases that a model has learned with
respect to different identity labels. We use (man, woman) as a concrete example
of an identity label set (although this set need not be binary), and present
rankings for the labels that are most biased towards one identity or the other.
We demonstrate how the statistical properties of different association metrics
can lead to different rankings of the most "gender biased" labels, and conclude
that normalized pointwise mutual information (nPMI) is most useful in practice.
Finally, we announce an open-sourced nPMI visualization tool using TensorBoard.
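
A minimal sketch of computing nPMI from co-occurrence counts; the counts and the example label are hypothetical, not drawn from the paper's experiments.

```python
import numpy as np

# Normalized pointwise mutual information between a predicted label and an
# identity term, from hypothetical co-occurrence counts over predictions.
def npmi(n_xy, n_x, n_y, n_total):
    p_xy = n_xy / n_total
    p_x, p_y = n_x / n_total, n_y / n_total
    return np.log(p_xy / (p_x * p_y)) / -np.log(p_xy)   # in [-1, 1]

# e.g. a label co-occurring with the identity term "woman" in predictions
print(npmi(n_xy=80, n_x=100, n_y=5000, n_total=10000))
```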
|
Usually, in mechanics, we obtain the trajectory of a particle in a given
force field by solving Newton's second law with chosen initial conditions. In
contrast, through our work here, we first demonstrate how one may analyse the
behaviour of a suitably defined family of trajectories of a given mechanical
system. Such an approach leads us to develop a mechanics analog following the
well-known Raychaudhuri equation largely studied in Riemannian geometry and
general relativity. The idea of geodesic focusing, which is more familiar to a
relativist, appears to be analogous to the meeting of trajectories of a
mechanical system within a finite time. Applying our general results to the
case of simple pendula, we obtain relevant quantitative consequences.
Thereafter, we set up and perform a straightforward experiment based on a
system with two pendula. The experimental results on this system are found to
tally well with our proposed theoretical model. In summary, the simple theory,
as well as the related experiment, provides us with a way to understand the
essence of a fairly involved concept in advanced physics from an elementary
standpoint.
|
With the swift development of digital imagery, it has become essential to search for and retrieve high-resolution images easily and efficiently. Many present annotation algorithms face a major challenge, namely the variance in image representation: the high level represents image semantics while the low level describes the features. This issue is known as the semantic gap. This work uses the MPEG-7 standard to extract features from images: the color features are extracted with the Scalable Color Descriptor (SCD) and the Color Layout Descriptor (CLD), whereas the texture feature is extracted with the Edge Histogram Descriptor (EHD). Since the CLD produces a high-dimensional feature vector, it is reduced by Principal Component Analysis (PCA). The features extracted by these three descriptors are passed to the classifiers (Naive Bayes and Decision Tree) for training, and the trained classifiers then annotate the query image. The TU Darmstadt image bank was used in this study. The results of the tests and a comparative performance evaluation indicate better precision and execution time for Naive Bayes classification than for Decision Tree classification.
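
A sketch of the classification stage, assuming the MPEG-7 descriptors have already been extracted into a feature matrix; the data here is a random placeholder, not the TU Darmstadt images, and scikit-learn stands in for whatever toolkit the authors used.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Hypothetical feature matrix: concatenated SCD + CLD + EHD descriptors.
X = np.random.rand(200, 120)
y = np.random.randint(0, 5, 200)    # placeholder annotation labels

# PCA shrinks the CLD-heavy feature vector before Naive Bayes training.
clf = make_pipeline(PCA(n_components=20), GaussianNB())
clf.fit(X, y)
print(clf.predict(X[:3]))           # annotate query images
```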
|
According to the O'Nan--Scott Theorem, a finite primitive permutation group
either preserves a structure of one of three types (affine space, Cartesian
lattice, or diagonal semilattice), or is almost simple. However, diagonal
groups are a much larger class than those occurring in this theorem. For any
positive integer $m$ and group $G$ (finite or infinite), there is a diagonal
semilattice, a sub-semilattice of the lattice of partitions of a set $\Omega$,
whose automorphism group is the corresponding diagonal group. Moreover, there
is a graph (the diagonal graph), bearing much the same relation to the diagonal
semilattice and group as the Hamming graph does to the Cartesian lattice and
the wreath product of symmetric groups.
Our purpose here, after a brief introduction to this semilattice and graph,
is to establish some properties of this graph. The diagonal graph
$\Gamma_D(G,m)$ is a Cayley graph for the group $G^m$, and so is
vertex-transitive. We establish its clique number in general and its chromatic
number in most cases, with a conjecture about the chromatic number in the
remaining cases. We compute the spectrum of the adjacency matrix of the graph,
using a calculation of the M\"obius function of the diagonal semilattice. We
also compute some other graph parameters and symmetry properties of the graph.
We believe that this family of graphs will play a significant role in
algebraic graph theory.
|
Many Riordan arrays play a significant role in algebraic combinatorics. We
explore the inversion of Riordan arrays in this context. We give a general
construct for the inversion of a Riordan array, and study this in the case of
various subgroups of the Riordan group. For instance, we show that the
inversion of an ordinary Bell matrix is an exponential Riordan array in the
associated subgroup. Examples from combinatorics and algebraic combinatorics
illustrate the usefulness of such inversions. We end with a brief look at the
inversion of exponential Riordan arrays. A final example places Airey's
convergent factor in the context of a simple exponential Riordan array.
|
Unveiling point defect concentrations in transition metal oxide thin films is
essential to understand and eventually control their functional properties,
employed in an increasing number of applications and devices. Despite this
unquestionable interest, there is a lack of available experimental techniques
able to estimate the defect chemistry and equilibrium constants in such oxides
at intermediate-to-low temperatures. In this study, the defect chemistry of a
relevant material such as La$_{1-x}$Sr$_x$FeO$_{3-\delta}$ (LSF) with x = 0.2, 0.4, and 0.5 (LSF20, LSF40, and LSF50, respectively) is obtained by using a novel in situ
spectroscopic ellipsometry approach applied to thin films. Through this
technique, the concentration of holes in LSF is correlated to measured optical
properties, and its evolution with temperature and oxygen partial pressure is
determined. In this way, a systematic description of defect chemistry in LSF
thin films in the temperature range from 350 °C to 500 °C is obtained for the
first time, which represents a step forward in the understanding of LSF20,
LSF40 and LSF50 for emerging low temperature applications.
|
We use a theorem of P. Berger and D. Turaev to construct an example of a
Finsler geodesic flow on the 2-torus with a transverse section, such that its
Poincar\'e return map has positive metric entropy. The Finsler metric
generating the flow can be chosen to be arbitrarily $C^\infty$-close to a flat
metric.
|
In this paper, we characterize the performance of a three-dimensional (3D)
two-hop cellular network in which terrestrial base stations (BSs) coexist with
unmanned aerial vehicles (UAVs) to serve a set of ground user equipment (UE).
In particular, a UE connects either directly to its serving terrestrial BS by
an access link or connects first to its serving UAV which is then wirelessly
backhauled to a terrestrial BS (joint access and backhaul). We consider
realistic antenna radiation patterns for both BSs and UAVs using practical
models developed by the third generation partnership project (3GPP). We assume
a probabilistic channel model for the air-to-ground transmission, which
incorporates both line-of-sight (LoS) and non-line-of-sight (NLoS) links.
Assuming the max-power association policy, we study the performance of the
network in both amplify-and-forward (AF) and decode-and-forward (DF) relaying
protocols. Using tools from stochastic geometry, we analyze the joint
distribution of distance and zenith angle of the closest (and serving) UAV to
the origin in a 3D setting. Further, we identify and extensively study key
mathematical constructs as the building blocks of characterizing the received
signal-to-interference-plus-noise ratio (SINR) distribution. Using these
results, we obtain exact mathematical expressions for the coverage probability
in both AF and DF relaying protocols. Furthermore, considering the fact that
backhaul links could be quite weak because of the downtilted antennas at the
BSs, we propose and analyze the addition of a directional uptilted antenna at
the BS that is solely used for backhaul purposes. The superiority of having
directional antennas with wirelessly backhauled UAVs is further demonstrated
via simulation.
|
Edge computing has become one of the key enablers for ultra-reliable and
low-latency communications in the industrial Internet of Things in the fifth
generation communication systems, and is also a promising technology in the
future sixth generation communication systems. In this work, we consider the
application of edge computing to smart factories for mission-critical task
offloading through wireless links. In such scenarios, although high end-to-end
delays from the generation to completion of tasks happen with low probability,
they may incur severe casualties and property loss, and should be treated seriously. Inspired by the risk management theory widely used in finance, we
adopt the Conditional Value at Risk to capture the tail of the delay
distribution. An upper bound of the Conditional Value at Risk is derived
through analysis of the queues both at the devices and the edge computing
servers. We aim to find the optimal offloading policy, taking into consideration both the average and the worst-case delay performance of the
system. Given that the formulated optimization problem is a non-convex mixed
integer non-linear programming problem, a decomposition into sub-problems is
performed and a two-stage heuristic algorithm is proposed. Simulation results
validate our analysis and indicate that the proposed algorithm can reduce the
risk in both the queuing and end-to-end delay.
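
The Conditional Value at Risk used here has a simple empirical form: the mean of the delays beyond the Value at Risk (the alpha-quantile). Below is a sketch on synthetic delay samples, with the distribution and alpha chosen only for illustration.

```python
import numpy as np

# Empirical CVaR of a delay sample: the mean of the worst (1 - alpha)
# fraction of delays. The samples below are synthetic placeholders.
def cvar(delays, alpha=0.95):
    var = np.quantile(delays, alpha)      # Value at Risk (alpha-quantile)
    return delays[delays >= var].mean()   # average of the tail beyond VaR

delays = np.random.exponential(scale=2.0, size=100000)  # hypothetical delays (ms)
print(f"VaR95 = {np.quantile(delays, 0.95):.2f}, CVaR95 = {cvar(delays):.2f}")
```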
|
For many years, the image databases used in steganalysis have been relatively
small, i.e. about ten thousand images. This limits the diversity of images and
thus prevents large-scale analysis of steganalysis algorithms.
In this paper, we describe a large JPEG database composed of 2 million colour
and grey-scale images. This database, named LSSD for Large Scale Steganalysis
Database, was obtained thanks to the intensive use of "controlled" development procedures. LSSD has been made publicly available, and we hope it can be used by the steganalysis community for large-scale experiments.
We introduce the pipeline used for building various image database versions.
We detail the general methodology that can be used to redevelop the entire
database and increase the diversity even further. We also discuss the computational and storage costs of developing the images.
|
Machine-Learning-as-a-Service providers expose machine learning (ML) models
through application programming interfaces (APIs) to developers. Recent work
has shown that attackers can exploit these APIs to extract good approximations
of such ML models, by querying them with samples of their choosing. We propose
VarDetect, a stateful monitor that tracks the distribution of queries made by
users of such a service, to detect model extraction attacks. Harnessing the
latent distributions learned by a modified variational autoencoder, VarDetect
robustly separates three types of attacker samples from benign samples, and
successfully raises an alarm for each. Further, with VarDetect deployed as an
automated defense mechanism, the extracted substitute models are found to
exhibit poor performance and transferability, as intended. Finally, we
demonstrate that even adaptive attackers with prior knowledge of the deployment
of VarDetect are detected by it.
|
Fog computing can be used to offload computationally intensive tasks from
battery powered Internet of Things (IoT) devices. Although it reduces energy
required for computations in an IoT device, it uses energy for communications
with the fog. This paper analyzes when usage of fog computing is more energy
efficient than local computing. Detailed energy consumption models are built in
both scenarios, with a focus on the relation between energy consumption
and distortion introduced by a Power Amplifier (PA). Numerical results show
that task offloading to a fog is the most energy efficient for short, wideband
links.
|
The geometric properties of sigma models with target space a Jacobi manifold
are investigated. In their basic formulation, these are topological field
theories - recently introduced by the authors - which share and generalise
relevant features of Poisson sigma models, such as gauge invariance under
diffeomorphisms and finite dimension of the reduced phase space. After
reviewing the main novelties and peculiarities of these models, we perform a
detailed analysis of constraints and ensuing gauge symmetries in the
Hamiltonian approach. Contact manifolds as well as locally conformal symplectic
manifolds are discussed, as main instances of Jacobi manifolds.
|
Using symmetrization techniques, we show that, for every $N \geq 2$, any
second eigenfunction of the fractional Laplacian in the $N$-dimensional unit
ball with homogeneous Dirichlet conditions is nonradial, and hence its nodal
set is an equatorial section of the ball.
|
In March 2020 the United Kingdom (UK) entered a nationwide lockdown period
due to the Covid-19 pandemic. As a result, levels of nitrogen dioxide (NO2) in
the atmosphere dropped. In this work, we use 550,134 NO2 data points from 237
stations in the UK to build a spatiotemporal Gaussian process capable of
predicting NO2 levels across the entire UK. We integrate several covariate
datasets to enhance the model's ability to capture the complex spatiotemporal
dynamics of NO2. Our numerical analyses show that, within two weeks of a UK
lockdown being imposed, UK NO2 levels dropped by 36.8%. Further, we show that as a direct result of the lockdown, NO2 levels were 29-38% lower than they would
have been had no lockdown occurred. In accompaniment to these numerical
results, we provide a software framework that allows practitioners to easily
and efficiently fit similar models.
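
A toy sketch of the modeling idea, fitting a Gaussian process on (easting, northing, day) inputs with synthetic data; the paper's actual framework uses more covariates and scalable inference, so this is illustrative only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic placeholder data: (easting km, northing km, day) inputs and
# fake NO2 readings; scales and kernel lengths are assumptions.
X = np.random.rand(300, 3) * [500.0, 500.0, 60.0]
y = 20 + 5 * np.sin(X[:, 2] / 10) + np.random.randn(300)

kernel = RBF(length_scale=[50.0, 50.0, 7.0]) + WhiteKernel(1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
mu, sd = gp.predict(np.array([[250.0, 250.0, 30.0]]), return_std=True)
print(mu, sd)   # predicted NO2 level with uncertainty at a new space-time point
```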
|
Locally rotationally symmetric Bianchi type-I viscous and non-viscous
cosmological models are explored in general relativity (GR) and in f(R,T)
gravity. Solutions are obtained by assuming that the expansion scalar is
proportional to the shear scalar which yields a constant value for the
deceleration parameter (q=2). Constraints are obtained by requiring the
physical viability of the solutions. A comparison is made between the viscous
and non-viscous models, and between the models in GR and in f(R,T) gravity. The
metric potentials remain the same in GR and in f(R,T) gravity. Consequently,
the geometrical behavior of the f(R,T) gravity models remains the same as the
models in GR. It is found that f(R,T) gravity or bulk viscosity does not affect
the behavior of effective matter which acts as a stiff fluid in all models. The
individual fluids have very rich behavior. In one of the viscous models, the
matter either follows a semi-realistic EoS or exhibits a transition from stiff
matter to phantom, depending on the values of the parameter. In another model,
the matter describes radiation, dust, quintessence, phantom, and the
cosmological constant for different values of the parameter. In general, f(R,T)
gravity diminishes the effect of bulk viscosity.
|
We propose TubeR: a simple solution for spatio-temporal video action
detection. Different from existing methods that depend on either an off-line
actor detector or hand-designed actor-positional hypotheses like proposals or
anchors, we propose to directly detect an action tubelet in a video by
simultaneously performing action localization and recognition from a single
representation. TubeR learns a set of tubelet-queries and utilizes a
tubelet-attention module to model the dynamic spatio-temporal nature of a video
clip, which effectively reinforces the model capacity compared to using
actor-positional hypotheses in the spatio-temporal space. For videos containing
transitional states or scene changes, we propose a context-aware classification
head to utilize short-term and long-term context to strengthen action
classification, and an action switch regression head for detecting the precise
temporal action extent. TubeR directly produces action tubelets with variable
lengths and even maintains good results for long video clips. TubeR outperforms
the previous state-of-the-art on commonly used action detection datasets AVA,
UCF101-24 and JHMDB51-21.
|
We prove the existence of an extremal function in the
Hardy-Littlewood-Sobolev inequality for the energy associated with a stable operator. To this aim, we obtain a concentration-compactness principle for
stable processes in $\mathbb{R}^N$.
|
Smooth interfaces of topological systems are known to host massive surface
states along with the topologically protected chiral one. We show that in Weyl
semimetals these massive states, along with the chiral Fermi arc, strongly
alter the form of the Fermi-arc plasmon. Most saliently, they yield further collective plasmonic modes that are absent in conventional interfaces. The
plasmon modes are completely anisotropic as a consequence of the underlying
anisotropy in the surface model and expected to have a clear-cut experimental
signature, e.g. in electron-energy loss spectroscopy.
|
A two-class Processor-Sharing queue with one impatient class is studied.
Local exponential decay rates for its stationary distribution (N, M) are
established in the heavy traffic regime where the arrival rate of impatient
customers grows proportionally to a large factor A. This regime is
characterized by two time-scales, so that no general Large Deviations result is
applicable. In the framework of singular perturbation methods, we instead
assume that an asymptotic expansion of the solution of associated Kolmogorov
equations exists for large A and derive it in the form P(N = Ax, M = Ay) ~
g(x,y)/A exp(-A H(x,y)) for x > 0 and y > 0 with explicit functions g and H.
This result is then applied to the model of mobile networks proposed in a previous work, which accounts for the spatial movement of users. We give further
evidence of an unusual growth behavior in heavy traffic, in that the stationary mean queue lengths E(N') and E(M') of each customer class grow as E(N') ~ E(M') ~ -log(1-rho) as the system load rho tends to 1, instead of the usual 1/(1-rho) growth behavior.
|
Loosely bound van der Waals dimers of lanthanide atoms, as might be obtained
in ultracold atom experiments, are investigated. These molecules are known to
exhibit a degree of quantum chaos, due to the strong anisotropic mixing of
their angular spin and rotation degrees of freedom. Within a model of these
molecules, we identify different realms of this anisotropic mixing, depending
on whether the spin, the rotation, or both, are significantly mixed by the
anisotropy. These realms are in turn generally correlated with the resulting
magnetic moments of the states.
|
We have investigated the structural, magnetic and dielectric properties of
Pb-based langasite compound Pb$_3$TeMn$_3$P$_2$O$_{14}$ both experimentally and
theoretically in the light of metal-oxygen covalency, and the consequent
generation of multiferroicity. It is known that large covalency between Pb 6$p$ and O 2$p$ plays an instrumental role in the stereochemical lone-pair activity of Pb. The same happens here, but a subtle structural phase transition above room temperature changes the degree of such lone-pair activity, and the system
becomes ferroelectric below 310 K. Interestingly, this structural change also
modulates the charge densities on different constituent atoms and consequently
the overall magnetic response of the system, while keeping the global paramagnetic behavior of the compound intact. This single origin of modulation in polarity and paramagnetism inherently connects the two functionalities, and the system exhibits multiferroicity at room temperature.
|
We present a series of models of three-dimensional rotation-symmetric fragile
topological insulators in class AI (time-reversal symmetric and spin-orbit-free
systems), which have gapless surface states protected by time-reversal ($T$)
and $n$-fold rotation ($C_n$) symmetries ($n=2,4,6$). Our models are
generalizations of Fu's model of a spinless topological crystalline insulator,
in which orbital degrees of freedom play the role of pseudo-spins. We consider
a minimal surface Hamiltonian with $C_n$ symmetry in class AI and discuss
possible symmetry-protected gapless surface states, i.e., a quadratic band
touching and multiple Dirac cones with linear dispersion. We characterize
the topological structure of bulk wave functions in terms of two kinds of
topological invariants obtained from Wilson loops: $\mathbb{Z}_2$ invariants
protected by $C_n$ ($n=4,6$) and time-reversal symmetries, and
$C_2T$-symmetry-protected $\mathbb{Z}$ invariants (the Euler class) when the
number of occupied bands is two. Accordingly, our models realize two kinds of
fragile topological insulators. One is a fragile $\mathbb{Z}$ topological
insulator whose only nontrivial topological index is the Euler class that
specifies the number of surface Dirac cones. The other is a fragile
$\mathbb{Z}_2$ topological insulator having gapless surface states with either
a quadratic band touching or four (six) Dirac cones, which are protected by
time-reversal and $C_4$ ($C_6$) symmetries. Finally, we discuss the instability
of gapless surface states against the addition of $s$-orbital bands and
demonstrate that surface states are gapped out through hybridization with
surface-localized $s$-orbital bands.
|
Complexity of products, volatility in global markets, and the increasingly
rapid pace of innovations may make it difficult to know how to approach
challenging situations in mechatronic design and production. Technical Debt
(TD) is a metaphor that describes the practical bargain of exchanging
short-term benefits for long-term negative consequences. Oftentimes, the scope
and impact of TD, as well as the cost of corrective measures, are
underestimated. Especially for mechatronic teams in the mechanical, electrical,
and software disciplines, the adverse interdisciplinary ripple effects of TD
incidents are passed on throughout the life cycle. The analysis of the first
comprehensive survey showed that not only do the TD types differ in
cross-disciplinary comparisons, but different characteristics can also be
observed depending on whether a discipline is studied in isolation or in
combination with others. To validate the study results and to report on a
general consciousness of TD in the disciplines, this follow-up study involves
15 of the 50 experts of the predecessor study and reflects the frequency and
impact of technical debt in industrial experts' daily work using a
questionnaire. These experts rate 14 TD types, 47 TD causes, and 33 TD symptoms
in terms of their frequency and impact. Detailed analyses reveal consistent
results for the most frequent TD types and causes, yet they show divergent
characteristics in a profound exploration of discipline-specific phenomena.
Thus, this study has the potential to set the foundations for future automated
TD identification analyses in mechatronics.
|
Defect detection at commit check-in time prevents the introduction of defects
into software systems. Current defect detection approaches rely on metric-based
models which are not very accurate and whose results are not directly useful
for developers. We propose a method to detect bug-inducing commits by comparing
the incoming changes with all past commits in the project, considering both
those that introduced defects and those that did not. Our method considers
individual changes in the commit separately, at the method-level granularity.
Doing so helps developers as they are informed of specific methods that need
further attention instead of being told that the entire commit is problematic.
Our approach represents source code as abstract syntax trees and uses tree
kernels to estimate the similarity of the code with previous commits. We
experiment with subtree kernels (STK), subset tree kernels (SSTK), or partial
tree kernels (PTK). An incoming change is then classified using a K-NN
classifier on the past changes. We evaluate our approach on the BigCloneBench
benchmark and on the Technical Debt dataset, using the NiCad clone detector as
the baseline. Our experiments with the BigCloneBench benchmark show that the
tree kernel approach can detect clones with a comparable MAP to that of NiCad.
Also, on defect detection with the Technical Debt dataset, tree kernels are at least as effective as NiCad, with MRR, F-score, and accuracy of 0.87, 0.80, and 0.82, respectively.
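
The final K-NN step can be sketched independently of the kernel computation; below, the tree-kernel similarities and labels are random placeholders standing in for real STK/SSTK/PTK scores over past commits.

```python
import numpy as np

# Sketch of the classification step, assuming tree-kernel similarities
# between the incoming change and all past changes are already computed.
def knn_bug_inducing(sims, labels, k=5):
    """sims: similarity of the incoming change to each past change;
    labels: 1 if that past change was bug-inducing, else 0."""
    top = np.argsort(sims)[-k:]               # k most similar past changes
    return int(labels[top].mean() >= 0.5)     # majority vote

sims = np.random.rand(1000)                   # placeholder kernel similarities
labels = np.random.randint(0, 2, 1000)        # placeholder bug-inducing flags
print(knn_bug_inducing(sims, labels))
```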
|
Particle-In-Cell codes are widely used for plasma physics simulations. It is
often the case that particles within a computational cell need to be split to
improve the statistics or, in the case of non-uniform meshes, to avoid the
development of fictitious self-forces. Existing particle splitting methods are
largely empirical and their accuracy in preserving the distribution function
has not been evaluated in a quantitative way. Here we present a new method
specifically designed for codes using adaptive mesh refinement. Although we
point out that an exact, distribution function preserving method does exist, it
requires a large number of split particles and its practical use is limited. We
derive instead a method that minimizes the cost function representing the
distance between the assignment function of the original particle and that of
the sum of split particles. Depending on the interpolation degree and the
dimension of the problem, we provide tabulated results for the weight and
position of the split particles. This strategy represents no overhead in
computing time, and for a large enough number of split particles it asymptotically tends to the exact solution.
|
Lettericity is a graph parameter introduced by Petkov\v{s}ek in 2002 in order
to study well-quasi-orderability under the induced subgraph relation. In the
world of permutations, geometric griddability was independently introduced in
2013 by Albert, Atkinson, Bouvel, Ru\v{s}kuc and Vatter, partly as an
enumerative tool. Despite their independent origins, those two notions share a
connection: they highlight very similar structural features in their respective
objects. The fact that those structural features arose separately on two
different occasions makes them very interesting to study in their own right.
In the present paper, we explore the notion of lettericity through the lens
of "minimal obstructions", i.e., minimal classes of graphs of unbounded
lettericity, and identify an infinite collection of such classes. We also
discover an intriguing structural hierarchy that arises in the study of
lettericity and that of griddability.
|
Deuterated molecules are good tracers of the evolutionary stage of
star-forming cores. During the star formation process, deuterated molecules are
expected to be enhanced in cold, dense pre-stellar cores and to deplete after
protostellar birth. In this paper we study the deuteration fraction of
formaldehyde in high-mass star-forming cores at different evolutionary stages
to investigate whether the deuteration fraction of formaldehyde can be used as
an evolutionary tracer. Using the APEX SEPIA Band 5 receiver, we extended our
pilot study of the $J$=3$\rightarrow$2 rotational lines of HDCO and D$_2$CO to
eleven high-mass star-forming regions that host objects at different
evolutionary stages. High-resolution follow-up observations of eight objects in
ALMA Band 6 were performed to reveal the size of the H$_2$CO emission and to
give an estimate of the deuteration fractions HDCO/H$_2$CO and D$_2$CO/HDCO at
scales of $\sim$6" (0.04-0.15 pc at the distance of our targets). Our
observations show that singly- and doubly-deuterated H$_2$CO are detected toward high-mass protostellar objects (HMPOs) and ultracompact HII regions (UCHII regions). The deuteration fraction of H$_2$CO is also found to decrease by an order of magnitude from the earlier HMPO phases to the latest evolutionary stage (UCHII), from $\sim$0.13 to $\sim$0.01. We have not detected
HDCO and D$_2$CO emission from the youngest sources (high-mass starless cores,
HMSCs). Our extended study supports the results of the previous pilot study:
the deuteration fraction of formaldehyde decreases with evolutionary stage, but
higher sensitivity observations are needed to provide more stringent
constraints on the D/H ratio during the HMSC phase. The calculated upper limits
for the HMSC sources are high, so the trend between HMSC and HMPO phases cannot
be constrained.
|