Because the spatial precision of numerical computation is limited by mesh density, improving precision always leads to a decrease in computing efficiency. To address this limitation, we propose a novel method for boosting the mesh density of numerical computation within a 2D domain.
Based on the low mesh-density stress field in the 2D plane strain problem
computed by the finite element method, this method utilizes a deep neural
network named SuperMeshingNet to learn the non-linear mapping from low
mesh-density to high mesh-density stress fields, improving the accuracy and efficiency of numerical computation simultaneously. We adopt residual dense blocks in our mesh-density boosting model, SuperMeshingNet, to extract abundant local features and enhance the prediction capacity of the
model. Experimental results show that the SuperMeshingNet proposed in this work
can effectively boost the spatial resolution of the stress field at multiple scaling factors: 2X, 4X, and 8X. Compared to the results of the finite element method, the stress field predicted by SuperMeshingNet has an error of only 0.54%, which is within the acceptable range for stress field estimation, and SuperMeshingNet also predicts the maximum stress value without significant accuracy loss. We publicly share our work, with full implementation details, at
https://github.com/zhenguonie/2021_SuperMeshing_2D_Plane_Strain.
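As an illustration of the residual dense blocks mentioned above, here is a minimal PyTorch sketch; the channel counts, growth rate, and layer count are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Minimal residual dense block: densely connected convs,
    local feature fusion (1x1 conv), and a local residual connection."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
        # local feature fusion back to the input channel count
        self.fusion = nn.Conv2d(channels + num_layers * growth, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return x + self.fusion(torch.cat(features, dim=1))  # local residual learning

# Example: a low mesh-density field encoded as a (batch, channels, H, W) tensor
block = ResidualDenseBlock()
out = block(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```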
|
We propose a segmentation-based bounding box generation method for
omnidirectional pedestrian detection that enables detectors to tightly fit
bounding boxes to pedestrians without requiring omnidirectional images for training. Due
to the wide angle of view, omnidirectional cameras are more cost-effective than
standard cameras and hence suitable for large-scale monitoring. The problem of
using omnidirectional cameras for pedestrian detection is that the performance
of standard pedestrian detectors is likely to be substantially degraded because
pedestrians' appearance in omnidirectional images may be rotated to any angle.
Existing methods mitigate this issue by transforming images during inference.
However, the transformation substantially degrades the detection accuracy and
speed. A recently proposed method obviates the transformation by training
detectors with omnidirectional images, which instead incurs huge annotation
costs. To avoid both the transformation and the annotation effort, we leverage an
existing large-scale object detection dataset. We train a detector with rotated
images and tightly fitted bounding box annotations generated from the
segmentation annotations in the dataset, resulting in detecting pedestrians in
omnidirectional images with tightly fitted bounding boxes. We also develop
pseudo-fisheye distortion augmentation, which further enhances the performance.
Extensive analysis shows that our detector successfully fits bounding boxes to
pedestrians and demonstrates substantial performance improvement.
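To illustrate the core idea of generating tightly fitted bounding boxes from segmentation annotations of rotated images, here is a minimal sketch; the toy binary mask and rotation angle are placeholders for a real dataset sample.

```python
import numpy as np
from scipy.ndimage import rotate

def rotated_tight_bbox(mask: np.ndarray, angle_deg: float):
    """Rotate a binary segmentation mask and return the tight
    axis-aligned bounding box (x_min, y_min, x_max, y_max)."""
    rotated = rotate(mask.astype(float), angle_deg, reshape=True, order=0) > 0.5
    ys, xs = np.nonzero(rotated)
    if len(xs) == 0:
        return None  # empty mask after rotation
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy example: a 5x3 "person" blob inside a 20x20 image, rotated by 45 degrees
mask = np.zeros((20, 20), dtype=np.uint8)
mask[5:10, 8:11] = 1
print(rotated_tight_bbox(mask, 45.0))
```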
|
Sentinel-1 is a synthetic aperture radar (SAR) platform with an operational
mode called extra wide (EW) that allows large ocean regions to be observed. A major issue with EW images is that the cross-polarized HV and VH
channels have prominent additive noise patterns relative to low backscatter
intensity, which disrupts tasks that require manual or automated
interpretation. The European Space Agency (ESA) provides a method for removing
the additive noise pattern by means of lookup tables, but applying them
directly produces unsatisfactory results because characteristics of the noise
still remain. Furthermore, evidence suggests that the magnitude of the additive
noise dynamically depends on factors that are not considered by the ESA
estimated noise field.
To address these issues, we propose a quadratic objective function to model the mis-scaling of the provided noise field on an image. We consider a linear
denoising model that re-scales the noise field for each subswath, whose
parameters are found from a least-squares solution over the objective function.
This method greatly reduces the presence of additive noise while not requiring
a set of training images, is robust to heterogeneity in images, dynamically
estimates parameters for each image, and finds parameters using a closed-form
solution.
Two experiments were performed to validate the proposed method. The first
experiment simulated noise removal on a set of RADARSAT-2 images with noise
fields artificially imposed on them. The second experiment conducted noise
removal on a set of Sentinel-1 images taken over the five oceans. Afterwards,
the quality of the noise removal was evaluated based on the appearance of open water. The two experiments indicate that the proposed method marks an
improvement both visually and through numerical measures.
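A minimal numerical sketch of the re-scaling idea, not the authors' exact objective: for each subswath, a scale for the provided noise field is found in closed form by least squares and then subtracted. The array names, synthetic data, and the simple intensity-versus-noise fit are illustrative assumptions.

```python
import numpy as np

def rescale_noise_field(intensity: np.ndarray, noise: np.ndarray) -> float:
    """Closed-form least-squares scale for one subswath:
    fit intensity ~ offset + scale * noise and return the scale."""
    x, y = noise.ravel(), intensity.ravel()
    A = np.column_stack([np.ones_like(x), x])
    (offset, scale), *_ = np.linalg.lstsq(A, y, rcond=None)
    return scale

def remove_additive_noise(intensity, noise, subswath_ids):
    """Subtract a per-subswath re-scaled noise field from the image."""
    out = intensity.astype(float).copy()
    for sid in np.unique(subswath_ids):
        m = subswath_ids == sid
        out[m] -= rescale_noise_field(intensity[m], noise[m]) * noise[m]
    return out

# Toy example with synthetic data: 5 subswaths arranged as vertical bands
rng = np.random.default_rng(0)
noise = rng.gamma(2.0, 1.0, size=(100, 100))
subswaths = np.repeat(np.arange(5), 20)[None, :] * np.ones((100, 1), dtype=int)
intensity = 0.3 + 1.7 * noise + 0.05 * rng.normal(size=(100, 100))
clean = remove_additive_noise(intensity, noise, subswaths)
```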
|
We present a new overview of the life of very massive stars (VMS) in terms of
neutrino emission from thermal processes: pair annihilation, plasmon decay,
photoneutrino process, bremsstrahlung and recombination processes in burning
stages of selected VMS models. We use the realistic conditions of temperature,
density, electron fraction, and nuclear isotopic composition of the VMS.
Results are presented for a set of rotating progenitor models with masses of 150, 200, and 300 M$_\odot$ at Z=0.002 and 500 M$_\odot$ at Z=0.006, which are expected to explode as pair instability supernovae at the end of their lives, except for the 300 M$_\odot$ model, which would end up as a black hole. It is found that for VMS, thermal neutrino emission occurs as early as the end of the hydrogen burning stage due to the high initial temperature and density of these stars. We
calculate the total neutrino emissivity, $Q_\nu$, and luminosity, $L_\nu$, using the structure profile of each burning stage of the models, and observe the contribution of the photoneutrino process at the early burning stages (H and He) and of pair annihilation at the advanced stages. Pair annihilation and photoneutrino
processes are the most dominant neutrino energy loss mechanisms throughout the
evolutionary track of the VMS. At the O-burning stage, the neutrino luminosity, $\sim 10^{47-48}$ erg/s depending on the initial mass and metallicity, is slightly higher than that of massive stars. This could
shed light on the possibility of using detection of neutrinos to locate the
candidates for pair instability supernova in our local universe.
|
Noisy labels are very common in deep supervised learning. Although many studies aim to improve the robustness of deep training to noisy labels, few works focus on theoretically explaining the training behavior of learning with noisily labeled data, which is fundamental to understanding its generalization. In this draft, we study two such phenomena, clean data first and phase transition, and explain them from a theoretical viewpoint.
Specifically, we first show that during the first training epoch, the examples with clean labels are learned first. We then show that after the clean-data learning stage, continuing to train the model can further improve the testing error when the rate of corrupted class labels is smaller than a certain threshold; otherwise, extensive training could lead to an increasing testing error.
|
This study uses the semantic brand score, a novel measure of brand importance
in big textual data, to forecast elections based on online news. About 35,000
online news articles were transformed into networks of co-occurring words and
analyzed by combining methods and tools from social network analysis and text
mining. Forecasts made for four voting events in Italy provided consistent
results across different voting systems: a general election, a referendum, and
a municipal election in two rounds. This work contributes to the research on
electoral forecasting by focusing on predictions based on online big data; it
offers new perspectives regarding the textual analysis of online news through a
methodology which is relatively fast and easy to apply. This study also
suggests the existence of a link between the brand importance of political
candidates and parties and electoral results.
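As an illustration of the underlying pipeline described above (turning news text into a network of co-occurring words and computing network measures), here is a minimal sketch. The semantic brand score itself combines several components; the toy corpus and the centrality measures shown here are only rough stand-ins for illustration.

```python
import networkx as nx

def cooccurrence_graph(documents, window=3):
    """Build an undirected word co-occurrence network: words within a sliding
    window are connected, with edge weights counting co-occurrences."""
    g = nx.Graph()
    for doc in documents:
        tokens = doc.lower().split()
        for i in range(len(tokens)):
            for j in range(i + 1, min(i + window, len(tokens))):
                a, b = tokens[i], tokens[j]
                if a == b:
                    continue
                w = g.get_edge_data(a, b, {"weight": 0})["weight"]
                g.add_edge(a, b, weight=w + 1)
    return g

# Toy news snippets; "alpha" stands in for a candidate or party name.
news = [
    "candidate alpha promises tax reform",
    "party beta criticizes tax plan of candidate alpha",
]
g = cooccurrence_graph(news)
# Rough importance proxies for a brand: weighted degree and betweenness centrality
print(g.degree("alpha", weight="weight"), nx.betweenness_centrality(g)["alpha"])
```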
|
We study minimal $\mathbb{Z}^d$-Cantor systems and the relationship between
their speedups, their collections of invariant Borel measures, their associated
unital dimension groups, and their orbit equivalence classes. In the particular
case of minimal $\mathbb{Z}^d$-odometers, we show that their bounded speedups
must again be odometers but, contrary to the 1-dimensional case, they need not
be conjugate, or even isomorphic, to the original.
|
The non-Hermitian skin effect (NHSE) in non-Hermitian lattice systems depicts
the exponential localization of eigenstates at a system's boundaries. It has led
to a number of counter-intuitive phenomena and challenged our understanding of
bulk-boundary correspondence in topological systems. This work aims to
investigate how the NHSE localization and topological localization of in-gap
edge states compete with each other, with several representative static and
periodically driven 1D models, whose topological properties are protected by
different symmetries. The emerging insight is that at critical system
parameters, even topologically protected edge states can be perfectly
delocalized. In particular, it is discovered that this intriguing
delocalization occurs if the real spectrum of the system's edge states falls on
the same system's complex spectral loop obtained under the periodic boundary
condition. We have also performed sample numerical simulations to show that such
delocalized topological edge states can be safely reconstructed from
time-evolving states. Possible applications of delocalized topological edge
states are also briefly discussed.
|
The equilibration of sinusoidally modulated distribution of the kinetic
temperature is analyzed in the $\beta$-Fermi-Pasta-Ulam-Tsingou chain with
different degrees of nonlinearity and for different wavelengths of temperature
modulation. Two different types of initial conditions are used to show that
either one gives the same result as the number of realizations increases and
that the initial conditions that are closer to the state of thermal equilibrium
give faster convergence. The kinetics of temperature equilibration is monitored
and compared to the analytical solution available for the linear chain in the
continuum limit. The transition from ballistic to diffusive thermal
conductivity with an increase in the degree of anharmonicity is shown. In the
ballistic case, the energy equilibration has an oscillatory character with an
amplitude that decreases in time, while in the diffusive case it is monotonic in time. For a smaller wavelength of temperature modulation, the oscillatory character of temperature equilibration persists up to a larger degree of anharmonicity. For a given wavelength of temperature modulation, there is a value of the anharmonicity parameter at which temperature equilibration occurs most rapidly.
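For readers who want to reproduce the setting, here is a minimal sketch of a $\beta$-FPUT chain integrated with velocity Verlet, with initial velocities drawn from a sinusoidally modulated kinetic temperature profile; the chain length, $\beta$, modulation amplitude, and time step are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def fput_beta_force(x, beta):
    """Forces of the beta-FPUT chain with periodic boundary conditions."""
    dxr = np.roll(x, -1) - x          # right bond elongation
    dxl = x - np.roll(x, 1)           # left bond elongation
    return (dxr - dxl) + beta * (dxr**3 - dxl**3)

def run_chain(n=256, beta=1.0, t0=0.1, dT=0.05, k=1, dt=0.05, steps=20000):
    """Velocity-Verlet integration; velocities start from a sinusoidally
    modulated temperature profile T_n = t0 + dT*cos(2*pi*k*n/N)."""
    rng = np.random.default_rng(0)
    temp = t0 + dT * np.cos(2 * np.pi * k * np.arange(n) / n)
    x = np.zeros(n)
    v = rng.normal(scale=np.sqrt(temp))
    f = fput_beta_force(x, beta)
    kinetic = []
    for _ in range(steps):
        v += 0.5 * dt * f
        x += dt * v
        f = fput_beta_force(x, beta)
        v += 0.5 * dt * f
        kinetic.append(0.5 * v**2)    # local kinetic temperature ~ <v_n^2>
    return np.array(kinetic)

profile = run_chain().mean(axis=0)    # time-averaged kinetic energy per site
```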
|
PSR B1259-63 is a gamma-ray binary system hosting a radio pulsar orbiting
around an O9.5Ve star, LS 2883, with a period of ~3.4 years. The interaction of
the pulsar wind with the LS 2883 outflow leads to unpulsed broadband emission
in the radio, X-ray, GeV, and TeV domains. One of the most unusual features of
the system is an outburst at GeV energies around the periastron, during which
the energy release substantially exceeds the spin-down luminosity under the assumption of isotropic energy release. In this paper, we present the first
results of a recent multi-wavelength campaign (radio, optical, and X-ray bands)
accompanied by the analysis of publicly available GeV Fermi/LAT data. The
campaign covered a period of more than 100 days around the 2021 periastron and
revealed substantial differences from previously observed passages. We report a
major delay of the GeV flare, weaker X-ray flux during the peaks, which are
typically attributed to the times when the pulsar crosses the disk, and the
appearance of a third X-ray peak never observed before. We argue that these
features are consistent with the emission cone model of Chernyakova et al. (2020) in the case of a sparser and clumpier disk of the Be star.
|
Social media platforms such as Twitter and Facebook have led to a growing number of comments that contain users' opinions. Sentiment analysis research deals with these comments to extract opinions, which are positive or negative. Arabic is a morphologically rich language; thus, classical techniques of English sentiment analysis cannot be applied directly to Arabic. Word embeddings can be considered one of the successful methods for bridging the morphological gap of Arabic. Many works have been done on Arabic sentiment analysis based on word embeddings, but no study has focused on the variable parameters. This study discusses three parameters (window size, vector dimension, and number of negative samples) for Arabic sentiment analysis using the DBOW and DMPV architectures. A large corpus built from previous works is used to learn word representations and extract features. Four binary classifiers (Logistic Regression, Decision Tree, Support Vector Machine, and Naive Bayes) are used to detect sentiment. The performance of the classifiers is evaluated based on precision, recall, and F1-score.
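A minimal sketch of the kind of experiment described, using gensim's Doc2Vec (DBOW corresponds to dm=0; a distributed-memory variant to dm=1) with the three studied parameters exposed; the tiny toy corpus and labels are placeholders, not the study's data.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

# Toy labeled Arabic comments; the study uses a large corpus from previous works.
texts = [("خدمة ممتازة وسريعة", 1), ("تجربة سيئة جدا", 0),
         ("المنتج رائع", 1), ("لن اشتري مرة اخرى", 0)]
docs = [TaggedDocument(words=t.split(), tags=[i]) for i, (t, _) in enumerate(texts)]

# The three parameters under study: window size, vector dimension, negative samples.
model = Doc2Vec(docs, dm=0, window=5, vector_size=100, negative=5,
                min_count=1, epochs=50)

X = [model.infer_vector(t.split()) for t, _ in texts]
y = [label for _, label in texts]
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([model.infer_vector("خدمة رائعة".split())]))
```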
|
Quasiconformal maps are homeomorphisms with useful local distortion
inequalities; infinitesimally, they map balls to ellipsoids with bounded
eccentricity. This leads to a number of useful regularity properties, including
quantitative H\"older continuity estimates; on the other hand, one can use the
radial stretches to characterize the extremizers for H\"older continuity. In
this work, given any bounded countable set in $\mathbb{R}^d$, we will construct
an example of a $K$-quasiconformal map which exhibits the maximum stretching at
each point of the set. This will provide an example of a quasiconformal map
that exhibits the worst-case regularity on a surprisingly large set, and
generalizes constructions from the planar setting into $\mathbb{R}^d$.
|
Quantum communication networks are emerging as a promising technology that
could constitute a key building block in future communication networks in the
6G era and beyond. These networks have an inherent feature of parallelism that
allows them to boost the capacity and enhance the security of communication
systems. Recent advances led to the deployment of small- and large-scale
quantum communication networks with real quantum hardware. In quantum networks,
entanglement is a key resource that allows for data transmission between
different nodes. However, to reap the benefits of entanglement and enable
efficient quantum communication, the number of generated entangled pairs must
be optimized. Indeed, if the entanglement generation rates are not optimized,
then some of these valuable resources will be discarded and lost. In this
paper, the problem of optimizing the entanglement generation rates and their
distribution over a quantum memory is studied. In particular, a quantum network
in which users have heterogeneous distances and applications is considered.
This problem is posed as a mixed integer nonlinear programming optimization
problem whose goal is to efficiently utilize the available quantum memory by
distributing the quantum entangled pairs in a way that maximizes the user
satisfaction. An interior point optimization method is used to solve the
optimization problem and extensive simulations are conducted to evaluate the
effectiveness of the proposed system. Simulation results show the key design
considerations for efficient quantum networks, and the effect of different
network parameters on the network performance.
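To make the flavor of the optimization concrete, here is a heavily simplified continuous-relaxation sketch, not the paper's exact formulation: entangled-pair rates for users sharing a quantum memory maximize a sum of demand-weighted log-utilities subject to a memory capacity constraint, solved with an interior-point-style SciPy method. All symbols and numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

demands = np.array([4.0, 2.0, 1.0])   # heterogeneous user demands (illustrative)
memory_slots = 10.0                    # total quantum memory capacity

def neg_satisfaction(rates):
    # log-utility: diminishing returns in allocated entangled pairs
    return -np.sum(demands * np.log(rates))

capacity = LinearConstraint(np.ones(3), lb=0.0, ub=memory_slots)
res = minimize(neg_satisfaction, x0=np.full(3, 3.0), method="trust-constr",
               constraints=[capacity], bounds=[(1e-3, memory_slots)] * 3)
print(res.x)   # optimal rates come out proportional to demands
```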
|
We present new X-ray and UV observations of the Wolf-Rayet + black hole
binary system NGC 300 X-1 with the Chandra X-ray Observatory and the Hubble
Space Telescope Cosmic Origins Spectrograph. When combined with archival X-ray
observations, our X-ray and UV observations sample the entire binary orbit,
providing clues to the system geometry and interaction between the black hole
accretion disk and the donor star wind. We measure a binary orbital period of
32.7921$\pm$0.0003 hr, in agreement with previous studies, and perform
phase-resolved spectroscopy using the X-ray data. The X-ray light curve reveals
a deep eclipse, consistent with inclination angles of $i=60-75^{\circ}$, and a
pre-eclipse excess consistent with an accretion stream impacting the disk edge.
We further measure radial velocity variations for several prominent FUV
spectral lines, most notably He II $\lambda$1640 and C IV $\lambda$1550. We
find that the He II emission lines systematically lag the expected Wolf-Rayet
star orbital motion by a phase difference $\Delta \phi\sim0.3$, while C IV
$\lambda$1550 matches the phase of the anticipated radial velocity curve of the
Wolf-Rayet donor. We assume the C IV $\lambda$1550 emission line follows a
sinusoidal radial velocity curve (semi-amplitude = 250 km s$^{-1}$) and infer a
BH mass of 17$\pm$4 M$_{\odot}$. Our observations are consistent with the
presence of a wind-Roche lobe overflow accretion disk, where an accretion
stream forms from gravitationally focused wind material and impacts the edge of
the black hole accretion disk.
|
We provide a unified interpretation of both paramagnon and plasmon modes in
high-$T_c$ copper-oxides, and verify it quantitatively against available
resonant inelastic $x$-ray scattering (RIXS) data across the hole-doped phase
diagram. A three-dimensional extended Hubbard model, including long-range Coulomb interactions and doping-independent microscopic parameters for both classes of quantum fluctuations, is used. Collective modes are studied using the VWF+$1/\mathcal{N}_f$ approach, which extends the variational wave function (VWF) scheme by means of an expansion in the inverse number of fermionic flavors
($1/\mathcal{N}_f$). We show that intense paramagnons persist along the
anti-nodal line from the underdoped to overdoped regime and undergo rapid
overdamping in the nodal direction. Plasmons exhibit a three-dimensional
character, with minimal energy corresponding to anti-phase oscillations on
neighboring $\mathrm{CuO_2}$ planes. The theoretical spin- and charge-excitation energies semi-quantitatively reproduce the RIXS data for $\mathrm{(Bi,
Pb)_2 (Sr, La)_2 CuO_{6+\delta}}$. The present VWF+$1/\mathcal{N}_f$ analysis
of dynamics and former VWF results for static quantities combine into a
consistent description of the principal properties of hole-doped high-$T_c$
cuprates as strongly correlated systems.
|
The atmospheric neutrino flux includes a component from the prompt decay of
charmed hadrons that becomes significant only at $E\ge 10$ TeV. At these
energies, however, the diffuse flux of cosmic neutrinos discovered by IceCube
seems to be larger than the atmospheric one. Here we study the possibility of detecting a neutrino interaction in down-going atmospheric events at km$^3$
telescopes. The neutrino signal will always appear together with a muon bundle
that reveals its atmospheric origin and, generically, it implies an increase in
the detector activity with the slant depth. We propose a simple algorithm that
could separate these events from regular muon bundles.
|
We compute the motive of the variety of representations of the torus knot of
type (m,n) into the affine groups $AGL_1$ and $AGL_2$ for an arbitrary field
$k$. In the case that $k = F_q$ is a finite field this gives rise to the count
of the number of points of the representation variety, while for $k = C$ this
calculation returns the E-polynomial of the representation variety. We discuss
the interplay between these two results in light of Katz's theorem, which relates
the point count polynomial with the E-polynomial. In particular, we shall show
that several point count polynomials exist for these representation varieties,
depending on the arithmetic between m,n and the characteristic of the field,
whereas only one of them agrees with the actual E-polynomial.
|
We study $\alpha$-attractor models with both E-model and T-model potentials in
an extended Non-Minimal Derivative (NMD) inflation where a canonical scalar
field and its derivatives are non-minimally coupled to gravity. We calculate
the evolution of perturbations during this regime. Then, by adopting the inflation potentials of the model, we show that in the large $N$ and small $\alpha$ limit, the values of the scalar spectral index $n_s$ and the tensor-to-scalar ratio $r$ are universal. Next, we study reheating after inflation in this formalism. We obtain some constraints on the model's parameter space by comparing the results with Planck 2018 data.
|
We study the Nash Social Welfare problem: Given $n$ agents with valuation
functions $v_i:2^{[m]} \rightarrow {\mathbb R}$, partition $[m]$ into
$S_1,\ldots,S_n$ so as to maximize $(\prod_{i=1}^{n} v_i(S_i))^{1/n}$. The
problem has been shown to admit a constant-factor approximation for additive,
budget-additive, and piecewise linear concave separable valuations; the case of
submodular valuations is open.
We provide a $\frac{1}{e} (1-\frac{1}{e})^2$-approximation of the {\em
optimal value} for several classes of submodular valuations: coverage, sums of
matroid rank functions, and certain matching-based valuations.
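For concreteness, here is a tiny sketch of the objective with coverage valuations (each agent's value is the number of distinct ground elements covered by the items in its bundle); the items, coverage sets, and partition are made-up toy data.

```python
import math

# Each item covers a set of ground elements (toy coverage-valuation instance).
covers = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"d"}, 3: {"a", "d", "e"}}

def coverage_value(bundle):
    return len(set().union(*(covers[i] for i in bundle))) if bundle else 0

def nash_social_welfare(partition):
    """Geometric mean of agents' coverage valuations over a partition of items."""
    values = [coverage_value(b) for b in partition]
    if any(v == 0 for v in values):
        return 0.0
    return math.exp(sum(math.log(v) for v in values) / len(values))

print(nash_social_welfare([{0, 2}, {1, 3}]))  # two agents, items split {0,2} / {1,3}
```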
|
This note extends a recently proposed algorithm for model identification and
robust MPC of asymptotically stable, linear time-invariant systems subject to
process and measurement disturbances. Independent output predictors for
different steps ahead are estimated with Set Membership methods. It is here
shown that the corresponding prediction error bounds are the least conservative
in the considered model class. Then, a new multi-rate robust MPC algorithm is
developed, employing said multi-step predictors to robustly enforce constraints
and stability against disturbances and model uncertainty, and to reduce
conservativeness. A simulation example illustrates the effectiveness of the
approach.
|
Let $\mathfrak q$ be a Lie algebra over a field $\mathbb K$ and $p,\tilde
p\in\mathbb K[t]$ two different normalised polynomials of degree at least 2. As
vector spaces both quotient Lie algebras $\mathfrak q[t]/(p)$ and $\mathfrak
q[t]/(\tilde p)$ can be identified with $W=\mathfrak q{\cdot}1\oplus\mathfrak
q\bar t\oplus\ldots\oplus\mathfrak q\bar t^{n-1}$. If $\mathrm{deg}\,(p-\tilde
p)$ is at most 1, then the Lie brackets $[\,\,,\,]_p$, $[\,\,,\,]_{\tilde p}$
induced on $W$ by $p$ and $\tilde p$, respectively, are compatible. By a
general method, known as the Lenard-Magri scheme, we construct a subalgebra
$Z=Z(p,\tilde p)\subset {\mathcal S}(W)^{\mathfrak q{\cdot}1}$ such that
$\{Z,Z\}_p=\{Z,Z\}_{\tilde p}=0$. If ${\mathrm{tr.deg\,}}{\mathcal S}(\mathfrak
q)^{\mathfrak q}=\mathrm{ind}\,\mathfrak q$ and $\mathfrak q$ has the codim-$2$
property, then ${\mathrm{tr.deg\,}} Z$ takes the maximal possible value, which
is $((n-1)\dim\mathfrak q)/2+((n+1)\mathrm{ind}\,\mathfrak q)/2$. If $\mathfrak
q=\mathfrak g$ is semisimple, then $Z$ contains the Hamiltonians of a suitably
chosen Gaudin model. Therefore, in a non-reductive case, we obtain a completely
integrable generalisation of Gaudin models.
|
Interactions in biological and social systems are not restricted to pairwise ones but can take arbitrary sizes. Extensive studies have revealed that the
arbitrary-sized interactions significantly affect the spreading dynamics on
networked systems. Competing spreading dynamics, i.e., several epidemics spread
simultaneously and compete with each other, have been widely observed in the
real world, yet the way arbitrary-sized interactions affect competing spreading
dynamics still lacks systematic study. This study presents a model of two
competing simplicial susceptible-infected-susceptible epidemics on a
higher-order system represented by a simplicial complex and analyzes the model's
critical phenomena. In the proposed model, a susceptible node can only be
infected by one of the two epidemics, and the transmission of infection to
neighbors can occur through pairwise (i.e., an edge) and high-order (e.g.,
2-simplex) interactions simultaneously. Through a mean-field (MF) theory
analysis and numerical simulations, we show that the model displays rich
dynamical behavior depending on the 2-simplex infection strength. When the
2-simplex infection strength is weak, the model's phase diagram is consistent
with that of a simple graph, consisting of three regions: the absolute dominant
regions for each epidemic and the epidemic-free region. With the increase of
the 2-simplex infection strength, a new phase region called the alternative
dominant region emerges. In this region, the survival of one epidemic depends
on the initial conditions. Our theoretical analysis can reasonably predict the
time evolution and steady-state outbreak size in each region. In addition, we
further explore the model's phase diagram when the 2-simplex infection strengths are symmetric and when they are asymmetric. The results show that the 2-simplex
infection strength has a significant impact on the system phase diagram.
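An illustrative mean-field sketch of two competing SIS epidemics with pairwise and 2-simplex (three-body) infection channels follows; the specific equations below are a plausible form consistent with the description (a node hosts at most one epidemic), not necessarily the authors' exact model, and all parameter values are made up.

```python
import numpy as np
from scipy.integrate import solve_ivp

k, k_delta = 20.0, 6.0          # mean pairwise degree and mean 2-simplex degree
beta1, beta2 = 0.02, 0.02       # pairwise infection rates of the two epidemics
bD1, bD2 = 0.10, 0.04           # 2-simplex infection strengths (asymmetric here)
mu = 1.0                        # recovery rate

def mean_field(t, y):
    r1, r2 = y                   # infected fractions of epidemics 1 and 2
    s = 1.0 - r1 - r2            # susceptible fraction
    dr1 = -mu * r1 + beta1 * k * s * r1 + bD1 * k_delta * s * r1 ** 2
    dr2 = -mu * r2 + beta2 * k * s * r2 + bD2 * k_delta * s * r2 ** 2
    return [dr1, dr2]

sol = solve_ivp(mean_field, (0, 200), y0=[0.05, 0.05], dense_output=True)
print(sol.y[:, -1])              # steady-state outbreak sizes of the two epidemics
```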
|
We consider online coordinated precoding design for downlink wireless network
virtualization (WNV) in a multi-cell multiple-input multiple-output (MIMO)
network with imperfect channel state information (CSI). In our WNV framework,
an infrastructure provider (InP) owns each base station that is shared by
several service providers (SPs) oblivious of each other. The SPs design their
precoders as virtualization demands for user services, while the InP designs
the actual precoding solution to meet the service demands from the SPs. Our aim
is to minimize the long-term time-averaged expected precoding deviation over
MIMO fading channels, subject to both per-cell long-term and short-term
transmit power limits. We propose an online coordinated precoding algorithm for
virtualization, which provides a fully distributed semi-closed-form precoding
solution at each cell, based only on the current imperfect CSI without any CSI
exchange across cells. Taking into account the two-fold impact of imperfect CSI
on both the InP and the SPs, we show that our proposed algorithm is within an
$O(\delta)$ gap from the optimum over any time horizon, where $\delta$ is a CSI
inaccuracy indicator. Simulation results validate the performance of our
proposed algorithm under two commonly used precoding techniques in a typical
urban micro-cell network environment.
|
In 2021, a new track was initiated in the Challenge for Learned Image Compression: the video track. This category proposes to explore technologies
for the compression of short video clips at 1 Mbit/s. This paper proposes to
generate coded videos using the latest standardized video coders, especially
Versatile Video Coding (VVC). The objective is not only to measure the progress
made by learning techniques compared to the state of the art video coders, but
also to quantify their progress from year to year. With this in mind, this
paper documents how to generate the video sequences fulfilling the requirements
of this challenge, in a reproducible way, targeting the maximum performance for
VVC.
|
The Quantum Fisher Information (QFI) plays a crucial role in quantum
information theory and in many practical applications such as quantum
metrology. However, computing the QFI is generally a computationally demanding
task. In this work we analyze a lower bound on the QFI which we call the
sub-Quantum Fisher Information (sub-QFI). The bound can be efficiently
estimated on a quantum computer for an $n$-qubit state using $2n$ qubits. The
sub-QFI is based on the super-fidelity, an upper bound on Uhlmann's fidelity.
We analyze the sub-QFI in the context of unitary families, where we derive
several crucial properties including its geometrical interpretation. In
particular, we prove that the QFI and the sub-QFI are maximized for the same
optimal state, which implies that the sub-QFI is faithful to the QFI in the
sense that both quantities share the same global extrema. Based on this
faithfulness, the sub-QFI acts as an efficiently computable surrogate for the
QFI for quantum sensing and quantum metrology applications. Finally, we provide
additional meaning to the sub-QFI as a measure of coherence, asymmetry, and
purity loss.
|
We consider the problem of steering a multi-agent system to multi-consensus,
namely a regime where groups of agents agree on a given value which may be
different from group to group. We first address the problem by using
distributed proportional controllers that implement additional links in the
network modeling the communication protocol among agents and introduce a
procedure for the optimal selection of them. Both the cases of single
integrators and of second-order dynamics are taken into account and the
stability for the multi-consensus state is studied, ultimately providing
conditions for the gain of the controllers. We then extend the approach to
controllers that either add or remove links in the original structure, while preserving the weak connectedness of the resulting graph.
|
Let $\Gamma$ be a graph, $A$ an abelian group, $\mathcal{D}$ a given
orientation of $\Gamma$ and $R$ a unital subring of the endomorphism ring of
$A$. It is shown that the set of all maps $\varphi$ from $E(\Gamma)$ to $A$
such that $(\mathcal{D},\varphi)$ is an $A$-flow forms a left $R$-module. Let
$\Gamma$ be a union of two subgraphs $\Gamma_{1}$ and $\Gamma_{2}$, and $p^n$ a
prime power. It is proved that $\Gamma$ admits a nowhere-zero $p^n$-flow if
$\Gamma_{1}$ and $\Gamma_{2}$ have at most $p^n-2$ common edges and both have
nowhere-zero $p^n$-flows. More importantly, it is proved that $\Gamma$ admits a
nowhere-zero $4$-flow if $\Gamma_{1}$ and $\Gamma_{2}$ both have nowhere-zero
$4$-flows and their common edges induce a connected subgraph of $\Gamma$ of
size at most $3$. This covers a result of Catlin that a graph admits a
nowhere-zero $4$-flow if it is a union of a $4$-cycle and a subgraph admitting a
nowhere-zero $4$-flow.
|
We study automata-theoretic classes of string-to-string functions whose
output may grow faster than linearly in their input. Our central contribution
is to introduce a new such class, with polynomial growth and three equivalent
definitions: the smallest class containing regular functions and closed under a
"composition by substitution" operation; a restricted variant of pebble
transducers; a $\lambda$-calculus with a linear type system. As their name
suggests, these comparison-free polyregular functions form a subclass of
polyregular functions; we prove that the inclusion is strict. Other properties
of our new function class that we show are incomparability with HDT0L
transductions and closure under composition. Finally, we look at the recently
introduced layered streaming string transducers (SSTs), or equivalently
k-marble transducers. We prove that a function can be obtained by composing
such transducers together if and only if it is polyregular, and that k-layered
SSTs (or k-marble transducers) are equivalent to a corresponding notion of
(k+1)-layered HDT0L systems.
|
Many sectors have moved to the Cloud, and many are aggressively planning to move their workloads to the Cloud, since the world entered the Covid-19 pandemic. There are various reasons why the Cloud is an essential, irresistible technology and serves as an ultimate solution for accessing IT software and systems. It has become a new essential catalyst for enterprise organisations that are pursuing digital transformation. Remote working is now a common phenomenon across IT companies, keeping services available all the time. Covid-19 has made cloud adoption an immediate priority for organisations rather than a slowly approached future transformation. The benefit of the Cloud lies in the fact that employees, or rather engineers, of an enterprise are no longer dependent on closed, hardware-based IT infrastructure, which eliminates the necessity of working from networked office premises. This has raised a huge demand for skilled Cloud specialists who can manage and support the systems running on the Cloud across different regions of the world. In this research, the reasons for growing Cloud adoption after the Covid-19 pandemic are described, and the challenges that organisations will face are also explained. This study also details the most used cloud services during the pandemic, considering Amazon Web Services as the cloud provider.
|
Relationships between people constantly evolve, altering interpersonal
behavior and defining social groups. Relationships between nodes in social
networks can be represented by a tie strength, often empirically assessed using
surveys. While this is effective for taking static snapshots of relationships,
such methods are difficult to scale to dynamic networks. In this paper, we
propose a system that allows for the continuous approximation of relationships
as they evolve over time. We evaluate this system using the NetSense study,
which provides comprehensive communication records of students at the
University of Notre Dame over the course of four years. These records are
complemented by semesterly ego network surveys, which provide discrete samples
over time of each participant's true social tie strength with others. We
develop a pair of powerful machine learning models (complemented by a suite of
baselines extracted from past works) that learn from these surveys to interpret
the communication records as signals. These signals represent dynamic tie
strengths, accurately recording the evolution of relationships between the
individuals in our social networks. With these evolving tie values, we are able
to make several empirically derived observations which we compare to past
works.
|
Mean-variance portfolio decisions that combine prediction and optimisation
have been shown to have poor empirical performance. Here, we consider the
performance of various shrinkage methods by their efficient frontiers under
different distributional assumptions to study the impact of reasonable
departures from Normality. Namely, we investigate the impact of first-order
auto-correlation, second-order auto-correlation, skewness, and excess kurtosis.
We show that the shrinkage methods tend to re-scale the sample efficient
frontier, which can change based on the nature of local perturbations from
Normality. This re-scaling implies that the standard approach of comparing
decision rules for a fixed level of risk aversion is problematic, and more so
in a dynamic market setting. Our results suggest that comparing efficient
frontiers has serious implications which oppose the prevailing thinking in the
literature: namely, that sample estimators outperform Stein-type estimators of
the mean, and that improving the prediction of the covariance has greater
importance than improving that of the means.
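A small sketch of the kind of comparison involved: the sample and a Ledoit-Wolf-shrunk covariance give different mean-variance efficient frontiers on the same simulated returns. The return-generating process, parameters, and number of observations are illustrative only, not the study's setup.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(1)
returns = rng.multivariate_normal(mean=[0.02, 0.03, 0.05],
                                  cov=[[0.04, 0.01, 0.0],
                                       [0.01, 0.09, 0.02],
                                       [0.0, 0.02, 0.16]], size=60)
mu = returns.mean(axis=0)

def frontier(cov, targets):
    """Minimum-variance, fully invested portfolio risk for each target return."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    a, b, c = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu
    risks = []
    for r in targets:
        lam = (c - b * r) / (a * c - b ** 2)
        gam = (a * r - b) / (a * c - b ** 2)
        w = inv @ (lam * ones + gam * mu)
        risks.append(np.sqrt(w @ cov @ w))
    return np.array(risks)

targets = np.linspace(mu.min(), mu.max(), 5)
print(frontier(np.cov(returns, rowvar=False), targets))          # sample frontier
print(frontier(LedoitWolf().fit(returns).covariance_, targets))  # shrunk frontier
```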
|
Transition metal dichalcogenides (TMDs) are known to support complex
excitonic states. Revealing the differences in relaxation dynamics among
different excitonic species and elucidating the transition dynamics between
them may provide important guidelines for designing TMD-based excitonic
devices. Combining photoluminescence (PL) and reflectance contrast measurements
with ultrafast pump-probe spectroscopy under cryogenic temperatures, we herein
study the relaxation dynamics of neutral and charged excitons in a
back-gate-controlled monolayer device. Pump-probe results reveal quite
different relaxation dynamics of excitonic states under different interfacial
conditions: while neutral excitons experience a much longer lifetime than trions
in monolayer WS2, the opposite is true in the WS2/h-BN heterostructure. It is
found that the insertion of an h-BN layer between the TMD monolayer and the
substrate has a great influence on the lifetimes of different excitonic states.
The h-BN flakes can not only screen the effects of impurities and defects at
the interface, but also help establish a non-radiative transition from neutral excitons to trions as the dominant relaxation pathway at cryogenic temperatures. Our findings highlight the important role the interface may play in
governing the transient properties of carriers in 2D semiconductors, and may
also have implications for designing light-emitting and photo-detecting devices
based on TMDs.
|
Submillimeter/millimeter observations of dusty star-forming galaxies with the
Atacama Large Millimeter/submillimeter Array (ALMA) have shown that dust
continuum emission generally occurs in compact regions smaller than the stellar
distribution. However, it remains to be understood how systematic these
findings are. Studies often lack homogeneity in the sample selection, target
discontinuous areas with inhomogeneous sensitivities, and suffer from modest
$uv$ coverage coming from single array configurations. GOODS-ALMA is a 1.1mm
galaxy survey over a continuous area of 72.42arcmin$^2$ at a homogeneous
sensitivity. In this version 2.0, we present a new low resolution dataset and
its combination with the previous high resolution dataset from the survey,
improving the $uv$ coverage and sensitivity reaching an average of $\sigma =
68.4\mu$Jy beam$^{-1}$. A total of 88 galaxies are detected in a blind search
(compared to 35 in the high resolution dataset alone), 50% at $S/N_{peak} \geq
5$ and 50% at $3.5 \leq S/N_{peak} \leq 5$ aided by priors. Among them, 13 out
of the 88 are optically dark or faint sources ($H$- or $K$-band dropouts). The
sample dust continuum sizes at 1.1mm are generally compact, with a median
effective radius of $R_{e} = 0"10 \pm 0"05$ (a physical size of $R_{e} = 0.73
\pm 0.29$kpc at the redshift of each source). Dust continuum sizes evolve with
redshift and stellar mass resembling the trends of the stellar sizes measured
at optical wavelengths, albeit with a lower normalization compared to those of
late-type galaxies. We conclude that for sources with flux densities $S_{1.1mm}
> 1$mJy, compact dust continuum emission at 1.1mm prevails, and sizes as
extended as typical star-forming stellar disks are rare. The $S_{1.1mm} < 1$mJy
sources appear slightly more extended at 1.1mm, although they are still
generally compact below the sizes of typical star-forming stellar disks.
|
Web servers scaled across distributed systems necessitate complex runtime
controls for providing quality of service (QoS) guarantees as well as
minimizing the energy costs under dynamic workloads. This paper presents a
QoS-aware runtime controller using horizontal scaling (node allocation) and
vertical scaling (resource allocation within nodes) methods synergistically to
provide adaptation to workloads while minimizing the power consumption under
QoS constraint (i.e., response time). Horizontal scaling determines the
number of active nodes based on workload demands and the required QoS according
to a set of rules. Then, it is coupled with vertical scaling using transfer
Q-learning, which further tunes power/performance based on workload profile
using dynamic voltage/frequency scaling (DVFS). It transfers Q-values within
minimally explored states, reducing exploration requirements. In addition, the
approach exploits the scalable architecture of the many-core server, allowing it to reuse available knowledge from fully or partially explored nodes. When combined, these methods reduce the exploration time and QoS violations
when compared to model-free Q-learning. The technique balances design-time and
runtime costs to maximize portability and operational optimality, demonstrated through persistent power reductions with minimal QoS violations
under different workload scenarios on heterogeneous multi-processing nodes of a
server cluster.
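A toy sketch of the vertical-scaling step: a tabular Q-learning agent picks a voltage/frequency (DVFS) level per workload state to trade off power against a response-time (QoS) penalty, and a new node starts from an explored node's Q-table rather than from scratch. The state/action encoding, reward shape, and the naive "transfer" shown are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

n_workload_states, n_dvfs_levels = 5, 4
alpha, gamma, eps = 0.3, 0.9, 0.1
rng = np.random.default_rng(0)

def reward(state, action):
    power = (action + 1) ** 2                      # higher DVFS level -> more power
    latency = (state + 1) / (action + 1)           # higher level -> lower latency
    qos_penalty = 10.0 if latency > 2.0 else 0.0   # response-time constraint
    return -(power + qos_penalty)

def train(q, episodes=2000):
    for _ in range(episodes):
        s = rng.integers(n_workload_states)
        a = rng.integers(n_dvfs_levels) if rng.random() < eps else int(np.argmax(q[s]))
        r = reward(s, a)
        s_next = rng.integers(n_workload_states)   # workload changes stochastically
        q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
    return q

q_source = train(np.zeros((n_workload_states, n_dvfs_levels)))
# "Transfer": a new node starts from the explored node's Q-table instead of zeros,
# reducing the exploration needed before QoS violations stop.
q_target = train(q_source.copy(), episodes=200)
print(np.argmax(q_target, axis=1))                 # chosen DVFS level per workload state
```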
|
We prove that the restriction map from the subspace of regular points of the
holonomy perturbed SU(2) traceless flat moduli space of a tangle in a
3-manifold to the traceless flat moduli space of its boundary marked surface is
a Lagrangian immersion.
A key ingredient in our proof is the use of composition in the Weinstein
category, combined with the fact that SU(2) holonomy perturbations in a
cylinder induce Hamiltonian isotopies. In addition, we show that $(S^2,4)$, the
2-sphere with four marked points, is its own traceless flat SU(2) moduli space.
|
Consider an energy harvesting node where generation of a status update
message takes non-negligible time due to sensing, computing and analytics
operations performed before making update transmissions. The node has to
harmonize its (re)transmission strategy with the sensing/computing. We call
this general set of problems intermittent status updating. In this paper, we
consider intermittent status updating through non-preemptive sensing/computing
(S/C) and transmission (Tx) operations, each costing a single energy recharge
of the node, through an erasure channel with (a) perfect channel feedback and
(b) no channel feedback. The S/C time for each update is independent with a
general distribution. The Tx queue has a single data buffer to save the latest
packet generated after the S/C operation and a single transmitter where
transmission time is deterministic. Once energy is harvested, the node has to
decide whether to activate S/C to generate a new update or to (re)send the
existing update (if any) to the receiver. We prove that when feedback is
available, the average peak age of information (AoI) at the receiver is minimized by
a threshold-based policy that allows only young packets to be (re)sent or else
generates a new update. We additionally propose window-based and probabilistic retransmission schemes for both cases (a) and (b) and obtain closed-form
average peak AoI expressions. Our numerical results show average peak AoI
performance comparisons and improvements.
|
We introduce a simple, rigorous, and unified framework for solving nonlinear
partial differential equations (PDEs), and for solving inverse problems (IPs)
involving the identification of parameters in PDEs, using the framework of
Gaussian processes. The proposed approach: (1) provides a natural
generalization of collocation kernel methods to nonlinear PDEs and IPs; (2) has
guaranteed convergence for a very general class of PDEs, and comes equipped
with a path to compute error bounds for specific PDE approximations; (3)
inherits the state-of-the-art computational complexity of linear solvers for
dense kernel matrices. The main idea of our method is to approximate the
solution of a given PDE as the maximum a posteriori (MAP) estimator of a
Gaussian process conditioned on solving the PDE at a finite number of
collocation points. Although this optimization problem is infinite-dimensional,
it can be reduced to a finite-dimensional one by introducing additional
variables corresponding to the values of the derivatives of the solution at
collocation points; this generalizes the representer theorem arising in
Gaussian process regression. The reduced optimization problem has the form of a
quadratic objective function subject to nonlinear constraints; it is solved
with a variant of the Gauss--Newton method. The resulting algorithm (a) can be
interpreted as solving successive linearizations of the nonlinear PDE, and (b)
in practice is found to converge in a small number of iterations (2 to 10), for
a wide range of PDEs. Most traditional approaches to IPs interleave parameter
updates with numerical solution of the PDE; our algorithm solves for both
parameter and PDE solution simultaneously. Experiments on nonlinear elliptic
PDEs, Burgers' equation, a regularized Eikonal equation, and an IP for
permeability identification in Darcy flow illustrate the efficacy and scope of
our framework.
|
By using worldline and diagrammatic quantum Monte Carlo techniques, matrix
product state and a variational approach \`a la Feynman, we investigate the
equilibrium properties and relaxation features of a quantum system of $N$ spins
antiferromagnetically interacting with each other, with strength $J$, and
coupled to a common bath of bosonic oscillators, with strength $\alpha$. We
show that, in the Ohmic regime, a Berezinskii-Kosterlitz-Thouless quantum phase
transition occurs. While for $J=0$ the critical value of $\alpha$ decreases
asymptotically with $1/N$ by increasing $N$, for nonvanishing $J$ it turns out
to be practically independent of $N$, allowing us to identify a finite range of
values of $\alpha$ where spin phase coherence is preserved also for large $N$.
Then, by using matrix product state simulations, and the Mori formalism and the
variational approach \`a la Feynman jointly, we unveil the features of the
relaxation, which, in particular, exhibits a non-monotonic dependence on the
temperature reminiscent of the Kondo effect. For the observed quantum phase
transition we also establish a criterion analogous to that of the
metal-insulator transition in solids.
|
A seemingly simple oxide with a rutile structure, RuO2 has been shown to
possess several intriguing properties ranging from strain-stabilized
superconductivity to a strong catalytic activity. Much interest has arisen
surrounding the controlled synthesis of RuO2 films but, unfortunately,
utilizing atomically-controlled deposition techniques like molecular beam
epitaxy (MBE) has been difficult due to the ultra-low vapor pressure and low
oxidation potential of Ru. Here, we demonstrate the growth of epitaxial,
single-crystalline RuO2 films on different substrate orientations using the
novel solid-source metal-organic (MO) MBE. This approach circumvents these
issues by supplying Ru using a pre-oxidized solid metal-organic precursor
containing Ru. High-quality epitaxial RuO2 films with bulk-like
room-temperature resistivity of 55 micro-ohm-cm were obtained at a substrate
temperature as low as 300 C. By combining X-ray diffraction, transmission
electron microscopy, and electrical measurements, we discuss the effect of
substrate temperature, orientation, film thickness, and strain on the structure
and electrical properties of these films. Our results illustrating the use of
the novel solid-source MOMBE approach pave the way to the atomic-layer-controlled
synthesis of complex oxides of stubborn metals, which are not only difficult to
evaporate but also hard to oxidize.
|
We analyze the conclusions of the influence of a Coulomb-type potential on
the Klein-Gordon oscillator. We show that the truncation method proposed by the
authors do not yield all the eigenvalues of the radial equation but just one of
them for a particular value of a model parameter. Besides, the existence of
allowed oscillator frequencies that depend on the quantum numbers is an
artifact of the truncation method.
|
Recently, most siamese-network-based trackers locate targets via object
classification and bounding-box regression. Generally, they select the
bounding-box with maximum classification confidence as the final prediction.
This strategy may miss the right result due to the accuracy misalignment
between classification and regression. In this paper, we propose a novel
siamese tracking algorithm called SiamRCR, addressing this problem with a
simple, light and effective solution. It builds reciprocal links between
classification and regression branches, which can dynamically re-weight their
losses for each positive sample. In addition, we add a localization branch to
predict the localization accuracy, so that it can work as the replacement of
the regression assistance link during inference. This branch makes the training
and inference more consistent. Extensive experimental results demonstrate the
effectiveness of SiamRCR and its superiority over the state-of-the-art
competitors on GOT-10k, LaSOT, TrackingNet, OTB-2015, VOT-2018 and VOT-2019.
Moreover, our SiamRCR runs at 65 FPS, far above the real-time requirement.
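A rough PyTorch sketch of the reciprocal re-weighting idea as we read it from the abstract (each branch's per-sample loss weighted by the other branch's quality, plus a localization-quality branch trained toward the IoU); the weighting form and loss shapes are assumptions for illustration, not the exact SiamRCR formulation.

```python
import torch
import torch.nn.functional as F

def reciprocal_losses(cls_logits, cls_targets, pred_iou, loc_quality):
    """cls_logits: (N,) logits of positive samples; cls_targets: (N,) in {0,1};
    pred_iou: (N,) IoU of regressed boxes; loc_quality: (N,) localization-branch output."""
    cls_loss = F.binary_cross_entropy_with_logits(cls_logits, cls_targets, reduction="none")
    reg_loss = 1.0 - pred_iou                              # IoU-style regression loss
    # Reciprocal links: each branch's loss is weighted by the other branch's quality.
    cls_weight = pred_iou.detach()
    reg_weight = torch.sigmoid(cls_logits).detach()
    # Localization branch learns to predict the regression quality (used at inference).
    loc_loss = F.binary_cross_entropy(loc_quality, pred_iou.detach(), reduction="none")
    return (cls_weight * cls_loss + reg_weight * reg_loss + loc_loss).mean()

# Toy positive samples
logits, targets = torch.randn(8), torch.ones(8)
iou, quality = torch.rand(8), torch.rand(8)
print(reciprocal_losses(logits, targets, iou, quality))
```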
|
We propose a Nitsche method for multiscale partial differential equations,
which retrieves the macroscopic information and the local microscopic
information at one stroke. We prove the convergence of the method for second
order elliptic problem with bounded and measurable coefficients. The rate of
convergence may be derived for coefficients with further structures such as
periodicity and ergodicity. Extensive numerical results confirm the theoretical
predictions.
|
We report new measurements of the branching fraction $\cal B(D_s^+\to
\ell^+\nu)$, where $\ell^+$ is either $\mu^+$ or
$\tau^+(\to\pi^+\bar{\nu}_\tau)$, based on $6.32$ fb$^{-1}$ of
electron-positron annihilation data collected by the BESIII experiment at six
center-of-mass energy points between $4.178$ and $4.226$ GeV. Simultaneously
floating the $D_s^+\to\mu^+\nu_\mu$ and $D_s^+\to\tau^+\nu_\tau$ components
yields $\cal B(D_s^+\to \tau^+\nu_\tau) = (5.21\pm0.25\pm0.17)\times10^{-2}$,
$\cal B(D_s^+\to \mu^+\nu_\mu) = (5.35\pm0.13\pm0.16)\times10^{-3}$, and the
ratio of decay widths $R=\frac{\Gamma(D_s^+\to \tau^+\nu_\tau)}{\Gamma(D_s^+\to
\mu^+\nu_\mu)} = 9.73^{+0.61}_{-0.58}\pm 0.36$, where the first uncertainties
are statistical and the second systematic. No evidence of ${\it CP}$ asymmetry
is observed in the decay rates $D_s^\pm\to\mu^\pm\nu_\mu$ and
$D_s^\pm\to\tau^\pm\nu_\tau$: $A_{\it CP}(\mu^\pm\nu) = (-1.2\pm2.5\pm1.0)\%$
and $A_{\it CP}(\tau^\pm\nu) = (+2.9\pm4.8\pm1.0)\%$. Constraining our
measurement to the Standard Model expectation of lepton universality
($R=9.75$), we find the more precise results $\cal B(D_s^+\to \tau^+\nu_\tau) =
(5.22\pm0.10\pm 0.14)\times10^{-2}$ and $A_{\it CP}(\tau^\pm\nu_\tau) =
(-0.1\pm1.9\pm1.0)\%$. Combining our results with inputs external to our
analysis, we determine the $c\to \bar{s}$ quark mixing matrix element, $D_s^+$
decay constant, and ratio of the decay constants to be $|V_{cs}| =
0.973\pm0.009\pm0.014$, $f_{D^+_s} = 249.9\pm2.4\pm3.5~\text{MeV}$, and
$f_{D^+_s}/f_{D^+} = 1.232\pm0.035$, respectively.
|
Various attention mechanisms are being widely applied to acoustic scene
classification. However, we empirically found that the attention mechanism can
excessively discard potentially valuable information, despite improving
performance. We propose the attentive max feature map that combines two
effective techniques, attention and a max feature map, to further elaborate the
attention mechanism and mitigate the above-mentioned phenomenon. We also
explore various joint training methods, including multi-task learning, that
allocate additional abstract labels for each audio recording. Our proposed
system demonstrates state-of-the-art performance for single systems on Subtask
A of the DCASE 2020 challenge by applying the two proposed techniques using
relatively fewer parameters. Furthermore, adopting the proposed attentive max
feature map, our team placed fourth in the recent DCASE 2021 challenge.
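As an illustration of the max feature map operation referenced above, here is a minimal PyTorch sketch; the channel counts are arbitrary, and the simple channel attention shown is only a stand-in for the attention mechanism combined with it in the paper.

```python
import torch
import torch.nn as nn

class MaxFeatureMap(nn.Module):
    """Max feature map: split channels into two halves and take the element-wise max,
    acting as a competitive alternative to ReLU."""
    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)
        return torch.max(a, b)

class AttentiveMFMBlock(nn.Module):
    def __init__(self, in_ch=1, out_ch=32):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * 2, kernel_size=3, padding=1)
        self.mfm = MaxFeatureMap()
        # Simple channel attention (squeeze-and-excitation style) as a stand-in.
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.mfm(self.conv(x))
        return h * self.att(h)

spectrogram = torch.randn(4, 1, 64, 128)   # (batch, channel, mel bins, frames)
print(AttentiveMFMBlock()(spectrogram).shape)
```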
|
Determining the aqueous solubility of molecules is a vital step in many
pharmaceutical, environmental, and energy storage applications. Despite efforts
made over decades, there are still challenges associated with developing a
solubility prediction model with satisfactory accuracy for many of these
applications. The goal of this study is to develop a general model capable of
predicting the solubility of a broad range of organic molecules. Using the
largest currently available solubility dataset, we implement deep
learning-based models to predict solubility from molecular structure and
explore several different molecular representations including molecular
descriptors, simplified molecular-input line-entry system (SMILES) strings,
molecular graphs, and three-dimensional (3D) atomic coordinates using four
different neural network architectures - fully connected neural networks
(FCNNs), recurrent neural networks (RNNs), graph neural networks (GNNs), and
SchNet. We find that models using molecular descriptors achieve the best
performance, with GNN models also achieving good performance. We perform
extensive error analysis to understand the molecular properties that influence
model performance, perform feature analysis to understand which information
about molecular structure is most valuable for prediction, and perform a
transfer learning and data size study to understand the impact of data
availability on model performance.
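A minimal sketch of the descriptor-based route that performed best (molecular descriptors computed with RDKit feeding a small fully connected network); the descriptor choice, network size, and the tiny toy dataset with made-up solubility values are illustrative only.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.neural_network import MLPRegressor

def featurize(smiles: str) -> np.ndarray:
    """Compute a few simple molecular descriptors from a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    return np.array([
        Descriptors.MolWt(mol),
        Descriptors.MolLogP(mol),
        Descriptors.TPSA(mol),
        Descriptors.NumRotatableBonds(mol),
        Descriptors.NumHDonors(mol),
        Descriptors.NumHAcceptors(mol),
    ])

# Toy (SMILES, made-up log-solubility) pairs; real studies use thousands of molecules.
data = [("CCO", -0.2), ("c1ccccc1", -1.6), ("CC(=O)Oc1ccccc1C(=O)O", -1.7), ("O", 1.5)]
X = np.stack([featurize(s) for s, _ in data])
y = np.array([t for _, t in data])

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
print(model.predict(featurize("CCN").reshape(1, -1)))
```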
|
The $^{23}$Na($\alpha,p$)$^{26}$Mg reaction has been identified as having a
significant impact on the nucleosynthesis of several nuclei between Ne and Ti
in type-Ia supernovae, and of $^{23}$Na and $^{26}$Al in massive stars. The
reaction has been subjected to renewed experimental interest recently,
motivated by high uncertainties in early experimental data and in the
statistical Hauser-Feshbach models used in reaction rate compilations. Early
experiments were affected by target deterioration issues and unquantifiable
uncertainties. Three new independent measurements instead utilize inverse
kinematics and Rutherford scattering monitoring to resolve this. In this work
we present directly measured angular distributions of the emitted protons to
eliminate a discrepancy in the assumptions made in the recent reaction rate
measurements, which results in cross sections differing by a factor of 3. We
derive a new combined experimental reaction rate for the
$^{23}$Na($\alpha,p$)$^{26}$Mg reaction with a total uncertainty of 30% at
relevant temperatures. Using our new $^{23}$Na($\alpha,p$)$^{26}$Mg rate, the
$^{26}$Al and $^{23}$Na production uncertainty is reduced to within 8%. In
comparison, using the factor of 10 uncertainty previously recommended by the
rate compilation STARLIB, $^{26}$Al and $^{23}$Na production varied by
more than a factor of 2. In type-Ia supernova conditions, the impact on
production of $^{23}$Na is constrained to within 15%.
|
J-PAS will soon start imaging 8000 deg$^2$ of the northern sky with its unique set of 56 filters (R $\sim$ 60). Previously, we observed 1 deg$^2$ of the AEGIS field with an interim camera equipped with all the J-PAS filters. With these data (miniJPAS), we aim to prove the scientific potential of J-PAS to identify and
characterize the galaxy populations with the goal of performing galaxy
evolution studies across cosmic time. Several SED-fitting codes are used to
constrain the stellar population properties of a complete flux-limited sample
(rSDSS <= 22.5 AB) of miniJPAS galaxies that extends up to z = 1. We find
consistent results on the galaxy properties derived from the different codes,
independently of the galaxy spectral-type or redshift. For galaxies with
SNR>=10, we estimate that the J-PAS photometric system allows stellar population properties to be derived with a precision equivalent to that obtained with
spectroscopic surveys of similar SNR. By using the dust-corrected (u-r)
colour-mass diagram, a powerful proxy to characterize galaxy populations, we
find that the fraction of red and blue galaxies evolves with cosmic time, with
red galaxies being $\sim$ 38% and $\sim$ 18% of the whole population at z = 0.1
and z = 0.5, respectively. At all redshifts, the more massive galaxies belong
to the red sequence and these galaxies are typically older and more metal rich
than their counterparts in the blue cloud. Our results confirm that with J-PAS
data we will be able to analyze large samples of galaxies up to z $\sim$ 1,
with galaxy stellar masses above log(M$_*$/M$_{\odot}$) $\sim$ 8.9, 9.5, and
9.9 at z = 0.3, 0.5, and 0.7, respectively. The SFH of a complete sub-sample of
galaxies selected at z $\sim$ 0.1 with log(M$_*$/M$_{\odot}$) > 8.3 constrains
the cosmic evolution of the star formation rate density up to z $\sim$ 3 in
good agreement with results from cosmological surveys.
|
We have developed a torsion balance with a sensitivity about ten times better
than those of previously operating balances for the study of long range forces
coupling to baryon and lepton number. We present here the details of the design
and expected characteristics of this balance. Operation of this balance for a
year will also result in improved bounds on long range interactions of dark
matter violating Einstein's equivalence principle.
|
The synchronous model is a type of formal model for modelling and specifying reactive systems. It has a great advantage over other real-time models in that its modelling paradigm supports deterministic concurrent behaviour of systems. Various approaches have been utilized for the verification of synchronous models
based on different techniques, such as model checking, SAT/SMT solving, term
rewriting, type inference and so on. In this paper, we propose a verification
approach for synchronous models based on compositional reasoning and term
rewriting. Specifically, we initially propose a variation of dynamic logic,
called synchronous dynamic logic (SDL). SDL extends the regular program model
of first-order dynamic logic (FODL) with necessary primitives to capture the
notion of synchrony and synchronous communication between parallel programs,
and enriches FODL formulas with temporal dynamic logical formulas to specify
safety properties -- the type of properties of primary concern in reactive systems.
To correctly capture synchronous communication, we define a constructive
semantics for the program model of SDL. We build a sound and relatively
complete proof system for SDL. Compared to previous verification approaches,
SDL provides a divide-and-conquer approach to analyzing and verifying synchronous models
based on compositional reasoning over the syntactic structure of the programs of
SDL. To illustrate the usefulness of SDL, we apply SDL to specify and verify a
small example in the synchronous model SyncChart, which shows the potential of
SDL to be used in practice.
|
A limitation for collaborative robots (cobots) is their lack of ability to
adapt to human partners, who typically exhibit an immense diversity of
behaviors. We present an autonomous framework as a cobot's real-time
decision-making mechanism to anticipate a variety of human characteristics and
behaviors, including human errors, toward a personalized collaboration. Our
framework handles such behaviors in two levels: 1) short-term human behaviors
are adapted through our novel Anticipatory Partially Observable Markov Decision
Process (A-POMDP) models, covering a human's changing intent (motivation),
availability, and capability; 2) long-term changing human characteristics are
adapted by our novel Adaptive Bayesian Policy Selection (ABPS) mechanism that
selects a short-term decision model, e.g., an A-POMDP, according to an estimate
of a human's workplace characteristics, such as her expertise and collaboration
preferences. To design and evaluate our framework over a diversity of human
behaviors, we propose a pipeline where we first train and rigorously test the
framework in simulation over novel human models. Then, we deploy and evaluate
it on our novel physical experiment setup that induces cognitive load on humans
to observe their dynamic behaviors, including their mistakes, and their
changing characteristics such as their expertise. We conduct user studies and
show that our framework effectively collaborates non-stop for hours and adapts
to various changing human behaviors and characteristics in real-time. This
increases the efficiency and naturalness of the collaboration, with higher
perceived collaboration, positive teammate traits, and human trust. We believe
that such an extended human adaptation is key to the long-term use of cobots.
|
In a resource-constrained IoT network, end nodes like WSN, RFID, and embedded
systems are used which have memory, processing, and energy limitations. One of
the key distribution solutions in these types of networks is to use the key
pre-distribution scheme, which accomplishes the key distribution operation
offline before the resource-constrained devices are deployed in the environment.
Also, in order to reduce the shared key discovery computing and communication
overhead, the use of combinatorial design in key pre-distribution has been
proposed as a solution in recent years. In this study, a ${\mu}$-PBIBD
combinatorial design is introduced and constructed and the mapping of such
design as a key pre-distribution scheme in the resource-constrained IoT network
is explained. By using such a key pre-distribution scheme, more keys are
obtained for communication between two devices in the IoT network. This means
that there can be a maximum of q + 2 keys between two devices in the
network, where q is a prime power; that is, instead of having a single common key
for a direct secure connection, the two devices can have q + 2 common keys in
their key chains. Accordingly, we increase the resilience of the key
pre-distribution scheme compared to the SBIBD, TD, Trade-KP, UKP *, RD * and
2-D ${\mu}$-PBIBD designs.
Keywords: resource-constrained IoT network; combinatorial design;
${\mu}$-PBIBD; resilience.
|
We discuss various aspects of a neutrino physics program that can be carried
out with the neutrino Beam-Dump eXperiment DRIFT ($\nu$BDX-DRIFT) detector
using neutrino beams produced in next generation neutrino facilities.
$\nu$BDX-DRIFT is a directional low-pressure TPC detector suitable for
measurements of coherent elastic neutrino-nucleus scattering (CE$\nu$NS) using
a variety of gaseous target materials which include carbon disulfide, carbon
tetrafluoride and tetraethyllead, among others. The neutrino physics program
includes standard model (SM) measurements and beyond the standard model (BSM)
physics searches. Focusing on the Long Baseline Neutrino Facility (LBNF)
beamline at Fermilab, we first discuss basic features of the detector and
estimate backgrounds, including beam-induced neutron backgrounds. We then
quantify the CE$\nu$NS signal in the different target materials and study the
sensitivity of $\nu$BDX-DRIFT to measurements of the weak mixing angle and
neutron density distributions. We consider as well prospects for new physics
searches, in particular sensitivities to effective neutrino non-standard
interactions.
|
Existing studies in weakly-supervised semantic segmentation (WSSS) using
image-level weak supervision have several limitations: sparse object coverage,
inaccurate object boundaries, and co-occurring pixels from non-target objects.
To overcome these challenges, we propose a novel framework, namely Explicit
Pseudo-pixel Supervision (EPS), which learns from pixel-level feedback by
combining two weak supervisions: the image-level label provides the object
identity via the localization map, and the saliency map from an off-the-shelf
saliency detection model offers rich boundaries. We devise a joint training
strategy to fully utilize the complementary relationship between the two types of
information. Our method can obtain accurate object boundaries and discard
co-occurring pixels, thereby significantly improving the quality of
pseudo-masks. Experimental results show that the proposed method remarkably
outperforms existing methods by resolving key challenges of WSSS and achieves
the new state-of-the-art performance on both PASCAL VOC 2012 and MS COCO 2014
datasets.
|
The number of biomedical literature on new biomedical concepts is rapidly
increasing, which necessitates a reliable biomedical named entity recognition
(BioNER) model for identifying new and unseen entity mentions. However, it is
questionable whether existing BioNER models can effectively handle them. In
this work, we systematically analyze the three types of recognition abilities
of BioNER models: memorization, synonym generalization, and concept
generalization. We find that although BioNER models achieve state-of-the-art
performance on BioNER benchmarks based on overall performance, they have
limitations in identifying synonyms and new biomedical concepts such as
COVID-19. From this observation, we conclude that existing BioNER models are
overestimated in terms of their generalization abilities. Also, we identify
several difficulties in recognizing unseen mentions in BioNER and make the
following conclusions: (1) BioNER models tend to exploit dataset biases, which
hinders the models' abilities to generalize, and (2) several biomedical names
have novel morphological patterns with little name regularity such as COVID-19,
and models fail to recognize them. We apply a current statistics-based
debiasing method to our problem as a simple remedy and show the improvement in
generalization to unseen mentions. We hope that our analyses and findings will
facilitate further research into the generalization capabilities of
NER models in a domain where their reliability is of utmost importance.
|
We investigate how entanglement can enhance two-photon absorption in a
three-level system. First, we employ the Schmidt decomposition to determine the
entanglement properties of the optimal two-photon state to drive such a
transition, and the maximum enhancement which can be achieved in comparison to
the optimal classical pulse. We then adapt the optimization problem to
realistic experimental constraints, where photon pairs from a down-conversion
source are manipulated by local operations such as spatial light modulators. We
derive optimal pulse shaping functions to enhance the absorption efficiency,
and compare the maximal enhancement achievable by entanglement to the yield of
optimally shaped, separable pulses.
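As a hedged illustration of the Schmidt-decomposition step mentioned above, the following minimal Python sketch decomposes a discretized joint two-photon amplitude via a singular value decomposition; the Gaussian amplitude, grid, and variable names are placeholders, not the states used in the paper.

```python
import numpy as np

# Hypothetical discretized joint two-photon amplitude psi(w_s, w_i) on a
# square frequency grid; a correlated Gaussian serves as a stand-in state.
n = 256
w = np.linspace(-5.0, 5.0, n)
ws, wi = np.meshgrid(w, w, indexing="ij")
psi = np.exp(-((ws + wi) ** 2) / 0.1 - ((ws - wi) ** 2) / 4.0)
psi /= np.linalg.norm(psi)

# Schmidt decomposition = singular value decomposition of the amplitude matrix.
u, s, vh = np.linalg.svd(psi)
schmidt_coeffs = s / np.linalg.norm(s)               # sqrt of Schmidt probabilities
schmidt_number = 1.0 / np.sum(schmidt_coeffs ** 4)   # effective number of modes

print("Schmidt number K =", schmidt_number)
```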
|
We design and demonstrate a novel technique for the active stabilization of
the relative phase between seed and pump in an optical parametric oscillator
(OPO). We show that two error signals for the stabilization of the OPO
frequency, based on Pound-Drever-Hall (PDH), and of the seed-pump relative
phase can be obtained just from the reflected beam of the OPO cavity, without
the necessity of two different modulation and demodulation stages. We also
analyze the effect of the pump on the cavity stabilization for different
seed-pump relative phase configurations, resulting in an offset in the PDH
error signal, which has to be compensated. Finally, an application of our
technique in the reliable generation of squeezed coherent states is presented.
|
Glucose biosensors play an important role in the diagnosis and continued
monitoring of the disease, diabetes mellitus. This report proposes the
development of a novel enzymatic electrochemical glucose biosensor based on
TiO$_2$ nanotubes modified by AgO and Prussian blue (PB) nanoparticles (NPs),
which has an additional advantage of possessing antimicrobial properties for
implantable biosensor applications. In this study, we developed two high
performance glucose biosensors based on the immobilization of glucose oxidase
(GOx) onto Prussian blue (PB) modified TiO$_2$ nanotube arrays functionalized
by Au and AgO NPs. AgO-deposited TiO$_2$ nanotubes were synthesized through an
electrochemical anodization process followed by Ag electroplating process in
the same electrolyte. Deposition of PB particles was performed from an acidic
ferricyanide solution. The surface morphology and elemental composition of the
two fabricated biosensors were investigated by scanning electron microscopy
(SEM) and energy-dispersive X-ray spectroscopy (EDS) which indicate the
successful deposition of Au and AgO nanoparticles as well as PB nanocrystals.
Cyclic voltammetry and chronoamperometry were used to investigate the
performance of the modified electrochemical biosensors. The results show that
the developed electrochemical biosensors display excellent properties in terms
of electron transmission, low detection limit as well as high stability for the
determination of glucose. Under the optimized conditions, the amperometric
response shows a linear dependence on the glucose concentration, with a detection
limit down to 4.91 $\mu$M and a sensitivity of 185.1 mA M$^{-1}$ cm$^{-2}$ for the Au-modified
biosensor, and a detection limit of 58.7 $\mu$M with a sensitivity of 29.1 mA M$^{-1}$
cm$^{-2}$ for the AgO-modified biosensor.
|
We present a dialogue elicitation study to assess how users envision
conversations with a perfect voice assistant (VA). In an online survey, N=205
participants were prompted with everyday scenarios, and wrote the lines of both
user and VA in dialogues that they imagined as perfect. We analysed the
dialogues with text analytics and qualitative analysis, including number of
words and turns, social aspects of conversation, implied VA capabilities, and
the influence of user personality. The majority envisioned dialogues with a VA
that is interactive and not purely functional; it is smart, proactive, and has
knowledge about the user. Attitudes diverged regarding the assistant's role as
well as its expression of humour and opinions. An exploratory analysis suggested a
relationship with personality for these aspects, but correlations were low
overall. We discuss implications for research and design of future VAs,
underlining the vision of enabling conversational UIs, rather than single
command "Q&As".
|
We establish some limit theorems for quasi-arithmetic means of random
variables. This class of means contains the arithmetic, geometric and harmonic
means. A key feature is that the generators of the quasi-arithmetic means are allowed
to be complex-valued, which makes it possible to consider quasi-arithmetic means of
random variables that can take negative values. Our motivation for
the limit theorems is finding simple estimators of the parameters of the Cauchy
distribution. By applying the limit theorems, we obtain closed-form,
unbiased, strongly consistent estimators for the joint estimation of the location and scale
parameters of the Cauchy distribution, which are easy to compute and analyze.
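As a hedged illustration of the idea, the Python sketch below computes a quasi-arithmetic mean with one particular complex-valued generator, f(x) = 1/(x - i), for which the mean converges to mu - i*gamma for Cauchy data; the estimators derived in the paper, and their exact unbiasedness properties, may differ from this simple consistent variant.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, gamma = 2.0, 0.5
# Cauchy(mu, gamma) samples via the inverse-CDF (tangent) transform.
x = mu + gamma * np.tan(np.pi * (rng.random(100_000) - 0.5))

# Quasi-arithmetic mean M = f^{-1}( mean(f(X)) ) with the complex generator
# f(x) = 1/(x - i); for Cauchy data M converges almost surely to mu - i*gamma.
f = lambda t: 1.0 / (t - 1j)
f_inv = lambda y: 1.0 / y + 1j
m = f_inv(np.mean(f(x)))

mu_hat, gamma_hat = m.real, -m.imag
print(mu_hat, gamma_hat)   # close to (2.0, 0.5)
```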
|
Quantum spin-1/2 antiferromagnetic Heisenberg trimerized chain with strong
intradimer and weak monomer-dimer coupling constants is studied using a novel
many-body perturbation expansion, which is developed from the exactly solved
spin-1/2 Ising-Heisenberg diamond chain preserving correlations between all
interacting spins of the trimerized chain unlike the standard perturbation
scheme developed on the grounds of noninteracting spin monomers and dimers. The
Heisenberg trimerized chain shows the intermediate one-third plateau, which was
also observed in the magnetization curve of the polymeric compound
Cu$_3$(P$_2$O$_6$OH)$_2$ affording its experimental realization. Within the
modified strong-coupling method we have obtained the effective Hamiltonians for
the magnetic-field range from zero to one-third plateau, and from one-third
plateau to the saturation magnetization. The second-order perturbation theory
shows reliable results even for the moderate ratio between weaker dimer-monomer
and stronger intradimer coupling constants. We have also examined thermodynamic
properties and recovered the low-temperature peak in the specific heat. The
accuracy of the developed method is tested through a comparison with numerical
density-matrix renormalization group and quantum Monte Carlo simulations. Using
the results for the effective Hamiltonian, we suggest a straightforward procedure
for finding the microscopic parameters of one-dimensional trimerized magnetic
compounds with strong intradimer and weak monomer-dimer couplings. We found the
refined values for the coupling constants of Cu$_3$(P$_2$O$_6$OH)$_2$ by
matching the theoretical results with the available experimental data for the
magnetization and magnetic susceptibility in a wide range of temperatures and
magnetic fields.
|
In circumstellar disks, the size of dust particles varies from submicron to
several centimeters, while planetesimals have sizes of hundreds of kilometers.
Therefore, various regimes for the aerodynamic drag between solid bodies and
gas can be realized in these disks, depending on the grain sizes and
velocities: Epstein, Stokes, and Newton, as well as transitional regimes
between them. For small bodies moving in the Epstein regime, the time required
to establish the constant relative velocity between the gas and bodies can be
much less than the dynamical time scale for the problem - the time for the
rotation of the disk about the central body. In addition, the dust may be
concentrated in individual regions of the disk, making it necessary to take
into account the transfer of momentum between the dust and gas. It is shown
that, for a system of equations for gas and monodisperse dust, a semi-implicit
first-order approximation scheme in time in which the interphase interaction is
calculated implicitly, while other forces, such as the pressure gradient and
gravity, are calculated explicitly, is suitable for stiff problems with intense
interphase interactions and for computations of the drag in non-linear regimes.
The piece-wise drag coefficient widely used in astrophysical simulations has a
discontinuity at some values of the Mach and Knudsen numbers that are realized
in a circumstellar disk. A continuous drag coefficient is presented, which
corresponds to experimental dependences obtained for various drag regimes.
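A minimal single-cell sketch of the semi-implicit idea described above (variable names and the two-phase closed-form solve are illustrative, not the authors' production scheme): the interphase drag is evaluated at the new time level, while the explicit accelerations stand in for the pressure gradient and gravity.

```python
def semi_implicit_drag_step(u, v, rho_g, rho_d, t_stop, f_g, f_d, dt):
    """One first-order time step for gas velocity u and dust velocity v.

    The drag (interphase momentum exchange) is treated implicitly; all other
    accelerations f_g, f_d (e.g. pressure gradient, gravity) are explicit.
    """
    eps = rho_d / rho_g          # dust-to-gas mass ratio
    a = dt / t_stop              # stiffness parameter of the drag term
    # Predictor with explicit forces only.
    u_star = u + dt * f_g
    v_star = v + dt * f_d
    # Implicit solve for the relative velocity (closed form for two phases).
    dv_new = (u_star - v_star) / (1.0 + a * (1.0 + eps))
    v_new = v_star + a * dv_new
    u_new = u_star - eps * a * dv_new
    return u_new, v_new

# Stiff test: stopping time much shorter than the time step.
u, v = 1.0, 0.0
for _ in range(10):
    u, v = semi_implicit_drag_step(u, v, rho_g=1.0, rho_d=0.01,
                                   t_stop=1e-6, f_g=0.0, f_d=0.0, dt=1e-3)
print(u, v)   # both velocities relax to the common barycentric value
```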
|
Hydrodynamic interactions are crucial for determining the cooperative
behavior of microswimmers at low Reynolds numbers. Here we provide a
comprehensive analysis of the scaling and strength of the interactions in the
case of a pair of three-sphere swimmers with intrinsic elasticity. Both
stroke-based and force-based microswimmers are analyzed using an analytic
perturbative approach. Following a detailed analysis of the passive
interactions, as well as active translations and rotations, we find that the
mapping between the stroke-based and force-based swimmers is only possible in a
low driving frequency regime where the characteristic time scale is smaller
than the viscous one. Furthermore, we find that for swimmers separated by up to
hundreds of swimmer lengths, swimming in pairs speeds up the self-propulsion,
due to the dominant quadrupolar hydrodynamic interactions. Finally, we find
that the long term behavior of the swimmers, while sensitive to initial
relative positioning, does not depend on the pusher or puller nature of the
swimmer.
|
Oftentimes, patterns can be represented through different modalities. For
example, leaf data can be in the form of images or contours. Handwritten
characters can also be either online or offline. To exploit this fact, we
propose the use of self-augmentation and combine it with multi-modal feature
embedding. In order to take advantage of the complementary information from the
different modalities, the self-augmented multi-modal feature embedding employs
a shared feature space. Through experimental results on classification with
online handwriting and leaf images, we demonstrate that the proposed method can
create effective embeddings.
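A hedged PyTorch sketch of the shared-feature-space idea (architecture, dimensions, and the paired contrastive loss are illustrative assumptions, not the paper's exact model): two modality-specific encoders map, for example, leaf images and contours into one embedding space in which paired samples are pulled together.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSpaceEmbedder(nn.Module):
    """Illustrative two-branch encoder mapping both modalities into one space."""
    def __init__(self, dim_img, dim_contour, dim_shared=128):
        super().__init__()
        self.enc_img = nn.Sequential(nn.Linear(dim_img, 256), nn.ReLU(),
                                     nn.Linear(256, dim_shared))
        self.enc_contour = nn.Sequential(nn.Linear(dim_contour, 256), nn.ReLU(),
                                         nn.Linear(256, dim_shared))

    def forward(self, x_img, x_contour):
        z_img = F.normalize(self.enc_img(x_img), dim=-1)
        z_con = F.normalize(self.enc_contour(x_contour), dim=-1)
        return z_img, z_con

def paired_contrastive_loss(z_a, z_b, temperature=0.1):
    # Paired samples (same pattern in two modalities) should embed close together.
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0))
    return F.cross_entropy(logits, targets)
```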
|
Previous work has shown that early resolution of issues detected by static
code analyzers can prevent major costs later on. However, developers often
ignore such issues for two main reasons. First, many issues must be
interpreted to determine whether they correspond to actual flaws in the program.
Second, static analyzers often do not present the issues in a way that makes it
apparent how to fix them. To address these problems, we present Sorald: a novel
system that adopts a set of predefined metaprogramming templates to transform
the abstract syntax trees of programs to suggest fixes for static issues. Thus,
the burden on the developer is reduced from both interpreting and fixing static
issues, to inspecting and approving solutions for them. Sorald fixes violations
of 10 rules from SonarQube, one of the most widely used static analyzers for
Java. We also implement an effective mechanism to integrate Sorald into
development workflows based on pull requests. We evaluate Sorald on a dataset
of 161 popular repositories on GitHub. Our analysis shows the effectiveness of
Sorald as it fixes 94\% (1,153/1,223) of the violations that it attempts to
fix. Overall, our experiments show it is possible to automatically fix
violations of static analysis rules produced by the state-of-the-art static
analyzer SonarQube.
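Sorald itself operates on Java programs, applying metaprogramming templates over abstract syntax trees to repair SonarQube rule violations; purely as an illustration of that template-based AST-rewriting idea, here is a hypothetical Python analogue that rewrites a common lint violation (it is not Sorald's implementation).

```python
import ast  # requires Python 3.9+ for ast.unparse

class FixNoneComparison(ast.NodeTransformer):
    """Illustrative repair template: rewrite `x == None` into `x is None`."""
    def visit_Compare(self, node):
        self.generic_visit(node)
        node.ops = [ast.Is() if (isinstance(op, ast.Eq)
                                 and isinstance(cmp, ast.Constant)
                                 and cmp.value is None) else op
                    for op, cmp in zip(node.ops, node.comparators)]
        return node

source = "if result == None:\n    print('empty')\n"
tree = FixNoneComparison().visit(ast.parse(source))
print(ast.unparse(ast.fix_missing_locations(tree)))  # -> if result is None: ...
```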
|
In this paper we develop an optimisation based approach to multivariate
Chebyshev approximation on a finite grid. We consider two models: multivariate
polynomial approximation and multivariate generalised rational approximation.
In the second case the approximations are ratios of linear forms and the basis
functions are not limited to monomials. It is already known that in the case of
multivariate polynomial approximation on a finite grid the corresponding
optimisation problems can be reduced to solving a linear programming problem,
while the area of multivariate rational approximation is not so well
understood. In this paper, we demonstrate that in the case of multivariate
generalised rational approximation the corresponding optimisation problems are
quasiconvex. This statement remains true even when the basis functions are not
limited to monomials. Then we apply a bisection method, which is a general
method for quasiconvex optimisation. This method converges to an optimal
solution with given precision. We demonstrate that the convex feasibility
problems appearing in the bisection method can be solved using linear
programming. Finally, we compare the deviation error and computational time for
multivariate polynomial and generalised rational approximation with the same
number of decision variables.
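A minimal Python sketch of the bisection scheme described above, under the common assumptions that the denominator is normalized to be at least 1 on the grid and that each feasibility check is solved as a linear program; `F` holds the function values on the grid, and `Phi`, `Psi` are hypothetical numerator and denominator basis matrices.

```python
import numpy as np
from scipy.optimize import linprog

def feasible(t, F, Phi, Psi):
    """LP feasibility check: does a ratio p/q exist with q >= 1 and max error <= t?"""
    m, n_num = Phi.shape
    n_den = Psi.shape[1]
    A, b_ub = [], []
    for i in range(m):
        # |F_i * q_i - p_i| <= t * q_i, written as two linear constraints,
        # plus the normalization q_i >= 1 on every grid point.
        A.append(np.concatenate([-Phi[i], (F[i] - t) * Psi[i]]));  b_ub.append(0.0)
        A.append(np.concatenate([ Phi[i], -(F[i] + t) * Psi[i]])); b_ub.append(0.0)
        A.append(np.concatenate([np.zeros(n_num), -Psi[i]]));      b_ub.append(-1.0)
    res = linprog(c=np.zeros(n_num + n_den), A_ub=np.array(A), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * (n_num + n_den))
    return res.status == 0

def bisection(F, Phi, Psi, t_hi, tol=1e-6):
    """Bisect on the error level t; each step is a convex (linear) feasibility problem."""
    t_lo = 0.0
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        t_lo, t_hi = (t_lo, t_mid) if feasible(t_mid, F, Phi, Psi) else (t_mid, t_hi)
    return t_hi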
|
In this study, we investigate the attentiveness exhibited by participants
sourced through Amazon Mechanical Turk (MTurk), thereby discovering a
significant level of inattentiveness amongst the platform's top crowd workers
(those classified as 'Master', with an 'Approval Rate' of 98% or more, and a
'Number of HITS approved' value of 1,000 or more). A total of 564 individuals
from the United States participated in our experiment. They were asked to read
a vignette outlining one of four hypothetical technology products and then
complete a related survey. Three forms of attention check (logic, honesty, and
time) were used to assess attentiveness. Through this experiment we determined
that a total of 126 (22.3%) participants failed at least one of the three forms
of attention check, with most (94) failing the honesty check - followed by the
logic check (31), and the time check (27). Thus, we established that
significant levels of inattentiveness exist even among the most elite MTurk
workers. The study concludes by reaffirming the need for multiple forms of
carefully crafted attention checks, irrespective of whether participant quality
is presumed to be high according to MTurk criteria such as 'Master', 'Approval
Rate', and 'Number of HITS approved'. Furthermore, we propose that researchers
adjust their proposals to account for the effort and costs required to address
participant inattentiveness.
|
A new model of anisotropic compact star is obtained in our present paper by
assuming the pressure anisotropy. The proposed model is singularity free. The
model is obtained by considering a physically reasonable choice for the metric
potential $g_{rr}$ which depends on a dimensionless parameter `n'. The effect
of $n$ is discussed numerically, analytically and through plotting. We have
considered a wide range of n ($10\leq n \leq 1000$) when drawing the profiles
of different physical parameters. The maximum allowable mass for different
values of $n$ has been obtained from the M-R plot. We have checked that the
stability of the model increases for larger values of $n$. For the viability
of the model we have considered two compact stars PSR J1614-2230 and EXO
1785-248. We have shown that the expressions for the anisotropy factor and the
metric component may serve as generating functions for uncharged stellar models
in the context of the general theory of relativity.
|
A unified expression for topological invariants has been proposed recently to
describe the topological order in Dirac models belonging to any dimension and
symmetry class. We uncover a correspondence between the curvature function that
integrates to this unified topological invariant and the quantum metric that
measures the distance between properly defined many-body Bloch states in
momentum space. Based on this metric-curvature correspondence, a time-resolved
and angle-resolved photoemission spectroscopy experiment is proposed to measure
the violation of spectral sum rule caused by a pulse electric field to detect
the quantum metric, from which the topological properties of the system may be
extracted.
|
Here we propose a local earthquake tomography method that applies a
structured regularization technique to determine sharp changes in the Earth's
seismic velocity structure with travel time data of direct waves. Our approach
focuses on the ability to better image two common features that are observed
in the Earth's seismic velocity structure: velocity jumps that correspond to
material boundaries, such as the Conrad and Moho discontinuities, and gradual
velocity changes that are associated with the pressure and temperature
distributions in the crust and mantle. We employ different penalty terms in the
vertical and horizontal directions to refine the imaging process. We utilize a
vertical-direction (depth) penalty term that takes the form of the l1-sum of
the l2-norm of the second-order differences of the horizontal units in the
vertical direction. This penalty is intended to represent sharp velocity jumps
due to discontinuities by creating a piecewise linear depth profile of the
average velocity structure. We set a horizontal-direction penalty term on the
basis of the l2-norm to express gradual velocity tendencies in the horizontal
direction. We use a synthetic dataset to demonstrate that our method provides
significant improvements over the estimated velocity structures from
conventional methods by obtaining stable estimates of both the velocity jumps
and gradual velocity changes. We also demonstrate that our proposed method is
relatively robust against variations in the amplitude of the velocity jump,
initial velocity model, and the number of observed travel times. Furthermore,
we demonstrate the considerable potential of detecting a velocity discontinuity
using the observed travel times from only a small number of direct-wave
observations.
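A hedged NumPy sketch of how the two penalty terms described above could be assembled for a gridded velocity model (whether the horizontal term uses squared first differences, and the exact weighting, are assumptions on our part):

```python
import numpy as np

def structured_penalty(m, lam_v, lam_h):
    """m: velocity model of shape (n_depth, n_horizontal).

    Vertical term: l1-sum over depths of the l2-norm (over horizontal units) of
    second-order differences taken in the vertical (depth) direction, favouring a
    piecewise-linear average depth profile while still allowing sharp jumps.
    Horizontal term: squared l2-norm of first differences in the horizontal
    direction, favouring gradual lateral variations.
    """
    d2_vert = m[2:, :] - 2.0 * m[1:-1, :] + m[:-2, :]    # shape (n_depth - 2, n_h)
    vertical = np.sum(np.linalg.norm(d2_vert, axis=1))   # group-sparse over depth
    d1_horiz = m[:, 1:] - m[:, :-1]
    horizontal = np.sum(d1_horiz ** 2)
    return lam_v * vertical + lam_h * horizontal
```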
|
Arbitrary-oriented objects exist widely in natural scenes, and thus
oriented object detection has received extensive attention in recent years. The
mainstream rotation detectors use oriented bounding boxes (OBB) or
quadrilateral bounding boxes (QBB) to represent the rotating objects. However,
these methods suffer from the representation ambiguity for oriented object
definition, which leads to suboptimal regression optimization and the
inconsistency between the loss metric and the localization accuracy of the
predictions. In this paper, we propose a Representation Invariance Loss (RIL)
to optimize the bounding box regression for the rotating objects. Specifically,
RIL treats multiple representations of an oriented object as multiple
equivalent local minima, and hence transforms bounding box regression into an
adaptive matching process with these local minima. Then, the Hungarian matching
algorithm is adopted to obtain the optimal regression strategy. We also propose
a normalized rotation loss to alleviate the weak correlation between different
variables and their unbalanced loss contribution in OBB representation.
Extensive experiments on remote sensing datasets and scene text datasets show
that our method achieves consistent and substantial improvement. The source
code and trained models are available at https://github.com/ming71/RIDet.
|
The production cross section of a top quark pair in association with a photon
is measured in proton-proton collisions at a center-of-mass energy of 13 TeV.
The data set, corresponding to an integrated luminosity of 137 fb$^{-1}$, was
recorded by the CMS experiment during the 2016-2018 data taking of the LHC. The
measurements are performed in a fiducial volume defined at the particle level.
Events with an isolated, highly energetic lepton, at least three jets from the
hadronization of quarks, among which at least one is b tagged, and one isolated
photon are selected. The inclusive fiducial $\mathrm{t\overline{t}}\gamma$
cross section, for a photon with transverse momentum greater than 20 GeV and
pseudorapidity $\lvert\eta\rvert < 1.4442$, is measured to be 798 $\pm$ 7
(stat) $\pm$ 48 (syst) fb, in good agreement with the prediction from the
standard model at next-to-leading order in quantum chromodynamics. The
differential cross sections are also measured as a function of several
kinematic observables and interpreted in the framework of the standard model
effective field theory (EFT), leading to the most stringent direct limits to
date on anomalous electromagnetic dipole moment interactions of the top quark
and the photon.
|
We introduce Platform for Situated Intelligence, an open-source framework
created to support the rapid development and study of multimodal,
integrative-AI systems. The framework provides infrastructure for sensing,
fusing, and making inferences from temporal streams of data across different
modalities, a set of tools that enable visualization and debugging, and an
ecosystem of components that encapsulate a variety of perception and processing
technologies. These assets jointly provide the means for rapidly constructing
and refining multimodal, integrative-AI systems, while retaining the efficiency
and performance characteristics required for deployment in open-world settings.
|
In OTC markets, one of the main tasks of dealers / market makers consists in
providing prices at which they agree to buy and sell the assets and securities
they have in their scope. With ever increasing trading volume, this quoting
task has to be done algorithmically. Over the last ten years, many market
making models have been designed that can be the basis of quoting algorithms in
OTC markets. Nevertheless, in most (if not all) OTC market making models, the
market maker is a pure internalizer, setting quotes and waiting for clients.
However, on many markets such as foreign exchange cash markets, market makers
have access to liquidity pools where they can hedge part of their inventory. In
this paper, we propose a model taking this possibility into account, therefore
allowing market makers to externalize part of their risk by trading in a
liquidity pool. The model displays an important feature well known to
practitioners: within a certain inventory range the market maker
internalizes the flow by appropriately adjusting the quotes, and externalizes
outside of that range. The larger the market making franchise, the wider the
inventory range suitable for internalization. The model is illustrated
numerically with realistic parameters for the USDCNH spot market.
|
An accurate experimental characterization of finite antiferromagnetic (AF)
spin chains is crucial for controlling and manipulating their magnetic
properties and quantum states for potential applications in spintronics or
quantum computation. In particular, finite AF chains are expected to show a
different magnetic behaviour depending on their length and topology. Molecular
AF rings are able to combine the quantum-magnetic behaviour of AF chains with a
very remarkable tunability of their topological and geometrical properties. In
this work we measure the $^{53}$Cr-NMR spectra of the Cr$_{8}$Cd ring to study
the local spin densities on the Cr sites. Cr$_{8}$Cd can in fact be considered
a model system of a finite AF open chain with an even number of spins. The NMR
resonant frequencies are in good agreement with the theoretical local spin
densities, assuming a core polarization field A$_C$ = -12.7 T/$\mu_B$. Moreover,
these NMR results confirm the theoretically predicted non-collinear spin
arrangement along the Cr$_{8}$Cd ring, which is typical of an even-open AF spin
chain.
|
We establish semiclassical asymptotics and estimates for the Schwartz kernel
$e_h(x,y;\tau)$ of spectral projector for a second order elliptic operator on
the manifold with a boundary. While such asymptotics for its restriction to the
diagonal $e_h(x,x,\tau)$ and, especially, for its trace $\mathsf{N}_h(\tau)=
\int e_h(x,x,\tau)\,dx$ are well-known, the out-of-diagonal asymptotics are
much less explored.
Our main tools are microlocal methods, improved successive approximations, and
geometric optics methods.
Our results would also lead to classical asymptotics of $e_h(x,y,\tau)$ for
fixed $h$ (say, $h=1$) and $\tau\to \infty$.
|
We study the thermal transport properties of three CaF$_{2}$ polymorphs up to
a pressure of 30 GPa using first-principles calculations and an interatomic
potential based on machine learning. The lattice thermal conductivity $\kappa$
is computed by iteratively solving the linearized Boltzmann transport equation
(BTE) and by taking into account three-phonon scattering. Overall, $\kappa$
increases nearly linearly with pressure, and we show that the recently
discovered $\delta$-phase with $P\bar{6}2m$ symmetry and the previously known
$\gamma$-CaF$_{2}$ high-pressure phase have significantly lower lattice thermal
conductivities than the ambient-thermodynamic cubic fluorite ($Fm\bar{3}m$)
structure. We argue that the lower $\kappa$ of these two high-pressure phases
stems mainly from a lower contribution of acoustic modes to $\kappa$ as a
result of their small group velocities. We further show that the phonon mean
free paths are very short for the $P\bar{6}2m$ and $Pnma$ structures at high
temperatures, and resort to the Cahill-Pohl model to assess the lower limit of
thermal conductivity in these domains.
|
This paper treats a distributed optimal control problem for a tumor growth
model of Cahn-Hilliard type including chemotaxis. The evolution of the tumor
fraction is governed by a variational inequality corresponding to a double
obstacle nonlinearity occurring in the associated potential. In addition, the
control and state variables are nonlinearly coupled and, furthermore, the cost
functional contains a nondifferentiable term like the $L^1$-norm in order to
include sparsity effects, which are of utmost relevance, especially time
sparsity, in the context of cancer therapies, since applying a control to the
system corresponds to exposing the patient to an intensive medical treatment. To
cope with the difficulties originating from the variational inequality in the
state system, we employ the so-called "deep quench approximation" in which the
convex part of the double obstacle potential is approximated by logarithmic
functions. For such functions, first-order necessary conditions of optimality
can be established by invoking recent results. We use these results to derive
corresponding optimality conditions also for the double obstacle case, by
deducing a variational inequality in terms of the associated adjoint state
variables. The resulting variational inequality can be exploited to also obtain
sparsity results for the optimal controls.
|
A new model has been developed to estimate the dielectric constant of the
lunar surface using Synthetic Aperture Radar (SAR) data. Continuous
investigation of the dielectric constant of the lunar surface is a
high-priority task due to future lunar missions' goals and the possible
establishment of human outposts. For this purpose, derived anisotropy and backscattering
coefficients of SAR images are used. The SAR images are obtained from Miniature
Radio Frequency (MiniRF) radar onboard Lunar Reconnaissance Orbiter (LRO).
These images are available in the form of Stokes parameters, which are used to
derive the coherency matrix. The derived coherency matrix is further
represented in terms of particle anisotropy. The elements of this coherency matrix
are compared with Cloude's coherency matrix, which results in a new relationship
between particle anisotropy and coherency matrix elements (backscattering
coefficients). Following this, estimated anisotropy is used to determine the
dielectric constant. Our model estimates the dielectric constant of the lunar
surface without parallax error. The produced results are also comparable with
earlier estimates. Advantageously, our method estimates the dielectric
constant without any a priori information about the density or composition of
lunar surface materials. The proposed approach can also be useful for
determining the dielectric properties of Mars and other celestial bodies.
|
A multimode optical receiver for free space optical communications (FSOC)
based on a photonic lantern and adaptive optics coherent beam combining (CBC)
of the lantern's single-mode outputs is proposed and demonstrated for the first
time. The use of optical coherent combining in fiber serves to increase the
signal to noise ratio compared to similar receivers based on electrically
combined signals, and represents an all-fiber approach to low-order adaptive
optics. This optical receiver is demonstrated using a photonic lantern with
three outputs, fibre couplers and active phase locking, and further
investigated under atmospheric conditions with and without turbulence.
|
We exhibit a closed aspherical 5-manifold of nonpositive curvature that
fibers over a circle whose fundamental group is hyperbolic relative to abelian
subgroups such that the fiber is a closed aspherical 4-manifold whose
fundamental group is not hyperbolic relative to abelian subgroups.
|
We consider 2d QFTs as relevant deformations of CFTs in the thermodynamic
limit. Using causality and KPZ universality, we place a lower bound on the
timescale characterizing the onset of hydrodynamics. The bound is determined
parametrically in terms of the temperature and the scale associated with the
relevant deformation. This bound is typically much stronger than $\frac{1}{T}$,
the expected quantum equilibration time. Subluminality of sound further allows
us to define a thermodynamic $C$-function, and constrain the sign of the
$\mathcal T\bar{\mathcal T}$ term in EFTs.
|
Robots are widespread across diverse application contexts. Teaching robots to
perform tasks in their respective contexts demands high domain and
programming expertise. However, robot programming faces high entry barriers due
to the complexity of robot programming itself. Even for experts, robot
programming is a cumbersome and error-prone task in which faulty robot programs
can be created, causing damage when executed on a real robot. To simplify
the process of robot programming, we combine Augmented Reality (AR) with
principles of end-user development. By combining them, the real environment is
extended with useful virtual artifacts that can enable experts as well as
non-professionals to perform complex robot programming tasks. Therefore, Simple
Programming Environment in Augmented Reality with Enhanced Debugging (SPEARED)
was developed as a prototype for an AR-assisted robot programming environment.
SPEARED makes use of AR to project a robot as well as a programming environment
onto the target working space. To evaluate our approach, expert interviews with
domain experts from the area of industrial automation, robotics, and AR were
performed. The experts agreed that SPEARED has the potential to enrich and ease
current robot programming processes.
|
Using a data sample of 980~fb$^{-1}$ collected with the Belle detector
operating at the KEKB asymmetric-energy $e^+e^-$ collider, we present evidence
for the $\Omega(2012)^-$ in the resonant substructure of $\Omega_{c}^{0} \to
\pi^+ (\bar{K}\Xi)^{-}$ ($(\bar{K}\Xi)^{-}$ = $K^-\Xi^0$ + $\bar{K}^0 \Xi^-$)
decays. The significance of the $\Omega(2012)^-$ signal is 4.2$\sigma$ after
considering the systematic uncertainties. The ratio of the branching fraction
of $\Omega_{c}^{0} \to \pi^{+} \Omega(2012)^- \to \pi^+ (\bar{K}\Xi)^{-}$
relative to that of $\Omega_{c}^{0} \to \pi^{+} \Omega^-$ is calculated to be
0.220 $\pm$ 0.059(stat.) $\pm$ 0.035(syst.). The individual ratios of the
branching fractions of the two isospin modes are also determined, and found to
be ${\cal B}(\Omega_{c}^0 \to \pi^+ \Omega(2012)^-) \times {\cal
B}(\Omega(2012)^- \to K^-\Xi^0)/{\cal B}(\Omega_{c}^0 \to \pi^+ K^- \Xi^0)$ =
(9.6 $\pm$ 3.2(stat.) $\pm$ 1.8(syst.))\% and ${\cal B}(\Omega_{c}^0 \to \pi^+
\Omega(2012)^-) \times {\cal B}(\Omega(2012)^- \to \bar{K}^0 \Xi^-)/{\cal
B}(\Omega_{c}^0 \to \pi^+ \bar{K}^0 \Xi^-)$ = (5.5 $\pm$ 2.8(stat.) $\pm$
0.7(syst.))\%.
|
Writers, poets, singers usually do not create their compositions in just one
breath. Text is revisited, adjusted, modified, rephrased, even multiple times,
in order to better convey meanings, emotions and feelings that the author wants
to express. Amongst the noble written arts, poetry is probably the one that
needs to be elaborated the most, since the composition has to formally respect
predefined meter and rhyming schemes. In this paper, we propose a framework to
generate poems that are repeatedly revisited and corrected, as humans do, in
order to improve their overall quality. We frame the problem of revising poems
in the context of Reinforcement Learning and, in particular, using Proximal
Policy Optimization. Our model generates poems from scratch and it learns to
progressively adjust the generated text in order to match a target criterion.
We evaluate this approach in the case of matching a rhyming scheme, without
having any information on which words are responsible for creating rhymes or on
how to coherently alter the poem's words. The proposed framework is general and,
with an appropriate reward shaping, it can be applied to other text generation
problems.
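As a hedged illustration of the kind of reward signal such a framework could use, the toy function below scores how well line-final words satisfy a target rhyming scheme by comparing word endings; the paper's actual reward shaping is not reproduced here.

```python
def ending(word, k=3):
    # Crude proxy for a rhyme: compare the last k letters of the line-final words.
    return word.lower().strip(".,;:!?")[-k:]

def rhyme_reward(poem_lines, scheme="ABAB"):
    """Fraction of rhyme-scheme constraints satisfied by the generated poem."""
    last_words = [line.split()[-1] for line in poem_lines if line.split()]
    groups = {}
    for label, word in zip(scheme, last_words):
        groups.setdefault(label, []).append(ending(word))
    checks = [e == grp[0] for grp in groups.values() for e in grp[1:]]
    return sum(checks) / max(len(checks), 1)

print(rhyme_reward(["the night is long", "a silver stream",
                    "we sing a song", "and drift to dream"]))   # 1.0
```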
|
A mesoscale model with molecular resolutions is presented for the
dipalmitoyl-phosphatidylcholine (DPPC) and
1-palmitoyl-2-oleyl-sn-glycero-3-phosphocholine (POPC) monolayer simulations at
the air-water interface using many-body dissipative particle dynamics (MDPD).
The parameterization scheme is rigorously based on reproducing the physical
properties of water and alkane and the interfacial property of the phospholipid
monolayer by comparing with our experimental results. The MDPD model yields a
similar surface pressure-area isotherm as well as the similar pressure-related
morphologies compared with the all-atomistic simulations and experiments.
Moreover, the compressibility modulus, order parameter of lipid tails, and
thickness of the MDPD phospholipid monolayer are quantitatively in line with
the all-atomistic simulations and experiments. This model can also capture the
sensitive changes in the pressure-area isotherms of the mixed DPPC/POPC
monolayers with altered mixed ratios by comparing with the experiments,
indicating that our model scheme is promising in applications for complex
natural phospholipid monolayers. These results demonstrate a significant
improvement on quantitative phospholipid monolayer simulations over the
previous coarse-grained models.
|
Cost functions have the potential to provide compact and understandable
generalizations of motion. The goal of Inverse Optimal Control (IOC) is to
analyze an observed behavior which is assumed to be optimal with respect to an
unknown cost function, and infer this cost function. Here we develop a method
for characterizing cost functions of legged locomotion, with the goal of
representing complex humanoid behavior with simple models. To test this
methodology we simulate walking gaits of a simple 5 link planar walking model
which optimize known cost functions, and assess the ability of our IOC method
to recover them. In particular, the IOC method uses an iterative trajectory
optimization process to infer cost function weightings consistent with those
used to generate a single demonstrated optimal trial. We also explore
sensitivity of the IOC to sensor noise in the observed trajectory, imperfect
knowledge of the model or task, as well as uncertainty in the components of the
cost function used. With appropriate modeling, these methods may help infer
cost functions from human data, yielding a compact and generalizable
representation of human-like motion for use in humanoid robot controllers, as
well as providing a new tool for experimentally exploring human preferences.
|
A moplex is a natural graph structure that arises when lifting Dirac's
classical theorem from chordal graphs to general graphs. However, while every
non-complete graph has at least two moplexes, little is known about structural
properties of graphs with a bounded number of moplexes. The study of these
graphs is motivated by the parallel between moplexes in general graphs and
simplicial modules in chordal graphs: Unlike in the moplex setting, properties
of chordal graphs with a bounded number of simplicial modules are well
understood. For instance, chordal graphs having at most two simplicial modules
are interval. In this work we initiate an investigation of $k$-moplex graphs,
which are defined as graphs containing at most $k$ moplexes. Of particular
interest is the smallest nontrivial case $k=2$, which forms a counterpart to
the class of interval graphs. As our main structural result, we show that the
class of connected $2$-moplex graphs is sandwiched between the classes of
proper interval graphs and cocomparability graphs; moreover, both inclusions
are tight for hereditary classes. From a complexity theoretic viewpoint, this
leads to the natural question of whether the presence of at most two moplexes
guarantees a sufficient amount of structure to efficiently solve problems that
are known to be intractable on cocomparability graphs, but not on proper
interval graphs. We develop new reductions that answer this question negatively
for two prominent problems fitting this profile, namely Graph Isomorphism and
Max-Cut. On the other hand, we prove that every connected $2$-moplex graph
contains a Hamiltonian path, generalising the same property of connected proper
interval graphs. Furthermore, for graphs with a higher number of moplexes, we
lift the previously known result that graphs without asteroidal triples have at
most two moplexes to the more general setting of larger asteroidal sets.
|
Modern sentence encoders are used to generate dense vector representations
that capture the underlying linguistic characteristics for a sequence of words,
including phrases, sentences, or paragraphs. These kinds of representations are
ideal for training a classifier for an end task such as sentiment analysis,
question answering and text classification. Different models have been proposed
to efficiently generate general purpose sentence representations to be used in
pretraining protocols. While averaging is the most commonly used efficient
sentence encoder, Discrete Cosine Transform (DCT) was recently proposed as an
alternative that captures the underlying syntactic characteristics of a given
text without compromising practical efficiency compared to averaging. However,
as with most other sentence encoders, the DCT sentence encoder was only
evaluated in English. To this end, we utilize the DCT encoder to generate universal
sentence representation for different languages such as German, French, Spanish
and Russian. The experimental results clearly show the superior effectiveness
of DCT encoding in which consistent performance improvements are achieved over
strong baselines on multiple standardized datasets.
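A minimal sketch of the DCT sentence-encoding idea on top of precomputed word vectors (the number of retained coefficients and the random vectors are placeholders): a type-II DCT is applied along the word axis and the first few coefficient vectors are concatenated into a fixed-size representation.

```python
import numpy as np
from scipy.fft import dct

def dct_sentence_embedding(word_vectors, k=4):
    """word_vectors: array of shape (n_words, dim) of precomputed embeddings."""
    coeffs = dct(np.asarray(word_vectors), type=2, norm="ortho", axis=0)
    kept = coeffs[:k]
    if kept.shape[0] < k:  # pad short sentences so the output size stays fixed
        kept = np.vstack([kept, np.zeros((k - kept.shape[0], coeffs.shape[1]))])
    return kept.reshape(-1)

sentence = np.random.default_rng(0).normal(size=(7, 300))  # 7 words, 300-dim vectors
print(dct_sentence_embedding(sentence).shape)              # (1200,)
```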
|
Collapsing process is studied in special type of inhomogeneous spherically
symmetric space-time model (known as IFRW model), having no time-like Killing
vector field. The matter field for collapse dynamics is considered to be
perfect fluid with anisotropic pressure. The main issue of the present
investigation is to examine whether the end state of the collapse to be a naked
singularity or a black hole. Finally, null geodesics are studied near the
singularity.
|
Based on a view of current flavour physics and motivated by the hierarchy
problem and by the pattern of quark masses and mixings, I describe a picture of
flavour physics that should give rise in a not too distant future to observable
deviations from the SM in Higgs compositeness and/or in B-decays with
violations of Lepton Flavour Universality, as hinted by current data, or
perhaps even in supersymmetry, depending on the specific realisation.
|
In astronomical spectroscopy, optical fibres are abundantly used for
multiplexing and decoupling the spectrograph from the telescope to provide
stability in a controlled environment. However, fibres are less than perfect
optical components and introduce complex effects that diminish the overall
throughput, efficiency, and stability of the instrument.
We present a novel numerical field propagation model that emulates the
effects of modal noise, scrambling, and focal ratio degradation with a rigorous
treatment of wave optics. We demonstrate that the simulation of the near- and
far-field output of a fibre, injected into a ray-tracing model of the
spectrograph, allows us to assess performance at the detector level.
|
Federated Learning is a promising machine learning paradigm when multiple
parties collaborate to build a high-quality machine learning model.
Nonetheless, these parties are only willing to participate when given enough
incentives, such as a fair reward based on their contributions. Many studies
explored Shapley value based methods to evaluate each party's contribution to
the learned model. However, they commonly assume a semi-trusted server to train
the model and evaluate the data owners' model contributions, which lacks
transparency and may hinder the success of federated learning in practice. In
this work, we propose a blockchain-based federated learning framework and a
protocol to transparently evaluate each participant's contribution. Our
framework protects all parties' privacy in the model building phase and
transparently evaluates contributions based on the model updates. The
experiment with the handwritten digits dataset demonstrates that the proposed
method can effectively evaluate the contributions.
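As a hedged illustration of the contribution-evaluation step, the sketch below estimates Shapley values by Monte Carlo permutation sampling; the utility function, which would score a model aggregated from a subset of parties' updates, is a hypothetical stand-in for the protocol described in the paper.

```python
import random

def shapley_contributions(parties, utility, n_samples=200, seed=0):
    """Monte Carlo estimate of each party's Shapley value.

    `utility(subset)` is a hypothetical function returning the value (e.g.
    validation accuracy) of a model aggregated from that subset's updates.
    """
    rng = random.Random(seed)
    phi = {p: 0.0 for p in parties}
    for _ in range(n_samples):
        order = parties[:]
        rng.shuffle(order)
        prev, coalition = utility(frozenset()), []
        for p in order:
            coalition.append(p)
            cur = utility(frozenset(coalition))
            phi[p] += (cur - prev) / n_samples   # marginal contribution of p
            prev = cur
    return phi

# Toy example: each party's data contributes a fixed amount of accuracy.
gains = {"A": 0.05, "B": 0.10, "C": 0.02}
print(shapley_contributions(list(gains), lambda s: sum(gains[p] for p in s)))
```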
|
This paper investigates the problem of impact-time-control and proposes a
learning-based computational guidance algorithm to solve this problem. The
proposed guidance algorithm is developed based on a general
prediction-correction concept: the exact time-to-go under proportional
navigation guidance with realistic aerodynamic characteristics is estimated by
a deep neural network and a biased command to nullify the impact time error is
developed by utilizing the emerging reinforcement learning techniques. The deep
neural network is augmented into the reinforcement learning block to resolve
the issue of sparse reward that has been observed in typical reinforcement
learning formulation. Extensive numerical simulations are conducted to support
the proposed algorithm.
|
We address the task of converting a floorplan and a set of associated photos
of a residence into a textured 3D mesh model, a task which we call Plan2Scene.
Our system 1) lifts a floorplan image to a 3D mesh model; 2) synthesizes
surface textures based on the input photos; and 3) infers textures for
unobserved surfaces using a graph neural network architecture. To train and
evaluate our system we create indoor surface texture datasets, and augment a
dataset of floorplans and photos from prior work with rectified surface crops
and additional annotations. Our approach handles the challenge of producing
tileable textures for dominant surfaces such as floors, walls, and ceilings
from a sparse set of unaligned photos that only partially cover the residence.
Qualitative and quantitative evaluations show that our system produces
realistic 3D interior models, outperforming baseline approaches on a suite of
texture quality metrics and as measured by a holistic user study.
|
In 1888, Heinrich Schroeter provided a ruler construction for points on cubic
curves based on line involutions. Using Chasles' Theorem and the terminology of
elliptic curves, we give a simple proof of Schroeter's construction. In
addition, we show how to construct tangents and additional points on the curve
using another ruler construction which is also based on line involutions.
|
We prove that there exists a function $f(k)=\mathcal{O}(k^2 \log k)$ such
that for every $C_4$-free graph $G$ and every $k \in \mathbb{N}$, $G$ either
contains $k$ vertex-disjoint holes of length at least $6$, or a set $X$ of at
most $f(k)$ vertices such that $G-X$ has no hole of length at least $6$. This
answers a question of Kim and Kwon [Erd\H{o}s-P\'osa property of chordless
cycles and its applications. JCTB 2020].
|
In this paper we examine the concept of complexity as it applies to
generative art and design. Complexity has many different, discipline specific
definitions, such as complexity in physical systems (entropy), algorithmic
measures of information complexity and the field of "complex systems". We apply
a series of different complexity measures to three different generative art
datasets and look at the correlations between complexity and individual
aesthetic judgement by the artist (in the case of two datasets) or the
physically measured complexity of 3D forms. Our results show that the degree of
correlation is different for each set and measure, indicating that there is no
overall "better" measure. However, specific measures do perform well on
individual datasets, indicating that careful choice can increase the value of
using such measures. We conclude by discussing the value of direct measures in
generative and evolutionary art, reinforcing recent findings from neuroimaging
and psychology which suggest human aesthetic judgement is informed by many
extrinsic factors beyond the measurable properties of the object being judged.
|
There is a growing body of malware samples that evade automated analysis and
detection tools. Malware may measure fingerprints ("artifacts") of the
underlying analysis tool or environment and change their behavior when
artifacts are detected. While analysis tools can mitigate artifacts to reduce
exposure, such concealment is expensive. However, not every sample checks for
every type of artifact; analysis efficiency can be improved by mitigating only
those artifacts most likely to be used by a sample. Using that insight, we
propose MIMOSA, a system that identifies a small set of "covering" tool
configurations that collectively defeat most malware samples with increased
efficiency. MIMOSA identifies a set of tool configurations that maximize
analysis throughput and detection accuracy while minimizing manual effort,
enabling scalable automation to analyze stealthy malware. We evaluate our
approach against a benchmark of 1535 labeled stealthy malware samples. Our
approach increases analysis throughput over state of the art on over 95% of
these samples. We also investigate cost-benefit tradeoffs between the fraction
of successfully-analyzed samples and computing resources required. MIMOSA
provides a practical, tunable method for efficiently deploying analysis
resources.
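The covering idea above can be viewed as a weighted set-cover problem; the following greedy Python sketch (the configuration names and the `defeats` mapping are hypothetical) illustrates how a small set of tool configurations that collectively defeat most samples could be selected.

```python
def greedy_cover(samples, defeats, max_configs=None):
    """Pick tool configurations until (almost) all samples are covered.

    `defeats[cfg]` is the set of malware samples that configuration `cfg`
    successfully analyzes (i.e. whose artifact checks it conceals).
    """
    uncovered = set(samples)
    chosen = []
    while uncovered and (max_configs is None or len(chosen) < max_configs):
        best = max(defeats, key=lambda c: len(defeats[c] & uncovered))
        gained = defeats[best] & uncovered
        if not gained:
            break
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

defeats = {"bare-metal": {1, 2, 3}, "hide-vm-artifacts": {3, 4}, "slow-clock": {5}}
print(greedy_cover(samples=range(1, 6), defeats=defeats))
```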
|
Efficient video processing is a critical component in many IoMT applications
to detect events of interest. Presently, many window optimization techniques
have been proposed in event processing with an underlying assumption that the
incoming stream has a structured data model. Videos are highly complex due to
the lack of any underlying structured data model. Video stream sources such as
CCTV cameras and smartphones are resource-constrained edge nodes. At the same
time, video content extraction is expensive and requires computationally
intensive Deep Neural Network (DNN) models that are primarily deployed at
high-end (or cloud) nodes. This paper presents VID-WIN, an adaptive 2-stage
allied windowing approach to accelerate video event analytics in an edge-cloud
paradigm. VID-WIN runs in parallel across edge and cloud nodes and performs the
query and resource-aware optimization for state-based complex event matching.
VID-WIN exploits the video content and DNN input knobs to accelerate the video
inference process across nodes. The paper proposes a novel content-driven
micro-batch resizing, query-aware caching and micro-batch-based utility
filtering strategy of video frames under resource-constrained edge nodes to
improve the overall system throughput, latency, and network usage. Extensive
evaluations are performed over five real-world datasets. The experimental
results show that VID-WIN video event matching achieves ~2.3X higher throughput
with minimal latency and ~99% bandwidth reduction compared to other baselines
while maintaining query-level accuracy and resource bounds.
|
We propose DeepMetaHandles, a 3D conditional generative model based on mesh
deformation. Given a collection of 3D meshes of a category and their
deformation handles (control points), our method learns a set of meta-handles
for each shape, which are represented as combinations of the given handles. The
disentangled meta-handles factorize all the plausible deformations of the
shape, while each of them corresponds to an intuitive deformation. A new
deformation can then be generated by sampling the coefficients of the
meta-handles in a specific range. We employ biharmonic coordinates as the
deformation function, which can smoothly propagate the control points'
translations to the entire mesh. To avoid learning zero deformation as
meta-handles, we incorporate a target-fitting module which deforms the input
mesh to match a random target. To enhance deformations' plausibility, we employ
a soft-rasterizer-based discriminator that projects the meshes to a 2D space.
Our experiments demonstrate the superiority of the generated deformations as
well as the interpretability and consistency of the learned meta-handles.
|