Thanks to the great progress of machine learning in recent years, several
Artificial Intelligence (AI) techniques have been increasingly moving from
controlled research laboratory settings into our everyday life. AI is clearly
supportive in many decision-making scenarios, but when it comes to sensitive
areas such as health care, hiring policies, education, banking or justice, with
major impact on individuals and society, it becomes crucial to establish
guidelines on how to design, develop, deploy and monitor this technology.
Indeed, the decision rules elaborated by machine learning models are data-driven
and there are multiple ways in which discriminatory biases can seep into data.
Algorithms trained on those data incur the risk of amplifying prejudices and
societal stereotypes by over-associating protected attributes such as gender,
ethnicity or disabilities with the prediction task. Starting from the extensive
experience of the National Metrology Institute on measurement standards and
certification roadmaps, and of Politecnico di Torino on machine learning as
well as methods for domain bias evaluation and mitigation, we propose a first
joint effort to define the operational steps needed for AI fairness
certification. Specifically, we overview the criteria that should be met by
an AI system before coming into official service and the conformity assessment
procedures useful to monitor its functioning for fair decisions.
|
This paper studies the canonical symmetric connection $\nabla$ associated to
any Lie group $G$. The salient properties of $\nabla$ are stated and proved.
The Lie symmetries of the geodesic system of a general linear connection are
formulated. The results are then applied to $\nabla$ in the special case where
the Lie algebra $\mathfrak{g}$ of $G$ has a codimension-one abelian nilradical. The
conditions that determine a Lie symmetry in such a case are completely
integrated. Finally, the results obtained are compared with some
four-dimensional Lie groups whose Lie algebras have three-dimensional abelian
nilradicals, for which the calculations were performed by MAPLE.
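For reference, a standard choice of canonical symmetric connection on a Lie group (which we assume is the one intended here) is the Cartan-Schouten (0)-connection, defined on left-invariant vector fields by
$$ \nabla_X Y = \tfrac{1}{2}[X,Y], \qquad T(X,Y) = \tfrac{1}{2}[X,Y] - \tfrac{1}{2}[Y,X] - [X,Y] = 0, \qquad R(X,Y)Z = -\tfrac{1}{4}[[X,Y],Z], $$
so $\nabla$ is symmetric (torsion-free) and its curvature is determined entirely by the Lie bracket.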
|
Presolar silicon carbide (SiC) grains in meteoritic samples can help
constrain circumstellar condensation processes and conditions in C-rich stars
and core-collapse supernovae. This study presents our findings on eight
presolar SiC grains from AGB stars (four mainstream and one Y grain) and
core-collapse supernovae (three X grains), chosen on the basis of μ-Raman
spectral features that were indicative of their having unusual non-3C polytypes
and/or high degrees of crystal disorder. Analytical transmission electron
microscopy (TEM), which provides elemental compositional and structural
information, shows evidence for complex histories for the grains. Our TEM
results confirm the presence of non-3C,2H crystal domains. Minor element
heterogeneities and/or subgrains were observed in all grains analyzed for their
compositions. The C/O ratios inferred for the parent stars varied from 0.98 to
greater than or equal to 1.03. Our data show that SiC condensation can occur
under a wide range of conditions, in which environmental factors other than
temperature (e.g., pressure, gas composition, heterogeneous nucleation on
pre-condensed phases) play a significant role. Based on previous μ-Raman
studies, about 10% of SiC grains may have infrared (IR) spectral features that
are influenced by crystal defects, porosity, and/or subgrains. Future
sub-diffraction limited IR measurements of complex SiC grains might shed
further light on the relative contributions of each of these features to the
shape and position of the characteristic IR 11 μm SiC feature and thus
improve the interpretation of IR spectra of AGB stars like those that produced
the presolar SiC grains.
|
The search for an ideal single-photon source has generated significant
interest in discovering novel emitters in materials as well as developing new
manipulation techniques to gain better control over the emitters' properties.
Quantum emitters in atomically thin two-dimensional (2D) materials have proven
very attractive with high brightness, operation under ambient conditions, and
the ability to be integrated with a wide range of electronic and photonic
platforms. This perspective highlights some of the recent advances in quantum
light generation from 2D materials, focusing on hexagonal boron nitride and
transition metal dichalcogenides (TMDs). Efforts in engineering and
deterministically creating arrays of quantum emitters in 2D materials, their
electrical excitation, and their integration with photonic devices are
discussed. Lastly, we address some of the challenges the field is facing and
the near-term efforts to tackle them. We provide an outlook towards efficient
and scalable quantum light generation from 2D materials, aiming at controllable
and addressable on-chip quantum sources.
|
$J/\psi$ production in p-p ultra-peripheral collisions through the elastic
and inelastic photoproduction processes, where the virtual photons emitted from
the projectile interact with the target, is studied. The comparisons between
the exact treatment results and the ones of equivalent photon approximation are
expressed as $Q^{2}$ (virtuality of photon), $z$ and $p_{T}$ distributions, and
the total cross sections are also estimated. The method developed by Martin and
Ryskin is employed to avoid double counting when the different production
mechanisms are considered simultaneously. The numerical results indicate that
the equivalent photon approximation can only be applied to the coherent or
elastic electromagnetic process, that an improper choice of $Q^{2}_{\mathrm{max}}$
and $y_{\mathrm{max}}$ will cause obvious errors, and that the exact treatment is
needed to deal accurately with $J/\psi$ photoproduction.
|
When a finite order vector autoregressive model is fitted to VAR($\infty$)
data, the asymptotic distribution of statistics obtained via smooth functions of
least-squares estimates requires care. L\"utkepohl and Poskitt (1991) provide a
closed-form expression for the limiting distribution of (structural) impulse
responses for sieve VAR models based on the Delta method. Yet, numerical
simulations have shown that confidence intervals built in such a way appear
overly conservative. In this note I argue that these results stem naturally
from the limit arguments used in L\"utkepohl and Poskitt (1991), that they
manifest when sieve inference is improperly applied, and that they can be
"remedied" by either using bootstrap resampling or, simply, by using standard
(non-sieve) asymptotics.
|
Prior research on self-supervised learning has led to considerable progress
on image classification, but often with degraded transfer performance on object
detection. The objective of this paper is to advance self-supervised pretrained
models specifically for object detection. Based on the inherent difference
between classification and detection, we propose a new self-supervised pretext
task, called instance localization. Image instances are pasted at various
locations and scales onto background images. The pretext task is to predict the
instance category given the composited images as well as the foreground
bounding boxes. We show that integration of bounding boxes into pretraining
promotes better task alignment and architecture alignment for transfer
learning. In addition, we propose an augmentation method on the bounding boxes
to further enhance the feature alignment. As a result, our model becomes weaker
at ImageNet semantic classification but stronger at image patch localization,
with an overall stronger pretrained model for object detection. Experimental
results demonstrate that our approach yields state-of-the-art transfer learning
results for object detection on PASCAL VOC and MSCOCO.
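As an illustration of the pretext-task data generation described above, the sketch below pastes a foreground instance onto a background at a random location and scale and returns the composite image, the foreground bounding box, and the category label. The function name and augmentation ranges are illustrative assumptions, not the authors' exact recipe.

```python
# Minimal sketch of instance-localization pretext data generation (composite
# image + foreground bounding box + category label). Scale range is an assumption.
import random
from PIL import Image

def compose(foreground: Image.Image, background: Image.Image, category: int,
            scale_range=(0.2, 0.6)):
    """Paste `foreground` onto `background` at a random location and scale."""
    bw, bh = background.size
    scale = random.uniform(*scale_range)
    fw, fh = max(1, int(bw * scale)), max(1, int(bh * scale))
    fg = foreground.resize((fw, fh))
    x0 = random.randint(0, bw - fw)
    y0 = random.randint(0, bh - fh)
    composite = background.copy()
    composite.paste(fg, (x0, y0))
    bbox = (x0, y0, x0 + fw, y0 + fh)   # fed to an RoI head during pretraining
    return composite, bbox, category    # pretext target: predict `category`
```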
|
Powered lower-limb exoskeletons provide assistive torques to coordinate limb
motion during walking in individuals with movement disorders. Advances in
sensing and actuation have improved the wearability and portability of
state-of-the-art exoskeletons for walking. Cable-driven exoskeletons offload
the actuators away from the user, thus yielding lightweight devices that
facilitate locomotion training. However, cable-driven mechanisms experience a
slacking behavior if tension is not accurately controlled. Moreover,
counteracting forces can arise between the agonist and antagonist motors
yielding undesired joint motion. In this paper, the strategy is to develop two
control layers to improve the performance of a cable-driven exoskeleton. First,
a joint tracking controller is designed using a high-gain robust approach to
track desired knee and hip trajectories. Second, a motor synchronization
objective is developed to mitigate the effects of cable slacking for a pair of
electric motors that actuate each joint. A sliding-mode robust controller is
designed for the motor synchronization objective. A Lyapunov-based stability
analysis is developed to guarantee a uniformly ultimately bounded result for
joint tracking and exponential tracking for the motor synchronization
objective. Moreover, an average dwell time analysis provides a bound on the
number of motor switches when allocating the control between motors that
actuate each joint. An experimental result with an able-bodied individual
illustrates the feasibility of the developed control methods.
|
Hot Jupiters are predicted to have hot, clear daysides and cooler, cloudy
nightsides. Recently, an asymmetric signature of iron absorption has been
resolved in the transmission spectrum of WASP-76b using ESPRESSO on ESO's Very
Large Telescope. This feature is interpreted as being due to condensation of
iron on the nightside, resulting in a different absorption signature from the
evening than from the morning limb of the planet. It represents the first time
that a chemical gradient has been observed across the surface of a single
exoplanet. In this work, we confirm the presence of the asymmetric iron feature
using archival HARPS data of four transits. The detection shows that such
features can also be resolved by observing multiple transits on smaller
telescopes. By increasing the number of planets where these condensation
features are detected, we can make chemical comparisons between exoplanets and
map condensation across a range of parameters for the first time.
|
We propose a primal-dual interior-point method (IPM) with convergence to
second-order stationary points (SOSPs) of nonlinear semidefinite optimization
problems, abbreviated as NSDPs. As far as we know, the current algorithms for
NSDPs only ensure convergence to first-order stationary points such as
Karush-Kuhn-Tucker points. The proposed method generates a sequence
approximating SOSPs while minimizing a primal-dual merit function for NSDPs by
using scaled gradient directions and directions of negative curvature. Under
some assumptions, the generated sequence accumulates at an SOSP with a
worst-case iteration complexity. This result is also obtained for a primal IPM
with slight modification. Finally, our numerical experiments show the benefits
of using directions of negative curvature in the proposed method.
|
We present a control design for semilinear and quasilinear $2\times 2$ hyperbolic
partial differential equations with the control input at one boundary and a
nonlinear ordinary differential equation coupled to the other. The controller
can be designed to asymptotically stabilize the system at an equilibrium or
relative to a reference signal. Two related but different controllers for
semilinear and general quasilinear systems are presented and the additional
challenges in quasilinear systems are discussed. Moreover, we present an
observer that estimates the distributed PDE state and the unmeasured ODE state
from measurements at the actuated boundary only, which can be used to also
solve the output feedback control problem.
|
Sentiment analysis is an important task in natural language processing (NLP).
Most existing state-of-the-art methods fall under the supervised learning
paradigm. However, human annotations can be scarce. Thus, we should leverage
more weak supervision for sentiment analysis. In this paper, we propose a
posterior regularization framework for the variational approach to the weakly
supervised sentiment analysis to better control the posterior distribution of
the label assignment. The intuition behind the posterior regularization is that
if extracted opinion words from two documents are semantically similar, the
posterior distributions of two documents should be similar. Our experimental
results show that the posterior regularization can improve the original
variational approach to the weakly supervised sentiment analysis and the
performance is more stable with smaller prediction variance.
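A minimal sketch of one way such a regularizer could be written, assuming cosine similarity of opinion-word embeddings as the semantic-similarity measure and a symmetric KL divergence between label posteriors; this is an illustration, not the paper's exact objective.

```python
# Illustrative posterior-regularization term: pull the label posteriors of two
# documents together in proportion to the similarity of their opinion words.
import numpy as np

def kl(p, q, eps=1e-12):
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def posterior_regularizer(posteriors, opinion_embeddings):
    """posteriors: (n_docs, n_labels); opinion_embeddings: (n_docs, d)."""
    e = opinion_embeddings / np.linalg.norm(opinion_embeddings, axis=1, keepdims=True)
    sim = np.clip(e @ e.T, 0.0, 1.0)   # semantic similarity of opinion words
    reg, n = 0.0, len(posteriors)
    for i in range(n):
        for j in range(i + 1, n):
            # symmetric KL, weighted by how similar the opinion words are
            reg += sim[i, j] * (kl(posteriors[i], posteriors[j]) +
                                kl(posteriors[j], posteriors[i]))
    return reg
```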
|
Locality Sensitive Hashing (LSH) is an effective method of indexing a set of
items to support efficient nearest neighbors queries in high-dimensional
spaces. The basic idea of LSH is that similar items should produce hash
collisions with higher probability than dissimilar items.
We study LSH for (not necessarily convex) polygons, and use it to give
efficient data structures for similar shape retrieval. Arkin et al. represent
polygons by their "turning function" - a function which follows the angle
between the polygon's tangent and the $ x $-axis while traversing the perimeter
of the polygon. They define the distance between polygons to be variations of
the $ L_p $ (for $p=1,2$) distance between their turning functions. This metric
is invariant under translation, rotation and scaling (and the selection of the
initial point on the perimeter) and therefore models well the intuitive notion
of shape resemblance.
We develop and analyze LSH near neighbor data structures for several
variations of the $ L_p $ distance for functions (for $p=1,2$). By applying our
schemes to the turning functions of a collection of polygons we obtain
efficient near neighbor LSH-based structures for polygons. To tune our
structures to turning functions of polygons, we prove some new properties of
these turning functions that may be of independent interest.
As part of our analysis, we address the following problem which is of
independent interest. Find the vertical translation of a function $ f $ that is
closest in $ L_1 $ distance to a function $ g $. We prove tight bounds on the
approximation guarantee obtained by the translation equal to the difference
between the averages of $ g $ and $ f $.
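To make the last statement concrete, here is a small numerical sketch comparing the average-difference translation with the $L_1$-optimal one (which, on sampled points, is the median of $g-f$); the example functions are arbitrary, and the paper's bounds concern the continuous setting.

```python
# Compare the average-difference vertical translation with the L1-optimal one.
import numpy as np

def l1(f, g):
    return np.abs(f - g).mean()

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 1000)
f = np.sin(2 * np.pi * x)
g = np.sin(2 * np.pi * x) + 0.7 + 0.3 * rng.standard_normal(x.size)

c_avg = g.mean() - f.mean()   # translation by the difference of averages
c_opt = np.median(g - f)      # L1-optimal vertical translation on the samples

print("L1 with average shift:", l1(f + c_avg, g))
print("L1 with optimal shift:", l1(f + c_opt, g))
```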
|
The weak-field Schwarzschild and NUT solutions of general relativity are
gravitoelectromagnetically dual to each other, except on the positive $z$-axis.
The presence of non-locality weakens this duality and violates it within a
smeared region around the positive $z$-axis, whose typical transverse size is
given by the scale of non-locality. We restore an exact non-local
gravitoelectromagnetic duality everywhere via a manifestly dual modification of
the linearized non-local field equations. In the limit of vanishing
non-locality we recover the well-known results from weak-field general
relativity.
|
In this paper, we determine the 4-adic complexity of the balanced quaternary
sequences of period $2p$ and $2(2^n-1)$ with ideal autocorrelation defined by
Kim et al. (ISIT, pp. 282-285, 2009) and Jang et al. (ISIT, pp. 278-281, 2009),
respectively. Our results show that the 4-adic complexity of the quaternary
sequences defined in these two papers is large enough to resist the attack of
the rational approximation algorithm.
|
GPUs are now used for a wide range of problems within HPC. However, making
efficient use of the computational power available with multiple GPUs is
challenging. The main challenges in achieving good performance are memory
layout, which affects memory bandwidth; effective use of the memory spaces
within a GPU; inter-GPU communication; and synchronization. We address these problems
with the Ripple library, which provides a unified view of the computational
space across multiple dimensions and multiple GPUs, allows polymorphic data
layout, and provides a simple graph interface to describe an algorithm from
which inter-GPU data transfers can be optimally scheduled. We describe the
abstractions provided by Ripple to allow complex computations to be described
simply, and to execute efficiently across many GPUs with minimal overhead. We
show performance results for a number of examples, from particle motion to
finite-volume methods and the eikonal equation, as well as showing good strong
and weak scaling results across multiple GPUs.
|
We construct a structure-preserving finite element method and time-stepping
scheme for compressible barotropic magnetohydrodynamics (MHD) both in the ideal
and resistive cases, and in the presence of viscosity. The method is deduced
from the geometric variational formulation of the equations. It preserves the
balance laws governing the evolution of total energy and magnetic helicity, and
preserves mass and the constraint $ \operatorname{div}B = 0$ to machine
precision, at both the spatially and temporally discrete levels. In particular,
conservation of energy and magnetic helicity hold at the discrete levels in the
ideal case. It is observed that cross helicity is well conserved in our
simulation in the ideal case.
|
The main aim of this paper is to study the local deformations of a Calabi-Yau
$\partial\bar{\partial}$-manifold that are co-polarised by a Gauduchon metric,
by considering the subfamily of fibres co-polarised by the Aeppli/De
Rham-Gauduchon cohomology class of the Gauduchon metric given initially on the
central fibre. In the latter part, we prove that the $p$-SKT
$h$-$\partial\bar{\partial}$-property is deformation open by constructing and
studying a new notion called $hp$-Hermitian symplectic ($hp$-HS) form.
|
Disentangled representations can be useful in many downstream tasks, help to
make deep learning models more interpretable, and allow for control over
features of synthetically generated images that can be useful in training other
models that require large amounts of labelled or unlabelled data. Recently,
flow-based generative models have been proposed to generate realistic images by
directly modeling the data distribution with invertible functions. In this
work, we propose a new flow-based generative model framework, named GLOWin,
that is end-to-end invertible and able to learn disentangled representations.
Feature disentanglement is achieved by factorizing the latent space into
components such that each component learns the representation for one
generative factor. Comprehensive experiments have been conducted to evaluate
the proposed method on a public brain tumor MR dataset. Quantitative and
qualitative results suggest that the proposed method is effective in
disentangling the features from complex medical images.
|
This paper considers the problem of decentralized monitoring of a class of
non-functional properties (NFPs) with quantitative operators, namely cumulative
cost properties. The decentralized monitoring of NFPs can be a non-trivial task
for several reasons: (i) they are typically expressed at a high abstraction
level where inter-event dependencies are hidden, (ii) NFPs are difficult to
monitor in a decentralized way, and (iii) effective decomposition techniques
are lacking. We address these issues by providing a formal framework for
decentralised monitoring of LTL formulas with quantitative operators. The
presented framework employs the tableau construction and a formula unwinding
technique (i.e., a transformation technique that preserves the semantics of the
original formula) to split and distribute the input LTL formula and the
corresponding quantitative constraint in a way such that monitoring can be
performed in a decentralised manner. The employment of these techniques allows
processes to detect early violations of monitored properties and perform some
corrective or recovery actions. We demonstrate the effectiveness of the
presented framework using a case study based on a Fischertechnik training
model, a sorting line which sorts tokens into storage bins based on their color.
The analysis of the case study shows the effectiveness of the presented
framework not only in early detection of violations, but also in developing
failure recovery plans that can help to avoid serious impact of failures on the
performance of the system.
|
A card guessing game is played between two players, Guesser and Dealer. At
the beginning of the game, the Dealer holds a deck of $n$ cards (labeled $1,
..., n$). For $n$ turns, the Dealer draws a card from the deck, the Guesser
guesses which card was drawn, and then the card is discarded from the deck. The
Guesser receives a point for each correctly guessed card. With perfect memory,
a Guesser can keep track of all cards that were played so far and pick at
random a card that has not appeared so far, yielding in expectation $\ln n$
correct guesses. With no memory, the best a Guesser can do will result in a
single correct guess in expectation. We consider the case of a memory bounded Guesser
that has $m < n$ memory bits. We show that the performance of such a memory
bounded Guesser depends heavily on the behavior of the Dealer. In more detail, we
show that there is a gap between the static case, where the Dealer draws cards
from a properly shuffled deck or a prearranged one, and the adaptive case,
where the Dealer draws cards thoughtfully, in an adversarial manner.
Specifically:
1. We show a Guesser with $O(\log^2 n)$ memory bits that scores a near
optimal result against any static Dealer.
2. We show that no Guesser with $m$ bits of memory can score better than
$O(\sqrt{m})$ correct guesses, thus, no Guesser can score better than $\min
\{\sqrt{m}, \ln n\}$, i.e., the above Guesser is optimal.
3. We show an efficient adaptive Dealer against which no Guesser with $m$
memory bits can make more than $\ln m + 2 \ln \log n + O(1)$ correct guesses in
expectation.
These results are (almost) tight, and we prove them using compression
arguments that harness the guessing strategy for encoding.
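As a quick illustration of the perfect-memory baseline mentioned above (guessing uniformly among unseen cards against a shuffled, static Dealer), the simulation below recovers the harmonic-number score $H_n \approx \ln n$; the parameters are arbitrary.

```python
# Simulate the perfect-memory Guesser against a uniformly shuffled deck.
# Expected score is H_n = 1 + 1/2 + ... + 1/n ~ ln n.
import random

def play(n: int) -> int:
    deck = list(range(n))
    random.shuffle(deck)
    remaining = set(deck)
    score = 0
    for card in deck:
        guess = random.choice(tuple(remaining))   # perfect memory of unseen cards
        score += (guess == card)
        remaining.remove(card)
    return score

n, trials = 100, 20_000
avg = sum(play(n) for _ in range(trials)) / trials
print(avg)   # close to H_100 ~ 5.19
```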
|
Chromium iodide monolayers, which have different magnetic properties in
comparison to the bulk chromium iodide, have been shown to form skyrmionic
states in applied electromagnetic fields or in Janus-layer devices. In this
work, we demonstrate that spin-canted solutions can be induced into monolayer
chromium iodide by select substitution of iodide atoms with isovalent
impurities. Several concentrations and spatial configurations of halide
substitutional defects are selected to probe the coupling between the local
defect-induced geometric distortions and orientation of chromium magnetic
moments. This work provides atomic-level insight into how atomically precise
strain-engineering can be used to create and control complex magnetic patterns
in chromium iodide layers and lays out the foundation for investigating the
field- and geometric-dependent magnetic properties in similar two-dimensional
materials.
|
We formalize the concept of the modular energy operator within the Page and
Wootters timeless framework. As a result, this operator is elevated to the same
status as the more studied modular operators of position and momentum. In
analogy with dynamical nonlocality in space associated with the modular
momentum, we introduce and analyze the nonlocality in time associated with the
modular energy operator. Some applications of our formalization are provided
through illustrative examples.
|
Models that top leaderboards often perform unsatisfactorily when deployed in
real-world applications; this has necessitated rigorous and expensive
pre-deployment model testing. A hitherto unexplored facet of model performance
is: Are our leaderboards doing equitable evaluation? In this paper, we
introduce a task-agnostic method to probe leaderboards by weighting samples
based on their `difficulty' level. We find that leaderboards can be
adversarially attacked and top performing models may not always be the best
models. We subsequently propose alternate evaluation metrics. Our experiments
on 10 models show changes in model ranking and an overall reduction in
previously reported performance -- thus rectifying the overestimation of AI
systems' capabilities. Inspired by behavioral testing principles, we further
develop a prototype of a visual analytics tool that enables leaderboard
revamping through customization, based on an end user's focus area. This helps
users analyze models' strengths and weaknesses, and guides them in the
selection of a model best suited for their application scenario. In a user
study, members of various commercial product development teams, covering 5
focus areas, find that our prototype reduces pre-deployment development and
testing effort by 41% on average.
|
The nuclides inhaled during nuclear accidents usually cause internal
contamination of the lungs with low activity. Although a parallel-hole imaging
system, which is widely used in medical gamma cameras, has a high resolution
and good image quality, owing to its extremely low detection efficiency, it
remains difficult to obtain images of inhaled lung contamination. In this
study, the Monte Carlo method was used to investigate internal lung contamination
imaging using an MPA-MURA coded-aperture collimator. The imaging system
consisted of an adult male lung model, with a mosaicked, pattern-centered, and
anti-symmetric MURA coded-aperture collimator model and a CsI(Tl) detector
model. The MLEM decoding algorithm was used to reconstruct the internal
contamination image, and the complementary imaging method was used to reduce
the number of artifacts. The full width at half maximum of the I-131 point
source image reconstructed by the mosaicked, pattern-centered, and
anti-symmetric modified uniformly redundant array (MPA-MURA) coded-aperture
imaging reached 2.51 mm, and the signal-to-noise ratio of the simplified
respiratory tract source (I-131) image reconstructed through MPA-MURA
coded-aperture imaging was 3.98 dB. Although the spatial resolution of MPA-MURA
coded-aperture imaging is not as good as that of parallel-hole imaging, the
detection efficiency of MPA-MURA coded-aperture imaging is two orders of
magnitude higher than that of parallel-hole collimator imaging. Considering the
low activity level of internal lung contamination caused by nuclear accidents,
MPA-MURA coded-aperture imaging has significant potential for the development
of lung contamination imaging.
|
In 2018, Renes [IEEE Trans. Inf. Theory, vol. 64, no. 1, pp. 577-592 (2018)]
(arXiv:1701.05583) developed a general theory of channel duality for
classical-input quantum-output (CQ) channels. That result showed that a number
of well-known duality results for linear codes on the binary erasure channel
could be extended to general classical channels at the expense of using dual
problems which are intrinsically quantum mechanical. One special case of this
duality is a connection between coding for error correction (resp. wire-tap
secrecy) on the quantum pure-state channel (PSC) and coding for wire-tap
secrecy (resp. error correction) on the classical binary symmetric channel
(BSC). While this result has important implications for classical coding, the
machinery behind the general duality result is rather challenging for
researchers without a strong background in quantum information theory. In this
work, we leverage prior results for linear codes on PSCs to give an alternate
derivation of the aforementioned special case by computing closed-form
expressions for the performance metrics. The noted prior results include
optimality of the square-root measurement (SRM) for linear codes on the PSC and
the Fourier duality of linear codes. We also show that the SRM forms a
suboptimal measurement for channel coding on the BSC (when interpreted as a CQ
problem) and secret communications on the PSC. Our proofs only require linear
algebra and basic group theory, though we use the quantum Dirac notation for
convenience.
|
When writing source code, programmers have varying levels of freedom when it
comes to the creation and use of identifiers. Do they habitually use the same
identifiers, names that are different to those used by others? Is it then
possible to tell who the author of a piece of code is by examining these
identifiers? If so, can we use the presence or absence of identifiers to assist
in correctly classifying programs to authors? Is it possible to hide the
provenance of programs by identifier renaming? In this study, we assess the
importance of three types of identifiers in source code author classification
for two different Java program data sets. We do this through a sequence of
experiments in which we disguise one type of identifier at a time. These
experiments are performed using the Source Code Author Profiles (SCAP) method
as a tool. The results show that, although identifiers when examined as a
whole do not seem to reflect program authorship for these data sets, when
examined separately there is evidence that class names do signal the author of
the program. In contrast, simple variables and method names used in Java
programs do not appear to reflect program authorship. On the contrary, our
analysis suggests that such identifiers are so common as to mask authorship. We
believe that these results have applicability in relation to the robustness of
code plagiarism analysis and that the underlying methods could be valuable in
cases of litigation arising from disputes over program authorship.
|
We propose a hybrid architecture composed of a fully convolutional network
(FCN) and a Dempster-Shafer layer for image semantic segmentation. In the
so-called evidential FCN (E-FCN), an encoder-decoder architecture first
extracts pixel-wise feature maps from an input image. A Dempster-Shafer layer
then computes mass functions at each pixel location based on distances to
prototypes. Finally, a utility layer performs semantic segmentation from mass
functions and allows for imprecise classification of ambiguous pixels and
outliers. We propose an end-to-end learning strategy for jointly updating the
network parameters, which can make use of soft (imprecise) labels. Experiments
using three databases (Pascal VOC 2011, MIT-scene Parsing and SIFT Flow) show
that the proposed combination improves the accuracy and calibration of semantic
segmentation by assigning confusing pixels to multi-class sets.
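For intuition, the sketch below maps pixel features to mass functions via distances to prototypes. This simplified one-step variant (evidence decaying with squared distance, leftover mass on the full frame) is an illustration, not the exact E-FCN layer, which combines prototype-wise mass functions with Dempster's rule.

```python
# Simplified Dempster-Shafer layer sketch: singleton masses from prototype
# distances, with the remaining mass assigned to the full frame Omega (ignorance).
import numpy as np

def ds_layer(features, prototypes, proto_labels, n_classes, alpha=0.9, gamma=1.0):
    """features: (n_pixels, d); prototypes: (n_proto, d); proto_labels: (n_proto,)."""
    d2 = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (n_pixels, n_proto)
    evidence = alpha * np.exp(-gamma * d2)                               # support per prototype
    singleton = np.zeros((features.shape[0], n_classes))
    for k in range(prototypes.shape[0]):
        singleton[:, proto_labels[k]] += evidence[:, k]
    total = singleton.sum(1, keepdims=True) + 1.0       # +1 keeps some mass unassigned
    masses = np.hstack([singleton, np.ones((features.shape[0], 1))]) / total
    return masses   # columns: m({class_1}), ..., m({class_C}), m(Omega)
```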
|
In this paper, we propose panoramic annular simultaneous localization and
mapping (PA-SLAM), a visual SLAM system based on panoramic annular lens. A
hybrid point selection strategy is put forward in the tracking front-end, which
ensures repeatability of keypoints and enables loop closure detection based on
the bag-of-words approach. Every detected loop candidate is verified
geometrically and the $Sim(3)$ relative pose constraint is estimated to perform
pose graph optimization and global bundle adjustment in the back-end. A
comprehensive set of experiments on real-world datasets demonstrates that the
hybrid point selection strategy allows reliable loop closure detection, and the
accumulated error and scale drift have been significantly reduced via global
optimization, enabling PA-SLAM to reach state-of-the-art accuracy while
maintaining high robustness and efficiency.
|
Parker Solar Probe (PSP) is providing an unprecedented view of the Sun's
corona as it progressively dips closer into the solar atmosphere with each
solar encounter. Each set of observations provides a unique opportunity to test
and constrain global models of the solar corona and inner heliosphere and, in
turn, use the model results to provide a global context for interpreting such
observations. In this study, we develop a set of global magnetohydrodynamic
(MHD) model solutions of varying degrees of sophistication for PSP's first four
encounters and compare the results with in situ measurements from PSP,
Stereo-A, and Earth-based spacecraft, with the objective of assessing which
models perform better or worse. All models were primarily driven by the
observed photospheric magnetic field using data from Solar Dynamics
Observatory's Helioseismic and Magnetic Imager (HMI) instrument. Overall, we
find that there are substantial differences between the model results, both in
terms of the large-scale structure of the inner heliosphere during these time
periods, as well as in the inferred time-series at various spacecraft. The
"thermodynamic" model, which represents the "middle ground", in terms of model
complexity, appears to reproduce the observations most closely for all four
encounters. Our results also contradict an earlier study that had hinted that
the open flux problem may disappear nearer the Sun. Instead, our results
suggest that this "missing" solar flux is still missing even at 26.9 Rs, and
thus it cannot be explained by interplanetary processes. Finally, the model
results were also used to provide a global context for interpreting the
localized in situ measurements.
|
Peer code review is a widely adopted software engineering practice to ensure
code quality and software reliability in both commercial and
open-source software projects. Due to the large effort overhead associated with
practicing code reviews, project managers often wonder if their code reviews
are effective and if there are improvement opportunities in that respect. Since
project managers at Samsung Research Bangladesh (SRBD) were also intrigued by
these questions, this research developed, deployed, and evaluated a
production-ready solution using the Balanced SCorecard (BSC) strategy that SRBD
managers can use in their day-to-day management to monitor individual
developer's, a particular project's or the entire organization's code review
effectiveness. Following the four-step framework of the BSC strategy, we -- 1)
defined the operational goals of this research, 2) defined a set of metrics to
measure the effectiveness of code reviews, 3) developed an automated mechanism
to measure those metrics, and 4) developed and evaluated a monitoring
application to inform the key stakeholders. Our automated model to identify
useful code reviews achieves 7.88% and 14.39% improvement in terms of accuracy
and minority class F1 score respectively over the models proposed in prior
studies. It also outperforms the human evaluators from SRBD whom the model
replaces, by margins of 25.32% and 23.84% in terms of accuracy and
minority-class F1 score, respectively. In our post-deployment survey, SRBD developers and
managers indicated that they found our solution useful and that it provided them
with important insights to support their decision making.
|
The community detection problem requires to cluster the nodes of a network
into a small number of well-connected "communities". There has been substantial
recent progress in characterizing the fundamental statistical limits of
community detection under simple stochastic block models. However, in
real-world applications, the network structure is typically dynamic, with nodes
that join over time. In this setting, we would like a detection algorithm to
perform only a limited number of updates at each node arrival. While standard
voting approaches satisfy this constraint, it is unclear whether they exploit
the network information optimally. We introduce a simple model for networks
growing over time which we refer to as streaming stochastic block model
(StSBM). Within this model, we prove that voting algorithms have fundamental
limitations. We also develop a streaming belief-propagation (StreamBP)
approach, for which we prove optimality in certain regimes. We validate our
theoretical findings on synthetic and real data.
|
We address the problems of identifying malware in network telemetry logs and
providing \emph{indicators of compromise} -- comprehensible explanations of
behavioral patterns that identify the threat. In our system, an array of
specialized detectors abstracts network-flow data into comprehensible
\emph{network events} in a first step. We develop a neural network that
processes this sequence of events and identifies specific threats, malware
families and broad categories of malware. We then use the
\emph{integrated-gradients} method to highlight events that jointly constitute
the characteristic behavioral pattern of the threat. We compare network
architectures based on CNNs, LSTMs, and transformers, and explore the efficacy
of unsupervised pre-training experimentally on large-scale telemetry data. We
demonstrate how this system detects njRAT and other malware based on behavioral
patterns.
|
Nine point sources appeared within half an hour on a region within $\sim$ 10
arcmin of a red-sensitive photographic plate taken in April 1950 as part of the
historic Palomar Sky Survey. All nine sources are absent on both previous and
later photographic images, and absent in modern surveys with CCD detectors
which go several magnitudes deeper. We present deep CCD images with the
10.4-meter Gran Telescopio Canarias (GTC), reaching brightness $r \sim 26$ mag,
that reveal possible optical counterparts, although these counterparts could
equally well be just chance projections. The incidence of transients in the
investigated photographic plate is far higher than expected from known
detection rates of optical counterparts to e.g.\ flaring dwarf stars, Fast
Radio Bursts (FRBs), Gamma Ray Bursts (GRBs) or microlensing events. One
possible explanation is that the plates have been subjected to an unknown type
of contamination producing mainly point sources of varying intensities
along with some mechanism of concentration within a radius of $\sim$ 10 arcmin
on the plate. If contamination as an explanation can be fully excluded, another
possibility is fast (t $<0.5$ s) solar reflections from objects near
geosynchronous orbits. An alternative route to confirm the latter scenario is
by looking for images from the First Palomar Sky Survey where multiple
transients follow a line.
|
Inverse problems constrained by partial differential equations (PDEs) play a
critical role in model development and calibration. In many applications, there
are multiple uncertain parameters in a model which must be estimated. Although
the Bayesian formulation is attractive for such problems, computational cost
and high dimensionality frequently prohibit a thorough exploration of the
parametric uncertainty. A common approach is to reduce the dimension by fixing
some parameters (which we will call auxiliary parameters) to a best estimate
and using techniques from PDE-constrained optimization to approximate
properties of the Bayesian posterior distribution. For instance, the maximum a
posteriori probability (MAP) and the Laplace approximation of the posterior
covariance can be computed. In this article, we propose using
hyper-differential sensitivity analysis (HDSA) to assess the sensitivity of the
MAP point to changes in the auxiliary parameters. We establish an
interpretation of HDSA as correlations in the posterior distribution.
Foundational assumptions for HDSA require satisfaction of the optimality
conditions which are not always feasible or appropriate as a result of
ill-posedness in the inverse problem. We introduce novel theoretical and
computational approaches to justify and enable HDSA for ill-posed inverse
problems by projecting the sensitivities onto likelihood-informed subspaces and
defining a posteriori updates. Our proposed framework is demonstrated on a
nonlinear multi-physics inverse problem motivated by estimation of spatially
heterogeneous material properties in the presence of spatially distributed
parametric modeling uncertainties.
|
In this paper, we consider the complex flows when all three regimes
pre-Darcy, Darcy and post-Darcy may be present in different portions of a same
domain. We unify all three flow regimes under a single mathematical formulation.
We describe the flow of a single-phase fluid in $\mathbb{R}^d$, $d\ge 2$, by a nonlinear
degenerate system of density and momentum. A mixed finite element method is
proposed for the approximation of the solution of the above system. The
stability of the approximations is proved; error estimates are derived for
the numerical approximations for both continuous and discrete time procedures.
The continuous dependence of the numerical solutions on the physical parameters
is demonstrated. Numerical experiments are presented regarding convergence rates
and the dependence of the solution on the physical parameters.
|
Classifying images based on their content is one of the most studied topics
in the field of computer vision. Nowadays, this problem can be addressed using
modern techniques such as Convolutional Neural Networks (CNN), but over the
years different classical methods have been developed. In this report, we
implement an image classifier using both classic computer vision and deep
learning techniques. Specifically, we study the performance of a Bag of Visual
Words classifier using Support Vector Machines, a Multilayer Perceptron, an
existing architecture named InceptionV3 and our own CNN, TinyNet, designed from
scratch. We evaluate each of the cases in terms of accuracy and loss, and we
obtain results that vary between 0.6 and 0.96 depending on the model and
configuration used.
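For reference, here is a minimal Bag-of-Visual-Words pipeline in the spirit of the classical branch described above (SIFT descriptors, a k-means codebook, an SVM); dataset loading and hyperparameters are placeholder assumptions.

```python
# Minimal BoVW sketch: SIFT features -> k-means codebook -> histogram -> SVM.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def sift_descriptors(images):
    sift = cv2.SIFT_create()
    return [sift.detectAndCompute(img, None)[1] for img in images]  # grayscale uint8 images

def bovw_histograms(desc_list, codebook):
    hists = []
    for desc in desc_list:
        words = codebook.predict(desc) if desc is not None else []
        h, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
        hists.append(h / max(h.sum(), 1))
    return np.array(hists)

def train_bovw(train_images, train_labels, k=256):
    desc_list = sift_descriptors(train_images)
    all_desc = np.vstack([d for d in desc_list if d is not None])
    codebook = KMeans(n_clusters=k, n_init=4, random_state=0).fit(all_desc)
    clf = SVC(kernel="rbf", C=10.0).fit(bovw_histograms(desc_list, codebook), train_labels)
    return codebook, clf
```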
|
We present the first linear polarization measurements from the 2015
long-duration balloon flight of SPIDER, an experiment designed to map the
polarization of the cosmic microwave background (CMB) on degree angular scales.
Results from these measurements include maps and angular power spectra from
observations of 4.8% of the sky at 95 and 150 GHz, along with the results of
internal consistency tests on these data. While the polarized CMB anisotropy
from primordial density perturbations is the dominant signal in this region of
sky, Galactic dust emission is also detected with high significance; Galactic
synchrotron emission is found to be negligible in the SPIDER bands. We employ
two independent foreground-removal techniques in order to explore the
sensitivity of the cosmological result to the assumptions made by each. The
primary method uses a dust template derived from Planck data to subtract the
Galactic dust signal. A second approach, employing a joint analysis of SPIDER
and Planck data in the harmonic domain, assumes a modified-blackbody model for
the spectral energy distribution of the dust with no constraint on its spatial
morphology. Using a likelihood that jointly samples the template amplitude and
$r$ parameter space, we derive 95% upper limits on the primordial
tensor-to-scalar ratio from Feldman-Cousins and Bayesian constructions, finding
$r<0.11$ and $r<0.19$, respectively. Roughly half the uncertainty in $r$
derives from noise associated with the template subtraction. New data at 280
GHz from SPIDER's second flight will complement the Planck polarization maps,
providing powerful measurements of the polarized Galactic dust emission.
|
For the High Luminosity upgrade of the Large Hadron Collider the current
ATLAS Inner Detector will be replaced by an all-silicon Inner Tracker. The
pixel detector will consist of five barrel layers and a number of rings,
resulting in about 13 m^2 of instrumented area. Due to the huge non-ionising
fluence (1e16 neq/cm^2) and ionising dose (5 MGy), the two innermost layers,
instrumented with 3D pixel sensors and 100 um thin planar sensors, will be
replaced after about five years of operation. Each pixel layer comprises hybrid
detector modules that will be read out by novel ASICs, implemented in 65 nm
CMOS technology, with a bandwidth of up to 5 Gbit/s. Data will be transmitted
optically to the off-detector readout system. To save material in the servicing
cables, serial powering is employed for the supply voltage of the readout
ASICs. Large scale prototyping programmes are being carried out by all
subsystems.
This paper will give an overview of the layout and current status of the
development of the ITk Pixel Detector.
|
We construct the first smooth bubbling geometries using the Weyl formalism.
The solutions are obtained from Einstein theory coupled to a two-form gauge
field in six dimensions with two compact directions. We classify the charged
Weyl solutions in this framework. Smooth solutions consist of a chain of
Kaluza-Klein bubbles that can be neutral or wrapped by electromagnetic fluxes,
and are free of curvature and conical singularities. We discuss how such
topological structures are prevented from gravitational collapse without
struts. When embedded in type IIB, the class of solutions describes D1-D5-KKm
solutions in the non-BPS regime, and the smooth bubbling solutions have the
same conserved charges as a static four-dimensional non-extremal Cvetic-Youm
black hole.
|
Metaphorical expressions are difficult linguistic phenomena, challenging
diverse Natural Language Processing tasks. Previous works showed that
paraphrasing a metaphor as its literal counterpart can help machines better
process metaphors on downstream tasks. In this paper, we interpret metaphors
with BERT and WordNet hypernyms and synonyms in an unsupervised manner, showing
that our method significantly outperforms the state-of-the-art baseline. We
also demonstrate that our method can help a machine translation system improve
its accuracy in translating English metaphors to 8 target languages.
|
The transfer of tasks with sometimes far-reaching moral implications to
autonomous systems raises a number of ethical questions. In addition to
fundamental questions about the moral agency of these systems, behavioral
issues arise. This article focuses on the responsibility of agents who decide
on our behalf. We investigate the empirically accessible question of whether
the production of moral outcomes by an agent is systematically judged
differently when the agent is artificial and not human. The results of a
laboratory experiment suggest that decision-makers can actually rid themselves
of guilt more easily by delegating to machines than by delegating to other
people. Our results imply that the availability of artificial agents could
provide stronger incentives for decision-makers to delegate morally sensitive
decisions.
|
Background: Prokaryotic viruses, which infect bacteria and archaea, are the
most abundant and diverse biological entities in the biosphere. To understand
their regulatory roles in various ecosystems and to harness the potential of
bacteriophages for use in therapy, more knowledge of viral-host relationships
is required. High-throughput sequencing and its application to the microbiome
have offered new opportunities for computational approaches for predicting
which hosts particular viruses can infect. However, there are two main
challenges for computational host prediction. First, the empirically known
virus-host relationships are very limited. Second, although sequence similarity
between viruses and their prokaryote hosts have been used as a major feature
for host prediction, the alignment is either missing or ambiguous in many
cases. Thus, there is still a need to improve the accuracy of host prediction.
Results: In this work, we present a semi-supervised learning model, named
HostG, to conduct host prediction for novel viruses. We construct a knowledge
graph by utilizing both virus-virus protein similarity and virus-host DNA
sequence similarity. Then a graph convolutional network (GCN) is adopted to
exploit viruses with or without known hosts in training to enhance the learning
ability. During the GCN training, we minimize the expected calibrated error
(ECE) to ensure the confidence of the predictions. We tested HostG on both
simulated and real sequencing data and compared its performance with other
state-of-the-art methods specifically designed for virus host classification
(VHM-net, WIsH, PHP, HoPhage, RaFAH, vHULK, and VPF-Class). Conclusion: HostG
outperforms other popular methods, demonstrating the efficacy of using a
GCN-based semi-supervised learning approach. A particular advantage of HostG is
its ability to predict hosts from new taxa.
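For readers unfamiliar with GCNs, the propagation step at the core of such a model is sketched below; this is a generic illustration, not HostG's exact architecture or training procedure.

```python
# One generic GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
import numpy as np

def gcn_layer(adjacency, features, weights):
    """adjacency: (n, n); features: (n, d_in); weights: (d_in, d_out)."""
    a_hat = adjacency + np.eye(adjacency.shape[0])          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ features @ weights, 0.0)     # ReLU
```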
|
We propose a method to generate cutting-planes from multiple covers of
knapsack constraints. The covers may come from different knapsack inequalities
if the weights in the inequalities form a totally-ordered set. Thus, we
introduce and study the structure of a totally-ordered multiple knapsack set.
The valid multi-cover inequalities we derive for its convex hull have a number
of interesting properties. First, they generalize the well-known (1,
k)-configuration inequalities. Second, they are not aggregation cuts. Third,
they cannot be generated as a rank-1 Chvatal-Gomory cut from the inequality
system consisting of the knapsack constraints and all their minimal cover
inequalities. We also provide conditions under which the inequalities are
facets for the convex hull of the totally-ordered knapsack set, as well as
conditions for those inequalities to fully characterize its convex hull. We
give an integer program to solve the separation and provide numerical
experiments that showcase the strength of these new inequalities.
|
The latest Industrial revolution has helped industries in achieving very high
rates of productivity and efficiency. It has introduced data aggregation and
cyber-physical systems to optimize planning and scheduling. However,
uncertainty in the environment and the imprecise nature of human operators are
not accurately considered in the decision-making process. This leads to
delays in consignments and imprecise budget estimations. This widespread
practice in the industrial models is flawed and requires rectification. Various
other articles have approached this problem through stochastic or
fuzzy-set modelling methods. This paper presents a comprehensive method to
logically and realistically quantify the non-deterministic uncertainty through
probabilistic uncertainty modelling. This method is applicable to virtually all
industrial data sets, as the model is self-adjusting and uses
epsilon-contamination to cater to limited or incomplete data sets. The results
are numerically validated through an industrial data set in Flanders, Belgium.
The data-driven results achieved through this robust scheduling method
illustrate the improvement in performance.
|
We consider the problem of learning to simplify medical texts. This is
important because most reliable, up-to-date information in biomedicine is dense
with jargon and thus practically inaccessible to the lay audience. Furthermore,
manual simplification does not scale to the rapidly growing body of biomedical
literature, motivating the need for automated approaches. Unfortunately, there
are no large-scale resources available for this task. In this work we introduce
a new corpus of parallel texts in English comprising technical and lay
summaries of all published evidence pertaining to different clinical topics. We
then propose a new metric based on likelihood scores from a masked language
model pretrained on scientific texts. We show that this automated measure
better differentiates between technical and lay summaries than existing
heuristics. We introduce and evaluate baseline encoder-decoder Transformer
models for simplification and propose a novel augmentation to these in which we
explicitly penalize the decoder for producing "jargon" terms; we find that this
yields improvements over baselines in terms of readability.
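A minimal sketch of a masked-LM likelihood score of the kind described (mask each token in turn and average its log-probability) is shown below; the checkpoint name is a placeholder assumption, not necessarily the model used in the paper.

```python
# Pseudo-log-likelihood under a masked language model: mask each token in turn
# and average the log-probability the model assigns to the original token.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

def pseudo_log_likelihood(text, model_name="allenai/scibert_scivocab_uncased"):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name).eval()
    ids = tok(text, return_tensors="pt", truncation=True)["input_ids"][0]
    scores = []
    with torch.no_grad():
        for i in range(1, len(ids) - 1):                 # skip [CLS]/[SEP]
            masked = ids.clone()
            masked[i] = tok.mask_token_id
            logits = model(masked.unsqueeze(0)).logits[0, i]
            scores.append(torch.log_softmax(logits, dim=-1)[ids[i]].item())
    return sum(scores) / len(scores)
```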
|
We say that $M$ and $S$ form a \textsl{splitting} of $G$ if every nonzero
element $g$ of $G$ has a unique representation of the form $g=ms$ with $m\in M$
and $s\in S$, while $0$ has no such representation. The splitting is called
{\it nonsingular} if $\gcd(|G|, a) = 1$ for any $a\in M$.
In this paper, we focus our study on nonsingular splittings of cyclic groups.
We introduce a new notation -- the direct KM logarithm -- and we prove that if there is
a prime $q$ such that $M$ splits $\mathbb{Z}_q$, then there are infinitely many
primes $p$ such that $M$ splits $\mathbb{Z}_p$.
|
This paper presents a simple technique of multifractal traffic modeling. It
proposes a method of fitting the model to a given traffic trace. A comparison of
simulation results obtained for an exemplary trace, the multifractal model, and
Markov Modulated Poisson Process models has been performed.
|
This paper explores a novel connection between two areas: shape analysis of
surfaces and unbalanced optimal transport. Specifically, we characterize the
square root normal field (SRNF) shape distance as the pullback of the
Wasserstein-Fisher-Rao (WFR) unbalanced optimal transport distance. In
addition, we propose a new algorithm for computing the WFR distance and present
numerical results that highlight the effectiveness of this algorithm. As a
consequence of our results we obtain a precise method for computing the SRNF
shape distance directly on piecewise linear surfaces and gain new insights
about the degeneracy of this distance.
|
In this paper, we build up a scaled homology theory, $lc$-homology, for
metric spaces such that every metric space can be visually regarded as "locally
contractible" with this newly-built homology. We check that $lc$-homology
satisfies all Eilenberg-Steenrod axioms except the exactness axiom, whereas its
corresponding $lc$-cohomology satisfies all axioms for cohomology. This
homology can relax the smooth-manifold restriction on compact metric spaces,
so that the entropy conjecture holds for the first $lc$-homology group.
|
We present multispectral analysis (radio, H$\alpha$, UV/EUV, and hard X-ray)
of a confined flare from 2015 March 12. This flare started within the active
region NOAA 12297 and then it expanded into a large preexisting magnetic rope
embedded with a cold filament. The expansion started with several brightenings
located along the rope. This process was accompanied by a group of slowly
positively drifting bursts in the 0.8--2 GHz range. The frequency drift of
these bursts was 45 -- 100 MHz s$^{-1}$. One of the bursts had an S-like form.
During the brightening of the rope we observed a unique bright EUV structure
transverse to the rope axis. The structure was observed in a broad range of
temperatures and it moved along the rope with a velocity of about 240 km
s$^{-1}$. When the structure dissipated, we saw plasma further following the
twisted threads in the rope. The observed slowly positively drifting bursts
were interpreted considering particle beams and we show that one with the
S-like form could be explained by the beam propagating through the helical
structure of the magnetic rope. The bright structure transverse to the rope
axis was interpreted considering line-of-sight effects and the
dissipation-spreading process, which we found to be more likely.
|
Coupled-cluster theory with single and double excitations (CCSD) is a
promising ab initio method for the electronic structure of three-dimensional
metals, for which second-order perturbation theory (MP2) diverges in the
thermodynamic limit. However, due to the high cost and poor convergence of CCSD
with respect to basis size, applying CCSD to periodic systems often leads to
large basis set errors. In a common "composite" method, MP2 is used to recover
the missing dynamical correlation energy through a focal-point correction, but
the inadequacy of MP2 for metals raises questions about this approach. Here we
describe how high-energy excitations treated by MP2 can be "downfolded" into a
low-energy active space to be treated by CCSD. Comparing how the composite and
downfolding approaches perform for the uniform electron gas, we find that the
latter converges more quickly with respect to the basis set size. Nonetheless,
the composite approach is surprisingly accurate because it removes the
problematic MP2 treatment of double excitations near the Fermi surface. Using
the method to estimate the CCSD correlation energy in the combined complete
basis set and thermodynamic limits, we find CCSD recovers over 90% of the exact
correlation energy at $r_s=4$. We also test the composite and downfolding
approaches with the random-phase approximation used in place of MP2, yielding a
method that is more effective but more expensive.
|
Recently, low-dimensional vector space representations of knowledge graphs
(KGs) have been applied to find answers to conjunctive queries (CQs) over
incomplete KGs. However, the current methods only focus on inductive reasoning,
i.e., answering CQs by predicting facts based on patterns learned from the data,
and lack the ability of deductive reasoning by applying external domain
knowledge. Such (expert or commonsense) domain knowledge is an invaluable
resource which can be used to advance machine intelligence. To address this
shortcoming, we introduce a neural-symbolic method for ontology-mediated CQ
answering over incomplete KGs that operates in the embedding space. More
specifically, we propose various data augmentation strategies to generate
training queries using query-rewriting based methods and then exploit a novel
loss function for training the model. The experimental results demonstrate the
effectiveness of our training strategies and the new loss function, i.e., our
method significantly outperforms the baseline in the settings that require both
inductive and deductive reasoning.
|
In digital signal processing, time-frequency transforms are used to analyze
time-varying signals with respect to their spectral content over time. Apart
from the commonly used short-time Fourier transform, other methods exist in
the literature, such as the Wavelet, Stockwell, or Wigner-Ville transforms.
Consequently, engineers working on digital signal processing tasks are often
faced with the question of which transform is appropriate for a specific
application. To address this question, this paper first briefly introduces the
different transforms. Then it compares them with respect to the achievable
resolution in time and frequency and possible artifacts. Finally, the paper
contains a gallery of time-frequency representations of numerous signals from
different fields of applications to allow for visual comparison.
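As a hedged illustration of the time-frequency resolution trade-off discussed above (the signal, sampling rate, and window lengths are arbitrary choices, not taken from the paper), one can compare two short-time Fourier transform window lengths with SciPy:

    # Minimal sketch: comparing STFT resolution for two window lengths on a synthetic chirp.
    import numpy as np
    from scipy.signal import stft, chirp

    fs = 1000.0                                    # sampling rate in Hz (assumed)
    t = np.arange(0, 2.0, 1.0 / fs)
    x = chirp(t, f0=10, f1=200, t1=2.0)            # test signal sweeping 10 -> 200 Hz

    for nperseg in (64, 512):                      # short window: good time resolution;
        f, tau, Zxx = stft(x, fs=fs, nperseg=nperseg)  # long window: good frequency resolution
        print(f"nperseg={nperseg}: {len(f)} frequency bins, {len(tau)} time frames")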
|
Partial differential equations (PDEs) play a fundamental role in modeling and
simulating problems across a wide range of disciplines. Recent advances in deep
learning have shown the great potential of physics-informed neural networks
(PINNs) to solve PDEs as a basis for data-driven modeling and inverse analysis.
However, the majority of existing PINN methods, based on fully-connected NNs,
pose intrinsic limitations to low-dimensional spatiotemporal parameterizations.
Moreover, since the initial/boundary conditions (I/BCs) are softly imposed via
penalty, the solution quality heavily relies on hyperparameter tuning. To this
end, we propose the novel physics-informed convolutional-recurrent learning
architectures (PhyCRNet and PhyCRNet-s) for solving PDEs without any labeled
data. Specifically, an encoder-decoder convolutional long short-term memory
network is proposed for low-dimensional spatial feature extraction and temporal
evolution learning. The loss function is defined as the aggregated discretized
PDE residuals, while the I/BCs are hard-encoded in the network to ensure
forcible satisfaction (e.g., periodic boundary padding). The networks are
further enhanced by autoregressive and residual connections that explicitly
simulate time marching. The performance of our proposed methods has been
assessed by solving three nonlinear PDEs (the 2D Burgers' equations, and the
$\lambda$-$\omega$ and FitzHugh-Nagumo reaction-diffusion equations), and
compared against state-of-the-art baseline algorithms. The numerical
results demonstrate the superiority of our proposed methodology in the context
of solution accuracy, extrapolability and generalizability.
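A minimal sketch of one ingredient mentioned above, namely hard-encoded periodic boundary conditions via circular padding inside a finite-difference convolution (the kernel, grid size, and random field are illustrative assumptions, not the authors' PhyCRNet code):

    # Circular padding enforces a periodic domain before the residual convolution,
    # so the physics loss sees periodic boundary conditions by construction.
    import torch
    import torch.nn.functional as F

    def laplacian_periodic(u, dx):
        """Second-order 5-point Laplacian of a field u with shape (batch, 1, H, W)."""
        kernel = torch.tensor([[0., 1., 0.],
                               [1., -4., 1.],
                               [0., 1., 0.]]).view(1, 1, 3, 3) / dx**2
        u_pad = F.pad(u, (1, 1, 1, 1), mode="circular")   # periodic boundary padding
        return F.conv2d(u_pad, kernel)

    u = torch.rand(4, 1, 64, 64)                          # stand-in solution field
    residual = laplacian_periodic(u, dx=1.0 / 64)         # would enter the PDE residual loss
    print(residual.shape)                                 # torch.Size([4, 1, 64, 64])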
|
A limiting factor towards the wide routine use of wearable devices for
continuous healthcare monitoring is their cumbersome and obtrusive nature. This
is particularly true for electroencephalography (EEG) recordings, which require
the placement of multiple electrodes in contact with the scalp. In this work,
we propose to identify the optimal wearable EEG electrode set-up, in terms of
minimal number of electrodes, comfortable location and performance, for
EEG-based event detection and monitoring. By relying on the demonstrated power
of autoencoder (AE) networks to learn latent representations from
high-dimensional data, our proposed strategy trains an AE architecture in a
one-class classification setup with different electrode set-ups as input data.
The resulting models are assessed using the F-score and the best set-up is
chosen according to the established optimality criteria. Using alpha wave
detection as a use case, we demonstrate that the proposed method makes it
possible to detect an alpha state from an optimal set-up consisting of
electrodes on the forehead and behind the ear, with an average F-score of
0.78. Our results suggest that a
learning-based approach can be used to enable the design and implementation of
optimized wearable devices for real-life healthcare monitoring.
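A minimal sketch of the one-class autoencoder idea, assuming synthetic data, illustrative layer sizes, and a simple percentile threshold on the reconstruction error (none of which are taken from the paper):

    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.metrics import f1_score

    n_features = 4 * 128                  # e.g. 4 channels x 128 samples per window (assumed)
    model = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_features))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x_train = torch.randn(256, n_features)          # stand-in for one-class training windows
    for _ in range(50):
        opt.zero_grad()
        loss = loss_fn(model(x_train), x_train)     # reconstruct "normal" windows only
        loss.backward()
        opt.step()

    x_test = torch.randn(64, n_features)
    y_true = np.random.randint(0, 2, 64)            # stand-in labels for evaluation
    err = ((model(x_test) - x_test) ** 2).mean(dim=1).detach().numpy()
    y_pred = (err > np.percentile(err, 50)).astype(int)   # illustrative threshold rule
    print("F-score:", f1_score(y_true, y_pred))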
|
The hybrid electric system has good potential for unmanned tracked vehicles
due to its excellent power and economy. However, because unmanned tracked
vehicles have no traditional driving devices and their driving cycles are
uncertain, conventional energy management strategies face new challenges. This
paper proposes a novel energy management strategy for unmanned tracked
vehicles based on local speed planning. The contributions are threefold.
Firstly, a local speed planning algorithm is adopted to provide the input for
driving cycle prediction, avoiding the dependence on the driver's operation
found in traditional vehicles.
Secondly, a prediction model based on Convolutional Neural Networks and Long
Short-Term Memory (CNN-LSTM) is proposed, which is used to process both the
planned and the historical velocity series to improve the prediction accuracy.
Finally, based on the prediction results, the model predictive control
algorithm is used to realize the real-time optimization of energy management.
The validity of the method is verified by simulation using collected data from
actual field experiments of our unmanned tracked vehicle. Compared with
multi-step neural networks, the prediction model based on CNN-LSTM improves the
prediction accuracy by 20%. Compared with the traditional regular energy
management strategy, the energy management strategy based on model predictive
control reduces fuel consumption by 7%.
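As a hedged illustration (the layer sizes and horizon below are assumptions, not the paper's configuration), a CNN-LSTM speed predictor of this kind can be sketched in PyTorch; its multi-step output would then feed the model predictive controller.

    import torch
    import torch.nn as nn

    class CNNLSTMPredictor(nn.Module):
        def __init__(self, horizon=10):
            super().__init__()
            self.conv = nn.Sequential(nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU())
            self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
            self.head = nn.Linear(32, horizon)

        def forward(self, v):                 # v: (batch, seq_len) velocity series
            h = self.conv(v.unsqueeze(1))     # (batch, 16, seq_len) local features
            out, _ = self.lstm(h.transpose(1, 2))
            return self.head(out[:, -1])      # predicted speeds over the horizon

    model = CNNLSTMPredictor()
    pred = model(torch.randn(8, 50))          # 8 sequences of 50 past/planned speeds
    print(pred.shape)                         # torch.Size([8, 10])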
|
Suppose $\phi$ is a $\mathbb{Z}/4$-cover of a curve over an algebraically
closed field $k$ of characteristic $2$, and $\phi_1$ is its
$\mathbb{Z}/2$-sub-cover. Suppose, moreover, that $\Phi_1$ is a lift of
$\phi_1$ to a complete discrete valuation ring $R$ that is a finite extension
of the ring of Witt vectors $W(k)$ (hence in characteristic zero). We show that
there exists a finite extension $R'$ of $R$, and a lift $\Phi$ of $\phi$ to
$R'$ with a sub-cover isomorphic to $\Phi_1 \otimes_k R'$. This gives the first
non-trivial family of cyclic covers where Sa\"idi's refined lifting conjecture
holds.
|
Given a sequence $(\xi_n)$ of standard i.i.d complex Gaussian random
variables, Peres and Vir\'ag (in the paper ``Zeros of the i.i.d. Gaussian power
series: a conformally invariant determinantal process'' {\it Acta Math.} (2005)
194, 1-35) discovered the striking fact that the zeros of the random power
series $f(z) = \sum_{n=1}^\infty \xi_n z^{n-1}$ in the complex unit disc
$\mathbb{D}$ constitute a determinantal point process. The study of the zeros
of the general random series
$f(z)$ where the restriction of independence is relaxed upon the random
variables $(\xi_n)$ is an important open problem. This paper proves that if
$(\xi_n)$ is an infinite sequence of complex Gaussian random variables such
that their covariance matrix is invertible and its inverse is a Toeplitz
matrix, then the zero set of $f(z)$ constitutes a determinantal point process
with the same distribution as the case of i.i.d variables studied by Peres and
Vir\'ag. The arguments are based on some interplays between Hardy spaces and
reproducing kernels. Illustrative examples are constructed from classical
Toeplitz matrices and the classical fractional Gaussian noise.
|
Referring Expression Comprehension (REC) has become one of the most important
tasks in visual reasoning, since it is an essential step for many
vision-and-language tasks such as visual question answering. However, it has
not been widely used in many downstream tasks because 1) two-stage methods
incur heavy computation cost and inevitable error accumulation, and 2)
one-stage methods depend on many hyper-parameters (such as anchors) to
generate bounding boxes. In this paper, we present a proposal-free one-stage
(PFOS) model that is able to regress the region-of-interest from the image,
based on a textual query, in an end-to-end manner. Instead of using the
dominant anchor proposal fashion, we directly take the dense-grid of an image
as input for a cross-attention transformer that learns grid-word
correspondences. The final bounding box is predicted directly from the image
without the time-consuming anchor selection process from which previous
methods suffer. Our model achieves state-of-the-art performance on four
referring expression datasets with higher efficiency, compared to the previous
best one-stage and two-stage methods.
|
In this paper a martingale problem for super-Brownian motion with interactive
branching is derived. The uniqueness of the solution to the martingale problem
is obtained by using the pathwise uniqueness of the solution to a corresponding
system of SPDEs with proper boundary conditions. The existence of the solution
to the martingale problem and the Hölder continuity of the density process are
also studied.
|
Diffusion pore imaging is an extension of diffusion-weighted nuclear magnetic
resonance imaging enabling the direct measurement of the shape of arbitrarily
formed, closed pores by probing diffusion restrictions using the motion of
spin-bearing particles. Examples of such pores comprise cells in biological
tissue or oil-containing cavities in porous rocks. All pores contained in the
measurement volume contribute to one reconstructed image, which reduces the
problem of vanishing signal at increasing resolution present in conventional
magnetic resonance imaging. It has been previously experimentally demonstrated
that pore imaging using a combination of a long and a narrow magnetic field
gradient pulse is feasible. In this work, an experimental verification is
presented showing that pores can be imaged using short gradient pulses only.
Experiments were carried out using hyperpolarized xenon gas in well-defined
pores. The phase required for pore image reconstruction was retrieved from
double diffusion encoded (DDE) measurements, while the magnitude could either
be obtained from DDE signals or classical diffusion measurements with single
encoding. The occurring image artifacts caused by restrictions of the gradient
system, insufficient diffusion time, and by the phase reconstruction approach
were investigated. Employing short gradient pulses only is advantageous
compared to the initial long-narrow approach due to a more flexible sequence
design when omitting the long gradient and due to faster convergence to the
diffusion long-time limit, which may enable application to larger pores.
|
This work presents a simulation framework to generate human micro-Dopplers in
WiFi based passive radar scenarios, wherein we simulate IEEE 802.11g compliant
WiFi transmissions using MATLAB's WLAN toolbox and human animation models
derived from a marker-based motion capture system. We integrate WiFi
transmission signals with the human animation data to generate the
micro-Doppler features that incorporate the diversity of human motion
characteristics, and the sensor parameters. In this paper, we consider five
human activities. We uniformly benchmark the classification performance of
multiple machine learning and deep learning models against a common dataset.
Further, we validate the classification performance using the real radar data
captured simultaneously with the motion capture system. We present experimental
results using simulations and measurements demonstrating good classification
accuracy of $\geq$ 95\% and $\approx$ 90\%, respectively.
|
The behavior of colloidal particles with a hard core and a soft shell has
attracted the attention of researchers at the physics-chemistry interface, not
only due to the large number of applications but also due to the unique
properties of these systems in bulk and at interfaces. Adsorption at the
boundary of two phases can provide information about the molecular
arrangement. Accordingly, we perform Langevin Dynamics simulations of
polymer-grafted nanoparticles. We employed a recently obtained core-softened
potential to analyze the relation between adsorption, structure, and dynamic
properties of the nanoparticles near a solid repulsive surface. Two cases were
considered: flat or structured walls. At low temperatures, a maximum is
observed in the adsorption. It is related to a fluid-to-cluster transition and
to a minimum in the contact-layer diffusion, and is explained by the
competition between the length scales in the core-softened interaction. Due to
the long-range repulsion, the particles stay at the distance corresponding to
this length scale at low densities and overcome the repulsive barrier as the
packing increases. However, at higher temperatures the gain in kinetic energy
allows the colloids to overcome the long-range repulsion barrier even at low
densities. As a consequence, there is no competition between the scales and no
maximum is observed in the adsorption.
|
We examine the stability of summation by parts (SBP) numerical schemes that use
hyperboloidal slices to include future null infinity in the computational
domain. This inclusion serves to mitigate outer boundary effects and, in the
future, will help reduce systematic errors in gravitational waveform
extraction. We also study a setup with truncation error matching. Our
SBP-Stable scheme guarantees energy-balance for a class of linear wave
equations at the semidiscrete level. We also develop specialized dissipation
operators. The whole construction is made at second order accuracy in spherical
symmetry, but could be straightforwardly generalized to higher order or
spectral accuracy without symmetry. In a practical implementation we evolve
first a scalar field obeying the linear wave equation and observe, as expected,
long term stability and norm convergence. We obtain similar results with a
potential term. To examine the limitations of the approach we consider a
massive field, whose equations of motion do not regularize, and whose dynamics
near null infinity, which involve excited incoming pulses that cannot be
resolved by the code, are very different from those in the massless setting. We
still observe excellent energy conservation, but convergence is not
satisfactory. Overall our results suggest that compactified hyperboloidal
slices are likely to be provably effective whenever the asymptotic solution
space is close to that of the wave equation.
|
In this paper we count the number $N_n^{\text{tor}}(X)$ of $n$-dimensional
algebraic tori over $\mathbb{Q}$ whose Artin conductor of the associated
character is bounded by $X$. This can be understood as a generalization of
counting number fields of given degree by discriminant. We suggest a conjecture
on the asymptotics of $N_n^{\text{tor}}(X)$ and prove that this conjecture
follows from Malle's conjecture for tori over $\mathbb{Q}$. We also prove that
$N_2^{\text{tor}}(X) \ll_{\varepsilon} X^{1 + \varepsilon}$, and this upper
bound can be improved to $N_2^{\text{tor}}(X) \ll_{\varepsilon} X (\log X)^{1 +
\varepsilon}$ under the assumption of the Cohen-Lenstra heuristics for $p=3$.
|
We present herein an introduction to implementing 2-color cellular automata
on quantum annealing systems, such as the D-Wave quantum computer. We show that
implementing nearest-neighbor cellular automata is possible. We present an
implementation of Wolfram's cellular automaton Rule 110, which has previously
been shown to be a universal Turing machine, as a QUBO suitable for use on
quantum annealing systems. We demonstrate back-propagation of cellular automata
rule sets to determine initial cell states for a desired later system state. We
show that 2-D 2-color cellular automata, such as Conway's Game of Life, can be
expressed for quantum annealing systems.
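For reference, a minimal classical implementation of the Rule 110 update that any QUBO encoding must reproduce (the seed, row length, and periodic boundary below are illustrative assumptions; this is not the QUBO formulation itself):

    # Wolfram's Rule 110: map each (left, center, right) neighborhood to the next cell state.
    RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
                (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

    def step(cells):
        n = len(cells)
        return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                for i in range(n)]

    state = [0] * 15 + [1]            # single seed cell on a periodic row (assumed)
    for _ in range(8):
        print("".join("#" if c else "." for c in state))
        state = step(state)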
|
Low-dose computed tomography (CT) allows the reduction of radiation risk in
clinical applications at the expense of image quality, which degrades the
diagnostic accuracy of radiologists. In this work, we present a High-Quality
Imaging network (HQINet) for CT image reconstruction from low-dose CT
acquisitions. HQINet is a convolutional encoder-decoder architecture, in which
the encoder extracts spatial and temporal information from three contiguous
slices while the decoder recovers the spatial information of the middle slice.
We provide experimental results on
the real projection data from low-dose CT Image and Projection Data
(LDCT-and-Projection-data), demonstrating that the proposed approach yielded a
notable improvement in performance in terms of image quality, with a rise of
5.5 dB in peak signal-to-noise ratio (PSNR) and 0.29 in mutual information
(MI).
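As a hedged note on the reported metric (the normalization to [0, 1] and the synthetic arrays are assumptions, not the authors' evaluation code), the PSNR figure quoted above is conventionally computed as:

    import numpy as np

    def psnr(reference, reconstruction, peak=1.0):
        # Peak signal-to-noise ratio in dB, assuming images scaled to [0, peak].
        mse = np.mean((reference - reconstruction) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    ref = np.random.rand(512, 512)                  # stand-in for a normal-dose CT slice
    rec = ref + 0.01 * np.random.randn(512, 512)    # stand-in for a reconstructed slice
    print(f"PSNR: {psnr(ref, rec):.1f} dB")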
|
Automatic network management driven by Artificial Intelligence technologies
has been heatedly discussed for decades. However, current reports mainly focus
on theoretical proposals and architecture designs, while works on practical
implementations in real-life networks are yet to appear. This paper presents
our effort toward the implementation of a knowledge-graph-driven approach for
autonomic network management in software defined networks (SDNs), termed
SeaNet. Driven by the ToCo ontology, SeaNet is reprogrammed based on Mininet
(an SDN emulator). It consists of three core components: a knowledge graph
generator, a SPARQL engine, and a network management API. The knowledge graph
generator represents the knowledge involved in telecommunication network
management tasks in a formally represented, ontology-driven model. Expert
experience and network management rules can be formalized into the knowledge
graph and automatically reasoned over by the SPARQL engine, while the network
management API is able to package technology-specific details and expose
technology-independent interfaces to users. Experiments are carried out to
evaluate the proposed work by comparing it with Ryu, a commercial SDN
controller implemented in the same language, Python. The evaluation results
show that SeaNet is considerably faster than Ryu in most circumstances and
that the SeaNet code is significantly more compact. Benefiting from RDF
reasoning, SeaNet is able to achieve O(1) time complexity on different scales
of the knowledge graph, whereas a traditional database achieves O(n log n) at
best. With the developed network management API, SeaNet enables researchers to
develop semantic-intelligent applications on their own SDNs.
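A minimal sketch of the knowledge-graph plus SPARQL pattern described above, using rdflib with a hypothetical namespace and toy triples (SeaNet's actual ToCo-driven model and API are not reproduced here):

    from rdflib import Graph, Namespace, Literal

    NET = Namespace("http://example.org/net#")     # hypothetical namespace, not ToCo
    g = Graph()
    g.add((NET.switch1, NET.hasPort, NET.port1))
    g.add((NET.port1, NET.connectedTo, NET.host1))
    g.add((NET.port1, NET.bandwidthMbps, Literal(1000)))

    query = """
    PREFIX net: <http://example.org/net#>
    SELECT ?port ?host WHERE {
        net:switch1 net:hasPort ?port .
        ?port net:connectedTo ?host .
    }"""
    for row in g.query(query):                     # answer a simple management question
        print(row.port, "->", row.host)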
|
This paper introduces a deep learning method for solving an elliptic
hemivariational inequality (HVI). In this method, an expectation minimization
problem is first formulated based on the variational principle of the
underlying HVI, which is then solved by stochastic optimization algorithms
using three
different training strategies for updating network parameters. The method is
applied to solve two practical problems in contact mechanics, one of which is a
frictional bilateral contact problem and the other of which is a frictionless
normal compliance contact problem. Numerical results show that the deep
learning method is efficient in solving HVIs and the adaptive mesh-free
multigrid algorithm can provide the most accurate solution among the three
learning methods.
|
Autonomous cars can reduce road traffic accidents and provide a safer mode of
transport. However, key technical challenges, such as safe navigation in
complex urban environments, need to be addressed before deploying these
vehicles on the market. Teleoperation can help smooth the transition from
human-operated to fully autonomous vehicles, since it keeps a human in the
loop and thus provides the scope of falling back on the driver. This paper
presents an Active Safety
System (ASS) approach for teleoperated driving. The proposed approach helps the
operator ensure the safety of the vehicle in complex environments, that is,
avoid collisions with static or dynamic obstacles. Our ASS relies on a model
predictive control (MPC) formulation to control both the lateral and
longitudinal dynamics of the vehicle. By exploiting the ability of the MPC
framework to deal with constraints, our ASS restricts the controller's
authority to intervene for lateral correction of the human operator's commands,
avoiding counter-intuitive driving experience for the human operator. Further,
we design visual feedback to enhance the operator's trust in the ASS. In
addition, we propose a novel predictive display, based on the MPC's
prediction-horizon data, to mitigate the effects of large latency in the
teleoperation system.
We tested the performance of the proposed approach on a high-fidelity vehicle
simulator in the presence of dynamic obstacles and latency.
|
Recent advances in neuroscience, such as the development of new and powerful
techniques for recording brain activity, combined with the increasing
anatomical knowledge provided by atlases and the growing understanding of
neuromodulation principles, allow studying the brain at a whole new level,
paving the way to the creation of extremely detailed effective network models
directly from observed data. Leveraging the advantages of this integrated
approach, we propose a method to infer models capable of reproducing the
complex spatio-temporal dynamics of the slow waves observed in the experimental
recordings of the cortical hemisphere of a mouse under anesthesia. To reliably
claim the good match between data and simulations, we implemented a versatile
ensemble of analysis tools, applicable to both experimental and simulated data
and capable of identifying and quantifying the spatio-temporal propagation of waves
across the cortex. In order to reproduce the observed slow wave dynamics, we
introduced an inference procedure composed of two steps: the inner and the
outer loop. In the inner loop, the parameters of a mean-field model are
optimized by likelihood maximization, exploiting the anatomical knowledge to
define connectivity priors. The outer loop explores "external" parameters,
seeking for an optimal match between the simulation outcome and the data,
relying on observables (speed, directions, and frequency of the waves) apt for
the characterization of cortical slow waves; the outer loop includes a periodic
neuro-modulation for better reproduction of the experimental recordings. We
show that our model is capable of reproducing most of the features of the
non-stationary and non-linear dynamics displayed by the biological network.
The proposed method also allows us to infer which parameter modifications are
relevant when the brain state changes, e.g., with the level of anesthesia.
|
Detecting and segmenting objects in images based on their content is one of
the most active topics in the field of computer vision. Nowadays, this problem can
be addressed using Deep Learning architectures such as Faster R-CNN or YOLO,
among others. In this paper, we study the behaviour of different configurations
of RetinaNet, Faster R-CNN and Mask R-CNN presented in Detectron2. First, we
evaluate qualitatively and quantitatively (AP) the performance of the
pre-trained models on KITTI-MOTS and MOTSChallenge datasets. We observe a
significant improvement in performance after fine-tuning these models on the
datasets of interest and optimizing hyperparameters. Finally, we run inference
in unusual situations using out-of-context datasets, and present interesting
results that help us better understand the networks.
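A minimal sketch of the starting point described above, assuming a working Detectron2 installation; the config file, score threshold, and input frame are illustrative choices rather than the paper's exact set-up:

    import cv2
    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.engine import DefaultPredictor

    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(
        "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
        "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5     # illustrative confidence threshold
    predictor = DefaultPredictor(cfg)

    image = cv2.imread("frame.png")                 # hypothetical input frame
    outputs = predictor(image)                      # pre-trained inference before fine-tuning
    print(outputs["instances"].pred_classes)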
|
Discovering the partial differential equations underlying spatio-temporal
datasets from very limited and highly noisy observations is of paramount
interest in many scientific fields. However, it remains an open question when
model discovery algorithms based on sparse regression can actually recover the
underlying physical processes. In this work, we show that the design
matrices used to infer the equations by sparse regression can violate the
irrepresentability condition (IRC) of the Lasso, even when derived from
analytical PDE solutions (i.e. without additional noise). Sparse regression
techniques which can recover the true underlying model under violated IRC
conditions are therefore required, leading to the introduction of the
randomised adaptive Lasso. We show that, once the latter is integrated within
the deep learning model discovery framework DeepMod, a wide variety of
nonlinear and chaotic canonical PDEs can be recovered: (1) at up to
$\mathcal{O}(2)$ higher noise-to-sample ratios than state-of-the-art
algorithms, and (2) with a single set of hyperparameters, which paves the road
towards truly automated model discovery.
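As a hedged illustration of sparse regression over a candidate library (synthetic data and the plain scikit-learn Lasso are used here in place of the randomised adaptive Lasso introduced in the paper):

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n = 2000
    u, u_x, u_xx = rng.normal(size=(3, n))            # stand-in derivative features
    u_t = 0.1 * u_xx - 1.0 * u * u_x + 0.01 * rng.normal(size=n)

    theta = np.column_stack([u, u_x, u_xx, u * u_x, u * u_xx])   # candidate library
    names = ["u", "u_x", "u_xx", "u*u_x", "u*u_xx"]

    coef = Lasso(alpha=1e-3, fit_intercept=False).fit(theta, u_t).coef_
    for name, c in zip(names, coef):                  # nonzero terms identify the PDE
        print(f"{name:7s} {c:+.3f}")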
|
Stars originate from the dense interstellar medium, which exhibits
filamentary structure to scales of $\sim 1$ kpc in galaxies like our Milky Way.
We explore quantitatively how much resulting large-scale correlation there is
among different stellar clusters and associations in $\textit{orbit phase
space}$, characterized here by actions and angles. As a starting point, we
identified 55 prominent stellar overdensities in the 6D space of orbit
(actions) and orbital phase (angles), among the $\sim$ 2.8 million stars with
radial velocities from Gaia EDR3 and $d \leq 800$ pc. We then explored the
orbital $\textit{phase}$ distribution of all sample stars in the same
$\textit{orbit}$ patch as any one of these 55 overdensities. We find that very
commonly numerous other distinct orbital phase overdensities exist along these
same orbits, like pearls on a string. These `pearls' range from known stellar
clusters to loose, unrecognized associations. Among orbit patches defined by
one initial orbit-phase overdensity 50% contain at least 8 additional
orbital-phase pearls of 10 cataloged members; 20% of them contain 20 additional
pearls. This is in contrast to matching orbit patches sampled from a smooth
mock catalog, or random nearby orbit patches, where there are only 2 (or 5,
respectively) comparable pearls. Our findings quantify for the first time how
common it is for star clusters and associations to form at distinct orbital
phases of nearly the same orbit. This may eventually offer a new way to probe
the 6D orbit structure of the filamentary interstellar medium.
|
Statistical models are inherently uncertain. Quantifying or at least
upper-bounding their uncertainties is vital for safety-critical systems such as
autonomous vehicles. While standard neural networks do not report this
information, several approaches exist to integrate uncertainty estimates into
them. Assessing the quality of these uncertainty estimates is not
straightforward, as no direct ground truth labels are available. Instead,
implicit statistical assessments are required. For regression, we propose to
evaluate uncertainty realism -- a strict quality criterion -- with a
Mahalanobis distance-based statistical test. An empirical evaluation reveals
the need for uncertainty measures that are appropriate to upper-bound
heavy-tailed empirical errors. Alongside, we transfer the variational U-Net
classification architecture to standard supervised image-to-image tasks. We
adapt it to the automotive domain and show that it significantly improves
uncertainty realism compared to a plain encoder-decoder model.
|
Since thin-film silicon solar cells have limited optical absorption, we
explore the effect of a nanostructured back reflector to recycle the unabsorbed
light. As a back reflector we investigate a 3D photonic band gap crystal made
from silicon that is readily integrated with the thin films. We numerically
obtain the optical properties by solving the 3D time-harmonic Maxwell equations
using the finite-element method, and model silicon with experimentally
determined optical constants. The absorption enhancement relevant for
photovoltaics is obtained by weighting the absorption spectra with the AM 1.5
standard solar spectrum. We study thin films either thicker ($L_{Si} = 2400$
nm) or much thinner ($L_{Si} = 80$ nm) than the wavelength of light. At $L_{Si}
= 2400$ nm, the 3D photonic band gap crystal enhances the spectrally averaged
($\lambda = 680$ nm to $880$ nm) silicon absorption by $2.22$x (s-pol.) to
$2.45$x (p-pol.), which exceeds the enhancement of a perfect metal back
reflector ($1.47$x to $1.56$x). The absorption is enhanced by (i) the
broadband, angle- and polarization-independent reflectivity in the 3D photonic
band gap, and (ii) the excitation of many guided modes in the film by the crystal's
surface diffraction leading to enhanced path lengths. At $L_{Si} = 80$ nm, the
photonic crystal back reflector yields a striking average absorption
enhancement of $9.15$x, much more than $0.83$x for a perfect metal, which is
due to a remarkable guided mode confined within the combined thickness of the
thin film and the photonic crystal's Bragg attenuation length. The broad
bandwidth of the 3D photonic band gap leads to the back reflector's Bragg
attenuation length being much shorter than the silicon absorption length.
Consequently, light is confined inside the thin film and the absorption
enhancements are not due to the additional thickness of the photonic crystal
back reflector.
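As a hedged note on the weighting step (the formula below is the standard convention, not quoted from the paper), the spectrally averaged absorption is typically computed as $\langle A \rangle = \int_{\lambda_1}^{\lambda_2} A(\lambda)\, S_{\mathrm{AM1.5}}(\lambda)\, \mathrm{d}\lambda \,\big/\, \int_{\lambda_1}^{\lambda_2} S_{\mathrm{AM1.5}}(\lambda)\, \mathrm{d}\lambda$ with $\lambda_1 = 680$ nm and $\lambda_2 = 880$ nm, and the quoted enhancement is the ratio of this average with and without the back reflector.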
|
Technologies that augment face-to-face interactions with a digital sense of
self have been used to support conversations. That work has employed one
homogeneous technology, either 'off-the-shelf' or with a bespoke prototype,
across all participants. Beyond speculative instances, it is unclear what
technology individuals themselves would choose, if any, to augment their social
interactions; what influence it may exert; or how use of heterogeneous devices
may affect the value of this augmentation. This is important, as the devices
that we use directly affect our behaviour, influencing affordances and how we
engage in social interactions. Through a study of 28 participants, we compared
head-mounted displays, smartphones, and smartwatches to support digital
augmentation of self during face-to-face interactions within a group. We
identified a preference among participants for head-mounted displays to support
privacy, while smartwatches and smartphones better supported conversational
events (such as grounding and repair), along with group use through
screen-sharing. Accordingly, we present software and hardware design
recommendations and user interface guidelines for integrating a digital form of
self into face-to-face conversations.
|
We briefly report the modern status of heavy quark sum rules (HQSR) based on
stability criteria, emphasizing the recent progress in determining the QCD
parameters ($\alpha_s$, $m_{c,b}$ and the gluon condensates), where their
correlations have been taken into account. The results,
$\alpha_s(M_Z)=0.1181(16)(3)$, $m_c(m_c)=1286(16)$ MeV, $m_b(m_b)=4202(7)$ MeV,
$\langle\alpha_s G^2\rangle = (6.49\pm 0.35)\times 10^{-2}$ GeV$^4$ and
$\langle g^3 G^3\rangle = (8.2\pm 1.0)$ GeV$^2\,\langle\alpha_s G^2\rangle$,
together with the ones from recent light quark sum rules, are summarized in
Table 2. One can notice that the SVZ value of $\langle\alpha_s G^2\rangle$ has
been underestimated by a factor of 1.6, that $\langle g^3 G^3\rangle$ is much
bigger than the instanton model estimate, and that the four-quark condensate,
which mixes under renormalization, is incompatible with the vacuum saturation,
which is phenomenologically violated by a factor of 2--4. The uses of HQSR for
molecule and tetraquark states are commented on.
|
Many developers and organizations implement apps for Android, the most widely
used operating system for mobile devices. Common problems developers face are
the various hardware devices, customized Android variants, and frequent
updates, forcing them to implement workarounds for the different versions and
variants of Android APIs used in practice. In this paper, we contribute the
Android Compatibility checkS dataSet (AndroidCompass) that comprises changes to
compatibility checks developers use to enforce workarounds for specific Android
versions in their apps. We extracted 80,324 changes to compatibility checks
from 1,394 apps by analyzing the version histories of 2,399 projects from the
F-Droid catalog. With AndroidCompass, we aim to provide data on when and how
developers introduced or evolved workarounds to handle Android
incompatibilities. We hope that AndroidCompass fosters research to deal with
version incompatibilities, address potential design flaws, identify security
concerns, and help derive solutions for other developers, among others: for
instance, helping researchers to develop and evaluate novel techniques, and
helping Android app as well as operating-system developers in engineering
their software.
|
With recent advances in distantly supervised (DS) relation extraction (RE),
considerable attention has been attracted to leveraging multi-instance learning (MIL)
to distill high-quality supervision from the noisy DS. Here, we go beyond label
noise and identify the key bottleneck of DS-MIL to be its low data utilization:
as high-quality supervision is refined by MIL, MIL abandons a large number of
training instances, which leads to low data utilization and hinders model
training from having abundant supervision. In this paper, we propose
collaborative adversarial training to improve the data utilization, which
coordinates virtual adversarial training (VAT) and adversarial training (AT) at
different levels. Specifically, since VAT is label-free, we employ the
instance-level VAT to recycle instances abandoned by MIL. Besides, we deploy AT
at the bag level to unleash the full potential of the high-quality supervision
obtained by MIL. Our proposed method brings consistent improvements (~5 absolute
AUC score) to the previous state of the art, which verifies the importance of
the data utilization issue and the effectiveness of our method.
|
In this paper, we study the Ricci flow on a closed manifold of dimension $n
\ge 4$ and a finite time interval $[0,T)~(T < \infty)$ on which the scalar
curvature is uniformly bounded. We prove that if such a flow of dimension $4 \le
n \le 7$ has finite time singularities, then every blow-up sequence of a
locally Type I singularity has a certain property. Here, a locally Type I
singularity is understood in the sense defined by Buzano and Di-Matteo.
|
Contextuality is a property of systems of random variables. The
identity of a random variable in a system is determined by its joint
distribution with all other random variables in the same context. When context
changes, a variable measuring some property is instantly replaced by another
random variable measuring the same property, or instantly disappears if this
property is not measured in the new context. This replacement/disappearance
requires no action, signaling, or disturbance, although it does not exclude
them. The difference between two random variables measuring the same property
in different contexts is measured by their maximal coupling, and the system is
noncontextual if one of its overall couplings has these maximal couplings as
its marginals.
|
Cued Speech (CS) is a visual communication system for deaf or hearing-impaired
people. It combines lip movements with hand cues to obtain a complete phonetic
repertoire. Current deep learning based methods for automatic CS recognition
suffer from a common problem, which is data scarcity. Until now, there have
been only two public single-speaker datasets, for French (238 sentences) and
British English (97 sentences). In this work, we propose a
cross-modal knowledge distillation method with teacher-student structure, which
transfers audio speech information to CS to overcome the limited data problem.
Firstly, we pretrain a teacher model for CS recognition with a large amount of
open source audio speech data, and simultaneously pretrain the feature
extractors for lips and hands using CS data. Then, we distill the knowledge
from teacher model to the student model with frame-level and sequence-level
distillation strategies. Importantly, for frame-level distillation, we exploit
multi-task learning to weigh the losses automatically and obtain the balance
coefficient.
Besides, we establish a five-speaker British English CS dataset for the first
time. The proposed method is evaluated on French and British English CS
datasets, showing superior CS recognition performance to the state-of-the-art
(SOTA) by a large margin.
|
Researchers and practitioners increasingly consider a human-centered
perspective in the design of machine learning-based applications, especially in
the context of Explainable Artificial Intelligence (XAI). However, clear
methodological guidance in this context is still missing because each new
situation seems to require a new setup, which also creates different
methodological challenges. Existing case study collections in XAI inspired us;
therefore, we propose a similar collection of case studies for human-centered
XAI that can provide methodological guidance or inspiration for others. We want
to showcase our idea in this workshop by describing three case studies from our
research. These case studies are selected to highlight how apparently small
differences require a different set of methods and considerations. With this
workshop contribution, we would like to engage in a discussion on how such a
collection of case studies can provide methodological guidance and critical
reflection.
|
Quantum annealing solves combinatorial optimization problems by finding the
energetic ground states of an embedded Hamiltonian. However, quantum annealing
dynamics under the embedded Hamiltonian may violate the principles of adiabatic
evolution and generate excitations that correspond to errors in the computed
solution. Here we empirically benchmark the probability of chain breaks and
identify sweet spots for solving a suite of embedded Hamiltonians. We further
correlate the physical location of chain breaks in the quantum annealing
hardware with the underlying embedding technique and use these localized rates
in tailored post-processing strategies. Our results demonstrate how to use
characterization of the quantum annealing hardware to tune the embedded
Hamiltonian and remove computational errors.
|
The dynamics of an open quantum system with balanced gain and loss is not
described by a PT-symmetric Hamiltonian but rather by Lindblad operators.
Nevertheless the phenomenon of PT-symmetry breaking and the impact of
exceptional points can be observed in the Lindbladean dynamics. Here we briefly
review the development of PT symmetry in quantum mechanics, and the
characterisation of PT-symmetry breaking in open quantum systems in terms of
the behaviour of the speed of evolution of the state.
|
Let X and Y be oriented topological manifolds of dimension n + 2, and let K
and J be connected, locally-flat, oriented, n-dimensional submanifolds of X and
Y. We show that up to orientation preserving homeomorphism there is a
well-defined connected sum K # J in X # Y. For n = 1, the proof is classical,
relying on results of Rado and Moise. For dimensions n = 3 and n > 5, results
of Edwards-Kirby, Kirby, and Kirby-Siebenmann concerning higher dimensional
topological manifolds are required. For n = 2, 4, and 5, Freedman and Quinn's
work on topological four-manifolds is needed. The truth of the corresponding
statement for higher codimension seems to be unknown.
|
Chiral superconductors are expected to carry a spontaneous, chiral and
perpetual current along the sample edge. However, despite the availability of
several candidate materials, such a current has not been observed in
experiments. In this article, we suggest an alternative probe in the form of
impurity-induced chiral currents. We first demonstrate that a single
non-magnetic impurity induces an encircling chiral current. Its direction
depends on the chirality of the order parameter and the sign of the impurity
potential. Building on this observation, we consider the case of multiple
impurities, e.g., realized as adatoms deposited on the surface of a candidate
chiral superconductor. We contrast the response that is obtained in two cases:
(a) when the impurities are all identical in sign and (b) when the impurities
have mixed positive and negative signs. The former leads to coherent currents
within the sample, arising from the fusion of individual current loops. The
latter produces loops of random chirality that lead to incoherent local
currents. These two scenarios can be distinguished by measuring the induced
magnetic field using recent probes such as diamond NV centres. We argue that
impurity-induced currents may be easier to observe than edge currents, as they
can be tuned by varying impurity strength and concentration. We demonstrate
these results using a toy model for $p_x \pm i p_y$ superconductivity on a
square lattice. We develop an improved scheme for Bogoliubov-de Gennes (BdG)
simulations where both the order parameter as well as the magnetic field are
determined self-consistently.
|
In this paper we are interested in positive classical solutions of
\begin{equation} \label{eqx} \left\{\begin{array}{ll} -\Delta u = a(x) u^{p-1}
& \mbox{ in } \Omega, \\ u>0 & \mbox{ in } \Omega, \\ u= 0 & \mbox{ on } \partial\Omega,
\end{array}\right. \end{equation} where $\Omega$ is a bounded annular domain
(not necessarily an annulus) in $\mathbb{R}^N$ $(N \ge 3)$
and $ a(x)$ is a nonnegative continuous function. We show the existence of a
classical positive solution for a range of supercritical values of $p$ when the
problem enjoys certain mild symmetry and monotonicity conditions. As a
consequence of our results, we shall show that (\ref{eqx}) has
$\Bigl\lfloor\frac{N}{2} \Bigr\rfloor$ (the floor of $\frac{N}{2}$) positive
nonradial solutions when $ a(x)=1$ and $\Omega$ is an annulus with certain
assumptions on the radii. We also obtain the existence of positive solutions in
the case of toroidal domains. Our approach is based on a new variational
principle that allows one to deal with supercritical problems variationally by
limiting the corresponding functional on a proper convex subset instead of the
whole space at the expense of a mild invariance property.
|
The L-subshell ionization mechanism is studied in an ultra-thin osmium target
bombarded by 4-6 MeV/u fluorine ions. Multiple ionization effects in the
collisions are considered through the change of fluorescence and Coster-Kronig
yields while determining L-subshell ionization cross sections from L-line x-ray
production cross sections. The L-subshell ionization, as well as L-shell x-ray
production cross sections so obtained, are compared with various theoretical
approximations. The Coulomb direct ionization contribution is studied by (i)
the relativistic semi-classical approximations (RSCA), (ii) the shellwise local
plasma approximation (SLPA), and (iii) the ECUSAR theory, along with the
inclusion of the vacancy sharing among the subshells by the coupled-states
model (CSM) and the electron capture (EC) by a standard formalism. We find that
the ECUSAR-CSM-EC describes the measured excitation function curves the best.
However, the theoretical calculations are still about a factor of two smaller
than the measured values. Such differences are resolved by re-evaluating the
fluorescence and the Coster-Kronig yields. This work demonstrates that, in the
present energy range, the heavy-ion induced inner-shell ionization of heavy
atoms can be understood by combining the basic mechanisms of the direct Coulomb
ionization, the electron capture, the multiple ionization, and the vacancy
sharing among subshells, together with optimized atomic parameters.
|
The theory of path homology for digraphs was developed by Alexander
Grigor'yan, Yong Lin, Yuri Muranov, and Shing-Tung Yau. In this paper, we
generalize the path homology for digraphs. We prove that for any digraph $G$,
any $t\geq 0$, any $0\leq q\leq 2t$, and any $(2t+1)$-dimensional element
$\alpha$ in the differential algebra on the set of the vertices, we always have
an $(\alpha,q)$-path homology for $G$. In particular, if $t=0$, then the
$(\alpha,0)$-path homology gives the weighted path homology for vertex-weighted
digraphs.
|
Cross-validation is a well-known and widely used bandwidth selection method
in nonparametric regression estimation. However, this technique has two
remarkable drawbacks: (i) the large variability of the selected bandwidths, and
(ii) the inability to provide results in a reasonable time for very large
sample sizes. To overcome these problems, bagging cross-validation bandwidths
are analyzed in this paper. This approach consists in computing the
cross-validation bandwidths for a finite number of subsamples and then
rescaling the averaged smoothing parameters to the original sample size. Under
a random-design regression model, asymptotic expressions up to a second-order
for the bias and variance of the leave-one-out cross-validation bandwidth for
the Nadaraya--Watson estimator are obtained. Subsequently, the asymptotic bias
and variance and the limit distribution are derived for the bagged
cross-validation selector. Suitable choices of the number of subsamples and the
subsample size lead to an $n^{-1/2}$ rate for the convergence in distribution
of the bagging cross-validation selector, outperforming the rate $n^{-3/10}$ of
leave-one-out cross-validation. Several simulations and an illustration on a
real dataset related to the COVID-19 pandemic show the behavior of our proposal
and its better performance, in terms of statistical efficiency and computing
time, when compared to leave-one-out cross-validation.
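A minimal sketch of the bagged cross-validation bandwidth idea, assuming a Gaussian-kernel Nadaraya-Watson estimator and the standard n^{-1/5} bandwidth rate for the rescaling step (illustrative data and grid, not the paper's second-order analysis):

    import numpy as np

    def cv_bandwidth(x, y, grid):
        # Leave-one-out cross-validation for the Nadaraya-Watson estimator (Gaussian kernel).
        best_h, best_score = grid[0], np.inf
        for h in grid:
            w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
            np.fill_diagonal(w, 0.0)                  # leave each point out of its own fit
            yhat = w @ y / w.sum(axis=1)
            score = np.mean((y - yhat) ** 2)
            if score < best_score:
                best_h, best_score = h, score
        return best_h

    rng = np.random.default_rng(1)
    n, m, r = 5000, 500, 20                           # full sample, subsample size, subsamples
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=n)
    grid = np.linspace(0.01, 0.3, 30)

    subsample_bandwidths = []
    for _ in range(r):
        idx = rng.choice(n, size=m, replace=False)
        subsample_bandwidths.append(cv_bandwidth(x[idx], y[idx], grid))

    h_bagged = np.mean(subsample_bandwidths) * (m / n) ** (1 / 5)   # rescale m -> n
    print(f"bagged cross-validation bandwidth: {h_bagged:.4f}")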
|
In this study, we focus on identifying the solution and an unknown
space-dependent coefficient in a space-time fractional differential equation
by employing the fractional Taylor series method. The substantial advantage of
this method is that it does not require any over-measured data; consequently,
we determine the solution and the unknown coefficient more precisely. The
presented examples illustrate that the outcomes of this method are in close
agreement with the exact solutions of the corresponding problems. Moreover, it
can be implemented and applied effectively compared with other methods.
|
The IceCube collaboration relies on GPU compute for many of its needs,
including ray tracing simulation and machine learning activities. GPUs are
however still a relatively scarce commodity in the scientific resource provider
community, so we expanded the available resource pool with GPUs provisioned
from the commercial Cloud providers. The provisioned resources were fully
integrated into the normal IceCube workload management system through the Open
Science Grid (OSG) infrastructure and used CloudBank for budget management. The
result was an approximate doubling of GPU wall hours used by IceCube over a
period of 2 weeks, adding over 3.1 fp32 EFLOP hours for a price tag of about
$58k. This paper describes the setup used and the operational experience.
|
Natural Language Processing (NLP) relies heavily on training data.
Transformers, as they have gotten bigger, have required massive amounts of
training data. To satisfy this requirement, text augmentation should be
considered as a way to expand the current dataset and help models generalize.
One text augmentation technique we examine is translation augmentation. We take an
English sentence and translate it to another language before translating it
back to English. In this paper, we look at the effect of 108 different language
back translations on various metrics and text embeddings.
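A minimal sketch of one back-translation round, assuming the Helsinki-NLP MarianMT English-French models from the transformers library (the paper's own translation set-up spans 108 languages and is not reproduced here):

    from transformers import MarianMTModel, MarianTokenizer

    def translate(texts, model_name):
        # Translate a list of sentences with a pretrained MarianMT model.
        tokenizer = MarianTokenizer.from_pretrained(model_name)
        model = MarianMTModel.from_pretrained(model_name)
        batch = tokenizer(texts, return_tensors="pt", padding=True)
        generated = model.generate(**batch)
        return tokenizer.batch_decode(generated, skip_special_tokens=True)

    sentence = ["The quick brown fox jumps over the lazy dog."]
    french = translate(sentence, "Helsinki-NLP/opus-mt-en-fr")      # English -> French
    augmented = translate(french, "Helsinki-NLP/opus-mt-fr-en")     # French -> English
    print(augmented)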
|
Hand pose estimation is a crucial part of a wide range of augmented reality
and human-computer interaction applications. Predicting the 3D hand pose from a
single RGB image is challenging due to occlusion and depth ambiguities.
GCN-based (Graph Convolutional Networks) methods exploit the structural
relationship similarity between graphs and hand joints to model kinematic
dependencies between joints. These techniques use predefined or globally
learned joint relationships, which may fail to capture pose-dependent
constraints. To address this problem, we propose a two-stage GCN-based
framework that learns per-pose relationship constraints. Specifically, the
first phase quantizes the 2D/3D space to classify the joints into 2D/3D blocks
based on their locality. This spatial dependency information guides this phase
to estimate reliable 2D and 3D poses. The second stage further improves the 3D
estimation through a GCN-based module that uses an adaptive nearest neighbor
algorithm to determine joint relationships. Extensive experiments show that our
multi-stage GCN approach yields an efficient model that produces accurate 2D/3D
hand poses and outperforms the state-of-the-art on two public datasets.
|
In many-body quantum systems with spatially local interactions, quantum
information propagates with a finite velocity, reminiscent of the ``light cone"
of relativity. In systems with long-range interactions which decay with
distance $r$ as $1/r^\alpha$, however, there are multiple light cones which
control different information theoretic tasks. We show an optimal (up to
logarithms) ``Frobenius light cone" obeying $t\sim r^{\min(\alpha-1,1)}$ for
$\alpha>1$ in one-dimensional power-law interacting systems with finite local
dimension: this controls, among other physical properties, the butterfly
velocity characterizing many-body chaos and operator growth. We construct an
explicit random Hamiltonian protocol that saturates the bound and settles the
optimal Frobenius light cone in one dimension. We partially extend our
constraints on the Frobenius light cone to several operator $p$-norms, and
show that Lieb-Robinson bounds can be saturated in at most an exponentially
small $e^{-\Omega(r)}$ fraction of the many-body Hilbert space.
|
Hydrodynamic problems with stagnation points are of particular importance in
fluid mechanics as they allow the study and investigation of elongational flows. In
this article, the uniaxial elongational flow appearing at the surface of a
viscoelastic drop and its role on the deformation of the droplet at low
inertial regimes is studied. In studies related to viscoelastic droplets
falling/rising in an immiscible Newtonian fluid, it is well known that by
increasing the Deborah number (the ratio of the relaxation time of the interior
fluid to a reference time scale) the droplet might lose its sphericity and
develop a dimple at the rear end. In this work, the drop deformation is
investigated in detail to study the reason behind this transformation. We will
show that as the contributions of elastic and inertial forces are increased,
the stagnation points at the rear and front sides of the droplet expand to
create a region of elongation-dominated flow. At this stage, due to a
combined effect of the shear thickening behavior of the elongational viscosity
in viscoelastic fluids and the contribution of the inertial force, the interior
phase is squeezed and consequently the droplet takes a shape similar to an
oblate spheroid. As these non-linear forces are increased further, an additional
circular stagnation line appears on the droplet surface in the external field,
pulling the droplet surface outward and therefore creating a dimple shape at
the rear end. Furthermore, the influence of inertia and viscoelastic
properties on the motion, the drag coefficient, and the terminal velocity of
drops is also studied.
|
We prove sharp $L^p$ regularity results for a class of generalized Radon
transforms for families of curves in a three-dimensional manifold associated to
a canonical relation with fold and blowdown singularities. The proof relies on
decoupling inequalities by Wolff and Bourgain-Demeter for plate decompositions
of thin neighborhoods of cones and $L^2$ estimates for related oscillatory
integrals.
|