The increasing complexity of modern robotic systems and the environments they
operate in necessitates the formal consideration of safety in the presence of
imperfect measurements. In this paper we propose a rigorous framework for
safety-critical control of systems with erroneous state estimates. We develop
this framework by leveraging Control Barrier Functions (CBFs) and unifying the
method of Backup Sets for synthesizing control invariant sets with robustness
requirements -- the end result is the synthesis of Measurement-Robust Control
Barrier Functions (MR-CBFs). This provides theoretical guarantees on safe
behavior in the presence of imperfect measurements and improved robustness over
standard CBF approaches. We demonstrate the efficacy of this framework both in
simulation and experimentally on a Segway platform using an onboard
stereo-vision camera for state estimation.
|
We collect data at all frequencies for the new sources classified as unknown
active galactic nuclei (AGNs) in the latest Burst Alert Telescope (BAT) all-sky
hard X-ray catalog. Focusing on the 36 sources with measured redshift, we
compute their spectral energy distribution (SED) from radio to $\gamma$-rays
with the aim to classify these objects. We apply emission models that attempt
to reproduce the obtained SEDs, including: i) a standard thin accretion disk
together with an obscuring torus and an X-ray corona; ii) a two-temperature,
thick advection-dominated flow; iii) an obscured AGN model, accounting for
absorption along the line of sight at kiloelectronvolt energies and in the
optical band; and iv) a phenomenological model to describe the jet emission in
blazar-like objects. We integrate the models with the SWIRE template libraries
to account for the emission of the host galaxy. For every source we found good
agreement between the data and our model. Considering that the sources were
selected in the hard X-ray band, which is rather unaffected by absorption, we
expected and found a large fraction of absorbed radio-quiet AGNs (31 out of 36)
and some additional rare radio-loud sources (5 out of 36), since the jet
emission in hard X-rays is important for aligned jets owing to the boost
produced by the beaming effect. Our work confirms the hypothesis that a number
of galaxies whose optical spectra lack AGN emission features host an obscured
active nucleus. The approach we used proved efficient at rapidly identifying
objects that commonly used methods were unable to classify.
|
Most asteroids are somewhat elongated and have non-zero lightcurve
amplitudes. Such asteroids can be detected in large-scale sky surveys even if
their mean magnitudes are fainter than the stated sensitivity limits. We
explore the detection of elongated asteroids under a set of idealized but
useful approximations. We find that objects up to 1 magnitude fainter than a
survey's sensitivity limit are likely to be detected, and that the effect is
most pronounced for asteroids with lightcurve amplitudes of 0.1-0.4 mag. This
imposes a bias on the derived size and shape distributions of the population
that must be properly accounted for.
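As a rough illustration of the effect described above, the following sketch estimates, by Monte Carlo, the chance that a single observation of a sinusoidally varying asteroid lands brighter than the survey limit. This is an idealized toy model with illustrative numbers, not the paper's calculation:

```python
import math
import random

def detection_probability(mean_mag, amplitude, limit, n_samples=100_000, seed=0):
    """Estimate the chance that one observation of a sinusoidally varying
    asteroid is brighter than a survey's limiting magnitude.

    mean_mag:  mean apparent magnitude of the asteroid
    amplitude: peak-to-peak lightcurve amplitude in magnitudes
    limit:     survey sensitivity limit (fainter = numerically larger)
    """
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_samples):
        phase = rng.uniform(0.0, 2.0 * math.pi)  # random rotational phase
        mag = mean_mag + 0.5 * amplitude * math.sin(phase)
        if mag <= limit:  # brighter than (or equal to) the limit
            detected += 1
    return detected / n_samples

# An object 0.2 mag fainter than the limit with a 0.6 mag amplitude
# still clears the limit in a sizeable fraction of observations.
p = detection_probability(mean_mag=20.2, amplitude=0.6, limit=20.0)
```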
|
This paper investigates the problem of online statistical inference of model
parameters in stochastic optimization problems via the Kiefer-Wolfowitz
algorithm with random search directions. We first present the asymptotic
distribution for the Polyak-Ruppert-averaging type Kiefer-Wolfowitz (AKW)
estimators, whose asymptotic covariance matrices depend on the function-value
query complexity and the distribution of search directions. The distributional
result reflects the trade-off between statistical efficiency and function query
complexity. We further analyze the choices of random search directions to
minimize the asymptotic covariance matrix, and conclude that the optimal search
direction depends on the optimality criteria with respect to different summary
statistics of the Fisher information matrix. Based on the asymptotic
distribution result, we conduct online statistical inference by providing two
construction procedures of valid confidence intervals. We provide numerical
experiments verifying our theoretical results with the practical effectiveness
of the procedures.
|
Adversarial training is actively studied for learning robust models against
adversarial examples. A recent study found that adversarially trained models
suffer degraded generalization performance on adversarial examples when their
weight loss landscape, i.e., the change of the loss with respect to the
weights, is sharp. It has been shown experimentally that adversarial training
sharpens the weight loss landscape, but this phenomenon had not been clarified
theoretically; we therefore analyze it in this paper. As a first step, we
prove that adversarial training with
the L2 norm constraints sharpens the weight loss landscape in the linear
logistic regression model. Our analysis reveals that the sharpness of the
weight loss landscape is caused by the noise aligned in the direction of
increasing the loss, which is used in adversarial training. We theoretically
and experimentally confirm that the weight loss landscape becomes sharper as
the magnitude of the noise of adversarial training increases in the linear
logistic regression model. Moreover, we experimentally confirm the same
phenomena in ResNet18 with softmax as a more general case.
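The "noise aligned in the direction of increasing the loss" has a closed form for linear logistic regression; a minimal sketch (the weights, inputs, and eps value are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # y in {-1, +1}; standard logistic loss for a linear model
    return float(np.log1p(np.exp(-y * np.dot(w, x))))

def l2_adversarial_example(w, x, y, eps):
    """Worst-case input perturbation under an L2-norm budget eps.
    For linear logistic regression the input gradient points along
    -y * w, so the optimal L2-bounded attack has a closed form.
    """
    grad = -y * sigmoid(-y * np.dot(w, x)) * w   # d(loss)/d(x)
    return x + eps * grad / np.linalg.norm(grad)

w = np.array([1.0, -0.5])
x = np.array([0.3, 0.2])
y = 1.0
x_adv = l2_adversarial_example(w, x, y, eps=0.1)
# the perturbation is aligned with the loss-increasing direction
```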
|
Cross-lingual text classification aims at training a classifier on the source
language and transferring the knowledge to target languages, which is very
useful for low-resource languages. Recent multilingual pretrained language
models (mPLMs) achieve impressive results in cross-lingual classification
tasks, but rarely consider factors beyond semantic similarity, causing
performance
degradation between some language pairs. In this paper we propose a simple yet
effective method to incorporate heterogeneous information within and across
languages for cross-lingual text classification using graph convolutional
networks (GCN). In particular, we construct a heterogeneous graph by treating
documents and words as nodes, and linking nodes with different relations, which
include part-of-speech roles, semantic similarity, and document translations.
Extensive experiments show that our graph-based method significantly
outperforms state-of-the-art models on all tasks, and also achieves consistent
performance gain over baselines in low-resource settings where external tools
like translators are unavailable.
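For readers unfamiliar with GCNs, a minimal sketch of a single graph-convolution layer over a toy document-word graph, using the standard Kipf-Welling propagation rule (the paper's full architecture is not reproduced here):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer, H' = ReLU(D^{-1/2} A_hat D^{-1/2} H W),
    with self-loops added to the adjacency matrix.
    """
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    norm_adj = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm_adj @ feats @ weight, 0.0)  # ReLU

# 3 nodes (e.g. two documents linked through a shared word), 2 features
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.eye(3, 2)
weight = np.ones((2, 2))
out = gcn_layer(adj, feats, weight)
```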
|
We consider the problem of computing the Fr\'echet distance between two
curves for which the exact locations of the vertices are unknown. Each vertex
may be placed in a given uncertainty region for that vertex, and the objective
is to place vertices so as to minimise the Fr\'echet distance. This problem was
recently shown to be NP-hard in 2D, and it is unclear how to compute an optimal
vertex placement at all.
We present the first general algorithmic framework for this problem. We prove
that it results in a polynomial-time algorithm for curves in 1D with intervals
as uncertainty regions. In contrast, we show that the problem is NP-hard in 1D
in the case that vertices are placed to maximise the Fr\'echet distance.
We also study the weak Fr\'echet distance between uncertain curves. While
finding the optimal placement of vertices seems more difficult than the regular
Fr\'echet distance -- and indeed we can easily prove that the problem is
NP-hard in 2D -- the optimal placement of vertices in 1D can be computed in
polynomial time. Finally, we investigate the discrete weak Fr\'echet distance,
for which, somewhat surprisingly, the problem is NP-hard already in 1D.
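For context, the certain-vertex discrete Fréchet distance is computable by a classic O(nm) dynamic program; a minimal sketch (the uncertain version discussed above additionally optimizes over vertex placements):

```python
from functools import lru_cache

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two polygonal curves given as
    vertex lists, via the standard coupling dynamic program.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(p[i], q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(p) - 1, len(q) - 1)

# two parallel curves at vertical offset 1
p = [(0, 0), (1, 0), (2, 0)]
q = [(0, 1), (1, 1), (2, 1)]
d = discrete_frechet(p, q)
```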
|
While reinforcement learning (RL) is gaining popularity in energy systems
control, its real-world applications are limited due to the fact that the
actions from learned policies may not satisfy functional requirements or be
feasible for the underlying physical system. In this work, we propose PROjected
Feasibility (PROF), a method to enforce convex operational constraints within
neural policies. Specifically, we incorporate a differentiable projection layer
within a neural network-based policy to enforce that all learned actions are
feasible. We then update the policy end-to-end by propagating gradients through
this differentiable projection layer, making the policy cognizant of the
operational constraints. We demonstrate our method on two applications:
energy-efficient building operation and inverter control. In the building
operation setting, we show that PROF maintains thermal comfort requirements
while improving energy efficiency by 4% over state-of-the-art methods. In the
inverter control setting, PROF perfectly satisfies voltage constraints on the
IEEE 37-bus feeder system, as it learns to curtail as little renewable energy
as possible within its safety set.
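The key ingredient is a projection onto the feasible set; for box constraints that projection reduces to elementwise clipping. A minimal sketch of this simplest special case (PROF itself handles general convex constraint sets and differentiates through the projection):

```python
import numpy as np

def project_box(action, lower, upper):
    """Euclidean projection onto the box lower <= a <= upper.
    For a box, the projection is elementwise clipping; its
    (sub)gradient is 1 inside the box and 0 at clipped entries,
    so gradients can flow through it during policy training.
    """
    return np.minimum(np.maximum(action, lower), upper)

raw_action = np.array([1.5, -0.2, 0.7])     # unconstrained policy output
safe_action = project_box(raw_action, lower=0.0, upper=1.0)
```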
|
The chiral magnetic effect (CME) is a novel transport phenomenon, arising
from the interplay between quantum anomalies and strong magnetic fields in
chiral systems. In high-energy nuclear collisions, the CME may survive the
expansion of the quark-gluon plasma fireball and be detected in experiments.
Over the past decade, the experimental searches for the CME have aroused
extensive interest at the Relativistic Heavy Ion Collider (RHIC) and the Large
Hadron Collider (LHC). The main goal of this article is to investigate three
pertinent experimental approaches: the $\gamma$ correlator, the $R$ correlator
and the signed balance functions. We will exploit both simple Monte Carlo
simulations and a realistic event generator (EBE-AVFD) to verify the
equivalence in the kernel-component observables among these methods and to
ascertain their sensitivities to the CME signal for the isobaric collisions at
RHIC.
|
AN Cam is a little-studied eclipsing binary containing somewhat evolved
components in an orbit with a period of 21.0 d and an eccentricity of 0.47. A
spectroscopic orbit based on photoelectric radial velocities was published in
1977. AN Cam has been observed using the TESS satellite in three sectors: the
data were obtained in long-cadence mode and cover nine eclipses. By modelling
these data and published radial velocities we obtain masses of 1.380 +/- 0.021
Msun and 1.402 +/- 0.025 Msun, and radii of 2.159 +/- 0.012 Rsun and 2.646 +/-
0.014 Rsun. We also derive a precise orbital ephemeris from these data and
recent times of minimum light, but find that the older times of minimum light
cannot be fitted assuming a constant orbital period. This could be caused by
astrophysical or instrumental effects; forthcoming TESS observations will help
the investigation of this issue. We use the Gaia EDR3 parallax and
optical/infrared apparent magnitudes to measure effective temperatures of 6050
+/- 150 K and 5750 +/- 150 K: the primary star is hotter but smaller and less
massive than its companion. A comparison with theoretical models indicates that
the system has an approximately solar chemical composition and an age of 3.3
Gyr. Despite the similarity of their masses the two stars are in different
evolutionary states: the primary is near the end of its main-sequence lifetime
and the secondary is now a subgiant. AN Cam is a promising candidate for
constraining the strength of convective core overshooting in 1.4 Msun stars.
|
We develop a framework to study posterior contraction rates in sparse high
dimensional generalized linear models (GLM). We introduce a new family of GLMs,
denoted clipped GLMs, which subsumes many standard GLMs and makes minor
modifications to the rest. With a sparsity-inducing prior on the regression
coefficients, we delineate sufficient conditions on the true data-generating
density that lead to minimax-optimal rates of posterior contraction of the
coefficients in $\ell_1$ norm. Our key contribution is to develop sufficient
conditions commensurate with the geometry of the clipped GLM family, propose
prior distributions which do not require any knowledge of the true parameters
and avoid any assumption on the growth rate of the true coefficient vector.
|
Although abbreviations are fairly common in handwritten sources, particularly
in medieval and modern Western manuscripts, previous research dealing with
computational approaches to their expansion is scarce. Yet abbreviations
present particular challenges to computational approaches such as handwritten
text recognition and natural language processing tasks. Often, pre-processing
ultimately aims to lead from a digitised image of the source to a normalised
text, which includes expansion of the abbreviations. We explore different
setups to obtain such a normalised text, either directly, by training HTR
engines on normalised (i.e., expanded, disabbreviated) text, or by decomposing
the process into discrete steps, each making use of specialist models for
recognition, word segmentation and normalisation. The case studies considered
here are drawn from the medieval Latin tradition.
|
A 1d random geometric graph (1d RGG) is built by joining a random sample of
$n$ points from an interval of the real line with probability $p$. We count the
number of $k$-hop paths between two vertices of the graph in the case where the
space is the 1d interval $[0,1]$. We show how the $k$-hop path count between
two vertices at Euclidean distance $|x-y|$ is in bijection with the volume
enclosed by a uniformly random $d$-dimensional lattice path joining the corners
of a $(k-1)$-dimensional hyperrectangular lattice. We are able to provide the
probability generating function and distribution of this $k$-hop path count as
a sum over lattice paths, incorporating the idea of restricted integer
partitions with a limited number of parts. We thereby demonstrate and describe
an important link between spatial random graphs and lattice path
combinatorics, where the $d$-dimensional lattice paths correspond to spatial
permutations of the geometric points on the line.
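A brute-force sanity check of the $k$-hop path count, under one simple reading of the connection model (each pair of sampled points joined independently with probability $p$; this reading is an assumption for illustration, and the paper's precise connection rule may differ):

```python
import itertools
import random

def count_k_hop_paths(adj, s, t, k):
    """Count simple paths with exactly k edges from s to t by brute
    force over the k-1 intermediate vertices. Exponential in k, so
    only a sanity-check companion to closed-form results.
    """
    n = len(adj)
    count = 0
    for mids in itertools.product(range(n), repeat=k - 1):
        walk = (s,) + mids + (t,)
        if len(set(walk)) != len(walk):
            continue  # skip walks with repeated vertices
        if all(adj[walk[i]][walk[i + 1]] for i in range(k)):
            count += 1
    return count

# a toy random graph: join each pair independently with probability p
rng = random.Random(1)
n, p = 8, 0.5
adj = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p:
            adj[i][j] = adj[j][i] = 1
paths_3hop = count_k_hop_paths(adj, 0, n - 1, 3)
```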
|
We experimentally demonstrate the steady-state generation of propagating
Wigner-negative states from a continuously driven superconducting qubit. We
reconstruct the Wigner function of the radiation emitted into propagating modes
defined by their temporal envelopes, using digital filtering. For an optimized
temporal filter, we observe a large Wigner logarithmic negativity, in excess of
0.08, in agreement with theory. The fidelity between the theoretical
predictions and the states generated experimentally is up to 99%, reaching
state-of-the-art realizations in the microwave frequency domain. Our results
provide a new way to generate and control nonclassical states, and may enable
promising applications such as quantum networks and quantum computation based
on waveguide quantum electrodynamics.
|
We use numerical bootstrap techniques to study correlation functions of
scalars transforming in the adjoint representation of $SU(N)$ in three
dimensions. We obtain upper bounds on operator dimensions for various
representations and study their dependence on $N$. We discover new families of
kinks, one of which could be related to bosonic QED${}_3$. We then specialize
to the cases $N=3,4$, which have been conjectured to describe a phase
transition respectively in the ferromagnetic complex projective model $CP^2$
and the antiferromagnetic complex projective model $ACP^{3}$. Lattice
simulations provide strong evidence for the existence of a second order phase
transition, while an effective field theory approach does not predict any fixed
point. We identify a set of assumptions that constrain operator dimensions to
small regions overlapping with the lattice predictions.
|
Electronic health records (EHRs), digital collections of patient healthcare
events and observations, are ubiquitous in medicine and critical to healthcare
delivery, operations, and research. Despite this central role, EHRs are
notoriously difficult to process automatically. Well over half of the
information stored within EHRs is in the form of unstructured text (e.g.
provider notes, operation reports) and remains largely untapped for secondary
use. Recently, however, newer neural network and deep learning approaches to
Natural Language Processing (NLP) have made considerable advances,
outperforming traditional statistical and rule-based systems on a variety of
tasks. In this survey paper, we summarize current neural NLP methods for EHR
applications. We focus on a broad scope of tasks, namely, classification and
prediction, word embeddings, extraction, generation, and other topics such as
question answering, phenotyping, knowledge graphs, medical dialogue,
multilinguality, interpretability, etc.
|
This paper is a primer on cryptographic accumulators and how to apply them
practically. A cryptographic accumulator is a space- and time-efficient data
structure used for set-membership tests. Since it is possible to represent any
computational problem where the answer is yes or no as a set-membership
problem, cryptographic accumulators are invaluable data structures in computer
science and engineering. But, to the best of our knowledge, there is neither a
concise survey comparing and contrasting various types of accumulators nor a
guide for how to apply the most appropriate one for a given application.
Therefore, we address that gap by describing cryptographic accumulators while
presenting their fundamental and so-called optional properties. We discuss the
effects of each property on the given accumulator's performance in terms of
space and time complexity, as well as communication overhead.
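As a concrete example of a set-membership test, here is a toy RSA-style accumulator. The modulus below is far too small for any security, and elements are pre-chosen small primes rather than values hashed to primes, purely for illustration:

```python
def accumulate(g, modulus, elements):
    """Toy RSA-style accumulator: the accumulated value is
    g ** (x1 * x2 * ... * xk) mod N, built by repeated exponentiation.
    """
    acc = g
    for x in elements:
        acc = pow(acc, x, modulus)
    return acc

def witness(g, modulus, elements, member):
    # the witness accumulates every element except `member`
    return accumulate(g, modulus, [x for x in elements if x != member])

def verify(acc, wit, member, modulus):
    # membership holds iff wit^member recreates the accumulator
    return pow(wit, member, modulus) == acc

N = 3233            # toy modulus (61 * 53); never use sizes like this
g = 2
members = [3, 5, 7]
acc = accumulate(g, N, members)
w = witness(g, N, members, 5)
ok = verify(acc, w, 5, N)       # membership test for element 5
```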
|
The combination of topology and quantum criticality can give rise to an
exotic mix of counterintuitive effects. Here, we show that unexpected
topological properties take place in a paradigmatic strongly-correlated
Hamiltonian: the 1D extended Bose-Hubbard model. In particular, we reveal the
presence of two distinct topological quantum critical points with localized
edge states and gapless bulk excitations. Our results show that the topological
critical points separate two phases, one topologically protected and the other
topologically trivial, both characterized by a long-range ordered string
correlation function. The long-range order persists also at the topological
critical points and it reflects the presence of localized edge states protected
by a finite charge gap. Finally, we introduce a super-resolution quantum gas
microscopy scheme for dipolar dysprosium atoms, which provides a reliable route
towards the experimental study of topological quantum critical points.
|
Accurately segmenting a variety of clinically significant lesions from whole
body computed tomography (CT) scans is a critical task in precision oncology
imaging, denoted universal lesion segmentation (ULS). Manual annotation is the
current clinical practice, but it is highly time-consuming and yields
inconsistent longitudinal assessments of tumors. Effectively training an
automatic segmentation model is desirable but relies heavily on a large amount
of pixel-wise labelled data. Existing weakly-supervised segmentation
approaches often struggle with regions near the lesion boundaries. In this
paper, we present a novel
weakly-supervised universal lesion segmentation method by building an attention
enhanced model based on the High-Resolution Network (HRNet), named AHRNet, and
propose a regional level set (RLS) loss for optimizing lesion boundary
delineation. AHRNet provides advanced high-resolution deep image features by
involving a decoder, dual-attention and scale attention mechanisms, which are
crucial to performing accurate lesion segmentation. RLS optimizes the model
reliably and effectively in a weakly-supervised fashion, forcing the
segmentation close to the lesion boundary. Extensive experimental results
demonstrate that our method achieves the best performance on the publicly
large-scale DeepLesion dataset and a hold-out test set.
|
Because of the small keyboard, most mobile apps support an automatic login
feature for a better user experience, so users avoid the inconvenience of
retyping their ID and password when an app returns to the foreground.
However, this auto-login function can be exploited to launch the so-called
"data-clone attack": once the locally stored data that auto-login depends on
are cloned by attackers and placed into their own smartphones, attackers can
break through the login-device number limit and log in to the victim's account
stealthily. A natural countermeasure is to check the consistency of
device-specific attributes. As long as the new device shows device
fingerprints different from the previous one, the app will disable the
auto-login function and thus prevent data-clone attacks. In this paper, we
develop
VPDroid, a transparent Android OS-level virtualization platform tailored for
security testing. With VPDroid, security analysts can customize different
device artifacts, such as CPU model, Android ID, and phone number, in a virtual
phone without user-level API hooking. VPDroid's isolation mechanism ensures
that user-mode apps in the virtual phone cannot detect device-specific
discrepancies. To assess Android apps' susceptibility to the data-clone attack,
we use VPDroid to simulate data-clone attacks against the 234 most-downloaded
apps.
Our experiments on five different virtual phone environments show that
VPDroid's device attribute customization can deceive all tested apps that
perform device-consistency checks, such as Twitter, WeChat, and PayPal. 19
vendors have confirmed our report as a zero-day vulnerability. Our findings
paint a cautionary tale: enforcing a device-consistency check only on the
client side remains vulnerable to an advanced data-clone attack.
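A minimal sketch of the client-side device-consistency check that the paper shows is insufficient on its own; the attribute names and values below are illustrative, not taken from VPDroid:

```python
import hashlib

def device_fingerprint(attrs):
    """Hash a dict of device attributes into a stable fingerprint.
    If any tracked attribute changes, the fingerprint changes.
    """
    canon = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canon.encode()).hexdigest()

def allow_auto_login(stored_fp, current_attrs):
    # disable auto-login when any tracked attribute has changed
    return stored_fp == device_fingerprint(current_attrs)

enrolled = device_fingerprint({"android_id": "a1b2", "cpu": "sdm845"})

same = allow_auto_login(enrolled, {"android_id": "a1b2", "cpu": "sdm845"})
cloned = allow_auto_login(enrolled, {"android_id": "a1b2", "cpu": "kirin980"})
```

A data-clone attacker who can also spoof every tracked attribute (as VPDroid demonstrates) makes the two calls indistinguishable, which is why the check alone is insufficient.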
|
Experimental results of the $p(^{13}{\rm B},d)^{12}{\rm B}$ transfer reaction
to the low-lying states in $^{12}$B are reported. The optical potential
parameters for the entrance channel are extracted from the elastic scattering
$p$($^{13}{\rm B}$, $p$) measured in the same experiment, while those for the
exit channel are global ones. Spectroscopic factors associated with the $p$-,
$s$-, and $d$-wave neutron transfer to the known $^{12}$B states, are extracted
by comparing the deuteron angular distributions with the calculation results.
The separated $s$- and $d$-wave intruder strengths in $^{13}{\rm B}_{\rm g.s.}$
were determined to be $10(2)\%$ and $6(1)\%$, respectively, which follow
roughly the systematics for the $N$ = 8 neutron-rich isotones. The measured
total intruder strength is in good agreement with the shell model calculation,
while the individual ones evolve quite differently. Particularly, the sudden
change of the $d$-wave intensity between $^{13}$B and $^{12}$Be needs further
theoretical interpretation.
|
Traffic anomaly detection plays a crucial role in Intelligent Transportation
Systems (ITS). The main challenges of this task lie in highly diverse anomaly
scenes and varying lighting conditions. Although much work has managed to
identify anomalies in homogeneous weather and scenes, few studies cope with
complex ones. In this paper, we propose a dual-modality
modularized methodology for the robust detection of abnormal vehicles. We
introduced an integrated anomaly detection framework comprising the following
modules: background modeling, vehicle tracking with detection, mask
construction, Region of Interest (ROI) backtracking, and dual-modality tracing.
Concretely, we employed background modeling to filter the motion information
and left the static information for later vehicle detection. For the vehicle
detection and tracking module, we adopted YOLOv5 and multi-scale tracking to
localize the anomalies. Besides, we utilized the frame difference and tracking
results to identify the road and obtain the mask. In addition, we introduced
multiple similarity estimation metrics to refine the anomaly period via
backtracking. Finally, we proposed a dual-modality bilateral tracing module to
refine the anomaly time further. The experiments conducted on the Track 4 test
set of
the NVIDIA 2021 AI City Challenge yielded a result of 0.9302 F1-Score and
3.4039 root mean square error (RMSE), indicating the effectiveness of our
framework.
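The background-modeling step rests on separating motion from static content; a minimal frame-differencing sketch of that underlying idea (the paper's actual background model is more elaborate):

```python
import numpy as np

def motion_mask(prev_frame, frame, thresh=15):
    """Binary foreground mask from simple frame differencing:
    pixels whose intensity changed by more than `thresh` are
    flagged as motion, the rest as static background.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

prev_frame = np.zeros((4, 4), dtype=np.uint8)
frame = prev_frame.copy()
frame[1:3, 1:3] = 200        # a small "moving vehicle" patch
mask = motion_mask(prev_frame, frame)
```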
|
In comparison to conventional traffic designs, shared spaces promote a more
pleasant urban environment with slower motorized movement, smoother traffic,
and less congestion. In the foreseeable future, shared spaces will be populated
with a mixture of autonomous vehicles (AVs) and vulnerable road users (VRUs)
like pedestrians and cyclists. However, a driverless AV lacks a way to
communicate with VRUs when they have to reach an agreement in a negotiation,
which brings new challenges to the safety and smoothness of traffic. To find
a feasible way to integrate AVs seamlessly into shared-space traffic, we
first identified the possible issues that existing shared-space designs have
not considered regarding the role of AVs. An online questionnaire then asked
participants how they would like the driver of a manually driven vehicle to
communicate with VRUs in a shared space. We
found that when the driver wanted to give suggestions to the VRUs in a
negotiation, participants considered communication via the driver's body
behavior necessary. Besides, when the driver conveyed information about
her/his intentions and cautions to the VRUs, participants selected different
communication methods with respect to their transport modes (as a driver,
pedestrian, or cyclist). These results suggest that novel eHMIs might be useful
for AV-VRU communication when the original drivers are not present. Hence, a
potential eHMI design concept was proposed for different VRUs to meet their
various expectations. Finally, we discussed the effects of eHMIs on improving
sociality in shared spaces and on the autonomous driving systems themselves.
|
Lie algebroids provide a natural medium for discussing classical systems;
however, quantum systems have not been considered. The aim of this paper is to
rectify this situation. Lie algebroids are reviewed and their use in
classical systems is described. The geometric structure of the Schr\"{o}dinger
and Heisenberg representations of quantum systems is examined and their
relationship to Lie algebroids is explored. Geometrically, a quantum system is
seen to be a collection of bounded, linear, self-adjoint operators on a
Hilbert, or more precisely, a K\"{a}hler manifold. The geometry of the
Schr\"{o}dinger representation is given by the Poisson structure of the
co-adjoint orbits on the dual of the Lie algebra. Finally, it is shown that the
Schr\"{o}dinger and Heisenberg representations are equivalent.
|
Security related questions for Cyber Physical Systems (CPS) have attracted
much research attention in searching for novel methods for attack-resilient
control and/or estimation. Specifically, false data injection attacks (FDIAs)
have been shown to be capable of bypassing bad data detection (BDD), while
arbitrarily compromising the integrity of state estimators and robust
controllers, even with very sparse measurement corruption. Moreover, based on
the inherent sparsity of pragmatic attack signals, $\ell_1$-minimization
schemes have been used extensively to improve the design of attack-resilient
estimators.
For this, the theoretical maximum for the percentage of compromised nodes that
can be accommodated has been shown to be $50\%$. To guarantee correct state
recovery for a larger percentage of attacked nodes, researchers have
begun to incorporate prior information into the underlying resilient observer
design framework. For the most pragmatic cases, this prior information is often
obtained through some data-driven machine learning process. Existing results
have shown strong positive correlation between the tolerated attack percentages
and the precision of the prior information. In this paper, we present a pruning
method to improve the precision of the prior information, given corresponding
stochastic uncertainty characteristics of the underlying machine learning
model. Then a weighted $\ell_1$-minimization is proposed based on the pruned
prior. The theoretical and simulation results show that the pruning method
significantly improves the observer's performance at much larger attack
percentages, even when a moderately accurate machine learning model is used.
|
Many modern imaging applications can be modeled as compressed sensing linear
inverse problems. When the measurement operator involved in the inverse problem
is sufficiently random, denoising Scalable Message Passing (SMP) algorithms
have a potential to demonstrate high efficiency in recovering compressed data.
One of the key components enabling SMP to achieve fast convergence, stability
and predictable dynamics is the Onsager correction that must be updated at each
iteration of the algorithm. This correction involves the denoiser's divergence
that is traditionally estimated via the Black-Box Monte Carlo (BB-MC) method
\cite{MC-divergence}. While the BB-MC method demonstrates satisfactory
estimation accuracy, it requires executing the denoiser additional times at
each iteration and can lead to a substantial increase in the computational
cost of the
SMP algorithms. In this work we develop two Large System Limit models of the
Onsager correction for denoisers operating within SMP algorithms and use these
models to propose two practical classes of divergence estimators that require
no additional executions of the denoiser and demonstrate similar or superior
correction compared to the BB-MC method.
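For reference, the black-box Monte Carlo divergence estimator that the proposed methods aim to replace can be sketched as follows; the soft-thresholding denoiser and probe count are illustrative choices:

```python
import numpy as np

def mc_divergence(denoiser, x, eps=1e-3, n_probes=500, seed=0):
    """Black-box Monte Carlo estimate of a denoiser's divergence,
    div f(x) = sum_i df_i/dx_i, via random probing:
        div f(x) ~ E[ n^T (f(x + eps*n) - f(x)) / eps ],  n ~ N(0, I).
    Every probe costs one extra denoiser call, which is the overhead
    the estimators proposed in the paper avoid.
    """
    rng = np.random.default_rng(seed)
    fx = denoiser(x)
    est = 0.0
    for _ in range(n_probes):
        n = rng.standard_normal(x.shape)
        est += float(n @ (denoiser(x + eps * n) - fx)) / eps
    return est / n_probes

# soft-thresholding denoiser: its divergence equals the number of
# entries that survive the threshold
soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - 1.0, 0.0)
x = np.array([3.0, -2.0, 0.1, 0.2, 5.0])
div_est = mc_divergence(soft, x)
# three entries exceed the threshold, so the true divergence is 3
```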
|
Object detection is widely studied in the computer vision field. In recent
years, representative deep learning based detection methods along with solid
benchmarks have been proposed, boosting the development of related research.
However, existing detection methods still suffer from undesirable performance
under challenges such as camouflage, blur, inter-class similarity, intra-class
variance and complex environments. To address this issue, we propose
LGA-RCNN, which utilizes a loss-guided attention (LGA) module to highlight
representative regions of objects. This highlighted local information is then
fused with global information for precise classification and localization.
|
We theoretically investigate the formation of $W$ states in a tripartite
system composed of three charge qubits coupled to vibrational modes. The
electromechanical coupling is responsible for second-order virtual processes
that result in an effective electron-electron interaction between neighboring
qubits, which leads to the formation of $W$ states. Based on the Lang-Firsov
transformation and perturbation theory, we analytically solve the quantum
dynamics, providing a mathematical expression for the maximally entangled $W$
state. Dephasing is also taken into account, paying particular attention to
the robustness of bipartite entanglement against local dephasing processes.
|
We study estimating inherent human disagreement (annotation label
distribution) in the natural language inference task. Post-hoc smoothing of
the
predicted label distribution to match the expected label entropy is very
effective. Such simple manipulation can reduce KL divergence by almost half,
yet will not improve majority label prediction accuracy or learn label
distributions. To this end, we introduce a small amount of examples with
multiple references into training. We depart from the standard practice of
collecting a single reference per training example, and find that
collecting multiple references can achieve better accuracy under the fixed
annotation budget. Lastly, we provide rich analyses comparing these two methods
for improving label distribution estimation.
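The post-hoc entropy-matching manipulation can be sketched with temperature scaling, choosing the temperature by bisection so the smoothed distribution hits a target entropy; the temperature parameterization is an illustrative choice, not necessarily the paper's exact procedure:

```python
import math

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def smooth_to_entropy(p, target_h, iters=60):
    """Rescale a predicted label distribution as p_i^(1/T) / Z and pick
    T by bisection so the smoothed distribution matches target_h.
    Entropy increases monotonically with T, so bisection applies.
    """
    def with_temp(t):
        scaled = [q ** (1.0 / t) for q in p]
        z = sum(scaled)
        return [q / z for q in scaled]

    lo, hi = 1e-3, 1e3          # bracket: near one-hot to near uniform
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if entropy(with_temp(mid)) < target_h:
            lo = mid
        else:
            hi = mid
    return with_temp(math.sqrt(lo * hi))

pred = [0.9, 0.07, 0.03]        # an overconfident 3-class prediction
smoothed = smooth_to_entropy(pred, target_h=0.8)
```

Note that power scaling preserves the argmax, which matches the observation above: smoothing improves the KL divergence without changing the majority label prediction.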
|
In this paper, we develop and study approximately smooth basis constructions
for isogeometric analysis over two-patch domains. One key element of
isogeometric analysis is that it allows high order smoothness within one patch.
However, for representing complex geometries, a multi-patch construction is
needed. In this case, a $C^0$-smooth basis is easy to obtain, whereas
$C^1$-smooth isogeometric functions require a special construction. Such spaces
are of interest when solving numerically fourth-order PDE problems, such as the
biharmonic equation and the Kirchhoff-Love plate or shell formulation, using an
isogeometric Galerkin method.
With the construction of so-called analysis-suitable $G^1$ (in short,
AS-$G^1$) parametrizations, as introduced in (Collin, Sangalli, Takacs; CAGD,
2016), it is possible to construct $C^1$ isogeometric spaces which possess
optimal approximation properties. These geometries need to satisfy certain
constraints along the interfaces and additionally require that the regularity
$r$ and degree $p$ of the underlying spline space satisfy $1 \leq r \leq p-2$.
The problem is that most complex geometries are not AS-$G^1$ geometries.
Therefore, we define basis functions for isogeometric spaces by enforcing
approximate $C^1$ conditions following the basis construction from (Kapl,
Sangalli, Takacs; CAGD, 2017). As a consequence, the resulting function spaces
are not exactly $C^1$-smooth, but only approximately so.
We study the convergence behavior and define function spaces that converge
optimally under $h$-refinement, by locally introducing functions of higher
polynomial degree and lower regularity. The convergence rate is optimal in
several numerical tests performed on domains with non-trivial interfaces. While
an extension to more general multi-patch domains is possible, we restrict
ourselves to the two-patch case and focus on the construction over a single
interface.
|
We show that a closed orientable 3--dimensional manifold admits a round fold
map into the plane, i.e. a fold map whose critical value set consists of
disjoint simple closed curves isotopic to concentric circles, if and only if it
is a graph manifold, generalizing the characterization for simple stable maps
into the plane. Furthermore, we also give a characterization of closed
orientable graph manifolds that admit directed round fold maps into the plane,
i.e.\ round fold maps such that the number of regular fiber components of a
regular value increases toward the central region in the plane.
|
Inflexible combined heat and power (CHP) plants and uncertain wind power
production result in excess power in distribution networks, which leads to
inverse power flow that challenges grid operations. Power-to-X facilities such as
electrolysers and electric boilers can offer extra flexibility to the
integrated energy system. In this regard, we aim to jointly determine the
optimal Power-to-X facility sizing and integrated energy system operations in
this study. To account for wind power uncertainties, a distributionally robust
chance-constrained model is developed to characterize wind power uncertainties
using ambiguity sets. Linear decision rules are applied to analytically express
real-time recourse actions once uncertainties are revealed, which allows the
propagation of wind power uncertainties to gas and heat systems. Accordingly,
the developed three-stage distributionally robust chance-constrained model is
converted into a computationally tractable single-stage mixed-integer conic
model. A case study validates the effectiveness of introducing the electrolyser
and electric boiler into the integrated energy system, with respect to the
decreased system cost, expanded CHP plant flexibility and reduced inverse power
flow. The developed distributionally robust optimization model exhibits better
effectiveness and robustness compared to a chance-constrained optimization
model assuming wind forecast errors follow a Gaussian distribution. Detailed
profit analysis reveals that although the overall system cost is minimized, the
profit is distributed unevenly across various stakeholders in the system.
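The contrast with the Gaussian chance-constrained model can be illustrated with a classical moment-based bound: for a constraint required to hold with probability 1-eps under an uncertain term of known mean and variance, a Gaussian model reserves a margin of z_{1-eps}*sigma, while a distribution-free (Chebyshev/Cantelli-type) ambiguity set reserves sqrt((1-eps)/eps)*sigma. The sketch below is a generic illustration of this gap, not the paper's ambiguity set or system model:

```python
import math
from statistics import NormalDist

def gaussian_margin(sigma, eps):
    """Back-off so that P(xi <= margin) >= 1 - eps under a Gaussian model."""
    return NormalDist().inv_cdf(1.0 - eps) * sigma

def dr_margin(sigma, eps):
    """Distribution-free back-off valid for any distribution with the
    same mean and variance (one-sided Chebyshev / Cantelli bound)."""
    return math.sqrt((1.0 - eps) / eps) * sigma

sigma, eps = 1.0, 0.05
g, d = gaussian_margin(sigma, eps), dr_margin(sigma, eps)
# The distributionally robust reserve is more conservative than the
# Gaussian one for any eps < 0.5, reflecting the robustness reported above.
assert d > g
```

For eps = 0.05 the Gaussian margin is about 1.64*sigma, while the distribution-free margin is about 4.36*sigma, which is why a model that assumes Gaussian forecast errors can under-protect when the true error distribution departs from Gaussianity.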
|
Among exoplanets, the small-size population constitutes the dominant one,
with a diversity of properties and compositions ranging from rocky to gas
dominated envelopes. A large fraction of them have masses and radii similar to
or smaller than Neptune's, yet none share common properties with our ice giants
in terms of orbital period and insolation. These exoplanets belong to
multi-planet systems where planets are closely packed within the first tenth of
AU and often exposed to strong irradiation from their host star. Their
formation process, subsequent evolution, and fate are still debated and trigger
new developments of planet formation models. This paper reviews the
characteristics and properties of this extended sample of planets with radii
between $\sim$1.6 and 4.0 $R_\oplus$. Even though we still lack real
Neptune/Uranus analogues, these exoplanets provide us with key observational
constraints that allow the formation of our ice giants to be placed in a more
general framework than the sole example of our solar system.
|
A new robust stochastic volatility (SV) model having Student-t marginals is
proposed. Our process is defined through a linear normal regression model
driven by a latent gamma process that controls temporal dependence. This gamma
process is strategically chosen to enable us to find an explicit expression for
the pairwise joint density function of the Student-t response process. With
this at hand, we propose a composite likelihood (CL) based inference for our
model, which can be straightforwardly implemented with a low computational
cost. This is a remarkable advantage of our Student-t SV process over existing
SV models in the literature, which involve computationally heavy algorithms for
estimating parameters. Aiming at a precise estimation of the parameters related
to the latent process, we propose a CL Expectation-Maximization algorithm and
discuss a bootstrap approach to obtain standard errors. The finite-sample
performance of our composite likelihood methods is assessed through Monte Carlo
simulations. The methodology is motivated by an empirical application in the
financial market. We analyze the relationship, across multiple time periods,
between various US sector Exchange-Traded Funds returns and individual
companies' stock price returns based on our novel Student-t model. This
relationship is further utilized in selecting optimal financial portfolios.
|
Efficient generation of spin polarization is very important for spintronics
and quantum computation. We propose a new mechanism, the chiral phonon
activated spin Seebeck (CPASS) effect, in nonmagnetic materials in the absence
of a magnetic field and spin-orbit coupling. Owing to the nonequilibrium
distribution of chiral phonons under a temperature gradient, we investigate the
resulting chiral phonon activated spin polarization by solving the Boltzmann
transport equation. The
CPASS coefficients, with both band and phonon-drag contributions, exhibit
linear dependence on the temperature gradient. The above two contributions are
opposite for negative charge carriers and their relative magnitude is tunable
by modulating the chemical potential. The CPASS effect, in which the spin
accumulation is induced by chiral phonons, provides opportunities for the
exploration of advanced spintronic devices based on chiral materials even in
the absence of magnetic order and spin-orbit coupling.
|
Real-world networks and knowledge graphs are usually heterogeneous networks.
Representation learning on heterogeneous networks is not only a popular but
also a pragmatic research field. The main challenge comes from the
heterogeneity -- the diverse types of nodes and edges. Besides, for a given
node in a heterogeneous information network (HIN), the significance of a
neighboring node depends not only on the structural distance but also on
semantics. How to effectively capture both structural and semantic relations is
another challenge. The current state-of-the-art methods are based on meta-paths
and therefore have a serious disadvantage -- their performance depends on the
arbitrary choice of meta-path(s). However, the selection of meta-path(s) is
experience-based and time-consuming. In this work,
we propose a novel meta-path-free representation learning on heterogeneous
networks, namely Heterogeneous graph Convolutional Networks (HCN). The proposed
method fuses the heterogeneity and develops a $k$-strata algorithm ($k$ is an
integer) to capture the $k$-hop structural and semantic information in
heterogeneous networks. To the best of our knowledge, this is the first attempt
to break out of the confinement of meta-paths for representation learning on
heterogeneous networks. We carry out extensive experiments on three real-world
heterogeneous networks. The experimental results demonstrate that the proposed
method significantly outperforms the current state-of-the-art methods in a
variety of analytic tasks.
|
Discrete max-linear Bayesian networks are directed graphical models specified
by the same recursive structural equations as max-linear models but with
discrete innovations. When all of the random variables in the model are binary,
these models are isomorphic to the conjunctive Bayesian network (CBN) models of
Beerenwinkel, Eriksson, and Sturmfels. Many of the techniques used to study CBN
models can be extended to discrete max-linear models and similar results can be
obtained. In particular, we extend the fact that CBN models are toric varieties
after linear change of coordinates to all discrete max-linear models.
|
Mars' northern polar latitudes are known to harbor an enhanced 3 ${\mu}$m
spectral signature when observed from orbit. This may indicate a greater amount
of surface-adsorbed or bound water, although it has not yet been possible to
easily reconcile orbital observations with ground measurements by Phoenix. Here
we re-analyzed OMEGA/Mars Express observations acquired during the Northern
summer to further characterize this 3 ${\mu}$m absorption band increase. We
identify the presence of a new specific spectral signature composed of an
additional narrow absorption feature centered at 3.03 ${\mu}$m coupled with an
absorption at ${\lambda}$ ${\geq}$ 3.8 ${\mu}$m. This signature is
homogeneously distributed over a high-albedo open ring surrounding the
circumpolar low-albedo terrains between ~ 68{\deg}N and 76{\deg}N and ~
0{\deg}E and 270{\deg}E. This location includes the Phoenix landing site. This
feature shows no time variability and can be confidently attributed to a
seasonally stable surface component. Altogether, the stability, spectral
shape, and absence of significant correlation with other signatures in the
1 $-$ 2.5 ${\mu}$m range rule out interpretations relying on water ice or
easily exchangeable adsorbed water. Sulfates, notably anhydrite, provide
interesting comparisons to several sections of the spectrum. Analogies with
Earth samples also show that the spectral signature could result from a
latitudinal modification of the hydration state and/or grain size of salt
contaminants.
While the exact full spectral shape cannot be easily reproduced, plausible
explanations to this observation seem to involve geologically recent water
alteration at high northern latitudes.
|
This paper discusses the current critique against neural network-based
Natural Language Understanding (NLU) solutions known as language models. We
argue that much of the current debate rests on an argumentation error that we
will refer to as the singleton fallacy: the assumption that language, meaning,
and understanding are single and uniform phenomena that are unobtainable by
(current) language models. By contrast, we will argue that there are many
different types of language use, meaning, and understanding, and that (current)
language models are built with the explicit purpose of acquiring and
representing one type of structural understanding of language. We will argue
that such structural understanding may cover several different modalities, and
as such can handle several different types of meaning. Our position is that we
currently see no theoretical reason why such structural knowledge would be
insufficient to count as "real" understanding.
|
We classify 2+1 dimensional integrable systems with nonlocality of the
intermediate long wave type. Links to the 2+1 dimensional waterbag system
are established. Dimensional reductions of integrable systems constructed in
this paper provide dispersive regularisations of hydrodynamic equations
governing propagation of long nonlinear waves in a shear flow with piecewise
linear velocity profile (for special values of vorticities).
|
Influenza, an infectious disease, causes many deaths worldwide. Predicting the
number of influenza patients during epidemics is an important task for
clinical, hospital, and community outbreak preparation. Online user-generated
content (UGC), primarily in the form of social media posts or search query
logs, is generally used to predict reactions to sudden and unusual outbreaks.
However, most studies rely on a single type of UGC as their resource and do not
combine multiple UGCs. Our study aims to answer the following questions about
influenza prediction: Which model is the best? What combination of multiple
UGCs works well? What is the nature of each UGC? We apply several models --
LASSO regression, Huber regression, Support Vector Regression with a linear
kernel (SVR), and Random Forest -- to predict influenza volume in Japan during
2015-2018. For that, we use five online data resources: (1) past flu patient
counts, (2) SNS (Twitter), (3) search engines (Yahoo! Japan), (4) shopping
services (Yahoo! Shopping), and (5) Q&A services (Yahoo! Chiebukuro) as inputs
to each model. We then validate each resource's contribution using the best
model, Huber regression, trained with all resources except the one in question.
Finally, we use a Bayesian change point method to ascertain whether the trend
of each resource's time series is reflected in the trend of flu patient counts.
Our experiments show that the Huber regression model based on various data
resources produces the most accurate results. The change point analysis
indicates that search query logs and social media posts over the three years
are good predictors. In conclusion, we show that Huber regression based on
various data resources is robust to outliers and is suitable for flu
prediction. Additionally, we characterize each resource's usefulness for flu
prediction.
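The robustness to outliers attributed to Huber regression stems from its loss function, which is quadratic for small residuals and linear for large ones. A minimal illustration (generic, not the study's pipeline):

```python
def huber_loss(r, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails."""
    if abs(r) <= delta:
        return 0.5 * r * r
    return delta * (abs(r) - 0.5 * delta)

def squared_loss(r):
    return 0.5 * r * r

# A spike in, say, search-query counts produces a residual of 10 units;
# ordinary least squares lets this single outlier dominate the fit.
outlier_residual = 10.0
assert squared_loss(outlier_residual) == 50.0
assert huber_loss(outlier_residual) == 9.5   # grows only linearly
# For small residuals the two losses coincide, so inliers are fit as usual.
assert huber_loss(0.5) == squared_loss(0.5)
```

This is why a sudden media-driven burst in one resource (e.g. a viral tweet) perturbs a Huber fit far less than a least-squares fit.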
|
Social media marketing is an emerging marketing technique worldwide. This
research concentrates on how effectively social media can be used to promote a
product in the tourism industry. The efficient use of social media develops a
tourism company in terms of sales, branding, reach, and relationship
management. The study aims to find the best social media platform to promote
and develop a tourism company, and to gauge customer opinion towards planning a
trip online. It also examines customer responses to online offers and
discounts on those social media platforms. The study attempts to understand and
create suitable model for social media marketing for tourism companies with a
sample size of 400. The sampling technique used in this study is purposive
sampling method. A purposive sample can also be called a judgemental sample;
the sample is selected based on the knowledge possessed by the respondents
about a particular phenomenon. Here, the study has been conducted among people
who use social media, and this sampling technique helped the researcher
identify the target sample, i.e., social media users.
|
In recent years, physiological-signal-based authentication has shown great
promise due to its inherent robustness against forgery. The electrocardiogram (ECG)
signal, being the most widely studied biosignal, has also received the highest
level of attention in this regard. It has been proven with numerous studies
that by analyzing ECG signals from different persons, it is possible to
identify them with acceptable accuracy. In this work, we present EDITH, a
deep learning-based framework for ECG-based biometric authentication.
Moreover, we hypothesize and demonstrate that Siamese architectures can be used
over typical distance metrics for improved performance. We have evaluated EDITH
using four commonly used datasets and outperformed prior works using fewer
beats. EDITH performs competitively using just a single heartbeat
(96-99.75% accuracy) and can be further enhanced by fusing multiple beats (100%
accuracy from 3 to 6 beats). Furthermore, the proposed Siamese architecture
manages to reduce the identity verification Equal Error Rate (EER) to 1.29%. A
limited case study of EDITH with real-world experimental data also suggests its
potential as a practical authentication system.
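The Equal Error Rate (EER) quoted above is the operating point at which the false rejection rate equals the false acceptance rate. A minimal sketch of estimating EER from genuine and impostor similarity scores (the toy scores are illustrative, not the paper's data):

```python
def equal_error_rate(genuine, impostor):
    """Sweep a decision threshold over similarity scores and return the
    point where false-rejection and false-acceptance rates are closest.
    Higher score = more likely the same identity."""
    best = (float("inf"), None)  # (|FRR - FAR|, EER estimate)
    for t in sorted(genuine + impostor):
        frr = sum(s < t for s in genuine) / len(genuine)     # genuine rejected
        far = sum(s >= t for s in impostor) / len(impostor)  # impostors accepted
        gap = abs(frr - far)
        if gap < best[0]:
            best = (gap, (frr + far) / 2.0)
    return best[1]

# Toy, well-separated score distributions give a low EER.
genuine = [0.9, 0.85, 0.8, 0.75, 0.7, 0.6]
impostor = [0.5, 0.4, 0.35, 0.3, 0.2, 0.65]
eer = equal_error_rate(genuine, impostor)
assert 0.0 <= eer <= 0.5
```

In a Siamese setting the scores above would be similarities between embedding pairs; a lower EER, such as the 1.29% reported, means genuine and impostor score distributions overlap less.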
|
We prove that the 2D finite depth capillary water wave equations admit no
solitary wave solutions. This closes the existence/non-existence problem for
solitary water waves in 2D, under the classical assumptions of
incompressibility and irrotationality, and with the physical parameters being
gravity, surface tension and the fluid depth.
|
We consider the punctured plane with volume density $|x|^\alpha$ and
perimeter density $|x|^\beta$. We show that centred balls are uniquely
isoperimetric for indices $(\alpha,\beta)$ which satisfy the conditions
$\alpha-\beta+1>0$, $\alpha\leq 2\beta$ and $\alpha(\beta+1)\leq\beta^2$ except
in the case $\alpha=\beta=0$ which corresponds to the classical isoperimetric
inequality.
|
Dynamic languages, such as Python and Javascript, trade static typing for
developer flexibility and productivity. Lack of static typing can cause
run-time exceptions and is a major factor for weak IDE support. To alleviate
these issues, PEP 484 introduced optional type annotations for Python. As
retrofitting types to existing codebases is error-prone and laborious,
learning-based approaches have been proposed to enable automatic type
annotations based on existing, partially annotated codebases. However, it is
still quite challenging for learning-based approaches to give a relevant
prediction in the first suggestion or the first few ones. In this paper, we
present Type4Py, a deep similarity learning-based hierarchical neural network
model that learns to discriminate between types of the same kind and dissimilar
types in a high-dimensional space, which results in clusters of types. Nearest
neighbor search suggests a list of likely types for arguments, variables, and
functions' return types. The results of the quantitative and qualitative evaluation
indicate that Type4Py significantly outperforms state-of-the-art approaches at
the type prediction task. Considering the Top-1 prediction, Type4Py obtains a
Mean Reciprocal Rank of 72.5%, which is 10.87% and 16.45% higher than that of
Typilus and TypeWriter, respectively.
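The nearest-neighbour retrieval and Mean Reciprocal Rank (MRR) metric can be sketched as follows; the toy 2-D embeddings and type names are illustrative assumptions, not Type4Py's learned high-dimensional space:

```python
import math

def rank_types(query_vec, type_centroids):
    """Rank candidate types by Euclidean distance to the query embedding."""
    return sorted(type_centroids,
                  key=lambda t: math.dist(query_vec, type_centroids[t]))

def mean_reciprocal_rank(predictions, truths):
    """MRR over ranked prediction lists: average of 1 / rank of the true type."""
    total = 0.0
    for ranked, truth in zip(predictions, truths):
        total += 1.0 / (ranked.index(truth) + 1)
    return total / len(truths)

# Toy 2-D "type clusters" for three common Python types.
centroids = {"int": (0.0, 0.0), "str": (5.0, 0.0), "List[int]": (0.0, 5.0)}
queries = [(0.2, 0.1), (4.6, 0.3), (0.1, 4.0)]
truths = ["int", "str", "List[int]"]
ranked = [rank_types(q, centroids) for q in queries]
assert mean_reciprocal_rank(ranked, truths) == 1.0  # all truths ranked first
```

A Top-1 MRR of 72.5%, as reported, means the reciprocal rank of the single best suggestion averages to 0.725 over the evaluation set.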
|
Quantifying entanglement properties of mixed states in quantum field theory
via entanglement of purification and reflected entropy is a new and challenging
subject. In this work, we study both quantities for two spherical subregions
far away from each other in the vacuum of a conformal field theory in any
number of dimensions. Using lattice techniques, we find an elementary proof
that the decay of both the entanglement of purification and the reflected
entropy is enhanced, with respect to the mutual information behaviour, by a
logarithm of the distance between the subregions. In the case of the Ising spin
chain at criticality and the related free fermion conformal field theory, we
also compute numerically the overall coefficients for both quantities of
interest.
|
Spin-dependent transport phenomena due to relativistic spin-orbit coupling
and broken space-inversion symmetry are often difficult to interpret
microscopically, in particular when occurring at surfaces or interfaces. Here
we present a theoretical and experimental study of spin-orbit torque and
unidirectional magnetoresistance in a model room-temperature ferromagnet NiMnSb
with inversion asymmetry in the bulk of this half-Heusler crystal. Besides the
angular dependence on magnetization, the competition of Rashba and
Dresselhaus-like spin-orbit couplings results in the dependence of these
effects on the crystal direction of the applied electric field. The
phenomenology that we observe highlights potential inapplicability of commonly
considered approaches for interpreting experiments. We point out that, in
general, there is no direct link between the current-induced non-equilibrium
spin polarization inferred from the measured spin-orbit torque and the
unidirectional magnetoresistance. We also emphasize that the unidirectional
magnetoresistance has not only longitudinal but also transverse components in
the electric field and current indices, which complicates its separation from
the thermoelectric contributions to the detected signals in common experimental
techniques. We use the theoretical results to analyze our measurements of the
on-resonance and off-resonance mixing signals in microbar devices fabricated
from an epitaxial NiMnSb film along different crystal directions. Based on the
analysis we extract an experimental estimate of the unidirectional
magnetoresistance in NiMnSb.
|
Medical visual question answering (Med-VQA) has tremendous potential in
healthcare. However, the development of this technology is hindered by the
lack of publicly available, high-quality labeled datasets for training
and evaluation. In this paper, we present a large bilingual dataset, SLAKE,
with comprehensive semantic labels annotated by experienced physicians and a
new structural medical knowledge base for Med-VQA. Besides, SLAKE includes
richer modalities and covers more human body parts than currently available
datasets. We show that SLAKE can be used to facilitate the development and
evaluation of Med-VQA systems. The dataset can be downloaded from
http://www.med-vqa.com/slake.
|
Recently, Space-Time Memory Network (STM) based methods have achieved
state-of-the-art performance in semi-supervised video object segmentation
(VOS). A crucial problem in this task is how to model the dependency both among
different frames and inside every frame. However, most of these methods neglect
the spatial relationships (inside each frame) and do not make full use of the
temporal relationships (among different frames). In this paper, we propose a
new transformer-based framework, termed TransVOS, introducing a vision
transformer to fully exploit and model both the temporal and spatial
relationships. Moreover, most STM-based approaches employ two separate encoders
to extract features of two significant inputs, i.e., reference sets (history
frames with predicted masks) and query frame (current frame), respectively,
increasing the models' parameters and complexity. To slim the popular
two-encoder pipeline while keeping the effectiveness, we design a single
two-path feature extractor to encode the above two inputs in a unified way.
Extensive experiments demonstrate the superiority of our TransVOS over
state-of-the-art methods on both DAVIS and YouTube-VOS datasets.
|
In the present work, single- and segregated-network PINN architectures are
applied to predict momentum, species and temperature distributions of a dry air
humidification problem in a simple 2D rectangular domain. The created PINN
models account for variable fluid properties, species- and heat-diffusion and
convection. Both the mentioned PINN architectures were trained using different
hyperparameter settings, such as network width and depth to find the
best-performing configuration. It is shown that the segregated-network PINN
approach results in on-average 62% lower losses when compared to the
single-network PINN architecture for the given problem. Furthermore, the
single-network variant struggled to ensure species mass conservation in
different areas of the computational domain, whereas, the segregated approach
successfully maintained species conservation. The PINN predicted velocity,
temperature and species profiles for a given set of boundary conditions were
compared to results generated using OpenFOAM software. Both the single- and
segregated-network PINN models produced accurate results for temperature and
velocity profiles, with average percentage difference relative to the CFD
results of approximately 7.5% for velocity and 8% for temperature. The mean
error percentages for the species mass fractions are 9% for the single-network
model and 1.5% for the segregated-network approach. To showcase the
applicability of PINNs for surrogate modelling of multi-species problems, a
parameterised version of the segregated-network PINN is trained which could
produce results for different water vapour inlet velocities. The normalised
mean absolute percentage errors relative to the OpenFOAM results across three
predicted cases are approximately 7.5% for velocity and temperature, and 2.4%
for the water vapour mass fraction.
|
We introduce the problem of constructing explicit variety evasive subspace
families. Given a family $\mathcal{F}$ of subvarieties of a projective or
affine space, a collection $\mathcal{H}$ of projective or affine $k$-subspaces
is $(\mathcal{F},\epsilon)$-evasive if for every $\mathcal{V}\in\mathcal{F}$,
all but at most $\epsilon$-fraction of $W\in\mathcal{H}$ intersect every
irreducible component of $\mathcal{V}$ with (at most) the expected dimension.
The problem of constructing such an explicit subspace family generalizes both
deterministic black-box polynomial identity testing (PIT) and the problem of
constructing explicit (weak) lossless rank condensers.
Using Chow forms, we construct explicit $k$-subspace families of polynomial
size that are evasive for all varieties of bounded degree in a projective or
affine $n$-space. As one application, we obtain a complete derandomization of
Noether's normalization lemma for varieties of low degree in a projective or
affine $n$-space. In another application, we obtain a simple polynomial-time
black-box PIT algorithm for depth-4 arithmetic circuits with bounded top fan-in
and bottom fan-in that are not in the Sylvester-Gallai configuration, improving
and simplifying a result of Gupta (ECCC TR 14-130).
As a complement of our explicit construction, we prove a tight lower bound
for the size of $k$-subspace families that are evasive for degree-$d$ varieties
in a projective $n$-space. When $n-k=n^{\Omega(1)}$, the lower bound is
superpolynomial unless $d$ is bounded. The proof uses a dimension-counting
argument on Chow varieties that parametrize projective subvarieties.
|
We present the results of a 3D global magnetohydrodynamic (MHD) simulation of
an AM CVn system that was aimed at exploring eccentricity growth in the
accretion disc self-consistently from a first principles treatment of the MHD
turbulence. No significant eccentricity growth occurs in the simulation. In
order to investigate the reasons why, we ran 2D alpha disc simulations with
alpha values of 0.01, 0.1, and 0.2, and found that only the latter two exhibit
significant eccentricity growth. We present an equation expressing global
eccentricity evolution in terms of contributing forces and use it to analyze
the simulations. As expected, we find that the dominant term contributing to
the growth of eccentricity is the tidal gravity of the companion star. In the
2D simulations, the alpha viscosity directly contributes to eccentricity
growth. In contrast, the overall magnetic forces in the 3D simulation damp
eccentricity. We also analyzed the mode-coupling mechanism of Lubow, and
confirmed that the spiral wave excited by the 3:1 resonance was the dominant
contributor to eccentricity growth in the 2D $\alpha=0.1$ simulations, but
other waves also contribute significantly. We found that the $\alpha=0.1$ and
0.2 simulations had more relative mass at larger radii compared to the
$\alpha=0.01$ and 3D MHD simulation, which also had an effective $\alpha$ of
0.01. This suggests that in 3D MHD simulations without sufficient poloidal
magnetic flux, MRI turbulence does not saturate at a high enough $\alpha$ to
spread the disc to large enough radii to reproduce the superhumps observed in
real systems.
|
The problem of open-set noisy labels refers to the setting where part of the
training data has a different label space that does not contain the true class. Lots of
approaches, e.g., loss correction and label correction, cannot handle such
open-set noisy labels well, since they need training data and test data to
share the same label space, which does not hold for learning with open-set
noisy labels. The state-of-the-art methods thus employ the sample selection
approach to handle open-set noisy labels, which tries to select clean data from
noisy data for network parameter updates. The discarded data are seen to be
mislabeled and do not participate in training. Such an approach is intuitive
and reasonable at first glance. However, a natural question could be raised
"can such data only be discarded during training?". In this paper, we show that
the answer is no. Specifically, we argue that the instances of discarded data
may contain meaningful information for generalization. For this reason, we do
not abandon such data, but use instance correction to modify the instances of
the discarded data, which makes the predictions for the discarded data
consistent with the given labels. Instance correction is performed via targeted
adversarial attacks. The corrected data are then exploited for training to help
generalization. In addition to the analytical results, empirical evidence is
provided to justify our claims.
|
We propose a new hash function QHFM based on controlled alternate quantum
walks with memory on cycles, where the jth message bit decides whether to run
quantum walk with one-step memory or to run quantum walk with two-step memory
at the jth time step, and the hash value is calculated from the resulting
probability distribution of the walker. Numerical simulation shows that the
proposed hash function has near-ideal statistical performance and is at least
on a par with the state-of-the-art hash functions based on quantum walks in
terms of sensitivity of hash value to message, diffusion and confusion
properties, uniform distribution property, and collision resistance property;
and theoretical analysis indicates that the time and space complexity of the
new scheme are not greater than those of its peers. The good performance of
QHFM suggests that quantum walks that differ not only in coin operators but
also in memory lengths can be combined to build good hash functions, which, in
turn, enriches the construction of controlled alternate quantum walks.
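A controlled alternate quantum walk of this kind can be simulated classically for small cycles. The sketch below is a simplified illustration in which message bits select between two coin operators on a coined walk over a cycle; it omits QHFM's memory mechanism, and the coins, cycle length, and message are illustrative assumptions:

```python
import cmath, math

N = 8                                  # cycle length (illustrative)
H = 1 / math.sqrt(2)

# Two unitary coin operators; each message bit selects one of them.
hadamard = [[H, H],
            [H, -H]]
rotation = [[cmath.exp(1j * math.pi / 4) * H, H],
            [H, -cmath.exp(-1j * math.pi / 4) * H]]

def step(amp, coin):
    """One walk step: apply the 2x2 coin, then shift on the cycle.
    amp maps (position, coin_state) -> complex amplitude."""
    new = {}
    for (x, c), a in amp.items():
        for c2 in (0, 1):
            w = coin[c2][c] * a
            if w == 0:
                continue
            x2 = (x - 1) % N if c2 == 0 else (x + 1) % N
            new[(x2, c2)] = new.get((x2, c2), 0) + w
    return new

def walk(bits):
    """Drive the walk with message bits; return the position distribution."""
    amp = {(0, 0): 1.0 + 0j}           # walker starts at node 0
    for b in bits:
        amp = step(amp, hadamard if b == "1" else rotation)
    prob = [0.0] * N
    for (x, _), a in amp.items():
        prob[x] += abs(a) ** 2
    return prob

p = walk("10110")
assert abs(sum(p) - 1.0) < 1e-9        # unitarity preserves total probability
```

A hash value could then be derived by discretising the resulting distribution (e.g. quantising each probability to a few bits); QHFM's actual construction additionally alternates between one-step and two-step memory walks, which this sketch does not model.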
|
Recent progress in self-supervised learning has resulted in models that are
capable of extracting rich representations from image collections without
requiring any explicit label supervision. However, to date the vast majority of
these approaches have restricted themselves to training on standard benchmark
datasets such as ImageNet. We argue that fine-grained visual categorization
problems, such as plant and animal species classification, provide an
informative testbed for self-supervised learning. In order to facilitate
progress in this area we present two new natural world visual classification
datasets, iNat2021 and NeWT. The former consists of 2.7M images from 10k
different species uploaded by users of the citizen science application
iNaturalist. We designed the latter, NeWT, in collaboration with domain experts
with the aim of benchmarking the performance of representation learning
algorithms on a suite of challenging natural world binary classification tasks
that go beyond standard species classification. These two new datasets allow us
to explore questions related to large-scale representation and transfer
learning in the context of fine-grained categories. We provide a comprehensive
analysis of feature extractors trained with and without supervision on ImageNet
and iNat2021, shedding light on the strengths and weaknesses of different
learned features across a diverse set of tasks. We find that features produced
by standard supervised methods still outperform those produced by
self-supervised approaches such as SimCLR. However, improved self-supervised
learning methods are constantly being released and the iNat2021 and NeWT
datasets are a valuable resource for tracking their progress.
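Feature comparisons of this kind are usually made with a linear-probe protocol: freeze the feature extractor and train only a linear classifier on top. A minimal sketch with synthetic Gaussian-cluster features standing in for real backbone embeddings (the data here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "frozen" features standing in for backbone embeddings:
# 600 images, 128-dim, 3 classes. In practice these would come from a
# pretrained network (supervised ImageNet vs. self-supervised SimCLR).
n, d, k = 600, 128, 3
labels = rng.integers(0, k, size=n)
centers = rng.normal(size=(k, d))
features = centers[labels] + 0.8 * rng.normal(size=(n, d))

train, test = np.arange(n) % 2 == 0, np.arange(n) % 2 == 1

# Linear probe: fit a one-hot least-squares classifier on frozen features.
Y = np.eye(k)[labels[train]]
X = np.hstack([features[train], np.ones((train.sum(), 1))])  # add bias
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

X_test = np.hstack([features[test], np.ones((test.sum(), 1))])
pred = (X_test @ W).argmax(axis=1)
acc = (pred == labels[test]).mean()
```

Only the linear weights `W` are trained; comparing `acc` across backbones isolates the quality of the frozen features.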
|
Smartphone apps for exposure notification and contact tracing have been shown
to be effective in controlling the COVID-19 pandemic. However, Bluetooth Low
Energy tokens similar to those broadcast by existing apps can still be picked
up far away from the transmitting device. In this paper, we present a new class
of methods for detecting whether or not two Wi-Fi-enabled devices are in
immediate physical proximity, i.e. 2 or fewer meters apart, as established by
the U.S. Centers for Disease Control and Prevention (CDC). Our goal is to
enhance the accuracy of smartphone-based exposure notification and contact
tracing systems. We present a set of binary machine learning classifiers that
take as input pairs of Wi-Fi RSSI fingerprints. We empirically verify that a
single classifier cannot generalize well to a range of different environments
with vastly different numbers of detectable Wi-Fi Access Points (APs). However,
specialized classifiers, tailored to situations where the number of detectable
APs falls within a certain range, are able to detect immediate physical
proximity significantly more accurately. As such, we design three classifiers
for situations with low, medium, and high numbers of detectable APs. These
classifiers distinguish between pairs of RSSI fingerprints recorded 2 or fewer
meters apart and pairs recorded further apart but still in Bluetooth range. We
characterize their balanced accuracy for this task to be between 66.8% and
77.8%.
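The routing idea above can be sketched as follows: train one specialist per AP-count regime and dispatch each fingerprint pair accordingly. The bin edges, synthetic RSSI model, and single-feature threshold classifiers below are illustrative stand-ins, not the paper's actual classifiers:

```python
import numpy as np

rng = np.random.default_rng(1)

def regime(n_aps: int) -> str:
    """Route a fingerprint pair to a specialist by detectable-AP count.
    The bin edges here are illustrative, not the paper's."""
    if n_aps <= 5:
        return "low"
    if n_aps <= 15:
        return "medium"
    return "high"

def make_pair(close: bool, n_aps: int):
    """Toy pair of RSSI fingerprints; close devices see similar RSSI."""
    base = rng.uniform(-90, -30, size=n_aps)
    jitter = 3.0 if close else 12.0
    return base, base + rng.normal(0, jitter, size=n_aps)

def feature(fp_a, fp_b) -> float:
    """Single feature: mean absolute RSSI difference over shared APs."""
    return float(np.mean(np.abs(fp_a - fp_b)))

# Train one threshold "classifier" per regime.
thresholds = {}
for name, n_aps in [("low", 4), ("medium", 10), ("high", 25)]:
    close = [feature(*make_pair(True, n_aps)) for _ in range(300)]
    far = [feature(*make_pair(False, n_aps)) for _ in range(300)]
    thresholds[name] = (np.mean(close) + np.mean(far)) / 2

def predict(fp_a, fp_b) -> int:
    """1 = within 2 m, 0 = further apart; dispatched by AP count."""
    return int(feature(fp_a, fp_b) < thresholds[regime(len(fp_a))])

correct = sum(predict(*make_pair(True, 10)) == 1 for _ in range(200))
accuracy = correct / 200
```

The specialists differ only in their learned thresholds here; in the paper they are full binary classifiers trained per AP-count range.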
|
We give a complete classification of left invariant para-K\"ahler structures
on four-dimensional simply connected Lie groups up to an automorphism. As an
application we discuss some curvature properties of the canonical connection
associated to these structures, such as flatness, Ricci flatness, and the
existence of Ricci solitons.
|
This paper studies the convergence of a spatial semi-discretization for a
backward semilinear stochastic parabolic equation. The filtration is general,
and the spatial semi-discretization uses the standard continuous piecewise
linear element method. Firstly, higher regularity of the solution to the
continuous equation is derived. Secondly, the first-order spatial accuracy is
derived for the spatial semi-discretization. Thirdly, an application of the
theoretical result to a stochastic linear quadratic control problem is
presented.
|
Quantum mechanics dictates the band-structure of materials that is essential
for functional electronic components. With increased miniaturization of devices
it becomes possible to exploit the full potential of quantum mechanics through
the principles of superpositions and entanglement. We propose a new class of
quantum rectifiers that can leverage entanglement to dramatically increase
performance by coupling two small spin chains through an effective double-slit
interface. Simulations show that rectification is enhanced by several orders of
magnitude even in small systems and should be realizable using several of the
quantum technology platforms currently available.
|
Light bridges (LBs) are bright lanes that divide an umbra into multiple parts
in some sunspots. Persistent oscillatory bright fronts at a temperature of
$\sim$$10^5$ K are commonly observed above LBs in the 1400/1330 \AA~passbands
of the Interface Region Imaging Spectrograph (IRIS). Based on IRIS
observations, we report small-scale bright blobs from the oscillating bright
front above a light bridge. Some of these blobs reveal a clear acceleration,
whereas the others do not. The average speed of these blobs projected onto the
plane of sky is $71.7\pm14.7$ km s$^{-1}$, with an initial acceleration of
$1.9\pm1.3$ km s$^{-2}$. These blobs normally reach a projected distance of
3--7 Mm from their origin sites. From the transition region images we find an
average projected area of $0.57\pm0.37$ Mm$^{2}$ for the blobs. The blobs were
also detected in multi-passbands of the Solar Dynamics Observatory, but not in
the H$\alpha$ images. These blobs are likely to be plasma ejections, and we
investigate their kinematics and energetics. Through emission measure analyses,
the typical temperature and electron density of these blobs are found to be
around $10^{5.47}$ K and $10^{9.7}$ cm$^{-3}$, respectively. The estimated
kinetic and thermal energies are on the order of $10^{22.8}$ erg and
$10^{23.3}$ erg, respectively. These small-scale blobs appear to show three
different types of formation process. They are possibly triggered by induced
reconnection or release of enhanced magnetic tension due to interaction of
adjacent shocks, local magnetic reconnection between emerging magnetic bipoles
on the light bridge and surrounding unipolar umbral fields, and plasma
acceleration or instability caused by upward shocks, respectively.
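The quoted kinetic energy can be checked from the other numbers, assuming a blob volume $V \approx A^{3/2}$ and a pure-hydrogen plasma of mass density $n_e m_p$ (both assumptions of this back-of-envelope sketch):

```python
import math

# Quoted blob parameters, in cgs units.
n_e = 10**9.7          # electron density, cm^-3
area = 0.57e16         # projected area, cm^2 (0.57 Mm^2)
speed = 71.7e5         # plane-of-sky speed, cm/s (71.7 km/s)
m_p = 1.6726e-24       # proton mass, g

# Assumptions of this estimate: volume V ~ A^(3/2) and a
# pure-hydrogen plasma with mass density n_e * m_p.
volume = area**1.5
mass = n_e * m_p * volume
e_kin = 0.5 * mass * speed**2

order = math.log10(e_kin)   # should land near the quoted 10^22.8 erg
```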
|
To realize spin wave logic gates, programmable phase inverters are essential.
We image with phase-resolved Brillouin light scattering microscopy propagating
spin waves in a one-dimensional magnonic crystal consisting of dipolarly
coupled magnetic nanostripes. We demonstrate phase shifts upon a single
nanostripe of opposed magnetization. Using micromagnetic simulations we model
our experimental finding in a wide parameter space of bias fields and wave
vectors. We find that low-loss phase inversion is achieved, when the internal
field of the oppositely magnetized nanostripe is tuned such that the latter
supports a resonant standing spin wave mode with odd quantization number at the
given frequency. Our results are key for the realization of phase inverters
with optimized signal transmission.
|
This work is the first to employ and adapt the image-to-image translation
concept based on conditional generative adversarial networks (cGAN) towards
learning a forward and an inverse solution operator of partial differential
equations (PDEs). Even though the proposed framework could be applied as a
surrogate model for the solution of any PDEs, here we focus on steady-state
solutions of coupled hydro-mechanical processes in heterogeneous porous media.
Strongly heterogeneous material properties, which translate to the
heterogeneity of coefficients of the PDEs and discontinuous features in the
solutions, require specialized techniques for the forward and inverse solution
of these problems. Additionally, parametrization of the spatially heterogeneous
coefficients is excessively difficult by using standard reduced order modeling
techniques. In this work, we overcome these challenges by employing the
image-to-image translation concept to learn the forward and inverse solution
operators and utilize a U-Net generator and a patch-based discriminator. Our
results show that the proposed data-driven reduced order model has competitive
predictive performance capabilities in accuracy and computational efficiency as
well as training time requirements compared to state-of-the-art data-driven
methods for both forward and inverse problems.
|
Identifying the underlying mechanisms behind the excitation of transverse
oscillations in coronal loops is essential for their role as diagnostic tools
in coronal seismology and their potential use as wave heating mechanisms of the
solar corona. In this paper, we explore the concept of these transverse
oscillations being excited through a self-sustaining process, caused by
Alfv\'{e}nic vortex shedding from strong background flows interacting with
coronal loops. We show for the first time in 3D simulations that vortex
shedding can generate transverse oscillations in coronal loops, in the
direction perpendicular to the flow due to periodic "pushing" by the vortices.
By plotting the power spectral density we identify the excited frequencies of
these oscillations. We see that these frequencies are dependent both on the
speed of the flow, as well as the characteristics of the oscillating loop.
This, in addition to the fact that the background flow is constant and not
periodic, makes us treat this as a self-oscillating process. Finally, the
excited oscillations have near-constant amplitudes, which are comparable
with the observations of decay-less oscillations. This makes the
mechanism under consideration a possible interpretation of these undamped waves
in coronal loops.
|
One-loop $W$ boson contributions to the decay $H\rightarrow Z\gamma$ in the
general $R_\xi$ gauge are presented. The analytical results are expressed in
terms of well-known Passarino-Veltman functions whose numerical evaluation
can be carried out using {\tt LoopTools}. In the limit $d\rightarrow
4$, we have shown that these analytical results are independent of the
unphysical parameter $\xi$ and consistent with previous results. Gauge
parameter independence is also checked numerically for consistency. Our
results remain stable for different values of $\xi =0, 1, 100,$ and
$\xi \rightarrow \infty$.
|
The performance and cost of Bi-2212/Ag wire is limited by the large fraction
of silver matrix (~3:1) that is required in the oxide-powder-in-tube
fabrication process. An alternative fabrication process is being developed in
which fine-powder Bi-2212 is uni-axially compressed to form bars with a thin Ag
foil sheath. The fine powder naturally textures (aligns the a-b planes
perpendicular to the direction of compaction) with texture >80% using 200 MPa
compression. A billet is formed by stacking trapezoidal-cross-section bars in a
symmetric 8-12-16 pattern around a Ag rod and enclosing in a Ag-wall extrusion
can. The billet is extruded and drawn to fine wire. Results are presented on
the present status of the development and testing.
|
Identifying the roles of individual units is critical for understanding the
mechanism of convolutional neural networks (CNNs). However, it is challenging
to give fully automatic and quantitative measures for assessing the
effectiveness of individual units in a CNN. To this end, we propose a novel
method for quantitatively clarifying the status and usefulness of a single
unit of a CNN in image classification tasks. The substance of our method is
to rank the importance of each unit for each class, based on the calculation
of a specifically defined entropy using algebraic topological tools. It can
be carried out entirely by machine, without any human intervention. Some
interesting phenomena, including a certain kind of phase transition, are
observed via the evolution of the accuracy and loss of the network in the
successive ablation process of
units. All of the network units are divided into four categories according to
their performance on training and testing data. The role categorization is
an excellent starting point for network construction and simplification. The
diverse utility of units and their contribution to network generalization in
classification tasks are thoroughly illustrated by extensive experiments on a
network (VGG) and a dataset (ImageNet) of considerable scale. Our method
extends readily to other network models and tasks without essential
difficulty.
|
This paper develops new analytical process noise covariance models for both
absolute and relative spacecraft states. Process noise is always present when
propagating a spacecraft state due to dynamics modeling deficiencies.
Accurately modeling this noise is essential for sequential orbit determination
and improves satellite conjunction analysis. A common approach called state
noise compensation models process noise as zero-mean Gaussian white noise
accelerations. The resulting process noise covariance can be evaluated
numerically, which is computationally intensive, or through a widely used
analytical model that is restricted to an absolute Cartesian state and small
propagation intervals. Moreover, mathematically rigorous, analytical process
noise covariance models for relative spacecraft states are not currently
available. To address these limitations of the state of the art, new analytical
process noise covariance models are developed for state noise compensation for
both Cartesian and orbital element absolute state representations by leveraging
spacecraft relative dynamics models. Two frameworks are then presented for
modeling the process noise covariance of relative spacecraft states by assuming
either small or large interspacecraft separation. The presented techniques are
validated through numerical simulations.
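For the textbook Cartesian double-integrator case, the widely used analytical SNC model referred to above has the closed form $Q = q\,[[\Delta t^3/3\,I,\ \Delta t^2/2\,I],[\Delta t^2/2\,I,\ \Delta t\,I]]$. A sketch comparing this closed form against direct numerical quadrature of the covariance integral (the double-integrator dynamics are a simplifying assumption, not the paper's full models):

```python
import numpy as np

def snc_covariance(dt: float, q: float, dim: int = 3) -> np.ndarray:
    """Closed-form state-noise-compensation covariance for a
    position/velocity state driven by white-noise accelerations."""
    I = np.eye(dim)
    return q * np.block([
        [dt**3 / 3 * I, dt**2 / 2 * I],
        [dt**2 / 2 * I, dt * I],
    ])

def snc_covariance_numeric(dt: float, q: float, dim: int = 3, n: int = 2000):
    """Numerical evaluation of Q = int Phi(dt,s) B (qI) B^T Phi^T ds."""
    I, Z = np.eye(dim), np.zeros((dim, dim))
    B = np.vstack([Z, I])                       # noise enters as acceleration
    Q = np.zeros((2 * dim, 2 * dim))
    for s in np.linspace(0, dt, n, endpoint=False):
        tau = dt - s
        Phi = np.block([[I, tau * I], [Z, I]])  # double-integrator STM
        Q += Phi @ B @ (q * I) @ B.T @ Phi.T * (dt / n)
    return Q

Q_closed = snc_covariance(10.0, 1e-8)
Q_numeric = snc_covariance_numeric(10.0, 1e-8)
err = np.abs(Q_closed - Q_numeric).max()
```

The closed form replaces the computationally intensive quadrature step at no loss of accuracy for this dynamics model.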
|
While game theory has been transformative for decision-making, the
assumptions made can be overly restrictive in certain instances. In this work,
we focus on some of the assumptions underlying rationality such as mutual
consistency and best response, and consider ways to relax these assumptions
using concepts from level-$k$ reasoning and quantal response equilibrium (QRE)
respectively. Specifically, we provide an information-theoretic two-parameter
model that can relax both mutual consistency and best response, but can recover
approximations of level-$k$, QRE, or typical Nash equilibrium behaviour in the
limiting cases. The proposed Quantal Hierarchy model is based on a recursive
form of the variational free energy principle, representing self-referential
games as (pseudo) sequential decisions. Bounds in player processing abilities
are captured as information costs, where future chains of reasoning are
discounted, implying a hierarchy of players where lower-level players have
fewer processing resources. We demonstrate the applicability of the proposed
model to several canonical economic games.
|
Recently a conformally invariant action describing the Wilson-Fischer fixed
point in $D=4-\epsilon$ dimensions in the presence of a {\em finite} UV cutoff
was constructed \cite{Dutta}. In the present paper we construct two composite
operator perturbations of this action with definite scaling dimension also in
the presence of a finite cutoff. Thus the operators (as well as the fixed
point action) are well defined at all momenta $0\leq p\leq \infty$, and at low
energies they reduce to $\int_x \phi^2$ and $\int_x \phi^4$ respectively. The
construction includes terms up to $O(\lambda^2)$. In the presence of a finite
cutoff they mix with higher order irrelevant operators. The dimensions are also
calculated to this order and agree with known results.
|
We consider the practicalities of defining, simulating, and characterizing
"Liquids" from a pedagogical standpoint based on atomistic computer
simulations. For simplicity and clarity we study two-dimensional systems
throughout. In addition to the infinite-ranged Lennard-Jones 12/6 potential we
consider two shorter-ranged families of pair potentials. At zero pressure one
of them includes just nearest neighbors. The other longer-ranged family
includes twelve additional neighbors. We find that these further neighbors can
help stabilize the liquid phase.
What about liquids? To implement Wikipedia's definition of liquids as
conforming to their container we begin by formulating and imposing
smooth-container boundary conditions. To encourage conformation further we add
a vertical gravitational field. Gravity helps stabilize the relatively vague
liquid-gas interface. Gravity reduces the messiness associated with the
curiously-named "spinodal" (tensile) portion of the phase diagram. Our
simulations are mainly isothermal. We control the kinetic temperature with
Nos\'e-Hoover thermostating, extracting or injecting heat so as to impose a
mean kinetic temperature over time. Our simulations stabilizing density
gradients and the temperature provide critical-point estimates fully consistent
with previous efforts from free energy and Gibbs' ensemble simulations. This
agreement validates our approach.
|
Recent work has highlighted the role of initialization scale in determining
the structure of the solutions that gradient methods converge to. In
particular, it was shown that large initialization leads to the neural tangent
kernel regime solution, whereas small initialization leads to so called "rich
regimes". However, the initialization structure is richer than the overall
scale alone and involves relative magnitudes of different weights and layers in
the network. Here we show that these relative scales, which we refer to as
initialization shape, play an important role in determining the learned model.
We develop a novel technique for deriving the inductive bias of gradient-flow
and use it to obtain closed-form implicit regularizers for multiple cases of
interest.
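The role of initialization shape can already be seen in a two-parameter toy model $f(x) = uvx$: the product $uv$ converges to the target either way, but which factor moves is set by the relative initial scales, since $u^2 - v^2$ is (nearly) conserved under gradient descent. This toy is illustrative only, not the paper's derivation:

```python
def train(u0: float, v0: float, lr: float = 0.01, steps: int = 20000):
    """Gradient descent on L(u, v) = (u*v - 1)^2 for the model f(x) = u*v*x."""
    u, v = u0, v0
    for _ in range(steps):
        g = 2 * (u * v - 1)                  # dL/d(uv)
        u, v = u - lr * g * v, v - lr * g * u
    return u, v

# Different initialization shapes:
u1, v1 = train(0.1, 0.1)    # balanced: u and v grow together
u2, v2 = train(1.0, 0.01)   # unbalanced: v moves far more than u,
                            # since u^2 - v^2 is (nearly) conserved

product_err = max(abs(u1 * v1 - 1), abs(u2 * v2 - 1))
```

Both runs fit the data (`u*v` reaches 1), but the solutions they converge to differ, which is exactly the shape-dependent inductive bias.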
|
The success of the current generation of Noisy Intermediate-Scale Quantum
(NISQ) hardware shows that quantum hardware may be able to tackle complex
problems even without error correction. One outstanding issue is that of
coherent errors arising from the increased complexity of these devices. These
errors can accumulate through a circuit, making their impact on algorithms hard
to predict and mitigate. Iterative algorithms like Quantum Imaginary Time
Evolution are susceptible to these errors. This article presents the
combination of both noise tailoring using Randomized Compiling and error
mitigation with a purification. We also show that Cycle Benchmarking gives an
estimate of the reliability of the purification. We apply this method to the
Quantum Imaginary Time Evolution of a Transverse Field Ising Model and report
an energy estimation and a ground state infidelity both below 1\%. Our
methodology is general and can be used for other algorithms and platforms. We
show how combining noise tailoring and error mitigation will push forward the
performance of NISQ devices.
|
For a compact set of actions, an invariant of Kushnirenko's entropy type is
chosen in such a way that on this set it is equal to zero, but will be infinity
for typical actions. As a consequence, we show that typical measure-preserving
transformations are not isomorphic to geometric shape exchange transformations.
This problem arose in connection with the result of Chaika and Davis about the
atypical nature of IETs.
|
Neural language models exhibit impressive performance on a variety of tasks,
but their internal reasoning may be difficult to understand. Prior art aims to
uncover meaningful properties within model representations via probes, but it
is unclear how faithfully such probes portray information that the models
actually use. To overcome such limitations, we propose a technique, inspired by
causal analysis, for generating counterfactual embeddings within models. In
experiments testing our technique, we produce evidence that suggests some
BERT-based models use a tree-distance-like representation of syntax in
downstream prediction tasks.
|
By ungauging a recently discovered lattice rotor model for Chern-Simons
theory, we create an exactly soluble path integral on a spacetime lattice for
$U^\kappa(1)$ Symmetry Protected Topological (SPT) phases in $2+1$ dimensions
with a non-zero Hall conductance. We then convert the path integral on a $2+1$d
spacetime lattice into a $2$d Hamiltonian lattice model, and show that the
Hamiltonian consists of mutually commuting local projectors. We confirm the
non-zero Hall conductance by calculating the Chern number of the exact ground
state. It has recently been suggested that no commuting projector model can
host a nonzero Hall conductance. We evade this no-go theorem by considering a
rotor model, with a countably infinite number of states per site.
|
With the need of fast retrieval speed and small memory footprint, document
hashing has been playing a crucial role in large-scale information retrieval.
To generate high-quality hashing code, both semantics and neighborhood
information are crucial. However, most existing methods leverage only one of
them or simply combine them via some intuitive criteria, lacking a theoretical
principle to guide the integration process. In this paper, we encode the
neighborhood information with a graph-induced Gaussian distribution, and
propose to integrate the two types of information with a graph-driven
generative model. To deal with the complicated correlations among documents, we
further propose a tree-structured approximation method for learning. Under the
approximation, we prove that the training objective can be decomposed into
terms involving only singleton or pairwise documents, enabling the model to be
trained as efficiently as uncorrelated ones. Extensive experimental results on
three benchmark datasets show that our method achieves superior performance
over state-of-the-art methods, demonstrating the effectiveness of the proposed
model for simultaneously preserving semantic and neighborhood information.
|
Toroidal rotation is well known to play significant roles in the edge
transport and L-H transition dynamics of tokamaks. Our recent calculation finds
that a sufficiently strong localized toroidal rotation can directly bring
about the formation of an edge pressure pedestal with reversed magnetic shear
that is
reminiscent of an H-mode plasma, purely through the effects of toroidal
rotation on the tokamak MHD equilibrium itself. In particular, the enhanced
edge toroidal rotation enables a substantial peaking of the parallel current
profile near edge in higher $\beta$ regimes, which leads to the flattening or
reversal of the local $q$ (safety factor) profile. Here the formation of
pressure pedestal along with the reversed magnetic shear region is shown to be
the natural outcome of the MHD tokamak equilibrium in a self-consistent
response to the presence of a localized toroidal rotation typically observed in
H-mode or QH-mode.
|
The exchange of information between an open quantum system and its
environment allows us to discriminate among different kinds of dynamics, in
particular detecting memory effects to characterize non-Markovianity. Here, we
investigate the role played by the system-environment correlations and the
environmental evolution in the flow of information. First, we derive general
conditions ensuring that two generalized dephasing microscopic models of the
global system-environment evolution result exactly in the same open-system
dynamics, for any initial state of the system. Then, we use the trace distance
to quantify the distinct contributions to the information inside and outside
the open system in the two models. Our analysis clarifies how the interplay
between system-environment correlations and environmental-state
distinguishability can lead to the same information flow from and toward the
open system, despite significant qualitative and quantitative differences at
the level of the global evolution.
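For reference, the trace distance used in this analysis is the standard one, $D(\rho_1,\rho_2)=\frac{1}{2}\mathrm{Tr}\,|\rho_1-\rho_2|$, and in the Breuer-Laine-Piilo approach information flows back toward the open system whenever $\sigma(t)=\frac{d}{dt}D\big(\rho_S^1(t),\rho_S^2(t)\big)>0$ for some pair of initial system states, so that revivals of the trace distance witness memory effects.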
|
Recently a non-supersymmetric conformal field theory with an exactly marginal
deformation in the large $N$ limit was constructed by
Chaudhuri-Choi-Rabinovici. On a non-supersymmetric conformal manifold, the
$c$ coefficient of the trace anomaly in four dimensions would generically
change.
In this model, we, however, find that it does not change at the first
non-trivial order given by three-loop diagrams.
|
We propose a logic of knowledge for impure simplicial complexes. Impure
simplicial complexes represent distributed systems under uncertainty over which
processes are still active (are alive) and which processes have failed or
crashed (are dead). Our work generalizes the logic of knowledge for pure
simplicial complexes, where all processes are alive, by Goubault et al. Our
logical semantics has a satisfaction relation defined simultaneously with a
definability relation. The latter restricts which formulas are allowed to have
a truth value: dead processes cannot know or be ignorant of any proposition,
and live processes cannot know or be ignorant of propositions involving
processes they know to be dead. The logic satisfies some but not all axioms and
rules of the modal logic S5. Impure simplicial complexes correspond to Kripke
models where each agent's accessibility relation is an equivalence relation on
a subset of the domain only, and otherwise empty, and where each propositional
variable is known by an agent. We also propose a notion of bisimulation for
impure simplexes and show bisimulation correspondence on certain finitary
simplexes. Dynamic aspects of our semantics, such as how to formalize
possibly incomplete tasks and algorithms in distributed computing, are left
for future research.
|
Missing node attributes is a common problem in real-world graphs. Graph
neural networks have been demonstrated powerful in graph representation
learning, however, they rely heavily on the completeness of graph information.
Few of them consider incomplete node attributes, which can severely degrade
performance in practice. In this paper, we propose an innovative
node representation learning framework, Wasserstein graph diffusion (WGD), to
mitigate the problem. Instead of feature imputation, our method directly learns
node representations from the missing-attribute graphs. Specifically, we extend
the message passing schema in general graph neural networks to a Wasserstein
space derived from the decomposition of attribute matrices. We test WGD in node
classification tasks under two settings: missing whole attributes on some nodes
and missing only partial attributes on all nodes. In addition, we find WGD is
suitable to recover missing values and adapt it to tackle matrix completion
problems with graphs of users and items. Experimental results on both tasks
demonstrate the superiority of our method.
|
Non-potential magnetic energy promptly released in solar flares is converted
to other forms of energy. This may include nonthermal energy of
flare-accelerated particles, thermal energy of heated flaring plasma, and
kinetic energy of eruptions, jets, up/down flows, and stochastic (turbulent)
plasma motions. Which processes or parameters govern the partitioning of the
released energy between these components is an open question. How these
components are distributed between distinct flaring loops and what controls
these spatial distributions is also unclear. Here, based on multi-wavelength
data and 3D modeling, we quantify the energy partitioning and spatial
distribution in the well observed SOL2014-02-16T064620 solar flare of class
C1.5. The nonthermal emission of this flare displayed a simple impulsive
single-spike light curve lasting about 20\,s. In contrast, the thermal
emission demonstrated at least three distinct heating episodes, only one of
which was associated with the nonthermal component. The flare was accompanied
by up and down flows and substantial turbulent velocities. The results of our
analysis suggest that (i) the flare occurs in a multi-loop system that included
at least three distinct flux tubes; (ii) the released magnetic energy is
divided unevenly between the thermal and nonthermal components in these loops;
(iii) only one of these three flaring loops contains an energetically important
amount of nonthermal electrons, while two other loops remain thermal; (iv) the
amounts of direct plasma heating and that due to nonthermal electron loss are
comparable; (v) the kinetic energy in the flare footpoints constitutes only a
minor fraction compared with the thermal and nonthermal energies.
|
This article develops for the first time a rigorous analysis of Hibler's
model of sea ice dynamics. Identifying Hibler's ice stress as a quasilinear
second order operator and regarding Hibler's model as a quasilinear evolution
equation, it is shown that Hibler's coupled sea ice model, i.e., the model
coupling velocity, thickness and compactness of sea ice, is locally strongly
well-posed within the $L_q$-setting and also globally strongly well-posed for
initial data close to constant equilibria.
|
Ensemble data from Earth system models has to be calibrated and
post-processed. I propose a novel member-by-member post-processing approach
with neural networks. I bridge ideas from ensemble data assimilation with
self-attention, resulting in the self-attentive ensemble transformer. Here,
interactions between ensemble members are represented as an additive and
dynamic self-attentive part. As a proof of concept, I regress global ECMWF
ensemble
forecasts to 2-metre-temperature fields from the ERA5 reanalysis. I demonstrate
that the ensemble transformer can calibrate the ensemble spread and extract
additional information from the ensemble. As it is a member-by-member approach,
the ensemble transformer directly outputs multivariate and spatially-coherent
ensemble members. Therefore, self-attention and the transformer technique can
be a missing piece for a non-parametric post-processing of ensemble data with
neural networks.
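The member-by-member attention idea can be sketched as follows: each ensemble member receives an additive correction computed by self-attention over all members. The random projection matrices below are stand-ins for trained parameters:

```python
import numpy as np

def ensemble_self_attention(members: np.ndarray, d_k: int = 8,
                            rng=np.random.default_rng(0)) -> np.ndarray:
    """Member-by-member correction: each ensemble member is updated by an
    additive self-attention term over all members (weights are random
    stand-ins for trained projections)."""
    m, d = members.shape                 # m members, d grid points/features
    W_q = rng.normal(0, d**-0.5, (d, d_k))
    W_k = rng.normal(0, d**-0.5, (d, d_k))
    W_v = rng.normal(0, d**-0.5, (d, d))

    Q, K, V = members @ W_q, members @ W_k, members @ W_v
    logits = Q @ K.T / np.sqrt(d_k)                    # (m, m) member-member
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # softmax rows
    return members + weights @ V       # additive, dynamic correction

ens = np.random.default_rng(1).normal(size=(50, 16))   # 50 members
out = ensemble_self_attention(ens)
```

Because every member is mapped to a corrected member, the output is itself a full, spatially coherent ensemble rather than a set of distribution parameters.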
|
Cold spots are sub-wavelength regions which might emerge near a plasmonic
nanoantenna, should one or more components of some far-field illumination
cancel out with scattered light. With a simplest-case demonstration using two
dipolar scatterers, we show that by changing only the polarisation and
amplitude of two plane waves, a unique, zero-magnitude and super sub-wavelength
cold spot can be created anywhere in the space around a nanoantenna. This
technique is a means for ultra-fast, remote, and non-mechanical sub-wavelength
electric field manipulation.
|
Bayesian inference allows one to obtain useful information on the parameters of
models, either in computational statistics or more recently in the context of
Bayesian Neural Networks. The computational cost of usual Monte Carlo methods
for sampling a posteriori laws in Bayesian inference scales linearly with the
number of data points. One option to reduce it to a fraction of this cost is to
resort to mini-batching in conjunction with unadjusted discretizations of
Langevin dynamics, in which case only a random fraction of the data is used to
estimate the gradient. However, this leads to an additional noise in the
dynamics and hence a bias on the invariant measure which is sampled by the
Markov chain. We advocate using the so-called Adaptive Langevin dynamics, which
is a modification of standard inertial Langevin dynamics with a dynamical
friction which automatically corrects for the increased noise arising from
mini-batching. We investigate the practical relevance of the assumptions
underpinning Adaptive Langevin (constant covariance for the estimation of the
gradient), which are not satisfied in typical models of Bayesian inference, and
quantify the bias induced by minibatching in this case. We also show how to
extend AdL in order to systematically reduce the bias on the posterior
distribution by considering a dynamical friction depending on the current value
of the parameter to sample.
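A minimal sketch of the Adaptive Langevin (stochastic-gradient Nosé-Hoover) idea on a one-dimensional Gaussian toy posterior; the artificial gradient noise mimics mini-batching, and the friction variable $\xi$ adapts to absorb it:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(theta: float) -> float:
    """Gradient of -log N(0,1) plus artificial noise mimicking mini-batching."""
    return theta + rng.normal(0.0, 0.5)

# Adaptive Langevin: the friction xi is itself a dynamical variable that
# adapts so that the extra (unknown) gradient-noise variance is absorbed.
dt, steps = 0.05, 20000
theta, p, xi = 0.0, 0.0, 1.0
samples = []
for _ in range(steps):
    p += (-noisy_grad(theta) - xi * p) * dt + np.sqrt(2 * dt) * rng.normal()
    theta += p * dt
    xi += (p * p - 1.0) * dt    # Nose-Hoover feedback toward unit temperature
    samples.append(theta)

var = float(np.var(samples[5000:]))   # should land near the true variance 1
```

Unlike plain unadjusted Langevin with mini-batching, no estimate of the gradient-noise covariance is needed: the thermostat variable corrects for it automatically when that covariance is constant.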
|
In localization microscopy, subnanometer precision is possible but supporting
accuracy is challenging, and no study has demonstrated reliable traceability to
the International System of Units (SI). To do so, we measure the positions of
nanoscale apertures in a reference array by traceable atomic-force microscopy,
creating a master standard. We perform correlative measurements of this
standard by optical microscopy, correcting position errors from optical
aberrations by a Zernike calibration. We establish an uncertainty field due to
localization errors and scale uncertainty, with regions of position
traceability to within a 68 % coverage interval of +/- 1.0 nm. These results
enable localization metrology with high throughput, which we apply to measure
working standards, validating the subnanometer accuracy of lithographic pitch.
|
Motivated by entropic optimal transport, time reversal of diffusion processes
is revisited. An integration by parts formula is derived for the carr\'e du
champ of a Markov process in an abstract space. It leads to a time reversal
formula for a wide class of diffusion processes in $ \mathbb{R}^n$ possibly
with singular drifts, extending the already known results in this domain.
The proof of the integration by parts formula relies on stochastic
derivatives. Then, this formula is applied to compute the semimartingale
characteristics of the time-reversed $P^*$ of a diffusion measure $P$ provided
that the relative entropy of $P$ with respect to another diffusion measure $R$
is finite, and the semimartingale characteristics of the time-reversed $R^*$
are known (for instance when the reference path measure $R$ is reversible).
As an illustration of the robustness of this method, the integration by parts
formula is also employed to derive a time-reversal formula for a random walk on
a graph.
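For orientation, the classical statement being extended here (constant diffusion matrix $\sigma$, marginal densities $p_t$; a standard textbook form, not the paper's most general result) reads:

```latex
dX_t = b(t, X_t)\,dt + \sigma\, dW_t
\quad\Longrightarrow\quad
dX^*_t = \Bigl[-b(T-t, X^*_t) + \sigma\sigma^{\top}\,
\nabla \log p_{T-t}(X^*_t)\Bigr]\,dt + \sigma\, dW^*_t,
\qquad X^*_t := X_{T-t}.
```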
|
Collecting and aggregating information from several probability measures or
histograms is a fundamental task in machine learning. One of the popular
solution methods for this task is to compute the barycenter of the probability
measures under the Wasserstein metric. However, approximating the Wasserstein
barycenter is numerically challenging because of the curse of dimensionality.
This paper proposes the projection robust Wasserstein barycenter (PRWB) that
has the potential to mitigate the curse of dimensionality. Since PRWB is
numerically very challenging to solve, we further propose a relaxed PRWB
(RPRWB) model, which is more tractable. The RPRWB projects the probability
measures onto a lower-dimensional subspace that maximizes the Wasserstein
barycenter objective. The resulting problem is a max-min problem over the
Stiefel manifold. By combining the iterative Bregman projection algorithm and
Riemannian optimization, we propose two new algorithms for computing the RPRWB.
The complexity of arithmetic operations of the proposed algorithms for
obtaining an $\epsilon$-stationary solution is analyzed. We incorporate the
RPRWB into a discrete distribution clustering algorithm, and the numerical
results on real text datasets confirm that our RPRWB model helps improve the
clustering performance significantly.
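Schematically, in our own (hypothetical) notation, with $\mathrm{St}(d,k)$ the Stiefel manifold of $d \times k$ orthonormal matrices, $U^{\top}_{\#}\mu_l$ the pushforward of $\mu_l$ under the projection $x \mapsto U^{\top}x$, and weights $\omega_l$, the max-min structure described above reads (see the paper for the precise definition):

```latex
\max_{U \in \mathrm{St}(d,k)} \;\; \min_{\mu} \;\;
\sum_{l=1}^{m} \omega_l \, \mathcal{W}_2^2\bigl(U^{\top}_{\#}\mu_l,\ \mu\bigr).
```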
|
Context. We report the discovery of VVV-CL160, a new nearby globular cluster
(GC) with extreme kinematics, located in the Galactic plane at $l = 10.1477$
deg, $b = 0.2999$ deg. Aims. We aim to characterize the physical properties of
this new GC and place it in the context of the Milky Way, exploring its
possible connection with the known GC NGC 6544 and with the Hrid halo stream.
Methods. VVV-CL160 was originally detected in the VISTA Variables in the V\'ia
L\'actea (VVV) survey. We use the proper motions (PMs) from the updated VVV
Infrared Astrometric Catalog (VIRAC2) to select GC members and make deep
near-infrared color-magnitude diagrams (CMDs) to study the cluster properties.
We also fit King models to the decontaminated sample to determine the GC
structural parameters. Results. VVV-CL160 has an unusually large PM for a
Galactic GC as measured with VIRAC2 and Gaia EDR3: $\mu_{\alpha}\cos(\delta)$ =
$-2.3 \pm 0.1 $ mas yr$^{-1}$ and $\mu_{\delta}$ = $-16.8 \pm 0.1 $ mas
yr$^{-1}$. The kinematics are similar to those of the known GC NGC 6544 and the
Hrid halo stream. We estimate a reddening of $E(J-K) = 1.95$ mag and an
extinction of $A_{k}= 1.40$ mag for VVV-CL160. We also measure a distance
modulus of $(m-M) = 13.01$ mag and a distance of $D_{\odot} = 4.0 \pm 0.5$ kpc.
This places the GC at $z=29$ pc above the Galactic plane and at a
galactocentric distance of $R_G=4.2$ kpc. We also measure a metallicity of
$[Fe/H] = -1.4 \pm 0.2$ dex for an adopted age of $t=12$ Gyr; King model fits
of the PM-decontaminated sample reveal a concentrated GC, with core radius
$r_{c}= 22.8"$ and tidal radius $r_{t}= 50'$. .... We also explore the possible
association of this new GC with other GCs and halo streams. Conclusions. Based
on the locations and kinematics, we suggest that VVV-CL160, along with NGC
6544, may be associated with the extension of the Hrid halo stream.
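As a quick arithmetic check (a sketch using only values quoted above), the heliocentric distance follows from the distance modulus via $D[\mathrm{pc}] = 10^{((m-M)+5)/5}$:

```python
# Sanity-check the quoted distance from the distance modulus (m - M) = 13.01 mag:
# D [pc] = 10 ** (((m - M) + 5) / 5).
def distance_pc(distance_modulus_mag):
    return 10 ** ((distance_modulus_mag + 5.0) / 5.0)

d_kpc = distance_pc(13.01) / 1000.0  # approximately 4.0 kpc, as quoted above
```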
|
A rank-$r$ integer matrix $A$ is $\Delta$-modular if the determinant of each
$r \times r$ submatrix has absolute value at most $\Delta$. The class of
$1$-modular, or unimodular, matrices is of fundamental significance in both
integer programming theory and matroid theory. A 1957 result of Heller shows
that the maximum number of nonzero, pairwise non-parallel rows of a rank-$r$
unimodular matrix is ${r + 1 \choose 2}$. We prove that, for each sufficiently
large integer $r$, the maximum number of nonzero, pairwise non-parallel rows of
a rank-$r$ $2$-modular matrix is ${r + 2 \choose 2} - 2$.
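The two counting formulas are easy to evaluate numerically (keeping in mind that the $2$-modular bound is proved only for sufficiently large $r$); a minimal sketch:

```python
from math import comb

def max_rows_unimodular(r):
    # Heller (1957): at most C(r + 1, 2) nonzero, pairwise non-parallel rows.
    return comb(r + 1, 2)

def max_rows_2modular(r):
    # This paper, for all sufficiently large r: C(r + 2, 2) - 2.
    return comb(r + 2, 2) - 2
```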
|
Remote sensing images and techniques are powerful tools to investigate the
Earth's surface. Data quality is key to enhancing remote sensing applications,
and obtaining clear, noise-free data is very difficult in most situations due
to varying acquisition (e.g., atmosphere and season), sensor, and platform
(e.g., satellite angles and sensor characteristics) conditions. With the
continuing development of satellites, terabytes of remote sensing images can
now be acquired every day. Therefore, information and
data fusion can be particularly important in the remote sensing community. The
fusion integrates data from various sources acquired asynchronously for
information extraction, analysis, and quality improvement. In this chapter, we
aim to discuss the theory of spatiotemporal fusion by investigating previous
works, in addition to describing the basic concepts and some of its
applications by summarizing our prior and ongoing works.
|
Deep neural networks give state-of-the-art accuracy for reconstructing images
from few and noisy measurements, a problem arising for example in accelerated
magnetic resonance imaging (MRI). However, recent works have raised concerns
that deep-learning-based image reconstruction methods are sensitive to
perturbations and are less robust than traditional methods: Neural networks (i)
may be sensitive to small, yet adversarially-selected perturbations, (ii) may
perform poorly under distribution shifts, and (iii) may fail to recover small
but important features in an image. In order to understand the sensitivity to
such perturbations, in this work, we measure the robustness of different
approaches for image reconstruction including trained and un-trained neural
networks as well as traditional sparsity-based methods. We find, contrary to
prior works, that both trained and un-trained methods are vulnerable to
adversarial perturbations. Moreover, both trained and un-trained methods tuned
for a particular dataset suffer very similarly from distribution shifts.
Finally, we demonstrate that an image reconstruction method that achieves
higher reconstruction quality, also performs better in terms of accurately
recovering fine details. Our results indicate that state-of-the-art
deep-learning-based image reconstruction methods provide improved performance
over traditional methods without compromising robustness.
|
Efficiently approximating local curvature information of the loss function is
a key tool for optimization and compression of deep neural networks. Yet, most
existing methods to approximate second-order information have high
computational or storage costs, which can limit their practicality. In this
work, we investigate matrix-free, linear-time approaches for estimating
Inverse-Hessian Vector Products (IHVPs) for the case when the Hessian can be
approximated as a sum of rank-one matrices, as in the classic approximation of
the Hessian by the empirical Fisher matrix. We propose two new algorithms as
part of a framework called M-FAC: the first algorithm is tailored towards
network compression and can compute the IHVP for dimension $d$, if the Hessian
is given as a sum of $m$ rank-one matrices, using $O(dm^2)$ precomputation,
$O(dm)$ cost for computing the IHVP, and query cost $O(m)$ for any single
element of the inverse Hessian. The second algorithm targets an optimization
setting, where we wish to compute the product between the inverse Hessian,
estimated over a sliding window of optimization steps, and a given gradient
direction, as required for preconditioned SGD. We give an algorithm with cost
$O(dm + m^2)$ for computing the IHVP and $O(dm + m^3)$ for adding or removing
any gradient from the sliding window. These two algorithms yield
state-of-the-art results for network pruning and optimization with lower
computational overhead relative to existing second-order methods.
Implementations are available at [9] and [17].
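To convey the flavor of the rank-one setting, here is a self-contained sketch of recursive Sherman-Morrison updates for $H = \lambda I + \frac{1}{m}\sum_i g_i g_i^{\top}$. This is our own illustration of the general idea with hypothetical names, not the authors' M-FAC implementation:

```python
import numpy as np

# Matrix-free IHVP for H = lam * I + (1/m) * sum_i g_i g_i^T, built by
# recursive Sherman-Morrison updates over the rank-one terms.

def precompute(G, lam):
    """G: (m, d) stacked gradients. O(d m^2) work, O(d m) memory."""
    m, _ = G.shape
    V = np.empty_like(G, dtype=float)
    coeffs = np.empty(m)
    for k in range(m):
        # v_k = H_{k-1}^{-1} g_k, using the inverse built from rows 0..k-1.
        v = G[k] / lam
        for j in range(k):
            v = v - coeffs[j] * (V[j] @ G[k]) * V[j]
        V[k] = v
        coeffs[k] = 1.0 / (m + G[k] @ v)  # Sherman-Morrison denominator
    return V, coeffs

def ihvp(x, lam, V, coeffs):
    """Apply H^{-1} to x in O(d m): H^{-1} = I/lam - sum_k c_k v_k v_k^T."""
    y = x / lam
    for c, v in zip(coeffs, V):
        y = y - c * (v @ x) * v
    return y
```

With V and coeffs cached, each query touches only length-$d$ vectors, matching the $O(dm)$ IHVP cost quoted above.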
|
With a growing number of cores per socket in modern data-centers where
multi-tenancy of a diverse set of applications must be efficiently supported,
effective sharing of the last level cache is a very important problem. This is
challenging because modern workloads exhibit dynamic phase behavior: their
cache requirements and sensitivity vary across different execution points. To
tackle this problem, we propose Com-CAS, a compiler-guided cache apportioning
system that provides smart cache allocation to co-executing applications in a
system. The front-end of Com-CAS is primarily a compiler-framework equipped
with learning mechanisms to predict cache requirements, while the backend
consists of an allocation framework with a pro-active scheduler that apportions
cache dynamically to co-executing applications. Our system improved average
throughput by 21%, with a maximum of 54%, while keeping the worst individual
application's execution-time degradation within 15% to meet SLA requirements.
|
The performance of algorithms for neural architecture search strongly depends
on the parametrization of the search space. We use contrastive learning to
identify networks across different initializations based on their data
Jacobians, and automatically produce the first architecture embeddings
independent of the parametrization of the search space. Using our contrastive
embeddings, we show that traditional black-box optimization algorithms, without
modification, can reach state-of-the-art performance in Neural Architecture
Search. As our method provides a unified embedding space, we perform for the
first time transfer learning between search spaces. Finally, we show the
evolution of embeddings during training, motivating future studies into using
embeddings at different training stages to gain a deeper understanding of the
networks in a search space.
|
The field of natural language understanding has experienced exponential
progress in the last few years, with impressive results in several tasks. This
success has motivated researchers to study the underlying knowledge encoded by
these models. Despite this, attempts to understand their semantic capabilities
have not been successful, often leading to inconclusive or contradictory
conclusions across different works. Via a probing classifier, we extract the
underlying knowledge graph of nine of the most influential language models of
the last years, including word embeddings, text generators, and context
encoders. This probe is based on concept relatedness, grounded on WordNet. Our
results reveal that all the models encode this knowledge, but suffer from
several inaccuracies. Furthermore, we show that the different architectures and
training strategies lead to different model biases. We conduct a systematic
evaluation to discover specific factors that explain why some concepts are
challenging. We hope our insights will motivate the development of models that
capture concepts more precisely.
|
Convolutional Neural Networks (CNNs) are successful deep learning models in
the field of computer vision. To take maximum advantage of CNN models for
Human Action Recognition (HAR) using inertial sensor data, in this paper we
use four types of spatial-domain methods for transforming inertial sensor data
into activity images, which are then utilized in a novel fusion framework. These
four types of activity images are Signal Images (SI), Gramian Angular Field
(GAF) Images, Markov Transition Field (MTF) Images and Recurrence Plot (RP)
Images. Furthermore, to create a multimodal fusion framework and exploit the
activity images, we make each type of activity image multimodal by convolving
it with two spatial-domain filters: the Prewitt filter and the high-boost
filter. ResNet-18, a CNN model, is used to learn deep features from these
modalities.
Learned features are extracted from the last pooling layer of each ResNet and
then fused by canonical correlation based fusion (CCF) to improve the accuracy
of human action recognition. These highly informative features serve as input
to a multiclass Support Vector Machine (SVM). Experimental
results on three publicly available inertial datasets show the superiority of
the proposed method over the current state-of-the-art.
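As one concrete example of these transforms (a minimal sketch of the standard Gramian Angular Summation Field; the other image types follow analogous recipes):

```python
import numpy as np

def gramian_angular_field(signal):
    """Standard GASF: rescale a 1-D signal to [-1, 1], map each sample to an
    angle phi = arccos(x), and form the matrix cos(phi_i + phi_j)."""
    x = np.asarray(signal, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])
```

Applied to a window of inertial sensor samples, this yields one single-channel image per axis, which can then be filtered (e.g., Prewitt or high-boost) and fed to the CNN.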
|
The paper presents a new approach to multiview video coding using Screen
Content Coding. It is assumed that for a time instant the frames corresponding
to all views are packed into a single frame, i.e. the frame-compatible approach
to multiview coding is applied. For such coding scenario, the paper
demonstrates that Screen Content Coding can be efficiently used for multiview
video coding. Two approaches are considered: the first using standard HEVC
Screen Content Coding, and the second using Advanced Screen Content Coding. The
latter is the original proposal of the authors that exploits quarter-pel motion
vectors and other nonstandard extensions of HEVC Screen Content Coding. The
experimental results demonstrate that multiview video coding even using
standard HEVC Screen Content Coding is much more efficient than simulcast HEVC
coding. The proposed Advanced Screen Content Coding provides virtually the same
coding efficiency as MV-HEVC, which is the state-of-the-art multiview video
compression technique. The authors suggest that Advanced Screen Content Coding
can be efficiently used within the new Versatile Video Coding (VVC) technology.
Nevertheless, a reference multiview extension of VVC does not yet exist;
therefore, for VVC-based coding, the experimental comparisons are left for
future work.
|