Contemporary approaches to perception, planning, estimation, and control have
allowed robots to operate robustly as our remote surrogates in uncertain,
unstructured environments. This progress now creates an opportunity for robots
to operate not only in isolation, but also with and alongside humans in our
complex environments. Realizing this opportunity requires an efficient and
flexible medium through which humans can communicate with collaborative robots.
Natural language provides one such medium, and through significant progress in
statistical methods for natural-language understanding, robots are now able to
interpret a diverse array of free-form commands. However, most contemporary
approaches require a detailed, prior spatial-semantic map of the robot's
environment that models the space of possible referents of an utterance.
Consequently, these methods fail when robots are deployed in new, previously
unknown, or partially-observed environments, particularly when mental models of
the environment differ between the human operator and the robot. This paper
provides a comprehensive description of a novel learning framework that allows
field and service robots to interpret and correctly execute natural-language
instructions in a priori unknown, unstructured environments. Integral to our
approach is its use of language as a "sensor" -- inferring spatial,
topological, and semantic information implicit in the utterance and then
exploiting this information to learn a distribution over a latent environment
model. We incorporate this distribution in a probabilistic, language grounding
model and infer a distribution over a symbolic representation of the robot's
action space. We use imitation learning to identify a belief-space policy that
reasons over the environment and behavior distributions. We evaluate our
framework through a variety of navigation and mobile-manipulation experiments.
|
This paper presents a model predictive control (MPC)-based online real-time
adaptive control scheme for emergency voltage control in power systems. Despite
tremendous success in various applications, real-time implementation of MPC for
control in power systems has not been successful, because the online computational burden for large systems exceeds the time available between two successive control decisions. This long-standing problem is addressed here by developing a novel MPC-based adaptive control framework which (i) adapts the nominal offline-computed control through successive control corrections at each control decision point using the latest measurements, and (ii) utilizes a data-driven approach to predict the voltage trajectory and its sensitivity with respect to the control using trained deep neural networks (DNNs).
In addition, a realistic coordination scheme among control inputs of static var
compensators (SVC), load-shedding (LS), and load tap-changers (LTC) is
presented with a goal of maintaining bus voltages within a predefined
permissible range, where the delayed effect of LTC action is also incorporated
in a novel way. The performance of the proposed scheme is validated for IEEE
9-bus as well as 39-bus systems, with $\pm 20\%$ variations in nominal loading
conditions. We also show that the proposed new scheme speeds up the online
computation by a factor of 20, bringing it down to under one-tenth of the control interval and making MPC-based power system control practically feasible.
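In hedged, schematic terms (our notation, not necessarily the authors'), the successive control correction at decision point $k$ can be viewed as a linearized trajectory-tracking update built from the two DNN outputs:
\begin{equation}
u_k = u_k^{\mathrm{nom}} + \Delta u_k, \qquad \Delta u_k = \arg\min_{\Delta u} \bigl\| \hat{V}_k + \hat{S}_k\,\Delta u - V_{\mathrm{ref}} \bigr\|_2^2 + \rho\,\|\Delta u\|_2^2,
\end{equation}
where $\hat{V}_k$ is the DNN-predicted voltage trajectory under the nominal control, $\hat{S}_k$ is the DNN-predicted sensitivity of that trajectory with respect to the control, both evaluated from the latest measurements, and $\rho \ge 0$ is a regularization weight.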
|
Graphene oxide (GO) is reduced by Joule heating using in-situ transmission
electron microscopy (TEM). The approach allows the simultaneous study of GO
conductivity by electrical measurements and of its composition and structural
properties throughout the reduction process by TEM, electron diffraction and
electron energy-loss spectroscopy. The small changes of GO properties observed
at low applied electric currents are attributed to the promotion of diffusion
processes. The actual reduction process starts from an applied power density of about $2 \times 10^{14}$ W m$^{-3}$ and occurs in a highly uniform and localized manner. The conductivity increases by more than 4 orders of magnitude, reaching a value of $3 \times 10^{3}$ S m$^{-1}$ with a final O content of less than 1%. We discuss differences between
the reduction by thermal annealing and Joule heating.
|
We have recently proposed a Lagrangian in trace dynamics, to describe a
possible unification of gravity, Yang-Mills fields, and fermions, at the Planck
scale. This Lagrangian for the unified entity - called the aikyon - is
invariant under global unitary transformations, and as a result possesses a
novel conserved charge, known as the Adler-Millard charge. In the present
paper, we derive an eigenvalue equation, analogous to the time-independent
Schr\"{o}dinger equation, for the Hamiltonian of the theory. We show that in
the emergent quantum theory, the energy eigenvalues of the aikyon are
characterised in terms of a fundamental frequency times Planck's constant. The
eigenvalues of this equation can, in principle, determine the values of the
parameters of the standard model. We also report a ground state, in this theory
of spontaneous quantum gravity, which could characterise a non-singular initial
epoch in quantum cosmology.
|
Probing classifiers have emerged as one of the prominent methodologies for
interpreting and analyzing deep neural network models of natural language
processing. The basic idea is simple -- a classifier is trained to predict some
linguistic property from a model's representations -- and has been used to
examine a wide variety of models and properties. However, recent studies have
demonstrated various methodological limitations of this approach. This article critically reviews the probing classifiers framework, highlighting its promises, shortcomings, and advances.
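As a minimal illustration of the basic idea being reviewed (purely schematic; the data, model, and property here are placeholders, not from the article), a probing classifier is simply a lightweight supervised model fit on frozen representations:

```python
# Minimal probing-classifier sketch (illustrative only; all names and data are
# hypothetical). A linear classifier is trained to predict a linguistic
# property from frozen representations extracted from a pretrained NLP model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
reprs = rng.normal(size=(1000, 768))        # placeholder hidden states
labels = rng.integers(0, 5, size=1000)      # placeholder property labels

X_tr, X_te, y_tr, y_te = train_test_split(reprs, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Probing accuracy is then read as (indirect) evidence that the property is
# encoded in the representations -- the very inference step the article scrutinizes.
print("probing accuracy:", accuracy_score(y_te, probe.predict(X_te)))
```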
|
We propose experimental measurements of the logarithmic negativity, which quantifies quantum correlations, using Gouy-phase measurements in an asymmetric double-slit interference experiment for twin photons. This is possible because both quantities depend analogously on the spatial confinement imposed by the slits, which enables one to manipulate the degree of entanglement via the Gouy phase. In order to obtain those measurements, we need to work in a regime where the position correlations between particles are strong; therefore, we investigate such correlations for biphotons. Since we would like to handle
entanglement quantifiers through the Gouy phase, we analyze the Gouy phase
difference for two entangled photons in an asymmetric double-slit interference
experiment.
|
Majorana zero modes are expected to arise in semiconductor-superconductor
hybrid systems, with potential topological quantum computing applications. One
limitation of this approach is the need for a relatively high external magnetic field that should also change direction at the nanoscale. This proposal considers
devices that incorporate micromagnets to address this challenge. We perform
numerical simulations of stray magnetic fields from different micromagnet
configurations, which are then used to solve for Majorana wavefunctions.
Several devices are proposed, starting with the basic four-magnet design to
align magnetic field with the nanowire and scaling up to nanowire T-junctions.
The feasibility of the approach is assessed by performing magnetic imaging of
prototype patterns.
|
Network Intrusion Detection Systems (NIDSs) are important tools for the
protection of computer networks against increasingly frequent and sophisticated
cyber attacks. Recently, a lot of research effort has been dedicated to the
development of Machine Learning (ML) based NIDSs. As in any ML-based
application, the availability of high-quality datasets is critical for the
training and evaluation of ML-based NIDS. One of the key problems with the
currently available datasets is the lack of a standard feature set. The use of
a unique and proprietary set of features for each of the publicly available
datasets makes it virtually impossible to compare the performance of ML-based
traffic classifiers on different datasets, and hence to evaluate the ability of
these systems to generalise across different network scenarios. To address that
limitation, this paper proposes and evaluates standard NIDS feature sets based
on the NetFlow network meta-data collection protocol and system. We evaluate
and compare two NetFlow-based feature set variants, a version with 12 features,
and another one with 43 features.
|
The study of fracture propagation is an essential topic for several
disciplines in engineering and material sciences. Different mathematical
approaches and numerical methods have been applied to simulate brittle
fractures. Materials naturally present random properties that contribute to their physical properties, durability, and resistance; for this reason, stochastic modeling is critical to obtain realistic fracture simulations. In this
article, we propose applying a Gaussian random field with a Mat\'ern covariance
function to simulate a non-homogeneous energy release rate ($G_c$) of a
material. We propose a surrogate mathematical model based on a
weighted-variational model to reduce numerical complexity and execution times
for simulations in the hybrid phase-field model. The FEniCS open-source
software is used to obtain numerical solutions to the variational and hybrid
phase-field models with Gaussian random fields on the parameter $G_c$. Results
have shown that the weighted-variational model as a surrogate model is a
competitive tool to emulate brittle fractures for real structures, reducing
execution times by 90\%.
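As a hedged, self-contained sketch of the stochastic ingredient (not the paper's FEniCS implementation), a heterogeneous $G_c$ can be drawn from a Gaussian random field with a Mat\'ern covariance on a coarse grid as follows:

```python
# Illustrative sampling of a spatially varying G_c as a Gaussian random field
# with a Matern(nu = 3/2) covariance, via a dense Cholesky factor. All numbers
# (grid size, length scale, variance, mean) are placeholder choices.
import numpy as np
from scipy.spatial.distance import cdist

n, ell, sigma, gc_mean = 32, 0.2, 0.1, 1.0    # grid size, length scale, std, mean G_c
xs = np.linspace(0.0, 1.0, n)
pts = np.array([(x, y) for x in xs for y in xs])

d = cdist(pts, pts)                            # pairwise distances
# Matern covariance with smoothness nu = 3/2
cov = sigma**2 * (1.0 + np.sqrt(3.0) * d / ell) * np.exp(-np.sqrt(3.0) * d / ell)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(pts)))   # jitter for stability

gc_field = gc_mean + L @ np.random.default_rng(0).standard_normal(len(pts))
gc_field = gc_field.reshape(n, n)              # spatially varying G_c(x, y)
print(gc_field.min(), gc_field.max())
```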
|
Chase's lemma provides a powerful tool for translating properties of
(co)products in abelian categories into chain conditions. This note discusses
the context in which the lemma is used, making explicit what is often neglected
in the literature because of its technical nature.
|
High-Efficiency Video Coding (HEVC) surpasses its predecessors in encoding
efficiency by introducing new coding tools at the cost of an increased encoding
time-complexity. The Coding Tree Unit (CTU) is the main building block used in
HEVC. In the HEVC standard, frames are divided into CTUs with the predetermined
size of up to 64x64 pixels. Each CTU is then divided recursively into a number
of equally sized square areas, known as Coding Units (CUs). Although this
diversity of frame partitioning increases encoding efficiency, it also causes
an increase in the time complexity due to the increased number of ways to find
the optimal partitioning. To address this complexity, numerous algorithms have
been proposed to eliminate unnecessary searches during CTU partitioning by
exploiting the correlation in the video. In this paper, existing CTU depth
decision algorithms for HEVC are surveyed. These algorithms are categorized
into two groups, namely statistics and machine learning approaches. Statistics
approaches are further subdivided into neighboring and inherent approaches.
Neighboring approaches exploit the similarity between adjacent CTUs to limit
the depth range of the current CTU, while inherent approaches use only the
available information within the current CTU. Machine learning approaches try
to extract and exploit similarities implicitly. Traditional methods like
support vector machines or random forests use manually selected features, while
recently proposed deep learning methods extract features during training.
Finally, this paper discusses extending these methods to more recent video
coding formats such as Versatile Video Coding (VVC) and AOMedia Video 1 (AV1).
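For readers unfamiliar with the partitioning being accelerated, the following toy sketch shows the recursive CTU-to-CU quadtree split together with the early-termination hook where the surveyed depth-decision algorithms intervene (the split rule here is a placeholder, not any surveyed method):

```python
# Toy recursive CTU-to-CU quadtree partitioning. In the surveyed algorithms,
# `should_split` would come from neighboring-CTU statistics or a trained
# classifier instead of exhaustive rate-distortion search.
def partition_ctu(x, y, size, depth, should_split, max_depth=3):
    """Return the list of coding units (x, y, size) chosen for this block."""
    if depth >= max_depth or not should_split(x, y, size, depth):
        return [(x, y, size)]
    half = size // 2
    cus = []
    for dx in (0, half):
        for dy in (0, half):
            cus += partition_ctu(x + dx, y + dy, half, depth + 1,
                                 should_split, max_depth)
    return cus

# Example: split only the top-left region twice (purely illustrative rule).
rule = lambda x, y, size, depth: (x, y) == (0, 0) and depth < 2
print(partition_ctu(0, 0, 64, 0, rule))
```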
|
Simulation techniques based on accurate and efficient representations of
potential energy surfaces are urgently needed for the understanding of complex
aqueous systems such as solid-liquid interfaces. Here, we present a machine
learning framework that enables the efficient development and validation of
models for complex aqueous systems. Instead of trying to deliver a
globally-optimal machine learning potential, we propose to develop models
applicable to specific thermodynamic state points in a simple and user-friendly
process. After an initial ab initio simulation, a machine learning potential is
constructed with minimum human effort through a data-driven active learning
protocol. Such models can afterwards be applied in exhaustive simulations to
provide reliable answers for the scientific question at hand. We showcase this
methodology on a diverse set of aqueous systems with increasing degrees of
complexity. The systems chosen here comprise bulk water with different ions in
solution, water on a titanium dioxide surface, as well as water confined in
nanotubes and between molybdenum disulfide sheets. To highlight the accuracy of our approach with respect to the underlying ab initio reference, the resulting models are evaluated in detail with an automated validation protocol that includes structural and dynamical properties as well as the precision of the models' force predictions. Finally, we demonstrate the capabilities of our
approach for the description of water on the rutile titanium dioxide (110)
surface to analyze the structure and mobility of water on this surface. Such
machine learning models provide a straightforward yet accurate extension of simulation time and length scales for complex systems.
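The data-driven active learning protocol mentioned above follows a generic generate-select-label-retrain loop; the sketch below is a schematic rendering of that loop with stand-in stub functions (hypothetical names, not the authors' code):

```python
# Schematic active-learning loop for building a system-specific ML potential.
# Every function below is a stub standing in for the real step it names.
import random

def run_md(potentials, n_steps=100):
    # stub: would propagate the system with the current committee of models
    return [f"config_{i}" for i in range(n_steps)]

def committee_disagreement(config, potentials):
    # stub: would return e.g. the spread of committee force predictions
    return random.random()

def label_ab_initio(configs):
    # stub: would run the ab initio reference calculations
    return [(c, "energy_and_forces") for c in configs]

def train_committee(dataset, n_models=4):
    # stub: would fit n_models ML potentials on the labelled dataset
    return [f"model_{i}" for i in range(n_models)]

dataset = label_ab_initio(["initial_aimd_frame"])
committee = train_committee(dataset)
for generation in range(3):
    trajectory = run_md(committee)
    uncertain = [c for c in trajectory
                 if committee_disagreement(c, committee) > 0.9]
    dataset += label_ab_initio(uncertain)    # label only the flagged configs
    committee = train_committee(dataset)     # retrain with minimal human effort
```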
|
A line bundle on a smooth curve $C$ with two marked points determines a rank
function $r(a,b) = h^0(C, L(-ap-bq))$. This paper studies Brill-Noether
degeneracy loci; such a locus is defined to be the closure in
$\operatorname{Pic}^d(C)$ of the locus of line bundles with a specified rank
function. These loci generalize the classical Brill-Noether loci $W^r_d(C)$ as
well as Brill-Noether loci with imposed ramification. For general $(C,p,q)$ we
determine the dimension, singular locus, and intersection class of
Brill-Noether degeneracy loci, generalizing classical results about $W^r_d(C)$.
The intersection class has a combinatorial interpretation in terms of the
number of reduced words for a permutation associated to the rank function, or
alternatively the number of saturated chains in the Bruhat order. The essential
tool is a versality theorem for a certain pair of flags on
$\operatorname{Pic}^d(C)$, conjectured by Melody Chan and the author. We prove
this versality theorem by showing the injectivity of a generalized Petri map,
along the lines of Eisenbud and Harris's proof of the Gieseker-Petri theorem.
|
This paper develops an improved distributed finite-time control algorithm for
multiagent-based ac microgrids with battery energy storage systems (BESSs)
utilizing a low-bandwidth communication network. The proposed control algorithm can simultaneously coordinate BESSs to eliminate any deviation from the nominal frequency as well as solve the state-of-charge (SoC) balancing problem. The stability of the proposed control algorithm is established using the Lyapunov method and homogeneous approximation theory, which guarantees accelerated convergence within a settling time that does not depend on initial conditions. Based on this, to significantly reduce the communication burden,
an event-triggered communication mechanism is designed which can also avoid
Zeno behavior. Then sufficient conditions on the event-triggered boundary are
derived to guarantee the stability and reliability of the whole system.
Practical local constraints are imposed to implement the control protocol, and
the theoretical results are applied to a test system consisting of five DGs and
five BESSs, which verifies the effectiveness of the proposed strategy.
|
It is essential for an automated vehicle in the field to perform
discretionary lane changes with appropriate roadmanship - driving safely and
efficiently without annoying or endangering other road users - under a wide
range of traffic cultures and driving conditions. While deep reinforcement
learning methods have excelled in recent years and been applied to automated
vehicle driving policy, there are concerns about their capability to quickly
adapt to unseen traffic with new environment dynamics. We formulate this
challenge as a multi-Markov Decision Process (MDP) adaptation problem and develop Meta Reinforcement Learning (MRL) driving policies to showcase their quick learning capability. Two types of distribution variation in environments were designed and simulated to validate the fast adaptation capability of the resulting MRL driving policies, which significantly outperform a baseline RL policy.
|
In recent years, conversational agents have provided a natural and convenient
access to useful information in people's daily life, along with a broad and new
research topic, conversational question answering (QA). Among the popular
conversational QA tasks, conversational open-domain QA, which requires retrieving relevant passages from the Web to extract exact answers, is more practical but less studied. The main challenge is how to effectively capture and fully
explore the historical context in conversation to facilitate effective
large-scale retrieval. The current work mainly utilizes history questions to
refine the current question or to enhance its representation, yet the relations between history answers and the current answer in a conversation, which are also critical to the task, are totally neglected. To address this problem, we
propose a novel graph-guided retrieval method to model the relations among
answers across conversation turns. In particular, it utilizes a passage graph derived from hyperlink-connected passages that contain history answers and potential current answers to retrieve more relevant passages for subsequent
answer extraction. Moreover, in order to collect more complementary information
in the historical context, we also propose to incorporate the multi-round
relevance feedback technique to explore the impact of the retrieval context on
current question understanding. Experimental results on the public dataset
verify the effectiveness of our proposed method. Notably, the F1 score is
improved by 5% and 11% with predicted history answers and true history answers,
respectively.
|
The variations in feedstock characteristics such as moisture and particle
size distribution lead to an inconsistent flow of feedstock from the biomass
pre-processing system to the reactor in-feed system. These inconsistencies
result in low on-stream times at the reactor in-feed equipment. This research
develops an optimal process control method for a biomass pre-processing system
comprised of milling and densification operations to provide a consistent flow of feedstock to a reactor's throat. This method uses a mixed-integer
optimization model to identify optimal bale sequencing, equipment in-feed rate,
and buffer location and size in the biomass pre-processing system. This method,
referred to as the hybrid process control (HPC), aims to maximize throughput
over time. We compare HPC with a baseline feed-forward process control. Our
case study based on switchgrass finds that HPC reduces the variation of a
reactor's feeding rate by 100\% without increasing the operating cost of the
biomass pre-processing system for biomass with moisture ranging from 10 to
25\%. A biorefinery can adapt HPC to achieve its design capacity.
|
Purpose: To characterize regional pulmonary function on CT images using a
radiomic filtering approach. Methods: We develop a radiomic filtering technique
to capture image-encoded regional pulmonary ventilation information on CT.
The lung volumes were first segmented on 46 CT images. Then, a 3D sliding
window kernel is implemented to map the impulse response of radiomic features.
Specifically, for each voxel in the lungs, 53 radiomic features were calculated
in such a rotationally-invariant 3D kernel to capture spatially-encoded
information. Accordingly, each voxel coordinate is represented as a
53-dimensional feature vector, and each image is represented as an image tensor
that we refer to as a feature map. To test the technique as a potential
pulmonary biomarker, the Spearman correlation analysis is performed between the
feature map and matched nuclear imaging measurements (Galligas PET or
DTPA-SPECT) of lung ventilation. Results: Two features were found to be highly
correlated with benchmark pulmonary ventilation function results based on the
median of the Spearman correlation coefficient distribution. In particular, the GLRLM-based Run Length Non-uniformity and GLCOM-based Sum Average features achieved robustly high correlation across the 46 patients and both the Galligas PET and DTPA-SPECT nuclear imaging modalities, with ranges (medians) of [0.05, 0.67]
(0.46) and [0.21, 0.65] (0.45), respectively. Such results are comparable to
other image-based pulmonary function quantification techniques. Conclusions:
Our results provide evidence that local regions of sparsely encoded homogenous
lung parenchyma on CT are associated with diminished radiotracer uptake and
measured lung ventilation defects on PET/SPECT imaging. This finding
demonstrates the potential of radiomics to serve as a non-invasive surrogate of
regional lung function and provides hypothesis-generating data for future
studies.
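As a simplified, hedged sketch of the validation step (a single toy kernel feature stands in for the 53 radiomic features, and all arrays are placeholders), the voxel-wise correlation against a matched ventilation image can be computed as follows:

```python
# Correlate a voxel-wise, kernel-based feature map with a ventilation image
# inside the lung mask. The "feature" here (local standard deviation in a 5^3
# window) is a toy stand-in for the radiomic features used in the study.
import numpy as np
from scipy.ndimage import generic_filter
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
ct = rng.normal(size=(24, 24, 24))            # placeholder CT volume
ventilation = rng.normal(size=(24, 24, 24))   # placeholder PET/SPECT ventilation
lung_mask = np.ones_like(ct, dtype=bool)      # placeholder lung segmentation

feature_map = generic_filter(ct, np.std, size=5)   # 3D sliding-window feature

rho, pval = spearmanr(feature_map[lung_mask], ventilation[lung_mask])
print(f"Spearman rho = {rho:.3f} (p = {pval:.3g})")
```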
|
Diamond heat-spreaders for gallium nitride (GaN) devices currently depend
upon a robust wafer bonding process. Bonding-free membrane methods demonstrate potential; however, chemical vapour deposition (CVD) of diamond directly onto a
III-nitride (III-N) heterostructure membrane induces significant thermal
stresses. In this work, these thermal stresses are investigated using an
analytical approach, a numerical model and experimental validation. The thermal
stresses are caused by the mismatch in the coefficient of thermal expansion
(CTE) between the GaN/III-N stack, silicon (Si) and the diamond from room
temperature to CVD growth temperatures. Simplified analytical wafer bow models
underestimate the membrane bow for small sizes while numerical models replicate
the stresses and bows with increased accuracy using temperature gradients. The
largest tensile stress measured using Raman spectroscopy at room temperature
was approximately 1.0 $\pm0.2$ GPa while surface profilometry shows membrane
bows as large as \SI{58}{\micro\metre}. This large bow is caused by additional stresses from the Si frame in the initial heating phase, which are held in place by the diamond, and it highlights challenges for any device fabrication using contact lithography. However, the bow can be reduced if the membrane is
pre-stressed to become flat at CVD temperatures. In this way, a sufficient
platform to grow diamond on GaN/III-N structures without wafer bonding can be
realised.
|
Scientific workflows are a cornerstone of modern scientific computing, and
they have underpinned some of the most significant discoveries of the last
decade. Many of these workflows have high computational, storage, and/or
communication demands, and thus must execute on a wide range of large-scale
platforms, from large clouds to upcoming exascale HPC platforms. Workflows will
play a crucial role in the data-oriented and post-Moore's computing landscape
as they democratize the application of cutting-edge research techniques,
computationally intensive methods, and use of new computing platforms. As
workflows continue to be adopted by scientific projects and user communities,
they are becoming more complex. Workflows are increasingly composed of tasks
that perform computations such as short machine learning inference, multi-node simulations, and long-running machine learning model training, amongst others, and thus increasingly rely on heterogeneous architectures that include CPUs but
also GPUs and accelerators. The workflow management system (WMS) technology
landscape is currently segmented and presents significant barriers to entry due
to the hundreds of seemingly comparable, yet incompatible, systems that exist.
Another fundamental problem is that there are conflicting theoretical bases and
abstractions for a WMS. Systems that use the same underlying abstractions can likely be translated between one another, which is not the case for systems that use different abstractions. More information:
https://workflowsri.org/summits/technical
|
In our analysis, we show that what Cottenden et al. accomplish is the
derivation of the ordinary capstan equation, and a solution to a dynamic
membrane with both a zero Poisson's ratio and a zero mass density on a rigid right-circular cone. The authors state that the capstan equation holds true
for an elastic obstacle, and thus, it can be used to calculate the coefficient
of friction between human skin and fabrics. However, using data that we
gathered from human trials, we show that this claim cannot be substantiated as
it is unwise to use the capstan equation (i.e. belt-friction models in general)
to calculate the friction between in-vivo skin and fabrics. This is due to the
fact that such models assume a rigid foundation, while human soft-tissue is
deformable, and thus, a portion of the applied force to the fabric is expended
on deforming the soft-tissue, which in turn leads to the illusion of a higher
coefficient of friction when using belt-friction models.
|
This paper considers joint learning of multiple sparse Granger graphical
models to discover underlying common and differential Granger causality (GC)
structures across multiple time series. This can be applied to drawing
group-level brain connectivity inferences from a homogeneous group of subjects
or discovering network differences among groups of signals collected under
heterogeneous conditions. By recognizing that the GC of a single multivariate
time series can be characterized by common zeros of vector autoregressive (VAR)
lag coefficients, a group sparse prior is included in joint regularized
least-squares estimations of multiple VAR models. Group-norm regularizations
based on group- and fused-lasso penalties encourage a decomposition of multiple
networks into a common GC structure, with other remaining parts defined in
individual-specific networks. Prior information about the sparseness and sparsity patterns of desired GC networks is incorporated as relative weights, while a non-convex group norm in the penalty is proposed to enhance the accuracy of
network estimation in low-sample settings. Extensive numerical results on
simulations illustrated our method's improvements over existing sparse
estimation approaches on GC network sparsity recovery. Our methods were also
applied to available resting-state fMRI time series from the ADHD-200 data sets
to learn the differences of causality mechanisms, called effective brain
connectivity, between adolescents with ADHD and typically developing children.
Our analysis revealed that parts of the causality differences between the two
groups often resided in the orbitofrontal region and areas associated with the
limbic system, which agreed with clinical findings and data-driven results in
previous studies.
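In hedged, schematic form (our notation; the exact penalties in the paper may differ), the joint estimation described above is a regularized least-squares problem over $K$ VAR models:
\begin{equation}
\min_{\{A^{(k)}_{\ell}\}}\; \sum_{k=1}^{K}\sum_{t}\Bigl\| x^{(k)}_t - \sum_{\ell=1}^{p} A^{(k)}_{\ell} x^{(k)}_{t-\ell} \Bigr\|_2^2 \;+\; \lambda_1 \sum_{i\neq j} w_{ij}\Bigl(\sum_{k,\ell}\bigl(A^{(k)}_{\ell}\bigr)_{ij}^2\Bigr)^{1/2} \;+\; \lambda_2\,\Omega_{\mathrm{fused}}\bigl(\{A^{(k)}_{\ell}\}\bigr),
\end{equation}
where the group norm ties entry $(i,j)$ across all lags and models (a zero group encodes a GC edge absent from the common structure), the weights $w_{ij}$ carry the prior information mentioned above, and the fused term penalizes differences between models to isolate individual-specific parts.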
|
Gaining insight into course choices holds significant value for universities,
especially those that aim for flexibility in their programs and wish to adapt
quickly to changing demands of the job market. However, little emphasis has
been put on utilizing the large amount of educational data to understand these
course choices. Here, we use network analysis of the course selection of all
students who enrolled in an undergraduate program in engineering, psychology,
business or computer science at a Nordic university over a five year period.
With these methods, we have explored student choices to identify their distinct
fields of interest. This was done by applying community detection to a network
of courses, where two courses were connected if a student had taken both. We
compared our community detection results to actual major specializations within
the computer science department and found strong similarities. To complement
this analysis, we also used directed networks to identify the "typical"
student, by looking at students' general course choices by semester. We found
that course choices diversify as programs progress, meaning that attempting to
understand course choices by identifying a "typical" student gives less insight
than understanding what characterizes course choice diversity. Analysis with
our proposed methodology can be used to offer more tailored education, which in
turn allows students to follow their interests and adapt to the ever-changing
career market.
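A minimal, hedged sketch of the network construction and community detection described above (toy enrollment records, not the university's data) might look as follows:

```python
# Courses are nodes; an edge links two courses whenever a student took both,
# weighted by the number of co-enrollments. Communities then approximate
# distinct fields of interest.
import networkx as nx
from itertools import combinations
from networkx.algorithms.community import greedy_modularity_communities

enrollments = {                              # student -> courses taken (toy data)
    "s1": ["calc1", "prog1", "linalg"],
    "s2": ["calc1", "linalg", "stats"],
    "s3": ["prog1", "databases", "algorithms"],
    "s4": ["databases", "algorithms"],
}

G = nx.Graph()
for courses in enrollments.values():
    for a, b in combinations(sorted(set(courses)), 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

communities = greedy_modularity_communities(G, weight="weight")
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
```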
|
We study the entanglement wedge cross-section (EWCS) in holographic massive
gravity theory, in which a first and second-order phase transition can occur.
We find that the mixed-state entanglement measures, the EWCS and mutual information (MI), can characterize the phase transitions. The EWCS and MI show
exactly the opposite behavior in the critical region, which suggests that the
EWCS captures distinct degrees of freedom from that of the MI. More
importantly, the EWCS, MI, and holographic entanglement entropy (HEE) all show the same scaling behavior in the
critical region. We give an analytical understanding of this phenomenon. By
comparing the quantum information behavior in the thermodynamic phase
transition of holographic superconductors, we analyze the relationship and
difference between them, and provide two mechanisms of quantum information
scaling behavior in the thermodynamic phase transition.
|
The valleys in hexagonal two-dimensional systems with broken inversion
symmetry carry an intrinsic orbital magnetic moment. Despite this, such systems
possess zero net magnetization unless additional symmetries are broken, since
the contributions from both valleys cancel. A nonzero net magnetization can be
induced through applying both uniaxial strain to break the rotational symmetry
of the lattice and an in-plane electric field to break time-reversal symmetry
owing to the resulting current. This creates a magnetoelectric effect whose
strength is characterized by a magnetoelectric susceptibility, which describes
the induced magnetization per unit applied in-plane electric field. Here, we
predict the strength of this magnetoelectric susceptibility for Bernal-stacked
bilayer graphene as a function of the magnitude and direction of strain, the
chemical potential, and the interlayer electric field. We estimate that an
orbital magnetization of ~5400 $\mu_{\text{B}}/\mu\text{m}^2$ can be achieved
for 1% uniaxial strain and a 10 $\mu\text{A}$ bias current, which is almost
three orders of magnitude larger than previously probed experimentally in
strained monolayer MoS$_2$. We also identify regimes in which the
magnetoelectric susceptibility not only switches sign upon reversal of the
interlayer electric field but also in response to small changes in the carrier
density. Taking advantage of this reversibility, we further show that it is
experimentally feasible to probe the effect using scanning magnetometry.
|
We develop a description of defect loops in three-dimensional active nematics
based on a multipole expansion of the far-field director and show how this
leads to a self-dynamics dependent on the loop's geometric type. The dipole
term leads to active stresses that generate a global self-propulsion for splay
and bend loops. The quadrupole moment is non-zero only for non-planar loops and
generates a net `active torque', such that defect loops are both self-motile
and self-orienting. Our analysis identifies right- and left-handed twist loops
as the only force and torque free geometries, suggesting a mechanism for
generating an excess of twist loops. Finally, we determine the Stokesian flows
created by defect loops and describe qualitatively their hydrodynamics.
|
Time-of-flight-based non-line-of-sight (NLOS) imaging approaches require
precise calibration of illumination and detector positions on the visible scene
to produce reasonable results. If this calibration error is sufficiently high,
reconstruction can fail entirely without any indication to the user. In this
work, we highlight the necessity of building autocalibration into NLOS
reconstruction in order to handle mis-calibration. We propose a forward model
of NLOS measurements that is differentiable with respect to both the hidden scene albedo and the virtual illumination and detector positions. With only a mean
squared error loss and no regularization, our model enables joint
reconstruction and recovery of calibration parameters by minimizing the
measurement residual using gradient descent. We demonstrate our method is able
to produce robust reconstructions using simulated and real data where the
calibration error applied causes other state-of-the-art algorithms to fail.
|
Recently, Liu et al. reported that Ti2CTx MXene has an ultra-high hydrogen storage capacity (8.8 wt.%) at room temperature. To clearly understand this hydrogen storage (H-storage), the composition of the studied samples should be clearly characterized and the H-storage structure needs to be explored. To achieve the 8.8 wt.% capacity, three layers of H2 molecules need to be stored in the interlayer space of the MXene, corresponding to the structure Ti2CF2H14. The H2 layers, with a graphene-like 2D structure, are in a solid/liquid state at room temperature, which is significant for the exploration of new materials with surprising properties.
|
Representation learning on textual graphs aims to generate low-dimensional embeddings for the nodes based on their individual textual features and neighbourhood information. Recent breakthroughs in pretrained language models
and graph neural networks push forward the development of corresponding
techniques. The existing works mainly rely on the cascaded model architecture:
the textual features of nodes are independently encoded by language models at
first; the textual embeddings are aggregated by graph neural networks
afterwards. However, the above architecture is limited due to the independent
modeling of textual features. In this work, we propose GraphFormers, where
layerwise GNN components are nested alongside the transformer blocks of
language models. With the proposed architecture, the text encoding and the
graph aggregation are fused into an iterative workflow, so that each node's semantics are accurately comprehended from a global perspective. In addition, a
progressive learning strategy is introduced, where the model is successively
trained on manipulated data and original data to reinforce its capability of
integrating information on graph. Extensive evaluations are conducted on three
large-scale benchmark datasets, where GraphFormers outperform the SOTA
baselines with comparable running efficiency.
|
This paper is a continuation of our Dwork crystals series. Here we exploit
the Cartier operation to prove supercongruences for expansion coefficients of
rational functions. We also define excellent Frobenius lifts and show that for
Dwork's families of hypersurfaces such lifts can be approximated p-adically by
rational functions with powers of the first and second Hasse-Witt determinants
in denominators.
|
A significant fraction of white dwarfs (WDs) exhibit signs of ongoing
accretion of refractory elements at rates $\sim10^3$--$10^7$ kg s$^{-1}$, among
which, 37 WDs were detected to harbor dusty debris disks. Such a concurrence
requires not only fertile reservoirs of planetary material, but also a high
duty cycle of metal delivery. It has been commonly suggested that this material
could be supplied by Solar System analogs of Main Belt asteroids or Kuiper Belt
objects. Here we consider the primary progenitors of WD pollutants as a
population of residual high-eccentricity planetesimals, de-volatilized during
the stellar giant phases. Equivalent to the Solar System's long-period comets,
they are scattered to the proximity of WDs by perturbations from remaining
planets, Galactic tides, passing molecular clouds, and nearby stars. These
objects undergo downsizing when they venture within the tidal disruption limit.
We show quantitatively how the breakup condition and fragment sizes are
determined by material strength and gravity. Thereafter, the fragments'
semi-major axes need to decay by at least $\sim$6 orders of magnitude before
their constituents are eventually accreted onto the surface of WDs. We
investigate the orbital evolution of these fragments around WDs and show that
WDs' magnetic fields induce an Alfv\'en-wave drag during their periastron
passages and rapidly circularize their orbits. This process could be
responsible for the observed accretion rates of heavy elements and the
generation of circum-WD debris disks. A speculative implication is that giant
planets may be common around WDs' progenitors and they may still be bound to
some WDs today.
|
Polarons with different types of electron-phonon coupling have fundamentally
different properties. When the dominant interaction is between the electron
density and lattice displacement, the momentum of the ground state does not
change and the polaron gets exponentially heavy at strong coupling. In
contrast, one-dimensional Peierls/Su-Schrieffer-Heeger (PSSH) polarons with
interaction originating from displacement-modulated hopping feature a shift of
the ground-state momentum to finite values and moderate values of effective
mass as coupling is increased [Phys. Rev. Lett. 105, 266605 (2010)]. Using the diagrammatic Monte Carlo method, we investigate whether the unusual properties
of PSSH polarons depend on the type of the displacement-modulated hopping and
to what degree they survive in higher dimension. We study two different PSSH
models: with bosonic degrees of freedom residing on sites (model A) and bonds
(model B) of the two-dimensional square lattice. For model A, we find that in
both adiabatic and intermediate regimes, the momentum of the ground state
experiences a continuous transition from zero to a finite value as a function
of coupling strength. The transition is driven by quadratic instability of the
dispersion function, implying that effective mass diverges at the critical
point, and then decreases in an anisotropic fashion with increasing coupling.
Unexpectedly, for model B, the momentum of the ground state always stays at
zero and the effective mass increases monotonically with coupling. The increase
is far from exponential and tends to level-off at strong interaction, resulting
in relatively light polarons. Having light polarons in the strong coupling
regime is crucial for the bi-polaron mechanism of high-temperature
superconductivity [Phys. Rev. Lett. 121, 247001 (2018)].
|
In the 35 years since the discovery of cuprate superconductors, we have not
yet reached a unified understanding of their properties, including their
material dependence of the superconducting transition temperature
$T_{\text{c}}$. The preceding theoretical and experimental studies have
provided an overall picture of the phase diagram, and some important parameters
for the $T_{\text{c}}$, such as the contribution of the Cu $d_{z^2}$ orbital to
the Fermi surface and the site-energy difference $\Delta_{dp}$ between the Cu
$d_{x^2-y^2}$ and O $p$ orbitals. However, they are somewhat empirical and
limited in scope, always including exceptions, and do not provide a
comprehensive view of the series of cuprates. Here we propose a four-band
$d$-$p$ model as a minimal model to study material dependence in cuprates.
Using the variational Monte Carlo method, we theoretically investigate the
phase diagram for the La$_2$CuO$_4$ and HgBa$_2$CuO$_4$ systems and the
correlation between the key parameters and the superconductivity. Our results
comprehensively account for the empirical correlation between $T_{\text{c}}$
and model parameters, and thus can provide a guideline for new material design.
We also show that the effect of the nearest-neighbor $d$-$d$ Coulomb
interaction $V_{dd}$ is actually quite important for the stability of
superconductivity and phase competition.
|
The multigrid algorithm is an efficient numerical method for solving a
variety of elliptic partial differential equations (PDEs). The method damps
errors at progressively finer grid scales, resulting in faster convergence
compared to standard iterative methods such as Gauss-Seidel. The prolongation (coarse-to-fine interpolation) operator within the multigrid algorithm lends itself to a data-driven treatment with ML super-resolution, commonly used to
increase the resolution of images. We (i) propose the novel integration of a
super resolution generative adversarial network (GAN) model with the multigrid
algorithm as the prolongation operator and (ii) show that the GAN interpolation improves the convergence properties of the multigrid method in comparison to cubic spline interpolation on a class of multiscale PDEs typically solved in physics
and engineering simulations.
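To make the prolongation-as-interpolation idea concrete, the following minimal numpy sketch implements a classical two-grid cycle for a 1D Poisson problem with the prolongation passed in as a callable; `prolong_linear` is the standard linear interpolation that a trained super-resolution model would replace. This is our toy illustration under simplifying assumptions, not the authors' solver, PDE class, or GAN.

```python
# Two-grid cycle for 1D Poisson with a pluggable prolongation operator.
import numpy as np

def poisson_matrix(n):
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def weighted_jacobi(A, u, f, sweeps=3, omega=2.0 / 3.0):
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / d
    return u

def restrict(r_fine):                       # full weighting, n_f = 2*n_c + 1 -> n_c
    return 0.25 * (r_fine[0:-2:2] + 2.0 * r_fine[1:-1:2] + r_fine[2::2])

def prolong_linear(e_coarse, n_fine):       # classical linear interpolation
    e = np.zeros(n_fine)
    e[1:-1:2] = e_coarse                    # coarse points copied to fine grid
    e[0:-2:2] += 0.5 * e_coarse             # in-between points: average of
    e[2::2] += 0.5 * e_coarse               # neighbouring coarse values
    return e

def two_grid(A_f, A_c, f, u, prolong, cycles=10):
    for _ in range(cycles):
        u = weighted_jacobi(A_f, u, f)              # pre-smoothing
        r_c = restrict(f - A_f @ u)                 # restricted residual
        e_c = np.linalg.solve(A_c, r_c)             # exact coarse solve
        u = u + prolong(e_c, len(f))                # coarse-grid correction
        u = weighted_jacobi(A_f, u, f)              # post-smoothing
    return u

n_c = 15
n_f = 2 * n_c + 1
A_f, A_c = poisson_matrix(n_f), poisson_matrix(n_c)
x = np.linspace(0.0, 1.0, n_f + 2)[1:-1]
f = np.pi**2 * np.sin(np.pi * x)                    # exact solution: sin(pi x)
u = two_grid(A_f, A_c, f, np.zeros(n_f), prolong_linear)
print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```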
|
A striking discovery in the field of network science is that the majority of
real networked systems have some universal structural properties. In general,
they are simultaneously sparse, scale-free, small-world, and loopy. In this
paper, we investigate the second-order consensus of dynamic networks with such
universal structures subject to white noise at vertices. We focus on the
network coherence $H_{\rm SO}$ characterized in terms of the
$\mathcal{H}_2$-norm of the vertex systems, which measures the mean deviation
of vertex states from their average value. We first study numerically the
coherence of some representative real-world networks. We find that their
coherence $H_{\rm SO}$ scales sublinearly with the vertex number $N$. We then
study analytically $H_{\rm SO}$ for a class of iteratively growing networks --
pseudofractal scale-free webs (PSFWs), and obtain an exact solution to $H_{\rm
SO}$, which also increases sublinearly in $N$, with an exponent much smaller
than 1. To explain the reasons for this sublinear behavior, we finally study
$H_{\rm SO}$ for Sierpi\'nski gaskets, for which $H_{\rm SO}$ grows superlinearly in $N$, with a power exponent much larger than 1. Sierpi\'nski
gaskets have the same number of vertices and edges as the PSFWs, but do not
display the scale-free and small-world properties. We thus conclude that the
scale-free, small-world, and loopy topologies are jointly responsible for
the observed sublinear scaling of $H_{\rm SO}$.
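For orientation, in the standard noisy-consensus setting the first- and second-order network coherence are commonly expressed through the nonzero Laplacian eigenvalues $0 < \lambda_2 \le \dots \le \lambda_N$ (a hedged restatement of the usual definitions, not a result of this paper):
\begin{equation}
H_{\rm FO} = \frac{1}{2N}\sum_{i=2}^{N}\frac{1}{\lambda_i}, \qquad H_{\rm SO} = \frac{1}{2N}\sum_{i=2}^{N}\frac{1}{\lambda_i^{2}},
\end{equation}
so the scaling of $H_{\rm SO}$ with $N$ is governed by the small end of the Laplacian spectrum, which is exactly where the scale-free, small-world, and loopy features of the networks enter.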
|
Model quantization is a promising approach to compress deep neural networks
and accelerate inference, making it possible to be deployed on mobile and edge
devices. To retain the high performance of full-precision models, most existing
quantization methods focus on fine-tuning the quantized model by assuming training
datasets are accessible. However, this assumption sometimes is not satisfied in
real situations due to data privacy and security issues, thereby making these
quantization methods not applicable. To achieve zero-shot model quantization
without accessing training data, a tiny number of quantization methods adopt
either post-training quantization or batch normalization statistics-guided data
generation for fine-tuning. However, both of them inevitably suffer from low
performance, since the former is a little too empirical and lacks training
support for ultra-low-precision quantization, while the latter cannot fully restore the peculiarities of the original data and is often inefficient for diverse data generation. To address the above issues, we propose a zero-shot
adversarial quantization (ZAQ) framework, facilitating effective discrepancy
estimation and knowledge transfer from a full-precision model to its quantized
model. This is achieved by a novel two-level discrepancy modeling to drive a
generator to synthesize informative and diverse data examples to optimize the
quantized model in an adversarial learning fashion. We conduct extensive
experiments on three fundamental vision tasks, demonstrating the superiority of
ZAQ over the strong zero-shot baselines and validating the effectiveness of its
main components. Code is available at <https://git.io/Jqc0y>.
|
Several recent end-to-end text-to-speech (TTS) models enabling single-stage
training and parallel sampling have been proposed, but their sample quality
does not match that of two-stage TTS systems. In this work, we present a
parallel end-to-end TTS method that generates more natural sounding audio than
current two-stage models. Our method adopts variational inference augmented
with normalizing flows and an adversarial training process, which improves the
expressive power of generative modeling. We also propose a stochastic duration
predictor to synthesize speech with diverse rhythms from input text. With the
uncertainty modeling over latent variables and the stochastic duration
predictor, our method expresses the natural one-to-many relationship in which a
text input can be spoken in multiple ways with different pitches and rhythms. A
subjective human evaluation (mean opinion score, or MOS) on LJ Speech, a single-speaker dataset, shows that our method outperforms the best publicly
available TTS systems and achieves a MOS comparable to ground truth.
|
Purpose: Iterative Convolutional Neural Networks (CNNs) which resemble
unrolled learned iterative schemes have been shown to consistently deliver state-of-the-art results for image reconstruction problems across different imaging modalities. However, because these methods include the forward model
in the architecture, their applicability is often restricted to either
relatively small reconstruction problems or to problems with operators which
are computationally cheap to compute. As a consequence, they have so far not
been applied to dynamic non-Cartesian multi-coil reconstruction problems.
Methods: In this work, we propose a CNN-architecture for image reconstruction
of accelerated 2D radial cine MRI with multiple receiver coils. The network is
based on a computationally light CNN-component and a subsequent conjugate
gradient (CG) method which can be jointly trained end-to-end using an efficient
training strategy. We investigate the proposed training-strategy and compare
our method to other well-known reconstruction techniques with learned and
non-learned regularization methods. Results: Our proposed method outperforms
all other methods based on non-learned regularization. Further, it performs
similar or better than a CNN-based method employing a 3D U-Net and a method
using adaptive dictionary learning. In addition, we empirically demonstrate
that even when the network is trained with only a single iteration, it is possible to increase the length of the network at test time and further improve the results. Conclusions: End-to-end training allows one to greatly reduce the number of trainable parameters of the reconstruction network and to stabilize it. Further,
because it is possible to change the length of the network at test time, the
need to find a compromise between the complexity of the CNN-block and the
number of iterations in each CG-block becomes irrelevant.
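The CG component referred to above solves a regularized linear system; as a standalone, hedged illustration of that building block (plain numpy, not the paper's multi-coil MRI operator or training code):

```python
# Plain conjugate-gradient solver for S.P.D. systems A x = b, the kind of
# routine that appears as the CG block inside unrolled reconstruction networks.
import numpy as np

def conjugate_gradient(A, b, x0=None, n_iter=10, tol=1e-10):
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy usage: a random S.P.D. system standing in for a regularized normal equation.
rng = np.random.default_rng(0)
M = rng.normal(size=(50, 50))
A = M.T @ M + 0.1 * np.eye(50)
b = rng.normal(size=50)
x = conjugate_gradient(A, b, n_iter=50)
print("residual norm:", np.linalg.norm(A @ x - b))
```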
|
In this paper, we address inter-beam inter-cell interference mitigation in 5G
networks that employ millimeter-wave (mmWave), beamforming and non-orthogonal
multiple access (NOMA) techniques. Those techniques play a key role in
improving network capacity and spectral efficiency by multiplexing users on
both spatial and power domains. In addition, the coverage area of multiple
beams from different cells can intersect, allowing more flexibility in
user-cell association. However, the intersection of coverage areas also implies
increased inter-beam inter-cell interference, i.e. interference among beams
formed by nearby cells. Therefore, joint user-cell association and inter-beam
power allocation stand as a promising solution to mitigate inter-beam,
inter-cell interference. In this paper, we consider a 5G mmWave network and
propose a reinforcement learning algorithm to perform joint user-cell
association and inter-beam power allocation to maximize the sum rate of the
network. The proposed algorithm is compared to a uniform power allocation that
equally divides power among beams per cell. Simulation results present a
performance enhancement of 13-30% in the network's sum rate, corresponding to the lowest and highest traffic loads, respectively.
|
The present paper concerns filtered de la Vall\'ee Poussin (VP) interpolation
at the Chebyshev nodes of the four kinds. This approximation model is
interesting for applications because it combines the advantages of the
classical Lagrange polynomial approximation (interpolation and polynomial
preserving) with the ones of filtered approximation (uniform boundedness of the
Lebesgue constants and reduction of the Gibbs phenomenon). Here we focus on
some additional features that are useful in the applications of filtered VP
interpolation. In particular, we analyze the simultaneous approximation
provided by the derivatives of the VP interpolation polynomials. Moreover, we
state the uniform boundedness of VP approximation operators in some Sobolev and
H\"older--Zygmund spaces where several integro--differential models are
uniquely and stably solvable.
|
The concept of compressing deep Convolutional Neural Networks (CNNs) is
essential to use limited computation, power, and memory resources on embedded
devices. However, existing methods achieve this objective at the cost of a drop
in inference accuracy in computer vision tasks. To address such a drawback, we
propose a framework that leverages knowledge distillation along with
customizable block-wise optimization to learn a lightweight CNN structure while
preserving better control over the compression-performance tradeoff.
Considering specific resource constraints, e.g., floating-point operations per
inference (FLOPs) or model parameters, our method results in state-of-the-art
network compression while being capable of achieving better inference accuracy.
In a comprehensive evaluation, we demonstrate that our method is effective,
robust, and consistent with results over a variety of network architectures and
datasets, at negligible training overhead. In particular, for the already
compact network MobileNet_v2, our method offers up to 2x and 5.2x better model
compression in terms of FLOPs and model-parameters, respectively, while getting
1.05% better model performance than the baseline network.
|
The impact of the ongoing COVID-19 pandemic is being felt in all spheres of
our lives -- cutting across the boundaries of nation, wealth, religions or
race. From the time of the first detection of infection among the public, the virus spread through almost all the countries in the world in a short period of time. With humans as the carrier of the virus, the spreading process necessarily depends on their mobility after being infected. Not only in the primary spreading process, but also in the subsequent spreading of the mutant variants, human mobility plays a central role in the dynamics. Therefore, on the one hand, travel restrictions of varying degrees were imposed, and are still being imposed, by various countries both nationally and internationally. On the other hand, these restrictions have severe fallout for businesses and livelihoods in general. It is therefore an optimization process, exercised on a global scale,
with multiple changing variables. Here we review the techniques and their
effects on optimization or proposed optimizations of human mobility in
different scales, carried out by data driven, machine learning and model
approaches.
|
We show that a bounded domain in a Euclidean space is a $W^{1,1}$-extension
domain if and only if it is a strong $BV$-extension domain. In the planar case,
bounded and strong $BV$-extension domains are shown to be exactly those
$BV$-extension domains for which the set $\partial\Omega \setminus \bigcup_{i}
\overline{\Omega}_i$ is purely $1$-unrectifiable, where $\Omega_i$ are the open
connected components of $\mathbb{R}^2\setminus\overline{\Omega}$.
|
In this paper, we prove the stability of shear flows of Prandtl type as $
\big(U(y/\sqrt{\nu}),0\big)$ for the steady Navier-Stokes equations under a
natural spectral assumption on the linearized NS operator. We develop a direct
energy method combined with a compactness method to solve the Orr-Sommerfeld equation.
|
The soliton resolution for the Harry Dym equation is established for initial
conditions in weighted Sobolev space $H^{1,1}(\mathbb{R})$. Combining the
nonlinear steepest descent method and a $\bar{\partial}$-derivatives condition, we obtain the long-time asymptotic expansion of the solution $q(x,t)$, for $\frac{y}{t}<-\epsilon$ ($\epsilon>0$), in any fixed cone
\begin{equation}
C\left(y_{1}, y_{2}, v_{1}, v_{2}\right)=\left\{(y, t) \in R^{2} \mid y=y_{0}+v t, y_{0} \in\left[y_{1}, y_{2}\right], v \in\left[v_{1}, v_{2}\right]\right\}
\end{equation}
up to a residual error of order $\mathcal{O}(t^{-1})$. The expansion shows that the long-time asymptotic behavior can be described as an $N(I)$-soliton on the discrete spectrum, whose parameters are modulated by a sum of localized soliton-soliton interactions as one moves through the cone, plus a second term coming from soliton-radiation interactions on the continuous spectrum.
|
There are many applications of multiphase flow in important fields such as
biological, chemical and power processes. Bubble coalescence is of significant importance in simulating multiphase fluid flows. The Weber number ($We$), the Reynolds number ($Re$), and the collision parameter play an important role in the coalescence of bubbles. In the present work, the front-tracking method is applied to simulate bubble coalescence. Moreover, the results are presented for different collision parameters, and changes in the coalescence of bubbles are discussed.
|
A downconversion receiver employing a switch-based N-path filter with reduced
harmonic response around the third and fifth LO harmonics is presented. The
N-path filter is placed in a frequency-translation feedback loop that is
effective at the 3rd and the 5th LO harmonics to mitigate harmonic
downconversion. A pulse-width-modulated LO (PWM-LO) clocking scheme is used in
the feedback upconverter to reduce the noise injected around the LO harmonic at
the input of N-path downconverter. The compression resulting from blockers
around the 3rd and the 5th LO harmonics is also suppressed as a result of
reduced harmonic response. Compensation of peak frequency shift of the N-path
response due to parasitic input capacitance is also described.
|
I examine the regime of forward scattering of an energetic particle in a
plasma medium in thermal equilibrium. Treating the particle as an open quantum
system interacting with a bath, I look at the time evolution of the reduced
density matrix of the system. The kinematic and dynamical time scales that
emerge can exist in several possible hierarchies which can lead to different
EFT formulations. I show that in certain hierarchies, it becomes necessary to
account for an arbitrary number of coherent exchanges between the system and the bath, going beyond the independent-scattering paradigm. Analytic results are
obtained in certain limits and the formalism is applied for the measurement of
transverse momentum broadening of a quark in a Quark Gluon Plasma medium.
|
In this paper, we develop an in-memory analog computing (IMAC) architecture
realizing both synaptic behavior and activation functions within non-volatile
memory arrays. Spin-orbit torque magnetoresistive random-access memory
(SOT-MRAM) devices are leveraged to realize sigmoidal neurons as well as
binarized synapses. First, it is shown that the proposed IMAC architecture can be
utilized to realize a multilayer perceptron (MLP) classifier achieving orders
of magnitude performance improvement compared to previous mixed-signal and
digital implementations. Next, a heterogeneous mixed-signal and mixed-precision
CPU-IMAC architecture is proposed for convolutional neural networks (CNNs)
inference on mobile processors, in which IMAC is designed as a co-processor to
realize fully-connected (FC) layers whereas convolution layers are executed in
CPU. Architecture-level analytical models are developed to evaluate the
performance and energy consumption of the CPU-IMAC architecture. Simulation
results exhibit 6.5% and 10% energy savings for CPU-IMAC based realizations of
LeNet and VGG CNN models, for MNIST and CIFAR-10 pattern recognition tasks,
respectively.
|
Few-shot object detection (FSOD) aims to strengthen the performance of novel
object detection with few labeled samples. To alleviate the constraint of few
samples, enhancing the generalization ability of learned features for novel
objects plays a key role. Thus, the feature learning process of FSOD should
focus more on intrinsical object characteristics, which are invariant under
different visual changes and therefore are helpful for feature generalization.
Unlike previous attempts of the meta-learning paradigm, in this paper, we
explore how to enhance object features with intrinsic characteristics that
are universal across different object categories. We propose a new prototype,
namely universal prototype, that is learned from all object categories. Besides
the advantage of characterizing invariant characteristics, the universal
prototypes alleviate the impact of unbalanced object categories. After
enhancing object features with the universal prototypes, we impose a
consistency loss to maximize the agreement between the enhanced features and
the original ones, which is beneficial for learning invariant object
characteristics. Thus, we develop a new framework of few-shot object detection
with universal prototypes ({FSOD}^{up}) that owns the merit of feature
generalization towards novel objects. Experimental results on PASCAL VOC and MS
COCO show the effectiveness of {FSOD}^{up}. Particularly, for the 1-shot case
of VOC Split2, {FSOD}^{up} outperforms the baseline by 6.8% in terms of mAP.
|
The Injury Severity Score (ISS) is a standard aggregate indicator of the
overall severity of multiple injuries to the human body. This score is
calculated by summing the squares of the three highest values of the
Abbreviated Injury Scale (AIS) grades across six body regions of a trauma
victim. Despite its widespread usage over the past four decades, little is
known in the (mostly medical) literature on the subject about the axiomatic and
statistical properties of this quadratic aggregation score. To bridge this gap,
the present paper studies the ISS from the perspective of recent advances in
decision science. We demonstrate some statistical and axiomatic properties of
the ISS as a multicriteria aggregation procedure. Our study highlights some
unintended, undesirable properties that stem from arbitrary choices in its
design and that can lead to bias in its use as a patient triage criterion.
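As a concrete illustration of the quadratic aggregation described above, a minimal Python sketch of the ISS computation follows; the input format (one AIS grade per body region) and the conventional cap at 75 when any region has an AIS grade of 6 are assumptions here, not part of this paper's analysis.

```python
def injury_severity_score(ais_by_region):
    """Compute the ISS from the AIS grades of the six body regions.

    ais_by_region: iterable of six integers in 0..6 (one AIS grade per region).
    Returns the sum of squares of the three highest grades; by the usual
    convention, any grade of 6 sets the score to the maximum of 75.
    """
    grades = sorted(ais_by_region, reverse=True)
    if len(grades) != 6:
        raise ValueError("expected one AIS grade for each of the six body regions")
    if 6 in grades:          # conventional rule for an 'unsurvivable' injury
        return 75
    return sum(g * g for g in grades[:3])

# Example: AIS grades (head, face, chest, abdomen, extremities, external)
print(injury_severity_score([3, 0, 4, 2, 1, 0]))  # 4^2 + 3^2 + 2^2 = 29
```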
|
We consider symmetric second-order differential operators with real
coefficients such that the corresponding differential equation is in the limit
circle case at infinity. Our goal is to construct the theory of self-adjoint
realizations of such operators by an analogy with the case of Jacobi operators.
We introduce a new object, the quasiresolvent of the maximal operator, and use
it to obtain a very explicit formula for the resolvents of all self-adjoint
realizations. In particular, this yields a simple representation for the
Cauchy-Stieltjes transforms of the spectral measures playing the role of the
classical Nevanlinna formula in the theory of Jacobi operators.
|
We postulate that non-perturbative QCD evolution of a single parton in the
vacuum will develop the long-range collective effects of a multi-parton system,
reminiscent of those observed in high-energy hadronic or nuclear interactions
with a large multiplicity of final-state particles.
Proton-proton collisions at the Large Hadron Collider showed surprising
signatures of a strongly interacting, thermalized quark-gluon plasma, which was
thought only to form in collisions of large nuclear systems. Another puzzle
observed earlier in $e^{+}e^{-}$ collisions is that production yields of
various hadron species appear to follow a thermal-like distribution with a
common temperature. We propose searches for thermal and collective properties
of a single parton propagating in the vacuum using high multiplicity jets in
high-energy elementary collisions. Several observables are studied using the
PYTHIA 8 Monte Carlo event generator. Experimental observation of such
long-range collectivity will offer a new view of non-perturbative QCD dynamics
of multi-parton systems at the smallest scales. Absence of any collective
effect may offer new insights into the role of quantum entanglement in the
observed thermal behavior of particle production in high energy collisions.
|
Real-world applications of machine learning tools in high-stakes domains are
often regulated to be fair, in the sense that the predicted target should
satisfy some quantitative notion of parity with respect to a protected
attribute. However, the exact tradeoff between fairness and accuracy with a
real-valued target is not entirely clear. In this paper, we characterize the
inherent tradeoff between statistical parity and accuracy in the regression
setting by providing a lower bound on the error of any fair regressor. Our
lower bound is sharp, algorithm-independent, and admits a simple
interpretation: when the moments of the target differ between groups, any fair
algorithm has to make an error on at least one of the groups. We further extend
this result to give a lower bound on the joint error of any (approximately)
fair algorithm, using the Wasserstein distance to measure the quality of the
approximation. With our novel lower bound, we also show that the price paid by
a fair regressor that does not take the protected attribute as input is less
than that of a fair regressor with explicit access to the protected attribute.
On the upside, we establish the first connection between individual fairness,
accuracy parity, and the Wasserstein distance by showing that if a regressor is
individually fair, it also approximately verifies the accuracy parity, where
the gap is given by the Wasserstein distance between the two groups. Inspired
by our theoretical results, we develop a practical algorithm for fair
regression through the lens of representation learning, and conduct experiments
on a real-world dataset to corroborate our findings.
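Since the bounds above are expressed through distributional discrepancies between groups, a quick empirical diagnostic is to compute the Wasserstein distance between the target distributions of the two groups; a minimal sketch using scipy is shown below, with synthetic stand-in data rather than any dataset from the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical real-valued targets for two protected groups with different moments.
y_group0 = rng.normal(loc=0.0, scale=1.0, size=5000)
y_group1 = rng.normal(loc=0.5, scale=1.2, size=5000)

# 1-Wasserstein distance between the empirical target distributions.
# A large value indicates that any regressor satisfying statistical parity
# must incur error on at least one of the groups.
print(wasserstein_distance(y_group0, y_group1))
```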
|
We correct the faulty formulas given in a previous article and we compute the
defect group for the Iwasawa $\lambda$ invariants attached to the S-ramified
T-decomposed abelian pro-$\ell$-extensions of the $\mathbb{Z}_\ell$-cyclotomic
extension of a number field. As a consequence, we extend the results of Itoh,
Mizusawa and Ozaki on tamely ramified Iwasawa modules for the cyclotomic
$\mathbb{Z}_\ell$-extension of abelian fields.
|
Chip-firing and rotor-routing are two well-studied examples of Abelian
networks. We study the complexity of their respective reachability problems. We
show that the rotor-routing reachability problem is decidable in polynomial
time, and we give a simple characterization of when a chip-and-rotor
configuration is reachable from another one. For chip-firing, it is known
that the reachability problem is in P for classes of graphs whose period
length is polynomial (for example, Eulerian digraphs). Here we show that in the
general case, chip-firing reachability is hard in the sense that if the
chip-firing reachability problem were in P for general digraphs, then the
polynomial hierarchy would collapse to NP.
|
We study an in-flight actuator failure recovery problem for a hexrotor UAV.
The hexrotor may experience external disturbances and modeling error, which are
accounted for in the control design and distinguished from an actuator failure.
A failure of any one actuator occurs during flight and must be identified
quickly and accurately. This is achieved through the use of a multiple-model,
multiple extended high-gain observer (EHGO) based output feedback control
strategy. The family of EHGOs is responsible for estimating states and
disturbances, and is used to select the appropriate model based on the system
dynamics after a failure has occurred. The proposed method is theoretically
analyzed and validated through simulations and experiments.
|
In this short note I restate and simplify the proof of the impossibility of
probabilistic induction from Popper (1992). Other proofs are possible (cf.
Popper (1985)).
|
We propose a trust-region method that solves a sequence of linear integer
programs to tackle integer optimal control problems regularized with a total
variation penalty.
The total variation penalty allows us to prove the existence of minimizers of
the integer optimal control problem. We introduce a local optimality concept
for the problem, which arises from the infinite-dimensional perspective. In the
case of a one-dimensional domain of the control function, we prove convergence
of the iterates produced by our algorithm to points that satisfy first-order
stationarity conditions for local optimality. We demonstrate the theoretical
findings on a computational example.
|
In this study, we present a dynamical agent-based model to investigate the
interplay between the socio-economy of a geographical area and SEIRS-type
epidemic spreading over it, with the area divided into smaller districts and
further into the smallest area cells. The model treats the populations of cells and authorities of
districts as agents, such that the former can reduce their economic activity
and the latter can recommend economic activity reduction both with the overall
goal of slowing down the epidemic spreading. The agents make decisions with the
aim of attaining as high socio-economic standings as possible relative to other
agents of the same type by evaluating their standings based on the local and
regional infection rates, compliance to the authorities' regulations, regional
drops in economic activity, and efforts to mitigate the spread of epidemic. We
find that the willingness of population to comply with authorities'
recommendations has the most drastic effect on the epidemic spreading: periodic
waves spread almost unimpeded in non-compliant populations, while in compliant
ones the spread is minimal, with a chaotic spreading pattern and significantly
lower infection rates. Health and economic concerns of agents turn out to have
lesser roles, the former increasing their efforts and the latter decreasing
them.
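For context, a standard mean-field SEIRS scheme is recalled below; this is not the paper's agent-based formulation, whose rates, spatial coupling, and behavioral feedback are not reproduced here.

$$ \frac{dS}{dt} = -\beta S I + \omega R, \quad \frac{dE}{dt} = \beta S I - \sigma E, \quad \frac{dI}{dt} = \sigma E - \gamma I, \quad \frac{dR}{dt} = \gamma I - \omega R, $$

with transmission rate $\beta$, incubation rate $\sigma$, recovery rate $\gamma$, and waning-immunity rate $\omega$; reductions in economic activity can loosely be thought of as lowering the effective local $\beta$.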
|
Recent work has made significant progress on using implicit functions as a
continuous representation for 3D rigid object shape reconstruction. However,
much less effort has been devoted to modeling general articulated objects.
Compared to rigid objects, articulated objects have higher degrees of freedom,
which makes it hard to generalize to unseen shapes. To deal with the large
shape variance, we introduce Articulated Signed Distance Functions (A-SDF) to
represent articulated shapes with a disentangled latent space, where we have
separate codes for encoding shape and articulation. We assume no prior
knowledge on part geometry, articulation status, joint type, joint axis, and
joint location. With this disentangled continuous representation, we
demonstrate that we can control the articulation input and animate unseen
instances with unseen joint angles. Furthermore, we propose a Test-Time
Adaptation inference algorithm to adjust our model during inference. We
demonstrate that our model generalizes well to out-of-distribution and unseen data,
e.g., partial point clouds and real-world depth images.
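A minimal PyTorch sketch of the disentangled implicit function described above is given below; the network widths, code sizes, and the simple MLP decoder are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ArticulatedSDF(nn.Module):
    """Toy A-SDF-style decoder: separate latent codes for shape and articulation."""

    def __init__(self, shape_dim=256, artic_dim=8, hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + shape_dim + artic_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # signed distance value
        )

    def forward(self, xyz, shape_code, artic_code):
        # xyz: (N, 3) query points; codes are broadcast to every query point.
        n = xyz.shape[0]
        feat = torch.cat([xyz,
                          shape_code.expand(n, -1),
                          artic_code.expand(n, -1)], dim=-1)
        return self.mlp(feat)

# Animating an unseen instance amounts to fixing the shape code and varying
# only the articulation code (e.g., a joint angle).
model = ArticulatedSDF()
pts = torch.rand(1024, 3)
sdf = model(pts, torch.randn(1, 256), torch.tensor([[0.3] * 8]))
print(sdf.shape)  # torch.Size([1024, 1])
```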
|
The rapid early spread of COVID-19 in the U.S. was experienced very
differently by different socioeconomic groups and business industries. In this
study, we examine aggregate mobility patterns of New York City and Chicago to
identify the relationship between the amount of interpersonal contact between
people in urban neighborhoods and the disparity in the growth of positive cases
among these groups. We introduce an aggregate Contact Exposure Index (CEI) to
measure exposure due to this interpersonal contact and combine it with social
distancing metrics to show its effect on positive case growth. With the help of
structural equations modeling, we find that the effect of exposure on case
growth was consistently positive and that it remained consistently higher in
lower-income neighborhoods, suggesting a causal path of income on case growth
via contact exposure. Using the CEI, schools and restaurants are identified as
high-exposure industries, and the estimation suggests that implementing
specific mobility restrictions on these point-of-interest categories would be most
effective. This analysis can be useful in providing insights for government
officials targeting specific population groups and businesses to reduce
infection spread as reopening efforts continue to expand across the nation.
|
This study analyses the actual effect of a representative low-emission zone
(LEZ) in terms of shifting vehicle registrations towards alternative fuel
technologies and its effectiveness for reducing vehicle fleet CO2 emissions.
Vehicle registration data is combined with real life fuel consumption values on
individual vehicle model level, and the impact of the LEZ is then determined
via an econometric approach. The increase in alternative fuel vehicles (AFV)
registration shares due to the LEZ is found to be significant, but it favors
fossil-fuel-powered AFVs and plug-in hybrid electric vehicles rather than
zero-emission vehicles. This is reflected in the average CO2 emissions of newly
registered vehicles, which do not decrease significantly. In consequence, while
the LEZ is an effective measure for stimulating the shift towards low emission
vehicles, the support of non-electric AFV as low emission vehicles jeopardizes
its effectiveness for decarbonizing the vehicle fleet.
|
The Internet of Things (IoT) devices are highly reliant on cloud systems to
meet their storage and computational demands. However, due to the remote
location of cloud servers, IoT devices often suffer from intermittent Wide Area
Network (WAN) latency, which makes the execution of delay-critical IoT
applications infeasible. To overcome this, service providers (SPs) often deploy multiple
fog nodes (FNs) at the network edge that helps in executing offloaded
computations from IoT devices with improved user experience. As the FNs have
limited resources, matching IoT services to FNs while ensuring minimum latency
and energy from an end-user's perspective and maximizing revenue and tasks
meeting deadlines from an SP's standpoint is challenging. Therefore, in this
paper, we propose a student project allocation (SPA) based efficient task
offloading strategy called SPATO that takes into account key parameters from
different stakeholders. Thorough simulation analysis shows that SPATO is able
to reduce the offloading energy and latency respectively by 29% and 40% and
improves the revenue by 25% with 99.3% of tasks executing within their
deadline.
|
In the process of decarbonization, the global energy mix is shifting from
fossil fuels to renewables. To study decarbonization pathways, large-scale
energy system models are utilized. These models require accurate data on
renewable generation to develop their full potential. Using different data can
lead to conflicting results and policy advice. In this work, we compare several
datasets that are commonly used to study the transition towards a highly
renewable European power system. We find significant differences between these
datasets, and cost differences of about 10% result in different energy mixes.
We conclude that much more attention must be paid to the large uncertainties of
the input data.
|
We propose a novel blocked version of the continuous-time bouncy particle
sampler of [Bouchard-C\^ot\'e et al., 2018] which is applicable to any
differentiable probability density. This alternative implementation is
motivated by blocked Gibbs sampling for state space models [Singh et al., 2017]
and leads to significant improvement in terms of effective sample size per
second, and furthermore, allows for significant parallelization of the
resulting algorithm. The new algorithms are particularly efficient for latent
state inference in high-dimensional state space models, where blocking in both
space and time is necessary to avoid degeneracy of MCMC. The efficiency of our
blocked bouncy particle sampler, in comparison with both the standard
implementation of the bouncy particle sampler and the particle Gibbs algorithm
of Andrieu et al. [2010], is illustrated numerically for both simulated data
and a challenging real-world financial dataset.
|
Squeezed light is a key quantum resource that enables quantum advantages for
sensing, networking, and computing applications. The scalable generation and
manipulation of squeezed light with integrated platforms are highly desired for
the development of quantum technology with continuous variables. In this
letter, we demonstrate squeezed light generation with thin-film lithium niobate
integrated photonics. Parametric down-conversion is realized with quasi-phase
matching using ferroelectric domain engineering. With sub-wavelength mode
confinement, efficient nonlinear processes can be observed in a single-pass
configuration. We measure $0.56 \pm 0.09$ dB quadrature squeezing (~3 dB inferred
on-chip). The single-pass configuration further enables the generation of
squeezed light with large spectral bandwidth up to 7 THz. This work represents
a significant step towards the on-chip implementation of continuous-variable
quantum information processing.
|
This paper considers the data association problem for multi-target tracking.
Multiple hypothesis tracking is a popular algorithm for solving this problem
but it is NP-hard and quite complicated for a large number of targets or
for tracking maneuvering targets. To improve tracking performance and enhance
robustness, we propose a randomized multiple model multiple hypothesis tracking
method, which has three distinctive advantages. First, it yields a randomized
data association solution which maximizes the expectation of the logarithm of
the posterior probability and can be solved efficiently by linear programming.
Next, the state estimation performance is improved by the random coefficient
matrices Kalman filter, which mitigates the difficulty introduced by randomized
data association, i.e., where the coefficient matrices of the dynamic system
are random. Third, the probability that the target follows a specific dynamic
model is derived by jointly optimizing the multiple possible models and data
association hypotheses, and it does not require prior mode transition
probabilities. Thus, it is more robust for tracking multiple maneuvering
targets. Simulations demonstrate the efficiency and superior results of the
proposed algorithm over interacting multiple model multiple hypothesis
tracking.
|
A new similarity measure for two sets of S-parameters is proposed. It is
constructed with the modified Hausdorff distance applied to S-parameter points
in 3D space with real, imaginary, and normalized-frequency axes. The new
S-parameter similarity measure facilitates automation of analysis-to-measurement
validation, comparison of models and measurements obtained with
different tools, as well as finding similar S-parameter models or similar
elements within S-matrices.
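A minimal numpy sketch of the proposed measure is shown below; the mapping of an S-parameter trace to 3D points and the modified Hausdorff distance follow the description above, but the frequency normalization and the example traces are assumptions.

```python
import numpy as np

def s_param_points(freq, s, f_scale=None):
    """Map one S-parameter trace to 3D points (Re, Im, normalized frequency)."""
    f_scale = f_scale or freq.max()
    return np.column_stack([s.real, s.imag, freq / f_scale])

def modified_hausdorff(a, b):
    """Modified Hausdorff distance (Dubuisson & Jain): the maximum of the two
    mean nearest-neighbour distances between point sets a and b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

# Hypothetical example: compare a measured and a modelled S11 trace.
freq = np.linspace(1e9, 10e9, 201)
s11_meas = 0.30 * np.exp(-1j * 2 * np.pi * freq * 1.00e-10)
s11_model = 0.32 * np.exp(-1j * 2 * np.pi * freq * 1.05e-10)
score = modified_hausdorff(s_param_points(freq, s11_meas),
                           s_param_points(freq, s11_model))
print(score)
```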
|
Brain-Computer Interfaces (BCI) based on motor imagery translate mental motor
images recognized from the electroencephalogram (EEG) to control commands. EEG
patterns of different imagination tasks, e.g. hand and foot movements, are
effectively classified with machine learning techniques using band power
features. Recently, Convolutional Neural Networks (CNNs) that learn both
effective features and classifiers simultaneously from raw EEG data have also
been applied. However, CNNs have two major drawbacks: (i) they have a very large
number of parameters, which thus requires a very large number of training
examples; and (ii) they are not designed to explicitly learn features in the
frequency domain. To overcome these limitations, in this work we introduce
Sinc-EEGNet, a lightweight CNN architecture that combines learnable band-pass
and depthwise convolutional filters. Experimental results obtained on the
publicly available BCI Competition IV Dataset 2a show that our approach
outperforms reference methods in terms of classification accuracy.
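To make the "learnable band-pass" idea concrete, the sketch below builds a windowed-sinc band-pass FIR kernel from two cutoff frequencies, the usual parametrization behind sinc-based convolutional layers; treating the cutoffs as trainable parameters, and the exact window and normalization choices of Sinc-EEGNet, are assumptions here.

```python
import numpy as np

def sinc_bandpass_kernel(f_low, f_high, fs, length=65):
    """Windowed-sinc band-pass FIR kernel: difference of two low-pass sinc filters.

    f_low, f_high: band edges in Hz; fs: sampling rate in Hz; length: odd kernel size.
    """
    t = (np.arange(length) - (length - 1) / 2) / fs        # time axis centred at 0
    lp_high = 2 * f_high * np.sinc(2 * f_high * t)         # ideal low-pass at f_high
    lp_low = 2 * f_low * np.sinc(2 * f_low * t)            # ideal low-pass at f_low
    kernel = (lp_high - lp_low) * np.hamming(length)       # band-pass + window
    return kernel / np.abs(kernel).sum()                   # simple normalization

# Example: an 8-30 Hz kernel for 250 Hz EEG (typical mu/beta band in motor imagery).
h = sinc_bandpass_kernel(8.0, 30.0, fs=250.0)
print(h.shape)
```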
|
We study map lattices coupled by collision and show how perturbations of
transfer operators associated with the spatially periodic approximation of the
model can be used to extract information about collisions per lattice unit.
More precisely, we study a map on a finite box of $L$ sites with periodic
boundary conditions, coupled by collision. We derive, via a non-trivial first
order approximation for the leading eigenvalue of the rare event transfer
operator, a formula for the first collision rate and a corresponding first
hitting time law. For the former we show that the formula scales at the order
of $L\cdot\varepsilon^2$, where $\varepsilon$ is the coupling strength, and for
the latter, by tracking the $L$ dependency in our arguments, we show that the
error in the law is of order $O\left(C(L)L\varepsilon^2\cdot|\ln
L\varepsilon^2|\right)$ for a specific function $C(L)$. Finally, we derive an
explicit formula for the first collision rate per lattice unit.
|
Pretrained language models have achieved state-of-the-art performance when
adapted to a downstream NLP task. However, theoretical analysis of these models
is scarce and challenging since the pretraining and downstream tasks can be
very different. We propose an analysis framework that links the pretraining and
downstream tasks with an underlying latent variable generative model of text --
the downstream classifier must recover a function of the posterior distribution
over the latent variables. We analyze head tuning (learning a classifier on top
of the frozen pretrained model) and prompt tuning in this setting. The
generative model in our analysis is either a Hidden Markov Model (HMM) or an
HMM augmented with a latent memory component, motivated by long-term
dependencies in natural language. We show that 1) under certain non-degeneracy
conditions on the HMM, simple classification heads can solve the downstream
task, 2) prompt tuning obtains downstream guarantees with weaker non-degeneracy
conditions, and 3) our recovery guarantees for the memory-augmented HMM are
stronger than for the vanilla HMM because task-relevant information is easier
to recover from the long-term memory. Experiments on synthetically generated
data from HMMs back our theoretical findings.
|
Densest subgraph detection is a fundamental graph mining problem, with a
large number of applications. There has been a lot of work on efficient
algorithms for finding the densest subgraph in massive networks. However, in
many domains, the network is private, and returning a densest subgraph can
reveal information about the network. Differential privacy is a powerful
framework to handle such settings. We study the densest subgraph problem in the
edge privacy model, in which the edges of the graph are private. We present the
first sequential and parallel differentially private algorithms for this
problem. We show that our algorithms have an additive approximation guarantee.
We evaluate our algorithms on a large number of real-world networks, and
observe a good privacy-accuracy tradeoff when the network has high density.
|
Protein-protein interactions are the basis of many important physiological
processes and are currently promising, yet difficult, targets for drug
discovery. In this context, inhibitor of apoptosis proteins (IAPs)-mediated
interactions are pivotal for cancer cell survival; the interaction of the BIR1
domain of cIAP2 with TRAF2 was shown to lead to the recruitment of cIAPs to the
TNF receptor, promoting the activation of the NF-\kappa B survival pathway. In
this work, using a combined in silico-in vitro approach, we identified a
drug-like molecule, NF023, able to disrupt cIAP2 interaction with TRAF2. We
demonstrated in vitro its ability to interfere with the assembly of the
cIAP2-BIR1/TRAF2 complex and performed a thorough characterization of the
compound's mode of action through 248 parallel unbiased molecular dynamics
simulations of 300 ns (totaling almost 75 {\mu}s of all-atom sampling), which
identified multiple binding modes to the BIR1 domain of cIAP2 via clustering
and ensemble docking. NF023 is, thus, a promising protein-protein interaction
disruptor, representing a starting point to develop modulators of NF-\kappa
B-mediated cell survival in cancer. This study represents a model procedure
that shows the use of large-scale molecular dynamics methods to typify
promiscuous interactors.
|
We analysed the shadow cast by charged rotating black hole (BH) in presence
of perfect fluid dark matter (PFDM). We studied the null geodesic equations and
obtained the shadow of the charged rotating BH to see the effects of PFDM
parameter $\gamma$, charge $Q$ and rotation parameter $a$, and it is noticed
that the size as well as the shape of BH shadow is affected due to PFDM
parameter, charge and rotation parameter. Thus, it is seen that the presence of
dark matter around a BH affects its spacetime. We also investigated the
influence of all the parameters (PFDM parameter $\gamma$, BHs charge $Q$ and
rotational parameter $a$) on effective potential, energy emission by graphical
representation, and compared all the results with the non-rotating case in usual
general relativity. To this end, we have also explored the effect of PFDM on
the deflection angle and the size of Einstein rings.
|
The year 2020 will be remembered for two events of global significance: the
COVID-19 pandemic and 2020 U.S. Presidential Election. In this chapter, we
summarize recent studies using large public Twitter data sets on these issues.
We have three primary objectives. First, we delineate epistemological and
practical considerations when combining the traditions of computational
research and social science research. A sensible balance should be struck when
the stakes are high between advancing social theory and concrete, timely
reporting of ongoing events. We additionally comment on the computational
challenges of gleaning insight from large amounts of social media data. Second,
we characterize the role of social bots in social media manipulation around the
discourse on the COVID-19 pandemic and 2020 U.S. Presidential Election. Third,
we compare results from 2020 to prior years to note that, although bot accounts
still contribute to the emergence of echo-chambers, there is a transition from
state-sponsored campaigns to domestically emergent sources of distortion.
Furthermore, issues of public health can be confounded by political
orientation, especially from localized communities of actors who spread
misinformation. We conclude that automation and social media manipulation pose
issues to a healthy and democratic discourse, precisely because they distort
representation of pluralism within the public sphere.
|
We present a new numerical approach for wave induced dynamic fracture. The
method is based on a discontinuous Galerkin approximation of the first-order
hyperbolic system for elastic waves and a phase-field approximation of brittle
fracture driven by the maximum tension. The algorithm is staggered in time and
combines an implicit midpoint rule for the wave propagation followed by an
implicit Euler step for the phase-field evolution. At fracture, the material is
degraded, and the waves are reflected at the diffusive interfaces. Two and
three-dimensional examples demonstrate the advantages of the proposed method
for the computation of crack growth and spalling initiated by reflected and
superposed waves.
|
Experimental measurements in deep-inelastic scattering and lepton-pair
production on deuterium targets play an important role in the flavor separation
of $u$ and $d$ (anti)quarks in global QCD analyses of the parton distribution
functions (PDFs) of the nucleon. We investigate the impact of theoretical
corrections accounting for the light-nuclear structure of the deuteron upon the
fitted $u, d$-quark, gluon, and other PDFs in the CJ15 and CT18 families of
next-to-leading order CTEQ global analyses. The investigation is done using the
$L_2$ sensitivity statistical method, which provides a common metric to
quantify the strength of experimental constraints on various PDFs and ratios of
PDFs in the two distinct fitting frameworks. Using the $L_2$ sensitivity and
other approaches, we examine the compatibility of deuteron data sets with other
fitted experiments under varied implementations of the deuteron corrections. We
find that freely-fitted deuteron corrections modify the PDF uncertainty at
large momentum fractions and will be relevant for future PDFs affecting
electroweak precision measurements.
|
The Carina Nebula harbors a large population of high-mass stars, including at
least 75 O-type and Wolf-Rayet stars, but the current census is not complete
since further high-mass stars may be hidden in or behind the dense dark clouds
that pervade the association. With the aim of identifying optically obscured O-
and early B-type stars in the Carina Nebula, we performed the first infrared
spectroscopic study of stars in the optically obscured stellar cluster Tr
16-SE, located behind a dark dust lane south of eta Car. We used the
integral-field spectrograph KMOS at the ESO VLT to obtain H- and K-band spectra
with a resolution of $R \sim 4000$ ($\Delta\lambda \sim 5$ \AA) for 45 out of the 47
possible OB candidate stars in Tr 16-SE, and we derived spectral types for
these stars. We find 15 stars in Tr 16-SE with spectral types between O5 and B2
(i.e., high-mass stars with M >= 8 Msun), only two of which were known before.
An additional nine stars are classified as (Ae)Be stars (i.e.,
intermediate-mass pre-main-sequence stars), and most of the remaining targets
show clear signatures of being late-type stars and are thus most likely
foreground stars or background giants unrelated to the Carina Nebula. Our
estimates of the stellar luminosities suggest that nine of the 15 O- and early
B-type stars are members of Tr 16-SE, whereas the other six seem to be
background objects. Our study increases the number of spectroscopically
identified high-mass stars (M >= 8 Msun) in Tr 16-SE from two to nine and shows
that Tr 16-SE is one of the larger clusters in the Carina Nebula. Our
identification of three new stars with spectral types between O5 and O7 and
four new stars with spectral types O9 to B1 significantly increases the number
of spectroscopically identified O-type stars in the Carina Nebula.
|
Probabilistic deep learning is deep learning that accounts for uncertainty,
both model uncertainty and data uncertainty. It is based on the use of
probabilistic models and deep neural networks. We distinguish two approaches to
probabilistic deep learning: probabilistic neural networks and deep
probabilistic models. The former employs deep neural networks that utilize
probabilistic layers which can represent and process uncertainty; the latter
uses probabilistic models that incorporate deep neural network components which
capture complex non-linear stochastic relationships between the random
variables. We discuss some major examples of each approach including Bayesian
neural networks and mixture density networks (for probabilistic neural
networks), and variational autoencoders, deep Gaussian processes and deep mixed
effects models (for deep probabilistic models). TensorFlow Probability is a
library for probabilistic modeling and inference which can be used for both
approaches of probabilistic deep learning. We include its code examples for
illustration.
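As an illustration of the "probabilistic layer" idea (the first approach above), the following is a minimal TensorFlow Probability regression sketch in which the network outputs a full Normal distribution and is trained by negative log-likelihood; it is a generic example, not one of the specific code examples included in the paper.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# A network whose output is a distribution over y, capturing data uncertainty.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),  # parameters: location and (pre-softplus) scale
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(t[..., 1:]))),
])

# Train by maximizing the likelihood of the observed targets.
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
model.compile(optimizer="adam", loss=negloglik)

x = tf.random.normal([256, 1])
y = 2.0 * x + 0.3 * tf.random.normal([256, 1])
model.fit(x, y, epochs=5, verbose=0)
```

Here the final layer parameterizes both a mean and a scale, so the network expresses data uncertainty directly; capturing model uncertainty would additionally require probabilistic weights, e.g., Bayesian layers.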
|
The scale of small-field inflation cannot be constrained via primordial
gravitational waves through measurement of tensor-to-scalar ratio $r$. In this
study, I show that if cosmic strings are produced after symmetry breaking at
the end of hilltop supernatural inflation, this small-field inflation model can
be tested through the production of gravitational waves from cosmic strings.
Future experiments of gravitational wave detectors will determine or further
constrain the parameter space in the model.
|
In this work, the structural, electrical, and optical properties of bilayer
SiX (X= N, P, As, and Sb) are studied using density functional theory (DFT).
Five different stacking orders are considered for every compound and their
structural properties are presented. The band structure of these materials
demonstrates that they are indirect semiconductors. Out-of-plane strain is
applied to tune the bandgap and the electrical properties. The bandgap
increases with tensile strain, whereas compressive strain leads to a
semiconductor-to-metal transition. The sensitivity of the bandgap to pressure
is investigated, and bilayer SiSb demonstrates the highest bandgap
sensitivity to pressure. These structures exhibit a Mexican-hat-like valence
band dispersion, which is confirmed by a singularity in the density of states.
The Mexican-hat coefficient can be tuned by out-of-plane strain. Optical
absorption of these compounds shows that the second and lower valence bands,
owing to their high density of states, make a higher contribution to optical
transitions.
|
Signal propagation in an optical fiber can be described by the nonlinear
Schr\"odinger equation (NLSE). The NLSE has no known closed-form solution,
mostly due to the interaction of dispersion and nonlinearities. In this paper,
we present a novel closed-form approximate model for the nonlinear optical
channel, with applications to passive optical networks. The proposed model is
derived using logarithmic perturbation in the frequency domain on the
group-velocity dispersion (GVD) parameter of the NLSE. The model can be seen as
an improvement of the recently proposed regular perturbation (RP) on the GVD
parameter. RP and logarithmic perturbation (LP) on the nonlinear coefficient
have already been studied in the literature, and are hereby compared with RP on
the GVD parameter and the proposed LP model. As an application of the model, we
focus on passive optical networks. For a 20 km PON at 10 Gbaud, the proposed
model improves upon LP on the nonlinear coefficient by 1.5 dB. For the same
system, a detector based on the proposed LP model reduces the uncoded
bit-error-rate by up to 5.4 times at the same input power or reduces the input
power by 0.4 dB at the same information rate.
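For reference, one common form of the NLSE for the complex field envelope $A(z,t)$ propagating in a fiber is recalled below; sign conventions and the higher-order terms retained vary between works, and the exact form used in the paper is not reproduced here.

$$ \frac{\partial A}{\partial z} = -\frac{\alpha}{2} A - i\,\frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial t^2} + i\,\gamma \lvert A \rvert^{2} A, $$

where $\alpha$ is the attenuation, $\beta_2$ the GVD parameter, and $\gamma$ the nonlinear coefficient.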
|
Rhombohedral dense forms of carbon, rh-C2 (or hexagonal h-C6), and boron
nitride, rh-BN (or hexagonal h-B3N3), are derived from rhombohedral 3R graphite
based on an original crystal chemistry scheme backed by full cell geometry
optimization to the minimal-energy ground state within quantum density
functional theory. Considering hexagonal settings featuring extended lattices
throughout, the calculation of the hexagonal set of elastic constants provides
large bulk moduli, i.e., B0(rh-C2) = 438 GPa, close to that of
diamond, and B0(rh-BN) = 369 GPa close to that of cubic BN. The hardness
assessment in the framework of three contemporary models enables both phases to
be considered as ultra-hard. From the electronic band structures calculated in
the hexagonal Brillouin zones, 3R graphite is a small-gap semiconductor,
in contrast to rh-C2, which is characterized by a large band gap close to 5 eV, as
well as the two BN phases.
|
Currently, there are more than a dozen Russian-language corpora for sentiment
analysis, differing in the source of the texts, domain, size, number and ratio
of sentiment classes, and annotation method. This work examines publicly
available Russian-language corpora, presents their qualitative and quantitative
characteristics, which make it possible to get an idea of the current landscape
of the corpora for sentiment analysis. The ranking of corpora by annotation
quality is proposed, which can be useful when choosing corpora for training and
testing. The influence of the training dataset on the performance of sentiment
analysis is investigated based on the use of the deep neural network model
BERT. The experiments with review corpora allow us to conclude that on average
the quality of models increases with an increase in the number of training
corpora. For the first time, quality scores were obtained for the corpus of
reviews of ROMIP seminars based on the BERT model. Also, the study proposes the
task of building a universal model for sentiment analysis.
|
An LBYL (`Look Before You Leap') Network is proposed for end-to-end trainable
one-stage visual grounding. The idea behind LBYL-Net is intuitive and
straightforward: we follow the language description to localize the target
object based on its relative spatial relation to `Landmarks', which is
characterized by some spatial positional words and some descriptive words about
the object. The core of our LBYL-Net is a landmark feature convolution module
that transmits the visual features with the guidance of linguistic description
along with different directions. Consequently, such a module encodes the
relative spatial positional relations between the current object and its
context. Then we combine the contextual information from the landmark feature
convolution module with the target's visual features for grounding. To make
this landmark feature convolution light-weight, we introduce a dynamic
programming algorithm (termed dynamic max pooling) with low complexity to
extract the landmark feature. Thanks to the landmark feature convolution
module, we mimic the human behavior of `Look Before You Leap' to design an
LBYL-Net, which takes full consideration of contextual information. Extensive
experiments show our method's effectiveness in four grounding datasets.
Specifically, our LBYL-Net outperforms all state-of-the-art two-stage and
one-stage methods on ReferitGame. On RefCOCO and RefCOCO+, our LBYL-Net also
achieves comparable or even better results than existing one-stage
methods.
|
When faced with learning challenging new tasks, humans often follow sequences
of steps that allow them to incrementally build up the necessary skills for
performing these new tasks. However, in machine learning, models are most often
trained to solve the target tasks directly. Inspired by human learning, we
propose a novel curriculum learning approach which decomposes challenging tasks
into sequences of easier intermediate goals that are used to pre-train a model
before tackling the target task. We focus on classification tasks, and design
the intermediate tasks using an automatically constructed label hierarchy. We
train the model at each level of the hierarchy, from coarse labels to fine
labels, transferring acquired knowledge across these levels. For instance, the
model will first learn to distinguish animals from objects, and then use this
acquired knowledge when learning to classify among more fine-grained classes
such as cat, dog, car, and truck. Most existing curriculum learning algorithms
for supervised learning consist of scheduling the order in which the training
examples are presented to the model. In contrast, our approach focuses on the
output space of the model. We evaluate our method on several established
datasets and show significant performance gains especially on classification
problems with many labels. We also evaluate on a new synthetic dataset which
allows us to study multiple aspects of our method.
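A minimal sketch of the coarse-to-fine training schedule described above is given below; the two-level hierarchy, the re-initialized classification head at each level, the synthetic data, and all hyper-parameters are illustrative assumptions rather than the authors' setup.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class Backbone(nn.Module):
    """Tiny feature extractor shared across curriculum levels."""
    def __init__(self, in_dim=32, out_dim=64):
        super().__init__()
        self.out_dim = out_dim
        self.net = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)

def train_level(backbone, num_classes, loader, epochs=2, lr=1e-3):
    """Train a fresh head on one level of the label hierarchy while fine-tuning
    the shared backbone, then return the backbone for the next (finer) level."""
    head = nn.Linear(backbone.out_dim, num_classes)
    opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(head(backbone(x)), y).backward()
            opt.step()
    return backbone

# Synthetic data: 10 fine classes grouped into 2 coarse classes (fine // 5).
x = torch.randn(512, 32)
y_fine = torch.randint(0, 10, (512,))
y_coarse = y_fine // 5
coarse_loader = DataLoader(TensorDataset(x, y_coarse), batch_size=64)
fine_loader = DataLoader(TensorDataset(x, y_fine), batch_size=64)

backbone = Backbone()
backbone = train_level(backbone, num_classes=2, loader=coarse_loader)   # coarse first
backbone = train_level(backbone, num_classes=10, loader=fine_loader)    # then fine
```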
|
Given a monoid $S$ with $E$ any non-empty subset of its idempotents, we
present a novel one-sided version of idempotent completion we call left
$E$-completion. In general, the construction yields a one-sided variant of a
small category called a constellation by Gould and Hollings. Under certain
conditions, this constellation is inductive, meaning that its partial
multiplication may be extended to give a left restriction semigroup, a type of
unary semigroup whose unary operation models domain. We study the properties of
those pairs $S,E$ for which this happens, and characterise those left
restriction semigroups that arise as such left $E$-completions of their monoid
of elements having domain $1$. As first applications, we decompose the left
restriction semigroup of partial functions on the set $X$ and the right
restriction semigroup of left total partitions on $X$ as left and right
$E$-completions respectively of the transformation semigroup $T_X$ on $X$, and
decompose the left restriction semigroup of binary relations on $X$ under
demonic composition as a left $E$-completion of the left-total binary
relations. In many cases, including these three examples, the construction
embeds in a semigroup Zappa-Sz\'{e}p product.
|
Synthesizing data for semantic parsing has gained increasing attention
recently. However, most methods require handcrafted (high-precision) rules in
their generative process, hindering the exploration of diverse unseen data. In
this work, we propose a generative model which features a (non-neural) PCFG
that models the composition of programs (e.g., SQL), and a BART-based
translation model that maps a program to an utterance. Due to the simplicity of
PCFG and pre-trained BART, our generative model can be efficiently learned from
existing data at hand. Moreover, explicitly modeling compositions using PCFG
leads to better exploration of unseen programs, thus generating more diverse
data. We evaluate our method in both in-domain and out-of-domain settings of
text-to-SQL parsing on the standard benchmarks of GeoQuery and Spider,
respectively. Our empirical results show that the synthesized data generated
from our model can substantially help a semantic parser achieve better
compositional and domain generalization.
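As a toy illustration of the non-neural PCFG component, the sketch below samples SQL-like programs from a hand-written probabilistic grammar; in the paper the production probabilities are learned from existing data and a BART model then maps each sampled program to an utterance, neither of which is shown here.

```python
import random

# Toy PCFG over SQL-like programs: non-terminal -> [(probability, expansion), ...]
GRAMMAR = {
    "QUERY": [(1.0, ["SELECT", "COL", "FROM", "TABLE", "COND"])],
    "COND":  [(0.5, ["WHERE", "COL", "OP", "VAL"]), (0.5, [])],
    "COL":   [(0.5, ["name"]), (0.5, ["population"])],
    "TABLE": [(1.0, ["city"])],
    "OP":    [(0.5, [">"]), (0.5, ["="])],
    "VAL":   [(1.0, ["100"])],
}

def sample(symbol="QUERY", rng=random):
    """Recursively expand a symbol according to the production probabilities."""
    if symbol not in GRAMMAR:          # terminal token
        return [symbol]
    r, acc = rng.random(), 0.0
    for prob, expansion in GRAMMAR[symbol]:
        acc += prob
        if r <= acc:
            return [tok for sym in expansion for tok in sample(sym, rng)]
    return []

for _ in range(3):
    print(" ".join(sample()))
```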
|
Detecting pedestrians is a crucial task in autonomous driving systems to
ensure the safety of drivers and pedestrians. The technologies involved in
these algorithms must be precise and reliable, regardless of environment
conditions. Relying solely on RGB cameras may not be enough to recognize road
environments in situations where cameras cannot capture scenes properly. Some
approaches aim to compensate for these limitations by combining RGB cameras
with TOF sensors, such as LIDARs. However, there are few works that address
this problem using exclusively the 3D geometric information provided by LIDARs.
In this paper, we propose a PointNet++ based architecture to detect pedestrians
in dense 3D point clouds. The aim is to explore the potential contribution of
geometric information alone in pedestrian detection systems. We also present a
semi-automatic labeling system that transfers pedestrian and non-pedestrian
labels from RGB images onto the 3D domain. The fact that our datasets have RGB
registered with point clouds enables label transferring by back projection from
2D bounding boxes to point clouds, with only a light manual supervision to
validate results. We train PointNet++ with the geometry of the resulting 3D
labelled clusters. The evaluation confirms the effectiveness of the proposed
method, yielding precision and recall values around 98%.
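A minimal numpy sketch of the label-transfer step described above is given below: LIDAR points are projected into the RGB image with a pinhole model, and points falling inside a 2D pedestrian bounding box inherit its label. The camera intrinsics, the assumption that points are already expressed in the camera frame, and the box format are all illustrative simplifications; in practice a light manual supervision validates the results.

```python
import numpy as np

def project_to_image(points_cam, K):
    """Project Nx3 points (already in the camera frame, z > 0) with intrinsics K."""
    uvw = (K @ points_cam.T).T                 # homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]            # perspective division -> (u, v)

def label_points_from_box(points_cam, K, box):
    """Return a boolean mask of points whose projection lies inside a 2D box.

    box: (u_min, v_min, u_max, v_max) of a pedestrian bounding box in pixels.
    """
    in_front = points_cam[:, 2] > 0
    uv = project_to_image(points_cam, K)
    u_min, v_min, u_max, v_max = box
    inside = (uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) & \
             (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max)
    return in_front & inside

# Hypothetical intrinsics and pedestrian box.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.random.uniform([-5, -2, 2], [5, 2, 30], size=(1000, 3))
mask = label_points_from_box(pts, K, box=(300, 200, 360, 300))
print(mask.sum(), "points labelled as pedestrian")
```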
|
Being generated, the relic neutrino background contained equal fractions of
electron $\nu_e$, muon $\nu_\mu$, and tau $\nu_\tau$ neutrinos. We show that
the gravitational field of our Galaxy and other nearby cosmic objects changes
this composition near the Solar System, enriching it with the heaviest neutrino
$\nu_3$. This mass state is almost free of the electron component (only $\sim
2\%$ of $\nu_e$) and contains more muon component than the tau one. As a
result, the relic background becomes enriched with tau and particularly muon
neutrinos. The electron relic neutrinos are the rarest for a terrestrial
observer: instead of $1/3$, the relic background may contain only $\gtrsim
20\%$ of them.
|
If $(X, \le_X)$ is a partially ordered set satisfying certain necessary
conditions for $X$ to be order-isomorphic to the spectrum of a Noetherian
domain of dimension two, we describe a new poset $(\text{str } X,
\le_{\text{str } X})$ that completely determines $X$ up to isomorphism. The
order relation $\le_{\text{str } X}$ imposed on $\text{str } X$ is modeled
after R. Wiegand's well-known "P5" condition that can be used to determine when
a given partially ordered set $(U, \le_U)$ of a certain type is
order-isomorphic to $(\text{Spec } \mathbb Z[x], \subseteq).$
|
We demonstrate a new approach to supercontinuum generation and
carrier-envelope-offset detection in dispersion-engineered nanophotonic
waveguides based on saturated second-harmonic generation of femtosecond pulses.
In contrast with traditional approaches based on self-phase modulation, this
technique simultaneously broadens both harmonics by generating rapid amplitude
modulations of the field envelopes. The generated supercontinuum produces
coherent carrier-envelope-offset beatnotes in the overlap region that remain in
phase across hundreds of nanometers of bandwidth while requiring $<$10 picojoules
of pulse energy.
|
Egocentric segmentation has attracted recent interest in the computer vision
community due to its potential in Mixed Reality (MR) applications. While most
previous works have been focused on segmenting egocentric human body parts
(mainly hands), little attention has been given to egocentric objects. Due to
the lack of datasets of pixel-wise annotations of egocentric objects, in this
paper we contribute with a semantic-wise labeling of a subset of 2124 images
from the RGB-D THU-READ Dataset. We also report benchmarking results using
Thundernet, a real-time semantic segmentation network, that could allow future
integration with end-to-end MR applications.
|
The connection of single-phase microgrids (MG) and loads to three-phase MGs
creates power quality problems such as unbalanced voltage and voltage rise at
the point of common coupling (PCC) of the MGs. In this paper, a modified
reverse droop control (MRDC) scheme in the Energy Storage System (ESS) is
proposed to improve the three-phase PCC voltage quality in multi-microgrids
(MMG). The MRDC consists of a reactive power compensator (RPC) and a voltage
compensator. The controller regulates the reactive power and voltage unbalance
of the MMG by using the reactive power produced by the ESS. The effectiveness
of this proposed scheme is verified in real-time simulation using the Opal-RT
OP5600 real-time simulator. The voltage unbalance factor (VUF) at the PCC is
decreased from 3.6 percent to 0.25 percent, while the reactive power is reduced
significantly at the single-phase load.
|
Generative Adversarial Networks (GANs) are machine learning networks based
around creating synthetic data. Voice Conversion (VC) is a subset of voice
translation that involves translating the paralinguistic features of a source
speaker to a target speaker while preserving the linguistic information. The
aim of non-parallel conditional GANs for VC is to translate an acoustic speech
feature sequence from one domain to another without the use of paired data. In
the study reported here, we investigated the interpretability of
state-of-the-art implementations of non-parallel GANs in the domain of VC. We
show that the learned representations in the repeating layers of a particular
GAN architecture remain close to their original random initialised parameters,
demonstrating that it is the number of repeating layers that is more
responsible for the quality of the output. We also analysed the learned
representations of a model trained on one particular dataset when used during
transfer learning on another dataset. This showed extremely high levels of
similarity across the entire network. Together, these results provide new
insight into how the learned representations of deep generative networks change
during learning and the importance in the number of layers.
|
Time-resolved mapping of lattice dynamics in real- and momentum-space is
essential to understand better several ubiquitous phenomena such as heat
transport, displacive phase transition, thermal conductivity, and many more. In
this regard, time-resolved diffraction and microscopy methods are employed to
image the induced lattice dynamics within a pump-probe configuration. In this
work, we demonstrate that inelastic scattering methods, with the aid of
theoretical simulation, are competent to provide similar information as one
could obtain from the time-resolved diffraction and imaging measurements. To
illustrate the robustness of the proposed method, our simulated result of
lattice dynamics in germanium is in excellent agreement with the time-resolved
x-ray diffuse scattering measurement performed using x-ray free-electron laser.
For a given inelastic scattering data in energy and momentum space, the
proposed method is useful to image in-situ lattice dynamics under different
environmental conditions of temperature, pressure, and magnetic field.
Moreover, the technique will profoundly impact where time-resolved diffraction
within the pump-probe setup is not feasible, for instance, in inelastic neutron
scattering.
|
In clinical trials, there often exist multiple historical studies for the
same or related treatment investigated in the current trial. Incorporating
historical data in the analysis of the current study is of great importance, as
it can help to gain more information, improve efficiency, and provide a more
comprehensive evaluation of treatment. Enlightened by the unit information
prior (UIP) concept in the reference Bayesian test, we propose a new
informative prior called UIP from an information perspective that can
adaptively borrow information from multiple historical datasets. We consider
both binary and continuous data and also extend the new UIP methods to linear
regression settings. Extensive simulation studies demonstrate that our method
is comparable to other commonly used informative priors, while the
interpretation of UIP is intuitive and its implementation is relatively easy.
One distinctive feature of UIP is that its construction only requires summary
statistics commonly reported in the literature rather than the patient-level
data. By applying our UIP methods to phase III clinical trials for
investigating the efficacy of memantine in Alzheimer's disease, we illustrate
its ability of adaptively borrowing information from multiple historical
datasets in the real application.
|
The initial period of vaccination shows strong heterogeneity between
countries' vaccination rollouts, both in terms of the start of the
vaccination process and in the dynamics of the number of people that are
vaccinated. A predominant thesis in the ongoing debate on the drivers of this
observed heterogeneity is that a key determinant of the swift and extensive
vaccine rollout is state capacity. Here, we utilize two measures that quantify
different aspects of the state capacity: i) the external capacity (measured
through the soft power and the economic power of the country) and ii) the
internal capacity (measured via the country's government effectiveness) and
investigate their relationship with the coronavirus vaccination outcome in the
initial period (up to 30th January 2021). By using data on 189 countries and a
two-step Heckman approach, we find that the economic power of the country and
its soft power are robust determinants of whether a country has started with
the vaccination process. In addition, the government effectiveness is a key
factor that determines vaccine roll-out. Altogether, our findings are in line
with the hypothesis that state capacity determines the observed heterogeneity
between countries in the initial period of COVID-19 vaccines rollout.
|
Background: Due to the finite size of the development sample, predicted
probabilities from a risk prediction model are inevitably uncertain. We apply
Value of Information methodology to evaluate the decision-theoretic
implications of prediction uncertainty.
Methods: Adopting a Bayesian perspective, we extend the definition of the
Expected Value of Perfect Information (EVPI) from decision analysis to net
benefit calculations in risk prediction. In the context of model development,
EVPI is the expected gain in net benefit by using the correct predictions as
opposed to predictions from a proposed model. We suggest bootstrap methods for
sampling from the posterior distribution of predictions for EVPI calculation
using Monte Carlo simulations. In a case study, we used subsets of data of
various sizes from a clinical trial for predicting mortality after myocardial
infarction to show how EVPI changes with sample size.
Results: With a sample size of 1,000 and at the pre-specified threshold of 2%
on predicted risks, the gain in net benefit by using the proposed and the
correct models were 0.0006 and 0.0011, respectively, resulting in an EVPI of
0.0005 and a relative EVPI of 87%. EVPI was zero only at unrealistically high
thresholds (>85%). As expected, EVPI declined with larger samples. We summarize
an algorithm for incorporating EVPI calculations into the commonly used
bootstrap method for optimism correction.
Conclusion: Value of Information methods can be applied to explore
decision-theoretic consequences of uncertainty in risk prediction and can
complement inferential methods when developing risk prediction models. R code
for implementing this method is provided.
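A minimal sketch of the bootstrap-based EVPI computation described above is given below: for each bootstrap draw, treated as a draw from the posterior over the "correct" risks, the net benefit of acting on the draw's own predictions is compared with that of acting on the proposed model's predictions, and the average difference estimates the EVPI. The synthetic data, the logistic proposed model, and the use of simple case resampling as the posterior approximation are illustrative assumptions, not the paper's exact procedure or R code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic development data (hypothetical): one predictor, binary outcome.
n, z = 1000, 0.02                                   # sample size, decision threshold
x = rng.normal(size=(n, 1))
p_true = 1 / (1 + np.exp(-(-4 + 1.5 * x[:, 0])))    # low-prevalence outcome
y = rng.binomial(1, p_true)

def net_benefit(decision, risk, z):
    """Expected net benefit of a treat/no-treat rule under assumed true risks."""
    return np.mean(decision * (risk - (1 - risk) * z / (1 - z)))

proposed = LogisticRegression().fit(x, y)
pi_hat = proposed.predict_proba(x)[:, 1]

gains = []
for _ in range(200):                                 # bootstrap draws ~ posterior
    idx = rng.integers(0, n, n)
    fit_b = LogisticRegression().fit(x[idx], y[idx])
    pi_b = fit_b.predict_proba(x)[:, 1]              # "correct" risks for this draw
    nb_correct = net_benefit(pi_b >= z, pi_b, z)
    nb_proposed = net_benefit(pi_hat >= z, pi_b, z)
    gains.append(nb_correct - nb_proposed)

print("EVPI estimate:", np.mean(gains))
```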
|