In this paper we prove that the $r$-th ADO polynomial of a knot, for $r$ a power of a prime number, can be expanded in terms of Vassiliev invariants with values in $\mathbb{Z}$. This expansion, however, is neither unique nor easily computable. We can obtain a unique computable expansion, but the coefficients are then only $r$-adic topological Vassiliev invariants. To do so, we exploit the fact that the colored Jones polynomials can be decomposed into Vassiliev invariants, and we transpose this decomposition to ADO using the unified knot invariant, defined in arXiv:2003.09854, that recovers both ADO and the colored Jones polynomials. Finally, we prove some asymptotic behavior of the ADO polynomials modulo $r$ as $r$ goes to infinity.
|
The excess of $\gamma$ rays in the data measured by Fermi-LAT from the
Galactic center region is one of the most intriguing mysteries in Astroparticle
Physics. This Galactic center excess (GCE) has been measured with respect to different interstellar emission models (IEMs), source catalogs, data selections, and techniques. Although several proposed interpretations have appeared in the literature, there are no firm conclusions as to its origin. The main
difficulty in solving this puzzle lies in modeling a region of such complexity
and thus precisely measuring the characteristics of the GCE. In this paper, we
use 11 years of Fermi-LAT data, state-of-the-art IEMs, and the newest 4FGL
source catalog to provide precise measurements of the energy spectrum, spatial
morphology, position, and sphericity of the GCE. We find that the GCE has a
spectrum that peaks at a few GeV and is well fit by a log-parabola. The
normalization of the spectrum changes by roughly $60\%$ when using different
IEMs, data selections and analysis techniques. The spatial distribution of the
GCE is compatible with a dark matter (DM) template produced with a generalized
NFW density profile with slope $\gamma = 1.2-1.3$. No energy evolution is
measured for the GCE morphology between $0.6-30$ GeV at a level larger than
$10\%$ of the $\gamma$ average value, which is 1.25. The analysis of the GCE
modeled with a DM template divided into quadrants shows that the spectrum and
spatial morphology of the GCE are similar in different regions around the
Galactic center. Finally, the GCE centroid is compatible with the Galactic
center, with best-fit position between
$l=[-0.3^{\circ},0.0^{\circ}],b=[-0.1^{\circ},0.0^{\circ}]$, and it is
compatible with a spherically symmetric morphology. In particular, fitting the DM spatial profile with an ellipsoid gives a major-to-minor axis ratio between 0.8 and 1.2.
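For reference, the generalized NFW profile used for such DM templates takes the standard form below, where $r_s$ and $\rho_s$ are the scale radius and scale density and $\gamma$ is the inner slope measured above to be $1.2$-$1.3$ (a textbook formula, not quoted from the paper):

```latex
\rho_{\mathrm{gNFW}}(r) \;=\; \rho_s \left(\frac{r}{r_s}\right)^{-\gamma}
\left(1 + \frac{r}{r_s}\right)^{\gamma - 3}
```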
|
Recent observations of extrasolar gas giants suggest super-stellar C/O ratios
in planetary atmospheres, while interior models of observed extrasolar giant
planets additionally suggest high heavy element contents. Furthermore, recent
observations of protoplanetary disks revealed super-solar C/H ratios, which are
explained by inward drifting and evaporating pebbles, enhancing the volatile
content of the disk. We investigate how the inward drift and evaporation of volatile-rich pebbles influence the atmospheric C/O ratio and heavy element
content of giant planets growing by pebble and gas accretion. To achieve this
goal, we employ semi-analytical 1D models of protoplanetary disks, including the treatment of viscous evolution and heating, pebble drift, and simple
chemistry to simulate the growth of planets from planetary embryos to Jupiter
mass objects by accretion of pebbles and gas while they migrate through the
disk. Our simulations show that the composition of the planetary gas atmosphere
is dominated by the accretion of vapour, originating from inward drifting
evaporating pebbles. This process allows the giant planets to harbour large
heavy element contents. In addition, our model reveals that giant planets
originating further away from the central star have a higher C/O ratio on
average, due to the evaporation of methane-rich pebbles in the outer disk. These
planets can then also harbour super-solar C/O ratios, in line with exoplanet
observations. However, planets formed in the outer disk harbour a smaller heavy
element content, due to a smaller vapour enrichment of the outer disk. Our
model predicts that giant planets with a low/high atmospheric C/O ratio should harbour a high/low total heavy element content. We further conclude that the inclusion
of pebble evaporation at evaporation lines is a key ingredient to determine the
heavy element content and composition of giant planets.
|
We assume the anisotropic model of the Universe in the framework of varying speed of light $c$ and varying gravitational constant $G$ theories and study different types of singularities. For the singularity models, we write the scale factors in terms of cosmic time and find some conditions for possible singularities. For future singularities, we assume the forms of the varying speed of light and the varying gravitational constant. To regularize the big bang singularity, we assume two forms of scale factors: the sine model and the tangent model. For both models, we examine the validity of the null energy condition and the strong energy condition. Starting from the first law of thermodynamics, we study the thermodynamic behaviour of $n$ Universes (i.e., a Multiverse) for (i) varying $c$, (ii) varying $G$, and (iii) both varying $c$ and $G$ models. We find the total entropies for all the cases in the anisotropic Multiverse model. We also determine the nature of the Multiverse when the total entropy is constant.
|
A new federated learning (FL) framework enabled by large-scale wireless
connectivity is proposed for designing the autonomous controller of connected
and autonomous vehicles (CAVs). In this framework, the learning models used by
the controllers are collaboratively trained among a group of CAVs. To capture
the varying CAV participation in the FL training process and the diverse local
data quality among CAVs, a novel dynamic federated proximal (DFP) algorithm is
proposed that accounts for the mobility of CAVs, the wireless fading channels,
as well as the unbalanced and non-independent and identically distributed (non-IID) data
across CAVs. A rigorous convergence analysis is performed for the proposed
algorithm to identify how fast the CAVs converge to using the optimal
autonomous controller. In particular, the impacts of varying CAV participation
in the FL process and diverse CAV data quality on the convergence of the
proposed DFP algorithm are explicitly analyzed. Leveraging this analysis, an
incentive mechanism based on contract theory is designed to improve the FL
convergence speed. Simulation results using real vehicular data traces show
that the proposed DFP-based controller can accurately track the target CAV
speed over time and under different traffic scenarios. Moreover, the results
show that the proposed DFP algorithm has a much faster convergence compared to
popular FL algorithms such as federated averaging (FedAvg) and federated
proximal (FedProx). The results also validate the feasibility of the
contract-theoretic incentive mechanism and show that the proposed mechanism can
improve the convergence speed of the DFP algorithm by 40% compared to the
baselines.
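As a hedged illustration of the proximal mechanism that FedProx-style algorithms (including, per the abstract, the proposed DFP) build on, the sketch below shows a local client update penalized for drifting from the global model; the loss function, hyperparameters, and names are illustrative assumptions, not the paper's actual implementation:

```python
import torch

def local_proximal_update(model, global_model, data_loader, mu=0.1, lr=0.01, epochs=1):
    """One client's local training with a FedProx-style proximal term.

    Minimizes  F_k(w) + (mu/2) * ||w - w_global||^2,  which keeps the local
    model close to the current global model under non-IID data. (Sketch only;
    the paper's DFP algorithm additionally accounts for CAV mobility, fading
    channels, and varying participation.)
    """
    global_params = [p.detach().clone() for p in global_model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()  # e.g., a regression loss for a speed controller
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            # Proximal term: (mu/2) * ||w - w_global||^2
            prox = sum(((p - g) ** 2).sum()
                       for p, g in zip(model.parameters(), global_params))
            (loss + 0.5 * mu * prox).backward()
            opt.step()
    return model
```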
|
According to "Social Disorganization" theory, criminal activity increases if
the societal institutions that might be responsible for maintaining order are
weakened. Do large apartment buildings, which often have fairly transient
populations and low levels of community involvement, have disproportionately
high rates of crime? Do these rates differ between daytime and nighttime, depending on when residents are present at, or away from, their property? This study
examines four types of "acquisitive" crime in Milwaukee during 2014. Overall,
nighttime crimes are shown to be more dispersed than daytime crimes. A spatial
regression estimation finds that the density of multiunit housing is positively
related to all types of crime except burglaries, but not for all times of day.
Daytime robberies, in particular, increase as the density of multiunit housing
increases.
|
In this paper we present the theory of oscillation numbers and dual
oscillation numbers for continuous Lagrangian paths in $\mathbb{R}^{2n}$. Our
main results include a connection of the oscillation numbers of the given
Lagrangian path with the Lidskii angles of a special symplectic orthogonal
matrix. We also present Sturmian type comparison and separation theorems for
the difference of the oscillation numbers of two continuous Lagrangian paths.
These results, as well as the definition of the oscillation number itself, are
based on the comparative index theory (Elyseeva, 2009). The applications of
these results are directed to the theory of Maslov index of two continuous
Lagrangian paths. We derive a formula for the Maslov index via the Lidskii
angles of a special symplectic orthogonal matrix, and hence we express the
Maslov index as the oscillation number of a certain transformed Lagrangian
path. The results and methods are based on a generalization of the recently
introduced oscillation numbers and dual oscillation numbers for conjoined bases
of linear Hamiltonian systems (Elyseeva, 2019 and 2020) and on the connection
between the comparative index and Lidskii angles of symplectic matrices
(\v{S}epitka and \v{S}imon Hilscher, 2020).
|
We employ the periodic Anderson model with superconducting correlations in
the conduction band at half filling to study the behavior of the in-gap bands
in a heterostructure consisting of a molecular layer deposited on the surface
of a conventional superconductor. We use the dynamical mean-field theory to map
the lattice model onto the superconducting single-impurity model with
self-consistently determined bath and use the continuous-time hybridization
expansion (CT-HYB) quantum Monte Carlo and the iterative perturbation theory
(IPT) as solvers for the impurity problem. We present phase diagrams for the square and triangular lattices that both show two superconducting phases differing by
the sign of the induced pairing, in analogy to the $0$ and $\pi$ phases of the
superconducting single-impurity Anderson model, and discuss the evolution of the
spectral function in the vicinity of the transition. We also discuss the
failure of the IPT for superconducting models with a spinful ground state and the
behavior of the average expansion order of the CT-HYB simulation.
|
We consider the problem of service placement at the network edge, in which a
decision maker has to choose between $N$ services to host at the edge to
satisfy the demands of customers. Our goal is to design adaptive algorithms to
minimize the average service delivery latency for customers. We pose the
problem as a Markov decision process (MDP) in which the system state is given, for each service, by the number of customers currently waiting at the edge to obtain it. However, solving this $N$-services
MDP is computationally expensive due to the curse of dimensionality. To
overcome this challenge, we show that the optimal policy for a single-service
MDP has an appealing threshold structure, and, based on the theory of Whittle index policies, we explicitly derive the Whittle index for each service as a function of the number of customer requests.
Since request arrival and service delivery rates are usually unknown and
possibly time-varying, we then develop efficient learning-augmented algorithms
that fully utilize the structure of optimal policies with a low learning
regret. The first of these is UCB-Whittle, and relies upon the principle of
optimism in the face of uncertainty. The second algorithm, Q-learning-Whittle,
utilizes Q-learning iterations for each service by using a two time scale
stochastic approximation. We characterize the non-asymptotic performance of
UCB-Whittle by analyzing its learning regret, and also analyze the convergence
properties of Q-learning-Whittle. Simulation results show that the proposed
policies yield excellent empirical performance.
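As a hedged sketch of how a Whittle index policy operates (the paper's closed-form indices are not reproduced here; `whittle_index` below is a hypothetical placeholder the caller must supply):

```python
import numpy as np

def whittle_schedule(queue_lengths, whittle_index, num_slots):
    """Host the num_slots services with the largest Whittle indices.

    queue_lengths[k] is the number of customers currently waiting for
    service k; whittle_index(k, n) stands in for the closed-form index
    derived in the paper.
    """
    indices = np.array([whittle_index(k, n) for k, n in enumerate(queue_lengths)])
    return np.argsort(indices)[-num_slots:]  # services to place at the edge

# Toy usage with a made-up index that grows with the backlog:
chosen = whittle_schedule([3, 0, 7, 2], lambda k, n: n, num_slots=2)
print(chosen)  # the two most backlogged services
```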
|
This work presents a naive algorithm for parameter transfer between different architectures with a computationally cheap injection technique (which does not require data). The primary objective is to speed up the training of neural networks from scratch. It was found in this study that transferring knowledge from any architecture was superior to Kaiming and Xavier initialization. In conclusion, the method presented is found to converge faster, which makes it a drop-in replacement for classical initialization methods. The method involves two steps: 1) matching the layers of the pre-trained model with those of the target model; 2) injection, in which each tensor is transformed into the desired shape. This work also provides a comparison of similarity between the current SOTA architectures (ImageNet) by utilising the TLI (Transfer Learning by Injection) score.
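A hedged sketch of what such an injection step might look like: a source tensor is cropped or zero-padded along each dimension to fit the target shape (the paper's actual transformation may differ; this illustrates only the idea of data-free shape adaptation):

```python
import numpy as np

def inject(source, target_shape):
    """Fit a pre-trained tensor into a target shape by cropping/zero-padding.

    Illustrative assumption: the paper's injection may use a different
    transformation; this sketch only conveys shape adaptation without data.
    """
    out = np.zeros(target_shape, dtype=source.dtype)
    slices = tuple(slice(0, min(s, t)) for s, t in zip(source.shape, target_shape))
    out[slices] = source[slices]
    return out

# Example: transfer a 64x3x7x7 conv kernel into a 32x3x5x5 layer.
donor = np.random.randn(64, 3, 7, 7)
print(inject(donor, (32, 3, 5, 5)).shape)  # (32, 3, 5, 5)
```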
|
The majority of current approaches in autonomous driving rely on
High-Definition (HD) maps which detail the road geometry and surrounding area.
Yet, this reliance is one of the obstacles to mass deployment of autonomous
vehicles due to poor scalability of such prior maps. In this paper, we tackle
the problem of online road map extraction via leveraging the sensory system
aboard the vehicle itself. To this end, we design a structured model where a
graph representation of the road network is generated in a hierarchical fashion
within a fully convolutional network. The method is able to handle complex road
topology and does not require a user in the loop.
|
Understanding the critical condition and mechanism of the droplet wetting transition between the Cassie-Baxter state and the Wenzel state triggered by an
external electric field is of considerable importance because of its numerous
applications in industry and engineering. However, such a wetting transition on
a patterned surface is still not fully understood, e.g., the effects of
electro-wetting number, geometry of the patterned surfaces, and droplet volume
on the transition have not been systematically investigated. In this paper, we
propose a theoretical model for the Cassie-Baxter-Wenzel wetting transition triggered by applying an external voltage to a droplet placed on a micropillared surface or a porous substrate. It is found that the transition is
realized by lowering the energy barrier created by the intermediate composite
state considerably, which enables the droplet to cross the energy barrier and
complete the transition process. Our calculations also indicate that for fixed
droplet volume, the critical electrowetting number (voltage) will increase
(decrease) along with the surface roughness for a micro-pillar patterned
(porous) surface, and if the surface roughness is fixed, a small droplet tends
to ease the critical electrowetting condition for the transition. In addition, three-dimensional phase diagrams in terms of the electrowetting number, surface
roughness, and droplet volume are constructed to illustrate the
Cassie-Baxter-Wenzel wetting transition. Our theoretical model can be used to
explain the previous experimental results about the Cassie-Baxter-Wenzel
wetting transition reported in the literature.
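For context, the electrowetting number invoked above is conventionally defined as the ratio of electrostatic to capillary energies and enters the Lippmann-Young relation (a standard definition, not quoted from the paper), with $\varepsilon_0\varepsilon_d$ and $d$ the permittivity and thickness of the dielectric, $V$ the applied voltage, $\gamma_{lv}$ the liquid-vapor surface tension, and $\theta_Y$ the Young contact angle:

```latex
\eta = \frac{\varepsilon_0 \varepsilon_d V^2}{2 \gamma_{lv} d}, \qquad
\cos\theta_{\mathrm{eff}} = \cos\theta_Y + \eta
```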
|
Under unexpected conditions or scenarios, autonomous vehicles (AV) are more
likely to follow abnormal unplanned actions, due to the limited set of rules or
amount of experience they possess at that time. Enabling AV to measure the
degree to which their movements are novel in real time may help to decrease any
possible negative consequences. We propose a method based on the Local Outlier
Factor (LOF) algorithm to quantify this novelty measure. We extract features from the inertial measurement unit (IMU) sensor's readings, which capture the vehicle's motion. We follow a novelty detection approach in which the model is fitted using only the normal data. Using datasets obtained from real-world
vehicle missions, we demonstrate that the suggested metric can quantify to some
extent the degree of novelty. Finally, a performance evaluation of the model
confirms that our novelty metric can be practical.
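A hedged, minimal sketch of the described novelty-detection setup using scikit-learn's LOF in novelty mode (the feature extraction from IMU readings is application-specific and stubbed here with placeholder arrays):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Placeholder feature matrices; in the paper, rows would be features
# extracted from IMU readings (accelerations, angular rates, ...).
X_normal = np.random.randn(500, 6)   # data from normal driving only
X_test = np.random.randn(20, 6)      # new maneuvers to score

# novelty=True fits on normal data only, then scores unseen samples.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
lof.fit(X_normal)

novelty_score = -lof.score_samples(X_test)  # higher = more novel
print(novelty_score)
```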
|
The minimal integral Mahler measure of a number field $K$,
$M(\mathcal{O}_K)$, is the minimal Mahler measure of a non-torsion primitive
element of $\mathcal{O}_K$. Upper and lower bounds, which depend on the
discriminant, are known. We show that for cubics, the lower bounds are sharp with respect to their growth as a function of the discriminant. We construct an
algorithm to compute $M(\mathcal{O}_K)$ for all cubics with absolute value of
the discriminant bounded by $N$.
|
Coronavirus (COVID-19) is a viral disease caused by severe acute respiratory
syndrome coronavirus 2 (SARS-CoV-2). The spread of COVID-19 seems to have a
detrimental effect on the global economy and health. A positive chest X-ray of
infected patients is a crucial step in the battle against COVID-19. Early
results suggest that abnormalities exist in chest X-rays of patients suggestive
of COVID-19. This has led to the introduction of a variety of deep learning
systems, and studies have shown that the accuracy of COVID-19 patient detection through the use of chest X-rays is highly promising. Deep learning networks
like convolutional neural networks (CNNs) need a substantial amount of training
data. Because the outbreak is recent, it is difficult to gather a significant
number of radiographic images in such a short time. Therefore, in this
research, we present a method to generate synthetic chest X-ray (CXR) images by
developing an Auxiliary Classifier Generative Adversarial Network (ACGAN) based
model called CovidGAN. In addition, we demonstrate that the synthetic images
produced from CovidGAN can be utilized to enhance the performance of CNN for
COVID-19 detection. Classification using CNN alone yielded 85% accuracy. By
adding synthetic images produced by CovidGAN, the accuracy increased to 95%. We
hope this method will speed up COVID-19 detection and lead to more robust
systems of radiology.
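A hedged, minimal Keras sketch of the ACGAN idea behind CovidGAN: a generator conditioned on a class label, whose synthetic CXR images can be mixed into CNN training data (layer sizes, the two assumed classes, and the 112x112 resolution are illustrative assumptions, not the paper's configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM, NUM_CLASSES = 100, 2  # e.g., normal vs. COVID-19 (assumed labels)

def build_generator():
    noise = layers.Input(shape=(LATENT_DIM,))
    label = layers.Input(shape=(1,), dtype="int32")
    # Condition the generator on the class label (the "AC" in ACGAN).
    emb = layers.Flatten()(layers.Embedding(NUM_CLASSES, LATENT_DIM)(label))
    x = layers.multiply([noise, emb])
    x = layers.Dense(7 * 7 * 128, activation="relu")(x)
    x = layers.Reshape((7, 7, 128))(x)
    for filters in (128, 64, 32, 16):  # upsample 7 -> 112
        x = layers.Conv2DTranspose(filters, 4, strides=2, padding="same",
                                   activation="relu")(x)
    img = layers.Conv2D(1, 3, padding="same", activation="tanh")(x)
    return tf.keras.Model([noise, label], img)

gen = build_generator()
print(gen.output_shape)  # (None, 112, 112, 1): grayscale synthetic CXRs
```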
|
In the past decades, continuous Doppler radar sensor-based bio-signal detection has attracted considerable research interest. A typical example is Doppler heartbeat detection. While significant progress has been achieved, reliable, time-domain accurate demodulation of bio-signals in the presence of unavoidable DC offsets remains a technical challenge. Aiming to overcome this difficulty, we propose in this paper a novel demodulation algorithm that does not need to trace and eliminate dynamic DC offsets, based on approximating segmented arcs in a quadrature constellation of sampled data by directional chords. Assisted by principal component analysis, such chords and their directions can be deterministically determined. Simulations and experimental validations show full recovery of micron-level pendulum movements and strongly noised human heartbeats, verifying the effectiveness and accuracy of the proposed approach.
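A hedged numpy sketch of the PCA step: the dominant eigenvector of the centered I/Q samples' covariance gives the chord direction of an arc segment (an illustration of the stated principle, not the authors' full demodulation chain):

```python
import numpy as np

def chord_direction(i_samples, q_samples):
    """Estimate the direction of the chord approximating one arc segment.

    Stacks the I/Q samples of the segment, removes the mean (so dynamic
    DC offsets need not be traced explicitly), and returns the principal
    eigenvector of the 2x2 covariance matrix.
    """
    pts = np.column_stack([i_samples, q_samples])
    pts = pts - pts.mean(axis=0)
    cov = pts.T @ pts / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, -1]                   # unit vector along the chord

# Toy arc segment on the unit circle:
theta = np.linspace(0.1, 0.5, 200)
print(chord_direction(np.cos(theta), np.sin(theta)))
```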
|
Scene text retrieval aims to localize and search all text instances from an
image gallery, which are the same or similar to a given query text. Such a task
is usually realized by matching the query text to words recognized by an end-to-end scene text spotter. In this paper, we address this problem by
directly learning a cross-modal similarity between a query text and each text
instance from natural images. Specifically, we establish an end-to-end
trainable network, jointly optimizing the procedures of scene text detection
and cross-modal similarity learning. In this way, scene text retrieval can be
simply performed by ranking the detected text instances with the learned
similarity. Experiments on three benchmark datasets demonstrate our method
consistently outperforms the state-of-the-art scene text spotting/retrieval
approaches. In particular, the proposed framework of joint detection and
similarity learning achieves significantly better performance than separated
methods. Code is available at: https://github.com/lanfeng4659/STR-TDSL.
|
Glass-like objects such as windows, bottles, and mirrors exist widely in the
real world. Sensing these objects has many applications, including robot
navigation and grasping. However, this task is very challenging due to the
arbitrary scenes behind glass-like objects. This paper aims to solve the
glass-like object segmentation problem via enhanced boundary learning. In
particular, we first propose a novel refined differential module that outputs
finer boundary cues. We then introduce an edge-aware point-based graph
convolution network module to model the global shape along the boundary. We use
these two modules to design a decoder that generates accurate and clean
segmentation results, especially on the object contours. Both modules are
lightweight and effective: they can be embedded into various segmentation
models. In extensive experiments on three recent glass-like object segmentation
datasets, including Trans10k, MSD, and GDD, our approach establishes new
state-of-the-art results. We also illustrate the strong generalization
properties of our method on three generic segmentation datasets, including
Cityscapes, BDD, and COCO Stuff. Code and models are available at
\url{https://github.com/hehao13/EBLNet}.
|
Recent advances in diffusion models incorporate stochastic differential equations (SDEs), which bring state-of-the-art performance on image generation tasks. This paper improves such diffusion models by analyzing the model at zero diffusion time. In real datasets, the score function diverges as the diffusion time ($t$) decreases to zero, and this observation leads to the argument that score estimation fails at $t=0$ for any neural network structure. Subsequently, we introduce the Unbounded Diffusion Model (UDM), which resolves the score divergence problem with an easily applicable modification to any diffusion model. Additionally, we introduce a new SDE that overcomes the theoretical and practical limitations of the Variance Exploding SDE. On top of that, the introduced Soft Truncation method improves the sample quality by mitigating the loss-scale issue that arises at $t=0$. We further provide a theoretical result for the proposed method to uncover the underlying mechanism of diffusion models.
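To see why the score diverges at small diffusion times, consider the usual Gaussian perturbation kernel $p_t(\mathbf{x} \mid \mathbf{x}_0) = \mathcal{N}(\mathbf{x}; \mathbf{x}_0, \sigma^2(t)\mathbf{I})$ with $\sigma(t) \to 0$ as $t \to 0$ (a standard illustration, not the paper's exact notation):

```latex
\nabla_{\mathbf{x}} \log p_t(\mathbf{x} \mid \mathbf{x}_0)
  = -\frac{\mathbf{x} - \mathbf{x}_0}{\sigma^2(t)},
```

whose magnitude blows up as $t \to 0$, so no bounded network output can match the target score there.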
|
This paper proposes a dual-stage, low complexity, and reconfigurable
technique to enhance the speech contaminated by various types of noise sources.
Driven by the input data and audio content, the proposed dual-stage speech enhancement approach performs coarse and fine processing in the first and second stages, respectively. In this paper, we demonstrate that the proposed
speech enhancement solution significantly enhances the metrics of 3-fold
QUality Evaluation of Speech in Telecommunication (3QUEST) consisting of speech
mean-opinion-score (SMOS) and noise MOS (NMOS) for near-field and far-field
applications. Moreover, the proposed speech enhancement approach greatly
improves both the signal-to-noise ratio (SNR) and subjective listening
experience. By comparison, traditional speech enhancement methods reduce SMOS even though they increase NMOS and SNR. In addition, the proposed speech
enhancement scheme can be easily adopted in both capture path and speech render
path for speech communication and conferencing systems, and voice-trigger
applications.
|
An isolated positive wedge disclination deforms an initially flat elastic
sheet into a perfect cone when the sheet is of infinite extent and is
elastically inextensible. The latter requires the elastic stretching strains to
be vanishingly small. In this paper, rigorous analytical and numerical results
are obtained for the disclination induced deformed shape and stress field of a
bounded F{\"o}ppl-von K{\'a}rm{\'a}n elastic sheet with finite extensibility,
while emphasising the deviations from the perfect cone solution. In particular,
the Gaussian curvature field is no longer localised as a Dirac singularity at
the defect location whenever elastic extensibility is allowed and is
necessarily negative in large regions away from the defect. The stress field,
similarly, has no Dirac singularity in the presence of elastic extensibility.
However, with increasing Young's modulus of the sheet, while keeping the
bending modulus and the domain size fixed, both of these fields tend to develop
a Dirac singularity. Noticeably, in this limiting behaviour, inextensibility
eludes the bounded elastic sheet due to persisting regions of non-trivial
Gaussian curvature away from the defect. Other results in the paper include
studying the effect of specific boundary conditions (free, simply supported, or
partially clamped) on the Gaussian curvature field away from the defect and on
the buckling transition from the flat to a conical solution.
|
The aim of this work is to present the optimization of the gate trench module
for use in vertical GaN devices in terms of cleaning process of the etched
surface of the gate trench, thickness of gate dielectric and magnesium
concentration of the p-GaN layer. The analysis was carried out by comparing the main DC parameters of devices that differ in the surface cleaning process of the gate trench, the gate dielectric thickness, and the body layer doping. On the basis
of experimental results, we report that: (i) a good cleaning process of the
etched GaN surface of the gate trench is a key factor to enhance the device
performance, (ii) a gate dielectric >35-nm SiO2 results in a narrow
distribution for DC characteristics, (iii) lowering the p-doping in the body
layer improves the ON-resistance (RON). Gate capacitance measurements are
performed to further confirm the results. Hypotheses on dielectric
trapping/detrapping mechanisms under positive and negative gate bias are
reported.
|
This paper aims to derive a definition of complexity for a dynamic spherical
system in the background of self-interacting Brans-Dicke gravity. We measure
complexity of the structure in terms of inhomogeneous energy density,
anisotropic pressure and massive scalar field. For this purpose, we formulate
structure scalars by orthogonally splitting the Riemann tensor. We show that
self-gravitating models collapsing homologously follow the simplest mode of
evolution. Furthermore, we demonstrate the effect of scalar field on the
complexity and evolution of non-dissipative as well as dissipative systems. The
criteria under which the system deviates from the initial state of zero
complexity are also discussed. It is concluded that the complexity of the sphere
increases in self-interacting Brans-Dicke gravity because the homologous model
is not shear-free.
|
The strut-based injector has been found to be one of the most promising injector designs for supersonic combustors, offering enhanced mixing of fuel and air. The mixing and flow-field characteristics of a straight strut (SS) and a tapered strut (TS), with fixed ramp angle and height at a freestream Mach number of 2 in conjunction with fuel injection at Mach 2.3, have been investigated numerically and are reported. In the present investigation, hydrogen (H2) and ethylene (C2H4) are injected into the oncoming supersonic flow from the back of the strut, where the jet-to-freestream momentum ratio is maintained at 0.79 and 0.69 for H2 and C2H4, respectively. The predicted wall static pressure and species mole fractions at various downstream locations are compared with the experimental data for the TS case with a 0.6 mm jet diameter and found to be in good agreement. Further, the effect of jet diameter and strut geometry on the near-field mixing in the strut-ramp configuration is discussed for both fuels. The numerical results are assessed based on various parameters for the performance evaluation of the different strut-ramp configurations. The SS configuration is found to be the optimum candidate for both injectants; it is also observed that for a larger jet diameter, a longer combustor is required to achieve satisfactory near-field mixing.
|
Sequence alignment supports numerous tasks in bioinformatics, natural
language processing, pattern recognition, social sciences, and other fields.
While the alignment of two sequences may be performed swiftly in many
applications, the simultaneous alignment of multiple sequences proved to be
naturally more intricate. Although most multiple sequence alignment (MSA)
formulations are NP-hard, several approaches have been developed, as they can
outperform pairwise alignment methods or are necessary for some applications.
Taking into account not only similarities but also the lengths of the
compared sequences (i.e. normalization) can provide better alignment results
than either unnormalized or post-normalized approaches. While some normalized
methods have been developed for pairwise sequence alignment, none have been
proposed for MSA. This work is a first effort towards the development of
normalized methods for MSA.
We discuss multiple aspects of normalized multiple sequence alignment (NMSA).
We define three new criteria for computing normalized scores when aligning
multiple sequences, show their NP-hardness, and give exact algorithms for solving the NMSA under those criteria. In addition, we provide approximation algorithms
for MSA and NMSA for some classes of scoring matrices.
|
Reasoning is one of the major challenges of Human-like AI and has recently
attracted intensive attention from natural language processing (NLP)
researchers. However, cross-modal reasoning needs further study. For cross-modal reasoning, we observe that most methods fall into shallow feature matching without in-depth, human-like reasoning. The reason lies in the fact that existing cross-modal tasks directly ask questions about an image, whereas human reasoning in real scenes is often conducted under specific background information, a process studied by the ABC theory in social psychology. We propose a shared task named "Premise-based Multimodal Reasoning" (PMR), which requires
participating models to reason after establishing a profound understanding of
background information. We believe that the proposed PMR task will contribute to, and help shed light on, human-like in-depth reasoning.
|
Erd\H{o}s, Harary, and Tutte defined the dimension of a graph $G$ as the
smallest natural number $n$ such that $G$ can be embedded in $\mathbb{R}^n$
with each edge a straight line segment of length 1.
Since the proposal of this definition, little has been published on how to
compute the exact dimension of graphs and almost nothing has been published on
graphs that are minor minimal with respect to dimension. This paper develops
both of these areas. In particular, it (1) establishes certain conditions under
which computing the dimension of graph sums is easy and (2) constructs three
infinite classes of graphs that are minor minimal with respect to their
dimension.
|
(1) If $R$ is an affine algebra of dimension $d\geq 4$ over
$\overline{\mathbb{F}}_{p}$ with $p>3$, then the group structure on ${\rm
Um}_d(R)/{\rm E}_d(R)$ is nice. (2) If $R$ is a commutative noetherian ring of
dimension $d\geq 2$ such that ${\rm E}_{d+1}(R)$ acts transitively on ${\rm
Um}_{d+1}(R),$ then the group structure on ${\rm Um}_{d+1}(R[X])/{\rm
E}_{d+1}(R[X])$ is nice.
|
Transformers have shown improved performance when compared to previous
architectures for sequence processing such as RNNs. Despite these sizeable performance gains, as has recently been noted, the model is computationally expensive to train and has a high parameter budget. In light of this, we
explore parameter-sharing methods in Transformers with a specific focus on
generative models. We perform an analysis of different parameter
sharing/reduction methods and develop the Subformer. Our model combines
sandwich-style parameter sharing, which overcomes naive cross-layer parameter
sharing in generative models, and self-attentive embedding factorization
(SAFE). Experiments on machine translation, abstractive summarization and
language modeling show that the Subformer can outperform the Transformer even
when using significantly fewer parameters.
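A hedged PyTorch sketch of sandwich-style parameter sharing as commonly described, where the first and last layers keep their own parameters while every middle layer reuses one shared set (hyperparameters and module names are illustrative assumptions, not the Subformer's exact configuration):

```python
import torch.nn as nn

def sandwich_encoder(d_model=512, nhead=8, num_layers=6):
    """Stack of Transformer layers with sandwich-style weight sharing.

    Only the first and last layers have unique parameters; every middle
    layer reuses a single shared module, shrinking the parameter budget.
    (Sketch under stated assumptions; not the paper's exact scheme.)
    """
    first = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
    shared = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
    last = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
    middle = [shared] * (num_layers - 2)  # same module, reused by reference
    return nn.ModuleList([first, *middle, last])

layers = sandwich_encoder()
n_params = sum(p.numel() for p in layers.parameters())  # parameters() dedupes
print(len(layers), "layers,", n_params, "unique parameters (3 layers' worth)")
```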
|
Accurate short range weather forecasting has significant implications for
various sectors. Machine learning based approaches, e.g., deep learning, have
gained popularity in this domain where the existing numerical weather
prediction (NWP) models still have modest skill after a few days. Here we use a
ConvLSTM network to develop a deep learning model for precipitation
forecasting. The crux of the idea is to develop a forecasting model which
involves convolution based feature selection and uses long term memory in the
meteorological fields in conjunction with gradient based learning algorithm.
Prior to using the input data, we explore various techniques to overcome
dataset difficulties. We follow a strategic approach to deal with missing values and discuss the model's fidelity in capturing realistic precipitation. The model resolution used is 25 km. A comparison between 5 years of predicted data and the corresponding observational records at a 2-day lead time shows correlation coefficients of 0.67 and 0.42 for lead days 1 and 2, respectively. The patterns indicate higher correlation over the Western Ghats and the Monsoon trough region (0.8 and 0.6 for lead days 1 and 2, respectively). Further, the
model performance is evaluated based on skill scores, Mean Square Error,
correlation coefficient and ROC curves. This study demonstrates that the
adopted deep learning approach, based only on a single precipitation variable, has reasonable skill in the short range. Incorporating multivariable-based deep learning has the potential to match or even improve upon the short-range precipitation forecasts based on state-of-the-art NWP models.
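A hedged Keras sketch of a ConvLSTM precipitation model of the kind described (input sequence length, grid size, depth, and filter counts are illustrative assumptions; the paper's architecture may differ):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Assumed shapes: sequences of 10 past precipitation frames on a
# 64x64 grid (one channel), predicting the next frame.
model = tf.keras.Sequential([
    layers.Input(shape=(10, 64, 64, 1)),
    layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                      return_sequences=True),   # convolutional gating + memory
    layers.BatchNormalization(),
    layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                      return_sequences=False),  # collapse the time dimension
    layers.Conv2D(1, kernel_size=3, padding="same", activation="relu"),
])
model.compile(optimizer="adam", loss="mse")  # gradient-based learning
model.summary()
```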
|
We investigate $(0,1)$-matrices that are {\em convex}, which means that the
ones are consecutive in every row and column. These matrices occur in discrete
tomography. The notion of ranked essential sets, known for permutation matrices, is extended to convex matrices. We show a number of results for the class $\mathcal{C}(R,S)$ of convex matrices with given row and column sum vectors $R$ and $S$. Also, it is shown that the ranked essential set uniquely determines a matrix in $\mathcal{C}(R,S)$.
|
Identifying objects in an image and their mutual relationships as a scene
graph leads to a deep understanding of image content. Despite the recent
advancement in deep learning, the detection and labeling of visual object
relationships remain a challenging task. This work proposes a novel local-context-aware architecture named the relation transformer, which exploits complex global object-to-object and object-to-edge (relation) interactions.
Our hierarchical multi-head attention-based approach efficiently captures
contextual dependencies between objects and predicts their relationships. In
comparison to state-of-the-art approaches, we have achieved an overall mean
\textbf{4.85\%} improvement and a new benchmark across all the scene graph
generation tasks on the Visual Genome dataset.
|
Quantum imaginary time evolution is a powerful algorithm to prepare ground
states and thermal states on near-term quantum devices. However, algorithmic
errors induced by Trotterization and local approximation severely hinder its
performance. Here we propose a deep-reinforcement-learning-based method to
steer the evolution and mitigate these errors. In our scheme, the well-trained
agent can find a subtle evolution path along which most algorithmic errors cancel out, significantly enhancing the recovered fidelity. We verify the validity
of the method with the transverse-field Ising model and graph maximum cut
problem. Numerical calculations and experiments on a nuclear magnetic resonance quantum computer illustrate its efficacy. The philosophy of our method,
eliminating errors with errors, sheds new light on error reduction on near-term
quantum devices.
|
Application developers, in our experience, tend to hesitate when dealing with
linked data technologies. To reduce their initial hurdle and enable rapid
prototyping, we propose in this paper a framework for building linked data
applications. Our approach especially considers the participation of web
developers and non-technical users without much prior knowledge about linked
data concepts. Web developers are supported with bidirectional RDF to JSON
conversions and suitable CRUD endpoints. Non-technical users can browse
websites generated from JSON data by means of a template language. A
prototypical open source implementation demonstrates its capabilities.
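A hedged sketch of the kind of RDF-to-JSON round trip such a framework would expose to web developers, using rdflib's JSON-LD support (the vocabulary and data are invented for illustration; the framework's actual endpoints are not described in the abstract):

```python
from rdflib import Graph  # requires rdflib >= 6 for built-in JSON-LD

# A tiny RDF document of the sort a generated website could be fed from.
TURTLE = """
@prefix schema: <http://schema.org/> .
<http://example.org/alice> a schema:Person ;
    schema:name "Alice" .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

# RDF -> JSON-LD: what a CRUD endpoint could return to a web developer.
json_ld = g.serialize(format="json-ld")
print(json_ld)

# JSON-LD -> RDF: the reverse direction for updates sent back as JSON.
g2 = Graph()
g2.parse(data=json_ld, format="json-ld")
print(len(g2), "triples round-tripped")
```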
|
ccc-Autoevolutes are closed constant curvature space curves which are their
own evolutes. A modified Frenet equation produces constant curvature curves
such that the curve on $[0, \pi]$ is congruent to the evolute on $[\pi, 2\pi]$
and vice versa. Closed curves are then congruent to their evolutes. If the
ruled surface spanned by the principal normals between curve and evolute is a
M\"obius band then the curve is its own evolute. We use symmetries to construct
closed curves by solving 2-parameter problems numerically. The smallest
autoevolute which we found is a trefoil knot parametrized by three periods $[0, 6\pi]$. Our smallest closed solution of the ODE is parametrized by two periods.
|
We study a correspondence between the multifractal model of turbulence and
the Navier-Stokes equations in $d$ spatial dimensions by comparing their
respective dissipation length scales. In Kolmogorov's 1941 theory the key
parameter $h$, which is an exponent in the Navier-Stokes invariance scaling, is
fixed at $h=1/3$ but is allowed a spectrum of values in multifractal theory.
Taking into account all derivatives of the Navier-Stokes equations, it is found
that for this correspondence to hold the multifractal spectrum $C(h)$ must be
bounded from below such that $C(h) \geq 1-3h$, which is consistent with the
four-fifths law. Moreover, $h$ must also be bounded from below such that $h
\geq (1-d)/3$. When $d=3$ the allowed range of $h$ is given by $h \geq -2/3$
thereby bounding $h$ away from $h=-1$. The implications of this are discussed.
|
We consider the stochastic scheduling problem of minimizing the expected
makespan on $m$ parallel identical machines. While the (adaptive) list
scheduling policy achieves an approximation ratio of $2$, any (non-adaptive)
fixed assignment policy has performance guarantee $\Omega\left(\frac{\log
m}{\log \log m}\right)$. Although the performance of the latter class of policies is worse, there are applications in which non-adaptive policies are
desired. In this work, we introduce the two classes of $\delta$-delay and
$\tau$-shift policies whose degree of adaptivity can be controlled by a
parameter. We present a policy - belonging to both classes - which is an
$\mathcal{O}(\log \log m)$-approximation for reasonably bounded parameters. In
other words, an exponential improvement on the performance of any fixed
assignment policy can be achieved when allowing a small degree of adaptivity.
Moreover, we provide a matching lower bound for any $\delta$-delay and
$\tau$-shift policy when both parameters, respectively, are in the order of the
expected makespan of an optimal non-anticipatory policy.
|
Decision trees are among the most popular machine learning models and are
used routinely in applications ranging from revenue management and medicine to
bioinformatics. In this paper, we consider the problem of learning optimal
binary classification trees. Literature on the topic has burgeoned in recent
years, motivated both by the empirical suboptimality of heuristic approaches
and the tremendous improvements in mixed-integer optimization (MIO) technology.
Yet, existing MIO-based approaches from the literature do not leverage the
power of MIO to its full extent: they rely on weak formulations, resulting in
slow convergence and large optimality gaps. To fill this gap in the literature,
we propose an intuitive flow-based MIO formulation for learning optimal binary
classification trees. Our formulation can accommodate side constraints to
enable the design of interpretable and fair decision trees. Moreover, we show
that our formulation has a stronger linear optimization relaxation than
existing methods. We exploit the decomposable structure of our formulation and
max-flow/min-cut duality to derive a Benders' decomposition method to speed up computation. We propose a tailored procedure for solving each decomposed
subproblem that provably generates facets of the feasible set of the MIO as
constraints to add to the main problem. We conduct extensive computational
experiments on standard benchmark datasets on which we show that our proposed
approaches are 31 times faster than state-of-the-art MIO-based techniques and improve out-of-sample performance by up to 8%.
|
Polymers are among the most important materials in modern society, being found in almost every activity of our daily life. Understanding their chemical and physical properties leads to improvements in their usage. Correlation functions are among the most important quantities for understanding a physical system: the characteristic way they behave describes how the system fluctuates, and much of the progress achieved in understanding complex systems has been due to their study. Of particular interest in polymer science are the spatial correlations, which describe a polymer's mechanical behavior. In this work I study the stiffness of a polymer immersed in a magnetic medium and trapped in optical tweezers. Using Monte Carlo simulations, the correlation function along the chain and the force in the tweezers are obtained as functions of temperature and of the density of magnetic particles. The results show that the correlation decay has two regimes: an initial very fast decay, on the order of the monomer-monomer spacing, and a power law in the long-distance regime. The power-law exponent has a minimum at a temperature $T_{min}$ for any nonzero density of magnetic particles, indicating that the system is more correlated in this region. Using a formula for the persistence length derived from WLC theory, one observes that it has a maximum at the same temperature. These results suggest that the correlations in the system may be a combination of exponential and power-law behavior.
|
This paper proposes a novel evolutionary algorithm called Epistocracy which
incorporates human socio-political behavior and intelligence to solve complex
optimization problems. The inspiration of the Epistocracy algorithm originates
from a political regime where educated people have more voting power than the
uneducated or less educated. The algorithm is a self-adaptive, multi-population optimizer in which the evolution process takes place in
parallel for many populations led by a council of leaders. To avoid stagnation
in poor local optima and to prevent a premature convergence, the algorithm
employs multiple mechanisms such as dynamic and adaptive leadership based on
gravitational force, dynamic population allocation and diversification,
variance-based step-size determination, and regression-based leadership
adjustment. The algorithm uses a stratified sampling method called Latin
Hypercube Sampling (LHS) to distribute the initial population more evenly for
exploration of the search space and exploitation of the accumulated knowledge.
To investigate the performance and evaluate the reliability of the algorithm,
we have used a set of multimodal benchmark functions, and then applied the
algorithm to the MNIST dataset to further verify the accuracy, scalability, and
robustness of the algorithm. Experimental results show that the Epistocracy
algorithm outperforms the tested state-of-the-art evolutionary and swarm
intelligence algorithms in terms of performance, precision, and convergence.
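A hedged sketch of the Latin Hypercube Sampling initialization step using SciPy's QMC module (the dimension, population size, and bounds are illustrative assumptions):

```python
import numpy as np
from scipy.stats import qmc

dim, pop_size = 10, 100            # assumed problem dimension / population
l_bounds, u_bounds = -5.0 * np.ones(dim), 5.0 * np.ones(dim)

# LHS stratifies each coordinate, so the initial population covers the
# search space more evenly than plain uniform sampling.
sampler = qmc.LatinHypercube(d=dim, seed=0)
unit_population = sampler.random(n=pop_size)        # points in [0, 1)^dim
population = qmc.scale(unit_population, l_bounds, u_bounds)
print(population.shape)  # (100, 10) initial candidate solutions
```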
|
We formalize the notion of vector semi-inner products and introduce a class
of vector seminorms which are built from these maps. The classical Pythagorean
theorem and parallelogram law are then generalized to vector seminorms that
have a geometric mean closed vector lattice for codomain. In the special case
that this codomain is a square root closed, semiprime $f$-algebra, we provide a
sharpening of the triangle inequality as well as a condition for equality.
|
Reconfigurable Intelligent Surface (RIS) composed of programmable actuators
is a promising technology, thanks to its capability in manipulating
Electromagnetic (EM) wavefronts. In particular, RISs have the potential to
provide significant performance improvements for wireless networks. However, to
do so, a proper configuration of the reflection coefficients of the unit cells
in the RIS is required. RISs are sophisticated platforms, so their design and fabrication complexity might be uneconomical for single-user scenarios, while a RIS that can serve multiple users justifies the costs. For the first time, we propose an efficient reconfiguration technique providing a multi-beam radiation pattern. Thanks to the analytical model, the reconfiguration profile is obtained directly, in contrast to time-consuming optimization techniques. The outcome can pave the way for commercial use of multi-user communication beyond 5G networks. We analyze the performance of our proposed RIS technology for indoor
and outdoor scenarios, given the broadcast mode of operation. The aforesaid
scenarios encompass some of the most challenging scenarios that wireless
networks encounter. We show that our proposed technique provides sufficient gains in the observed channel capacity when the users are close to the RIS in
the indoor office environment scenario. Further, we report more than one order
of magnitude increase in the system throughput given the outdoor environment.
The results prove that a RIS with the ability to communicate with multiple users can empower wireless networks with great capacity.
|
We develop a system-level design for the provision of Ancillary Service (AS)
for control of electric power grids by in-vehicle batteries, suitably applied
to Electric Vehicles (EVs) operated in a sharing service. The provision is
called in this paper the multi-objective AS: primary frequency control in a
transmission grid and voltage amplitude regulation in a distribution grid
connected to EVs. The design is based on the ordinary differential equation
model of distribution voltage, which has been recently introduced as a new
physics-based model, and is utilized in this paper for assessing and regulating
the impact of spatiotemporal charging/discharging of a large population of EVs on a distribution grid. The effectiveness of the autonomous V2G design is evaluated
with numerical simulations of realistic models for transmission and
distribution grids with synthetic operation data on EVs in a sharing service.
In addition, we present a hardware-in-the-loop test for evaluating its
feasibility in a situation where inevitable latency is involved due to the power, control, and communication equipment.
|
We present a reanalysis of GW151226, the second binary black hole merger
discovered by the LIGO-Virgo Collaboration. Previous analysis showed that the
best-fit waveform for this event corresponded to the merger of a $\sim 14 \,
M_\odot$ black hole with a $\sim 7.5 \, M_\odot$ companion. In this work, we
perform parameter estimation using a waveform model that includes the effects
of orbital precession and higher-order radiative multipoles, and find that the
mass and spin parameters of GW151226 have bimodal posterior distributions. The
two modes are separated in mass ratio, $q$: the high-$q$ mode ($0.4 \lesssim q
< 1$) is consistent with the results reported in the literature. On the other
hand, the low-$q$ mode ($q \lesssim 0.4$), which describes a binary with
component masses of $\sim 29 \, M_\odot$ and $\sim 4.3 \, M_\odot$, is new. The
low-$q$ mode has several interesting properties: (a) the secondary black hole
mass may fall in the lower mass gap of the astrophysical black hole population; and
(b) orbital precession is driven by the primary black hole spin, which has a
dimensionless magnitude as large as $\sim 0.88$ and is tilted away from the
orbital angular momentum at an angle of $\sim 47^\circ$. The new low-$q$ mode
has a log likelihood that is about six points higher than that of the high-$q$
mode, and can therefore affect the astrophysical interpretation of GW151226.
Crucially, we show that the low-$q$ mode disappears if we neglect either higher
multipoles or orbital precession in the parameter estimation. More generally,
this work highlights how incorporating additional physical effects into waveform models used in parameter estimation can alter the interpretation of
gravitational-wave sources.
|
Let $H$ be a tree. It was proved by R\"odl that graphs that do not contain $H$ as an induced subgraph, and do not contain the complete bipartite graph $K_{t,t}$ as a subgraph, have bounded chromatic number. Kierstead and Penrice strengthened this, showing that such graphs have bounded degeneracy. Here we give a further strengthening, proving that for every tree $H$, the degeneracy is at most polynomial in $t$. This answers a question of Bonamy, Pilipczuk, Rz\k{a}\.{z}ewski, Thomass\'e, and Walczak.
|
Spiking Neural Networks (SNNs), as bio-inspired energy-efficient neural
networks, have attracted great attention from researchers and industry. The
most efficient way to train deep SNNs is through ANN-SNN conversion. However,
the conversion usually suffers from accuracy loss and long inference time,
which impede the practical application of SNN. In this paper, we theoretically
analyze ANN-SNN conversion and derive sufficient conditions of the optimal
conversion. To better correlate ANN-SNN and get greater accuracy, we propose
Rate Norm Layer to replace the ReLU activation function in source ANN training,
enabling direct conversion from a trained ANN to an SNN. Moreover, we propose
an optimal fit curve to quantify the fit between the activation value of source
ANN and the actual firing rate of target SNN. We show that the inference time
can be reduced by optimizing the upper bound of the fit curve in the revised
ANN to achieve fast inference. Our theory can explain existing work on fast inference and obtain better results. The experimental results show that the proposed method achieves near-lossless conversion with VGG-16, PreActResNet-18, and deeper structures. Moreover, it can reach 8.6x faster inference at 0.265x the energy consumption of the typical method.
The code is available at
https://github.com/DingJianhao/OptSNNConvertion-RNL-RIL.
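For intuition, a hedged sketch of the rate-coding relation that ANN-SNN conversion relies on: an integrate-and-fire neuron driven by a constant input fires at a rate approximating a clipped, quantized ReLU (a generic illustration with an assumed threshold and step count; this is not the paper's Rate Norm Layer itself):

```python
import numpy as np

def if_firing_rate(activation, threshold=1.0, timesteps=100):
    """Simulate an integrate-and-fire neuron with constant input current.

    The observed firing rate approximates clip(activation/threshold, 0, 1),
    the quantity a converted SNN uses in place of the ANN's ReLU output;
    finite timesteps introduce the quantization error that makes short
    inference windows lossy.
    """
    v, spikes = 0.0, 0
    for _ in range(timesteps):
        v += activation                # integrate the (rate-coded) input
        if v >= threshold:             # fire and reset by subtraction
            spikes += 1
            v -= threshold
    return spikes / timesteps

for a in (0.25, 0.5, 1.5):
    print(a, if_firing_rate(a))  # ~0.25, ~0.5, 1.0 (clipped at the threshold)
```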
|
We consider the problem of detecting signals in the rank-one
signal-plus-noise data matrix models that generalize the spiked Wishart
matrices. We show that the principal component analysis can be improved by
pre-transforming the matrix entries if the noise is non-Gaussian. As an
intermediate step, we prove a sharp phase transition of the largest eigenvalues
of spiked rectangular matrices, which extends the Baik-Ben Arous-P\'ech\'e
(BBP) transition. We also propose a hypothesis test to detect the presence of
signal with low computational complexity, based on the linear spectral
statistics, which minimizes the sum of the Type-I and Type-II errors when the
noise is Gaussian.
|
We generate the perturbative expansion of the single-particle Green's
function and related self-energy for a half-filled single-band Hubbard model on
a square lattice. We invoke algorithmic Matsubara integration to evaluate
single-particle quantities for real and Matsubara frequencies and verify
results through comparison to existing data on the Matsubara axis. With low
order expansions at weak-coupling we observe a number of outcomes expected at
higher orders: the opening of a gap, pseudogap behavior, and Fermi-surface
reconstruction. Based on low-order perturbations we consider the phase diagram
that arises from truncated expansions of the self-energy and Green's function
and their relation via the Dyson equation. From Matsubara axis data we observe
insulating behavior in direct expansions of the Green's function, while the
same order of truncation of the self-energy produces metallic behavior. This
observation is supported by additional calculations for real frequencies. We
attribute this difference to the order in which diagrams are implicitly summed
in the Dyson series. By separating the reducible and irreducible contributions
at each order we show that the reducible diagrams implicitly summed in the
Dyson equation lead to incorrect physics in the half-filled Hubbard model. Our
observations for this particular case lead us to question the utility of the
Dyson equation for any problem that shows a disparity between reducible and
irreducible contributions to the expansion of the Green's function.
|
Assuming the existence of Siegel zeros, we prove that there exists an
increasing sequence of positive integers for which Chowla's Conjecture on
$k$-point correlations of the Liouville function holds. This extends work of
Germ\'an and K\'atai, where they studied the case $k=2$ under identical
hypotheses.
An immediate corollary, which follows from a well-known argument due to
Sarnak, is that Sarnak's Conjecture on M\"obius disjointness holds. More
precisely, assuming the existence of Siegel zeros, there exists a subsequence
of the natural numbers for which the Liouville function is asymptotically
orthogonal to any sequence of topological entropy zero.
|
Modifications to the distribution of charged particles with respect to high
transverse momentum ($p_\mathrm{T}$) jets passing through a quark-gluon plasma
are explored using the CMS detector. Back-to-back dijets are analyzed in
lead-lead and proton-proton collisions at $\sqrt{s_\mathrm{NN}} =$ 5.02 TeV via
correlations of charged particles in bins of relative pseudorapidity and
angular distance from the leading and subleading jet axes. In comparing the
lead-lead and proton-proton collision results, modifications to the
charged-particle relative distance distribution and to the momentum
distributions around the jet axis are found to depend on the dijet momentum
balance $x_j$, which is the ratio between the subleading and leading jet
$p_\mathrm{T}$. For events with $x_j$ $\approx$ 1, these modifications are
observed for both the leading and subleading jets. However, while subleading
jets show significant modifications for events with a larger dijet momentum
imbalance, much smaller modifications are found for the leading jets in these
events.
|
A new type of nonlinear dust pulse structure has been observed in an afterglow complex plasma under microgravity conditions on board the International Space
Station (ISS). The dust pulses are triggered spontaneously as the plasma is
switched off and the particles start to flow through each other
(uni-directional or counter-streaming) in the presence of a low-frequency
external electric excitation. The pulses are oblique with respect to the
microparticle cloud and appear to be symmetric with respect to the central
axis. A possible explanation of this observation, based on the spontaneous development of a double layer in the afterglow of the complex plasma, is described.
|
Using the gravitational potential and source multipole moments bilinear in
the spins, first computed to next-to-leading order (NLO) in the post-Newtonian
(PN) expansion within the effective field theory (EFT) framework, we complete
here the derivation of the dynamical invariants and flux-balance equations,
including energy and angular momentum. We use these results to calculate
spin-spin effects in the orbital frequency and accumulated phase to NLO for
circular orbits. We also derive the linear momentum and center-of-mass fluxes
and associated kick-velocity, to the highest relevant PN order. We explicitly
demonstrate the equivalence between the quadratic-in-spin source multipoles
obtained using the EFT formalism and those rederived later with more
traditional tools, leading to perfect agreement for spin-spin radiative
observables to NLO among both approaches.
|
In this paper, we study asynchronous federated learning (FL) in a wireless
distributed learning network (WDLN). To allow each edge device to use its local
data more efficiently via asynchronous FL, transmission scheduling in the WDLN
for asynchronous FL should be carefully determined considering system
uncertainties, such as time-varying channel and stochastic data arrivals, and
the scarce radio resources in the WDLN. To address this, we propose a metric,
called an effectivity score, which represents the amount of learning from
asynchronous FL. We then formulate an Asynchronous Learning-aware transmission
Scheduling (ALS) problem to maximize the effectivity score and develop three
ALS algorithms, called ALSA-PI, BALSA, and BALSA-PO, to solve it. If the
statistical information about the uncertainties is known, the problem can be
optimally and efficiently solved by ALSA-PI. Even if not, it can still be
optimally solved by BALSA, which learns the uncertainties based on a Bayesian
approach using the state information reported from devices. BALSA-PO
suboptimally solves the problem, but it addresses a more restrictive WDLN in
practice, where the AP can observe only limited state information compared with
the information used in BALSA. We show via simulations that the models trained
by our ALS algorithms achieve performance close to that of an ideal benchmark
and outperform those trained by other state-of-the-art baseline scheduling
algorithms in terms of model accuracy, training loss, learning speed, and
robustness of learning. These results demonstrate that the adaptive scheduling
strategy in our ALS algorithms is effective for asynchronous FL.
|
We investigate the existence of constant-round post-quantum black-box
zero-knowledge protocols for $\mathbf{NP}$. As a main result, we show that
there is no constant-round post-quantum black-box zero-knowledge argument for
$\mathbf{NP}$ unless $\mathbf{NP}\subseteq \mathbf{BQP}$. As constant-round
black-box zero-knowledge arguments for $\mathbf{NP}$ exist in the classical
setting, our main result points out a fundamental difference between
post-quantum and classical zero-knowledge protocols. Combining previous
results, we conclude that unless $\mathbf{NP}\subseteq \mathbf{BQP}$,
constant-round post-quantum zero-knowledge protocols for $\mathbf{NP}$ exist if
and only if we use non-black-box techniques or relax certain security
requirements such as relaxing standard zero-knowledge to
$\epsilon$-zero-knowledge. Additionally, we also prove that three-round and
public-coin constant-round post-quantum black-box $\epsilon$-zero-knowledge
arguments for $\mathbf{NP}$ do not exist unless $\mathbf{NP}\subseteq
\mathbf{BQP}$.
|
In graph theory, an independent set is a subset of nodes where there are no
two adjacent nodes. The independent set is maximal if no node outside the
independent set can join it. In network applications, maximal independent sets
can be used as cluster heads in ad hoc and wireless sensor networks. In order
to deal with any failure in networks, self-stabilizing algorithms have been
proposed in the literature to calculate the maximal independent set under
different hypotheses. In this paper, we propose a self-stabilizing algorithm to
compute a maximal independent set whose nodes are pairwise at distance at least
3 from each other. We prove the correctness and the convergence of the proposed
algorithm. Simulation tests show the ability of our algorithm to find a reduced
number of nodes in large-scale networks, which allows strong control of the
network.
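As a rough illustration of the target configuration only (not of the
self-stabilizing distributed dynamics themselves), the Python sketch below
greedily builds a maximal set of nodes that are pairwise at distance at least
3; the random graph, its size, and the use of networkx are illustrative
assumptions, not the paper's setup.

```python
import networkx as nx

def greedy_distance3_independent_set(G):
    """Greedily build a maximal set of nodes pairwise at distance >= 3.

    Centralized sketch of the *target* configuration; the paper's
    algorithm reaches such a set in a distributed, self-stabilizing way.
    """
    selected = set()
    blocked = set()  # nodes within distance <= 2 of a selected node
    for v in sorted(G.nodes()):
        if v in blocked:
            continue
        selected.add(v)
        # block v's 1- and 2-hop neighborhood
        for u in G.neighbors(v):
            blocked.add(u)
            blocked.update(G.neighbors(u))
    return selected

G = nx.erdos_renyi_graph(200, 0.05, seed=1)
S = greedy_distance3_independent_set(G)
assert all(nx.shortest_path_length(G, a, b) >= 3
           for a in S for b in S if a != b and nx.has_path(G, a, b))
print(len(S), "cluster heads selected")
```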
|
The superconducting mechanism of cuprates, a family of high-Tc
superconductors, has long been debated, so predicting the critical temperature
of cuprates remains elusive. Herein, using machine learning and
first-principles calculations, we predict the maximum superconducting
transition temperature (Tc,max) of hole-doped cuprates and suggest an explicit
functional form for Tc,max with a root-mean-square error of 3.705 K and a
coefficient of determination R2 of 0.969. We employed two machine learning
models: one is a parametric brute-force searching method and the other is a
non-parametric random forest regression model. We have found that
material-dependent parameters such
as the Bader charge of apical oxygen, the bond strength between apical atoms,
and the number of superconducting layers are important features to estimate
Tc,max. Furthermore, we predict the Tc,max of hypothetical cuprates generated
by replacing apical cations with other elements. When Ga is an apical cation,
the predicted Tc,max is the highest among the hypothetical structures with 71,
117, and 131 K for one, two, and three CuO2 layers, respectively. These
findings suggest that machine learning could guide the design of new high-Tc
superconductors in the future.
|
Defining templates of galaxy spectra is useful to quickly characterise new
observations and organise databases from surveys. These templates are usually
built from a pre-defined classification based on other criteria. We present an
unsupervised classification of 702248 spectra of galaxies and quasars with
redshifts smaller than 0.25 that were retrieved from the Sloan Digital Sky
Survey (SDSS) database, release 7. The spectra were first corrected for
redshift, then wavelet-filtered to reduce the noise, and finally binned to
obtain about 1437 wavelengths per spectrum. The unsupervised clustering
algorithm Fisher-EM, relying on a discriminative latent mixture model, was
applied to these corrected spectra. The full set and several subsets of 100000
and 300000 spectra were analysed. The optimum number of classes given by a
penalised likelihood criterion is 86 classes, of which the 37 most populated
gather 99% of the sample. These classes are established from a subset of 302214
spectra. Using several cross-validation techniques we find that this
classification agrees with the results obtained on the other subsets with an
average misclassification error of about 15%. The large number of very small
classes tends to increase this error rate. In this paper, we do an initial
quick comparison of our classes with literature templates. This is the first
time that an automatic, objective and robust unsupervised classification is
established on such a large number of galaxy spectra. The mean spectra of the
classes can be used as templates for a large majority of galaxies in our
Universe.
|
This paper is a modified chapter of the author's Ph.D. thesis. We introduce
the notions of sequentially approximated types and sequentially approximated
Keisler measures. As the names imply, these are types which can be approximated
by a sequence of realized types and measures which can be approximated by a
sequence of `averaging measures' on tuples of realized types. We show that both
generically stable types (in arbitrary theories) and Keisler measures which are
finitely satisfiable over a countable model (in NIP theories) are sequentially
approximated. We also introduce the notion of a smooth sequence in a measure
over a model and give an equivalent characterization of generically stable
measures (in NIP theories) via this definition. In the last section, we take
the opportunity to generalize the main result of [8].
|
In this paper, a novel multiagent-based state transition optimization
algorithm with a linear convergence rate, named MASTA, is constructed. It
first generates an initial population randomly and uniformly. Then, it applies
the basic state transition algorithm (STA) to the population and generates a
new population. After that, it computes the fitness values of all individuals
and finds the best individuals in the new population. Moreover, it performs an
effective communication operation and updates the population. Through this
iterative process, the best solution is found. In experiments on common
benchmark functions and in comparisons with several state-of-the-art
optimization algorithms, the proposed MASTA algorithm shows superior or
comparable performance.
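A highly simplified sketch of this population-based loop is given below; the
Gaussian perturbations stand in for the actual STA operators (rotation,
translation, expansion, axesion), and the averaging step is only a crude
stand-in for the paper's communication operation, so this is illustrative
rather than a faithful MASTA implementation.

```python
import numpy as np

def sphere(x):  # common benchmark function
    return float(np.sum(x ** 2))

def masta_sketch(f, dim=10, agents=5, pop=20, iters=200, seed=0):
    """Toy multiagent state-transition-style optimizer (sketch only)."""
    rng = np.random.default_rng(seed)
    bests = [rng.uniform(-5, 5, dim) for _ in range(agents)]
    for _ in range(iters):
        for a in range(agents):
            # candidate moves around the agent's current best
            cands = bests[a] + rng.normal(0, 0.5, size=(pop, dim))
            vals = [f(c) for c in cands]
            if min(vals) < f(bests[a]):
                bests[a] = cands[int(np.argmin(vals))]
        # communication step: agents drift toward the global best
        g = min(bests, key=f)
        bests = [b if f(b) <= f(g) else (b + g) / 2 for b in bests]
    return min(bests, key=f)

x_star = masta_sketch(sphere)
print("best value found:", sphere(x_star))
```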
|
Open-domain neural dialogue models have achieved high performance in response
ranking and evaluation tasks. These tasks are formulated as a binary
classification of responses given in a dialogue context, and models generally
learn to make predictions based on context-response content similarity.
However, over-reliance on content similarity makes the models less sensitive to
the presence of inconsistencies, incorrect time expressions and other factors
important for response appropriateness and coherence. We propose approaches for
automatically creating adversarial negative training data to help ranking and
evaluation models learn features beyond content similarity. We propose
mask-and-fill and keyword-guided approaches that generate negative examples for
training more robust dialogue systems. These generated adversarial responses
have high content similarity with the contexts but are either incoherent,
inappropriate or not fluent. Our approaches are fully data-driven and can be
easily incorporated in existing models and datasets. Experiments on
classification, ranking and evaluation tasks across multiple datasets
demonstrate that our approaches outperform strong baselines in providing
informative negative examples for training dialogue systems.
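As a toy illustration of the keyword-guided idea (the paper's actual approach
relies on trained infilling models, not this string surgery), the sketch below
splices keywords from the gold response into an unrelated corpus response,
producing a lexically similar but incoherent negative; all names and data here
are invented.

```python
import random

def keyword_guided_negative(context, response, corpus_responses, seed=0):
    """Toy sketch of a keyword-guided adversarial negative.

    Swaps content words of the gold response into a randomly drawn
    corpus response, yielding high lexical overlap with the context
    but (typically) incoherent semantics.
    """
    random.seed(seed)
    keywords = [w for w in response.split() if len(w) > 4]
    distractor = random.choice(corpus_responses).split()
    for i, w in enumerate(distractor):
        if len(w) > 4 and keywords:
            distractor[i] = keywords.pop(0)
    return " ".join(distractor)

neg = keyword_guided_negative(
    "Where should we meet tomorrow?",
    "Let's meet outside the library around noon.",
    ["I really enjoyed watching that movie yesterday evening."],
)
print(neg)
```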
|
Molecular modeling is an important topic in drug discovery. Decades of
research have led to the development of high-quality, scalable molecular force
fields. In this paper, we show that neural networks can be used to train a
universal approximator for energy potential functions. By incorporating a fully
automated training process we have been able to train smooth, differentiable,
and predictive potential functions on large-scale crystal structures. A variety
of tests have also been performed to show the superiority and versatility of
the machine-learned model.
|
Collecting training data from untrusted sources exposes machine learning
services to poisoning adversaries, who maliciously manipulate training data to
degrade the model accuracy. When trained on offline datasets, poisoning
adversaries have to inject the poisoned data in advance before training, and
the order of feeding these poisoned batches into the model is stochastic. In
contrast, practical systems are more often trained/fine-tuned on sequentially
captured real-time data, in which case poisoning adversaries could dynamically
poison each data batch according to the current model state. In this paper, we
focus on the real-time settings and propose a new attacking strategy, which
affiliates an accumulative phase with poisoning attacks to secretly (i.e.,
without affecting accuracy) magnify the destructive effect of a (poisoned)
trigger batch. By mimicking online learning and federated learning on MNIST and
CIFAR-10, we show that model accuracy significantly drops by a single update
step on the trigger batch after the accumulative phase. Our work validates that
a well-designed but straightforward attacking strategy can dramatically amplify
the poisoning effects, with no need to explore complex techniques.
|
Society is showing signs of strong ideological polarization. When pushed to
seek perspectives different from their own, people often reject diverse ideas
or find them unfathomable. Work has shown that framing controversial issues
using the values of the audience can improve understanding of opposing views.
In this paper, we present our work designing systems for addressing ideological
division through educating U.S. news consumers to engage using a framework of
fundamental human values known as Moral Foundations. We design and implement a
series of new features that encourage users to challenge their understanding of
opposing views, including annotation of moral frames in news articles,
discussion of those frames via inline comments, and recommendations based on
relevant moral frames. We describe two versions of features -- the first
covering a suite of ways to interact with moral framing in news, and the second
tailored towards collaborative annotation and discussion. We conduct a field
evaluation of each design iteration with 71 participants in total over a period
of 6-8 days, finding evidence suggesting users learned to re-frame their
discourse in moral values of the opposing side. Our work provides several
design considerations for building systems to engage with moral framing.
|
The construction of approximate replication strategies for pricing and
hedging of derivative contracts in incomplete markets is a key problem of
financial engineering. Recently Reinforcement Learning algorithms for hedging
under realistic market conditions have attracted significant interest. While
research in the derivatives area mostly focused on variations of $Q$-learning,
in artificial intelligence Monte Carlo Tree Search is the recognized
state-of-the-art method for various planning problems, such as the games of
Hex, Chess, and Go. This article introduces Monte Carlo Tree Search as a method
to solve the stochastic optimal control problem behind the pricing and hedging
tasks. As compared to $Q$-learning it combines Reinforcement Learning with tree
search techniques. As a consequence Monte Carlo Tree Search has higher sample
efficiency, is less prone to over-fitting to specific market models and
generally learns stronger policies faster. In our experiments we find that
Monte Carlo Tree Search, the method behind world-champion play in games like
Chess and Go, is easily capable of maximizing the utility of the investor's
terminal wealth without setting up an auxiliary mathematical framework.
|
Voter eligibility in United States elections is determined by a patchwork of
state databases containing information about which citizens are eligible to
vote. Administrators at the state and local level are faced with the
exceedingly difficult task of ensuring that each of their jurisdictions is
properly managed, while also monitoring for improper modifications to the
database. Monitoring changes to Voter Registration Files (VRFs) is crucial,
given that a malicious actor wishing to disrupt the democratic process in the
US would be well-advised to manipulate the contents of these files in order to
achieve their goals. In 2020, we saw election officials perform admirably when
faced with administering one of the most contentious elections in US history,
but much work remains to secure and monitor the election systems Americans rely
on. Using data created by comparing snapshots taken of VRFs over time, we
present a set of methods that make use of machine learning to ease the burden
on analysts and administrators in protecting voter rolls. We first evaluate the
effectiveness of multiple unsupervised anomaly detection methods in detecting
VRF modifications by modeling anomalous changes as sparse additive noise. In
this setting we determine that statistical models comparing administrative
districts within a short time span and non-negative matrix factorization are
most effective for surfacing anomalous events for review. These methods were
deployed during 2019-2020 in our organization's monitoring system and were used
in collaboration with the office of the Iowa Secretary of State. Additionally,
we propose a newly deployed model which uses historical and demographic
metadata to label the likely root cause of database modifications. We hope to
use this model to predict which modifications have known causes and therefore
better identify potentially anomalous modifications.
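A minimal sketch of the NMF-based detector, assuming rows are administrative
districts and columns are counts of modification types, with an anomaly
injected as sparse additive noise on a low-rank "normal behavior" matrix; the
data, rank, and threshold logic are illustrative, not drawn from the deployed
system.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy data: 50 districts x 8 modification types per snapshot diff.
rng = np.random.default_rng(0)
X = rng.poisson(20, size=(50, 8)).astype(float)
X[17, 3] += 300.0  # inject one anomalous modification burst

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)
residual = X - W @ model.components_          # sparse noise estimate
scores = np.abs(residual).max(axis=1)         # per-district anomaly score
print("most anomalous district:", int(np.argmax(scores)))
```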
|
A family $\mathcal{F}$ of elliptic curves defined over number fields is said
to be typically bounded in torsion if the torsion subgroups $E(F)[$tors$]$ of
those elliptic curves $E_{/F}\in \mathcal{F}$ can be made uniformly bounded
after removing from $\mathcal{F}$ those whose number field degrees lie in a
subset of $\mathbb{Z}^+$ with arbitrarily small upper density. For every number
field $F$, we prove unconditionally that the family $\mathcal{E}_F$ of elliptic
curves defined over number fields and with $F$-rational $j$-invariant is
typically bounded in torsion. For any integer $d\in\mathbb{Z}^+$, we also
strengthen a result on typically bounding torsion for the family
$\mathcal{E}_d$ of elliptic curves defined over number fields and with degree
$d$ $j$-invariant.
|
A popular method of improving the throughput of blockchain systems is by
running smaller side blockchains that push the hashes of their blocks onto a
trusted blockchain. Side blockchains are vulnerable to stalling attacks where a
side blockchain node pushes the hash of a block to the trusted blockchain but
makes the block unavailable to other side blockchain nodes. Recently, Sheng et
al. proposed a data availability oracle based on LDPC codes and a data
dispersal protocol as a solution to the above problem. While showing
improvements, the codes and dispersal protocol were designed disjointly which
may not be optimal in terms of the communication cost associated with the
oracle. In this paper, we provide a tailored dispersal protocol and specialized
LDPC code construction based on the Progressive Edge Growth (PEG) algorithm,
called the dispersal-efficient PEG (DE-PEG) algorithm, aimed to reduce the
communication cost associated with the new dispersal protocol. Our new code
construction reduces the communication cost and, additionally, is less
restrictive in terms of system design.
|
Every 19 years, Saturn passes through Jupiter's 'flapping' magnetotail. Here,
we report Chandra X-ray observations of Saturn planned to coincide with this
rare planetary alignment and to analyse Saturn's magnetospheric response when
transitioning to this unique parameter space. We analyse three Director's
Discretionary Time (DDT) observations from the High Resolution Camera (HRC-I)
on board Chandra, taken on November 19, 21, and 23, 2020, with the aim of finding
auroral and/or disk emissions. We infer the conditions in the kronian system by
looking at coincident soft X-ray solar flux data from the Geostationary
Operational Environmental Satellite (GOES) and Hubble Space Telescope (HST)
observations of Saturn's ultraviolet (UV) auroral emissions. The large
Saturn-Sun-Earth angle during this time would mean that most flares from the
Earth-facing side of the Sun would not have impacted Saturn. We find no
significant detection of Saturn's disk or auroral emissions in any of our
observations. We calculate the 3$\sigma$ upper bound on the energy flux of
Saturn during this time to be 0.9 - 3.04 $\times$ 10$^{-14}$ erg cm$^{-2}$ s$^{-1}$
which agrees with fluxes found from previous modelled spectra of the disk
emissions. We conclude by discussing the implications of this non-detection and
how it is imperative that the next fleet of X-ray telescopes (such as Athena
and the Lynx mission concept) continue to observe Saturn with their improved
spatial and spectral resolution and greatly enhanced sensitivity to help us
finally solve the mysteries behind Saturn's apparently elusive X-ray aurora.
|
The discovery of gravitational wave radiation from merging black holes (BHs)
also uncovered BHs with masses in the range of ~20-160 Msun. In contrast, the
most massive Galactic stellar-mass BH currently known has a mass ~21 Msun.
While low-mass X-ray binaries (LMXBs) will never independently evolve into a
binary BH system, and binary evolution effects can play an important role
explaining the different BH masses found through studies of X-ray binaries and
gravitational wave events, (electromagnetic) selection effects may also play a
role in this discrepancy. Assuming BH LMXBs originate in the Galactic Plane, we
show that the spatial distributions of the current samples of confirmed and
candidate BH LMXBs are both biased toward sources that lie at a large distance from
the Plane. Specifically, most of the confirmed and candidate BH LMXBs are found
at a Galactic height larger than 3 times the scale height for massive star
formation. In addition, the confirmed BH LMXBs are found at larger distances to
the Galactic Center than the candidate BH LMXBs. Interstellar absorption makes
candidate BH LMXBs in the Plane and those in the Bulge too faint for a
dynamical mass measurement using current instrumentation. Given the observed
and theoretical evidence for BH natal and/or Blaauw kicks, their relation with
BH mass and binary orbital period, and the relation between outburst recurrence
time and BH mass, the observational selection effects imply that the current
sample of confirmed BH LMXBs is biased against the most massive BHs.
|
We describe a new code and approach using particle-level information to
recast the recent CMS disappearing track searches including all run 2 data.
Notably, the simulation relies on knowledge of the detector geometry, and we
also include the simulation of pileup events directly rather than as an
efficiency function. We validate it against provided acceptances and cutflows,
and use it in combination with heavy stable charged particle searches to place
limits on winos with any proper decay length above a centimetre. We also
provide limits for a simple model of a charged scalar that is only produced in
pairs and decays to electrons plus an invisible fermion.
|
The study of classifier design and usage is one of the most important areas of
machine learning. With the development of automatic machine learning methods,
various approaches are used to build a robust classifier model. Due to
implementation difficulties and customization complexity, genetic programming
(GP) methods are not often used to construct classifiers. GP classifiers have
several limitations and disadvantages. However, the concept of "soft" genetic
programming (SGP) has been developed, which allows the logical operator tree
to be more flexible and to find dependencies in datasets, giving promising
results in most cases. This article discusses a method for constructing binary
classifiers using the SGP technique. Test results are presented. Source code:
https://github.com/survexman/sgp_classifier.
|
The repetitive tracking task for time-varying systems (TVSs) with
non-repetitive time-varying parameters, which are also called non-repetitive
TVSs, is realized in this paper using iterative learning control (ILC). A
machine learning (ML) based nominal model update mechanism, which utilizes the
linear regression technique to update the nominal model at each ILC trial only
using the current trial information, is proposed for non-repetitive TVSs in
order to enhance the ILC performance. Given that the ML mechanism forces the
model uncertainties to remain within the ILC robust tolerance, an ILC update
law is proposed to deal with non-repetitive TVSs. How to tune parameters inside
ML and ILC algorithms to achieve the desired aggregate performance is also
provided. The robustness and reliability of the proposed method are verified by
simulations. Comparison with the current state of the art demonstrates its
superior control performance in terms of control precision. This paper
broadens ILC applications from time-invariant systems to non-repetitive TVSs,
adopts an ML regression technique to estimate non-repetitive time-varying
parameters between two ILC trials, and proposes a detailed parameter tuning
mechanism to achieve the desired performance, which are the main contributions.
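A scalar toy version of this loop is sketched below: each trial refits the
nominal gain by least squares from current-trial data only, then applies a
standard ILC update with the refreshed model. The system, gains, and noise
levels are illustrative assumptions, far simpler than the paper's setting.

```python
import numpy as np

# Toy non-repetitive TVS: y_t = theta_j * u_t, theta_j varies per trial.
rng = np.random.default_rng(0)
T, trials = 50, 30
y_ref = np.sin(np.linspace(0, 2 * np.pi, T))  # reference trajectory
u = np.zeros(T)
gamma = 0.8                                   # ILC learning gain

for j in range(trials):
    theta_true = 1.0 + 0.2 * rng.standard_normal()   # new gain each trial
    y = theta_true * u + 0.01 * rng.standard_normal(T)
    # ML step: least-squares gain estimate from current-trial data only
    theta_hat = float(u @ y / (u @ u)) if u @ u > 1e-9 else 1.0
    # ILC update law using the refreshed nominal model
    u = u + gamma / theta_hat * (y_ref - y)

print("final tracking RMSE:", float(np.sqrt(np.mean((y_ref - y) ** 2))))
```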
|
The goal of this paper is to provide a complete representation of regional
linguistic variation on a global scale. To this end, the paper focuses on
removing three constraints that have previously limited work within
dialectology/dialectometry. First, rather than assuming a fixed and incomplete
set of variants, we use Computational Construction Grammar to provide a
replicable and falsifiable set of syntactic features. Second, rather than
assuming a specific area of interest, we use global language mapping based on
web-crawled and social media datasets to determine the selection of national
varieties. Third, rather than looking at a single language in isolation, we
model seven major languages together using the same methods: Arabic, English,
French, German, Portuguese, Russian, and Spanish. Results show that models for
each language are able to robustly predict the region-of-origin of held-out
samples better using Construction Grammars than using simpler syntactic
features. These global-scale experiments are used to argue that new methods in
computational sociolinguistics are able to provide more generalized models of
regional variation that are essential for understanding language variation and
change at scale.
|
We consider a family of free multiplicative Brownian motions $b_{s,\tau}$
parametrized by a real variance parameter $s$ and a complex covariance
parameter $\tau.$ We compute the Brown measure $\mu_{s,\tau}$ of $ub_{s,\tau
},$ where $u$ is a unitary element freely independent of $b_{s,\tau}.$ We find
that $\mu_{s,\tau}$ has a simple structure, with a density in logarithmic
coordinates that is constant in the $\tau$-direction. These results generalize
those of Driver-Hall-Kemp and Ho-Zhong for the case $\tau=s.$ We also establish
a remarkable "model variation phenomenon,'' stating that all the Brown measures
with $s$ fixed and $\tau$ varying are related by push-forward under a natural
family of maps. Our proofs use a first-order nonlinear PDE of Hamilton-Jacobi
type satisfied by the regularized log potential of the Brown measures. Although
this approach is inspired by the PDE method introduced by Driver-Hall-Kemp, our
methods are substantially different at both the technical and conceptual level.
|
We propose a new transmit antenna selection (TAS) technique that can be
beneficial for physical layer security purposes. Specifically, we show that the
conventional TAS criterion based on the legitimate channel state information
(CSI) is not recommended when the average signal-to-noise ratio for the
illegitimate user becomes comparable or superior to that of the legitimate
user. We illustrate that an eavesdropper's based antenna selection technique
outperforms conventional TAS, without explicit knowledge of the eavesdropper's
instantaneous CSI. Analytical expressions and simulation results to support
this comparison are given, showing how this new TAS scheme is a better choice
in scenarios with a strong eavesdropper.
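The Monte Carlo sketch below contrasts the two criteria under Rayleigh fading,
assuming the eavesdropper-based scheme uses only per-antenna average
(statistical) eavesdropper SNRs, consistent with not knowing Eve's
instantaneous CSI; all SNR values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, trials = 4, 100_000
snr_b = 1.0                               # average legitimate SNR
snr_e = np.array([2.0, 4.0, 1.0, 6.0])    # per-antenna average Eve SNR

hB = rng.exponential(1.0, size=(trials, Nt))   # |h|^2, Bob (Rayleigh)
hE = rng.exponential(1.0, size=(trials, Nt))   # |h|^2, Eve (Rayleigh)

def secrecy(idx):
    gb = snr_b * hB[np.arange(trials), idx]
    ge = snr_e[idx] * hE[np.arange(trials), idx]
    return np.maximum(0.0, np.log2(1 + gb) - np.log2(1 + ge))

conv = secrecy(np.argmax(hB, axis=1))                    # legitimate-CSI TAS
eave = secrecy(np.full(trials, int(np.argmin(snr_e))))   # Eve-statistics TAS
print(f"mean secrecy rate: conventional={conv.mean():.3f}, "
      f"eavesdropper-based={eave.mean():.3f}")
```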
|
The repeating fast radio burst (FRB) localized to a globular cluster in M81
challenges our understanding of FRB models. In this Letter, we explore
dynamical formation scenarios for objects in old globular clusters that may
plausibly power FRBs. Using N-body simulations, we demonstrate that young
neutron stars may form in globular clusters at a rate of up to
$\sim50\,\rm{Gpc}^{-3}\,\rm{yr}^{-1}$ through a combination of binary white
dwarf mergers, white dwarf--neutron star mergers, binary neutron star mergers,
and accretion induced collapse of massive white dwarfs in binary systems. We
consider two FRB emission mechanisms: First, we show that a
magnetically-powered source (e.g., a magnetar with field strength
$\gtrsim10^{14}\,$G) is viable for radio emission efficiencies
$\gtrsim10^{-4}$. This would require magnetic activity lifetimes longer than
the associated spin-down timescales and longer than empirically-constrained
lifetimes of Galactic magnetars. Alternatively, if these dynamical formation
channels produce young rotation-powered neutron stars with spin periods of
$\sim10\,$ms and magnetic fields of $\sim10^{11}\,$G (corresponding to
spin-down lifetimes of $\gtrsim10^5\,$yr), the inferred event rate and
energetics can be reasonably reproduced for order unity duty cycles.
Additionally, we show that recycled millisecond pulsars or low-mass X-ray
binaries similar to those well-observed in Galactic globular clusters may also
be plausible channels, but only if their duty cycle for producing bursts
similar to the M81 FRB is small.
|
There has been an intense recent activity in embedding of very high
dimensional and nonlinear data structures, much of it in the data science and
machine learning literature. We survey this activity in four parts. In the
first part we cover nonlinear methods such as principal curves,
multidimensional scaling, local linear methods, ISOMAP, graph based methods and
kernel based methods. The second part is concerned with topological embedding
methods, in particular mapping topological properties into persistence
diagrams. Another type of data experiencing tremendous growth is very
high-dimensional network data. The task considered in part three is how to
embed such data in a vector space of moderate dimension to make the data
amenable to traditional techniques such as cluster and classification
techniques. The final part of the survey deals with embedding in
$\mathbb{R}^2$, which is visualization. Three methods are presented: $t$-SNE,
UMAP and LargeVis based on methods in parts one, two and three, respectively.
The methods are illustrated and compared on two simulated data sets; one
consisting of a triple of noisy Ranunculoid curves, and one consisting of
networks of increasing complexity and with two types of nodes.
|
We study Martsinkovsky-Russell torsion modules [MaRu20] with pure embeddings
as an abstract elementary class. We give a model-theoretic characterization of
the pure-injective and the $\Sigma$-pure-injective modules relative to the
class of torsion modules assuming that the ring is right semihereditary. Our
characterization of relative $\Sigma$-pure-injective modules strictly extends
the classical characterization of [GrJe76] and [Zim, 3.6].
We study the limit models of the class and determine when the class is
superstable assuming that the ring is right semihereditary. As a corollary, we
show that the class of torsion abelian groups with pure embeddings is strictly
stable, i.e., stable but not superstable.
|
Axions and axion-like particles are bosonic quantum fields. They are often
assumed to follow classical field equations due to their high degeneracy in the
phase space. In this work, we explore the disparity between classical and
quantum field treatments in the context of density and velocity fields of
axions. Once the initial density and velocity field are specified, the
evolution of the axion fluid is unique in the classical field treatment.
However, in the quantum field treatment, there are many quantum states
consistent with the given initial density and velocity field. We show that
evolutions of the density perturbations for these quantum states are not
necessarily identical and, in general, differ from the unique classical
evolution. To illustrate the underlying physics, we consider a system of a
large number of bosons in a one-dimensional box, moving under the gravitational
potential of a heavy static point-mass. We ignore the self-interactions between
the bosons here. Starting with homogeneous number density and zero velocity
field, we determine the density perturbations in the linear regime in both
quantum and classical field theories. We find that classical and quantum
evolutions are identical in the linear regime if only one single-particle state
is occupied by all the bosons and the self-interaction is absent. If more than
one single-particle state is occupied, the density perturbations in quantum
evolutions differ from the classical prediction after a certain time which
depends upon the parameters of the system.
|
This note is a continuation of [CMZ21]. We shall show that an ancient Ricci
flow with uniformly bounded Nash entropy must also have uniformly bounded
$\nu$-functional. Consequently, on such an ancient solution there are uniform
logarithmic Sobolev and Sobolev inequalities. We emphasize that the main
theorem in this paper is true so long as the theory in [Bam20c] is valid, and
in particular, when the underlying manifold is closed.
|
The long-term optical, X-ray and $\gamma$-ray data of blazar 3C 279 have been
compiled from $Swift$-XRT, $RXTE$ PCA, $Fermi$-LAT, SMARTS and literature. The
source exhibits strong variability on long time scales. From the 1980s to now,
the optical $R$-band light curve spans more than 32 yr, and a possible
5.6-yr-long quasi-periodic variation component has been found in it. The
optical spectral behavior has been investigated. In the optical band, the mean
spectral index is -1.71, and the source exhibits a distinctive spectral
behavior. In the low state, the source shows a clear bluer-when-brighter
behavior, in the sense that the optical spectrum turns harder (flatter) when
the brightness increases. In the high state, by contrast, the optical spectrum
is stable, meaning that the spectral index does not vary with brightness. A
correlation analysis has been performed among the optical, X-ray and
$\gamma$-ray energy bands. The result indicates that the variations in the
$\gamma$-ray and X-ray bands are well correlated without time delay on time
scales of days, while their variations exhibit weak correlations with those of
the optical band. The variations, especially outbursts, are simultaneous, but
the magnitudes of the variations are disproportionate. A detailed analysis
reveals that the main outbursts exhibit strong correlations across the
$\gamma$-ray, X-ray and optical bands.
|
We study the generation of harmonics from graphene under the influence of an
artificial magnetic field, generated via bending of a graphene flake. We show
how the Landau level structure induced by the pseudomagnetic field breaks the
centrosymmetry of graphene, thus allowing the generation of even harmonics. We
also show that, depending on the impinging pulse duration, the nonlinear signal
contains not only the integer harmonics of the impinging pulse, but also
its half-integer ones, due to the peculiar square-root-like nature of Landau
levels in graphene.
|
In the present paper, we give new Frenet formulas for the Bertrand partner
curve by taking advantage of the relations between the curvatures and the
curve itself. Then, making use of these formulas, we write the differential
equations and sufficient conditions for harmonicity of the Bertrand partner
curve in terms of the main curve. Finally, we illustrate our assertions on the
helix to see how the formulas we developed work.
|
Since 2014, the NIH funded iDASH (integrating Data for Analysis,
Anonymization, SHaring) National Center for Biomedical Computing has hosted
yearly competitions on the topic of private computing for genomic data. For one
track of the 2020 iteration of this competition, participants were challenged
to produce an approach to federated learning (FL) training of genomic cancer
prediction models using differential privacy (DP), with submissions ranked
according to held-out test accuracy for a given set of DP budgets. More
precisely, in this track, we are tasked with training a supervised model for
the prediction of breast cancer occurrence from genomic data split between two
virtual centers while ensuring data privacy with respect to model transfer via
DP. In this article, we present our 3rd place submission to this competition.
During the competition, we encountered two main challenges discussed in this
article: i) ensuring correctness of the privacy budget evaluation and ii)
achieving an acceptable trade-off between prediction performance and privacy
budget.
|
The present work revisits the classical Wulff problem restricted to
crystalline integrands, a class of surface energies that gives rise to finitely
faceted crystals. The general proof of the Wulff theorem was given by J.E.
Taylor (1978) by methods of Geometric Measure Theory. This work follows a
simpler and direct way through Minkowski Theory by taking advantage of the
convex properties of the considered Wulff shapes.
|
We consider the communication complexity of the Hamming distance of two
strings. Bille et al. [SPIRE 2018] considered the communication complexity of
the longest common prefix (LCP) problem in the setting where the two parties
have their strings in a compressed form, i.e., represented by the Lempel-Ziv 77
factorization (LZ77) with/without self-references. We present a randomized
public-coin protocol for a joint computation of the Hamming distance of two
strings represented by LZ77 without self-references. While our scheme is
heavily based on Bille et al.'s LCP protocol, our complexity analysis is
original which uses Crochemore's C-factorization and Rytter's AVL-grammar. As a
byproduct, we also show that LZ77 with/without self-references are not
monotonic in the sense that their sizes can increase by a factor of 4/3 when a
prefix of the string is removed.
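For reference, a quadratic-time toy of the LZ77 factorization without
self-references (each factor must occur entirely inside the already-processed
prefix) might look as follows; it is illustrative only and is not the
compressed-domain communication protocol itself.

```python
def lz77_no_self_ref(s):
    """Greedy LZ77 factorization without self-references.

    Each factor is either a single fresh character or the longest
    substring of s starting at i that occurs *entirely* inside the
    already-processed prefix s[:i]. Quadratic toy version.
    """
    factors, i = [], 0
    while i < len(s):
        best = 0
        for l in range(1, i + 1):
            if i + l <= len(s) and s[i:i + l] in s[:i]:
                best = l
            else:
                break  # longer matches cannot exist either
        if best == 0:
            factors.append(s[i])
            i += 1
        else:
            factors.append(s[i:i + best])
            i += best
    return factors

print(lz77_no_self_ref("abababab"))   # ['a', 'b', 'ab', 'abab']
```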
|
Soft robotics has been a trending topic within the robotics community for
almost two decades. However, the available tools for the community to model and
analyze soft robotics artifacts are still limited. This paper presents the
development of a user-friendly MATLAB toolbox, SoRoSim, that integrates the
Geometric Variable Strain model to facilitate the modeling, analysis, and
simulation of hybrid rigid-soft open-chain robotic systems. The toolbox
implements a recursive, two-level nested quadrature scheme to solve the model.
We demonstrate several examples and applications to validate the toolbox and
explore the toolbox's capabilities to efficiently model a vast range of robotic
systems, considering different actuators and external loads, including the
fluid-structure interactions. We think that the soft-robotics research
community will benefit from the SoRoSim toolbox for a wide variety of
applications.
|
When designing large-scale distributed controllers, the information-sharing
constraints between sub-controllers, as defined by a communication topology
interconnecting them, are as important as the controller itself. Controllers
implemented using dense topologies typically outperform those implemented using
sparse topologies, but it is also desirable to minimize the cost of controller
deployment. Motivated by the above, we introduce a compact but expressive graph
recurrent neural network (GRNN) parameterization of distributed controllers
that is well suited for distributed controller and communication topology
co-design. Our proposed parameterization enjoys a local and distributed
architecture, similar to previous Graph Neural Network (GNN)-based
parameterizations, while further naturally allowing for joint optimization of
the distributed controller and communication topology needed to implement it.
We show that the distributed controller/communication topology co-design task
can be posed as an $\ell_1$-regularized empirical risk minimization problem
that can be efficiently solved using stochastic gradient methods. We run
extensive simulations to study the performance of GRNN-based distributed
controllers and show that (a) they achieve performance comparable to GNN-based
controllers while having fewer free parameters, and (b) our method allows for
performance/communication density tradeoff curves to be efficiently
approximated.
|
This work aims to empirically clarify a recently discovered perspective that
label smoothing is incompatible with knowledge distillation. We begin by
introducing the motivation behind how this incompatibility arises, i.e., that
label smoothing erases relative information between teacher logits. We provide
a novel connection on how label smoothing affects distributions of semantically
similar and dissimilar classes. Then we propose a metric to quantitatively
measure the degree of erased information in a sample's representation. After
that, we study the one-sidedness and imperfection of the incompatibility view
through massive analyses, visualizations and comprehensive experiments on Image
Classification, Binary Networks, and Neural Machine Translation. Finally, we
broadly discuss several circumstances wherein label smoothing will indeed lose
its effectiveness. Project page:
http://zhiqiangshen.com/projects/LS_and_KD/index.html.
|
The present study focuses on identifying the parameters from the Weather
Research and Forecasting (WRF) model that strongly influence the prediction of
tropical cyclones over the Bay of Bengal (BoB) region. Three global sensitivity
analysis (SA) methods namely the Morris One-at-A-Time (MOAT), Multivariate
Adaptive Regression Splines (MARS), and surrogate-based Sobol' are employed to
identify the most sensitive parameters out of 24 tunable parameters
corresponding to seven parameterization schemes of the WRF model. Ten tropical
cyclones across different categories, such as cyclonic storms, severe cyclonic
storms, and very severe cyclonic storms over BoB between 2011 and 2018, are
selected in this study. The sensitivity scores of 24 parameters are evaluated
for eight meteorological variables. The parameter sensitivity results are
consistent across three SA methods for all the variables, and 8 out of the 24
parameters contribute 80%-90% to the overall sensitivity scores. It is found
that the Sobol' method with Gaussian process regression as a surrogate model
can produce reliable sensitivity results when the available samples exceed 200.
The parameters with which the model simulations have the least RMSE values when
compared with the observations are considered as the optimal parameters.
Comparing observations and model simulations with the default and optimal
parameters shows that predictions with the optimal set of parameters yield a
19.65% improvement in surface wind, 6.5% in surface temperature, and 13.2% in
precipitation predictions, compared to the default set of parameters.
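A plain-NumPy sketch of the MOAT screening step is given below, applied to a
cheap analytic stand-in instead of WRF; the trajectory count, number of
levels, and test function are illustrative assumptions.

```python
import numpy as np

def morris_moat(f, bounds, r=30, levels=4, seed=0):
    """Morris One-at-A-Time screening via elementary effects.

    Returns mu* (mean absolute elementary effect) per parameter; in the
    paper f would be a WRF-vs-observation skill score over 24 tunable
    parameters, here it is a cheap analytic stand-in.
    """
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = np.array(bounds).T
    delta = levels / (2.0 * (levels - 1))      # standard MOAT step
    effects = np.zeros((r, k))
    for t in range(r):
        x = rng.integers(0, levels - 1, k) / (levels - 1)  # base point
        for i in rng.permutation(k):                       # OAT moves
            x2 = x.copy()
            x2[i] = x[i] + delta if x[i] + delta <= 1 else x[i] - delta
            y1 = f(lo + (hi - lo) * x)
            y2 = f(lo + (hi - lo) * x2)
            effects[t, i] = (y2 - y1) / delta
            x = x2
    return np.abs(effects).mean(axis=0)        # mu*

# Stand-in model: only 2 of 5 parameters matter.
f = lambda p: 3 * p[0] ** 2 + 2 * p[1] + 0.01 * p[2]
print(np.round(morris_moat(f, bounds=[(0, 1)] * 5), 3))
```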
|
Twisted bilayer graphene (TBG) aligned with hexagonal boron nitride (h-BN)
substrate can exhibit an anomalous Hall effect at 3/4 filling due to the
spontaneous valley polarization in valley resolved moir\'e bands with opposite
Chern number [Science 367, 900 (2020), Science 365, 605 (2019)]. It was
observed that a small DC current is able to switch the valley polarization and
reverse the sign of the Hall conductance [Science 367, 900 (2020), Science 365,
605 (2019)]. Here, we discuss the mechanism of the current switching of valley
polarization near the transition temperature, where bulk dissipative transport
dominates. We show that for a sample with rotational symmetry breaking, a DC
current may generate an electron density difference between the two valleys
(valley density difference). The current induced valley density difference in
turn induces a first order transition in the valley polarization. We emphasize
that the inter-valley scattering plays a central role since it is the channel
for exchanging electrons between the two valleys. We further estimate the
valley density difference in the TBG/h-BN system with a microscopic model, and
find a significant enhancement of the effect in the magic angle regime.
|
Bosonic qubits are a promising route to building fault-tolerant quantum
computers on a variety of physical platforms. Studying the performance of
bosonic qubits under realistic gates and measurements is challenging with
existing analytical and numerical tools. We present a novel formalism for
simulating classes of states that can be represented as linear combinations of
Gaussian functions in phase space. This formalism allows us to analyze and
simulate a wide class of non-Gaussian states, transformations and measurements.
We demonstrate how useful classes of bosonic qubits --
Gottesman-Kitaev-Preskill (GKP), cat, and Fock states -- can be simulated using
this formalism, opening the door to investigating the behaviour of bosonic
qubits under Gaussian channels and measurements, non-Gaussian transformations
such as those achieved via gate teleportation, and important non-Gaussian
measurements such as threshold and photon-number detection. Our formalism
enables simulating these situations with levels of accuracy that are not
feasible with existing methods. Finally, we use a method informed by our
formalism to simulate circuits critical to the study of fault-tolerant quantum
computing with bosonic qubits but beyond the reach of existing techniques.
Specifically, we examine how finite-energy GKP states transform under realistic
qubit phase gates; interface with a CV cluster state; and transform under
non-Clifford T gate teleportation using magic states. We implement our
simulation method as a part of the open-source Strawberry Fields Python
library.
|
We start by studying the subgroup structures underlying stabilizer circuits
and we use our results to propose a new normal form for stabilizer circuits.
This normal form is computed by induction using simple conjugation rules in the
Clifford group. It has shape CX-CZ-P-H-CZ-P-H, where CX (resp. CZ) denotes a
layer of $\mathrm{CNOT}$ (resp. $\mathrm{CZ}$) gates, P a layer of phase gates and H a layer of
Hadamard gates. Then we consider a normal form for stabilizer states and we
show how to reduce the two-qubit gate count in circuits implementing graph
states. Finally we carry out a few numerical tests on classical and quantum
computers in order to show the practical utility of our methods. All the
algorithms described in the paper are implemented in the C language as a Linux
command available on GitHub.
|
This paper proposes a forecast-centric adaptive learning model that engages
with the past studies on the order book and high-frequency data, with
applications to hypothesis testing. In line with the past literature, we
produce brackets of summaries of statistics from the high-frequency bid and ask
data in the CSI 300 Index Futures market and aim to forecast the one-step-ahead
prices. Traditional time series issues, e.g. ARIMA order selection and
stationarity, together with potential financial applications, are covered in
the exploratory data analysis, which paves the way to the adaptive learning
model. By designing and running the learning model, we find that it performs
well compared to the top fixed models and can improve forecasting accuracy by
being more stable and resilient to non-stationarity. Applications to hypothesis
testing are shown with a rolling window, and further potential applications to
finance and statistics are outlined.
|
Let $\Omega_n$ denote the class of $n \times n$ doubly stochastic matrices
(each such matrix is entrywise nonnegative and every row and column sum is 1).
We study the diagonals of matrices in $\Omega_n$. The main question is: which
$A \in \Omega_n$ are such that the diagonals in $A$ that avoid the zeros of $A$
all have the same sum of their entries? We give a characterization of such
matrices, and establish several classes of patterns of such matrices.
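The defining property is easy to test by brute force for small $n$; the sketch
below enumerates all diagonals avoiding zeros and collects their entry sums,
using the flat $3 \times 3$ matrix (where every diagonal sums to 1) as a
constructed example, not one from the paper.

```python
import itertools
import numpy as np

def nonzero_diagonal_sums(A):
    """Sums of entries over all diagonals of A that avoid zeros of A.

    A diagonal of an n x n matrix is the entry set {A[i, s(i)]} for a
    permutation s; brute force, so only sensible for small n.
    """
    n = A.shape[0]
    sums = set()
    for perm in itertools.permutations(range(n)):
        d = [A[i, j] for i, j in enumerate(perm)]
        if all(x > 0 for x in d):
            sums.add(round(sum(d), 12))
    return sums

A = np.full((3, 3), 1.0 / 3.0)   # doubly stochastic, all entries positive
print(nonzero_diagonal_sums(A))  # {1.0}: all diagonals share the same sum
```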
|
We report an all-optical radio-frequency (RF) spectrum analyzer with a
bandwidth greater than 5 terahertz (THz), based on a 50-cm long spiral
waveguide in a CMOS-compatible high-index doped silica platform. By carefully
mapping out the dispersion profile of the waveguides for different thicknesses,
we identify the optimal design to achieve near zero dispersion in the C-band.
To demonstrate the capability of the RF spectrum analyzer, we measure the
optical output of a femtosecond fiber laser with an ultrafast optical RF
spectrum in the terahertz regime.
|
During an infectious disease pandemic, it is critical to share electronic
medical records or models (learned from these records) across regions. Applying
one region's data/model to another region often has distribution shift issues
that violate the assumptions of traditional machine learning techniques.
Transfer learning can be a solution. To explore the potential of deep transfer
learning algorithms, we applied two data-based algorithms (domain adversarial
neural networks and maximum classifier discrepancy) and model-based transfer
learning algorithms to infectious disease detection tasks. We further studied
well-defined synthetic scenarios where the data distribution differences
between two regions are known. Our experiments show that, in the context of
infectious disease classification, transfer learning may be useful when (1) the
source and target are similar and the target training data is insufficient and
(2) the target training data does not have labels. Model-based transfer
learning works well in the first situation, in which case the performance
closely matched that of the data-based transfer learning models. Still, further
investigation of the domain shift in real-world research data to account for
the drop in performance is needed.
|
Structure formation in our Universe creates non-Gaussian random fields that
will soon be observed over almost the entire sky by the Euclid satellite, the
Vera-Rubin observatory, and the Square Kilometre Array. An unsolved problem is
how best to analyze such non-Gaussian fields, e.g. to infer the physical laws
that created them. This problem could be solved if a parametric non-Gaussian
sampling distribution for such fields were known, as this distribution could
serve as likelihood during inference. We therefore create a sampling
distribution for non-Gaussian random fields. Our approach is capable of
handling strong non-Gaussianity, while perturbative approaches such as the
Edgeworth expansion cannot. To imitate cosmological structure formation, we
enforce our fields to be (i) statistically isotropic, (ii) statistically
homogeneous, and (iii) statistically independent at large distances. We
generate such fields via a Monte Carlo Markov Chain technique and find that
even strong non-Gaussianity is not necessarily visible to the human eye. We
also find that sampled marginals for pixel pairs have an almost generic
Gauss-like appearance, even if the joint distribution of all pixels is markedly
non-Gaussian. This apparent Gaussianity is a consequence of the high
dimensionality of random fields. We conclude that vast amounts of non-Gaussian
information can be hidden in random fields that appear nearly Gaussian in
simple tests, and that it would be short-sighted not to try and extract it.
|
The subject of space charge in ionization detectors is reviewed, showing how
the observations and the formalism used to describe the effects have evolved,
starting with applications to calorimeters and reaching recent, large-size time
projection chambers. General scaling laws, and different ways to present and
model the effects are presented. The relation between space-charge effects and
the boundary conditions imposed on the side faces of the detector are
discussed, together with a design solution that mitigates part of the effects.
The implications of the relative size of drift length and transverse detector
size are illustrated. Calibration methods are briefly discussed.
|
We have utilized the finite-difference approach to explore electron-tunneling
properties in gapped graphene through various electrostatic-potential barriers
changing from Gaussian to a triangular envelope function in comparison with a
square potential barrier. The transmission coefficient is calculated
numerically for each case and applied to the corresponding tunneling
conductance. It is well known that Klein tunneling is greatly reduced in
gapped graphene. Our results further demonstrate that such a decrease of transmission
can be significantly enhanced for spatially-modulated potential barriers.
Moreover, we investigate the effect from a bias field applied to those barrier
profiles, from which we show that it enables the control of electron flow under
normal incidence. Meanwhile, the suppression of Klein tunneling is found more
severe for a non-square barrier and exhibits a strong dependence on bias-field
polarity for all kinds of barriers. Finally, the role of a point impurity in
electron transmission and conductance is analyzed, with a sharp peak appearing
in electron conductance as the impurity atom is placed at the middle of a
square barrier. For narrow triangular and Gaussian barriers, however, the
conductance peaks become significantly broadened, associated with an
enhancement in tunneling conductance.
|