Self-force methods can be applied in calculations of the scatter angle in
two-body hyperbolic encounters, working order by order in the mass ratio
(assumed small) but with no recourse to a weak-field approximation. This, in
turn, can inform ongoing efforts to construct an accurate model of the
general-relativistic binary dynamics via an effective-one-body description and
other semi-analytical approaches. Existing self-force methods are to a large
extent specialised to bound, inspiral orbits. Here we develop a technique for
(numerical) self-force calculations that can efficiently tackle scatter orbits.
The method is based on a time-domain reconstruction of the metric perturbation
from a scalar-like Hertz potential that satisfies the Teukolsky equation, an
idea pursued so far only for bound orbits. The crucial ingredients in this
formulation are certain jump conditions that (each multipole mode of) the Hertz
potential must satisfy along the orbit, in a 1+1-dimensional multipole
reduction of the problem. We obtain a closed-form expression for these jumps,
for an arbitrary geodesic orbit in Schwarzschild spacetime, and present a full
numerical implementation for a scatter orbit. In this paper we focus on method
development, and go only as far as calculating the Hertz potential; a
calculation of the self-force and its physical effects on the scatter orbit
will be the subject of forthcoming work.
|
This study aimed to provide a framework to evaluate team attacking
performances in rugby league using 59,233 plays from 180 Super League matches
via expected possession value (EPV) models. The EPV-308 split the pitch into
308 5m x 5m zones, the EPV-77 split the pitch into 77 10m x 10m zones and the
EPV-19 split the pitch into 19 zones of variable size dependent on the total zone
value generated during a match. Attacking possessions were considered as Markov
Chains, allowing the value of each zone visited to be estimated based on the
outcome of the possession. The Kullback-Leibler Divergence was used to evaluate
the reproducibility of the value generated from each zone (the reward
distribution) by teams between matches. The EPV-308 had the greatest
variability and lowest reproducibility, compared to EPV-77 and EPV-19. When six
previous matches were considered, the team's subsequent match attacking
performances had a similar reward distribution for EPV-19, EPV-77 and EPV-308
on 95 +/- 4%, 51 +/- 12% and 0 +/- 0% of occasions. This study supports the use
of EPV-19 to evaluate team attacking performance in rugby league and provides a
simple framework through which attacking performances can be compared between
teams.
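As a toy illustration of the reproducibility measure used here, the Kullback-Leibler divergence between two normalised per-zone reward distributions can be computed as follows. The zone values are made-up numbers for a 4-zone pitch, not data from the study or any of the EPV-19/77/308 grids:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete
    reward distributions (per-zone value shares normalised to sum to 1)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical per-zone reward shares for one team in two matches.
match_a = [0.40, 0.30, 0.20, 0.10]
match_b = [0.38, 0.32, 0.18, 0.12]

d = kl_divergence(match_a, match_b)  # small value = similar distributions
```

A smaller divergence between consecutive matches indicates a more reproducible reward distribution, which is the sense in which the coarser EPV-19 zoning is favoured.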
|
Given $X$ a finite nilpotent simplicial set, consider the classifying
fibrations $$ X\to Baut_G^*(X)\to Baut_G(X),\qquad X\to Z\to Baut_{\pi}^*(X),
$$
where $G$ and $\pi$ denote, respectively, subgroups of the free and pointed
homotopy classes of free and pointed self homotopy equivalences of $X$ which
act nilpotently on $H_*(X)$ and $\pi_*(X)$.
We give algebraic models, in terms of complete differential graded Lie
algebras (cdgl's), of the rational homotopy type of these fibrations.
Explicitly, if $L$ is a cdgl model of $X$, there are connected sub-cdgl's
$Der^G L$ and $Der^{\pi} L$ of the Lie algebra $Der L$ of derivations of $L$
such that the geometric realizations of the sequences of cdgl morphisms
$$
L\stackrel{ad}{\to} Der^G L\to Der^G L\widetilde\times sL,\qquad L\to
L\widetilde\times Der^{\pi} L\to Der^{\pi} L
$$
have the rational homotopy type of the above classifying fibrations. Among
the consequences we also describe in cdgl terms the Malcev $Q$-completion of
$G$ and $\pi$ together with the rational homotopy type of the classifying
spaces $BG $ and $B\pi$.
|
When people observe events, they are able to abstract key information and
build concise summaries of what is happening. These summaries include
contextual and semantic information describing the important high-level details
(what, where, who and how) of the observed event and exclude background
information that is deemed unimportant to the observer. With this in mind, the
descriptions people generate for videos of different dynamic events can greatly
improve our understanding of the key information of interest in each video.
These descriptions can be captured in captions that provide expanded attributes
for video labeling (e.g. actions/objects/scenes/sentiment/etc.) while allowing
us to gain new insight into what people find important or necessary to
summarize specific events. Existing caption datasets for video understanding
are either small in scale or restricted to a specific domain. To address this,
we present the Spoken Moments (S-MiT) dataset of 500k spoken captions each
attributed to a unique short video depicting a broad range of different events.
We collect our descriptions using audio recordings to ensure that they remain
as natural and concise as possible while allowing us to scale to the size of a
large classification dataset. In order to utilize our proposed dataset, we
present a novel Adaptive Mean Margin (AMM) approach to contrastive learning and
evaluate our models on video/caption retrieval on multiple datasets. We show
that our AMM approach consistently improves our results and that models trained
on our Spoken Moments dataset generalize better than those trained on other
video-caption datasets.
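The Adaptive Mean Margin (AMM) loss is not specified in this abstract; the sketch below shows only the general flavor of a margin-based contrastive loss whose margin adapts to the batch-mean negative similarity, applied to a hypothetical video-caption similarity matrix. The function name, margin rule, and numbers are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def adaptive_margin_contrastive_loss(sim, margin_scale=1.0):
    """Hinge-style contrastive loss on an NxN similarity matrix
    (row i = video i, column j = caption j; diagonal = matched pairs).
    The margin adapts to the mean off-diagonal (negative) similarity --
    an illustrative guess at an adaptive mean margin, not the paper's AMM."""
    n = sim.shape[0]
    pos = np.diag(sim)
    off = sim[~np.eye(n, dtype=bool)].reshape(n, n - 1)
    margin = margin_scale * off.mean()  # adaptive part: batch-mean negative
    # negatives should trail their row's positive by at least `margin`
    return float(np.maximum(0.0, margin + off - pos[:, None]).mean())

# Demo on hypothetical 4x4 similarity matrices.
good = 0.2 * np.ones((4, 4)) + 0.7 * np.eye(4)  # matched pairs dominate
bad = 0.5 * np.ones((4, 4))                     # no separation at all
loss_good = adaptive_margin_contrastive_loss(good)
loss_bad = adaptive_margin_contrastive_loss(bad)
```

Under this toy rule, a batch with well-separated matched pairs incurs zero loss, while an undiscriminating similarity matrix is penalized.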
|
The problem of uniqueness of universal formulae for (quantum) dimensions of
simple Lie algebras is investigated. We present generic functions, which
multiplied by a universal (quantum) dimension formula, preserve both its
structure and its values at the points from Vogel's table. Connection of some
of these functions with geometrical configurations, such as the famous
Pappus-Brianchon-Pascal $(9_3)_1$ configuration of points and lines, is
established. In particular, the appropriate realizable configuration
$(144_3\,36_{12})$ (yet to be found) would provide a symmetric non-uniqueness
factor for any universal dimension formula.
|
We introduce a new approach to the study of the Problem of Iterates using
the theory of general ultradifferentiable structures developed in recent
years. Our framework generalizes many of the previous settings including the
Gevrey case and enables us, for the first time, to prove non-analytic Theorems
of Iterates for non-elliptic differential operators. In particular, by
generalizing a Theorem of Baouendi and Metivier we obtain the Theorem of
Iterates for hypoelliptic analytic operators of principal type with respect to
several non-analytic ultradifferentiable structures.
|
Digitalization is forging its path in the architecture, construction,
engineering, operation (AECO) industry. This trend demands not only solutions
for data governance but also sophisticated cyber-physical systems involving
stakeholders with highly varied backgrounds and very complex requirements.
Existing approaches to general requirements engineering ignore the context of
the AECO industry. This makes it harder for software engineers, who usually
lack knowledge of the industry context, to elicit, analyze and structure the
requirements and to communicate effectively with AECO professionals. To live up
to that task, we present an approach and a tool for collecting AECO-specific
software requirements with the aim to foster reuse and leverage domain
knowledge. We introduce a common scenario space, propose a novel choice of a
ubiquitous language well-suited for this particular industry and develop a
systematic way to refine the scenario ontologies based on the exploration of
the scenario space. The viability of our approach is demonstrated on an
ontology of 20 practical scenarios from a large project aiming to develop a
digital twin of a construction site.
|
In this communication we test the hypothesis that for some initial conditions
the time evolution of surface waves according to the extended KdV equation
(KdV2) exhibits signatures of deterministic chaos.
|
A novel class of integrable $\sigma$-models interpolating between exact coset
conformal field theories in the IR and hyperbolic spaces in the UV is
constructed. We demonstrate the relation to the asymptotic limit of
$\lambda$-deformed models for cosets of non-compact groups. An integrable model
interpolating between two spacetimes with cosmological and black hole
interpretations and exact conformal field theory descriptions is also provided.
In the process of our work, a new zoom-in limit, distinct from the well known
non-Abelian T-duality limit, is found.
|
In this paper, we first study the well-posedness of a class of McKean-Vlasov
stochastic partial differential equations driven by cylindrical $\alpha$-stable
processes, where $\alpha\in(1,2)$. Then, by Khasminskii's time-discretization
method, we prove the averaging principle for a class of multiscale
McKean-Vlasov stochastic partial differential equations driven by cylindrical
$\alpha$-stable processes. Meanwhile, we obtain a specific strong convergence
rate.
|
The existence of moments of first downwards passage times of a spectrally
negative L\'evy process is governed by the general dynamics of the L\'evy
process, i.e. whether it is drifting to $+\infty$, $-\infty$ or oscillates.
Whenever the L\'evy process drifts to $+\infty$, we prove that the $\kappa$-th
moment of the first passage time (conditioned to be finite) exists if and only
if the $(\kappa+1)$-th moment of the L\'evy jump measure exists, thus
generalizing a result shown earlier by Delbaen for Cram\'er-Lundberg risk
processes \cite{Delbaen1990}. Whenever the L\'evy process drifts to $-\infty$
we prove that all moments of the passage time exist, while for an oscillating
L\'evy process we derive conditions for non-existence of the moments and in
particular we show that no integer moments exist. Moreover, we provide general
formulae for integer moments of the first passage time (whenever they exist) in
terms of the scale function of the L\'evy process and its derivatives and
antiderivatives.
|
We present a direct speech-to-speech translation (S2ST) model that translates
speech from one language to speech in another language without relying on
intermediate text generation. Previous work addresses the problem by training
an attention-based sequence-to-sequence model that maps source speech
spectrograms into target spectrograms. To tackle the challenge of modeling
continuous spectrogram features of the target speech, we propose to predict the
self-supervised discrete representations learned from an unlabeled speech
corpus instead. When target text transcripts are available, we design a
multitask learning framework with joint speech and text training that enables
the model to generate dual mode output (speech and text) simultaneously in the
same inference pass. Experiments on the Fisher Spanish-English dataset show
that predicting discrete units and joint speech and text training improve model
performance by 11 BLEU compared with a baseline that predicts spectrograms, and
bridge 83% of the performance gap towards a cascaded system. When trained
without any text transcripts, our model achieves similar performance as a
baseline that predicts spectrograms and is trained with text data.
|
In this contribution, we present the implementation of a second-order CASSCF
algorithm in conjunction with the Cholesky decomposition of the two-electron
repulsion integrals. The algorithm, called Norm-Extended Optimization,
guarantees convergence of the optimization, but it involves the full Hessian of
the wavefunction and is therefore computationally expensive. Coupling the
second-order procedure with the Cholesky decomposition leads to a significant
reduction in the computational cost, reduced memory requirements, and an
improved parallel performance. As a result, CASSCF calculations of larger
molecular systems become possible as a routine task. The performance of the new
implementation is illustrated by means of benchmark calculations on molecules
of increasing size, with up to about 3000 basis functions and 14 active
orbitals.
|
Connected and Automated Vehicles (CAVs) are envisioned to transform the
future industrial and private transportation sectors. Due to the complexity of
the systems, functional verification and validation of safety aspects are
essential before the technology merges into the public domain. In recent years,
a scenario-driven approach has gained acceptance for CAVs, emphasizing the
need for a solid database of scenarios. The large-scale research
facility Test Bed Lower Saxony (TFNDS) enables the provision of substantial
information for a database of scenarios on motorways. For that purpose,
however, the scenarios of interest must be identified and categorized in the
collected trajectory data. This work addresses this problem and proposes a
framework for on-ramp scenario identification that also enables scenario
categorization and assessment. The efficacy of the framework is shown with a
dataset collected on the TFNDS.
|
In this paper, we present an efficient spatial-temporal representation for
video person re-identification (reID). Firstly, we propose a Bilateral
Complementary Network (BiCnet) for spatial complementarity modeling.
Specifically, BiCnet contains two branches. Detail Branch processes frames at
original resolution to preserve the detailed visual clues, and Context Branch
with a down-sampling strategy is employed to capture long-range contexts. On
each branch, BiCnet appends multiple parallel and diverse attention modules to
discover divergent body parts for consecutive frames, so as to obtain an
integral characteristic of target identity. Furthermore, a Temporal Kernel
Selection (TKS) block is designed to capture short-term as well as long-term
temporal relations in an adaptive mode. TKS can be inserted into BiCnet at any
depth to construct BiCnet-TKS for spatial-temporal modeling. Experimental
results on multiple benchmarks show that BiCnet-TKS outperforms
state-of-the-art methods with about 50% less computation. The source code is
available at https://github.com/blue-blue272/BiCnet-TKS.
|
Leddar PixSet is a new publicly available dataset (dataset.leddartech.com)
for autonomous driving research and development. One key novelty of this
dataset is the presence of full-waveform data from the Leddar Pixell sensor, a
solid-state flash LiDAR. Full-waveform data has been shown to improve the
performance of perception algorithms in airborne applications but is yet to be
demonstrated for terrestrial applications such as autonomous driving. The
PixSet dataset contains approximately 29k frames from 97 sequences recorded in
high-density urban areas, using a set of various sensors (cameras, LiDARs,
radar, IMU, etc.). Each frame has been manually annotated with 3D bounding
boxes.
|
Deep neural networks are susceptible to poisoning attacks that purposely
pollute the training data with specific triggers. As existing attacks have
mainly focused on attack success rate with patch-based samples, defense
algorithms can easily detect these poisoning samples. We propose DeepPoison, a
novel
adversarial network of one generator and two discriminators, to address this
problem. Specifically, the generator automatically extracts the target class'
hidden features and embeds them into benign training samples. One discriminator
controls the ratio of the poisoning perturbation. The other discriminator works
as the target model to verify the poisoning effects. The novelty of DeepPoison
lies in that the generated poisoned training samples are indistinguishable from
the benign ones by both defensive methods and manual visual inspection, and
even benign test samples can achieve the attack. Extensive experiments have
shown that DeepPoison can achieve a state-of-the-art attack success rate, as
high as 91.74%, with only 7% poisoned samples on publicly available datasets
LFW and CASIA. Furthermore, we have experimented with high-performance defense
algorithms such as autodecoder defense and DBSCAN cluster detection and showed
the resilience of DeepPoison.
|
We consider the system of sticky-reflected Brownian particles on the real
line proposed in [arXiv:1711.03011]. The model is a modification of the
Howitt-Warren flow but now the diffusion rate of particles is inversely
proportional to the mass which they transfer. It is known that the system
consists of a finite number of distinct particles for almost all times. In this
paper, we show that the system also admits an infinite number of distinct
particles on a dense subset of the time interval if and only if the function
responsible for the splitting of particles takes an infinite number of values.
|
In this paper, we present LookOut, a novel autonomy system that perceives the
environment, predicts a diverse set of futures of how the scene might unfold
and estimates the trajectory of the SDV by optimizing a set of contingency
plans over these future realizations. In particular, we learn a diverse joint
distribution over multi-agent future trajectories in a traffic scene that
covers a wide range of future modes with high sample efficiency while
leveraging the expressive power of generative models. Unlike previous work in
diverse motion forecasting, our diversity objective explicitly rewards sampling
future scenarios that require distinct reactions from the self-driving vehicle
for improved safety. Our contingency planner then finds comfortable and
non-conservative trajectories that ensure safe reactions to a wide range of
future scenarios. Through extensive evaluations, we show that our model
demonstrates significantly more diverse and sample-efficient motion forecasting
in a large-scale self-driving dataset as well as safer and less-conservative
motion plans in long-term closed-loop simulations when compared to current
state-of-the-art models.
|
High-dimensional, low sample-size (HDLSS) data problems have been a topic of
immense importance for the last couple of decades. There is a vast literature
that proposed a wide variety of approaches to deal with this situation, among
which variable selection was a compelling idea. On the other hand, a deep
neural network has been used to model complicated relationships and
interactions among responses and features, which is hard to capture using a
linear or an additive model. In this paper, we discuss the current status of
variable selection techniques with the neural network models. We show that the
stage-wise algorithm with a neural network suffers from disadvantages, such as
inconsistency of the variables entering the model at later stages. We then
propose an ensemble method to achieve better variable selection and prove that
the probability of selecting a false variable tends to zero. Then, we discuss
additional regularization to deal with over-fitting and make better regression
and classification. We study various statistical properties of our proposed
method. Extensive simulations and real data examples are provided to support
the theory and methodology.
|
Delays in the availability of vaccines are costly as the pandemic continues.
However, in the presence of adjustment costs firms have an incentive to
increase production capacity only gradually. The existing contracts specify
only a fixed quantity to be supplied over a certain period and thus provide no
incentive for an accelerated buildup in capacity. A high price does not change
this. The optimal contract would specify a decreasing price schedule over time
which can replicate the social optimum.
|
Reaction-Diffusion equations can present solutions in the form of traveling
waves. Such solutions evolve in different spatial and temporal scales and it is
desired to construct numerical methods that can adapt the spatial refinement at
locations where the solution has large gradients. In this work we develop a high-order
adaptive mesh method based on Chebyshev polynomials with a multidomain approach
for the traveling wave solutions of reaction-diffusion systems, where the
proposed method uses the non-conforming and non-overlapping spectral
multidomain method with the temporal adaptation of the computational mesh.
Contrary to the existing multidomain spectral methods for reaction-diffusion
equations, the proposed multidomain spectral method solves the given PDEs in
each subdomain locally first and the boundary and interface conditions are
solved in a global manner. In this way, the method is parallelizable and
efficient for large reaction-diffusion systems. We show that the proposed
method is stable and provide both the one- and two-dimensional numerical
results that show the efficacy of the proposed method.
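The Chebyshev collocation machinery underlying such a multidomain solver can be sketched with the standard differentiation-matrix construction on Chebyshev-Gauss-Lobatto points. This is a generic textbook construction, not the authors' implementation:

```python
import numpy as np

def cheb(n):
    """Chebyshev-Gauss-Lobatto nodes and first-derivative collocation
    matrix D on [-1, 1]; D @ u approximates u'(x) at the nodes."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T          # X[i, j] = x_i
    dX = X - X.T                          # x_i - x_j
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))           # negative-sum trick for the diagonal
    return D, x

# Spectral accuracy on a single subdomain: differentiate u(x) = exp(x).
D, x = cheb(16)
err = float(np.max(np.abs(D @ np.exp(x) - np.exp(x))))
```

In a multidomain setting, one such matrix is built per subdomain (after an affine map from [-1, 1]) and the local solves are coupled only through interface conditions, which is what makes the local-solve/global-interface split parallelizable.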
|
Sentence insertion is a delicate but fundamental NLP problem. Current
approaches in sentence ordering, text coherence, and question answering (QA)
are neither suitable nor good at solving it. In this paper, we propose
InsertGNN, a simple yet effective model that represents the problem as a graph
and adopts a Graph Neural Network (GNN) to learn the connections between
sentences. It is supervised by both local and global information, so that the
local interactions of neighboring sentences can be considered. To the best
of our knowledge, this is the first recorded attempt to apply a supervised
graph-structured model in sentence insertion. We evaluate our method in our
newly collected TOEFL dataset and further verify its effectiveness on the
larger arXiv dataset using cross-domain learning. The experiments show that
InsertGNN outperforms the unsupervised text coherence method, the topological
sentence ordering approach, and the QA architecture. Specifically, it achieves
an accuracy of 70%, rivaling the average human test scores.
|
We provide a queueing-theoretic framework for job replication schemes based
on the principle "\emph{replicate a job as soon as the system detects it as a
\emph{straggler}}". This is called job \emph{speculation}. Recent works have
analyzed replication on arrival, which we refer to as \emph{replication}.
Replication is motivated by its implementation in Google's BigTable. However,
systems such as Apache Spark and Hadoop MapReduce implement speculative job
execution. The performance and optimization of speculative job execution is not
well understood. To this end, we propose a queueing network model for load
balancing where each server can speculate on the execution time of a job.
Specifically, each job is initially assigned to a single server by a frontend
dispatcher. Then, when its execution begins, the server sets a timeout. If the
job completes before the timeout, it leaves the network, otherwise the job is
terminated and relaunched or resumed at another server where it will complete.
We provide a necessary and sufficient condition for the stability of
speculative queueing networks with heterogeneous servers, general job sizes and
scheduling disciplines. We find that speculation can increase the stability
region of the network when compared with standard load balancing models and
replication schemes. We provide general conditions under which timeouts
increase the size of the stability region and derive a formula for the optimal
speculation time, i.e., the timeout that minimizes the load induced through
speculation. We compare speculation with redundant-$d$ and
redundant-to-idle-queue-$d$ rules under an $S\& X$ model. For lightly loaded
systems, redundancy schemes provide better response times. However, for
moderate to heavy loadings, redundancy schemes can lose capacity and have
markedly worse response times when compared with a speculative scheme.
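A minimal single-server Monte Carlo sketch (our own toy model, not the paper's queueing network) illustrates why speculation helps with heavy-tailed job sizes: killing a job at a timeout and relaunching it with a fresh, independent service time cuts the mean response time below the raw mean of 3 for Pareto jobs with shape 1.5:

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto_sample(size):
    # Pareto with scale 1 and shape 1.5: heavy-tailed, mean = 3
    return rng.pareto(1.5, size) + 1.0

def mean_response_with_speculation(timeout, n=100_000):
    """Mean response time when any job still running at `timeout` is
    killed and relaunched with a fresh, independent service time."""
    total = np.zeros(n)
    active = np.ones(n, dtype=bool)
    while active.any():
        s = pareto_sample(int(active.sum()))
        finished = s <= timeout
        total[active] += np.where(finished, s, timeout)
        idx = np.flatnonzero(active)
        active[idx[finished]] = False
    return float(total.mean())

m = mean_response_with_speculation(timeout=2.0)  # analytically ~2.45 < 3
```

The timeout trades a small penalty on jobs that barely miss it against escaping the heavy tail, which is the mechanism behind the optimal speculation time discussed above.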
|
We present the first intensive continuum reverberation mapping study of the
high accretion rate Seyfert galaxy Mrk 110. The source was monitored almost
daily for more than 200 days with the Swift X-ray and UV/optical telescopes,
supported by ground-based observations from Las Cumbres Observatory, the
Liverpool Telescope, and the Zowada Observatory, thus extending the wavelength
coverage to 9100 \r{A}. Mrk 110 was found to be significantly variable at all
wavebands. Analysis of the intraband lags reveals two different behaviours,
depending on the timescale. On timescales shorter than 10 days the lags,
relative to the shortest UV waveband ($\sim1928$ \r{A}), increase with
increasing wavelength up to a maximum of $\sim2$ days lag for the longest
waveband ($\sim9100$ \r{A}), consistent with the expectation from disc
reverberation. On longer timescales, however, the g-band lags the Swift BAT
hard X-rays by $\sim10$ days, with the z-band lagging the g-band by a similar
amount, which cannot be explained in terms of simple reprocessing from the
accretion disc. We interpret this result as an interplay between the emission
from the accretion disc and diffuse continuum radiation from the broad line
region.
|
Coronal loop observations have existed for many decades yet the precise shape
of these fundamental coronal structures is still widely debated since the
discovery that they appear to undergo negligible expansion between their
footpoints and apex. In this work a selection of eight EUV loops and their
twenty-two sub-element strands are studied from the second successful flight of
NASA's High resolution Coronal Imager (Hi-C 2.1). Four of the loops correspond
to open fan structures with the other four considered to be magnetically closed
loops. Width analysis is performed on the loops and their sub-resolution
strands using our method of fitting multiple Gaussian profiles to
cross-sectional intensity slices. It is found that whilst the magnetically
closed loops and their sub-element strands do not expand along their observable
length, open fan structures may expand an additional 150% of their initial
width. Following recent work, the Pearson correlation coefficients between peak
intensity and loop/strand width are found to be predominantly positive
for the loops (~88%) and their sub-element strands (~80%). These
results align with the hypothesis of Klimchuk & DeForest that loops and - for
the first time - their sub-element strands have approximately circular
cross-sectional profiles.
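The multi-Gaussian width-fitting step can be illustrated on a synthetic cross-sectional intensity slice. The numbers are hypothetical and the actual Hi-C 2.1 analysis pipeline is not reproduced here; `scipy` is assumed to be available:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, mu1, s1, a2, mu2, s2, bg):
    """Sum of two Gaussian profiles plus a constant background,
    mimicking an intensity slice across two blended strands."""
    g1 = a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2)
    return g1 + g2 + bg

# Synthetic slice with two strands (made-up parameters) plus noise.
x = np.linspace(-5, 5, 200)
truth = (1.0, -1.2, 0.8, 0.6, 1.5, 0.5, 0.1)
rng = np.random.default_rng(1)
y = two_gaussians(x, *truth) + rng.normal(0.0, 0.01, x.size)

popt, _ = curve_fit(two_gaussians, x, y,
                    p0=(0.9, -1.0, 1.0, 0.5, 1.3, 0.7, 0.0))
widths_fwhm = 2.355 * np.abs(popt[[2, 5]])  # FWHM of each fitted strand
```

Comparing fitted widths at many positions along a loop is how expansion (or its absence) between footpoint and apex is quantified.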
|
Searching for novel antiferromagnetic materials with large magnetotransport
response is highly demanded for constructing future spintronic devices with
high stability, fast switching speed, and high density. Here we report a
colossal anisotropic magnetoresistance (AMR) effect in an antiferromagnetic
binary compound, the layered rare-earth dichalcogenide EuTe2. The AMR
reaches 40000%, which is 4 orders of magnitude larger than that in conventional
antiferromagnetic alloys. Combined magnetization, resistivity, and theoretical
analysis reveal that the colossal AMR effect is attributed to a novel mechanism
of vector-field tunable band structure, rather than the conventional spin-orbit
coupling mechanism. Moreover, it is revealed that the strong hybridization
between orbitals of Eu-layer with localized spin and Te-layer with itinerant
carriers is extremely important for the large AMR effect. Our results suggest a
new direction towards exploring AFM materials with prominent magnetotransport
properties, which creates an unprecedented opportunity for AFM spintronics
applications.
|
Atomically precise dopant arrays in Si are being pursued for solid-state
quantum computing applications. We propose a guided self-assembly process to
produce atomically precise arrays of single dopant atoms in lieu of
lithographic patterning. We leverage the self-assembled c(4x2) structure formed
on Br- and I-Si(100) and investigate molecular precursor adsorption into the
generated array of single-dimer window (SDW) adsorption sites with density
functional theory (DFT). The adsorption of several technologically relevant
dopant precursors (PH$_3$, BCl$_3$, AlCl$_3$, GaCl$_3$) into SDWs formed with
various resists (H, Cl, Br, I) are explored to identify the effects of steric
interactions. PH$_3$ adsorbed without barrier on all resists studied, while
BCl$_3$ exhibited the largest adsorption barrier, 0.34 eV, with an I resist.
Dense arrays of AlCl$_3$ were found to form within experimentally realizable
conditions, demonstrating the potential for the proposed use of guided
self-assembly for atomically precise fabrication of dopant-based devices.
|
Fractionalization is a phenomenon in which strong interactions in a quantum
system drive the emergence of excitations with quantum numbers that are absent
in the building blocks. Outstanding examples are excitations with charge e/3 in
the fractional quantum Hall effect, solitons in one-dimensional conducting
polymers and Majorana states in topological superconductors. Fractionalization
is also predicted to manifest itself in low-dimensional quantum magnets, such
as one-dimensional antiferromagnetic S = 1 chains. The fundamental features of
this system are gapped excitations in the bulk and, remarkably, S = 1/2 edge
states at the chain termini, leading to a four-fold degenerate ground state
that reflects the underlying symmetry-protected topological order. Here, we use
on-surface synthesis to fabricate one-dimensional spin chains that contain the
S = 1 polycyclic aromatic hydrocarbon triangulene as the building block. Using
scanning tunneling microscopy and spectroscopy at 4.5 K, we probe
length-dependent magnetic excitations at the atomic scale in both open-ended
and cyclic spin chains, and directly observe gapped spin excitations and
fractional edge states therein. Exact diagonalization calculations provide
conclusive evidence that the spin chains are described by the S = 1
bilinear-biquadratic Hamiltonian in the Haldane symmetry-protected topological
phase. Our results open a bottom-up approach to study strongly correlated
quantum spin liquid phases in purely organic materials, with the potential for
the realization of measurement-based quantum computation.
|
We compute the 3d N = 2 superconformal indices for 3d/1d coupled systems,
which arise as the worldvolume theories of intersecting surface defects
engineered by Higgsing 5d N = 1 gauge theories. We generalize some known 3d
dualities, including non-Abelian 3d mirror symmetry and 3d/3d correspondence,
to some of the simple 3d/1d coupled systems. Finally we propose a q-Virasoro
construction for the superconformal indices.
|
We study termination of higher-order probabilistic functional programs with
recursion, stochastic conditioning and sampling from continuous distributions.
Reasoning about the termination probability of programs with continuous
distributions is hard, because the enumeration of terminating executions cannot
provide any non-trivial bounds. We present a new operational semantics based on
traces of intervals, which is sound and complete with respect to the standard
sampling-based semantics, in which (countable) enumeration can provide
arbitrarily tight lower bounds. Consequently we obtain the first proof that
deciding almost-sure termination (AST) for programs with continuous
distributions is $\Pi^0_2$-complete. We also provide a compositional
representation of our semantics in terms of an intersection type system.
In the second part, we present a method of proving AST for non-affine
programs, i.e., recursive programs that can, during the evaluation of the
recursive body, make multiple recursive calls (of a first-order function) from
distinct call sites. Unlike in a deterministic language, the number of
recursion call sites has direct consequences on the termination probability.
Our framework supports a proof system that can verify AST for programs that are
well beyond the scope of existing methods.
We have constructed prototype implementations of our method of computing
lower bounds of termination probability, and AST verification.
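The lower-bound-by-enumeration idea is easy to see for a discrete-coin toy program (the paper's contribution concerns continuous distributions, where interval traces are needed instead). For the non-affine program f(): if coin(p) then return else (f(); f()), counting only executions that terminate within a bounded recursion depth yields iterates converging from below to the least fixpoint of q = p + (1-p)q^2, i.e. the true termination probability:

```python
def termination_lb(p, depth):
    """Depth-bounded lower bounds on the termination probability of
    f(): if coin(p) then return else (f(); f()).
    Iterating q <- p + (1-p)*q^2 from q = 0 counts exactly the
    executions that terminate within the given recursion depth."""
    q = 0.0
    for _ in range(depth):
        q = p + (1.0 - p) * q * q
    return q

lb = termination_lb(0.4, 500)    # least fixpoint is 2/3 for p = 0.4
ast = termination_lb(0.6, 5000)  # for p > 1/2 the bounds approach 1 (AST)
```

The two recursive call sites make the fixpoint equation quadratic, which is exactly why the number of call sites directly affects the termination probability.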
|
Conventional imaging only records photons directly sent from the object to
the detector, while non-line-of-sight (NLOS) imaging takes the indirect light
into account. Most NLOS solutions employ a transient scanning process, followed
by a physics-based algorithm to reconstruct the NLOS scenes. However,
transient detection requires sophisticated apparatus, with long scanning times
and low robustness to the ambient environment, and the reconstruction
algorithms are typically time-consuming and computationally expensive. Here we
propose a new NLOS solution to address these drawbacks, with innovations in
both equipment and algorithm. We apply an inexpensive commercial Lidar for
detection, with a much higher scanning speed and better compatibility with
real-world imaging. Our reconstruction framework is deep-learning based, with a
generative two-step remapping strategy to guarantee high reconstruction
fidelity. The overall detection and reconstruction process allows for
millisecond responses, with millimeter-level reconstruction precision. We have
experimentally tested the
proposed solution on both synthetic and real objects, and further demonstrated
our method to be applicable to full-color NLOS imaging.
|
In this paper we revisit the memristor concept within circuit theory. We
start from the definition of the basic circuit elements, then we introduce the
original formulation of the memristor concept and summarize some of the
controversies on its nature. We also point out the ambiguities resulting from a
non rigorous usage of the flux linkage concept. After concluding that the
memristor is not a fourth basic circuit element, prompted by recent claims in
the memristor literature, we look into the application of the memristor concept
to electrophysiology, realizing that an approach suitable to explain the
observed inductive behavior of the giant squid axon had already been developed
in the 1960s, with the introduction of "time-variant resistors." We also
discuss a recent memristor implementation in which the magnetic flux plays a
direct role, concluding that it cannot strictly qualify as a memristor, because
its $v-i$ curve cannot exactly pinch at the origin. Finally, we present
numerical simulations of a few memristors and memristive systems, focusing on
the behavior in the $\varphi-q$ plane. We show that, contrary to what happens
for the most basic memristor concept, for general memristive systems the
$\varphi-q$ curve is not single-valued or not even closed.
|
Concept drift detectors allow learning systems to maintain good accuracy on
non-stationary data streams. Financial time series are an instance of
non-stationary data streams whose concept drifts (market phases) are
significant enough to affect investment decisions worldwide. This paper studies how
concept drift detectors behave when applied to financial time series. General
results are: a) concept drift detectors usually improve the runtime over
continuous learning, b) their computational cost is usually a fraction of the
learning and prediction steps of even basic learners, c) it is important to
study concept drift detectors in combination with the learning systems they
will operate with, and d) concept drift detectors can be directly applied to
the time series of raw financial data, not only to that of the model's accuracy.
Moreover, the study introduces three simple concept drift detectors, tailored
to financial time series, and shows that two of them can be at least as
effective as the most sophisticated ones from the state of the art when applied
to financial time series. Currently submitted to Pattern Recognition
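In the spirit of the simple detectors described above, a drift check can be as basic as comparing a recent window of the raw series against a reference window (the window sizes, threshold, and synthetic series below are illustrative assumptions, not the paper's detectors):

```python
from statistics import mean, stdev
import random

def detect_drifts(series, ref_len=50, cur_len=20, k=3.0):
    """Flag a drift when the mean of the recent window deviates from the
    reference window by more than k reference standard deviations.
    A deliberately simple sketch; established detectors (DDM, ADWIN, ...)
    are more refined, but the window-comparison idea is the same."""
    drifts = []
    start = 0  # reference window restarts after each detected drift
    for t in range(len(series)):
        ref = series[start:start + ref_len]
        cur = series[max(start + ref_len, t - cur_len + 1):t + 1]
        if len(ref) < ref_len or len(cur) < cur_len:
            continue
        sigma = stdev(ref) or 1e-12
        if abs(mean(cur) - mean(ref)) > k * sigma:
            drifts.append(t)
            start = t + 1  # treat post-drift data as the new concept
    return drifts

# Synthetic "price" series with a regime change at t = 200.
random.seed(0)
series = [random.gauss(100, 1) for _ in range(200)] + \
         [random.gauss(110, 1) for _ in range(200)]
print(detect_drifts(series))  # a single detection shortly after t = 200
```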
|
One of the pillars of any machine learning model is its concepts. Using
software engineering, we can engineer these concepts and then develop and
expand them. In this article, we present a SELM framework for Software
Engineering of machine Learning Models. We then evaluate this framework through
a case study. Using the SELM framework, we can improve the efficiency of a
machine learning process and achieve higher learning accuracy with less
processing hardware and a smaller training dataset. This highlights the
importance of an interdisciplinary approach to machine learning. Therefore, in
this article, we also provide proposals for interdisciplinary machine learning
teams.
|
Bures distance holds a special place among various distance measures due to
its several distinguished features and finds applications in diverse problems
in quantum information theory. It is related to fidelity and, among other
things, it serves as a bona fide measure for quantifying the separability of
quantum states. In this work, we calculate exact analytical results for the
mean root fidelity and mean square Bures distance between a fixed density
matrix and a random density matrix, and also between two random density
matrices. In the course of the derivation, we also obtain the spectral density
for the product of the above pairs of density matrices. We corroborate our analytical
results using Monte Carlo simulations. Moreover, we compare these results with
the mean square Bures distance between reduced density matrices generated using
coupled kicked tops and find very good agreement.
|
Let $\mathcal{L}=-\Delta+\mathit{V}(x)$ be a Schr\"{o}dinger operator, where
$\Delta$ is the Laplacian operator on $\mathbb{R}^{d}$ $(d\geq 3)$, while the
nonnegative potential $\mathit{V}(x)$ belongs to the reverse H\"{o}lder class
$B_{q}, q>d/2$. In this paper, we study weighted compactness of commutators of
some Schr\"{o}dinger operators, which include Riesz transforms, standard
Calder\'{o}n-Zygmund operators and Littlewood-Paley functions. These results
substantially generalize some well-known results.
|
Dual decomposition is widely utilized in distributed optimization of
multi-agent systems. In practice, the dual decomposition algorithm is desired
to admit an asynchronous implementation due to imperfect communication, such as
time delay and packet drop. In addition, computational errors also exist when
individual agents solve their own subproblems. In this paper, we analyze the
convergence of the dual decomposition algorithm in distributed optimization
when both the asynchrony in communication and the inexactness in solving
subproblems exist. We find that the interaction between asynchrony and
inexactness slows down the convergence rate from $\mathcal{O} ( 1 / k )$ to
$\mathcal{O} ( 1 / \sqrt{k} )$. Specifically, with a constant step size, the
value of objective function converges to a neighborhood of the optimal value,
and the solution converges to a neighborhood of the exact optimal solution.
Moreover, the violation of the constraints diminishes in $\mathcal{O} ( 1 /
\sqrt{k} )$. Our result generalizes and unifies the existing ones that only
consider either asynchrony or inexactness. Finally, numerical simulations
validate the theoretical results.
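The basic primal-dual loop of dual decomposition, here in its synchronous and exact form on a toy consensus problem (not the asynchronous/inexact setting analyzed in the paper), can be sketched as:

```python
# Toy problem:  minimize (x1 - 1)^2 + (x2 - 3)^2  subject to  x1 = x2.
# The Lagrangian (x1-1)^2 + (x2-3)^2 + lam*(x1 - x2) separates into two
# subproblems solved in closed form; the dual variable lam is updated by
# gradient ascent on the constraint residual x1 - x2.
lam, step = 0.0, 0.5
for _ in range(100):
    x1 = 1.0 - lam / 2.0   # argmin over x1 of (x1-1)^2 + lam*x1
    x2 = 3.0 + lam / 2.0   # argmin over x2 of (x2-3)^2 - lam*x2
    lam += step * (x1 - x2)

print(x1, x2)  # both converge to the consensus optimum x = 2
```

Asynchrony (stale x1, x2 in the dual update) and inexactness (subproblems solved only approximately) perturb precisely this loop, which is what slows the convergence rate in the regime studied above.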
|
Benford's law is widely used for fraud detection nowadays. The underlying
assumption for using the law is that a "regular" dataset follows the
significant digit phenomenon. In this paper, we address the scenario where a
shrewd fraudster manipulates a list of numbers in such a way that it still
complies with Benford's law. We develop a general family of distributions that
provides a fraudster with several degrees of freedom, such as the minimum,
maximum, mean and size of the manipulated dataset. The conclusion further
corroborates the idea that Benford's law should be used with utmost discretion
as a means for fraud detection.
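For reference, Benford's law assigns the leading digit d the probability log10(1 + 1/d). One simple Benford-compliant distribution is the log-uniform law over an integer number of decades; a quick empirical check (an illustrative sketch, not the family of distributions constructed in the paper):

```python
import math
import random
from collections import Counter

random.seed(1)

# A log-uniform sample spanning an integer number of decades obeys
# Benford's law in expectation: P(leading digit = d) = log10(1 + 1/d).
N = 100_000
samples = [10 ** random.uniform(0, 3) for _ in range(N)]  # values in [1, 1000)

def first_digit(x):
    while x >= 10:
        x /= 10
    return int(x)

freq = Counter(first_digit(x) for x in samples)
for d in range(1, 10):
    print(d, freq[d] / N, round(math.log10(1 + 1 / d), 4))
```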
|
We study removable sets for Newtonian Sobolev functions in metric measure
spaces satisfying the usual (local) assumptions of a doubling measure and a
Poincar\'e inequality. In particular, when restricted to Euclidean spaces, a
closed set $E\subset \mathbf{R}^n$ with zero Lebesgue measure is shown to be
removable for $W^{1,p}(\mathbf{R}^n \setminus E)$ if and only if $\mathbf{R}^n
\setminus E$ supports a $p$-Poincar\'e inequality as a metric space. When
$p>1$, this recovers Koskela's result (Ark. Mat. 37 (1999), 291--304), but for
$p=1$, as well as for metric spaces, it seems to be new. We also obtain the
corresponding characterization for the Dirichlet spaces $L^{1,p}$. To be able
to include $p=1$, we first study extensions of Newtonian Sobolev functions in
the case $p=1$ from a noncomplete space $X$ to its completion $\widehat{X}$.
In these results, $p$-path almost open sets play an important role, and we
provide a characterization of them by means of $p$-path open, $p$-quasiopen and
$p$-finely open sets. We also show that there are nonmeasurable $p$-path almost
open subsets of $\mathbf{R}^n$, $n \geq 2$, provided that the continuum
hypothesis is assumed to be true.
Furthermore, we extend earlier results about measurability of functions with
$L^p$-integrable upper gradients, about $p$-quasiopen, $p$-path open and
$p$-finely open sets, and about Lebesgue points for $N^{1,1}$-functions, to spaces that
only satisfy local assumptions.
|
Context. The astrometric satellite Gaia is expected to significantly increase
our knowledge of the properties of the Milky Way. The Gaia Early Data
Release 3 (Gaia EDR3) provides the most precise parallaxes for many OB stars,
which can be used to delineate the Galactic spiral structure. Aims. We
investigate the local spiral structure with the largest sample of
spectroscopically confirmed young OB stars available to date, and we compare it
with what was traced by the parallax measurements of masers. Methods. A sample
consisting of three different groups of massive young stars, including O-B2
stars, O-B0 stars and O-type stars with parallax accuracies better than 10% was
compiled and used in our analysis. Results. The local spiral structures in all
four Galactic quadrants within $\approx$5 kpc of the Sun are clearly delineated
in detail. The revealed Galactic spiral pattern outlines a clear sketch of
nearby spiral arms, especially in the third and fourth quadrants where the
maser parallax data are still absent. These O-type stars densify and extend the
spiral structure constructed by using the Very Long Baseline Interferometry
(VLBI) maser data alone. The clumped distribution of O-type stars also
indicates that the Galactic spiral structure is inhomogeneous.
|
We present computer simulations about the spatial and temporal evolution of a
1-MeV proton microbeam transmitted through an insulating macrocapillary with
the length of 45 mm and with the inner diameter of 800 {\mu}m. The axis of the
capillary was tilted by 1{\deg} relative to the axis of the incident beam,
which ensured geometrical nontransparency. The simulation is based on the
combination of stochastic (Monte Carlo) and deterministic methods. It involves
(1) random sampling of the initial conditions, according to distributions
generated by the widely used and freely available computer software packages,
SRIM and WINTRAX, (2) the numerical solution of the governing equations for
following the classical trajectory of the projectiles, and (3) the description
of the field-driven charge migration on the surface and in the bulk of the
insulator material. We found that our simulation reasonably describes all of
our previous experimental observations, indicating the functionality and
reliability of the applied model. In addition, we found that at different
phases of the beam transmission, different atomic processes result in the
evolution of the beam distribution. First, in a scattering phase, the multiple
small angle atomic scattering dominates in the beam transmission, resulting in
an outgoing beam into a wide angular range and in a wide energy window. Later,
in a mixed phase, scattering and guiding happens simultaneously, with a
continuously increasing contribution of guiding. Finally, in the phase of the
stabilized, guided transmission, a quadrupolelike focusing effect is observed,
i.e., the transmitted beam is concentrated into a small spot, and the
transmitted protons keep their initial kinetic energy.
|
Large-scale projects increasingly operate in complicated settings whilst
drawing on an array of complex data-points, which require precise analysis for
accurate control and interventions to mitigate possible project failure.
Coupled with a growing tendency to rely on new information systems and
processes in change projects, 90% of megaprojects globally fail to achieve
their planned objectives. Renewed interest in the concept of Artificial
Intelligence (AI) against a backdrop of disruptive technological innovations
seeks to enhance project managers' cognitive capacity through the project
lifecycle and enhance project excellence. However, despite growing interest,
there remain limited empirical insights on project managers' ability to
leverage AI for cognitive load enhancement in complex settings. As such, this
research adopts an exploratory sequential linear mixed methods approach to
address unresolved empirical issues on transient adaptations of AI in complex
projects, and the impact on cognitive load enhancement. Initial thematic
findings from semi-structured interviews with domain experts, suggest that in
order to leverage AI technologies and processes for sustainable cognitive load
enhancement with complex data over time, project managers require improved
knowledge and access to relevant technologies that mediate data processes in
complex projects, but equally reflect application across different project
phases. These initial findings support further hypothesis testing through a
larger quantitative study incorporating structural equation modelling to
examine the relationship between artificial intelligence and project managers'
cognitive load with project data in complex contexts.
|
The fluid/gravity correspondence establishes how gravitational dynamics, as
dictated by Einstein's field equations, are related to the fluid dynamics,
governed by the relativistic Navier-Stokes equations. In this work the
correspondence is extended by implementing the duality between incompressible
fluids and gravitational backgrounds with soft hair excitations. This
construction is set through appropriate boundary conditions to the
gravitational background, leading to a correspondence between generalized
incompressible Navier-Stokes equations and soft hairy horizons.
|
We present an analysis of the lightcurve extracted from Transiting Exoplanet
Survey Satellite Full Frame Images of the double-mode RR Lyrae V338 Boo. We
find that the fundamental mode pulsation is changing in amplitude across the 54
days of observations. The first overtone mode pulsation also changes, but on a
much smaller scale. Harmonics and combinations of the primary pulsation modes
also exhibit unusual behavior. Possible connections with other changes in RR
Lyrae pulsations are discussed, but a full understanding of the cause of the
changes seen in V338 Boo should shed light on some of the most difficult and
unanswered questions in stellar pulsation theory, and astrophysics more
generally.
|
Strontium titanate (SrTiO3) is widely used as a promising photocatalyst due
to its unique band edge alignment with respect to the oxidation and reduction
potential corresponding to oxygen evolution reaction (OER) and hydrogen
evolution reaction (HER). However, further enhancement of the photocatalytic
activity in this material could be envisaged through the effective control of
oxygen vacancy states. This could substantially tune the photoexcited charge
carrier trapping under the influence of elemental functionalization in SrTiO3,
corresponding to the defect formation energy. The charge trapping states in
SrTiO3 decrease through the substitutional doping in Ti sites with p-block
elements like Aluminium (Al) with respect to the relative oxygen vacancies.
With the help of electronic structure calculations based on density functional
theory (DFT) formalism, we have explored the synergistic effect of doping with
both Al and Iridium (Ir) in SrTiO3 from the perspective of defect formation
energy, band edge alignment and the corresponding charge carrier recombination
probability to probe the photoexcited charge carrier trapping that primarily
governs the photocatalytic water splitting process. We have also systematically
investigated the effect of the Ir:Al functionalization ratio on the position of
acceptor levels lying between the Fermi level and the conduction band in oxygen-deficient
SrTiO3, which governs the charge carrier recombination and therefore the
corresponding photocatalytic efficiency.
|
We combine $SU(5)$ Grand Unified Theories (GUTs) with $A_4$ modular symmetry
and present a comprehensive analysis of the resulting quark and lepton mass
matrices for all the simplest cases. Classifying the models according to the
representation assignments of the matter fields under $A_4$, we find that there
are seven types of $SU(5)$ models with $A_4$ modular symmetry. We present 53
benchmark models with the fewest free parameters. The parameter space of each
model is scanned to optimize the agreement between predictions and experimental
data, and predictions for the masses and mixing parameters of quarks and
leptons are given at the best fitting points. The best fit predictions for the
leptonic CP violating Dirac phase, the lightest neutrino mass and the
neutrinoless double beta decay parameter when displayed graphically are
observed to cover a wide range of possible values, but are clustered around
particular regions, allowing future neutrino experiments to discriminate
between the different types of models.
|
In this paper, we present an updated version of the NELA-GT-2019 dataset,
entitled NELA-GT-2020. NELA-GT-2020 contains nearly 1.8M news articles from 519
sources collected between January 1st, 2020 and December 31st, 2020. Just as
with NELA-GT-2018 and NELA-GT-2019, these sources come from a wide range of
mainstream news sources and alternative news sources. Included in the dataset
are source-level ground truth labels from Media Bias/Fact Check (MBFC) covering
multiple dimensions of veracity. Additionally, new in the 2020 dataset are the
Tweets embedded in the collected news articles, adding an extra layer of
information to the data. The NELA-GT-2020 dataset can be found at
https://doi.org/10.7910/DVN/CHMUYZ.
|
Studying the collective pairing phenomena in a two-component Fermi gas, we
predict the appearance near the transition temperature $T_c$ of a well-resolved
collective mode of quadratic dispersion. The mode is visible both above and
below $T_c$ in the system's response to a driving pairing field. When
approaching $T_c$ from below, the phononic and pair-breaking branches,
characteristic of the zero temperature behavior, reduce to a very low
energy-momentum region when the pair correlation length reaches its critical
divergent behavior $\xi_{\rm pair}\propto|T_c-T|^{-1/2}$; elsewhere, they are
replaced by the quadratically-dispersed pairing resonance, which thus acts as a
precursor of the phase transition. In the strong-coupling and Bose-Einstein
Condensate regime, this mode is a weakly-damped propagating mode associated
with a Lorentzian resonance. Conversely, in the BCS limit it is a relaxation mode of
pure imaginary eigenenergy. At large momenta, the resonance disappears when it
is reabsorbed by the lower edge of the pairing continuum. At intermediate
temperatures between 0 and $T_c$, we unify the newly found collective phenomena
near $T_c$ with the phononic and pair-breaking branches predicted from previous
studies, and we exhaustively classify the roots of the analytically continued
dispersion equation, and show that they provide a very good summary of the
pair spectral functions.
|
We study the problem of convergence of the normalized Ricci flow evolving on
a compact manifold $\Omega$ without boundary. In \cite{KS10, KS15} we derived,
via PDE techniques, global-in-time existence of the classical solution and
pre-compactness of the orbit. In this work we show its convergence to
steady-states, using a gradient inequality of {\L}ojasiewicz type. We have thus
an alternative proof of \cite{ha}, but for a general manifold $\Omega$ and not
only for the unit sphere. As a byproduct of that approach we also derive the
rate of convergence according to whether the steady-state is degenerate or
non-degenerate as a critical point of a related energy functional.
|
Stern's diatomic sequence is a well-studied and simply defined sequence with
many fascinating characteristics. The binary signed-digit (BSD) representation
of integers is used widely in efficient computation, coding theory and other
applications. We link these two objects, showing that the number of $i$-bit
binary signed-digit representations of an integer $n<2^i$ is the
$(2^i-n)^\text{th}$ element in Stern's diatomic sequence.
This correspondence makes the vast range of results known about the Stern
diatomic sequence available for consideration in the study of binary
signed-digit integers, and vice versa. Applications of this relationship
discussed in this paper include a weight-distribution theorem for BSD
representations, linking these representations to Stern polynomials, a
recursion for the number of optimal BSD representations of an integer along
with their Hamming weight, stemming from an easy recursion for the leading
coefficients and degrees of Stern polynomials, and the identification of all
integers having a maximal number of such representations.
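The stated correspondence is straightforward to verify by brute force for small i, using the standard recursion s(2m) = s(m), s(2m+1) = s(m) + s(m+1), s(0) = 0, s(1) = 1 for Stern's sequence (a direct verification sketch):

```python
from itertools import product

def stern(upto):
    """Terms s(0), ..., s(upto) of Stern's diatomic sequence."""
    s = [0, 1]
    for m in range(2, upto + 1):
        s.append(s[m // 2] if m % 2 == 0 else s[m // 2] + s[m // 2 + 1])
    return s

def bsd_count(n, i):
    """Number of i-bit binary signed-digit representations of n, i.e.
    digit strings (d_{i-1}, ..., d_0) over {-1, 0, 1} with
    sum_j d_j * 2**j == n."""
    return sum(
        1
        for digits in product((-1, 0, 1), repeat=i)
        if sum(d * 2 ** j for j, d in enumerate(digits)) == n
    )

i = 4
s = stern(2 ** i)
print([bsd_count(n, i) for n in range(1, 2 ** i)])
print([s[2 ** i - n] for n in range(1, 2 ** i)])  # the same list
```

For example, with i = 3 and n = 5, the two representations 4 + 1 and 4 + 2 - 1 match s(8 - 5) = s(3) = 2.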
|
The electrical behavior of a Ni Schottky barrier formed onto heavily doped
($N_D>10^{19}$ cm$^{-3}$) n-type phosphorous-implanted silicon carbide (4H-SiC) was
investigated, with a focus on the current transport mechanisms in both forward
and reverse bias. The forward current-voltage characterization of Schottky
diodes showed that the predominant current transport is a thermionic-field
emission mechanism. On the other hand, the reverse bias characteristics could
not be described by a unique mechanism. In fact, under moderate reverse bias,
implantation-induced damage is responsible for the temperature increase of the
leakage current, while a pure field emission mechanism is approached with
increasing bias. The potential applications of metal/4H-SiC contacts on heavily
doped layers in real devices are discussed.
|
We present an original approach for predicting the static recrystallization
texture development during annealing of deformed crystalline materials. The
microstructure is considered as a population of subgrains and grains whose
sizes and boundary properties determine their growth rates. The model input
parameters are measured directly on orientation maps of the deformed
microstructure acquired by electron backscattered diffraction. The anisotropy
in subgrain properties then drives a competitive growth giving rise to the
recrystallization texture development. The method is illustrated by a
simulation of the static recrystallization texture development in a hot rolled
ferritic stainless steel. The model predictions are found to be in good
agreement with the experimental measurements, and allow for an in-depth
investigation of the formation sequence of the recrystallization texture. A
distinction is established between the texture components which develop due to
favorable growth conditions and those developing due to their predominance in
the prior deformed state. The high fraction of alpha fibre orientations in the
recrystallized state is shown to be a consequence of their predominance in the
deformed microstructure rather than a preferred growth mechanism. A close
control of the fraction of these orientations before annealing is thus required
to minimize their presence in the recrystallized state.
|
With the increased interest in machine learning, and deep learning in
particular, the use of automatic differentiation has become more widespread in
computation. There have been two recent developments providing theoretical
support for these types of structure. One approach, due to Abadi and Plotkin,
provides a simple differential programming language. Another approach is the
notion of a reverse differential category. In the present paper we bring these
two approaches together. In particular, we show how an extension of reverse
derivative categories models Abadi and Plotkin's language, and describe how
this categorical model allows one to consider potential improvements to the
operational semantics of the language.
|
As a result of 33 intercontinental Zoom calls, we characterise big Ramsey
degrees of the generic partial order in a similar way as Devlin characterised
big Ramsey degrees of the generic linear order (the order of rationals).
|
Traditional smart grid energy auctions cannot directly be integrated in
blockchain due to its decentralized nature. Therefore, research works are being
carried out to propose efficient decentralized auctions for energy trading.
Blockchain is a novel paradigm which ensures trust, but it also comes with the
curse of high computation and communication complexity, which eventually
causes resource scarcity. Therefore, there is a need to develop and encourage
greener and computationally friendly auctions to carry out
decentralized energy trading. In this paper, we first provide a thorough
motivation of decentralized auctions over traditional auctions. Afterwards, we
provide in-depth design requirements that can be taken into consideration while
developing such auctions. After that, we analyze technical works that have
developed blockchain-based energy auctions from a green perspective. Finally, we
summarize the article by providing challenges and possible future research
directions for blockchain-based energy auctions from a green viewpoint.
|
We study the mean-field Ising spin glass model with external field, where the
random symmetric couplings matrix is orthogonally invariant in law. For
sufficiently high temperature, we prove that the replica-symmetric prediction
is correct for the first-order limit of the free energy. Our analysis is an
adaptation of a "conditional quenched equals annealed" argument used by
Bolthausen to analyze the high-temperature regime of the
Sherrington-Kirkpatrick model. We condition on a sigma-field that is generated
by the iterates of an Approximate Message Passing algorithm for solving the TAP
equations in this model, whose rigorous state evolution was recently
established.
|
The objective of the paper is to put the canonical Lyapunov function (CLF),
canonizing diffeomorphism (CD) and canonical form of dynamical systems (CFDS),
which have led to the generalization of the Lyapunov second method, in
perspective of their high efficiency for Mathematical Modelling and Control
Design. We show how the symbiosis of the ideas of Henri Poincare and Nikolay
Chetaev leads us to CD, CFDS and CLF. Our approach successfully translates into
mathematical modelling and control design for special two-angle synchronized
longitudinal maneuvering of a thrust-vectored aircraft. The essentially
nonlinear five-dimensional mathematical model of the longitudinal flight
dynamics of a thrust-vectored aircraft in a wing-body coordinate system with
two controls, namely the angular deflections of a movable horizontal stabilizer
and a turbojet engine nozzle, is investigated. The wide-sense robust and stable
in the large tracking control law is designed. Its core is the hierarchical
cascade of two controlling attractor-mediators and two controlling terminal
attractors embedded in the extended phase space of the mathematical model of
the aircraft longitudinal motion. The detailed demonstration of the elaborated
technique of designing wide-sense robust tracking control for the nonlinear
multidimensional mathematical model constitutes the quintessence of the paper.
|
In this paper we consider the symmetric Kolmogorov operator $L=\Delta
+\frac{\nabla \mu}{\mu}\cdot \nabla$ on $L^2(\mathbb R^N,d\mu)$, where $\mu$ is
the density of a probability measure on $\mathbb R^N$. Under general conditions
on $\mu$ we first prove weighted Rellich inequalities with optimal constants
and deduce that the operators $L$ and $-L^2$ with domain $H^2(\mathbb
R^N,d\mu)$ and $H^4(\mathbb R^N,d\mu)$ respectively, generate analytic
semigroups of contractions on $L^2(\mathbb R^N,d\mu)$. We observe that $d\mu$
is the unique invariant measure for the semigroup generated by $-L^2$ and as a
consequence we describe the asymptotic behaviour of such semigroup and obtain
some local positivity properties. As an application we study the
bi-Ornstein-Uhlenbeck operator and its semigroup on $L^2(\mathbb R^N,d\mu)$.
|
We show that the stochastic Schr\"odinger equation (SSE) provides an ideal
way to simulate the quantum mechanical spin dynamics of radical pairs. Electron
spin relaxation effects arising from fluctuations in the spin Hamiltonian are
straightforward to include in this approach, and their treatment can be
combined with a highly efficient stochastic evaluation of the trace over
nuclear spin states that is required to compute experimental observables. These
features are illustrated in example applications to a flavin-tryptophan radical
pair of interest in avian magnetoreception, and to a problem involving
spin-selective radical pair recombination along a molecular wire. In the first
of these examples, the SSE is shown to be both more efficient and more widely
applicable than a recent stochastic implementation of the Lindblad equation,
which only provides a valid treatment of relaxation in the extreme-narrowing
limit. In the second, the exact SSE results are used to assess the accuracy of
a recently-proposed combination of Nakajima-Zwanzig theory for the spin
relaxation and Schulten-Wolynes theory for the spin dynamics, which is
applicable to radical pairs with many more nuclear spins. An appendix analyses
the efficiency of trace sampling in some detail, highlighting the particular
advantages of sampling with SU(N) coherent states.
|
This paper presents a novel approach using sensitivity analysis for
generalizing Differential Dynamic Programming (DDP) to systems characterized by
implicit dynamics, such as those modelled via inverse dynamics and variational
or implicit integrators. It leads to a more general formulation of DDP,
enabling for example the use of the faster recursive Newton-Euler inverse
dynamics. We leverage the implicit formulation for precise and exact contact
modelling in DDP, where we focus on two contributions: (1) contact dynamics at
the acceleration level, which enables high-order integration schemes; (2) a
formulation using an invertible contact model in the forward pass and a
closed-form solution in the backward pass to improve the numerical resolution
of contacts.
The performance of the proposed framework is validated (1) by comparing
implicit versus explicit DDP for the swing-up of a double pendulum, and (2) by
planning motions for two tasks using a single leg model making multi-body
contacts with the environment: standing up from ground, where a priori contact
enumeration is challenging, and maintaining balance under an external
perturbation.
|
A growing number of intelligent applications have been developed with the
surge of Machine Learning (ML). Deep Neural Networks (DNNs) have demonstrated
unprecedented performance across various fields such as medical diagnosis and
autonomous driving. While DNNs are widely employed in security-sensitive
fields, they are identified to be vulnerable to Neural Trojan (NT) attacks that
are controlled and activated by the stealthy trigger. We call this vulnerable
model adversarial artificial intelligence (AI). In this paper, we target to
design a robust Trojan detection scheme that inspects whether a pre-trained AI
model has been Trojaned before its deployment. Prior works are oblivious to the
intrinsic property of trigger distribution and try to reconstruct the trigger
pattern using simple heuristics, i.e., stimulating the given model to incorrect
outputs. As a result, their detection time and effectiveness are limited. We
leverage the observation that the pixel trigger typically features spatial
dependency and propose TAD, the first trigger approximation based Trojan
detection framework that enables fast and scalable search of the trigger in the
input space. Furthermore, TAD can also detect Trojans embedded in the feature
space where certain filter transformations are used to activate the Trojan. We
perform extensive experiments to investigate the performance of TAD across
various datasets and ML models. Empirical results show that TAD achieves a
ROC-AUC score of 0.91 on the public TrojAI dataset and an average detection
time of 7.1 minutes per model.
|
The 4f-electron delocalization plays a key role in the low-temperature
properties of rare-earth metals and intermetallics, including heavy fermions
and mixed-valent compounds, and is normally realized by the many-body Kondo
coupling between 4f and conduction electrons. Due to the large onsite Coulomb
repulsion of 4f electrons, the bandwidth-control Mott-type delocalization,
commonly observed in d-electron systems, is difficult in 4f-electron systems
and remains elusive in spectroscopic experiments. Here we demonstrate that the
bandwidth-control orbital-selective delocalization of 4f electrons can be
realized in epitaxial Ce films by thermal annealing, which results in a
metastable surface phase with a reduced layer spacing. The resulting
quasiparticle bands exhibit large dispersion with exclusive 4f character near
E_F and extend reasonably far below the Fermi energy, which can be explained
from the Mott physics. The experimental quasiparticle dispersion agrees
surprisingly well with density-functional theory calculation and also exhibits
unusual temperature dependence, which could be a direct consequence of the
delicate interplay between the bandwidth-control Mott physics and the
coexisting Kondo hybridization. Our work therefore opens up the opportunity to
study the interaction between two well-known localization-delocalization
mechanisms in correlation physics, i.e., Kondo vs Mott, which can be important
for a fundamental understanding of 4f-electron systems.
|
Field transformation rules of the standard fermionic T-duality require
fermionic isometries to anticommute, which leads to complexification of the
Killing spinors and results in complex valued dual backgrounds. We generalize
the field transformations to the setting with non-anticommuting fermionic
isometries and show that the resulting backgrounds are solutions of double
field theory. Explicit examples of non-abelian fermionic T-dualities that
produce real backgrounds are given. Some of our examples can be bosonic
T-dualized into usual supergravity solutions, while the others are genuinely
non-geometric. Comparison with an alternative treatment based on sigma models
on supercosets shows consistency.
|
Hybrid data combining both tabular and textual content (e.g., financial
reports) are quite pervasive in the real world. However, Question Answering
(QA) over such hybrid data is largely neglected in existing research. In this
work, we extract samples from real financial reports to build a new large-scale
QA dataset containing both Tabular And Textual data, named TAT-QA, where
numerical reasoning is usually required to infer the answer, such as addition,
subtraction, multiplication, division, counting, comparison/sorting, and the
compositions. We further propose a novel QA model termed TAGOP, which is
capable of reasoning over both tables and text. It adopts sequence tagging to
extract relevant cells from the table along with relevant spans from the text
to infer their semantics, and then applies symbolic reasoning over them with a
set of aggregation operators to arrive at the final answer. In our experiments
on TAT-QA, TAGOP achieves 58.0% in F1, an 11.1% absolute increase over the
previous best baseline model. This result still lags far behind expert human
performance, i.e., 90.8% in F1, demonstrating that TAT-QA is very challenging
and can serve as a benchmark for training and testing powerful QA models that
address hybrid data.
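The tag-then-aggregate step can be sketched roughly as follows (an illustrative toy, not the authors' implementation; the operator names, the helper, and the assumption that operands arrive as already-extracted numbers are mine):

```python
# Hypothetical sketch: symbolic aggregation over numbers extracted by
# sequence tagging from table cells and text spans.
def aggregate(op, values):
    """Apply a symbolic aggregation operator to the tagged numbers."""
    if op == "sum":
        return sum(values)
    if op == "diff":        # subtraction over exactly two operands
        a, b = values
        return a - b
    if op == "count":
        return len(values)
    if op == "avg":
        return sum(values) / len(values)
    raise ValueError(f"unknown operator: {op}")

# e.g. answering "what is the change in revenue?" from two tagged cells
print(aggregate("diff", [120.5, 98.3]))  # ~22.2
```

Comparison/sorting and operator composition, also mentioned above, would follow the same pattern with additional operators.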
|
We discuss transport through an interferometer formed by helical edge states
of the quantum spin Hall insulator. Focusing on effects induced by a strong
magnetic impurity placed in one of the arms of the interferometer, we consider the
experimentally relevant case of relatively high temperature as compared to the
level spacing. We obtain the conductance and the spin polarization in the
closed form for arbitrary tunneling amplitude of the contacts and arbitrary
strength of the magnetic impurity. We demonstrate the existence of quantum
effects which do not show up in the previously studied case of weak magnetic
disorder. We find optimal conditions for spin filtering and demonstrate that
the spin polarization of outgoing electrons can reach 100%.
|
This paper describes the IDLab submission for the text-independent task of
the Short-duration Speaker Verification Challenge 2021 (SdSVC-21). This speaker
verification competition focuses on short duration test recordings and
cross-lingual trials, along with the constraint of limited availability of
in-domain DeepMine Farsi training data. Currently, both Time Delay Neural
Networks (TDNNs) and ResNets achieve state-of-the-art results in speaker
verification. These architectures are structurally very different and the
construction of hybrid networks looks like a promising way forward. We introduce a
2D convolutional stem in a strong ECAPA-TDNN baseline to transfer some of the
strong characteristics of a ResNet based model to this hybrid CNN-TDNN
architecture. Similarly, we incorporate absolute frequency positional encodings
in an SE-ResNet34 architecture. These learnable feature map biases along the
frequency axis offer this architecture a straightforward way to exploit
frequency positional information. We also propose a frequency-wise variant of
Squeeze-Excitation (SE) which better preserves frequency-specific information
when rescaling the feature maps. Both modified architectures significantly
outperform their corresponding baseline on the SdSVC-21 evaluation data and the
original VoxCeleb1 test set. A four-system fusion containing the two improved
architectures achieved third place in the final SdSVC-21 Task 2 ranking.
|
The lack of comprehensive sources of accurate vulnerability data represents a
critical obstacle to studying and understanding software vulnerabilities (and
their corrections). In this paper, we present an approach that combines
heuristics stemming from practical experience and machine-learning (ML) -
specifically, natural language processing (NLP) - to address this problem. Our
method consists of three phases. First, an advisory record containing key
information about a vulnerability is extracted from an advisory (expressed in
natural language). Second, using heuristics, a subset of candidate fix commits
is obtained from the source code repository of the affected project by
filtering out commits that are known to be irrelevant for the task at hand.
Finally, for each such candidate commit, our method builds a numerical feature
vector reflecting the characteristics of the commit that are relevant to
predicting its match with the advisory at hand. The feature vectors are then
exploited for building a final ranked list of candidate fixing commits. The
score attributed by the ML model to each feature is kept visible to the users,
allowing them to interpret the predictions.
We evaluated our approach using a prototype implementation named Prospector
on a manually curated data set that comprises 2,391 known fix commits
corresponding to 1,248 public vulnerability advisories. When considering the
top-10 commits in the ranked results, our implementation could successfully
identify at least one fix commit for up to 84.03% of the vulnerabilities (with
a fix commit on the first position for 65.06% of the vulnerabilities). In
conclusion, our method reduces considerably the effort needed to search OSS
repositories for the commits that fix known vulnerabilities.
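The final ranking step can be illustrated with a minimal sketch (not Prospector's actual model; the feature names, the linear scoring, and the weights are all hypothetical, chosen only to show per-feature contributions remaining visible for interpretation):

```python
# Hypothetical sketch: score candidate fix commits from feature vectors and
# keep per-feature contributions visible, as described above.
def rank_commits(candidates, weights):
    """candidates: {commit_id: {feature: value}}.
    Returns (commit_id, score, contributions) tuples, best first."""
    scored = []
    for cid, feats in candidates.items():
        contributions = {f: weights.get(f, 0.0) * v for f, v in feats.items()}
        scored.append((cid, sum(contributions.values()), contributions))
    return sorted(scored, key=lambda t: t[1], reverse=True)

# invented feature names and weights, for illustration only
weights = {"msg_mentions_cve": 2.0, "touches_affected_file": 1.5,
           "days_to_advisory": -0.5}
ranked = rank_commits(
    {"abc123": {"msg_mentions_cve": 1, "touches_affected_file": 1},
     "def456": {"days_to_advisory": 2.0}},
    weights)
print(ranked[0][0])  # abc123 ranks first
```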
|
A multipoint sensing system based on concatenated modal interferometers is
proposed and demonstrated for detecting the instantaneous amplitude, frequency,
and phase of mechanical vibrations. The sensor probes are fabricated using
identical photonic crystal fiber (PCF) sections and integrated along a single
fiber channel to act as a compact and efficient sensing system. Individual
probes operate independently, producing a resultant signal that is a
superposition of the individual interferometer responses. By analyzing the
resultant signals, information about the measurand field at each location is
recovered. Such a sensing system would find wide applications at industrial,
infrastructural, and medical fronts for monitoring various unsteady physical
phenomena.
|
Mantaci et al. [TCS 2007] defined the eBWT to extend the definition of the
BWT to a collection of strings. Since this introduction, however, the term has
been used more generally to describe any BWT of a collection of strings, and
the fundamental property of the original definition (i.e., the independence
from the input order) is frequently disregarded. In this paper, we propose a simple
linear-time algorithm for the construction of the original eBWT, which does not
require the preprocessing of Bannai et al. [CPM 2021]. As a byproduct, we
obtain the first linear-time algorithm for computing the BWT of a single string
that uses neither an end-of-string symbol nor Lyndon rotations. We combine our
new eBWT construction with a variation of prefix-free parsing to allow for
scalable construction of the eBWT. We evaluate our algorithm (pfpebwt) on sets
of human chromosomes 19, Salmonella, and SARS-CoV2 genomes, and demonstrate
that it is the fastest method for all collections, with a maximum speedup of
7.6x on the second best method. The peak memory is at most 2x larger than the
second best method. Comparing with methods that are also, as our algorithm,
able to report suffix array samples, we obtain a 57.1x improvement in peak
memory. The source code is publicly available at
https://github.com/davidecenzato/PFP-eBWT.
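For intuition about the order-independence property, the original eBWT can be illustrated with a quadratic-time toy (my own sketch, not the paper's linear-time algorithm): sort all rotations of all strings in omega-order, i.e., by their infinite repetitions, and read off last symbols.

```python
def toy_ebwt(strings):
    """Toy eBWT: last symbols of all rotations of all input strings, sorted in
    omega-order (lexicographic order of infinite repetitions). Quadratic time,
    for intuition only; the paper's algorithm achieves linear time."""
    rots = [s[i:] + s[:i] for s in strings for i in range(len(s))]
    limit = 2 * max(len(r) for r in rots)
    # a prefix of length 2 * max length suffices to decide omega-order
    omega_key = lambda r: (r * (limit // len(r) + 1))[:limit]
    return "".join(r[-1] for r in sorted(rots, key=omega_key))

# the defining property: the result is independent of the input order
assert toy_ebwt(["banana", "apple"]) == toy_ebwt(["apple", "banana"])
```

Note that no end-of-string symbol is used here, mirroring the property highlighted in the abstract.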
|
Path sets are spaces of one-sided infinite symbol sequences corresponding to
the one-sided infinite walks beginning at a fixed initial vertex in a directed
labeled graph. Path sets are a generalization of one-sided sofic shifts. This
paper studies decimation operations $\psi_{j, n}(\cdot)$ which extract symbol
sequences in infinite arithmetic progressions (mod n), starting with the symbol
at position j. It also studies a family of n-ary interleaving operations, one
for each arity n, which act on an ordered set $(X_0, X_1, ..., X_{n-1})$ of
one-sided symbol sequences on a finite alphabet A, to produce a set $X$ of all
output sequences obtained by interleaving the symbols of words $x_i$ in each
$X_i$ in arithmetic progressions (mod n). It studies a set of closure
operations relating interleaving and decimation. It reviews basic algorithmic
results on presentations of path sets and existence of a minimal
right-resolving presentation. It gives an algorithm for computing presentations
of decimations of path sets from presentations of path sets, showing the
minimal right-resolving presentation of $\psi_{j,n}(X)$ has at most one more
vertex than a minimal right-resolving presentation of X. It shows that a path
set has only finitely many distinct decimations. It shows the class of path
sets on a fixed alphabet is closed under all interleaving operations, and gives
algorithms for computing presentations of n-fold interleavings of given sets
$X_i$. It studies interleaving factorizations and classifies path sets that
have infinite interleaving factorizations, and gives an algorithm to recognize
them. It shows the finiteness of a process of iterated interleaving
factorizations, which "freezes" factors that have infinite interleavings.
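On finite prefixes, the two operations studied above reduce to simple slicing and merging (a minimal sketch under my own naming; the path-set and presentation machinery is of course omitted):

```python
def decimate(x, j, n):
    """psi_{j,n} on a finite prefix: symbols at positions j, j+n, j+2n, ..."""
    return x[j::n]

def interleave(words):
    """n-ary interleaving of equal-length words: symbols of word i land at
    positions congruent to i (mod n)."""
    return "".join("".join(symbols) for symbols in zip(*words))

x = interleave(["ace", "bdf"])            # -> "abcdef"
assert decimate(x, 0, 2) == "ace" and decimate(x, 1, 2) == "bdf"
```

The closure results in the abstract concern the images of whole path sets under these maps, not single sequences.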
|
We study linear perturbations about static and spherically symmetric black
hole solutions with stealth scalar hair in degenerate higher-order
scalar-tensor (DHOST) theories. We clarify master variables and derive the
quadratic Lagrangian for both odd- and even-parity perturbations. It is shown
that the even modes are in general plagued by gradient instabilities, or
otherwise the perturbations would be strongly coupled. Several possible ways
out are also discussed.
|
Sparse regression is frequently employed in diverse scientific settings as a
feature selection method. A pervasive aspect of scientific data that hampers
both feature selection and estimation is the presence of strong correlations
between predictive features. These fundamental issues are often not appreciated
by practitioners and jeopardize conclusions drawn from estimated models. On
the other hand, theoretical results on sparsity-inducing regularized regression
such as the Lasso have largely addressed conditions for selection consistency
via asymptotics, and disregard the problem of model selection, whereby
regularization parameters are chosen. In this numerical study, we address these
issues through exhaustive characterization of the performance of several
regression estimators, coupled with a range of model selection strategies.
These estimators and selection criteria were examined across correlated
regression problems with varying degrees of signal to noise, distribution of
the non-zero model coefficients, and model sparsity. Our results reveal a
fundamental tradeoff between false positive and false negative control in all
regression estimators and model selection criteria examined. Additionally, we
are able to numerically explore a transition point modulated by the
signal-to-noise ratio and spectral properties of the design covariance matrix
at which the selection accuracy of all considered algorithms degrades. Overall,
we find that SCAD coupled with BIC or empirical Bayes model selection performs
the best feature selection across the regression problems considered.
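For concreteness, one estimator/selection pairing of the kind examined above can be sketched numerically (a plain Lasso via coordinate descent with BIC selection over a grid; SCAD and the empirical-Bayes criterion from the study are omitted, and every implementation detail here is my own):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent Lasso (illustrative, no standardization)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            z = X[:, j] @ r
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return beta

def select_by_bic(X, y, lams):
    """Pick the penalty minimizing BIC = n*log(RSS/n) + k*log(n)."""
    n = len(y)
    best = None
    for lam in lams:
        b = lasso_cd(X, y, lam)
        rss = ((y - X @ b) ** 2).sum()
        k = int((b != 0).sum())
        bic = n * np.log(rss / n) + k * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, lam, b)
    return best[1], best[2]
```

The tradeoff discussed in the abstract shows up directly here: smaller penalties admit false positives, larger ones shrink true coefficients toward false negatives.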
|
The aim of this paper is to study in detail the regular holonomic $D-$module
introduced in \cite{[B.19]} whose local solutions outside the polar
hyper-surface $\{\Delta(\sigma).\sigma_k = 0 \}$ are given by the local system
generated by the local branches of the multivalued function which is the root
of the universal degree $k$ equation $z^k + \sum_{h=1}^k
(-1)^h.\sigma_h.z^{k-h} = 0 $. Note that it is surprising that this regular
holonomic $D-$module is given by the quotient of $D$ by a left ideal which has
very simple explicit generators, despite the fact that it necessarily encodes the
analogous systems for any root of the universal degree $l$ equation for each $l
\leq k$. Our main result is to relate this $D-$module with the minimal
extension of the irreducible local system associated to the difference of two
branches of the multivalued function defined above. Then we obtain again a very
simple explicit description of this minimal extension in terms of the generators
of its left ideal in the Weyl algebra. As an application we show how these
results allow us to compute the Taylor expansion of the root near $-1$ of the
equation $z^k + \sum_{h=1}^k (-1)^h.\sigma_h.z^{k-h} - (-1)^k = 0$.
|
We consider backward filtrations generated by processes coming from
deterministic and probabilistic cellular automata. We prove that these
filtrations are standard in the classical sense of Vershik's theory, but we
also study them from another point of view that takes into account the
measure-preserving action of the shift map, for which each sigma-algebra in the
filtrations is invariant. This initiates what we call the dynamical
classification of factor filtrations, and the examples we study show that this
classification leads to different results.
|
Quantum state transfer is a very important process in building a quantum
network, where information from a flying qubit is transferred to a stationary
qubit in a node. NV centers, owing to their long coherence times and the
presence of nearby $^{13}$C nuclear spins, are excellent candidates for
multi-qubit quantum memory. Here we propose a theoretical description of such a
quantum state transfer from a cavity to the nearest-neighbour $^{13}$C nuclear
spin of a single nitrogen-vacancy center in diamond;
it shows great potential in realizing scalable quantum networks and quantum
simulation. The full Hamiltonian, including the zeroth-order and interaction
terms, was considered, and effective Hamiltonian theory was applied. We study
the time evolution of the combined cavity-$^{13}$C state through analytical
calculation and simulation using QuTiP. Graphs for state transfer and fidelity
measurement are presented. We show that our theoretical description verifies a
high-fidelity quantum state transfer from the cavity to the $^{13}$C spin for
suitable system parameters.
|
It has been proved in [J.-D. Hardtke, J. Math. Phys. Anal. Geom. 16, no.2,
119--137 (2020)] that a K\"othe-Bochner space $E(X)$ is locally
octahedral/locally almost square if $X$ has the respective property and the
simple functions are dense in $E(X)$. Here we show that the result still holds
true without the density assumption. The proof makes use of the
Kuratowski-Ryll-Nardzewski Theorem on measurable selections.
|
Textual escalation detection has been widely applied to e-commerce companies'
customer service systems to pre-alert and prevent potential conflicts.
Similarly, in public areas such as airports and train stations, where many
interpersonal conversations frequently take place, acoustic-based escalation
detection systems are also useful to enhance passengers' safety and maintain
public order. To this end, we introduce a system based on acoustic-lexical
features to detect escalation from speech; Voice Activity Detection (VAD) and
label smoothing are adopted to further enhance its performance in our
experiments. Given the small set of training and development data, we also
employ transfer learning on several well-known emotion detection datasets,
i.e., RAVDESS and CREMA-D, to learn advanced emotional representations that are
then applied to the conversational escalation detection task. On the development
set, our proposed system achieves 81.5% unweighted average recall (UAR) which
significantly outperforms the baseline with 72.2% UAR.
|
We propose a novel end-to-end solution for video instance segmentation (VIS)
based on transformers. Recently, the per-clip pipeline shows superior
performance over per-frame methods leveraging richer information from multiple
frames. However, previous per-clip models require heavy computation and memory
usage to achieve frame-to-frame communications, limiting practicality. In this
work, we propose Inter-frame Communication Transformers (IFC), which
significantly reduces the overhead for information-passing between frames by
efficiently encoding the context within the input clip. Specifically, we
propose to utilize concise memory tokens as a means of conveying information as
well as summarizing each frame scene. The features of each frame are enriched
and correlated with other frames through exchange of information between the
precisely encoded memory tokens. We validate our method on the latest benchmark
sets and achieve state-of-the-art performance (AP 44.6 on YouTube-VIS 2019
val set using the offline inference) while having a considerably fast runtime
(89.4 FPS). Our method can also be applied to near-online inference for
processing a video in real-time with only a small delay. The code will be made
available.
|
Subscription services face a difficult problem when estimating the causal
impact of content launches on acquisition. Customers buy subscriptions, not
individual pieces of content, and once subscribed they may consume many pieces
of content in addition to the one(s) that drew them to the service. In this
paper, we propose a scalable methodology to estimate the incremental
acquisition impact of content launches in a subscription business model when
randomized experimentation is not feasible. Our approach uses simple
assumptions to transform the problem into an equivalent question: what is the
expected consumption rate for new subscribers who did not join due to the
content launch? We estimate this counterfactual rate using the consumption rate
of new subscribers who joined just prior to launch, while making adjustments
for variation related to subscriber attributes, the in-product experience, and
seasonality. We then compare our counterfactual consumption to the actual rate
in order to back out an acquisition estimate. Our methodology provides top-line
impact estimates at the content / day / region grain. Additionally, to enable
subscriber-level attribution, we present an algorithm that assigns specific
individual accounts to add up to the top-line estimate. Subscriber-level
attribution is derived by solving an optimization problem to minimize the
number of subscribers attributed to more than one piece of content, while
maximizing the average propensity to be incremental for subscribers attributed
to each piece of content. Finally, in the absence of definitive ground truth,
we present several validation methods which can be used to assess the
plausibility of impact estimates generated by these methods.
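The back-out arithmetic at the heart of the method can be written as a stylized identity (my own simplification: it assumes every incremental subscriber consumes the launched title, and it omits the paper's adjustments for subscriber attributes, in-product experience, and seasonality; all numbers are hypothetical):

```python
def incremental_acquisitions(n_new, actual_rate, counterfactual_rate):
    """Back out launch-driven sign-ups from consumption rates.
    With n new subscribers, actual consumption rate a, and counterfactual
    rate r for non-incremental joiners:
        n * a = I + (n - I) * r   =>   I = n * (a - r) / (1 - r)
    assuming incremental subscribers consume the title with certainty."""
    a, r = actual_rate, counterfactual_rate
    return n_new * (a - r) / (1.0 - r)

# e.g. 10,000 new subscribers; 30% streamed the title vs. 12% expected
print(round(incremental_acquisitions(10_000, 0.30, 0.12)))  # -> 2045
```

When the actual rate equals the counterfactual rate, the estimate is zero, matching the intuition that the launch drew no one.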
|
We report the effect of 4f electron doping on structural, electrical and
magneto-transport properties of Dy doped half Heusler Y1-x(Dy)xPdBi (x =0, 0.2,
0.5, 1) thin films grown by pulsed laser deposition. The Dy doping leads to a
lattice contraction, which increases from 0% for the parent x = 0 sample to
approximately 1.3% for the x = 1 sample. The electrical transport
measurements show a typical semi-metallic behaviour in the temperature range 3K
to 300K and a sharp drop in resistivity at low temperatures (less than 3K) for
all the samples. Magnetotransport measurements and Shubnikov de-Hass
oscillations at high magnetic fields demonstrate that for these topologically
non-trivial samples, Dy doping induced lattice contraction plays an active role
in modifying the Fermi surface, carrier concentration and the effective
electron mass. There is a uniform suppression of the onset of
superconductivity with increased Dy doping, which is possibly related to the
increasing local exchange field arising from the 4f electrons in Dy. Our
results indicate that we can tune various band structure parameters of YPdBi by
f electron doping and strained thin films of Y1-x(Dy)xPdBi show surface
dominated relativistic carrier transport at low temperatures.
|
Asteroseismology using space-based telescopes is vital to our understanding
of stellar structure and evolution. {\textit{CoRoT}}, {\textit{Kepler}}, and
{\textit{TESS}} space telescopes have detected large numbers of solar-like
oscillating evolved stars. Solar-like oscillation
frequencies have an important role in the determination of fundamental stellar
parameters; in the literature, the relation between the two is established by
the so-called scaling relations, which involve the large separation
($\Delta\nu$) and the frequency of maximum amplitude (${\nu_{\rm max}}$). In this
study, we analyse data obtained from the observation of 15 evolved solar-like
oscillating stars using the {\textit{Kepler}} and ground-based telescopes.
The main purpose of the study is to determine very precisely the fundamental
parameters of evolved stars by constructing interior models using asteroseismic
parameters. We also fit the reference frequencies of models to the
observational reference frequencies caused by the He {\scriptsize II}
ionization zone.
The 15 evolved stars are found to have masses and radii within ranges of
$0.79$-$1.47$ $M_{\rm sun}$ and $1.60$-$3.15$ $R_{\rm sun}$, respectively.
Their model ages range from $2.19$ to $12.75$ Gyr. It is revealed that fitting
reference frequencies typically increases the accuracy of the asteroseismic
radius, mass, and age. The typical
uncertainties of mass and radius are $\sim$ 3-6 and $\sim$ 1-2 per cent,
respectively. Accordingly, the differences between the model and literature
ages are generally only a few Gyr.
|
Over-the-air computation (AirComp) has recently been recognized as a
promising scheme for a fusion center to achieve fast distributed data
aggregation in wireless networks via exploiting the superposition property of
multiple-access channels. Since it is challenging to provide reliable data
aggregation for a large number of devices using AirComp, in this paper, we
propose to enable AirComp via the cloud radio access network (Cloud-RAN)
architecture, where a large number of antennas are deployed at separate sites
called remote radio heads (RRHs). However, the potential densification gain
provided by Cloud-RAN is generally bottlenecked by the limited capacity of the
fronthaul links connecting the RRHs and the fusion center. To this end, we
formulate a joint design problem for AirComp transceivers and quantization bits
allocation, and propose an efficient algorithm to tackle this problem. Our
numerical results show the advantages of the proposed architecture compared
with state-of-the-art solutions.
|
This early work aims to allow organizations to diagnose their capacity to
properly adopt microservices through initial milestones of a Microservice
Maturity Model (MiMMo). The objective is to prepare the way towards a general
framework to help companies and industries to determine their microservices
maturity. Organizations lean more and more on distributed web applications and
Line of Business software. This is particularly relevant during the current
Covid-19 crisis, where companies are even more challenged to offer their
services online, targeting a very high level of responsiveness in the face of
rapidly increasing and diverse demands. For this, microservices remain the most
suitable application delivery architectural style. They allow agility not only
at the level of the technical application, as often considered, but across the
enterprise architecture as a whole, influencing the company's actual financial
business. However, microservices adoption is highly risk-prone and complex.
Before they establish an appropriate migration plan, first and foremost,
companies must assess their degree of readiness to adopt microservices. For
this, MiMMo, a Microservice Maturity Model assessment framework, is proposed
to help companies assess their readiness for the microservice architectural
style, based on their actual situation. MiMMo results from observations of and
experience with about thirty organizations writing software. It conceptualizes
and generalizes the progression paths they have followed to adopt microservices
appropriately. Using the model, an organization can evaluate itself in two
dimensions and five maturity levels and thus: (i) benchmark itself on its
current use of microservices; (ii) project the next steps it needs to achieve a
higher maturity level and (iii) analyze how it has evolved and maintain a
global coherence between technical and business stakes.
|
In this paper we study a neighborhood of generic singularities formed by mean
curvature flow (MCF). We limit our consideration to the singularities modelled
on $\mathbb{S}^3\times\mathbb{R}$ because, compared to the cases
$\mathbb{S}^k\times \mathbb{R}^{l}$ with $l\geq 2$, the present case has the
fewest possibilities to be considered. For various possibilities, we provide a
detailed description for a small, but fixed, neighborhood of singularity, and
prove that a small neighborhood of the singularity is mean convex, and the
singularity is isolated. For the remaining possibilities, we conjecture that an
entire neighborhood of the singularity becomes singular at the time of blowup,
and present evidence to support this conjecture. A key technique is that, when
looking for a dominating direction for the rescaled MCF, we need a normal form
transformation; as a result, the rescaled MCF is parametrized over a chosen
curved cylinder instead of over a standard straight one.
This is a long paper. The introduction is carefully written to present the
key steps and ideas.
|
The evidence for benzonitrile (C$_6$H$_5$CN) in the starless cloud core
TMC-1 makes high-resolution studies of other aromatic nitriles and their
ring-chain derivatives especially timely. One such species is
phenylpropiolonitrile (3-phenyl-2-propynenitrile, C$_6$H$_5$C$_3$N), whose
spectroscopic characterization is reported here for the first time. The low
resolution (0.5 cm$^{-1}$) vibrational spectrum of C$_6$H$_5$C$_3$N has been
recorded at far- and mid-infrared wavelengths (50 - 3500 cm$^{-1}$) using a
Fourier Transform interferometer, allowing for the assignment of band centers
of 14 fundamental vibrational bands. The pure rotational spectrum of the
species has been investigated using a chirped-pulse Fourier transform microwave
(FTMW) spectrometer (6 - 18 GHz), a cavity enhanced FTMW instrument (6 - 20
GHz), and a millimeter-wave one (75 - 100 GHz, 140 - 214 GHz). Through the
assignment of more than 6200 lines, accurate ground state spectroscopic
constants (rotational, centrifugal distortion up to octics, and nuclear
quadrupole hyperfine constants) have been derived from our measurements, with a
plausible prediction of the weaker bands through calculations. Interstellar
searches for this highly polar species can now be undertaken with confidence
since the astronomically most interesting radio lines have either been measured
or can be calculated to very high accuracy below 300 GHz.
|
In this work, the order parameter or average magnetization expressions are
obtained for the square and the honeycomb lattices based on recently obtained
magnetization relation, $<\sigma_{0,i}>=
<\!\!\tanh[ \kappa(\sigma_{1,i}+\sigma_{2,i}+\dots +\sigma_{z,i})+H]\!\!> $.
where $\kappa$ is the coupling strength and $z$ is the number of nearest
neighbors; $\sigma_{0,i}$ denotes the central spin at the $i^{th}$ site, while
$\sigma_{l,i}$, $l=1,2,\dots,z$, are the nearest-neighbor spins around the
central spin. In our investigation, we inevitably have to make a conjecture
about the three site correlation function appearing in the obtained relation of
this paper. The conjectured form of the three-spin correlation function is
given by the relation,
$<\!\!\sigma_{1}\sigma_{2}\sigma_{3}\!\!>=a<\sigma>+(1-a)<\sigma>^{(1+\beta^{-1})}$,
here $\beta$ denotes the critical exponent for the average magnetization and
$a$ is a positive real number less than one. The relevance of this conjecture
is based on fundamental physical reasoning. In addition, it is tested by
comparing the relations obtained in this paper with the previously obtained
exact results for the square and honeycomb lattices. The obtained average
magnetization relations are seen to agree remarkably well with the previously
obtained exact results.
|
We consider a node where packets of fixed size are generated at arbitrary
intervals. The node is required to maintain the peak age of information (AoI)
at the monitor below a threshold by transmitting potentially a subset of the
generated packets. At any time, depending on packet availability and current
AoI, the node can choose the packet to transmit, and its transmission speed. We
consider a power function (rate of energy consumption) that is increasing and
convex in transmission speed, and the objective is to minimize the energy
consumption under the peak AoI constraint at all times. For this problem, we
propose a (customized) greedy policy, and analyze its competitive ratio (CR) by
comparing it against an optimal offline policy by deriving some structural
results. We show that for polynomial power functions, the CR upper bound for
the greedy policy is independent of the system parameters, such as the peak
AoI, packet size, time horizon, or the number of packets generated. Also, we
derive a lower bound on the competitive ratio of any causal policy, and show
that for exponential power functions (e.g., the Shannon rate function), the
competitive ratio of any causal policy grows exponentially with the ratio of
packet size to peak AoI.
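The energy/speed tradeoff underlying the problem can be sketched as follows (a minimal illustration for a polynomial power function, not the paper's greedy policy itself; the function name and the exponent `alpha` are illustrative):

```python
def min_energy_speed(packet_size, deadline, alpha=3.0):
    """Sketch for a polynomial power function P(r) = r**alpha (alpha > 1):
    transmitting `packet_size` bits at constant speed r takes
    packet_size / r time units and costs
    E(r) = P(r) * packet_size / r = packet_size * r**(alpha - 1),
    which increases with r.  Hence the energy-minimal feasible choice is
    the slowest speed that still meets the (peak-AoI) deadline."""
    r = packet_size / deadline          # slowest speed meeting the deadline
    return r, packet_size * r ** (alpha - 1)

# a packet of 10 units with 5 time units left before the peak-AoI deadline
speed, energy = min_energy_speed(10.0, 5.0, alpha=3.0)
print(speed, energy)  # 2.0 40.0
```

This monotonicity of E(r) is what makes a deadline-driven greedy speed choice natural for convex power functions.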
|
By a quasi-connected reductive group (a term of Labesse) over an arbitrary
field we mean an almost direct product of a connected semisimple group and a
quasi-torus (a smooth group of multiplicative type). We show that a linear
algebraic group is quasi-connected reductive if and only if it is isomorphic to
a smooth normal subgroup of a connected reductive group. We compute the first
Galois cohomology set H^1(R,G) of a quasi-connected reductive group G over the
field R of real numbers in terms of a certain action of a subgroup of the Weyl
group on the Galois cohomology of a fundamental quasi-torus of G.
|
We evaluate three leading dependency parser systems from different paradigms
on a small yet diverse subset of languages in terms of their
accuracy-efficiency Pareto front. As we are interested in efficiency, we
evaluate core parsers without pretrained language models (as these are
typically huge networks and would constitute most of the compute time) or other
augmentations that can be transversally applied to any of them. Biaffine
parsing emerges as a well-balanced default choice, with sequence-labelling
parsing being preferable if inference speed (but not training energy cost) is
the priority.
|
We identify an effective proxy for the analytically-unknown second integral
of motion (I_2) for rotating barred or tri-axial potentials. Planar orbits of a
given energy follow a tight sequence in the space of the time-averaged angular
momentum and its amplitude of fluctuation. The sequence monotonically traces
the main orbital families in the Poincare map, even in the presence of resonant
and chaotic orbits. This behavior allows us to define the "Calibrated Angular
Momentum," the average angular momentum normalized by the amplitude of its
fluctuation, as a numerical proxy for I_2. It also implies that the amplitude
of fluctuation in L_z, previously under-appreciated, contains valuable
information. This new proxy allows one to classify orbital families easily and
accurately, even for real orbits in N-body simulations of barred galaxies. It
is a good diagnostic tool of dynamical systems, and may facilitate the
construction of equilibrium models.
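The proxy described above amounts to a simple statistic of an orbit's angular momentum time series. A minimal numerical sketch (the half peak-to-peak measure of the fluctuation amplitude is an assumption of this sketch; the paper's exact normalization may differ):

```python
import numpy as np

def calibrated_angular_momentum(lz):
    """Time-averaged angular momentum normalized by its fluctuation
    amplitude, used as a numerical proxy for the second integral I_2."""
    lz = np.asarray(lz, dtype=float)
    amplitude = 0.5 * (lz.max() - lz.min())  # half peak-to-peak fluctuation
    if amplitude == 0.0:
        return np.inf  # L_z exactly conserved (axisymmetric limit)
    return lz.mean() / amplitude

# toy time series: a conserved mean with a small periodic fluctuation
t = np.linspace(0.0, 100.0, 2001)
print(calibrated_angular_momentum(1.0 + 0.2 * np.sin(0.3 * t)))
```

Orbits of the same family then cluster at similar values of this ratio, which is what makes it usable for automatic classification.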
|
Vertex connectivity is a well-studied concept in graph theory with numerous
applications. A graph is $k$-connected if it remains connected after removing
any $k-1$ vertices. The vertex connectivity of a graph is the maximum $k$ such
that the graph is $k$-connected. There is a long history of algorithmic
development for efficiently computing vertex connectivity. Recently, two near
linear-time algorithms for small k were introduced by [Forster et al. SODA
2020]. Prior to that, the best known algorithm was one by [Henzinger et al.
FOCS'96] with quadratic running time when k is small.
In this paper, we study the practical performance of the algorithms by
Forster et al. In addition, we introduce a new heuristic on a key subroutine
called local cut detection, which we call degree counting. We prove that the
new heuristic improves space-efficiency (which can be good for caching
purposes) and allows the subroutine to terminate earlier. According to
experimental results on random graphs with planted vertex cuts, random
hyperbolic graphs, and real world graphs with vertex connectivity between 4 and
15, the degree counting heuristic offers a factor of 2-4 speedup over the
original non-degree counting version for most of our data. It also outperforms
the previous state-of-the-art algorithm by Henzinger et al. even on relatively
small graphs.
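For concreteness, the quantity being computed can be pinned down with an exponential-time baseline (a correctness check only, bearing no resemblance to the near-linear algorithms or the degree-counting heuristic):

```python
from collections import deque
from itertools import combinations

def is_connected(adj, removed):
    """BFS connectivity check on the graph with `removed` vertices deleted."""
    nodes = [v for v in adj if v not in removed]
    if len(nodes) <= 1:
        return True
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in removed and w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(nodes)

def vertex_connectivity_bruteforce(adj):
    """Smallest number of vertices whose removal disconnects the graph;
    the graph is k-connected for every k up to this value."""
    n = len(adj)
    for k in range(n - 1):
        for cut in combinations(adj, k):
            if not is_connected(adj, set(cut)):
                return k
    return n - 1  # complete graph: no vertex cut exists

cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(vertex_connectivity_bruteforce(cycle4))  # 2
```

A brute-force oracle like this is useful for validating fast implementations on the small planted-cut instances mentioned above.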
|
Smart Cities are developing in parallel with the global trend towards
urbanization. The ultimate goal of Smart City projects is to deliver a positive
impact for the citizens and the socio-economic and ecological environment. This
involves the challenge to derive concrete requirements for (technical) projects
from overarching concepts like Quality of Life (QoL) and Subjective Well-Being
(SWB). Linking long-term, impact oriented goals with project outputs and
outcomes is a complex problem. Decision making on requirements and resulting
features of single Smart City projects (or systems) is even more complex since
cities are not monolithic, hierarchical, and well-structured systems.
Nevertheless, systems engineering provides concepts which support decision
making in such situations. Complex socio-technical systems such as smart cities
can be characterized as systems of systems (SoS). A SoS is composed of
independently developed systems that nevertheless provide a higher-level
integrated functionality. To add new functionality to a SoS, either existing
systems must be extended or new systems must be developed and integrated. In
both cases, the extension of functionality is usually done in small increments
and structured via software releases. However, the decision which features to
include in the next release is complex and difficult to manage when done
manually. To address this, we make use of the multi-objective next release
problem (MONRP) to search for an optimal set of features for a software release
in a SoS context. In order to refine the search in an early planning phase, we
propose a technique to model and validate the features using the scenario
modeling language for Kotlin (SMLK). This is demonstrated with a
proof-of-concept implementation.
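The bi-objective structure of the next release problem can be illustrated with a brute-force Pareto enumeration (a toy sketch with made-up feature values and costs; realistic MONRP instances require metaheuristic search rather than enumeration):

```python
from itertools import combinations

def pareto_front_releases(features):
    """Enumerate feature subsets and keep the Pareto-optimal ones under two
    objectives: maximize total value, minimize total cost."""
    names = list(features)
    candidates = []
    for k in range(len(names) + 1):
        for subset in combinations(names, k):
            value = sum(features[f][0] for f in subset)
            cost = sum(features[f][1] for f in subset)
            candidates.append((set(subset), value, cost))
    front = []
    for s, v, c in candidates:
        dominated = any(v2 >= v and c2 <= c and (v2 > v or c2 < c)
                        for _, v2, c2 in candidates)
        if not dominated:
            front.append((sorted(s), v, c))
    return front

# hypothetical instance: feature -> (value, cost)
feats = {"login": (5, 2), "search": (8, 5), "export": (3, 4)}
for release in pareto_front_releases(feats):
    print(release)
```

Each surviving subset is a candidate release plan; scenario-based validation (as with SMLK) would then be applied to the shortlisted feature sets.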
|
As a 3D topological insulator, bismuth selenide (Bi2Se3) has potential
applications for electrically and optically controllable magnetic and
optoelectronic devices. How the carriers interact with the lattice is important
for understanding the coupling to its topological phase, and capturing the
initial interaction requires measurements on sub-picosecond time scales. Here we use
an X-ray free-electron laser to perform time-resolved diffraction to study
ultrafast carrier-induced lattice contractions and interlayer modulations in
Bi2Se3 thin films. The lattice contraction depends on the carrier concentration
and is followed by an interlayer expansion accompanied by oscillations. Using
density functional theory (DFT) and the Lifshitz model, the initial contraction
can be explained by van der Waals force modulation of the confined free carrier
layers. Band inversion, related to a topological phase transition, is modulated
by the expansion of the interlayer distance. These results provide insight into
instantaneous topological phases on ultrafast timescales.
|
We investigate artificial compressibility (AC) techniques for the time
discretization of the incompressible Navier-Stokes equations. The space
discretization is based on a lowest-order face-based scheme supporting
polytopal meshes, namely discrete velocities are attached to the mesh faces and
cells, whereas discrete pressures are attached to the mesh cells. This
face-based scheme can be embedded into the framework of hybrid mixed mimetic
schemes and gradient schemes, and has close links to the lowest-order version
of hybrid high-order methods devised for the steady incompressible
Navier-Stokes equations. The AC timestepping uncouples at each time step the
velocity update from the pressure update. The performance of this approach is
compared against that of the more traditional monolithic approach, which
maintains the velocity-pressure coupling at each time step. We consider both
first-order and second-order time schemes and either an implicit or an explicit
treatment of the nonlinear convection term. We investigate numerically the CFL
stability restriction resulting from an explicit treatment, both on Cartesian
and polytopal meshes. Finally, numerical tests on large 3D polytopal meshes
highlight the efficiency of the AC approach and the benefits of using
second-order schemes whenever accurate discrete solutions are to be attained.
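The velocity-pressure uncoupling described above can be illustrated by the classical first-order artificial compressibility update (a generic sketch with a pseudo-compressibility parameter $\epsilon > 0$ and time step $\Delta t$, not the paper's face-based discretization). First, a velocity problem is solved using the previous pressure,
$$\frac{u^{n+1}-u^{n}}{\Delta t} + (u^{n}\cdot\nabla)u^{n+1} - \nu\Delta u^{n+1} + \nabla p^{n} = f^{n+1},$$
then the pressure is updated explicitly from the discrete velocity divergence,
$$\frac{p^{n+1}-p^{n}}{\Delta t} + \frac{1}{\epsilon}\,\nabla\cdot u^{n+1} = 0,$$
so that no coupled saddle-point system in $(u,p)$ has to be solved at any time step.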
|
A deluge of recent work has explored equivalences between wide neural
networks and kernel methods. A central theme is that one can analytically find
the kernel corresponding to a given wide network architecture, but despite
major implications for architecture design, no work to date has asked the
converse question: given a kernel, can one find a network that realizes it? We
affirmatively answer this question for fully-connected architectures,
completely characterizing the space of achievable kernels. Furthermore, we give
a surprising constructive proof that any kernel of any wide, deep,
fully-connected net can also be achieved with a network with just one hidden
layer and a specially-designed pointwise activation function. We experimentally
verify our construction and demonstrate that, by just choosing the activation
function, we can design a wide shallow network that mimics the generalization
performance of any wide, deep, fully-connected network.
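The wide-network/kernel correspondence can be probed empirically by averaging over random hidden-layer weights (a Monte Carlo sketch of the kernel of a one-hidden-layer network; the paper's construction, which designs the activation to realize a target kernel, is not reproduced here):

```python
import numpy as np

def empirical_kernel(x1, x2, activation=np.tanh, width=50000, seed=0):
    """Estimate K(x1, x2) = E_w[phi(w.x1) phi(w.x2)] with w ~ N(0, I/d),
    the infinite-width kernel of a one-hidden-layer net, by sampling a
    single very wide layer."""
    rng = np.random.default_rng(seed)
    d = x1.shape[0]
    W = rng.standard_normal((width, d)) / np.sqrt(d)
    return float(activation(W @ x1) @ activation(W @ x2)) / width

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
k_xx = empirical_kernel(x, x)
k_xy = empirical_kernel(x, y)
print(k_xx, k_xy)
```

As the width grows, these estimates concentrate around the analytic kernel values, which is the equivalence the abstract builds on.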
|
Nowadays, developers often reuse existing APIs to implement their programming
tasks. A lot of API usage patterns are mined to help developers learn API usage
rules. However, there are still many missing variables to be synthesized when
developers integrate the patterns into their programming context. To deal with
this issue, we propose a comprehensive approach to integrate API usage patterns
in this paper. We first perform an empirical study by analyzing how API usage
patterns are integrated in real-world projects. We find that the expressions for
variable synthesis are often non-trivial and can be divided into five syntax
types. Based on this observation, we propose an approach to help developers
interactively complete API usage patterns. Compared to the existing code
completion techniques, our approach can recommend infrequent expressions
accompanied with their real-world usage examples according to the user intent.
The evaluation shows that our approach helps users integrate APIs more
efficiently and complete programming tasks faster than existing works.
|
We are interested in martingale rearrangement couplings. As introduced by
Wiesel [37] in order to prove the stability of Martingale Optimal Transport
problems, these are projections in adapted Wasserstein distance of couplings
between two probability measures on the real line in the convex order onto the
set of martingale couplings between these two marginals. Owing to the lack
of relative compactness of the set of couplings with given marginals for the
adapted Wasserstein topology, the existence of such a projection is not clear
at all. Under a barycentre dispersion assumption on the original coupling which
is in particular satisfied by the Hoeffding-Fr\'echet or comonotone coupling,
Wiesel gives a clear algorithmic construction of a martingale rearrangement
when the marginals are finitely supported and then gets rid of the finite
support assumption by relying on a rather messy limiting procedure to overcome
the lack of relative compactness. Here, we give a direct general construction
of a martingale rearrangement coupling under the barycentre dispersion
assumption. This martingale rearrangement is obtained from the original
coupling by an approach similar to the construction we gave in [24] of the
inverse transform martingale coupling, a member of a family of martingale
couplings close to the Hoeffding-Fr\'echet coupling, but for a slightly
different injection in the set of extended couplings introduced by Beiglb\"ock
and Juillet [9] and which involve the uniform distribution on [0, 1] in
addition to the two marginals. We finally discuss the stability in adapted
Wasserstein distance of the inverse transform martingale coupling with respect
to the marginal distributions.
|
Gravitationally lensed extragalactic sources are often subject to statistical
microlensing by stars in the galaxy or cluster lens. Accurate models of the
flux statistics are required for inferring source and lens properties from flux
observations. We derive an accurate semi-analytic approximation for calculating
the mean and variance of the magnification factor, which is applicable to
Gaussian source profiles and arbitrary non-uniform macro lens models, and hence
can save the need to perform expensive numerical simulations. The results are
given as single and double lens-plane integrals with simple, non-oscillatory
integrands, and hence can be computed quickly using common Monte Carlo
integrators.
Employing numerical ray-shooting experiments, we examine the case of a highly
magnified source near a macro fold caustic, and demonstrate the excellent
accuracy of this semi-analytic approximation in the regime of multiple micro
images. Additionally, we point out how the maximum persistent magnification
achievable near a macro caustic is fundamentally limited by the masses and
number density of the foreground microlenses, in addition to the source's
physical size.
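The kind of integrator the abstract appeals to can be as plain as uniform sampling over the lens plane (a generic Monte Carlo double-integral sketch, not the paper's specific integrands):

```python
import random

def mc_integrate_2d(f, xlim, ylim, n=100_000, seed=0):
    """Plain Monte Carlo estimate of a double integral over a rectangle:
    average the integrand at uniform samples and scale by the area."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1) = xlim, ylim
    area = (x1 - x0) * (y1 - y0)
    total = 0.0
    for _ in range(n):
        total += f(x0 + (x1 - x0) * rng.random(),
                   y0 + (y1 - y0) * rng.random())
    return area * total / n

# sanity check on a smooth, non-oscillatory integrand
print(mc_integrate_2d(lambda x, y: x * y, (0, 1), (0, 1)))  # ~0.25
```

The non-oscillatory integrands mentioned above are exactly what keeps the variance of such an estimator small.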
|
Electrically interfacing atomically thin transition metal dichalcogenide
semiconductors (TMDSCs) with metal leads is challenging because of undesired
interface barriers, which have drastically constrained the electrical
performance of TMDSC devices for exploring their unconventional physical
properties and realizing potential electronic applications. Here we demonstrate
a strategy to achieve nearly barrier-free electrical contacts with few-layer
TMDSCs by engineering interfacial bonding distortion. The carrier-injection
efficiency of such electrical junction is substantially increased with robust
ohmic behaviors from room to cryogenic temperatures. The performance
enhancements of TMDSC field-effect transistors are well reflected by the
ultralow contact resistance (down to 90 Ohm um in MoS2, towards the quantum
limit), the ultrahigh field-effect mobility (up to 358,000 cm2V-1s-1 in WSe2)
and the prominent transport characteristics at cryogenic temperatures. This
method also offers new possibilities of the local manipulation of structures
and electronic properties for TMDSC device design.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.