Although van der Waals (vdW) layered MoS2 undergoes a phase transformation from the semiconducting 2H-phase to the metallic 1T-phase through chemical lithium intercalation, vdW MoTe2 can be reversibly transformed between the 2H- and 1T'-phases, driven by thermodynamic energetics, laser irradiation, strain or pressure, or electrical doping. Here, thickness- and temperature-dependent optical properties of 1T'-MoTe2 thin films grown by chemical vapor deposition are investigated via Fourier-transform infrared spectroscopy. An optical gap of 28 +/- 2 meV in a 3-layer (or 2 nm) thick 1T'-MoTe2 film is clearly observed at low temperatures below 50 K. No discernible optical bandgap is observed in samples thicker than ~4 nm. The observed thickness-dependent bandgap results agree with the measured dc resistivity data; thickness-dependent 1T'-MoTe2 clearly demonstrates a metal-semiconductor transition with a crossover below a thickness of 2 nm.
|
With the advent of quantum computers, many quantum computing algorithms are being developed. Solving linear systems is one of the most fundamental problems in almost all of science and engineering. The Harrow-Hassidim-Lloyd (HHL) algorithm, a monumental quantum algorithm for solving linear systems on gate-model quantum computers, was invented, and several advanced variations have been developed. For a given square matrix $A$ and a vector $\vec{b}$, we formulate quadratic unconstrained binary optimization (QUBO) models for a vector $\vec{x}$ that satisfies $A \vec{x} = \vec{b}$. To formulate QUBO models for a linear system solving problem, we make use of a linear least-squares problem with a binary representation of the solution. We validate those QUBO models on the D-Wave system and discuss the results. For a simple system, we provide Python code to calculate the matrix characterizing the relationship between the variables and to print the test code that can be used directly on the D-Wave system.
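As an illustration of the core construction (a minimal sketch of ours, not the authors' released code): encode each component of $\vec{x}$ with a fixed-point binary expansion $x_i = \sum_j 2^{j-o} q_{ij}$ and expand $\|A\vec{x}-\vec{b}\|^2$ into a QUBO matrix whose binary minimizer approximates the solution. Sign bits are omitted here for brevity, so only nonnegative solutions are representable.

```python
import numpy as np

def linear_system_qubo(A, b, n_bits=3, offset=1):
    """QUBO for min ||A x - b||^2 with fixed-point encoding x = P q,
    where q is binary and P[i, i*n_bits + j] = 2**(j - offset).
    Returns (Q, P); the objective is q^T Q q up to an additive constant."""
    n = A.shape[1]
    P = np.zeros((n, n * n_bits))
    for i in range(n):
        for j in range(n_bits):
            P[i, i * n_bits + j] = 2.0 ** (j - offset)
    AP = A @ P
    Q = AP.T @ AP                  # quadratic coefficients
    Q += np.diag(-2.0 * AP.T @ b)  # linear part on the diagonal, since q_k**2 == q_k
    return Q, P

# Usage: brute-force the minimizer for a tiny 2x2 system and decode x = P q.
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([1.5, 0.5])
Q, P = linear_system_qubo(A, b)
m = Q.shape[0]
qs = ((np.arange(2 ** m)[:, None] >> np.arange(m)) & 1).astype(float)
energies = np.einsum('si,ij,sj->s', qs, Q, qs)
q_best = qs[np.argmin(energies)]
print("x ~", P @ q_best)  # exact solution is [1.0, 0.5]
```

On hardware, the matrix Q would be handed to the annealer instead of the brute-force loop above.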
|
We consider the reducibility problem of cocycles by isometries of Gromov
hyperbolic metric spaces in the Livsic setting. We show that, provided the boundary cocycle (which acts on a compact space) is reducible in a suitable H\"older class, the original cocycle by isometries (which acts on an unbounded space) is also reducible.
|
We present the electron tunneling transport and spectroscopic characteristics of a superconducting Josephson junction with a barrier consisting of a single Kitaev quantum spin liquid (QSL) layer. We find that the dynamical spin correlation features are well reflected in the direct-current differential conductance dI_c/dV of the single-particle tunneling, with an energy shift equal to the sum of the superconducting gaps, 2{\Delta}; these features include the unique spin gap and the dressed itinerant Majorana dispersive band, which can be regarded as evidence of the Kitaev QSL. However, the zero-voltage Josephson current I_s displays only some residual features of the dynamical spin susceptibility in the Kitaev QSL, owing to the spin-singlet nature of Cooper pairs. These results pave a new way to measure the dynamical spinon or Majorana fermion spectroscopy of the Kitaev and other spin liquid materials.
|
We can only allow human-robot cooperation in a common work cell if the integrity of the human is guaranteed. A surveillance system with multiple cameras can detect collisions without contact with the human collaborator. A fail-safe system needs to optimally cover the important areas of the robot work cell with safety overlap. We propose an efficient algorithm for optimally placing and orienting the cameras in a 3D CAD model of the work cell. In order to evaluate the quality of the camera constellation in each step, our method simulates the vision system, using a z-buffer rendering technique for image acquisition, a voxel space for the overlap, and a refined visual hull method for a conservative human reconstruction. The simulation allows us to evaluate the quality with respect to the distortion of images and advanced image analysis in the presence of static and dynamic visual obstacles such as tables, racks, walls, robots, and people. Our method is ideally suited for maximizing the coverage of multiple cameras or minimizing the error made by the visual hull, and it can be extended to probabilistic space carving.
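A minimal sketch of the coverage-evaluation step, under our own simplifying assumptions (pinhole cameras with a conical field of view, no occluders, uniform voxel grid): it counts, for every voxel of the work cell, how many cameras see it, which is the quantity a placement optimizer would maximize.

```python
import numpy as np

def coverage_counts(cam_positions, cam_directions, half_angle_rad,
                    grid_min, grid_max, resolution):
    """Count per-voxel camera coverage on a uniform 3D grid.

    A voxel center v is 'seen' by a camera at p looking along unit vector d
    if the angle between (v - p) and d is below half_angle_rad.
    Occlusion handling (z-buffer) is omitted in this sketch.
    """
    axes = [np.linspace(lo, hi, resolution) for lo, hi in zip(grid_min, grid_max)]
    X, Y, Z = np.meshgrid(*axes, indexing='ij')
    voxels = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    counts = np.zeros(len(voxels), dtype=int)
    for p, d in zip(cam_positions, cam_directions):
        rays = voxels - p
        norms = np.linalg.norm(rays, axis=1)
        cosang = (rays @ d) / np.maximum(norms, 1e-12)
        counts += cosang >= np.cos(half_angle_rad)  # bool adds as 0/1
    return voxels, counts

# Usage: two cameras in the corners of a 2 m cube, looking at the center.
cams = np.array([[0., 0., 2.], [2., 2., 2.]])
dirs = np.array([[1., 1., -1.], [-1., -1., -1.]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
vox, cnt = coverage_counts(cams, dirs, np.deg2rad(30), (0, 0, 0), (2, 2, 2), 20)
print("voxels seen by >= 2 cameras:", int((cnt >= 2).sum()))
```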
|
In this article, we introduce the notion of a Lie triple centralizer as follows. Let $\mathcal{A}$ be an algebra, and let $\phi:\mathcal{A}\to\mathcal{A}$ be a linear mapping. We say that $\phi$ is a Lie triple centralizer whenever $\phi([[a,b],c])=[[\phi(a),b],c]$ for all $a,b,c\in\mathcal{A}$. We then characterize the general form of Lie triple centralizers on a generalized matrix algebra $\mathcal{U}$ and, under some mild conditions on $\mathcal{U}$, present necessary and sufficient conditions for Lie triple centralizers to be proper. As an application of our results, we characterize generalized Lie triple derivations on generalized matrix algebras.
|
The purpose of this conference contribution is to present the main lines of base projects that are founded on research already begun in previous years. In this sense, this manuscript presents the main lines of research in type 1 Diabetes Mellitus and Machine Learning techniques in an Internet of Things environment, so that the future lines to be developed can be summarized as follows: data collection through biosensors, massive data processing in the cloud, interconnection of biodevices, local computing vs. cloud computing, and possibilities of machine learning techniques to predict blood glucose values, including both variable selection algorithms and predictive techniques.
|
We introduce reachability analysis for the formal examination of robots. We
propose a novel identification method, which preserves reachset conformance of
linear systems. We additionally propose a simultaneous identification and
control synthesis scheme to obtain optimal controllers with formal guarantees.
In a case study, we examine the effectiveness of using reachability analysis to
synthesize a state-feedback controller, a velocity observer, and an output
feedback controller.
|
We present a classification of strict limits of planar BV homeomorphisms. The authors and S. Hencl showed in a previous work \cite{CHKR} that such mappings allow for cavitation and fracture singularities but fulfill a suitable generalization of the INV condition. As pointed out by J. Ball \cite{B}, these features are physically expected in limit configurations of elastic deformations. In the present work we develop a suitable generalization of the \emph{no-crossing} condition introduced by De Philippis and Pratelli in \cite{PP} to describe weak limits of planar Sobolev homeomorphisms, which we call the \emph{BV no-crossing} condition, and we show that a planar mapping satisfies this property if and only if it can be approximated strictly by homeomorphisms of bounded variation.
|
Magnetic properties of A2BB'O6 (A = rare-earth or alkaline-earth ions; B,B' =
transition metal ions) double perovskites are of great interest due to their
potential spintronic applications. Particularly fascinating is the zero field
cooled exchange bias (ZEB) effect observed for the hole doped La2-xAxCoMnO6
polycrystalline samples. In this work we synthesize La2CoMnO6,
La1.5Ca0.5CoMnO6, and La1.5Sr0.5CoMnO6 single crystals by the floating zone
method and study their magnetic behavior. The three materials are
ferromagnetic. Surprisingly, we observe no zero or even conventional exchange
bias effect for the Ca and Sr doped single crystals, in sharp contrast to
polycrystalline samples. This absence indicates that grain boundaries and spin glass-like behavior, neither of which is present in our single crystals, might be key ingredients for the spontaneous exchange bias phenomena seen in polycrystalline samples.
|
Low-rank Tucker and CP tensor decompositions are powerful tools in data
analytics. The widely used alternating least squares (ALS) method, which solves
a sequence of over-determined least squares subproblems, is costly for large
and sparse tensors. We propose a fast and accurate sketched ALS algorithm for
Tucker decomposition, which solves a sequence of sketched rank-constrained
linear least squares subproblems. Theoretical sketch size upper bounds are
provided to achieve $O(\epsilon)$ relative error for each subproblem with two
sketching techniques, TensorSketch and leverage score sampling. Experimental
results show that this new ALS algorithm, combined with a new initialization
scheme based on randomized range finder, yields up to $22.0\%$ relative
decomposition residual improvement compared to the state-of-the-art sketched
randomized algorithm for Tucker decomposition of various synthetic and real
datasets. This Tucker-ALS algorithm is further used to accelerate CP
decomposition, by using randomized Tucker compression followed by CP
decomposition of the Tucker core tensor. Experimental results show that this
algorithm not only converges faster, but also yields more accurate CP
decompositions.
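To make the core subproblem concrete, here is a small sketch (our own illustration, not the paper's implementation) of a sketched overdetermined least-squares solve via leverage score sampling, the same primitive that sketched Tucker-ALS applies to each factor-matrix update.

```python
import numpy as np

def leverage_scores(A):
    """Exact leverage scores of A via a thin QR; row i's score is ||Q_i||^2."""
    Q, _ = np.linalg.qr(A)
    return np.sum(Q ** 2, axis=1)

def sketched_lstsq(A, b, sketch_size, rng=np.random.default_rng(0)):
    """Solve min ||A x - b|| approximately by sampling rows of (A, b)
    with probabilities proportional to leverage scores, then rescaling."""
    p = leverage_scores(A)
    p /= p.sum()
    idx = rng.choice(A.shape[0], size=sketch_size, p=p)
    w = 1.0 / np.sqrt(sketch_size * p[idx])   # importance-sampling rescaling
    x, *_ = np.linalg.lstsq(A[idx] * w[:, None], b[idx] * w, rcond=None)
    return x

# Usage: a tall random system; the sketched solution is close to the exact one.
rng = np.random.default_rng(1)
A = rng.standard_normal((20000, 50))
b = rng.standard_normal(20000)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sketch = sketched_lstsq(A, b, sketch_size=2000)
print("relative error:", np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```

In the actual Tucker-ALS setting, the design matrix has Kronecker structure, so leverage scores are estimated from the factor matrices without ever forming the full matrix.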
|
The accelerated penetration rate of renewable energy sources (RES) brings
environmental benefits at the expense of increasing operation cost and
undermining the satisfaction of the N-1 security criterion. To address the
latter issue, this paper envisions N-1 security control in RES dominated power
systems through stochastic multi-period AC security constrained optimal power
flow (SCOPF). The paper extends the state-of-the-art, i.e. deterministic and
single time period AC SCOPF, to capture two new dimensions, RES stochasticity
and multiple time periods, as well as emerging sources of flexibility such as
flexible loads (FL) and energy storage systems (ESS). Accordingly, the paper
proposes and solves for the first time a new problem formulation in the form of
stochastic multi-period AC SCOPF (S-MP-SCOPF). The S-MP-SCOPF is formulated as
a non-linear programming (NLP) problem. It computes optimal setpoints of
flexibility resources and other conventional control means for congestion
management and voltage control in day-ahead operation. Another salient feature
of this paper is its comprehensive and accurate modelling, using an AC power flow model for both pre-contingency and post-contingency states, inter-temporal constraints for resources such as FL and ESS over a 24-hour time horizon, and RES uncertainties. The importance and performance of the proposed model are illustrated, through a direct approach that pushes the problem size up to the solver limit, on two test systems of 5 and 60 nodes, respectively, while future work will develop a tractable algorithm.
|
The determination of efficient collective variables is crucial to the success
of many enhanced sampling methods. As inspired by previous discrimination
approaches, we first collect a set of data from the different metastable
basins. The data are then projected with the help of a neural network into a
low-dimensional manifold in which data from different basins are well
discriminated. This is guaranteed here by imposing that the projected data follow a preassigned distribution. The collective variables thus obtained lead to efficient sampling and often allow a reduction in the number of collective variables in a multi-basin scenario. We first check the validity of the method on two-state systems. We then move to multi-step chemical processes. In the latter case, at variance with previous approaches, a single collective variable suffices, leading not only to computational efficiency but also to a very clear representation of the reaction free energy profile.
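A minimal PyTorch sketch of this idea, under our own assumptions about the loss (matching the per-basin mean and standard deviation of the projected data to preassigned Gaussian targets; the authors' exact objective may differ):

```python
import torch
import torch.nn as nn

# Toy data: samples from two metastable basins in a 10-D configuration space.
torch.manual_seed(0)
basin_a = torch.randn(500, 10) + 2.0
basin_b = torch.randn(500, 10) - 2.0
data = [basin_a, basin_b]
# Preassigned 1-D targets: each basin should project to a narrow Gaussian.
targets = [(-1.0, 0.2), (1.0, 0.2)]  # (mean, std) per basin

net = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    loss = 0.0
    for x, (mu, sigma) in zip(data, targets):
        s = net(x).squeeze(-1)
        # Penalize deviation of the projected basin from its target distribution.
        loss = loss + (s.mean() - mu) ** 2 + (s.std() - sigma) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained network is the collective variable s(x); basins are now separated.
print(net(basin_a).mean().item(), net(basin_b).mean().item())
```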
|
We construct the gauge-invariant electric and magnetic charges in Yang-Mills
theory coupled to cosmological General Relativity (or any other geometric
gravity), extending the flat spacetime construction of Abbott and Deser. For
non-vanishing background gauge fields, the charges receive non-trivial
contribution from the gravity part. In addition, we study the constraints on
the first order perturbation theory and establish the conditions for
linearization instability: that is the validity of the first order perturbation
theory.
|
In band theory, first-principles calculations, the tight-binding method, and the effective $k\cdot p$ model are usually employed to investigate the electronic structure of condensed matter. The effective $k\cdot p$ model has a compact form with a clear physical picture, and first-principles calculations can give more accurate results. Nowadays, combining the $k\cdot p$ model with first-principles calculations to explore topological materials has become widely recognized. However, the traditional method of deriving the $k\cdot p$ Hamiltonian by hand is complicated and time-consuming. In this work, we independently develop a programmable algorithm to construct effective $k\cdot p$ Hamiltonians. Symmetries and orbitals are used as the input information to produce the one-, two-, or three-dimensional $k\cdot p$ Hamiltonian in our method, and the open-source code can be downloaded directly online. Finally, we demonstrate the application to MnBi$_2$Te$_4$-family magnetic topological materials.
|
In successful enterprise attacks, adversaries often need to gain access to
additional machines beyond their initial point of compromise, a set of internal
movements known as lateral movement. We present Hopper, a system for detecting
lateral movement based on commonly available enterprise logs. Hopper constructs
a graph of login activity among internal machines and then identifies
suspicious sequences of logins that correspond to lateral movement. To
understand the larger context of each login, Hopper employs an inference
algorithm to identify the broader path(s) of movement that each login belongs
to and the causal user responsible for performing a path's logins. Hopper then
leverages this path inference algorithm, in conjunction with a set of detection
rules and a new anomaly scoring algorithm, to surface the login paths most
likely to reflect lateral movement. On a 15-month enterprise dataset consisting
of over 780 million internal logins, Hopper achieves a 94.5% detection rate
across over 300 realistic attack scenarios, including one red team attack,
while generating an average of <9 alerts per day. In contrast, to detect the
same number of attacks, prior state-of-the-art systems would need to generate
nearly 8x as many false positives.
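As a toy illustration of the graph construction (our own sketch; Hopper's real path-inference and anomaly-scoring algorithms are substantially more involved), the following builds a login graph and flags two-hop paths whose final hop reaches a machine the causal user has never logged into before:

```python
import networkx as nx

# Each record: (timestamp, user, src_machine, dst_machine)
logins = [
    (1, "alice", "laptop-alice", "server-a"),
    (2, "alice", "server-a", "server-b"),      # second hop of a path
    (3, "bob", "laptop-bob", "server-a"),
    (4, "alice", "server-b", "db-prod"),       # alice never visits db-prod normally
]
history = {"alice": {"server-a", "server-b"}, "bob": {"server-a"}}  # past destinations

G = nx.DiGraph()
for t, user, src, dst in logins:
    G.add_edge(src, dst, user=user, time=t)

# Flag 2-hop paths src -> mid -> dst by the same user where dst is new for them.
alerts = []
for src, mid, d1 in G.edges(data=True):
    for _, dst, d2 in G.out_edges(mid, data=True):
        if d1["user"] == d2["user"] and d1["time"] < d2["time"]:
            if dst not in history.get(d1["user"], set()):
                alerts.append((d1["user"], src, mid, dst))
print(alerts)  # [('alice', 'server-a', 'server-b', 'db-prod')]
```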
|
In this paper, we prove the existence of infinite Gibbs Delaunay Potts tessellations for marked particle configurations. The particle system has two types of interaction: a so-called \emph{background potential} ensures that small and large triangles are excluded from the Delaunay tessellation, and is similar to the so-called hardcore potential introduced in \cite{Der08}. Particles carry one of $q$ separate marks. Our main result is that for large activities and high \emph{type interaction} strength the model has at least $q$ distinct translation-invariant Gibbs Delaunay Potts tessellations. The main technique is a coarse-graining procedure using the scales in the system, followed by comparison with site percolation on $\mathbb{Z}^2$.
|
Many special classes of simplicial sets, such as the nerves of categories or
groupoids, the 2-Segal sets of Dyckerhoff and Kapranov, and the (discrete)
decomposition spaces of G\'{a}lvez, Kock, and Tonks, are characterized by the
property of sending certain commuting squares in the simplex category $\Delta$
to pullback squares of sets. We introduce weaker analogues of these properties
called completeness conditions, which require squares in $\Delta$ to be sent to
weak pullbacks of sets, defined similarly to pullback squares but without the
uniqueness property of induced maps. We show that some of these completeness
conditions provide a simplicial set with lifts against certain subsets of
simplices first introduced in the theory of database design. We also provide
reduced criteria for checking these properties using factorization results for
pushout squares in $\Delta$, which we characterize completely, along with
several other classes of squares in $\Delta$. Examples of simplicial sets with
completeness conditions include quasicategories, Kan complexes, many of the
compositories and gleaves of Flori and Fritz, and bar constructions for
algebras of certain classes of monads. The latter is our motivating example
which we discuss in a companion paper.
|
A novel and compact dual-band planar antenna for 2.4/5.2/5.8-GHz wireless local area network (WLAN) applications is proposed and studied in this paper. The antenna comprises a T-shaped and an F-shaped element to generate two resonant modes for dual-band operation. The two elements can independently control the operating frequencies of the two excited resonant modes. The T-element, which is fed directly by a 50 $\Omega$ microstrip line, generates a frequency band at around 5.2 GHz, and the antenna parameters can be adjusted to generate a frequency band at 5.8 GHz as well, thus covering the two higher bands of WLAN systems individually. By couple-feeding the F-element through the T-element, a frequency band can be generated at 2.4 GHz to cover the lower band of WLAN systems. Hence, the two elements together are very compact, with a total area of only 11$\times$6.5 mm$^{2}$. A thorough parametric study of key dimensions in the design has been performed, and the results obtained have been used to present a generalized design approach. Plots of the return loss and radiation pattern are given and discussed in detail to show that the design is a very promising candidate for WLAN applications.
|
In this study, we perform a two-dimensional axisymmetric simulation to assess
the flow characteristics and understand the film cooling process in a dual bell
nozzle. The secondary stream with low temperature is injected at three
different axial locations on the nozzle wall, and the simulations are carried
out to emphasize the impact of injection location (secondary flow) on film
cooling of the dual bell nozzle. The cooling effect is demonstrated through the temperature and pressure distributions on the nozzle wall or, in turn, the movement of the separation point. Downstream of the injection point, the Mach number and temperature profiles document the mixing of the main flow and the secondary flow. The inflection region is observed to be the most promising location for the injection of the secondary flow. We have further investigated the effect of the Mach number of the secondary stream. The current study demonstrates that one can control the separation point in a dual bell nozzle with the help of secondary injection (via its Mach number) so that an optimum amount of thrust can be achieved.
|
We systematically explored the phase behavior of the hard-core two-scale ramp model suggested by Jagla [E. A. Jagla, Phys. Rev. E 63, 061501 (2001)] using a combination of the nested sampling and free energy methods. The sampling revealed that the phase diagram of the Jagla potential is significantly richer than previously anticipated, and we identified a family of new crystalline structures that is stable over vast regions of the phase diagram. We showed
that the new melting line is located at considerably higher temperature than
the boundary between the low- and high-density liquid phases, which was
previously suggested to lie in a thermodynamically stable region. The newly
identified crystalline phases show unexpectedly complex structural features,
some of which are shared with the high-pressure ice VI phase.
|
Identifying associations among biological variables is a major challenge in modern quantitative biological research, particularly given the systemic and statistical noise endemic to biological systems. Drug sensitivity data have proven particularly challenging for identifying associations that could inform patient treatment. To address this, we introduce two semi-parametric
variations on the commonly used concordance index: the robust concordance index
and the kernelized concordance index (rCI, kCI), which incorporate measurements
about the noise distribution from the data. We demonstrate that common
statistical tests applied to the concordance index and its variations fail to
control for false positives, and introduce efficient implementations to compute
p-values using adaptive permutation testing. We then evaluate the statistical
power of these coefficients under simulation and compare with Pearson and
Spearman correlation coefficients. Finally, we evaluate the various statistics
in matching drugs across pharmacogenomic datasets. We observe that the rCI and
kCI are better powered than the concordance index in simulation and show some
improvement on real data. Surprisingly, we observe that the Pearson correlation
was the most robust to measurement noise among the different metrics.
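For readers unfamiliar with the baseline statistic, here is a small sketch (our own, not the paper's rCI/kCI implementation) of the plain concordance index together with a naive permutation p-value; the paper's adaptive permutation scheme refines the second step:

```python
import numpy as np

def concordance_index(x, y):
    """Fraction of pairs (i, j) ordered the same way by x and by y."""
    n = len(x)
    concordant = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            if x[i] == x[j] or y[i] == y[j]:
                continue  # skip ties in this simple version
            total += 1
            concordant += (x[i] - x[j]) * (y[i] - y[j]) > 0
    return concordant / total

def permutation_pvalue(x, y, n_perm=2000, rng=np.random.default_rng(0)):
    """Two-sided permutation test of CI != 0.5, permuting y."""
    obs = abs(concordance_index(x, y) - 0.5)
    null = [abs(concordance_index(x, rng.permutation(y)) - 0.5)
            for _ in range(n_perm)]
    return (1 + sum(v >= obs for v in null)) / (n_perm + 1)

rng = np.random.default_rng(1)
x = rng.standard_normal(40)
y = 0.5 * x + rng.standard_normal(40)  # weakly associated
print(concordance_index(x, y), permutation_pvalue(x, y, n_perm=500))
```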
|
We discuss significant improvements to our calculation of electroweak (EW)
$t\bar{t}$ hadroproduction in extensions of the Standard Model (SM) with extra
heavy neutral and charged spin-1 resonances using the Recola2 package. We allow
for flavour-non-diagonal $Z'$ couplings and take into account non-resonant
production in the SM and beyond including the contributions with t-channel $W$-
and $W'$-bosons. We include next-to-leading order (NLO) QCD corrections and
consistently match to parton showers with the POWHEG method fully taking into
account the interference effects between SM and new physics amplitudes. We briefly describe the Contur method and give some information about the Rivet repository, which catalogues particle-level measurements subsequently used by Contur to set limits on beyond-the-SM (BSM) theories. We explain how we use our calculation within Contur in order to set limits on models with additional heavy gauge bosons using LHC measurements, and illustrate this with an example using the leptophobic Topcolour (TC) model.
|
The OSIRIS-REx spacecraft encountered the asteroid (101955) Bennu on December
3, 2018, and has since acquired extensive data from the payload of scientific
instruments on board. In 2019, the OSIRIS-REx team selected primary and backup
sample collection sites, called Nightingale and Osprey, respectively. On
October 20, 2020, OSIRIS-REx successfully collected material from Nightingale.
In this work, we apply an unsupervised machine learning classification through
the K-Means algorithm to spectrophotometrically characterize the surface of
Bennu, and in particular Nightingale and Osprey. We first analyze a global
mosaic of Bennu, from which we find four clusters scattered across the surface,
reduced to three when we normalize the images at 550 nm. The three spectral
clusters are associated with boulders and show significant differences in
spectral slope and UV value. We do not see evidence of latitudinal
non-uniformity, which suggests that Bennu's surface is well-mixed. In our
higher-resolution analysis of the primary and backup sample sites, we find
three representative normalized clusters, confirming an inverse correlation
between reflectance and spectral slope (the darkest areas being the reddest
ones) and between b' normalized reflectance and slope. Nightingale and Osprey
are redder than the global surface of Bennu by more than $1\sigma$ from
average, consistent with previous findings, with Nightingale being the reddest
($S' = (- 0.3 \pm 1.0) \times 10^{- 3}$ percent per thousand angstroms). We see
hints of a weak absorption band at 550 nm at the candidate sample sites and
globally, which lends support to the proposed presence of magnetite on Bennu.
|
RR Tel is an interacting binary system in which a hot white dwarf (WD)
accretes matter from a Mira-type variable star via gravitational capture of its
stellar wind. This symbiotic nova shows intense Raman-scattered O VI 1032\r{A}
and 1038\r{A} features at 6825\r{A} and 7082\r{A}. We present high-resolution
optical spectra of RR Tel taken in 2016 and 2017 with the Magellan Inamori
Kyocera Echelle (MIKE) spectrograph at the Magellan-Clay telescope in Chile. We aim to study the stellar wind accretion in RR Tel through a profile analysis of the Raman O VI features. With an asymmetric O VI disk model, we derive a representative Keplerian speed of $>35~{\rm km~s^{-1}}$ and a corresponding scale of $<0.8$ au. The best fit for the Raman profiles is obtained with a mass
loss rate of the Mira ${\dot M}\sim2\times10^{-6}~{\rm M_{\odot}~yr^{-1}}$ and
a wind terminal velocity $v_{\infty}\sim 20~{\rm km~s^{-1}}$. We compare the
MIKE data with an archival spectrum taken in 2003 with the Fibre-fed Extended
Range Optical Spectrograph (FEROS) at the MPG/ESO 2.2m telescope. It allows us
to highlight the profile variation of the Raman O VI features, indicative of a
change in the density distribution of the O VI disk in the last two decades. We
also report the detection of O VI recombination lines at 3811\r{A} and
3834\r{A}, which are blended with other emission lines. Our profile
decomposition suggests that the recombination of O VII takes place nearer to
the WD than the O VI 1032\r{A} and 1038\r{A} emission region.
|
We analysed publicly available data on place of occurrence of COVID-19 deaths
from national statistical agencies in the UK between March 9 2020 and February
28 2021. We introduce a modified Weibull model that describes the deaths due to
COVID-19 at a national and place-of-occurrence level. We observe similar trends across the UK, where deaths due to COVID-19 first peak in Homes, followed by Hospitals and Care Homes 1-2 weeks later, in both the first and second waves. This is in line with the infectious period of the disease, indicating a possible
transmission vehicle between the settings. Our results show that the first wave
is characterised by fast growth and a slow reduction after the peak in deaths
due to COVID-19. The second and third waves have the converse property, with
slow growth and a rapid decrease from the peak. This difference may result from behavioural changes in the population (social distancing, masks, etc.). Finally,
we introduce a double logistic model to describe the dynamic proportion of
COVID-19 deaths occurring in each setting. This analysis reveals that the
proportion of COVID-19 deaths occurring in Care Homes increases from the start
of the pandemic and past the peak in total number of COVID-19 deaths in the
first wave. After the catastrophic impact in the first wave, the proportion of
COVID-19 deaths occurring in Care Homes gradually decreased from its maximum after the first wave, indicating residents were better protected in the second and third waves compared to the first.
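As a hedged illustration of the second model, the sketch below fits a double logistic curve to a synthetic proportion time series with scipy; the parameterization (a baseline plus a rising and a falling logistic term with independent midpoints and rates) is our assumption of a generic double logistic form, not necessarily the authors' exact one:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, p0, a1, t1, r1, a2, t2, r2):
    """Generic double logistic: baseline plus a rising and a falling step."""
    return (p0 + a1 / (1 + np.exp(-r1 * (t - t1)))
               - a2 / (1 + np.exp(-r2 * (t - t2))))

# Synthetic 'proportion of deaths in care homes' over ~1 year (days).
t = np.arange(0, 360.0)
true = double_logistic(t, 0.05, 0.30, 40, 0.15, 0.20, 120, 0.08)
rng = np.random.default_rng(0)
obs = np.clip(true + 0.01 * rng.standard_normal(t.size), 0, 1)

p0 = [0.05, 0.3, 50, 0.1, 0.2, 130, 0.1]            # rough initial guess
popt, _ = curve_fit(double_logistic, t, obs, p0=p0, maxfev=20000)
print("fitted midpoints (days):", popt[2], popt[5])
```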
|
The membership determination for open clusters in noisy environments of the
Milky Way is still an open problem. In this paper, our main aim is to provide membership probabilities of stars using their proper motions and parallaxes from Gaia EDR3 astrometry. Apart from the Gaia astrometry, we have also
used other photometric data sets like UKIDSS, WISE, APASS and Pan-STARRS1 in
order to understand cluster properties from optical to mid-infrared regions. We
selected 438 likely members with membership probability higher than $50\%$ and
G$\le$20 mag. We obtained the mean value of proper motion as
$\mu_{x}=1.27\pm0.001$ and $\mu_{y}=-0.73\pm0.002$ mas yr$^{-1}$. The cluster's radius is determined to be 7.5 arcmin (5.67 pc) using the radial density profile. Our
analysis suggests that NGC 1348 is located at a distance of $2.6\pm0.05$ kpc.
The mass function slope is found to be $1.30\pm0.18$ in the mass range
1.0$-$4.1 $M_\odot$, which is in fair agreement with Salpeter's value within
the 1$\sigma$ uncertainty. The present study validates that NGC 1348 is a
dynamically relaxed cluster. We computed the apex coordinates $(A, D)$ for NGC 1348 as $(A_\circ, D_\circ) = (-23^{\circ}.815 \pm 0^{\circ}.135, -22^{\circ}.228 \pm 0^{\circ}.105)$. In addition, calculations of the velocity ellipsoid parameters (VEPs), the matrix elements $\mu_{ij}$, the direction cosines ($l_j$, $m_j$, $n_j$), and the Galactic longitude of the vertex have also been conducted in this analysis.
|
Galaxies are the basic structural element of the universe; galaxy formation
theory seeks to explain how these structures came to be. I trace some of the
foundational ideas in galaxy formation, with emphasis on the need for
non-baryonic cold dark matter. Many elements of early theory did not survive
contact with observations of low surface brightness galaxies, leading to the
need for auxiliary hypotheses like feedback. The failure points often trace to
the surprising predictive successes of an alternative to dark matter, the
Modified Newtonian Dynamics (MOND). While dark matter models are flexible in
accommodating observations, they do not provide the predictive capacity of
MOND. If the universe is made of cold dark matter, why does MOND get any
predictions right?
|
We give a short and insightful proof of Gerry Leversha's elegant theorem
regarding the isogonal conjugates of each of the vertices of a non-cyclic
quadrilateral with respect to the triangle formed by the other three. It uses
the Maple package RENE.txt, available from http://www.math.rutgers.edu/~zeilberg/tokhniot/RENE.txt
|
Hackathons are events in which diverse teams work together to explore and develop solutions, software, or even ideas. Hackathons have been recognized not only as public events for hacking, but also as a corporate mechanism for innovation. Hackathons are a way for established companies to increase employee wellbeing as well as to foster innovation and the development of new products. The sudden transition to the work-from-home mode caused by the COVID-19
pandemic first put many corporate events requiring collocation, such as
hackathons, temporarily on hold and then motivated companies to find ways to
hold these events virtually. In this paper, we report our findings from
investigating hackathons in the context of a large agile company by first
exploring the general benefits and challenges of hackathons and then trying to
understand how they were affected by the virtual setup. We conducted nine
interviews, surveyed 23 employees and analyzed a hackathon demo. We found that
hackathons provide both individual and organizational benefits of innovation,
personal interests, and acquiring new skills and competences. Several
challenges were also found, such as added stress due to pausing regular work, employees fearing they would not have enough of a contribution to deliver, and a potential mismatch between individual and organizational goals. With respect to the virtual setup, we found that virtual hackathons do not diminish the innovation benefits; however, some negative effects surfaced on the social and networking side.
|
Quantum systems governed by non-Hermitian Hamiltonians with $\PT$ symmetry
are special in having real energy eigenvalues bounded below and unitary time
evolution. We argue that $\PT$ symmetry may also be important and present at
the level of Hermitian quantum field theories because of the process of
renormalisation. In some quantum field theories renormalisation leads to
$\PT$-symmetric effective Lagrangians. We show how $\PT$ symmetry may allow
interpretations that evade ghosts and instabilities present in an
interpretation of the theory within a Hermitian framework. From the study of examples, we find that the $\PT$-symmetric interpretation is naturally built into a path integral formulation of quantum field theory; there is no requirement to calculate explicitly the $\PT$ norm that occurs in Hamiltonian quantum theory. We discuss
examples where $\PT$-symmetric field theories emerge from Hermitian field
theories due to effects of renormalization. We also consider the effects of
renormalization on field theories that are non-Hermitian but $\PT$-symmetric
from the start.
|
This paper follows the generalisation of the classical theory of Diophantine
approximation to subspaces of $\mathbb{R}^n$ established by W. M. Schmidt in
1967. Let $A$ and $B$ be two subspaces of $\mathbb{R}^n$ of respective
dimensions $d$ and $e$ with $d+e\leqslant n$. The proximity between $A$ and $B$
is measured by $t=\min(d,e)$ canonical angles $0\leqslant \theta_1\leqslant
\cdots\leqslant \theta_t\leqslant \pi/2$; we set $\psi_j(A,B)=\sin\theta_j$. If
$B$ is a rational subspace, its complexity is measured by its height
$H(B)=\mathrm{covol}(B\cap\mathbb{Z}^n)$. We denote by $\mu_n(A\vert e)_j$ the
exponent of approximation defined as the upper bound (possibly equal to
$+\infty$) of the set of $\beta>0$ such that the inequality
$\psi_j(A,B)\leqslant H(B)^{-\beta}$ holds for infinitely many rational
subspaces $B$ of dimension $e$. We are interested in the minimal value
$\mathring{\mu}_n(d\vert e)_j$ taken by $\mu_n(A\vert e)_j$ when $A$ ranges
through the set of subspaces of dimension $d$ of $\mathbb{R}^n$ such that for
all rational subspaces $B$ of dimension $e$ one has $\dim (A\cap B)<j$. We show
that $\mathring{\mu}_4(2\vert 2)_1=3$, $\mathring{\mu}_5(3\vert 2)_1\le 6$ and
$\mathring{\mu}_{2d}(d\vert \ell)_1\leqslant 2d^2/(2d-\ell)$. We also prove a
lower bound in the general case, which implies that $\mathring{\mu}_n(d\vert
d)_d\xrightarrow[n\to+\infty]{} 1/d$.
|
Complex and spinorial techniques of general relativity are used to determine
all the states of the $SU(2)$ invariant quantum mechanical systems in which the
equality holds in the uncertainty relations for the components of the angular
momentum vector operator in two given directions. The expectation values depend
on a discrete `quantum number' and two parameters, one of them is the angle
between the two angular momentum components and the other is the quotient of
the two standard deviations. Allowing the angle between the two angular
momentum components to be arbitrary, \emph{a new genuine quantum mechanical
phenomenon emerges}: It is shown that although the standard deviations change
continuously, one of the expectation values changes \emph{discontinuously} on
this parameter space. Since physically neither of the angular momentum
components is distinguished over the other, this discontinuity suggests that
the genuine parameter space must be a \emph{double cover} of this classical
one: It must be a \emph{Riemann surface} known in connection with the complex
function $\sqrt{z}$. Moreover, the angle between the angular momentum
components plays the role of the parameter of an interpolation between the
continuous range of the expectation values found in the special case of the
orthogonal angular momentum components by Aragone \emph{et al} (J. Phys. A.
{\bf 7} L149 (1974)) and the discrete point spectrum of one angular momentum
component. The consequences in the \emph{simultaneous} measurements of these
angular momentum components are also discussed briefly.
|
The discovery that many classical novae produce detectable GeV $\gamma$-ray
emission has raised the question of the role of shocks in nova eruptions. Here
we use radio observations of nova V809 Cep (Nova Cep 2013) with the Jansky Very
Large Array to show that it produced non-thermal emission indicative of
particle acceleration in strong shocks for more than a month starting about six
weeks into the eruption, quasi-simultaneous with the production of dust.
Broadly speaking, the radio emission at late times -- more than a six months or
so into the eruption -- is consistent with thermal emission from $10^{-4}
M_\odot$ of freely expanding, $10^4$~K ejecta. At 4.6 and 7.4 GHz, however, the
radio light-curves display an initial early-time peak 76 days after the
discovery of the eruption in the optical ($t_0$). The brightness temperature at
4.6 GHz on day 76 was greater than $10^5$~K, an order of magnitude above what
is expected for thermal emission. We argue that the brightness temperature is
the result of synchrotron emission due to internal shocks within the ejecta.
The evolution of the radio spectrum was consistent with synchrotron emission
that peaked at high frequencies before low frequencies, suggesting that the
synchrotron from the shock was initially subject to free-free absorption by
optically thick ionized material in front of the shock. Dust formation began
around day 37, and we suggest that internal shocks in the ejecta were
established prior to dust formation and caused the nucleation of dust.
|
The process of momentum and energy transfer from a massive body moving
through a background medium, known as dynamical friction (DF), is key to our
understanding of many astrophysical systems. We present a series of
high-resolution simulations of gaseous DF using Lagrangian hydrodynamics
solvers, in the state-of-the-art multi-physics code, GIZMO. The numerical setup
is chosen to allow direct comparison to analytic predictions for DF in the
range of Mach 0.2<M<3. We investigate, in detail, the DF drag force, the radial
structure of the wake, and their time evolution. The subsonic forces are shown
to be well resolved, except at Mach numbers close to M=1. The supersonic cases,
close to M=1, significantly under-shoot the predicted force. We find that for
scenarios with 0.7<M<2, between 10%-50% of the expected DF force is missing.
The origin of this deficit is mostly related to the wake structure close to the
perturber, where the density profile of the Mach cone face shows significant
smoothing, which does not improve with time. The spherical expanding
perturbation of the medium is captured well by the hydro scheme, but it is the
sharp density structure, at the transition from Mach cone to average density,
that introduces the mismatch. However, we find a general improvement of the
force deficit with time, though significant differences remain, in agreement with other studies. This is due to (1) the structure of the far-field wake being reproduced well, and (2) the fraction of the total drag from the far-field wake increasing with time. Dark matter sub-haloes in typical cosmological simulations occupy parameters similar to those tested here, suggesting that the DF which these sub-haloes experience is significantly underestimated, and hence so is their merger rate. Dynamical friction is a relevant benchmark and should be included as one of the standard hydro tests for astrophysical simulations.
|
The JOREK extended magneto-hydrodynamic (MHD) code is a widely used
simulation code for studying the non-linear dynamics of large-scale
instabilities in divertor tokamak plasmas. Due to the large scale-separation
intrinsic to these phenomena both in space and time, the computational costs
for simulations in realistic geometry and with realistic parameters can be very
high, motivating the investment of considerable effort in optimization. In this article, a set of developments regarding the JOREK solver and preconditioner is described which together lead to significant overall benefits for large production simulations. These comprise in particular enhanced convergence in highly non-linear scenarios and a general reduction of memory consumption and computational costs. The developments include faster construction of
preconditioner matrices, a domain decomposition of preconditioning matrices for
solver libraries that can handle distributed matrices, interfaces for
additional solver libraries, an option to use matrix compression methods, and
the implementation of a complex solver interface for the preconditioner. The
most significant development presented consists in a generalization of the
physics-based preconditioner to "mode groups", which makes it possible to account for the dominant interactions between toroidal Fourier modes in highly non-linear simulations. At the cost of a moderate increase in memory consumption, the technique can strongly enhance convergence in suitable cases, allowing significantly larger time steps. For all developments, benchmarks based on
typical simulation cases demonstrate the resulting improvements.
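A toy sketch of the underlying idea (our own illustration, not JOREK code): precondition a coupled system with a block-diagonal approximation in which each block gathers a group of strongly coupled Fourier modes, applied inside a Krylov solve via scipy.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, splu

rng = np.random.default_rng(0)
n_per_mode, modes = 50, 4          # 4 toroidal modes, 50 unknowns each
N = n_per_mode * modes

# A stiff block system with strong coupling inside mode groups {0,1} and {2,3}
# and weak coupling between the groups.
blocks = [[None] * modes for _ in range(modes)]
for i in range(modes):
    for j in range(modes):
        coupling = 1.0 if (i // 2 == j // 2) else 0.01
        B = coupling * sp.random(n_per_mode, n_per_mode, density=0.1,
                                 random_state=i * modes + j)
        if i == j:
            B = B + 10.0 * sp.eye(n_per_mode)   # diagonal dominance
        blocks[i][j] = B
A = sp.bmat(blocks, format='csc')

# Preconditioner: exact LU factorization of each mode-group diagonal block.
group_size = 2 * n_per_mode
starts = range(0, N, group_size)
lus = [splu(A[k:k + group_size, k:k + group_size].tocsc()) for k in starts]

def apply_prec(r):
    return np.concatenate([lu.solve(r[k:k + group_size])
                           for k, lu in zip(starts, lus)])

M = LinearOperator((N, N), matvec=apply_prec)
b = rng.standard_normal(N)
x, info = gmres(A, b, M=M)
print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
```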
|
Optimal use and distribution of Covid-19 vaccines involves adjustments of
dosing. Due to the rapidly-evolving pandemic, such adjustments often need to be
introduced before full efficacy data are available. As demonstrated in other
areas of drug development, quantitative systems pharmacology (QSP) is well
placed to guide such extrapolation in a rational and timely manner. Here we propose for the first time how QSP can be applied in real time in the context of COVID-19 vaccine development.
|
End-to-end optimization capability offers neural image compression (NIC)
superior lossy compression performance. However, distinct models are required
to be trained to reach different points in the rate-distortion (R-D) space. In
this paper, we consider the problem of R-D characteristic analysis and modeling
for NIC. We make efforts to formulate the essential mathematical functions to
describe the R-D behavior of NIC using deep network and statistical modeling.
Thus, continuous bit-rate points can be elegantly realized by leveraging such a model via a single trained network. In this regard, we propose a plug-in module to learn the relationship between the target bit-rate and the binary representation of the latent variable of the auto-encoder. Furthermore, we model the rate and distortion characteristics of NIC as functions of the coding parameter $\lambda$. Our experiments show that our proposed method is
easy to adopt and obtains competitive coding performance with fixed-rate coding
approaches, which would benefit the practical deployment of NIC. In addition,
the proposed model could be applied to NIC rate control with limited bit-rate
error using a single network.
|
Simulation-based inference enables learning the parameters of a model even
when its likelihood cannot be computed in practice. One class of methods uses
data simulated with different parameters to infer an amortized estimator for
the likelihood-to-evidence ratio, or equivalently the posterior function. We
show that this approach can be formulated in terms of mutual information
maximization between model parameters and simulated data. We use this
equivalence to reinterpret existing approaches for amortized inference and
propose two new methods that rely on lower bounds of the mutual information. We
apply our framework to the inference of parameters of stochastic processes and
chaotic dynamical systems from sampled trajectories, using artificial neural
networks for posterior prediction. Our approach provides a unified framework
that leverages the power of mutual information estimators for inference.
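A compact sketch of one standard member of this family (our own toy code): train a classifier to distinguish joint pairs $(\theta, x)$ from shuffled pairs; its logit approximates the log likelihood-to-evidence ratio, and the binary cross-entropy objective is directly related to a mutual-information bound.

```python
import torch
import torch.nn as nn

def simulate(theta):
    """Toy simulator: x ~ N(theta, 1)."""
    return theta + torch.randn_like(theta)

torch.manual_seed(0)
theta = torch.rand(4096, 1) * 4 - 2          # prior U(-2, 2)
x = simulate(theta)

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    perm = torch.randperm(theta.size(0))
    joint = torch.cat([theta, x], dim=1)            # (theta, x) ~ p(theta, x)
    marginal = torch.cat([theta[perm], x], dim=1)   # (theta, x) ~ p(theta)p(x)
    logits = net(torch.cat([joint, marginal]))
    labels = torch.cat([torch.ones(len(joint), 1), torch.zeros(len(marginal), 1)])
    loss = bce(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# The logit approximates log r(theta, x) = log p(x|theta)/p(x); evaluate a
# posterior slice over theta for one observed x.
x_obs = torch.tensor([[0.7]])
grid = torch.linspace(-2, 2, 9).unsqueeze(1)
log_r = net(torch.cat([grid, x_obs.expand(9, 1)], dim=1)).squeeze()
print(grid.squeeze()[log_r.argmax()].item())  # should be near 0.7
```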
|
This paper describes our contribution to the WASSA 2021 shared task on
Empathy Prediction and Emotion Classification. The broad goal of this task was
to model an empathy score, a distress score and the overall level of emotion of
an essay written in response to a newspaper article associated with harm to
someone. We used the ELECTRA model extensively, as well as advanced deep learning approaches like multi-task learning. Additionally, we also leveraged standard machine learning techniques like ensembling. Our system achieves a Pearson Correlation Coefficient of 0.533 on sub-task I and a macro F1 score of 0.5528 on sub-task II. We ranked 1st in the Emotion Classification sub-task and 3rd in the Empathy Prediction sub-task.
|
In this paper, we introduce the \textit{Layer-Peeled Model}, a nonconvex yet
analytically tractable optimization program, in a quest to better understand
deep neural networks that are trained for a sufficiently long time. As the name
suggests, this new model is derived by isolating the topmost layer from the
remainder of the neural network, followed by imposing certain constraints
separately on the two parts of the network. We demonstrate that the
Layer-Peeled Model, albeit simple, inherits many characteristics of
well-trained neural networks, thereby offering an effective tool for explaining
and predicting common empirical patterns of deep learning training. First, when
working on class-balanced datasets, we prove that any solution to this model
forms a simplex equiangular tight frame, which in part explains the recently
discovered phenomenon of neural collapse \cite{papyan2020prevalence}. More
importantly, when moving to the imbalanced case, our analysis of the
Layer-Peeled Model reveals a hitherto unknown phenomenon that we term
\textit{Minority Collapse}, which fundamentally limits the performance of deep
learning models on the minority classes. In addition, we use the Layer-Peeled
Model to gain insights into how to mitigate Minority Collapse. Interestingly,
this phenomenon is first predicted by the Layer-Peeled Model before being
confirmed by our computational experiments.
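For concreteness, a common way to write the Layer-Peeled Model (our paraphrase of the formulation in this line of work; the notation and constraint form are our assumptions) is

$$\min_{W,H}\ \frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_k}\mathcal{L}\big(W h_{k,i},\, y_k\big) \quad \text{s.t.}\quad \frac{1}{K}\sum_{k=1}^{K}\|w_k\|_2^2\le E_W,\qquad \frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_k}\|h_{k,i}\|_2^2\le E_H,$$

where $\mathcal{L}$ is the cross-entropy loss, $W$ is the last-layer classifier with rows $w_k$, and the "peeled" features $h_{k,i}$ of the $i$-th example in class $k$ are treated as free optimization variables standing in for everything below the top layer; $n_k$ is the size of class $k$ and $N=\sum_k n_k$.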
|
In empirical game-theoretic analysis (EGTA), game models are extended
iteratively through a process of generating new strategies based on learning
from experience with prior strategies. The strategy exploration problem in EGTA
is how to direct this process so as to construct effective models with minimal
iteration. A variety of approaches have been proposed in the literature,
including methods based on classic techniques and novel concepts. Comparing the
performance of these alternatives can be surprisingly subtle, depending
sensitively on criteria adopted and measures employed. We investigate some of
the methodological considerations in evaluating strategy exploration, defining
key distinctions and identifying a few general principles based on examples and
experimental observations. In particular, we emphasize the fact that empirical
games create a space of strategies that should be evaluated as a whole. Based
on this fact, we suggest that the minimum regret constrained profile (MRCP)
provides a particularly robust basis for evaluating a space of strategies, and
propose a local search method for MRCP that outperforms previous approaches.
However, the computation of MRCP is not always feasible especially in large
games. In this scenario, we highlight consistency considerations for comparing
across different approaches. Surprisingly, we find that recent works violate
these considerations that are necessary for evaluation, which may result in
misleading conclusions on the performance of different approaches. For proper
evaluation, we propose a new evaluation scheme and demonstrate that our scheme
can reveal the true learning performance of different approaches compared to
previous evaluation methods.
|
Fourier expansion of the integrand in the path integral formula for the
partition function of quantum systems leads to a deterministic expression
which, though still quite complex, is easier to process than the original
functional integral. It therefore can give access to problems that eluded
solution so far. Here we derive the formula; a first application is presented
in "Simultaneous occurrence of off-diagonal long-range order and infinite
permutation cycles in systems of interacting atoms", arXiv:2108.02659
[math-ph].
|
MomentClosure.jl is a Julia package providing automated derivation of the
time-evolution equations of the moments of molecule numbers for virtually any
chemical reaction network using a wide range of moment closure approximations.
It extends the capabilities of modelling stochastic biochemical systems in
Julia and can be particularly useful when exact analytic solutions of the
chemical master equation are unavailable and when Monte Carlo simulations are
computationally expensive.
MomentClosure.jl is freely accessible under the MIT license. Source code and
documentation are available at https://github.com/augustinas1/MomentClosure.jl
|
Micrometer scale colloidal particles that propel in a deterministic fashion
in response to local environmental cues are useful analogs to self-propelling
entities found in nature. Both natural and synthetic active colloidal systems
are often near boundaries or are located in crowded environments. Herein, we
describe experiments in which we measured the influence of hydrogen peroxide
concentration and dispersed polyethylene glycol (PEG) on the clustering
behavior of 5 micrometer catalytic active Janus particles at low concentration.
We found the extent to which clustering occurred in ensembles of active Janus
particles grew with hydrogen peroxide concentration in the absence of PEG. Once
PEG was added, clustering was slightly enhanced at low PEG volume fractions, but was reduced at higher PEG volume fractions. The region in which clustering was mitigated at higher PEG volume fractions corresponded to the region in which propulsion was previously found to be quenched. Complementary agent-based simulations showed that clustering grew with nominal speed. These data support
the hypothesis that growth of living crystals is enhanced with increases in
propulsion speed, but the addition of PEG will tend to mitigate cluster
formation as a consequence of quenched propulsion at these conditions.
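A minimal active Brownian particle sketch in the spirit of such agent-based simulations (entirely our own toy model; the study's actual simulation rules, interactions, and parameters are not specified here). With an irreversible sticking rule on contact, faster swimmers find partners sooner, so the clustered fraction after a fixed time grows with the nominal speed.

```python
import numpy as np

def simulate_sticky_abp(n=150, v0=1.0, box=40.0, dt=0.02, steps=3000,
                        d_r=0.5, contact=1.0, seed=0):
    """2D active Brownian particles with a crude irreversible sticking rule:
    once two particles come within `contact`, both stop self-propelling."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, box, (n, 2))
    ang = rng.uniform(0, 2 * np.pi, n)
    stuck = np.zeros(n, dtype=bool)
    for _ in range(steps):
        vel = v0 * np.column_stack([np.cos(ang), np.sin(ang)])
        vel[stuck] = 0.0
        pos = (pos + vel * dt) % box          # periodic boundaries
        ang += np.sqrt(2 * d_r * dt) * rng.standard_normal(n)
        # Minimum-image pair distances; stick any pair that comes into contact.
        diff = pos[:, None, :] - pos[None, :, :]
        diff -= box * np.round(diff / box)
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)
        stuck |= (dist < contact).any(axis=1)
    return stuck.mean()

for v0 in (0.2, 1.0, 4.0):
    print(f"v0={v0}: clustered fraction ~ {simulate_sticky_abp(v0=v0):.2f}")
```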
|
It was recently suggested that certain UV-completable supersymmetric actions
can be characterized by the solutions to an auxiliary non-linear sigma-model
with special asymptotic boundary conditions. The space-time of this sigma-model
is the scalar field space of these effective theories while the target space is
a coset space. We study this sigma-model without any reference to a potentially
underlying geometric description. Using a holographic approach reminiscent of
the bulk reconstruction in the AdS/CFT correspondence, we then derive its
near-boundary solutions for a two-dimensional space-time. Specifying a set of $Sl(2,\mathbb{R})$ boundary data, we show that the near-boundary solutions are
uniquely fixed after imposing a single bulk-boundary matching condition. The
reconstruction exploits an elaborate set of recursion relations introduced by
Cattani, Kaplan, and Schmid in the proof of the $Sl(2)$-orbit theorem. We
explicitly solve these recursion relations for three sets of simple boundary
data and show that they model asymptotic periods of a Calabi--Yau threefold
near the conifold point, the large complex structure point, and the Tyurin
degeneration.
|
We study the time evolution of molecular clouds across three Milky Way-like
isolated disc galaxy simulations at a temporal resolution of 1 Myr, and at a
range of spatial resolutions spanning two orders of magnitude in spatial scale
from ~10 pc up to ~1 kpc. The cloud evolution networks generated at the highest
spatial resolution contain a cumulative total of ~80,000 separate molecular
clouds in different galactic-dynamical environments. We find that clouds
undergo mergers at a rate proportional to the crossing time between their
centroids, but that their physical properties are largely insensitive to these
interactions. Below the gas disc scale-height, the cloud lifetime obeys a
scaling relation of the form $\tau_{\rm life} \propto \ell^{-0.3}$ with the
cloud size $\ell$, consistent with over-densities that collapse, form stars,
and are dispersed by stellar feedback. Above the disc scale-height, these
self-gravitating regions are no longer resolved, so the scaling relation
flattens to a constant value of ~13 Myr, consistent with the turbulent crossing
time of the gas disc, as observed in nearby disc galaxies.
|
We continue our investigation, from \cite{dh}, of the ring-theoretic
infiniteness properties of ultrapowers of Banach algebras, studying in this
paper the notion of being purely infinite. It is well known that a
$C^*$-algebra is purely infinite if and only if any of its ultrapower is. We
find examples of Banach algebras, as algebras of operators on Banach spaces,
which do have purely infinite ultrapowers. Our main contribution is the
construction of a "Cuntz-like" Banach $*$-algebra which is purely infinite, but
does not have purely infinite ultrapowers. Our proof that the algebra is purely infinite is combinatorial but direct, and so differs from the proof for the Cuntz algebra. We use an indirect method (rather than directly computing norm estimates) to show that this algebra does not have purely infinite ultrapowers.
|
The Eisenbud--Goto conjecture states that $\operatorname{reg}
X\le\operatorname{deg} X -\operatorname{codim} X+1$ for a nondegenerate
irreducible projective variety $X$ over an algebraically closed field. While
this conjecture is known to be false in general, it has been proven in several
special cases, including when $X$ is a projective toric variety of codimension
$2$. We classify the projective toric varieties of codimension $2$ having
maximal regularity, that is, for which equality holds in the Eisenbud--Goto
bound. We also give combinatorial characterizations of the arithmetically
Cohen--Macaulay toric varieties of maximal regularity in characteristic $0$.
|
Instrumental variable methods are among the most commonly used causal
inference approaches to account for unmeasured confounders in observational
studies. The presence of invalid instruments is a major concern for practical
applications and a fast-growing area of research is inference for the causal
effect with possibly invalid instruments. The existing inference methods rely
on correctly separating valid and invalid instruments in a data dependent way.
In this paper, we illustrate post-selection problems of these existing methods.
We construct uniformly valid confidence intervals for the causal effect, which
are robust to the mistakes in separating valid and invalid instruments. Our
proposal is to search for the causal effect such that a sufficient number of candidate instruments can be taken as valid. We further devise a novel sampling method which, together with searching, leads to a more precise confidence interval. Our proposed searching and sampling confidence intervals are shown to
be uniformly valid under the finite-sample majority and plurality rules. We
compare our proposed methods with existing inference methods over a large set
of simulation studies and apply them to study the effect of the triglyceride
level on the glucose level over a mouse data set.
|
Explaining the decisions of an Artificial Intelligence (AI) model is
increasingly critical in many real-world, high-stakes applications. Hundreds of papers have either proposed new feature attribution methods or discussed and harnessed these tools in their work. However, despite humans being the target end-users, most attribution methods were only evaluated on proxy automatic-evaluation metrics (Zhang et al. 2018; Zhou et al. 2016; Petsiuk et al. 2018). In this paper, we conduct the first user study to measure attribution map effectiveness in assisting humans in ImageNet classification and Stanford Dogs fine-grained classification, for images that are either natural or adversarial (i.e., containing adversarial perturbations). Overall, feature
attribution is surprisingly not more effective than showing humans nearest
training-set examples. On a harder task of fine-grained dog categorization,
presenting attribution maps to humans does not help, but instead hurts the
performance of human-AI teams compared to AI alone. Importantly, we found
automatic attribution-map evaluation measures to correlate poorly with the
actual human-AI team performance. Our findings encourage the community to
rigorously test their methods on the downstream human-in-the-loop applications
and to rethink the existing evaluation metrics.
|
Recently, High-Efficiency Video Coding (HEVC/H.265) has been chosen to
replace previous video coding standards, such as H.263 and H.264. Despite the
efficiency of HEVC, it still lacks reliable and practical functionalities to
support authentication and copyright applications. In order to provide this
support, several watermarking techniques have been proposed by many researchers
during the last few years. However, those techniques still suffer from many issues that need to be considered in future designs. In this paper, a
Systematic Literature Review (SLR) is introduced to identify HEVC challenges
and potential research directions for interested researchers and developers.
The time scope of this SLR covers all research articles published during the
last six years starting from January 2014 up to the end of April 2020.
Forty-two articles have met the criteria of selection out of 343 articles
published in this area during the mentioned time scope. A new classification
has been drawn followed by an identification of the challenges of implementing
HEVC watermarking techniques based on the analysis and discussion of those
chosen articles. Eventually, recommendations for HEVC watermarking techniques
have been listed to help researchers to improve the existing techniques or to
design new efficient ones.
|
Collective migration of cells and animals often relies on a specialised set
of "leaders", whose role is to steer a population of naive followers towards
some target. We formulate a continuous model to understand the dynamics and
structure of such groups, splitting a population into separate follower and
leader types with distinct orientation responses. We incorporate "leader
influence" via three principal mechanisms: a bias in the orientation of leaders
according to the destination, distinct speeds of movement and distinct levels
of conspicuousness. Using a combination of analysis and numerical computation
on a sequence of models of increasing complexity, we assess the extent to which
leaders successfully shepherd the swarm. While all three mechanisms can lead to
a successfully steered swarm, the parameter regime is crucial, with poor
choices generating a variety of failure modes, including movement away
from the target, swarm splitting, or swarm dispersal.
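To build intuition for these mechanisms, the following is a minimal discrete-agent sketch (an illustrative Vicsek-style toy, not the continuous model analysed above): followers align with neighbours, while leaders blend local alignment with a bias toward the target; the bias weight, speed gap, and leader fraction are illustrative parameters.

```python
import numpy as np

# Toy leader-follower alignment model (illustrative only; the paper studies a
# continuous formulation).  Leaders bias their heading toward the target.
rng = np.random.default_rng(0)
n, n_lead, steps = 200, 20, 500
target = np.array([50.0, 0.0])
bias, radius, noise = 0.5, 2.0, 0.1              # leader bias, interaction radius
speed = np.full(n, 0.5); speed[:n_lead] = 0.6    # leaders slightly faster

pos = rng.uniform(-5, 5, size=(n, 2))
ang = rng.uniform(-np.pi, np.pi, size=n)

for _ in range(steps):
    # mean heading of neighbours within the interaction radius (self included)
    d = np.linalg.norm(pos[None, :, :] - pos[:, None, :], axis=-1)
    nbr = d < radius
    mean_ang = np.arctan2((nbr * np.sin(ang)).sum(1), (nbr * np.cos(ang)).sum(1))
    # leaders blend the local consensus with the direction to the target
    to_tgt = np.arctan2(target[1] - pos[:, 1], target[0] - pos[:, 0])
    new_ang = mean_ang.copy()
    new_ang[:n_lead] = np.arctan2(
        (1 - bias) * np.sin(mean_ang[:n_lead]) + bias * np.sin(to_tgt[:n_lead]),
        (1 - bias) * np.cos(mean_ang[:n_lead]) + bias * np.cos(to_tgt[:n_lead]))
    ang = new_ang + noise * rng.normal(size=n)
    pos += speed[:, None] * np.c_[np.cos(ang), np.sin(ang)]

print("mean distance to target:", np.linalg.norm(pos - target, axis=1).mean())
```

Sweeping the bias weight, the speed gap, and the number of leaders qualitatively reproduces the regimes described above, from successful steering to splitting and dispersal.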
|
We develop an algebro-geometric formulation for neural networks in machine
learning using the moduli space of framed quiver representations. We find
natural Hermitian metrics on the universal bundles over the moduli which are
compatible with the GIT quotient construction by the general linear group, and
show that their Ricci curvatures give a K\"ahler metric on the moduli.
Moreover, we use toric moment maps to construct activation functions, and prove
the universal approximation theorem for the multi-variable activation function
constructed from the complex projective space.
|
As internet-related challenges such as cyber-attacks increase, safe practices
that maintain the health of computer systems and users' online security have
become imperative; such practices are known as cyber-hygiene. Poor
cyber-hygiene among internet users is a critical issue undermining the general
acceptance and adoption of internet technology. It has become a global concern
in this digital era, when virtually all business transactions, learning,
communication and many other activities are performed online. Virus attacks,
weak authentication techniques, improper file backups and the various social
engineering approaches used by cyber-attackers to deceive internet users into
divulging confidential information have serious negative implications for
industries and organisations, including educational institutions. Moreover, the
risks associated with these phenomena are likely to be greater in developing
countries such as Nigeria. The authors therefore undertook an online pilot
study among students and employees of the University of Nigeria, Nsukka; a
total of 145 responses were received and used for the study. The survey sought
to determine the effect of age and level of education on the cyber-hygiene
knowledge and behaviour of the respondents, as well as the types of devices
used and the activities they engage in while on the internet. Our findings show
wide adoption of the internet in institutions of higher learning, although a
significant number of internet users do not have good cyber-hygiene knowledge
and behaviour. Hence, our findings can motivate organised training for students
and employees of higher institutions in Nigeria.
|
Andrews, Lewis and Lovejoy introduced the partition function $PD(n)$ as the
number of partitions of $n$ with designated summands. A bipartition of $n$ is
an ordered pair of partitions $(\pi_1, \pi_2)$ with the sum of all of the parts
being $n$. In this paper, we introduce a generalized crank named the $pd$-crank
for bipartitions with designated summands and give some inequalities for the
$pd$-crank of bipartitions with designated summands modulo 2 and 3. We also
define the $pd$-crank moments weighted by the parity of $pd$-cranks
$\mu_{2k,bd}(-1,n)$ and show the positivity of $(-1)^n\mu_{2k,bd}(-1,n)$. Let
$M_{bd}(m,n)$ denote the number of bipartitions of $n$ with designated summands
with $pd$-crank $m$. We prove a monotonicity property of $pd$-cranks of
bipartitions with designated summands and find that the sequence
$\{M_{bd}(m,n)\}_{|m|\leq n}$ is unimodal for $n\not= 1,5,7$.
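For readers new to designated summands: in each partition, exactly one occurrence of every distinct part size is designated, so a partition contributes the product of its part multiplicities. A brute-force sketch of $PD(n)$ follows (the generating function of Andrews, Lewis and Lovejoy is far more efficient):

```python
from collections import Counter

def partitions(n, max_part=None):
    """Yield the partitions of n as tuples of parts in non-increasing order."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def PD(n):
    """Partitions of n with designated summands: for each partition, choose
    one designated copy of every distinct part size."""
    total = 0
    for p in partitions(n):
        ways = 1
        for mult in Counter(p).values():
            ways *= mult
        total += ways
    return total

print([PD(n) for n in range(1, 7)])  # [1, 3, 5, 10, 15, 28]
```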
|
Future communication systems will be characterized by ubiquitous connectivity
and security. These features will be essential requirements for the efficient
functioning of futuristic applications. In this paper, in order to
highlight the impact of blockchain and 6G on the future communication systems,
we categorize these application requirements into two broad groups. In the
first category, called Requirement Group I \mbox{(RG-I)}, we include the
performance-related needs on data rates, latency, reliability and massive
connectivity, while in the second category, called Requirement Group II
\mbox{(RG-II)}, we include the security-related needs on data integrity,
non-repudiability, and auditability. With blockchain and 6G, the network
decentralization and resource sharing would minimize resource under-utilization
thereby facilitating RG-I targets. Furthermore, through appropriate selection
of blockchain type and consensus algorithms, RG-II needs of 6G applications can
also be readily addressed. Through this study, the combination of blockchain
and 6G emerges as an elegant solution for secure and ubiquitous future
communication.
|
A singular perturbation problem from the artificial compressible system to
the incompressible system is considered for a doubly diffusive convection when
a Hopf bifurcation from the motionless state occurs in the incompressible
system. It is proved that the Hopf bifurcation also occurs in the artificial
compressible system for small singular perturbation parameter, called the
artificial Mach number. The time periodic solution branch of the artificial
compressible system is shown to converge to the corresponding bifurcating
branch of the incompressible system in the singular limit of vanishing
artificial Mach number.
|
We present vir, an R package for variational inference with shrinkage priors.
Our package implements variational and stochastic variational algorithms for
linear and probit regression models, the use of which is a common first step in
many applied analyses. We review variational inference and show how the
derivation for a Gibbs sampler can be easily modified to derive a corresponding
variational or stochastic variational algorithm. We provide simulations showing
that, at least for a normal linear model, variational inference can lead to
uncertainty quantification similar to that of the corresponding Gibbs samplers, while
estimating the model parameters at a fraction of the computational cost. Our
timing experiments show situations in which our algorithms converge faster than
the frequentist LASSO implementations in glmnet while simultaneously providing
superior parameter estimation and variable selection. Hence, our package can be
utilized to quickly explore different combinations of predictors in a linear
model, while providing accurate uncertainty quantification in many applied
situations. The package is implemented natively in R and RcppEigen, which has
the benefit of bypassing the substantial operating system specific overhead of
linking external libraries to work efficiently with R.
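To illustrate the Gibbs-to-variational recipe mentioned above (in Python rather than R, and with a fixed ridge-type prior instead of the package's shrinkage priors), here is a minimal mean-field coordinate-ascent sketch for the normal linear model; its updates mirror the Gibbs full conditionals with sampled values replaced by expectations.

```python
import numpy as np

def vb_linear(X, y, lam=1.0, a0=1.0, b0=1.0, iters=50):
    """Mean-field VB for y = X beta + eps, eps ~ N(0, 1/tau), with priors
    beta ~ N(0, (1/lam) I) and tau ~ Gamma(a0, b0).  Uses q(beta) = N(m, S)
    and q(tau) = Gamma(a, b)."""
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    E_tau = a0 / b0
    for _ in range(iters):
        S = np.linalg.inv(E_tau * XtX + lam * np.eye(p))  # q(beta) covariance
        m = E_tau * S @ Xty                               # q(beta) mean
        resid = y - X @ m
        a = a0 + 0.5 * n
        b = b0 + 0.5 * (resid @ resid + np.trace(XtX @ S))
        E_tau = a / b                                     # E_q[tau]
    return m, S, a, b

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([2.0, -1.0, 0.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=200)
m, S, a, b = vb_linear(X, y)
print(np.round(m, 2), "estimated noise precision:", round(a / b, 2))
```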
|
The use of Cauchy Markov random field priors in statistical inverse problems
can potentially lead to posterior distributions which are non-Gaussian,
high-dimensional, multimodal and heavy-tailed. In order to use such priors
successfully, sophisticated optimization and Markov chain Monte Carlo (MCMC)
methods are usually required. In this paper, we review recently developed
Cauchy difference priors, introduce interesting new variants, and provide a
comparison. We first propose a one-dimensional
second order Cauchy difference prior, and construct new first and second order
two-dimensional isotropic Cauchy difference priors. Another new Cauchy prior is
based on the stochastic partial differential equation approach, derived from
a Mat\'{e}rn-type Gaussian presentation. The comparison also includes Cauchy
sheets. Our numerical computations are based on both maximum a posteriori and
conditional mean estimation. We exploit state-of-the-art MCMC methodologies such
as Metropolis-within-Gibbs, Repelling-Attracting Metropolis, and the No-U-Turn
sampler variant of Hamiltonian Monte Carlo. We demonstrate the models and
methods constructed for one-dimensional and two-dimensional deconvolution
problems. Thorough MCMC statistics are provided for all test cases, including
potential scale reduction factors.
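As a concrete example of the simplest member of this family, the following sketches the unnormalized log-density of a one-dimensional first-order Cauchy difference prior, in which increments are i.i.d. Cauchy with scale $\gamma$ (the second-order and isotropic two-dimensional variants modify the increments accordingly; the constants are illustrative):

```python
import numpy as np

def cauchy_diff_logprior(x, gamma=0.1):
    """Unnormalized log-density of a first-order Cauchy difference prior:
    increments x[i+1] - x[i] are i.i.d. Cauchy(0, gamma)."""
    d = np.diff(x)
    return np.sum(np.log(gamma / (np.pi * (gamma**2 + d**2))))

# The heavy tails penalize a single large jump far less than a Gaussian
# difference prior would, which is why such priors preserve edges.
t = np.linspace(0, 1, 100)
print(cauchy_diff_logprior(t), cauchy_diff_logprior((t > 0.5).astype(float)))
```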
|
Theoretical treatments of periodically-driven quantum thermal machines
(PD-QTMs) are largely focused on the limit-cycle stage of operation
characterized by a periodic state of the system. Yet, this regime is not
immediately accessible for experimental verification. Here, we present a
general thermodynamic framework that can handle the performance of PD-QTMs both
before and during the limit-cycle stage of operation. This is achieved by
observing that periodicity may break down at the ensemble average level, even
in the limit-cycle phase. With this observation, and using conventional
thermodynamic expressions for work and heat, we find that a complete
description of the first law of thermodynamics for PD-QTMs requires a new
contribution, which vanishes only in the limit-cycle phase under rather weak
system-bath couplings. Significantly, this contribution is substantial at
strong couplings even at limit cycle, thus largely affecting the behavior of
the thermodynamic efficiency. We demonstrate our framework by simulating a
quantum Otto engine built upon a driven resonant level model. Our results
provide new insights towards a complete description of PD-QTMs, from turn-on to
the limit-cycle stage and, particularly, shed light on the development of
quantum thermodynamics at strong coupling.
|
Segmentation of pathological images is essential for accurate disease
diagnosis. The quality of manual labels plays a critical role in segmentation
accuracy; yet, in practice, the labels between pathologists could be
inconsistent, thus confusing the training process. In this work, we propose a
novel label re-weighting framework to account for the reliability of different
experts' labels on each pixel according to its surrounding features. We further
devise a new attention heatmap, which takes roughness as prior knowledge to
guide the model to focus on important regions. Our approach is evaluated on the
public Gleason 2019 dataset. The results show that our approach effectively
improves the model's robustness against noisy labels and outperforms
state-of-the-art approaches.
|
We present a new method and a large-scale database to detect audio-video
synchronization (A/V sync) errors in tennis videos. A deep network is trained to
detect the visual signature of the tennis ball being hit by the racquet in the
video stream. Another deep network is trained to detect the auditory signature
of the same event in the audio stream. During evaluation, the audio stream is
searched by the audio network for the audio event of the ball being hit. If the
event is found in audio, the neighboring interval in video is searched for the
corresponding visual signature. If the event is not found in the video stream
but is found in the audio stream, an A/V sync error is flagged. We developed a
large-scale database of 504,300 frames from 6 hours of videos of tennis
events, simulated A/V sync errors, and found our method achieves high accuracy
on the task.
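A sketch of the flagging logic described above, assuming two hypothetical per-frame event detectors whose scores are passed in (`audio_scores`, `video_scores`); the thresholds and the search window are illustrative:

```python
import numpy as np

def flag_av_sync_errors(audio_scores, video_scores, fps,
                        thr_a=0.8, thr_v=0.5, window_s=0.5):
    """For each audio hit event, search the neighbouring video interval for
    the visual signature; if it is absent, flag an A/V sync error.
    Scores are per-frame detector outputs in [0, 1]."""
    errors = []
    w = int(window_s * fps)
    for f in np.flatnonzero(audio_scores > thr_a):
        lo, hi = max(0, f - w), min(len(video_scores), f + w + 1)
        if video_scores[lo:hi].max() < thr_v:   # no visual signature nearby
            errors.append(f / fps)              # timestamp in seconds
    return errors

# Illustrative usage with synthetic scores (real scores would come from the
# two trained networks described above).
rng = np.random.default_rng(0)
a = rng.uniform(0, 0.3, 1000); a[[100, 400, 800]] = 0.95
v = rng.uniform(0, 0.3, 1000); v[[102, 401]] = 0.9   # hit at frame 800 missing
print(flag_av_sync_errors(a, v, fps=30))             # ~[26.67]
```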
|
The so-called improved soft-aided bit-marking algorithm was recently proposed
for staircase codes (SCCs) in the context of fiber optical communications. This
algorithm is known as iSABM-SCC. With the help of channel soft information, the
iSABM-SCC decoder marks bits via thresholds to deal with both miscorrections
and failures of hard-decision (HD) decoding. In this paper, we study iSABM-SCC
focusing on the parameter optimization of the algorithm and its performance
analysis, in terms of the gap to the achievable information rates (AIRs) of HD
codes and the fiber reach enhancement. We show in this paper that the marking
thresholds and the number of modified component decodings heavily affect the
performance of iSABM-SCC, and thus, they need to be carefully optimized. By
replacing standard decoding with the optimized iSABM-SCC decoding, the gap to
the AIRs of HD codes can be reduced to 0.26-1.02 dB for code rates of 0.74-0.87
in the additive white Gaussian noise channel with 8-ary pulse amplitude
modulation. The obtained reach increase is up to 22% for data rates between 401
Gbps and 468 Gbps in an optical fiber channel.
|
The Electron-Ion Collider (EIC) Yellow Report specified parameters for the
general-purpose detector that can deliver the scientific goals delineated by
the EIC White Paper and NAS report. These parameters dictate the tracking
momentum resolution, secondary-vertex resolutions, calorimeter energy
resolutions, as well as $\pi/K/p$ ID. We have incorporated these parameters
into a configuration card for Delphes, which is a widely used "C++ framework,
for performing a fast multipurpose detector response simulation". We include
both the 1.5 T and 3.0 T scenarios. We also show the expected performance for
high-level quantities such as jets, missing transverse energy, charm tagging,
and others. These parametrizations can be easily updated with more refined
Geant4 studies, which provide an efficient way to perform simulations to
benchmark a variety of observables using state-of-the-art event generators such
as Pythia8.
|
Building on a result by Tao, we show that a certain type of simple closed
curve in the plane given by the union of the graphs of two $1$-Lipschitz
functions inscribes a square whose sidelength is bounded from below by a
universal constant times the maximum of the difference of the two functions.
|
In this paper, the tracking problem for multi-agent systems is considered in a
particular scenario where a subset of agents enters a sensing-denied environment
or behaves as non-cooperative targets. The focus is on determining
the optimal sensor precisions while simultaneously promoting sparseness in the
sensor measurements to guarantee a specified estimation performance. The
problem is formulated in the discrete-time centralized Kalman filtering
framework. A semi-definite program subject to linear matrix inequalities is
solved to minimize the trace of the precision matrix, defined as the
inverse of the sensor noise covariance matrix. Simulation results expose a
trade-off between sensor precisions and sensing frequency.
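A hedged sketch of the trace-of-precision idea in cvxpy, on a simplified static-estimation analogue rather than the paper's Kalman-filtering LMI: choose per-sensor precisions $s_i$ (inverse noise variances) minimizing their sum subject to a Fisher-information constraint that guarantees a specified accuracy; since $s \ge 0$, the objective acts like an $\ell_1$ penalty and promotes sparse sensing.

```python
import cvxpy as cp
import numpy as np

# Simplified static analogue (illustrative, not the paper's formulation):
# require sum_i s_i c_i c_i^T >= E_min in the PSD order while minimizing
# sum(s); unneeded sensor precisions are driven to zero.
rng = np.random.default_rng(0)
m, n = 8, 3                          # 8 candidate sensors, 3 states
C = rng.normal(size=(m, n))          # rows c_i are sensor measurement vectors
E_min = 5.0 * np.eye(n)              # required information (accuracy) level

s = cp.Variable(m, nonneg=True)      # precisions = inverse noise variances
info = sum(s[i] * np.outer(C[i], C[i]) for i in range(m))
slack = cp.Variable((n, n), PSD=True)        # info - E_min must be PSD
prob = cp.Problem(cp.Minimize(cp.sum(s)), [info == E_min + slack])
prob.solve()
print("optimal precisions:", np.round(s.value, 3))
print("inactive sensors:", np.flatnonzero(s.value < 1e-6).tolist())
```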
|
Physical systems characterized by a shallow two-body bound or virtual state
are governed at large distances by a continuous-scale invariance, which is
broken to a discrete one when three or more particles come into play. This
symmetry induces a universal behavior for different systems, independent of the
details of the underlying interaction, rooted in the smallness of the ratio
$\ell/a_B \ll 1$, where the length $a_B$ is associated with the binding energy of
the two-body system $E_2=\hbar^2/m a_B^2$ and $\ell$ is the natural length
given by the interaction range. Efimov physics refers to this universal
behavior, which is often hidden by the onset of system-specific non-universal
effects. In this work we identify universal properties by providing an explicit
link of physical systems to their unitary limit, in which
$a_B\rightarrow\infty$, and show that nuclear systems belong to this class of
universality.
|
An attention matrix of a transformer self-attention sublayer can provably be
decomposed into two components and only one of them (effective attention)
contributes to the model output. This leads us to ask whether visualizing
effective attention gives different conclusions than interpretation of standard
attention. Using a subset of the GLUE tasks and BERT, we carry out an analysis
to compare the two attention matrices, and show that their interpretations
differ. Effective attention is less associated with the features related to the
language modeling pretraining such as the separator token, and it has more
potential to illustrate linguistic features captured by the model for solving
the end-task. Given the observed differences, we recommend using effective
attention for studying a transformer's behavior, since it is more pertinent to
the model output by design.
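One standard way to realize such a decomposition (our assumption of the construction, with illustrative shapes): rows of the attention matrix lying in the left null space of the value matrix $V$ contribute nothing to $AV$, so effective attention is $A$ with that component projected out.

```python
import numpy as np

def effective_attention(A, V, tol=1e-10):
    """Split attention A (t x t) w.r.t. values V (t x d): row components in
    the left null space of V (u with u^T V = 0) do not affect A @ V."""
    U, sing, _ = np.linalg.svd(V, full_matrices=True)
    rank = int(np.sum(sing > tol))
    N = U[:, rank:]                  # orthonormal basis of the left null space
    return A - A @ N @ N.T           # project the null component out of rows

rng = np.random.default_rng(0)
t, d = 6, 3                          # seq length > head dim => null space exists
A = rng.dirichlet(np.ones(t), size=t)    # row-stochastic attention weights
V = rng.normal(size=(t, d))
A_eff = effective_attention(A, V)
print(np.allclose(A @ V, A_eff @ V))     # True: identical model output
print(np.abs(A - A_eff).max())           # yet the visualized maps differ
```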
|
We propose an affine-mapping based variational Ensemble Kalman filter for
sequential Bayesian filtering problems with generic observation models.
Specifically, the proposed method is formulated as constructing an affine
mapping from the prior ensemble to the posterior one, and the affine mapping is
computed via a variational Bayesian formulation, i.e., by minimizing the
Kullback-Leibler divergence between the distribution transformed through the
affine mapping and the actual posterior. Some theoretical properties of the
resulting optimization problem are studied, and a gradient descent scheme is
proposed to solve it. With numerical examples
we demonstrate that the method has competitive performance against existing
methods.
|
In this work we present a novel, robust transition generation technique that
can serve as a new tool for 3D animators, based on adversarial recurrent neural
networks. The system synthesizes high-quality motions that use
temporally-sparse keyframes as animation constraints. This is reminiscent of
the job of in-betweening in traditional animation pipelines, in which an
animator draws motion frames between provided keyframes. We first show that a
state-of-the-art motion prediction model cannot be easily converted into a
robust transition generator when only adding conditioning information about
future keyframes. To solve this problem, we then propose two novel additive
embedding modifiers that are applied at each timestep to latent representations
encoded inside the network's architecture. One modifier is a time-to-arrival
embedding that allows variations of the transition length with a single model.
The other is a scheduled target noise vector that allows the system to be
robust to target distortions and to sample different transitions given fixed
keyframes. To qualitatively evaluate our method, we present a custom
MotionBuilder plugin that uses our trained model to perform in-betweening in
production scenarios. To quantitatively evaluate performance on transitions and
generalizations to longer time horizons, we present well-defined in-betweening
benchmarks on a subset of the widely used Human3.6M dataset and on LaFAN1, a
novel high quality motion capture dataset that is more appropriate for
transition generation. We are releasing this new dataset along with this work,
with accompanying code for reproducing our baseline results.
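The time-to-arrival ingredient resembles a sinusoidal positional encoding indexed by the number of frames remaining until the target keyframe; a hedged sketch of that idea (the dimension and frequency constants are illustrative, not the paper's exact values):

```python
import numpy as np

def time_to_arrival_embedding(tta, dim=64, max_tta=60.0):
    """Sinusoidal embedding of the number of frames remaining until the
    target keyframe.  Added to the latent state at each timestep, it lets a
    single model generate transitions of varying lengths."""
    i = np.arange(dim // 2)
    freqs = 1.0 / (max_tta ** (2 * i / dim))   # geometric frequency ladder
    angles = tta * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

# During synthesis of a 30-frame transition the embedding counts down; a
# scheduled target-noise vector (second modifier) would be added similarly.
for t in range(30):
    z_tta = time_to_arrival_embedding(30 - t)
    # latent_t = encoder(pose_t) + z_tta  (hypothetical integration point)
print(z_tta[:4])
```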
|
The use of machine learning to develop intelligent software tools for
interpretation of radiology images has gained widespread attention in recent
years. The development, deployment, and eventual adoption of these models in
clinical practice, however, remains fraught with challenges. In this paper, we
propose a list of key considerations that machine learning researchers must
recognize and address to make their models accurate, robust, and usable in
practice. Namely, we discuss: insufficient training data, decentralized
datasets, high cost of annotations, ambiguous ground truth, imbalance in class
representation, asymmetric misclassification costs, relevant performance
metrics, generalization of models to unseen datasets, model decay, adversarial
attacks, explainability, fairness and bias, and clinical validation. We
describe each consideration and identify techniques to address it. Although
these techniques have been discussed in prior research literature, by freshly
examining them in the context of medical imaging and compiling them in the form
of a laundry list, we hope to make them more accessible to researchers,
software developers, radiologists, and other stakeholders.
|
The transverse field in the quantum Ising chain is linearly ramped from the
para- to the ferromagnetic phase across the quantum critical point at a rate
characterized by a quench time $\tau_Q$. We calculate a connected kink-kink
correlator in the final state at zero transverse field. The correlator is a sum
of two terms: a negative (anti-bunching) Gaussian that depends on the
Kibble-Zurek (KZ) correlation length only and a positive term that depends on a
second longer scale of length. The second length is made longer by dephasing of
the state excited near the critical point during the following ramp across the
ferromagnetic phase. This interpretation is corroborated by considering a
linear ramp that is halted in the ferromagnetic phase for a finite waiting time
and then continued at the same rate as before the halt. The extra time
available for dephasing increases the second scale of length that
asymptotically grows linearly with the waiting time. The dephasing also
suppresses the magnitude of the second term, making it negligible for waiting times
much longer than $\tau_Q$. The same dephasing can be obtained with a smooth
ramp that slows down in the ferromagnetic phase. Assuming sufficient dephasing
we obtain also higher order kink correlators and the ferromagnetic correlation
function.
|
A Sidon space is a subspace of an extension field over a base field in which
the product of any two elements can be factored uniquely, up to constants. This
paper proposes a new public-key cryptosystem of the multivariate type which is
based on Sidon spaces, and has the potential to remain secure even if quantum
supremacy is attained. This system, whose security relies on the hardness of
the well-known MinRank problem, is shown to be resilient to several
straightforward algebraic attacks. In particular, it is proved that the two
popular attacks on the MinRank problem, the kernel attack, and the minor
attack, succeed only with exponentially small probability. The system is
implemented in software, and its hardness is demonstrated experimentally.
|
Mortality risk is a major concern for patients who have just been discharged from
the intensive care unit (ICU). Many studies have sought to construct
machine learning models to predict such risk. Although these models are highly
accurate, they are less amenable to interpretation and clinicians are typically
unable to gain further insights into the patients' health conditions and the
underlying factors that influence their mortality risk. In this paper, we use
patients' profiles extracted from the MIMIC-III clinical database to construct
risk calculators based on different machine learning techniques such as
logistic regression, decision trees, random forests and multilayer perceptrons.
We perform an extensive benchmarking study that compares the most salient
features as predicted by various methods. We observe a high degree of agreement
across the considered machine learning methods; in particular, the cardiac
surgery recovery unit, age, and blood urea nitrogen levels are commonly
predicted to be the most salient features for determining patients' mortality
risks. Our work has the potential to help clinicians interpret risk predictions.
|
Photon detection at microwave frequencies is of great interest due to its
applications in quantum computation, information science and technology. Herein
we report results from studying the microwave response of a topological
superconducting quantum interference device (SQUID) realized in the Dirac
semimetal Cd3As2. The temperature dependence and microwave power dependence of
the SQUID junction resistance are studied, from which we obtain an effective
temperature at each microwave power level. It is observed that the effective
temperature increases with the microwave power. This observation of microwave
response may pave the way
for single photon detection at the microwave frequency in topological quantum
materials.
|
Probabilistic programming languages aim to describe and automate Bayesian
modeling and inference. Modern languages support programmable inference, which
allows users to customize inference algorithms by incorporating guide programs
to improve inference performance. For Bayesian inference to be sound, guide
programs must be compatible with model programs. One pervasive but challenging
condition for model-guide compatibility is absolute continuity, which requires
that the model and guide programs define probability distributions with the
same support.
This paper presents a new probabilistic programming language that guarantees
absolute continuity, and features general programming constructs, such as
branching and recursion. Model and guide programs are implemented as coroutines
that communicate with each other to synchronize the set of random variables
they sample during their execution. Novel guide types describe and enforce
communication protocols between coroutines. If the model and guide are
well-typed using the same protocol, then they are guaranteed to enjoy absolute
continuity. An efficient algorithm infers guide types from code so that users
do not have to specify the types. The new programming language is evaluated
with an implementation that includes the type-inference algorithm and a
prototype compiler that targets Pyro. Experiments show that our language is
capable of expressing a variety of probabilistic models with nontrivial control
flow and recursion, and that the coroutine-based computation does not introduce
significant overhead in actual Bayesian inference.
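A toy illustration of the coroutine view (not the paper's language or type system): the model and guide are generators that announce each sample site, and a runner checks that they announce the same sites in the same order, a dynamic stand-in for what guide types guarantee statically.

```python
def model(_):
    b = yield ("sample", "branch")     # e.g. a Bernoulli choice
    yield ("sample", "mu" if b else "nu")

def guide(_):
    b = yield ("sample", "branch")
    yield ("sample", "mu" if b else "nu")  # renaming a site breaks support

def check_compatible(m, g, draws):
    """Run both coroutines in lockstep on the same draws; raise if the
    announced sample sites ever diverge (a support mismatch)."""
    mv, gv = next(m), next(g)
    for d in draws:
        if mv != gv:
            raise TypeError(f"site mismatch: model {mv} vs guide {gv}")
        try:
            mv, gv = m.send(d), g.send(d)
        except StopIteration:
            return True
    return True

print(check_compatible(model(None), guide(None), draws=[True, 0.3]))  # True
```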
|
In this paper we propose a novel data augmentation approach for visual
content domains that have scarce training datasets, compositing synthetic 3D
objects within real scenes. We show the performance of the proposed system in
the context of object detection in thermal videos, a domain where 1) training
datasets are very limited compared to visible spectrum datasets and 2) creating
full realistic synthetic scenes is extremely cumbersome and expensive due to
the difficulty in modeling the thermal properties of the materials of the
scene. We compare different augmentation strategies, including state-of-the-art
approaches obtained through RL techniques, the injection of simulated data, and
the employment of a generative model, and study how to best combine our
proposed augmentation with these other techniques. Experimental results
demonstrate the effectiveness of our approach, and our single-modality detector
achieves state-of-the-art results on the FLIR ADAS dataset.
|
We investigate a Sobolev map $f$ from a finite dimensional RCD space $(X,
\dist_X, \meas_X)$ to a finite dimensional non-collapsed compact RCD space $(Y,
\dist_Y, \mathcal{H}^N)$. If the image $f(X)$ is smooth in a weak sense (which
is satisfied if $f_{\sharp}\meas_X$ is absolutely continuous with respect to
the Hausdorff measure $\mathcal{H}^N$, or if $(Y, \dist_Y, \mathcal{H}^N)$ is
smooth in a weak sense), then the pull-back $f^*g_Y$ of the Riemannian metric
$g_Y$ of $(Y, \dist_Y, \mathcal{H}^N)$ is well-defined as an $L^1$-tensor on
$X$, the minimal weak upper gradient $G_f$ of $f$ can be written by using
$f^*g_Y$, and it coincides with the local slope for $\meas_X$-almost everywhere
points in $X$ when $f$ is Lipschitz. In particular the last statement gives a
nonlinear analogue of Cheeger's differentiability theorem for Lipschitz
functions on metric measure spaces. Moreover these results allow us to define
the energy of $f$. The energy coincides with the Korevaar-Schoen energy. In
order to achieve this, we use a smoothing of $g_Y$ via the heat kernel
embedding $\Phi_t:Y \hookrightarrow L^2(Y, \mathcal{H}^N)$, which is
established by Ambrosio-Portegies-Tewodrose and the first named author.
Moreover we improve the regularity of $\Phi_t$, which plays a key role. We show
also that $(Y, \dist_Y)$ is isometric to the $N$-dimensional standard unit
sphere in $\mathbb{R}^{N+1}$ and $f$ is a minimal isometric immersion if and
only if $(X, \dist_X, \meas_X)$ is non-collapsed up to a multiplication of a
constant to $\meas_X$, and $f$ is an eigenmap whose eigenvalues coincide with
the essential dimension of $(X, \dist_X, \meas_X)$, which gives a positive
answer to a remaining problem from a previous work by the first named author.
|
Graph-based analyses have gained a lot of relevance in the past years due to
their high potential in describing complex systems by detailing the actors
involved, their relations and their behaviours. Nevertheless, in scenarios
where these aspects are evolving over time, it is not easy to extract valuable
information or to correctly characterize all the actors. In this study, a
two-phase approach for exploiting the potential of graph structures in the
cybersecurity domain is presented. The main idea is to convert a network
classification problem into a graph-based behavioural one. We extract these
graph structures that can represent the evolution of both normal and attack
entities and apply a temporal dissection approach in order to highlight their
micro-dynamics. Further, three clustering techniques are applied to the normal
entities in order to aggregate similar behaviours, mitigate the imbalance
problem and reduce noisy data. Our approach suggests the implementation of two
promising deep learning paradigms for entity classification based on Graph
Convolutional Networks.
|
Within Transformer, self-attention is the key module to learn powerful
context-aware representations. However, self-attention suffers from quadratic
memory requirements with respect to the sequence length, which limits our
ability to process longer sequences on GPUs. In this work, we propose sequence
parallelism, a memory-efficient parallelism method that helps us break the
input sequence length limitation and train on longer sequences on GPUs.
Compared with existing parallelism approaches, ours no longer requires a single
device to hold the whole sequence. Specifically, we split the input sequence
into multiple chunks and feed each chunk into its corresponding device (i.e.,
GPU). To compute the attention output, we communicate attention embeddings
among GPUs. Inspired by ring all-reduce, we integrate ring-style communication
with the self-attention calculation and propose Ring Self-Attention (RSA). Our
implementation is fully
based on PyTorch. Without extra compiler or library changes, our approach is
compatible with data parallelism and pipeline parallelism. Experiments show
that sequence parallelism performs well when scaling with batch size and
sequence length. Compared with tensor parallelism, our approach achieved
$13.7\times$ and $3.0\times$ maximum batch size and sequence length
respectively when scaling up to 64 NVIDIA P100 GPUs. We plan to integrate our
sequence parallelism with data, pipeline and tensor parallelism to further
train large-scale models with 4D parallelism in our future work.
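A single-process numpy simulation of the ring idea (a sketch of ring-style attention communication in general, not the authors' implementation): each of P "devices" owns one query/key/value chunk, and key and then value chunks circulate around the ring, so every device computes attention for its queries without ever holding the full K or V.

```python
import numpy as np

def ring_self_attention(Q, K, V, P):
    """Simulate ring-style self-attention: split the sequence into P chunks
    and rotate key, then value, chunks around the ring.  Matches standard
    attention exactly."""
    d = Q.shape[-1]
    Qc, Kc, Vc = (np.array_split(M, P) for M in (Q, K, V))
    # Ring pass 1: device i accumulates all score blocks Q_i K_j^T.
    scores = [[None] * P for _ in range(P)]
    ring = list(Kc)
    for step in range(P):
        for i in range(P):
            scores[i][(i + step) % P] = Qc[i] @ ring[i].T / np.sqrt(d)
        ring = ring[1:] + ring[:1]          # pass chunks to the next device
    att = [np.hstack(row) for row in scores]
    att = [np.exp(a - a.max(1, keepdims=True)) for a in att]
    att = [a / a.sum(1, keepdims=True) for a in att]
    # Ring pass 2: rotate value chunks and accumulate the weighted sums.
    out = [np.zeros((qc.shape[0], d)) for qc in Qc]
    bounds = np.cumsum([0] + [vc.shape[0] for vc in Vc])
    ring = list(Vc)
    for step in range(P):
        for i in range(P):
            j = (i + step) % P
            out[i] += att[i][:, bounds[j]:bounds[j + 1]] @ ring[i]
        ring = ring[1:] + ring[:1]
    return np.vstack(out)

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(16, 8)) for _ in range(3))
ref = np.exp(Q @ K.T / np.sqrt(8)); ref /= ref.sum(1, keepdims=True)
print(np.allclose(ring_self_attention(Q, K, V, P=4), ref @ V))  # True
```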
|
Even as the understanding of the mechanism behind correlated insulating
states in magic-angle twisted bilayer graphene converges towards various kinds
of spontaneous symmetry breaking, the metallic "normal state" above the
insulating transition temperature remains mysterious, with its excessively high
entropy and linear-in-temperature resistivity. In this work, we focus on the
effects of fluctuations of the order-parameters describing correlated
insulating states at integer fillings of the low-energy flat bands on charge
transport. Motivated by the observation of heterogeneity in the order-parameter
landscape at zero magnetic field in certain samples, we conjecture the
existence of frustrating extended range interactions in an effective Ising
model of the order-parameters on a triangular lattice. The competition between
short-distance ferromagnetic interactions and frustrating extended range
antiferromagnetic interactions leads to an emergent length scale that forms
stripe-like mesoscale domains above the ordering transition. The gapless
fluctuations of these heterogeneous configurations are found to be responsible
for the linear-in-temperature resistivity as well as the enhanced low
temperature entropy. Our insights link experimentally observed
linear-in-temperature resistivity and enhanced entropy to the strength of
frustration, or equivalently, to the emergence of mesoscopic length scales
characterizing order-parameter domains.
|
The Controller Area Network (CAN) bus works as an important protocol in the
real-time In-Vehicle Network (IVN) systems thanks to its simple, suitable, and
robust architecture. Nevertheless, IVN devices remain insecure and vulnerable
because complex data-intensive architectures greatly increase accessibility to
unauthorized networks and the possibility of various types of cyberattacks.
Therefore, the detection of cyberattacks in IVN devices has become a topic of
growing interest. With the rapid development of IVNs and evolving threat types,
traditional machine learning-based IDSs must be updated to cope with the
security requirements of the current environment. The recent progress of deep
learning and deep transfer learning, and their impactful outcomes in several
areas, points to them as effective solutions for network intrusion detection.
This manuscript proposes a deep transfer learning-based IDS model for IVNs with
improved performance in comparison to several other existing models. The unique
contributions include an effective attribute selection method best suited to
identifying malicious CAN messages and accurately detecting normal and abnormal
activities, the design of a deep transfer learning-based LeNet model, and an
evaluation on real-world data. To this end, an extensive
experimental performance evaluation has been conducted. The architecture along
with empirical analyses shows that the proposed IDS greatly improves the
detection accuracy over the mainstream machine learning, deep learning, and
benchmark deep transfer learning models and has demonstrated better performance
for real-time IVN security.
|
Symmetry is among the most fundamental and powerful concepts in nature, whose
existence is usually taken as given, without explanation. We explore whether
symmetry can be derived from more fundamental principles from the perspective
of quantum information. Starting with a two-qubit system, we show there are
only two minimally entangling logic gates: the Identity and the SWAP, where
SWAP interchanges the two states of the qubits. We further demonstrate that,
when viewed as an entanglement operator in the spin-space, the $S$-matrix in
the two-body scattering of fermions in the $s$-wave channel is uniquely
determined by unitarity and rotational invariance to be a linear combination of
the Identity and the SWAP. Realizing a minimally entangling $S$-matrix would
give rise to global symmetries, as exemplified in Wigner's spin-flavor symmetry
and Schr\"odinger's conformal invariance in low energy Quantum Chromodynamics.
For $N_q$ species of qubit, the Identity gate is associated with an
$[SU(2)]^{N_q}$ symmetry, which is enlarged to $SU(2N_q)$ when there is a
species-universal coupling constant.
|
The presence of A/F-type {\it Kepler} hybrid stars extending across the
entire $\delta$ Sct-$\gamma$ Dor instability strips and beyond remains largely
unexplained. In order to better understand these particular stars, we performed
a multi-epoch spectroscopic study of 49 candidate A/F-type hybrid stars and one
cool(er) hybrid object detected by the {\it Kepler} mission. We determined a
lower limit of 27 % for the multiplicity fraction. For six spectroscopic
systems, we also reported long-term variations of the time delays. For four
systems, the time delay variations are fully coherent with those of the radial
velocities (RVs) and can be attributed to orbital motion. We aim to improve the
orbital solutions for those systems with long orbital periods (order of 4-6
years) among the {\it Kepler} hybrid stars. The orbits are computed based on a
simultaneous modelling of the RVs obtained with high-resolution spectrographs
and the photometric time delays derived from time-dependent frequency analyses
of the {\it Kepler} light curves. We refined the orbital solutions of four
spectroscopic systems with A/F-type {\it Kepler} hybrid component stars: KIC
4480321, 5219533, 8975515 and KIC 9775454. Simultaneous modelling of both data
types enabled us to improve the orbital solutions, obtain
more robust and accurate information on the mass ratio, and identify the
component with the short-period $\delta$ Sct-type pulsations. In several cases,
we were also able to derive new constraints for the minimum component masses.
From a search for regular frequency patterns in the high-frequency regime of
the Fourier transforms of each system, we found no evidence of tidal splitting
among the triple systems with close (inner) companions. However, some systems
exhibit frequency spacings which can be explained by the mechanism of
rotational splitting.
|
Multi-type recurrent events are often encountered in medical applications
when two or more different event types could repeatedly occur over an
observation period. For example, patients may experience recurrences of
multi-type nonmelanoma skin cancers in a clinical trial for skin cancer
prevention. The aims in those applications are to characterize features of the
marginal processes, evaluate covariate effects, and quantify both the
within-subject recurrence dependence and the dependence among different event
types. We use copula-frailty models to analyze correlated recurrent events of
different types. Parameter estimation and inference are carried out by using a
Monte Carlo expectation-maximization (MCEM) algorithm, which can handle a
relatively large (i.e., three or more) number of event types. Performances of
the proposed methods are evaluated via extensive simulation studies. The
developed methods are used to model the recurrences of skin cancer with
different types.
|
We introduce AOT, an anonymous communication system based on mix network
architecture that uses oblivious transfer (OT) to deliver messages. Using OT to
deliver messages helps AOT resist blending ($n-1$) attacks and helps AOT
preserve receiver anonymity, even if a covert adversary controls all nodes in
AOT. AOT comprises three levels of nodes, where nodes at each level perform a
different function and can scale horizontally. The sender encrypts their
payload and a tag, derived from a secret shared between the sender and
receiver, with the public key of a Level-2 node and sends them to a Level-1
node. On a public bulletin board, Level-3 nodes publish tags associated with
messages ready to be retrieved. Each receiver checks the bulletin board,
identifies tags, and receives the associated messages using OT. A receiver can
receive their messages even if the receiver is offline when messages are ready.
Through what we call a "handshake" process, communicants can use the AOT
protocol to establish shared secrets anonymously. Users play an active role in
contributing to the unlinkability of messages: periodically, users initiate
requests to AOT to receive dummy messages, such that an adversary cannot
distinguish real and dummy requests.
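A sketch of the sender-side tag construction described above, using only Python's standard library; deriving tags as an HMAC over the shared secret and a message counter is our illustrative assumption, not necessarily the paper's exact construction.

```python
import hashlib
import hmac
import secrets

def derive_tag(shared_secret: bytes, counter: int) -> bytes:
    """Tag for the counter-th message, derived from the shared secret.  Only
    the two communicants can predict it; to everyone else (including AOT
    nodes) it looks random."""
    return hmac.new(shared_secret, counter.to_bytes(8, "big"),
                    hashlib.sha256).digest()

# Sender: the tag and encrypted payload would be wrapped under a Level-2
# node's public key and submitted to a Level-1 node (encryption omitted).
shared_secret = secrets.token_bytes(32)   # established via the "handshake"
tag = derive_tag(shared_secret, counter=0)

# Receiver: recompute expected tags, scan the bulletin board published by
# Level-3 nodes, then fetch matching messages via oblivious transfer so that
# nodes never learn which message was retrieved.
board = [secrets.token_bytes(32), tag, secrets.token_bytes(32)]
mine = [t for t in board if hmac.compare_digest(t, derive_tag(shared_secret, 0))]
print(len(mine))  # 1
```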
|
With the continuous rise of malicious campaigns and the exploitation of new
attack vectors, it is necessary to assess the efficacy of the defensive
mechanisms used to detect them. To this end, the contribution of our work is
twofold. First, it introduces a new method for obfuscating malicious code to
bypass all static checks of multi-engine scanners, such as VirusTotal.
Interestingly, our approach to generating the malicious executables is not
based on introducing a new packer but on augmenting the capabilities of
PyInstaller, an existing and widely used tool for packaging Python; the same
approach can be applied to all similar packaging tools. As we prove, the
problem is deeper and
inherent in almost all antivirus engines and not PyInstaller specific. Second,
our work exposes significant issues of well-known sandboxes that allow malware
to evade their checks. As a result, we show that stealth and evasive malware
can be efficiently developed, bypassing with ease state-of-the-art malware
detection tools without raising any alert.
|
We present a toolkit of directed distances between quantile functions. By
employing this toolkit, we solve some new optimal transport (OT) problems
which, for example, add considerable flexibility to some prominent OT
formulations expressed through Wasserstein distances.
|
Creating safe concurrent algorithms is challenging and error-prone. For this
reason, a formal verification framework is necessary especially when those
concurrent algorithms are used in safety-critical systems. The goal of this
guide is to provide resources for beginners to get started in their journey of
formal verification using the powerful tool Iris. The difference between this
guide and many others is that it provides (i) an in-depth explanation of
examples and tactics, (ii) an explicit discussion of separation logic, and
(iii) a thorough coverage of Iris and Coq. References to other guides and to
papers are included throughout to provide readers with resources through which
to continue their learning.
|
We consider the problem of identity testing of Markov chains based on a
single trajectory of observations under the distance notion introduced by
Daskalakis et al. [2018a] and further analyzed by Cherapanamjeri and Bartlett
[2019]. Both works made the restrictive assumption that the Markov chains under
consideration are symmetric. In this work we relax the symmetry assumption to
the more natural assumption of reversibility, still assuming that both the
reference and the unknown Markov chains share the same stationary distribution.
|
This article considers the optimal control of the SIR model with both
transmission and treatment uncertainty. It follows the model presented in Gatto
and Schellhorn (2021). We make four significant improvements on the latter
paper. First, we prove the existence of a solution to the model. Second, our
interpretation of the control is more realistic: while in Gatto and Schellhorn
the control $\alpha$ is the proportion of the population that takes a basic
dose of treatment, so that $\alpha >1$ occurs only if some patients take more
than a basic dose, in our paper, $\alpha$ is constrained between zero and one,
and represents thus the proportion of the population undergoing treatment.
Third, we provide a complete solution for the moderate infection regime (with
constant treatment). Fourth, we give a thorough interpretation of the control
in the moderate infection regime, while Gatto and Schellhorn focussed on the
interpretation of the low infection regime. Finally, we compare the efficiency
of our control in curbing the COVID-19 epidemic to that of other types of control.
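For intuition, a minimal simulation of controlled SIR dynamics with a constant treatment proportion $\alpha \in [0,1]$, as in the moderate infection regime discussed above; the deterministic ODEs and the assumption that treatment adds a recovery rate $\alpha\kappa$ are illustrative simplifications of the stochastic model.

```python
def sir_with_treatment(alpha, beta=0.3, gamma=0.1, kappa=0.15,
                       I0=0.01, T=200.0, dt=0.1):
    """Deterministic SIR in which a constant proportion alpha of the
    population undergoes treatment, modeled (illustrative assumption) as an
    extra recovery rate alpha * kappa.  Returns the epidemic peak."""
    S, I = 1.0 - I0, I0
    peak = I
    for _ in range(int(T / dt)):
        dS = -beta * S * I
        dI = beta * S * I - (gamma + alpha * kappa) * I
        S, I = S + dt * dS, I + dt * dI
        peak = max(peak, I)
    return peak

for alpha in (0.0, 0.5, 1.0):
    print(f"alpha={alpha:.1f}  peak infected fraction={sir_with_treatment(alpha):.3f}")
```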
|
Double sided auctions are widely used in financial markets to match demand
and supply. Prior works on double sided auctions have focused primarily on
single quantity trade requests. We extend various notions of double sided
auctions to incorporate multiple quantity trade requests and provide fully
formalized matching algorithms for double sided auctions with their correctness
proofs. We establish new uniqueness theorems that enable automatic detection of
violations in an exchange program by comparing its output with that of a
verified program. All proofs are formalized in the Coq proof assistant without
adding any axiom to the system. We extract verified OCaml and Haskell programs
that can be used by the exchanges and the regulators of the financial markets.
We demonstrate the practical applicability of our work by running the verified
program on real market data from an exchange to automatically check for
violations in the exchange algorithm.
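For intuition, a plain (unverified) sketch of the kind of multi-unit matching such algorithms compute: sort bids descending and asks ascending, then match quantities greedily while the bid price covers the ask price. The formalization above covers more matching notions and carries machine-checked correctness proofs; this is only an illustration.

```python
def match_double_auction(bids, asks):
    """Greedy matching for a double-sided auction with quantities.
    bids/asks: lists of (price, quantity).  Returns trades as
    (bid_price, ask_price, quantity) triples."""
    bq = sorted(([p, q] for p, q in bids), key=lambda b: -b[0])  # buyers first
    aq = sorted(([p, q] for p, q in asks), key=lambda a: a[0])   # sellers first
    trades, i, j = [], 0, 0
    while i < len(bq) and j < len(aq) and bq[i][0] >= aq[j][0]:
        q = min(bq[i][1], aq[j][1])              # tradable quantity
        trades.append((bq[i][0], aq[j][0], q))
        bq[i][1] -= q; aq[j][1] -= q
        if bq[i][1] == 0: i += 1
        if aq[j][1] == 0: j += 1
    return trades

print(match_double_auction(bids=[(10, 5), (8, 3)], asks=[(7, 4), (9, 6)]))
# [(10, 7, 4), (10, 9, 1)] -- the (8, 3) bid cannot meet the remaining ask at 9
```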
|
In this paper, we study the Combinatorial Pure Exploration problem with the
Bottleneck reward function (CPE-B) under the fixed-confidence (FC) and
fixed-budget (FB) settings. In CPE-B, given a set of base arms and a collection
of subsets of base arms (super arms) following a certain combinatorial
constraint, a learner sequentially plays a base arm and observes its random
reward, with the objective of finding the optimal super arm with the maximum
bottleneck value, defined as the minimum expected reward of the base arms
contained in the super arm. CPE-B captures a variety of practical scenarios
such as network routing in communication networks, and its \emph{unique
challenges} lie in how to utilize the bottleneck property to save samples and
achieve statistical optimality. None of the existing CPE studies (most of
them assume linear rewards) can be adapted to solve such challenges, and thus
we develop brand-new techniques to handle them. For the FC setting, we propose
novel algorithms with optimal sample complexity for a broad family of instances
and establish a matching lower bound to demonstrate the optimality (within a
logarithmic factor). For the FB setting, we design an algorithm which achieves
the state-of-the-art error probability guarantee and is the first to run
efficiently on fixed-budget path instances, compared to existing CPE
algorithms. Our experimental results on the top-$k$, path and matching
instances validate the empirical superiority of the proposed algorithms over
their baselines.
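To make the objective concrete, a tiny sketch of the bottleneck criterion with a naive uniform-sampling baseline (not the adaptive algorithms proposed above, which allocate samples far more efficiently):

```python
import numpy as np

def best_super_arm(mu, super_arms):
    """Super arm maximizing the bottleneck value: the minimum (estimated)
    mean reward among its base arms."""
    return max(super_arms, key=lambda S: min(mu[i] for i in S))

# Uniform-sampling baseline: sample every base arm equally, then pick the
# empirical bottleneck-best super arm.
rng = np.random.default_rng(0)
true_mu = np.array([0.9, 0.5, 0.8, 0.4, 0.7])
super_arms = [(0, 1), (2, 4), (0, 3)]          # e.g. paths in a network
samples = rng.normal(loc=true_mu, scale=1.0, size=(5000, len(true_mu)))
print(best_super_arm(samples.mean(axis=0), super_arms))   # (2, 4)
```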
|
Visible light communication (VLC) with LEDs is an important communication
method because LEDs can switch at high speed, and VLC transmits information
through rapid LED flashing. We use pulse wave modulation for LED-based VLC
because LEDs can easily be controlled by a microcontroller with digital output
pins. In pulse wave modulation, deciding between the high and low voltage using
the middle voltage of the amplified received signal is equivalent to deciding
using a threshold voltage without amplification. In this paper, we propose two
methods that adjust the threshold value: one by counting the number of signal
slots and one by measuring the signal level. The number of signal slots per
symbol is constant when Pulse Position Modulation (PPM) is used. If the number
of received signal slots per symbol time is less than the theoretical value,
the threshold value is higher than the optimal value; if it is more than the
theoretical value, the threshold value is lower. Hence, we can adjust the
threshold value using the number of received signal slots. In the second
proposed method, the average received signal level is not equal to the high
signal level because of the ratio between the numbers of high and low slots.
We can therefore calculate the threshold value from the average received signal
level and the slot ratio. We demonstrate the performance of both methods in
real experiments.
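A sketch of the two threshold-adjustment rules under stated assumptions: M-PPM with exactly one high slot per symbol, and a low level near zero for the second rule (step sizes and constants are illustrative):

```python
def adjust_by_slot_count(threshold, received_slots, symbols, step=0.01):
    """Method 1: in M-PPM exactly one slot per symbol is high, so `symbols`
    symbols should yield `symbols` high slots.  Counting fewer means the
    threshold is too high; counting more means it is too low."""
    if received_slots < symbols:
        return threshold - step     # too high: lower it
    if received_slots > symbols:
        return threshold + step     # too low: raise it
    return threshold

def threshold_from_average(avg_level, M):
    """Method 2: with one high slot out of M and a low level near zero
    (assumption), avg = V_high / M, so the threshold sits at V_high / 2."""
    return avg_level * M / 2.0

print(adjust_by_slot_count(0.6, received_slots=95, symbols=100))  # lowered
print(threshold_from_average(avg_level=0.25, M=4))                # 0.5
```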
|
Numerous works have been proposed to generate random graphs preserving the
same properties as real-life large scale networks. However, many real networks
are better represented by hypergraphs. Few models for generating random
hypergraphs exist, and no general model can both preserve a power-law
degree distribution and a high modularity indicating the presence of
communities. We present a dynamic preferential attachment hypergraph model
which features partition into communities. We prove that its degree
distribution follows a power-law and we give theoretical lower bounds for its
modularity. We compare its characteristics with a real-life co-authorship
network and show that our model achieves good performance. We believe that our
hypergraph model will be an interesting tool that may be used in many research
domains in order to better reflect real-life phenomena.
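A stripped-down sketch of preferential attachment for hypergraphs (without the community partition and dynamics of the full model): each new hyperedge contains one new node plus existing nodes chosen with probability proportional to their current degree.

```python
import random
from collections import Counter

def pa_hypergraph(n_edges, edge_size=3, seed=0):
    """Toy preferential-attachment hypergraph: each new hyperedge brings one
    new node and edge_size - 1 existing nodes sampled proportionally to
    their degree (the full model adds a community partition)."""
    rng = random.Random(seed)
    degree = Counter({0: 1, 1: 1, 2: 1})       # small seed hyperedge
    edges, nxt = [(0, 1, 2)], 3
    for _ in range(n_edges):
        nodes, weights = zip(*degree.items())
        chosen = set()
        while len(chosen) < edge_size - 1:
            chosen.add(rng.choices(nodes, weights=weights)[0])
        edge = tuple(chosen) + (nxt,)
        for v in edge:
            degree[v] += 1
        edges.append(edge)
        nxt += 1
    return edges, degree

edges, degree = pa_hypergraph(5000)
print(degree.most_common(5))   # a few hubs dominate: heavy-tailed degrees
```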
|
We introduce $\varepsilon$-approximate versions of the notion of Euclidean
vector bundle for $\varepsilon \geq 0$, which recover the classical notion of
Euclidean vector bundle when $\varepsilon = 0$. In particular, we study
\v{C}ech cochains with coefficients in the orthogonal group that satisfy an
approximate cocycle condition. We show that $\varepsilon$-approximate vector
bundles can be used to represent classical vector bundles when $\varepsilon >
0$ is sufficiently small. We also introduce distances between approximate
vector bundles and use them to prove that sufficiently similar approximate
vector bundles represent the same classical vector bundle. This gives a way of
specifying vector bundles over finite simplicial complexes using a finite
amount of data, and also allows for some tolerance to noise when working with
vector bundles in an applied setting. As an example, we prove a reconstruction
theorem for vector bundles from finite samples. We give algorithms for the
effective computation of low-dimensional characteristic classes of vector
bundles directly from discrete and approximate representations and illustrate
the usage of these algorithms with computational examples.
|
Many Gibbs measures with mean field interactions are known to be chaotic, in
the sense that any collection of $k$ particles in the $n$-particle system are
asymptotically independent, as $n\to\infty$ with $k$ fixed or perhaps $k=o(n)$.
This paper quantifies this notion for a class of continuous Gibbs measures on
Euclidean space with pairwise interactions, with main examples being systems
governed by convex interactions and uniformly convex confinement potentials.
The distance between the marginal law of $k$ particles and its limiting product
measure is shown to be $O((k/n)^{c \wedge 2})$, with $c$ proportional to the
squared temperature. In the high temperature case, this improves upon prior
results based on subadditivity of entropy, which yield $O(k/n)$ at best. The
bound $O((k/n)^2)$ cannot be improved, as a Gaussian example demonstrates. The
results are non-asymptotic, and distance is quantified via relative Fisher
information, relative entropy, or the squared quadratic Wasserstein metric. The
method relies on an a priori functional inequality for the limiting measure,
used to derive an estimate for the $k$-particle distance in terms of the
$(k+1)$-particle distance.
|
We have designed and fabricated a microfluidic-based platform for sensing
mechanical forces generated by cardiac microtissues in a highly-controlled
microenvironment. Our fabrication approach combines Direct Laser Writing (DLW)
lithography with soft lithography. At the center of our platform is a
cylindrical volume, divided into two chambers by a cylindrical
polydimethylsiloxane (PDMS) shell. Cells are seeded into the inner chamber from
a top opening, and the microtissue assembles onto tailor-made attachment sites
on the inner walls of the cylindrical shell. The outer chamber is electrically
and fluidically isolated from the inner one by the cylindrical shell and is
designed for actuation and sensing purposes. Externally applied pressure waves
to the outer chamber deform parts of the cylindrical shell and thus allow us to
exert time-dependent forces on the microtissue. Oscillatory forces generated by
the microtissue similarly deform the cylindrical shell and change the volume of
the outer chamber, resulting in measurable electrical conductance changes. We
have used this platform to study the response of cardiac microtissues derived
from human induced pluripotent stem cells (hiPSC) under prescribed mechanical
loading and pacing.
|
In this work, we introduce a new preprocessing step applicable to UAV bird's
eye view imagery, which we call Adaptive Resizing. It is constructed to adjust
the vast variances in objects' scales, which are naturally inherent to UAV data
sets. Furthermore, it improves inference speed by four to five times on
average. We test this extensively on UAVDT, VisDrone, and on a new data set we
captured ourselves. On UAVDT, we achieve more than 100 % relative improvement
in AP50. Moreover, we show how this method can be applied to a general UAV
object detection task. Additionally, we successfully test our method on a
domain transfer task where we train on some interval of altitudes and test on a
different one. Code will be made available at our website.
|