We present an exploratory case study describing the design and realisation of
a ``pure mixed reality'' application in a museum setting, where we investigate
the potential of using Microsoft's HoloLens for object-centred museum
mediation. Our prototype supports non-expert visitors observing a sculpture by
offering interpretation that is linked to visual properties of the museum
object. The design and development of our research prototype is based on a
two-stage visitor observation study and a formative study we conducted prior to
the design of the application. We present a summary of our findings from these
studies and explain how they have influenced our user-centred content creation
and the interaction design of our prototype. We are specifically interested in
investigating to what extent different constructs of initiative influence the
learning and user experience. Thus, we detail three modes of activity that we
realised in our prototype. Our case study is informed by research in the area
of human-computer interaction, the humanities and museum practice. Accordingly,
we discuss core concepts, such as gaze-based interaction, object-centred
learning, presence, and modes of activity and guidance with a transdisciplinary
perspective.
|
Increases in computational power over the past decades have greatly enhanced
the ability to simulate chemical reactions and understand ever more complex
transformations. Tensor contractions are the fundamental computational building
block of these simulations. These simulations have often been tied to one
platform and restricted in generality by the interface provided to the user.
The expanding prevalence of accelerators and researcher demands necessitate a
more general approach, one that is neither tied to specific hardware nor
requires contorting algorithms to fit specific hardware platforms. In this paper we
present COMET, a domain-specific programming language and compiler
infrastructure for tensor contractions targeting heterogeneous accelerators. We
present a system of progressive lowering through multiple layers of abstraction
and optimization that achieves up to 1.98X speedup for 30 tensor contractions
commonly used in computational chemistry and beyond.
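The kind of operation COMET compiles can be illustrated with a plain-Python sketch (the notation below is only illustrative of a chemistry-style contraction, not COMET's actual surface syntax):

```python
from itertools import product

def contract(A, B):
    """C[i,j] = sum_{k,l} A[i][k][l] * B[l][k][j]: a chemistry-style tensor
    contraction of the kind a tensor-contraction compiler takes as input and
    lowers through successive abstraction layers."""
    I, K, L = len(A), len(A[0]), len(A[0][0])
    J = len(B[0][0])
    C = [[0.0] * J for _ in range(I)]
    for i, j, k, l in product(range(I), range(J), range(K), range(L)):
        C[i][j] += A[i][k][l] * B[l][k][j]
    return C
```

A compiler infrastructure replaces this naive quadruple loop with hardware-specific loop orderings, tilings, and accelerator kernels while preserving exactly this semantics.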
|
The next generation of galaxy surveys will allow us to test some fundamental
aspects of the standard cosmological model, including the assumption of a
minimal coupling between the components of the dark sector. In this paper, we
present the Javalambre Physics of the Accelerated Universe Astrophysical Survey
(J-PAS) forecasts on a class of unified models where cold dark matter interacts
with a vacuum energy, considering future observations of baryon acoustic
oscillations, redshift-space distortions, and the matter power spectrum. After
providing a general framework to study the background and linear perturbations,
we focus on a concrete interacting model without momentum exchange by taking
into account the contribution of baryons. We compare the J-PAS results with
those expected for the DESI and Euclid surveys and show that J-PAS is
competitive with them, especially at low redshifts. Indeed, the predicted errors for the
interaction parameter, which measures the departure from a $\Lambda$CDM model,
can be comparable to the actual errors derived from the current data of cosmic
microwave background temperature anisotropies.
|
The scaling of different features of stream-wise normal stress profiles
$\langle uu\rangle^+(y^+)$ in turbulent wall-bounded flows, in particular in
truly parallel flows, such as channel and pipe flows, is the subject of a long
running debate. Particular points of contention are the scaling of the "inner"
and "outer" peaks of $\langle uu\rangle^+$ at $y^+\approxeq 15$ and $y^+
=\mathcal{O}(10^3)$, respectively, their infinite Reynolds number limit, and
the rate of logarithmic decay in the outer part of the flow. Inspired by the
landmark paper of Chen and Sreenivasan (2021), two terms of the inner
asymptotic expansion of $\langle uu\rangle^+$ in the small parameter
$Re_\tau^{-1/4}$ are extracted for the first time from a set of direct
numerical simulations (DNS) of channel flow. This inner expansion is completed
by a matching outer expansion, which not only fits the same set of channel DNS
within 1.5\% of the peak stress, but also provides a good match of laboratory
data in pipes and the near-wall part of boundary layers, up to the highest
$Re_\tau$'s of order $10^5$. The salient features of the new composite
expansion are first, an inner $\langle uu\rangle^+$ peak, which saturates at
11.3 and decreases as $Re_\tau^{-1/4}$, followed by a short "wall log law" with
a slope that becomes positive for $Re_\tau \gtrapprox 20\,000$, leading up to an
outer peak, and an outer logarithmic overlap with a negative slope continuously
going to zero for $Re_\tau \to\infty$.
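Schematically, the inner-peak behaviour stated above corresponds to an expansion of the form (only the limiting value 11.3 and the $Re_\tau^{-1/4}$ rate are taken from the text; the positive coefficient $c$ is left unspecified here):

```latex
\langle uu\rangle^+_{\mathrm{peak}} \simeq 11.3 - c\,Re_\tau^{-1/4},
\qquad c > 0, \quad Re_\tau \to \infty ,
```

so that the finite-Reynolds-number deficit below the saturation value 11.3 vanishes like $Re_\tau^{-1/4}$.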
|
In recent years, two-dimensional van der Waals materials have emerged as an
important platform for the observation of long-range ferromagnetic order in
atomically thin layers. Although heterostructures of such materials can be
conceived to harness and couple a wide range of magneto-optical and
magneto-electrical properties, technologically relevant applications require
Curie temperatures at or above room-temperature and the ability to grow films
over large areas. Here we demonstrate the large-area growth of single-crystal
ultrathin films of stoichiometric Fe5GeTe2 on an insulating substrate using
molecular beam epitaxy. Magnetic measurements show the persistence of soft
ferromagnetism up to room temperature, with a Curie temperature of 293 K, and a
weak out-of-plane magnetocrystalline anisotropy. Surface, chemical, and
structural characterizations confirm the layer-by-layer growth, 5:1:2 Fe:Ge:Te
stoichiometric elementary composition, and single crystalline character of the
films.
|
\textbf{Background} Hydrogels are crosslinked polymer networks that can
absorb and retain a large fraction of liquid. Near a critical sliding velocity,
hydrogels pressed against smooth surfaces exhibit time-dependent frictional
behavior occurring over multiple timescales, yet the origin of these dynamics
is unresolved. \textbf{Objective} Here, we characterize this time-dependent
regime and show that it is consistent with two distinct molecular processes:
sliding-induced relaxation and quiescent recovery. \textbf{Methods} Our
experiments use a custom pin-on-disk tribometer to examine poly(acrylic acid)
hydrogels on smooth poly(methyl methacrylate) surfaces over a variety of
sliding conditions, from minutes to hours. \textbf{Results} We show that at a
fixed sliding velocity, the friction coefficient decays exponentially and
reaches a steady-state value. The time constant associated with this decay
varies exponentially with the sliding velocity, and is sensitive to any
precedent frictional shearing of the interface. This process is reversible;
upon cessation of sliding, the friction coefficient recovers to its original
state. We also show that the initial direction of shear can be imprinted as an
observable "memory", and is visible after 24 hrs of repeated frictional
shearing. \textbf{Conclusions} We attribute this behavior to nanoscale
extension and relaxation dynamics of the near-surface polymer network, leading
to a model of frictional relaxation and recovery with two parallel timescales.
|
It is shown that the equalization of temperatures between our and mirror
sectors occurs during one Hubble time due to microscopic black hole production
and evaporation in particle collisions if the temperature of the Universe is
near the multidimensional Planck mass. This effect excludes the multidimensional
Planck masses smaller than the reheating temperature of the Universe
($\sim10^{13}$ GeV) in the mirror matter models, because the primordial
nucleosynthesis theory requires that the temperature of the mirror world should
be lower than ours. In particular, the production of microscopic black holes at the
LHC is impossible if the dark matter of our Universe is represented by baryons
of mirror matter. This rules out some otherwise possible combinations of options in
particle physics and cosmology. Multidimensional models with flat additional
dimensions are already strongly constrained in maximum temperature due to the
effect of Kaluza-Klein mode (KK-mode) overproduction. In these models, the
reheating temperature should be significantly less than the multidimensional
Planck mass, so our restrictions in this case are not paramount. The new
constraints play a role in multidimensional models in which the spectrum of
KK-modes does not lead to their overproduction in the early Universe, for
example, in theories with hyperbolic additional space.
|
Multilingual models have demonstrated impressive cross-lingual transfer
performance. However, test sets like XNLI are monolingual at the example level.
In multilingual communities, it is common for polyglots to code-mix when
conversing with each other. Inspired by this phenomenon, we present two strong
black-box adversarial attacks (one word-level, one phrase-level) for
multilingual models that push their ability to handle code-mixed sentences to
the limit. The former uses bilingual dictionaries to propose perturbations and
translations of the clean example for sense disambiguation. The latter directly
aligns the clean example with its translations before extracting phrases as
perturbations. Our phrase-level attack has a success rate of 89.75% against
XLM-R-large, bringing its average accuracy on XNLI down from 79.85 to 8.18.
Finally, we propose an efficient adversarial training scheme that trains in the
same number of steps as the original model and show that it improves model
accuracy.
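The word-level attack idea can be sketched as follows (a minimal illustration only; the function name, replacement policy, and dictionary format are hypothetical and omit the paper's sense-disambiguation step):

```python
import random

def code_mix(sentence, bilingual_dict, p=1.0, seed=0):
    """Swap words for bilingual-dictionary translations to produce a
    code-mixed adversarial candidate. p is the per-word replacement
    probability; words absent from the dictionary are kept as-is."""
    rng = random.Random(seed)
    out = []
    for word in sentence.split():
        translations = bilingual_dict.get(word.lower())
        if translations and rng.random() < p:
            out.append(rng.choice(translations))
        else:
            out.append(word)
    return " ".join(out)
```

In the actual attack, candidates like these would be scored against the victim model and the perturbation that flips the prediction is kept.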
|
We present a statistical study of the largest bibliographic compilation of
stellar and orbital parameters of W UMa stars derived by light curve synthesis
with Roche models. The compilation includes nearly 700 individually
investigated objects from over 450 distinct publications. Almost 70% of this
sample consists of stars observed in the last decade that have not been
considered in previous statistical studies. We estimate the ages of the
cataloged stars, model the distributions of their periods, mass ratios,
temperatures and other quantities, and compare them with the data from CRTS,
LAMOST and Gaia archives. As only a small fraction of the sample has radial
velocity curves, we examine the reliability of the photometric mass ratios in
totally and partially eclipsing systems and find that totally eclipsing W UMa
stars with photometric mass ratios have the same parameter distributions as
those with spectroscopic mass ratios. Most of the stars with reliable
parameters have mass ratios below 0.5 and orbital periods shorter than 0.5
days. Stars with longer periods and temperatures above 7000 K stand out as
outliers and should not be labeled as W UMa binaries.
The collected data is available as an online database at
https://wumacat.aob.rs.
|
Among the models of disordered conduction and localization, models with $N$
orbitals per site are attractive both for their mathematical tractability and
for their physical realization in coupled disordered grains. However Wegner
proved that there is no Anderson transition and no localized phase in the $N
\rightarrow \infty$ limit, if the hopping constant $K$ is kept fixed. Here we
show that the localized phase is preserved in a different limit where $N$ is
taken to infinity and the hopping $K$ is simultaneously adjusted to keep $N \,
K$ constant. We support this conclusion with two arguments. The first is
numerical computations of the localization length showing that in the $N
\rightarrow \infty$ limit the site-diagonal-disorder model possesses a
localized phase if $N\,K$ is kept constant, but does not possess that phase if
$K$ is fixed. The second argument is a detailed analysis of the energy and
length scales in a functional integral representation of the gauge invariant
model. The analysis shows that in the $K$ fixed limit the functional integral's
spins do not exhibit long distance fluctuations, i.e. such fluctuations are
massive and therefore decay exponentially, which signals conduction. In
contrast the $N\,K$ fixed limit preserves the massless character of certain
spin fluctuations, allowing them to fluctuate over long distance scales and
cause Anderson localization.
|
Mutation and drift play opposite roles in genetics. While mutation creates
diversity, drift can cause gene variants to disappear, especially when they are
rare. In the absence of natural selection and migration, the balance between
drift and mutation in a well-mixed population defines its diversity. The
Moran model captures the effects of these two evolutionary forces and has a
counterpart in social dynamics, known as the Voter model with external opinion
influencers. Two extreme outcomes of the Voter model dynamics are consensus and
coexistence of opinions, which correspond to low and high diversity in the
Moran model. Here we use a Shannon information-theoretic approach to
characterize the smooth transition between the states of consensus and
coexistence of opinions in the Voter model. Mapping the Moran model onto the Voter
model, we extend the results to the mutation-drift balance and characterize the
transition between low and high diversity in finite populations. Describing the
population as a network of connected individuals we show that the transition
between the two regimes depends on the network topology of the population and
on the possible asymmetries in the mutation rates.
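The information-theoretic order parameter for a two-opinion population can be written down directly (a minimal illustration of the Shannon-entropy characterization, not the paper's full finite-population analysis):

```python
import math

def opinion_entropy(p):
    """Shannon entropy (in bits) of a two-opinion population in which a
    fraction p holds opinion A: 0 at consensus (p = 0 or 1), 1 at 50/50
    coexistence. Low and high entropy correspond to low and high
    diversity in the Moran-model mapping."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))
```

Tracking this entropy as mutation (external influence) and drift rates vary traces out the smooth consensus-coexistence transition.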
|
Granulation of quantum matter -- the formation of persistent small-scale
patterns -- is realized in the images of quasi-one-dimensional Bose-Einstein
condensates perturbed by a periodically modulated interaction. Our present
analysis of a mean-field approximation suggests that granulation is caused by
the gradual transformation of phase undulations into density undulations. This
is achieved by a suitably large modulation frequency, while for low enough
frequencies the system exhibits a quasi-adiabatic regime. We show that the
persistence of granulation is a result of the irregular evolution of the phase
of the wavefunction representing an irreversible process. Our model predictions
agree with numerical solutions of the Schr\"odinger equation and experimental
observations. The numerical computations reveal the emergent many-body
correlations behind these phenomena via the multi-configurational
time-dependent Hartree theory for bosons (MCTDHB).
|
Production high-performance computing systems continue to grow in complexity
and size. As applications struggle to make use of increasingly heterogeneous
compute nodes, maintaining high efficiency (performance per watt) for the whole
platform becomes a challenge. Alongside the growing complexity of scientific
workloads, this extreme heterogeneity is also an opportunity: as applications
dynamically undergo variations in workload, due to phases or data/compute
movement between devices, one can dynamically adjust power across compute
elements to save energy without impacting performance. With an aim toward an
autonomous and dynamic power management strategy for current and future HPC
architectures, this paper explores the use of control theory for the design of
a dynamic power regulation method. Structured as a feedback loop, our
approach, which is novel in computing resource management, consists of
periodically monitoring application progress and choosing at runtime a suitable
power cap for processors. Thanks to a preliminary offline identification
process, we derive a model of the dynamics of the system and a
proportional-integral (PI) controller. We evaluate our approach on top of an
existing resource management framework, the Argo Node Resource Manager,
deployed on several clusters of Grid'5000, using a standard memory-bound HPC
benchmark.
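The feedback loop described above can be sketched as a discrete-time PI controller (a hedged sketch under assumed semantics; the class and method names are hypothetical, not the Argo Node Resource Manager API):

```python
class PIPowerCapController:
    """Turn the error between a progress setpoint and measured application
    progress into a processor power cap, clamped to the hardware range.
    Gains kp/ki would come from an offline identification of the system
    dynamics, as in the approach described in the text."""

    def __init__(self, kp, ki, setpoint, cap_min, cap_max):
        self.kp, self.ki = kp, ki
        self.setpoint = setpoint
        self.cap_min, self.cap_max = cap_min, cap_max
        self.integral = 0.0

    def step(self, measured_progress, dt):
        error = self.setpoint - measured_progress   # progress shortfall
        self.integral += error * dt                 # integral action removes steady-state error
        cap = self.kp * error + self.ki * self.integral
        return min(self.cap_max, max(self.cap_min, cap))
```

Each monitoring period, the runtime would call `step` with the latest progress measurement and apply the returned cap (e.g., via RAPL-style power limiting).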
|
We present a data-driven optimization framework for redesigning police patrol
zones in an urban environment. The objectives are to rebalance police workload
among geographical areas and to reduce response time to emergency calls. We
develop a stochastic model for police emergency response by integrating
multiple data sources, including police incident reports, demographic surveys,
and traffic data. Using this stochastic model, we optimize zone redesign plans
using mixed-integer linear programming. Our proposed design was implemented by
the Atlanta Police Department in March 2019. By analyzing data before and after
the zone redesign, we show that the new design has reduced the response time to
high priority 911 calls by 5.8\% and the imbalance of police workload among
different zones by 43\%.
|
We present a novel class of projected methods to perform statistical
analysis on a data set of probability distributions on the real line, with the
2-Wasserstein metric. We focus in particular on Principal Component Analysis
(PCA) and regression. To define these models, we exploit a representation of
the Wasserstein space closely related to its weak Riemannian structure, by
mapping the data to a suitable linear space and using a metric projection
operator to constrain the results in the Wasserstein space. By carefully
choosing the tangent point, we are able to derive fast empirical methods,
exploiting a constrained B-spline approximation. As a byproduct of our
approach, we are also able to derive faster routines for previous work on PCA
for distributions. By means of simulation studies, we compare our approaches to
previously proposed methods, showing that our projected PCA has similar
performance for a fraction of the computational cost and that the projected
regression is extremely flexible even under misspecification. Several
theoretical properties of the models are investigated and asymptotic
consistency is proven. Two real world applications to Covid-19 mortality in the
US and wind speed forecasting are discussed.
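The linearization underlying such methods can be sketched in a few lines (an illustration of the quantile-function embedding only, without the metric-projection and B-spline machinery the abstract describes): for 1-D distributions, the map to quantile functions is an isometry from the 2-Wasserstein space into L2, so ordinary PCA can be run on the embedded vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.01, 0.99, 99)       # common evaluation grid in (0, 1)

# 30 Gaussian samples differing only by location: a one-parameter family.
data = np.stack([np.quantile(rng.normal(mu, 1.0, 500), grid)
                 for mu in np.linspace(-2.0, 2.0, 30)])

# Standard PCA on the quantile-function embeddings.
centered = data - data.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
explained = singular_values[0] ** 2 / np.sum(singular_values ** 2)
# For a pure location family, the first component captures nearly all variance.
```

The projection step of the actual method then maps PCA reconstructions back onto the cone of valid (non-decreasing) quantile functions.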
|
Task-agnostic knowledge distillation, a teacher-student framework, has
proven effective for BERT compression. Although achieving promising results on
NLP tasks, it requires enormous computational resources. In this paper, we
propose Extract Then Distill (ETD), a generic and flexible strategy to reuse
the teacher's parameters for efficient and effective task-agnostic
distillation, which can be applied to students of any size. Specifically, we
introduce two variants of ETD, ETD-Rand and ETD-Impt, which extract the
teacher's parameters in a random manner and by following an importance metric
respectively. In this way, the student has already acquired some knowledge at
the beginning of the distillation process, which makes the distillation process
converge faster. We demonstrate the effectiveness of ETD on the GLUE benchmark
and SQuAD. The experimental results show that: (1) compared with the baseline
without an ETD strategy, ETD can save 70\% of computation cost. Moreover, it
achieves better results than the baseline when using the same computing
resource. (2) ETD is generic and has been proven effective for different
distillation methods (e.g., TinyBERT and MiniLM) and students of different
sizes. The source code will be publicly available upon publication.
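The extraction step can be sketched on a single weight matrix (a toy illustration of ETD-Rand vs. ETD-Impt as described above; the row-norm importance score is a simple stand-in, not necessarily the paper's metric):

```python
import numpy as np

def extract_rows(W_teacher, student_width, mode="impt", seed=0):
    """Pick a subset of teacher neurons (rows of a weight matrix) to
    initialize a narrower student layer, either at random ("rand") or by
    keeping the rows with the largest importance scores ("impt")."""
    rng = np.random.default_rng(seed)
    if mode == "rand":
        idx = rng.choice(W_teacher.shape[0], size=student_width, replace=False)
    else:  # "impt"
        scores = np.abs(W_teacher).sum(axis=1)      # proxy importance per neuron
        idx = np.argsort(scores)[::-1][:student_width]
    return W_teacher[np.sort(idx)]
```

Initializing the student from such extracted parameters is what lets the distillation process start from partial knowledge and converge faster.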
|
We present the results regarding the analysis of the fast X-ray/infrared (IR)
variability of the black-hole transient MAXI J1535$-$571. The data studied in
this work consist of two strictly simultaneous observations performed with
XMM-Newton (X-rays: 0.7$-$10 keV), VLT/HAWK-I ($K_{\rm s}$ band, 2.2 $\mu$m)
and VLT/VISIR ($M$ and $PAH2\_2$ bands, 4.85 and 11.88 $\mu$m respectively).
The cross-correlation function between the X-ray and near-IR light curves shows
a strong asymmetric anti-correlation dip at positive lags. We detect a near-IR
QPO (2.5 $\sigma$) at $2.07\pm0.09$ Hz simultaneously with an X-ray QPO at
approximately the same frequency ($f_0=2.25\pm0.05$ Hz). From the cross-spectral
analysis a lag consistent with zero was measured between the two oscillations.
We also measure a significant correlation between the average near-IR and
mid-IR fluxes during the second night, but find no correlation on short
timescales. We discuss these results in terms of the two main scenarios for
fast IR variability (hot inflow and jet powered by internal shocks). In both
cases, our preliminary modelling suggests the presence of a misalignment
between disk and jet.
|
The paper contains an application of the van Kampen theorem for groupoids to the
computation of the homotopy types of a certain class of non-compact foliated surfaces
obtained by gluing at most countably many strips $\mathbb{R}\times(0,1)$ with
boundary intervals in $\mathbb{R}\times\{\pm1\}$ along some of those intervals.
|
We present randUBV, a randomized algorithm for matrix sketching based on the
block Lanczos bidiagonalization process. Given a matrix $\bf{A}$, it produces a
low-rank approximation of the form ${\bf UBV}^T$, where $\bf{U}$ and $\bf{V}$
have orthonormal columns in exact arithmetic and $\bf{B}$ is block bidiagonal.
In finite precision, the columns of both ${\bf U}$ and ${\bf V}$ will be close
to orthonormal. Our algorithm is closely related to the randQB algorithms of
Yu, Gu, and Li (2018) in that the entries of $\bf{B}$ are incrementally
generated and the Frobenius norm approximation error may be efficiently
estimated. Our algorithm is therefore suitable for the fixed-accuracy problem,
and so is designed to terminate as soon as a user-specified error tolerance is
reached. Numerical experiments suggest that the block Lanczos method is
generally competitive with or superior to algorithms that use power iteration,
even when $\bf{A}$ has significant clusters of singular values.
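The fixed-accuracy idea referenced above (incremental factor growth with an efficiently updated Frobenius error estimate) can be sketched in the style of the randQB algorithms of Yu, Gu, and Li that the abstract compares against; this is a hedged illustration of that scheme, not randUBV itself:

```python
import numpy as np

def rand_qb_fixed_accuracy(A, block, tol, rng, max_blocks=50):
    """Grow an approximation A ~ Q @ B block by block, stopping once the
    estimated Frobenius error ||A||_F^2 - ||B||_F^2 falls below tol^2
    (valid because Q has orthonormal columns and B = Q^T A)."""
    m, n = A.shape
    Q = np.zeros((m, 0))
    B = np.zeros((0, n))
    err2 = np.linalg.norm(A, "fro") ** 2
    for _ in range(max_blocks):
        Omega = rng.standard_normal((n, block))
        Y = A @ Omega - Q @ (B @ Omega)        # sample the current residual
        Qi, _ = np.linalg.qr(Y)
        Qi -= Q @ (Q.T @ Qi)                   # reorthogonalize against earlier blocks
        Qi, _ = np.linalg.qr(Qi)
        Bi = Qi.T @ A
        Q = np.hstack([Q, Qi])
        B = np.vstack([B, Bi])
        err2 -= np.linalg.norm(Bi, "fro") ** 2  # cheap error-estimate update
        if err2 < tol ** 2:
            break
    return Q, B
```

randUBV replaces the QB form with a block-bidiagonal ${\bf UBV}^T$ built by block Lanczos, but terminates on the same kind of incrementally estimated error.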
|
We study the transport property of Gaussian measures on Sobolev spaces of
periodic functions under the dynamics of the one-dimensional cubic fractional
nonlinear Schr\"{o}dinger equation. For the case of second-order dispersion or
greater, we establish an optimal regularity result for the quasi-invariance of
these Gaussian measures, following the approach by Debussche and Tsutsumi [15].
Moreover, we obtain an explicit formula for the Radon-Nikodym derivative and,
as a corollary, a formula for the two-point function arising in wave turbulence
theory. We also obtain improved regularity results in the weakly dispersive
case, extending those in [20]. Our proof combines the approach introduced by
Planchon, Tzvetkov and Visciglia [47] and that of Debussche and Tsutsumi [15].
|
We report on preliminary results of a statistical study of student
performance in more than a decade of calculus-based introductory physics
courses. Treating average homework and test grades as proxies for student
effort and comprehension respectively, we plot comprehension versus effort in
an academic version of the astronomical Hertzsprung-Russell diagram (which
plots stellar luminosity versus temperature). We study the evolution of this
diagram with time, finding that the "academic main sequence" has begun to break
down in recent years as student achievement on tests has become decoupled from
homework grades. We present evidence that this breakdown is likely related to
the emergence of easily accessible online solutions to most textbook problems,
and discuss possible responses and strategies for maintaining and enhancing
student learning in the online era.
|
A method for creating a vision-and-language (V&L) model is to extend a
language model through structural modifications and V&L pre-training. Such an
extension aims to make a V&L model inherit the capability of natural language
understanding (NLU) from the original language model. To see how well this is
achieved, we propose to evaluate V&L models using an NLU benchmark (GLUE). We
compare five V&L models, including single-stream and dual-stream models,
trained with the same pre-training. Dual-stream models, with their higher
modality independence achieved by approximately doubling the number of
parameters, are expected to preserve the NLU capability better. Our main
finding is that the dual-stream scores are not much different from the
single-stream scores, contrary to expectation. Further analysis shows that
pre-training causes the performance drop in NLU tasks with few exceptions.
These results suggest that adopting a single-stream structure and devising the
pre-training could be an effective method for improving the maintenance of
language knowledge in V&L extensions.
|
We address a state-of-the-art reinforcement learning (RL) control approach to
automatically configure robotic prosthesis impedance parameters to enable
end-to-end, continuous locomotion intended for transfemoral amputee subjects.
Specifically, our actor-critic based RL provides tracking control of a robotic
knee prosthesis to mimic the intact knee profile. This is a significant advance
from our previous RL-based automatic tuning of prosthesis control parameters,
which has centered on regulation control with a designer-prescribed robotic
knee profile as the target. In addition to presenting the complete tracking
control algorithm based on direct heuristic dynamic programming (dHDP), we
provide an analytical framework for the tracking controller with constrained
inputs. We show that our proposed tracking control possesses several important
properties, such as weight convergence of the learning networks, Bellman
(sub)optimality of the cost-to-go value function and control input, and
practical stability of the human-robot system under input constraint. We
further provide a systematic simulation of the proposed tracking control using
a realistic human-robot system simulator, OpenSim, to emulate how the dHDP
enables level ground walking, walking on different terrains and at different
paces. These results show that our proposed dHDP based tracking control is not
only theoretically suitable, but also practically useful.
|
The high energy Operator Product Expansion for the product of two
electromagnetic currents is extended to the sub-eikonal level in a rigorous
way. I calculate the impact factors for polarized and unpolarized structure
functions, define new distribution functions, and derive the evolution
equations for unpolarized and polarized structure functions in the flavor
singlet and non-singlet case.
|
We investigate the diurnal modulation of the event rate for dark matter
scattering on solid targets arising from the directionally dependent defect
creation threshold energy. In particular, we quantify how this effect would
help in separating dark matter signal from the neutrino background. We perform
a benchmark analysis for a germanium detector and compute how the reach of the
experiment is affected by including the timing information of the scattering
events. We observe that for light dark matter just above the detection
threshold the magnitude of the annual modulation is enhanced. In this mass
range using either the annual or diurnal modulation information provides a
similar gain in the reach of the experiment, while the additional reach from
using both effects remains modest. Furthermore, we demonstrate that if the
background contains a feature exhibiting an annual modulation similar to the
one observed by the DAMA experiment, the diurnal modulation provides an
additional handle to separate dark matter signal from the background.
|
Quasielastic scattering excitation function at large backward angle has been
measured for the weakly bound system, $^{7}$Li+$^{159}$Tb at energies around
the Coulomb barrier. The corresponding quasielastic barrier distribution has
been derived from the excitation function, both including and excluding the
$\alpha$-particles produced in the reaction. The centroid of the barrier
distribution obtained after inclusion of $\alpha$-particles was found to be
shifted higher in energy, compared to the distribution excluding the
$\alpha$-particles. The quasielastic data, excluding the $\alpha$-particles, have been
analyzed in the framework of continuum discretized coupled channel
calculations. The quasielastic barrier distribution for $^{7}$Li+$^{159}$Tb,
has also been compared with the fusion barrier distribution for the system.
|
Rectification of interacting Brownian particles is investigated in a
two-dimensional asymmetric channel in the presence of an external periodic
driving force. The periodic driving force can break the thermodynamic
equilibrium and induces rectification of particles (or finite average
velocity). The spatial variation in the shape of the channel leads to entropic
barriers, which indeed control the rectification of particles. We find that by
simply tuning the driving frequency, driving amplitude, and shape of the
asymmetric channel, the average velocity can be reversed. Moreover, a
short-range interaction force between the particles further enhances the
rectification considerably. This interaction force is modeled as the
lubrication interaction. Interestingly, we observe that there exists a
characteristic critical frequency $\Omega_c$ below which the rectification of
particles is greatly enhanced in the positive direction with increasing
interaction strength, whereas for frequencies above this critical value it is
greatly enhanced in the negative direction with increasing interaction
strength. Further, there exists an optimal value of the asymmetry parameter of
the channel for which the rectification of interacting particles is maximum.
These findings are useful in sorting out the particles and understanding the
diffusive behavior of small particles or molecules in microfluidic channels,
membrane pores, etc.
|
Modern deep neural networks (DNNs) achieve highly accurate results for many
recognition tasks on overhead (e.g., satellite) imagery. One challenge however
is visual domain shifts (i.e., statistical changes), which can cause the
accuracy of DNNs to degrade substantially and unpredictably when tested on new
sets of imagery. In this work we model domain shifts caused by variations in
imaging hardware, lighting, and other conditions as non-linear pixel-wise
transformations; and we show that modern DNNs can become largely invariant to
these types of transformations, if provided with appropriate training data
augmentation. In general, however, we do not know the transformation between
two sets of imagery. To overcome this problem, we propose a simple real-time
unsupervised training augmentation technique, termed randomized histogram
matching (RHM). We conduct experiments with two large public benchmark datasets
for building segmentation and find that RHM consistently yields comparable
performance to recent state-of-the-art unsupervised domain adaptation
approaches despite being simpler and faster. RHM also offers substantially
better performance than other comparably simple approaches that are widely used
in overhead imagery.
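The core of the augmentation can be sketched as follows (an illustrative reading of the RHM idea, not necessarily the authors' exact recipe): remap each training image's intensities onto a randomly drawn target histogram, so the network sees many plausible monotone pixel-wise shifts during training.

```python
import numpy as np

def randomized_histogram_match(image, rng):
    """Remap pixel intensities so they follow a randomly drawn target
    histogram while preserving the rank order of pixels: the k-th darkest
    pixel receives the k-th smallest target value."""
    target = np.sort(rng.uniform(0.0, 255.0, size=image.size))
    order = np.argsort(image, axis=None)   # pixel ranks
    out = np.empty(image.size)
    out[order] = target
    return out.reshape(image.shape)
```

Applied on the fly with a fresh random target per image, this is cheap enough for real-time unsupervised training augmentation.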
|
Feature-based dynamic pricing is an increasingly popular model of setting
prices for highly differentiated products with applications in digital
marketing, online sales, real estate and so on. The problem was formally
studied as an online learning problem [Javanmard & Nazerzadeh, 2019] where a
seller needs to propose prices on the fly for a sequence of $T$ products based
on their features $x$ while having a small regret relative to the best --
"omniscient" -- pricing strategy she could have come up with in hindsight. We
revisit this problem and provide two algorithms (EMLP and ONSP) for stochastic
and adversarial feature settings, respectively, and prove the optimal
$O(d\log{T})$ regret bounds for both. In comparison, the best existing results
are $O\left(\min\left\{\frac{1}{\lambda_{\min}^2}\log{T},
\sqrt{T}\right\}\right)$ and $O(T^{2/3})$ respectively, with $\lambda_{\min}$
being the smallest eigenvalue of $\mathbb{E}[xx^T]$ that could be arbitrarily
close to $0$. We also prove an $\Omega(\sqrt{T})$ information-theoretic lower
bound for a slightly more general setting, which demonstrates that
"knowing-the-demand-curve" leads to an exponential improvement in feature-based
dynamic pricing.
|
The interplay of different electronic phases underlies the physics of
unconventional superconductors. One of the most intriguing examples is a
high-Tc superconductor FeTe1-xSex: it undergoes both a topological transition,
linked to the electronic band inversion, and an electronic nematic phase
transition, associated with rotation symmetry breaking, around the same
critical composition xc where the superconducting Tc peaks. In this regime, nematic
fluctuations and symmetry-breaking strain could have an enormous impact, but
this is yet to be fully explored. Using spectroscopic-imaging scanning
tunneling microscopy, we study the electronic nematic transition in FeTe1-xSex
as a function of composition. Near xc, we reveal the emergence of electronic
nematicity in nanoscale regions. Interestingly, we discover that
superconductivity is drastically suppressed in areas where static nematic order
is the strongest. By analyzing atomic displacement in STM topographs, we find
that small anisotropic strain can give rise to these strongly nematic localized
regions. Our experiments reveal a tendency of FeTe1-xSex near x~0.45 to form
puddles hosting static nematic order, suggestive of nematic fluctuations pinned
by structural inhomogeneity, and demonstrate a pronounced effect of anisotropic
strain on superconductivity in this regime.
|
The near vanishing of the cosmological constant is one of the most puzzling
open problems in theoretical physics. We consider a system, the so-called
framid, that features a technically similar problem. Its stress-energy tensor
has a Lorentz-invariant expectation value on the ground state, yet there are no
standard, symmetry-based selection rules enforcing this, since the ground state
spontaneously breaks boosts. We verify the Lorentz invariance of the
expectation value in question with explicit one-loop computations. These,
however, yield the expected result only thanks to highly nontrivial
cancellations, which are quite mysterious from the low-energy effective theory
viewpoint.
|
We demonstrate experimentally a method of varying the degree of
directionality in laser-induced molecular rotation. To control the ratio
between the number of clockwise and counter-clockwise rotating molecules (with
respect to a fixed laboratory axis), we change the polarization ellipticity of
the laser field of an optical centrifuge. The experimental data, supported by
the numerical simulations, show that the degree of rotational directionality
can be varied in a continuous fashion between unidirectional and bidirectional
rotation. The control can be executed with no significant loss in the total
number of rotating molecules. The technique could be used for studying the
effects of orientation of the molecular angular momentum on molecular
collisions and chemical reactions. It could also be utilized for controlling
magnetic and optical properties of gases, as well as for the enantioselective
detection of chiral molecules.
|
Based on a general transport theory for non-reciprocal non-Hermitian systems
and a topological model that encompasses a wide range of previously studied
models, we (i) provide conditions for effects such as reflectionless and
transparent transport, lasing, and coherent perfect absorption, (ii) identify
which effects are compatible and linked with each other, and (iii) determine by
which levers they can be tuned independently. For instance, the directed
amplification inherent in the non-Hermitian skin effect does not enter the
spectral conditions for reflectionless transport, lasing, or coherent perfect
absorption, but allows the transparency of the system to be adjusted. In addition,
in the topological model the conditions for reflectionless transport depend on
the topological phase, but those for coherent perfect absorption do not. This
then allows us to establish a number of distinct transport signatures of
non-Hermitian, nonreciprocal, and topological behaviour, in particular (I)
reflectionless transport in a direction that depends on the topological phase,
(II) invisibility coinciding with the skin-effect phase transition of
topological edge states, and (III) coherent perfect absorption in a system that
is transparent when probed from one side.
|
We obtain an upper bound for the number of critical points of the systole
function on $\mathcal{M}_g$. In addition, we obtain an upper bound for the number
of those critical points whose systole is smaller than a constant.
|
We study the multimessenger signals from the merger of a black hole with a
magnetized neutron star using resistive magnetohydrodynamics simulations
coupled to full general relativity. We focus on a case with a 5:1 mass ratio,
where only a small amount of the neutron star matter remains post-merger, but
we nevertheless find that significant electromagnetic radiation can be powered
by the interaction of the neutron star's magnetosphere with the black hole. In
the lead-up to merger, strong twisting of magnetic field lines from the
inspiral leads to plasmoid emission and results in a luminosity in excess of
that expected from unipolar induction. We find that the strongest emission
occurs shortly after merger during a transitory period in which magnetic loops
form and escape the central region. The remaining magnetic field collimates
around the spin axis of the remnant black hole before dissipating, an
indication that, in more favorable scenarios (higher black hole spin/lower mass
ratio) with larger accretion disks, a jet would form.
|
As a unique perovskite transparent oxide semiconductor, high-mobility
La-doped BaSnO3 films have been successfully synthesized by molecular beam
epitaxy and pulsed laser deposition. However, it remains a major challenge for
magnetron sputtering, a widely applied technique suitable for large-scale
fabrication, to grow high-mobility La-doped BaSnO3 films. Here, we developed a
method to synthesize high-mobility epitaxial La-doped BaSnO3 films (mobility up
to 121 cm2V-1s-1 at the carrier density ~ 4.0 x 10^20 cm-3 at room temperature)
directly on SrTiO3 single crystal substrates using high-pressure magnetron
sputtering. The structural and electrical properties of the La-doped BaSnO3
films were characterized by combined high-resolution X-ray diffraction, X-ray
photoemission spectroscopy, and temperature-dependent electrical transport
measurements. The room temperature electron mobility of La-doped BaSnO3 films
in this work is 2 to 4 times higher than the reported values of the films grown
by magnetron sputtering. Moreover, in the high carrier density range (n > 3 x
10^20 cm-3), the electron mobility value of 121 cm2V-1s-1 in our work is among
the highest values for all reported doped BaSnO3 films. It is revealed that
high argon pressure during sputtering plays a vital role in stabilizing the
fully relaxed films and inducing oxygen vacancies, which benefit the high
mobility at room temperature. Our work provides an easy and economical route to the large-scale synthesis of high-mobility transparent conducting films for transparent electronics.
|
The 16-year old Blaise Pascal found a way to determine if 6 points lie on a
conic using a straightedge. Nearly 400 years later, we develop a method that
uses a straightedge to check whether 10 points lie on a plane cubic curve.
|
A compact analytic model is proposed to describe the combined orientation
preference (OP) and ocular dominance (OD) features of simple cells and their
layout in the primary visual cortex (V1). This model consists of three parts:
(i) an anisotropic Laplacian (AL) operator that represents the local neural
sensitivity to the orientation of visual inputs; (ii) a receptive field (RF)
operator that models the anisotropic spatial RF that projects to a given V1
cell over scales of a few tenths of a millimeter and combines with the AL
operator to give an overall OP operator; and (iii) a map that describes how the
parameters of these operators vary approximately periodically across V1. The
parameters of the proposed model maximize the neural response at a given OP
with an OP tuning curve fitted to experimental results. It is found that the
anisotropy of the AL operator does not significantly affect OP selectivity,
which is dominated by the RF anisotropy, consistent with Hubel and Wiesel's original conclusion that the orientation tuning width of a V1 simple cell is inversely related to the elongation of its RF. A simplified OP-OD map is then
constructed to describe the approximately periodic OP-OD structure of V1 in a
compact form. Specifically, the map is approximated by retaining its dominant
spatial Fourier coefficients, which are shown to suffice to reconstruct the
overall structure of the OP-OD map. This representation is a suitable form to
analyze observed maps compactly and to be used in neural field theory of V1.
Application to independently simulated V1 structures shows that observed
irregularities in the map correspond to a spread of dominant coefficients in a
circle in Fourier space.
|
We propose a scheme comprising an array of anisotropic optical waveguides,
embedded in a gas of cold atoms, which can be tuned from a Hermitian to an
odd-PT-symmetric configuration through the manipulation of control and
assistant laser fields. We show that the system can be controlled by tuning
intra- and inter-cell coupling coefficients, enabling the creation of
topologically distinct phases and linear topological edge states. The waveguide
array, characterized by a quadrimer primitive cell, allows for implementing
transitions between Hermitian and odd-PT-symmetric configurations, broken and
unbroken PT -symmetric phases, topologically trivial and nontrivial phases, as
well as transitions between linear and nonlinear regimes. The introduced scheme
generalizes the Rice-Mele Hamiltonian for a nonlinear non-Hermitian quadrimer
array featuring odd-PT symmetry and makes accessible unique phenomena and
functionalities that emerge from the interplay of non-Hermiticity, topology,
and nonlinearity. We also show that in the presence of nonlinearity the system
sustains nonlinear topological edge states bifurcating from the linear
topological edge states and the modes without linear limit. Each nonlinear mode
represents a doublet of odd-PT-conjugate states. In the broken PT phase, the
nonlinear edge states may be effectively stabilized when an additional
absorption is introduced into the system.
|
A universal register machine, a formal model of computation, can be emulated on
the array of the Game of Life, a two-dimensional cellular automaton. We perform
spectral analysis on the computation dynamical process of the universal
register machine on the Game of Life. The array is divided into small sectors
and the power spectrum is calculated from the evolution in each sector. The
power spectrum can be classified into four categories by its shape: null, white noise, sharp peaks, and power law. By representing the shape of the power spectrum by a mark, we can visualize the activity of each sector during the computation
process. For example, the track of pulse moving between components of the
universal register machine and the position of frequently modified registers
can be identified. This method can expose the functional differences between regions of the computing machine.
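The sector-wise spectral analysis can be sketched in a few lines: divide each frame of the run into square sectors, take the time series of live-cell counts in each sector, and compute its power spectrum. The sector size and the four-way shape classification are the paper's; the implementation below is illustrative.

```python
import numpy as np

def sector_power_spectra(history, sector):
    """Power spectrum of the live-cell-count time series in each
    square sector of a cellular-automaton run.

    history : (T, H, W) binary array of successive Game of Life states
    sector  : side length of the square sectors
    """
    T, H, W = history.shape
    ns_h, ns_w = H // sector, W // sector
    spectra = np.empty((ns_h, ns_w, T // 2 + 1))
    for i in range(ns_h):
        for j in range(ns_w):
            block = history[:, i*sector:(i+1)*sector, j*sector:(j+1)*sector]
            signal = block.reshape(T, -1).sum(axis=1).astype(float)
            signal -= signal.mean()              # drop the DC component
            spectra[i, j] = np.abs(np.fft.rfft(signal)) ** 2
    return spectra
```

A quiescent sector then yields a null spectrum, a period-2 oscillator a sharp peak at the Nyquist frequency, and irregular activity a broadband (white-noise or power-law) shape.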
|
In this note, we investigate a new model theoretical tree property, called
the antichain tree property (ATP). We develop combinatorial techniques for ATP.
First, we show that ATP is always witnessed by a formula in a single free variable and that, for formulas, not having ATP is closed under disjunction. Second, we show the equivalence of ATP and $k$-ATP, and provide a criterion for a theory to not have ATP (to be NATP).
Using these combinatorial observations, we find algebraic examples of ATP and NATP, including pure groups, pure fields, and valued fields. More precisely, we prove that Mekler's construction for groups, a Chatzidakis-Ramsey-style criterion for PAC fields, and an AKE-style principle for valued fields preserve NATP. We also give a construction of an antichain tree in Skolem arithmetic and in atomless Boolean algebras.
|
We extend some classical results of Bousfield on homology localizations and
nilpotent completions to a presentably symmetric monoidal stable
$\infty$-category $\mathscr{M}$ admitting a multiplicative left-complete
$t$-structure. If $E$ is a homotopy commutative algebra in $\mathscr{M}$ we
show that $E$-nilpotent completion, $E$-localization, and a suitable formal
completion agree on bounded below objects when $E$ satisfies some reasonable
conditions.
|
In one-shot NAS, sub-networks need to be searched from the supernet to meet
different hardware constraints. However, the search cost is high and $N$ times
of searches are needed for $N$ different constraints. In this work, we propose
a novel search strategy called architecture generator to search sub-networks by
generating them, so that the search process can be much more efficient and
flexible. With the trained architecture generator, given target hardware
constraints as the input, $N$ good architectures can be generated for $N$
constraints by just one forward pass without re-searching and supernet
retraining. Moreover, we propose a novel single-path supernet, called unified
supernet, to further improve search efficiency and reduce GPU memory
consumption of the architecture generator. With the architecture generator and
the unified supernet, we propose a flexible and efficient one-shot NAS
framework, called Searching by Generating NAS (SGNAS). With the pre-trained
supernet, the search time of SGNAS for $N$ different hardware constraints is
only 5 GPU hours, which is $4N$ times faster than previous SOTA single-path
methods. After training from scratch, the top-1 accuracy of SGNAS on ImageNet is
77.1%, which is comparable with the SOTAs. The code is available at:
https://github.com/eric8607242/SGNAS.
|
We report here on the discovery with XMM-Newton of pulsations at 22 ms from
the central compact source associated with IKT16, a supernova remnant in the
Small Magellanic Cloud (SMC). The measured spin period and spin period
derivative correspond to 21.7661076(2) ms and $2.9(3)\times10^{-14}$ s s$^{-1}$, respectively. Assuming standard spin-down by magnetic dipole radiation, the spin-down power corresponds to $1.1\times10^{38}$ erg s$^{-1}$
implying a Crab-like pulsar. This makes it the most energetic pulsar discovered
in the SMC so far and a close analogue of PSR J0537--6910, a Crab-like pulsar
in the Large Magellanic Cloud. The characteristic age of the pulsar is 12 kyr.
With a period measurement available for this source for the first time, we also searched for the signal in archival data collected in radio with the Parkes telescope and in gamma-rays with Fermi/LAT, but no evidence for pulsation was found
in these energy bands.
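The quoted spin-down power and characteristic age follow directly from the measured timing solution under the magnetic-dipole assumption. A quick check, assuming the conventional neutron-star moment of inertia $I = 10^{45}$ g cm$^2$ (a standard value, not stated in the abstract):

```python
import math

# Spin-down power and characteristic age of the IKT16 pulsar from the
# measured timing values, under the standard magnetic-dipole model.
P = 21.7661076e-3       # spin period [s]
P_dot = 2.9e-14         # spin period derivative [s/s]
I = 1.0e45              # assumed neutron-star moment of inertia [g cm^2]

E_dot = 4 * math.pi**2 * I * P_dot / P**3          # spin-down power [erg/s]
tau_c = P / (2 * P_dot) / (3.156e7 * 1e3)          # characteristic age [kyr]

print(f"spin-down power    ~ {E_dot:.1e} erg/s")
print(f"characteristic age ~ {tau_c:.0f} kyr")
```

Both numbers reproduce the values in the abstract ($\dot{E} \approx 1.1\times10^{38}$ erg s$^{-1}$ and $\tau_c \approx 12$ kyr).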
|
For a locally presentable abelian category $\mathsf B$ with a projective
generator, we construct the projective derived and contraderived model
structures on the category of complexes, proving in particular the existence of
enough homotopy projective complexes of projective objects. We also show that
the derived category $\mathsf D(\mathsf B)$ is generated, as a triangulated
category with coproducts, by the projective generator of $\mathsf B$. For a
Grothendieck abelian category $\mathsf A$, we construct the injective derived
and coderived model structures on complexes. Assuming Vopěnka's principle, we
prove that the derived category $\mathsf D(\mathsf A)$ is generated, as a
triangulated category with products, by the injective cogenerator of $\mathsf
A$. More generally, we define the notion of an exact category with an object
size function and prove that the derived category of any such exact category
with exact $\kappa$-directed colimits of chains of admissible monomorphisms has
Hom sets. In particular, the derived category of any locally presentable
abelian category has Hom sets.
|
The recent emergence of contrastive learning approaches facilitates the
research on graph representation learning (GRL), introducing graph contrastive
learning (GCL) into the literature. These methods contrast semantically similar
and dissimilar sample pairs to encode the semantics into node or graph
embeddings. However, most existing works only performed model-level evaluation,
and did not explore the combination space of modules for more comprehensive and
systematic studies. For effective module-level evaluation, we propose a
framework that decomposes GCL models into four modules: (1) a sampler to
generate anchor, positive and negative data samples (nodes or graphs); (2) an
encoder and a readout function to get sample embeddings; (3) a discriminator to
score each sample pair (anchor-positive and anchor-negative); and (4) an
estimator to define the loss function. Based on this framework, we conduct
controlled experiments over a wide range of architectural designs and
hyperparameter settings on node and graph classification tasks. Specifically,
we manage to quantify the impact of a single module, investigate the
interaction between modules, and compare the overall performance with current
model architectures. Our key findings include a set of module-level guidelines
for GCL, e.g., simple samplers from LINE and DeepWalk are strong and robust; an
MLP encoder associated with Sum readout could achieve competitive performance
on graph classification. Finally, we release our implementations and results as
OpenGCL, a modularized toolkit that allows convenient reproduction, standard
model and module evaluation, and easy extension.
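The four-module decomposition can be captured in a small skeleton. The class and function names below are illustrative (they are not the OpenGCL API); the toy instantiation uses a dot-product discriminator and a logistic estimator to show how the pieces compose.

```python
import numpy as np
from dataclasses import dataclass
from typing import Callable

@dataclass
class GCLModel:
    sampler: Callable        # (1) yields (anchor, positive, negative) samples
    encoder: Callable        # (2) encoder + readout: sample -> embedding
    discriminator: Callable  # (3) scores an (anchor, other) embedding pair
    estimator: Callable      # (4) turns the pair scores into a loss

    def loss(self, graph):
        anchor, pos, neg = self.sampler(graph)
        z_a, z_p, z_n = (self.encoder(s) for s in (anchor, pos, neg))
        return self.estimator(self.discriminator(z_a, z_p),
                              self.discriminator(z_a, z_n))

# Toy instantiation on raw feature vectors instead of a real graph.
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))
model = GCLModel(
    sampler=lambda g: (g["anchor"], g["pos"], g["neg"]),
    encoder=lambda x: x / np.linalg.norm(x),
    discriminator=np.dot,
    estimator=lambda s_pos, s_neg: (-np.log(sigmoid(s_pos))
                                    - np.log(sigmoid(-s_neg))),
)
```

Swapping any one field while holding the others fixed is exactly the kind of controlled module-level experiment the framework enables.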
|
With use of the U(1) quantum rotor method in the path integral effective
action formulation, we have confirmed the mathematical similarity of the phase
Hamiltonian and of the extended Bose-Hubbard model with density-induced
tunneling (DIT). Moreover, we have shown that the latter model can be mapped to
a pseudospin Hamiltonian that exhibits two coexisting (single-particle and
pair) superfluid phases. Phase separation of the two has also been confirmed,
determining that there exists a range of coefficients in which only pair
condensation, and not single-particle superfluidity, is present. The DIT part
supports the coherence in the system at high densities and low temperatures,
but also has dissipative effects independent of the system's thermal
properties.
|
Recent advances in Named Entity Recognition (NER) show that document-level
contexts can significantly improve model performance. In many application
scenarios, however, such contexts are not available. In this paper, we propose
to find external contexts of a sentence by retrieving and selecting a set of
semantically relevant texts through a search engine, with the original sentence
as the query. We find empirically that the contextual representations computed
on the retrieval-based input view, constructed through the concatenation of a
sentence and its external contexts, can achieve significantly improved
performance compared to the original input view based only on the sentence.
Furthermore, we can improve the model performance of both input views by
Cooperative Learning, a training method that encourages the two input views to
produce similar contextual representations or output label distributions.
Experiments show that our approach can achieve new state-of-the-art performance
on 8 NER data sets across 5 domains.
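The Cooperative Learning objective can be sketched as the supervised loss on both input views plus a symmetric-KL term pulling their output label distributions together. This is a minimal numpy sketch of the idea; the weight `alpha` is illustrative, and the paper also considers agreement at the representation level rather than over label distributions.

```python
import numpy as np

def log_softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def cooperative_loss(logits_orig, logits_retr, labels, alpha=1.0):
    """Supervised loss on both input views (original sentence vs.
    sentence + retrieved contexts) plus a symmetric-KL consistency
    term that encourages the two views to produce similar output
    label distributions. `alpha` is an illustrative weight."""
    lp, lq = log_softmax(logits_orig), log_softmax(logits_retr)
    n = labels.shape[0]
    ce = -(lp[np.arange(n), labels].mean() + lq[np.arange(n), labels].mean())
    p, q = np.exp(lp), np.exp(lq)
    sym_kl = 0.5 * ((p * (lp - lq)).sum(axis=-1).mean()
                    + (q * (lq - lp)).sum(axis=-1).mean())
    return ce + alpha * sym_kl
```

At inference time the original-view model can then be used alone when no search engine is available, having absorbed some of the retrieval view's signal during training.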
|
Vertical Federated Learning (vFL) allows multiple parties that own different
attributes (e.g. features and labels) of the same data entity (e.g. a person)
to jointly train a model. To prepare the training data, vFL needs to identify
the common data entities shared by all parties. It is usually achieved by
Private Set Intersection (PSI) which identifies the intersection of training
samples from all parties by using personal identifiable information (e.g.
email) as sample IDs to align data instances. As a result, PSI would make
sample IDs of the intersection visible to all parties, and therefore each party
can know that the data entities shown in the intersection also appear in the
other parties, i.e. intersection membership. However, in many real-world
privacy-sensitive organizations, e.g. banks and hospitals, revealing membership
of their data entities is prohibited. In this paper, we propose a vFL framework
based on Private Set Union (PSU) that allows each party to keep sensitive
membership information to itself. Instead of identifying the intersection of
all training samples, our PSU protocol generates the union of samples as
training instances. In addition, we propose strategies to generate synthetic
features and labels to handle samples that belong to the union but not the
intersection. Through extensive experiments on two real-world datasets, we show
our framework can protect the privacy of the intersection membership while
maintaining the model utility.
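The union-based alignment can be sketched as follows: party A holds features, party B holds labels, and training runs over the union of their sample IDs, with missing entries filled synthetically so that neither party learns which IDs the other holds. The random-draw synthesis below is a placeholder; the paper proposes dedicated strategies for generating the synthetic features and labels.

```python
import numpy as np

def align_by_union(features_a, labels_b, feat_dim, n_classes, rng):
    """Build a training set over the *union* of sample IDs, filling in
    synthetic features (for IDs party A lacks) and synthetic labels
    (for IDs party B lacks).

    features_a : dict id -> feature vector held by party A
    labels_b   : dict id -> label held by party B
    """
    union_ids = sorted(set(features_a) | set(labels_b))
    X = np.array([features_a[i] if i in features_a
                  else rng.normal(size=feat_dim)       # synthetic features
                  for i in union_ids])
    y = np.array([labels_b[i] if i in labels_b
                  else rng.integers(n_classes)         # synthetic label
                  for i in union_ids])
    return union_ids, X, y
```

Because every ID in the union appears in the training set, observing the aligned data no longer reveals whether a given entity was present in the other party's records.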
|
Electroencephalograms (EEG) are noninvasive measurement signals of electrical
neuronal activity in the brain. One of the current major statistical challenges
is formally measuring functional dependency between those complex signals. This
paper proposes the spectral causality model (SCAU), a robust linear model under a causality paradigm, to reflect inter- and intra-frequency modulation effects that cannot be identified using other methods. SCAU inference is
conducted with three main steps: (a) signal decomposition into frequency bins,
(b) intermediate spectral band mapping, and (c) dependency modeling through
frequency-specific autoregressive models (VAR). We apply SCAU to study complex
dependencies during visual and lexical fluency tasks (word generation and
visual fixation) in 26 participants' EEGs. We compared the connectivity
networks estimated using SCAU with those from a VAR model. SCAU networks show a clear contrast between the two stimuli, while the link magnitudes also exhibit low variance in comparison with the VAR networks. Furthermore, SCAU dependency connections were not only consistent with findings in the neuroscience literature but also provided further evidence on the directionality of the
spatio-spectral dependencies such as the delta-originated and theta-induced
links in the fronto-temporal brain network.
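Step (a) of SCAU inference, decomposing each channel into frequency bins, can be sketched with an FFT mask per band. The band edges below are conventional EEG values assumed for illustration; the paper's binning may differ.

```python
import numpy as np

# Conventional EEG band edges in Hz (assumed for illustration).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_decompose(signal, fs):
    """Split one EEG channel into band-limited components by zeroing
    FFT coefficients outside each frequency band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum = np.fft.rfft(signal)
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = np.fft.irfft(spectrum * mask, n=signal.size)
    return out
```

The band components then feed steps (b) and (c): mapping between spectral bands and fitting the frequency-specific autoregressive models.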
|
Let $\Sigma$ be a closed Riemann surface, $h$ a positive smooth function on
$\Sigma$, $\rho$ and $\alpha$ real numbers. In this paper, we study a
generalized mean field equation \begin{align*}
-\Delta u=\rho\left(\dfrac{he^u}{\int_\Sigma
he^u}-\dfrac{1}{\mathrm{Area}\left(\Sigma\right)}\right)+\alpha\left(u-\fint_{\Sigma}u\right),
\end{align*} where $\Delta$ denotes the Laplace-Beltrami operator. We first
derive a uniform bound for solutions when $\rho\in (8k\pi, 8(k+1)\pi)$ for some non-negative integer $k\in \mathbb{N}$ and $\alpha\notin\mathrm{Spec}\left(-\Delta\right)\setminus\{0\}$. Then we obtain
existence results for $\alpha<\lambda_1\left(\Sigma\right)$ by using the
Leray-Schauder degree theory and the minimax method, where
$\lambda_1\left(\Sigma\right)$ is the first positive eigenvalue for $-\Delta$.
|
Argument mining is often addressed by a pipeline method where segmentation of
text into argumentative units is conducted first, followed by an argument component identification task. In this research, we apply a token-level
classification to identify claim and premise tokens from a new corpus of
argumentative essays written by middle school students. To this end, we compare
a variety of state-of-the-art models such as discrete features and deep
learning architectures (e.g., BiLSTM networks and BERT-based architectures) to
identify the argument components. We demonstrate that a BERT-based multi-task
learning architecture (i.e., token and sentence level classification)
adaptively pretrained on a relevant unlabeled dataset obtains the best results.
|
We discuss the essential spectrum of essentially self-adjoint elliptic
differential operators of first order and of Laplace type operators on
Riemannian vector bundles over geometrically finite orbifolds.
|
We analyze the SDSS data to classify the galaxies based on their colour using
a fuzzy set-theoretic method and quantify their environments using the local
dimension. We find that the fraction of the green galaxies does not depend on
the environment and $10\%-20\%$ of the galaxies at each environment are in the
green valley depending on the stellar mass range chosen. Approximately $10\%$
of the green galaxies at each environment host an AGN. Combining data from the
Galaxy Zoo, we find that $\sim 95\%$ of the green galaxies are spirals and
$\sim 5\%$ are ellipticals at each environment. Only $\sim 8\%$ of green
galaxies exhibit signs of interactions and mergers, $\sim 1\%$ have dominant
bulge, and $\sim 6\%$ host a bar. We show that the stellar mass distributions
for the red and green galaxies are quite similar at each environment. Our
analysis suggests that the majority of the green galaxies must curtail their
star formation using physical mechanism(s) other than interactions, mergers,
and those driven by bulge, bar and AGN activity. We speculate that these are
the massive galaxies that have grown only via smooth accretion and suppressed
the star formation primarily through mass driven quenching. Using a
Kolmogorov-Smirnov test, we do not find any statistically significant
difference between the properties of green galaxies in different environments.
We conclude that the environmental factors play a minor role and the internal
processes play the dominant role in quenching star formation in the green
valley galaxies.
|
Single sign-on authentication systems such as OAuth 2.0 are widely used in
web services. They allow users to use accounts registered with major identity
providers such as Google and Facebook to log in to multiple services (relying
parties). These services can both identify users and access a subset of the
user's data stored with the provider. We empirically investigate the end-user
privacy implications of OAuth 2.0 implementations in relying parties most
visited around the world. We collect data on the use of OAuth-based logins in
the Alexa Top 500 sites per country for five countries. We categorize user data
made available by four identity providers (Google, Facebook, Apple and
LinkedIn) and evaluate popular services accessing user data from the SSO
platforms of these providers. Many services allow users to choose from multiple
login options (with different identity providers). Our results reveal that
services request different categories and amounts of personal data from
different providers, with at least one choice undeniably more
privacy-intrusive. These privacy choices (and their privacy implications) are
highly invisible to users. Based on our analysis, we also identify areas which
could improve user privacy and help users make informed decisions.
|
This study explores the Gaussian and the Lorentzian distributed spherically
symmetric wormhole solutions in the $f(\tau, T)$ gravity. The basic idea of the
Gaussian and Lorentzian noncommutative geometries emerges as the physically
acceptable and substantial notion in quantum physics. This idea of the
noncommutative geometries with both the Gaussian and Lorentzian distributions
becomes more striking when wormhole geometries in the modified theories of
gravity are discussed. Here we consider a linear model within $f(\tau,T)$
gravity to investigate traversable wormholes. In particular, we discuss the
possible cases for the wormhole geometries using the Gaussian and the
Lorentzian noncommutative distributions to obtain the exact shape function for
them. By incorporating the particular values of the unknown parameters
involved, we discuss different properties of the new wormhole geometries
explored here. It is noted that the involved matter violates the weak energy
condition for both the cases of the noncommutative geometries, whereas there is
a possibility for a physically viable wormhole solution. By analyzing the
equilibrium condition, it is found that the acquired solutions are stable.
Furthermore, we provide the embedded diagrams for wormhole structures under
Gaussian and Lorentzian noncommutative frameworks. Moreover, we present the
critical analysis on an anisotropic pressure under the Gaussian and the
Lorentzian distributions.
|
We report test results searching for an effect of electrostatic charge on
weight. For conducting test objects of mass of order 1 kilogram, we found no
effect on weight, for potentials ranging from 10 V to 200 kV, corresponding to
charge states ranging from $10^{-9}$ to over $10^{-5}$ coulombs, and for both
polarities, to within a measurement precision of 2 grams. While such a result
may not be unexpected, this is the first unipolar, high-voltage, meter-scale,
static test for electro-gravitic effects reported in the literature. Our
investigation was motivated by the search for possible coupling to a long-range
scalar field that could surround the planet, yet go otherwise undetected. The
large buoyancy force predicted within the classical Kaluza theory involving a
long-range scalar field is falsified by our results, and this appears to be the
first such experimental test of the classical Kaluza theory in the weak field
regime where it was otherwise thought identical with known physics. A
parameterization is suggested to organize the variety of electro-gravitic
experiment designs.
|
Communication efficiency is a major bottleneck in the applications of
distributed networks. To address this, quantized distributed optimization has attracted considerable attention. However, most of the
existing quantized distributed optimization algorithms can only converge
sublinearly. To achieve linear convergence, this paper proposes a novel
quantized distributed gradient tracking algorithm (Q-DGT) to minimize a finite
sum of local objective functions over directed networks. Moreover, we
explicitly derive the update rule for the number of quantization levels, and
prove that Q-DGT can converge linearly even when each exchanged variable is quantized to one bit. Numerical results also confirm the efficiency of the
proposed algorithm.
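The communication bottleneck addressed by Q-DGT comes down to sending each exchanged variable through a finite-level quantizer. A minimal sketch of an unbiased (stochastically rounded) uniform quantizer is below; the adaptive update rule for the number of levels, which is the paper's key ingredient for linear convergence, is not reproduced here.

```python
import numpy as np

def uniform_quantize(x, levels, x_min, x_max, rng):
    """Unbiased uniform quantizer: values in [x_min, x_max] are mapped
    onto `levels` evenly spaced grid points using stochastic rounding,
    so that E[Q(x)] = x for in-range inputs."""
    step = (x_max - x_min) / (levels - 1)
    scaled = (np.clip(x, x_min, x_max) - x_min) / step
    low = np.floor(scaled)
    # round up with probability equal to the fractional part
    up = rng.random(np.shape(x)) < (scaled - low)
    return x_min + (low + up) * step
```

With `levels=2` each entry costs a single bit per round, the regime in which the paper still proves linear convergence.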
|
The multislice method, which simulates the propagation of the incident
electron wavefunction through a crystal, is a well-established method for
analyzing the multiple scattering effects that an electron beam may undergo.
The inclusion of magnetic effects into this method proves crucial towards
simulating magnetic differential phase contrast images at atomic resolution,
enhanced magnetic interaction of vortex beams with magnetic materials,
calculating magnetic Bragg spots, or searching for magnon signatures, to name a
few examples. Inclusion of magnetism poses novel challenges to the efficiency
of the multislice method for larger systems, especially regarding the
consistent computation of magnetic vector potentials and magnetic fields over
large supercells. We present in this work a tabulation of parameterized
magnetic values for the first three rows of transition metal elements computed
from atomic density functional theory calculations, allowing for the efficient
computation of approximate magnetic vector fields across large crystals using
only structural and magnetic moment size and direction information.
Ferromagnetic bcc Fe and tetragonal FePt are chosen as examples in this work to
showcase the performance of the parameterization versus directly obtaining
magnetic vector fields from the unit cell spin density by density functional
theory calculations, both for the quantities themselves and the resulting
magnetic signal from their respective use in multislice calculations.
|
Recently, several experiments on La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) challenged
the Fermi liquid picture for overdoped cuprates, and stimulated intensive
debates [1]. In this work, we study the magnetotransport phenomena in such
systems based on the Fermi liquid assumption. The Hall coefficient $R_H$ and
magnetoresistivity $\rho_{xx}$ are investigated near the van Hove singularity
$x_{\tiny\text{VHS}}\approx0.2$ across which the Fermi surface topology changes
from hole- to electron-like. Our main findings are: (1) $R_H$ depends on the
magnetic field $B$ and drops from positive to negative values with increasing
$B$ in the doping regime $x_{\tiny\text{VHS}}<x\lesssim0.3$; (2) $\rho_{xx}$
grows as $B^2$ at small $B$ and saturates at large $B$, while in the
transition regime a "nearly linear" behavior shows up. Our results can be
further tested by future magnetotransport experiments in the overdoped LSCO.
|
Using the cohomology of the $G_2$-flag manifolds $G_2/U(2)_{\pm}$, and their
structure as a fiber bundle over the homogeneous space $G_2/SO(4)$, we compute
their Borel cohomology and the $\mathbb{Z}_2$ Fadell-Husseini index of such
fiber bundles, for the $\mathbb{Z}_2$ action given by complex conjugation.
Considering the orthogonal complement of the tautological bundle $\gamma$
over $\widetilde{G}_{3}( \mathbb{R}^{7})$, we compute the $\mathbb{Z}_2$
Fadell-Husseini index of the pullback bundle of $s\gamma^{\perp}$ along the
composition of the embedding between $G_2/SO(4)$ and $\widetilde{G}_{3}(
\mathbb{R}^{7})$, and the fiber bundle $ G_2/U(2)_{\pm} \to G_2/SO(4)$. Here
$s\gamma^{\perp}$ means the associated sphere bundle of the orthogonal bundle
$\gamma^{\perp}$. Furthermore, we derive a general formula for the $n$-fold
product bundle $(s\gamma^{\perp})^n$ for which we make the same computations.
|
Identifying "superspreaders" of disease is a pressing concern for society
during pandemics such as COVID-19. Superspreaders represent a group of people
who have many more social contacts than others. The widespread deployment of
WLAN infrastructure enables non-invasive contact tracing via people's
ubiquitous mobile devices. This technology offers promise for detecting
superspreaders. In this paper, we propose a general framework for
WLAN-log-based superspreader detection. In our framework, we first use WLAN
logs to construct contact graphs by jointly considering human symmetric and
asymmetric interactions. Next, we adopt three vertex centrality measurements
over the contact graphs to generate three groups of superspreader candidates.
Finally, we leverage SEIR simulation to determine groups of superspreaders
among these candidates, who are the most critical individuals for the spread of
disease based on the simulation results. We have implemented our framework and
evaluated it on a WLAN dataset with 41 million log entries from a large-scale
university. Our evaluation shows superspreaders exist on university campuses.
They change over the first few weeks of a semester, but stabilize throughout
the rest of the term. The data also demonstrate that both symmetric and
asymmetric contact tracing can discover superspreaders, but the latter performs
better with daily contact graphs. Further, the evaluation shows no consistent
differences among three vertex centrality measures for long-term (i.e., weekly)
contact graphs, which necessitates the inclusion of SEIR simulation in our
framework. We believe our proposed framework and these results may provide
timely guidance for public health administrators regarding effective testing,
intervention, and vaccination policies.
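The candidate-generation and simulation stages of the framework can be sketched in pure Python. This is a toy illustration, not the paper's implementation: the edge list, the parameter values, and the use of degree centrality alone (the framework employs three centrality measures over WLAN-log contact graphs) are assumptions of this example.

```python
from collections import defaultdict
import random

def degree_centrality_ranking(edges):
    """Rank vertices of a contact graph by degree, one simple choice of
    vertex centrality (the framework uses three such measures)."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg, key=deg.get, reverse=True)

def seir_outbreak_size(edges, seed, beta=0.3, sigma=0.5, gamma=0.2,
                       steps=50, rng=None):
    """Discrete-time SEIR simulation on the contact graph seeded at one
    candidate vertex; returns the number of ever-infected vertices."""
    rng = rng or random.Random(0)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    state = {n: "S" for n in adj}
    state[seed] = "E"
    for _ in range(steps):
        nxt = dict(state)
        for n, s in state.items():
            if s == "S" and any(state[m] == "I" for m in adj[n]) \
                    and rng.random() < beta:
                nxt[n] = "E"
            elif s == "E" and rng.random() < sigma:
                nxt[n] = "I"
            elif s == "I" and rng.random() < gamma:
                nxt[n] = "R"
        state = nxt
    return sum(1 for s in state.values() if s != "S")

# toy contact graph; vertex "a" has the most contacts
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("d", "e")]
candidates = degree_centrality_ranking(edges)[:2]
sizes = {c: seir_outbreak_size(edges, c) for c in candidates}
```

Candidates are first ranked by centrality and then compared by the outbreak size each would seed, mirroring the candidate-then-simulation order of the framework.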
|
The recent demonstration of MoSi2N4 and its exceptional stability to air,
water, acid, and heat has generated intense interest in this family of
two-dimensional (2D) materials. Among these materials, NbSi2N4, VSi2N4, and
VSi2P4 are semiconducting, easy-plane ferromagnets with negligible in-plane
magnetic anisotropy. They thus satisfy a necessary condition for exhibiting a
dissipationless spin superfluid mode. The Curie temperatures of monolayer
VSi2P4 and VSi2N4 are determined to be above room temperature based on Monte
Carlo and density functional theory calculations. The magnetic moments of
VSi2N4 can be switched from in-plane to out-of-plane by applying tensile
biaxial strain or electron doping.
|
Being able to parse code-switched (CS) utterances, such as Spanish+English or
Hindi+English, is essential to democratize task-oriented semantic parsing
systems for certain locales. In this work, we focus on Spanglish
(Spanish+English) and release a dataset, CSTOP, containing 5800 CS utterances
alongside their semantic parses. We examine the CS generalizability of various
Cross-lingual (XL) models and exhibit the advantage of pre-trained XL language
models when data for only one language is present. As such, we focus on
improving the pre-trained models for the case when only an English corpus,
together with either zero or a few CS training instances, is available. We propose
two data augmentation methods for the zero-shot and the few-shot settings:
fine-tune using translate-and-align and augment using a generation model
followed by match-and-filter. Combining the few-shot setting with the above
improvements decreases the initial 30-point accuracy gap between the zero-shot
and the full-data settings by two thirds.
|
Let $\mathbf G$ be the finite simple group $\mathrm{PSL}(2,\mathbf F_{11})$.
It has an irreducible representation $V_{10}$ of dimension 10. In this note, we
study a special trivector $\sigma\in \bigwedge^3V_{10}^\vee$ which is $\mathbf
G$-invariant. Following the construction of Debarre-Voisin, we obtain a smooth
hyperk\"ahler fourfold $X_6^\sigma\subset\mathrm{Gr}(6,V_{10})$ with many
symmetries. We will also look at the associated Peskine variety
$X_1^\sigma\subset \mathbf P(V_{10})$, which is highly symmetric as well and
admits 55 isolated singular points. It helps us better understand the
geometry of the special Debarre-Voisin fourfold $X_6^\sigma$.
|
The Electric Network Frequency (ENF) is a signature of power distribution
networks that can be captured by multimedia recordings made in areas where
there is electrical activity. This has led to an emergence of several forensic
applications based on the use of the ENF signature. Examples of such
applications include estimating or verifying the time-of-recording of a media
signal and inferring the power grid associated with the location in which the
media signal was recorded. In this paper, we carry out a feasibility study to
examine the possibility of using embedded ENF traces to pinpoint the
location-of-recording of a signal within a power grid. In this study, we
demonstrate that it is possible to pinpoint the location-of-recording to a
certain geographical resolution using power signal recordings containing strong
ENF traces. To this end, a high-passed version of an ENF signal is
extracted and it is demonstrated that the correlation between two such signals,
extracted from recordings made in different geographical locations within the
same grid, decreases as the distance between the recording locations increases.
We harness this property of correlation in the ENF signals to propose
trilateration based localization methods, which pinpoint the unknown location
of a recording while using some known recording locations as anchor locations.
We also discuss the challenges that need to be overcome in order to extend this
work to using ENF traces in noisier audio/video recordings for such fine
localization purposes.
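The trilateration step can be illustrated with a small numpy sketch. Here the anchor coordinates are hypothetical and the distances are supplied directly; in the study they would be inferred from the decay of correlation between high-passed ENF signals, and the linearized least-squares formulation below is only one standard way to solve the geometry.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linearized least-squares trilateration: recover an unknown
    recording location from distances to known anchor locations.
    Subtracting the first circle equation from the others gives a
    linear system in the unknown coordinates."""
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 4.0])                      # unknown location
dists = np.linalg.norm(anchors - target, axis=1)   # noiseless distances
est = trilaterate(anchors, dists)                  # recovers (3, 4)
```

With noiseless distances the recovery is exact; ENF-derived distances would be noisy, which is where the least-squares formulation earns its keep.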
|
We present a collaborative learning method called Mutual Contrastive Learning
(MCL) for general visual representation learning. The core idea of MCL is to
perform mutual interaction and transfer of contrastive distributions among a
cohort of models. Benefiting from MCL, each model can learn extra contrastive
knowledge from others, leading to more meaningful feature representations for
visual recognition tasks. We emphasize that MCL is conceptually simple yet
empirically powerful. It is a generic framework that can be applied to both
supervised and self-supervised representation learning. Experimental results on
supervised and self-supervised image classification, transfer learning and
few-shot learning show that MCL can lead to consistent performance gains,
demonstrating that MCL can guide the network to generate better feature
representations.
|
In this paper, we discuss the educational value of a few mid-size and one
large applied research projects at the Computer Science Department of Okanagan
College (OC) and at the Universities of Paris East Creteil (LACL) and Orleans
(LIFO) in France. We found that some freshman students are very active and
eager to be involved in applied research projects starting from the second
semester. They are actively participating in programming competitions and want
to be involved in applied research projects to compete with sophomore and older
students. Our observation is based on five NSERC Engage College and Applied
Research and Development (ARD) grants, and several small applied projects.
Student involvement in applied research is a key motivation and success factor
in our activities, but we are also involved in transferring some results of
applied research, namely programming techniques, into the parallel programming
courses for beginners at the senior- and first-year MSc levels. We illustrate
this feedback process with programming notions for beginners, practical tools
to acquire them and the overall success/failure of students as experienced for
more than 10 years in our French University courses.
|
We present a new approach to jet definition as an alternative to methods that
exploit kinematic data directly, such as the anti-$k_T$ scheme; we use the
kinematics to represent the particles of an event in a new multidimensional
space. This space is spanned by the eigenvectors of a matrix of kinematic
relations between particles, and the resulting partition is called spectral
clustering. After confirming its Infra-Red (IR) safety, we compare its
performance to the anti-$k_T$ algorithm in reconstructing relevant final
states. We base this on Monte Carlo (MC) samples generated from the following
processes: $qq\to H_{125\,\text{GeV}} \rightarrow H_{40\,\text{GeV}}
H_{40\,\text{GeV}} \rightarrow b \bar{b} b \bar{b}$, $qq\to H_{500\,\text{GeV}}
\rightarrow H_{125\,\text{GeV}} H_{125\,\text{GeV}} \rightarrow b \bar{b} b
\bar{b}$ and $gg,q\bar q\to t\bar t\to b\bar b W^+W^-\to b\bar b jj
\ell\nu_\ell$. Additionally, the impact of pileup on the clustering algorithm
is demonstrated. Finally, we show that the results for spectral clustering are
obtained without any change in the algorithm's parameter settings, unlike the
anti-$k_T$ case, which requires the cone size to be adjusted to the physics
process under study.
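A minimal stand-in for the spectral step can be written in a few lines of numpy. The Gaussian affinity in the (eta, phi) plane, the two-way split on a single Fiedler eigenvector, and the toy particle positions are all assumptions of this sketch; the actual algorithm embeds particles in a multidimensional eigenvector space.

```python
import numpy as np

def spectral_partition(eta, phi, sigma=1.0):
    """Two-way spectral split of event constituents: build a Gaussian
    affinity matrix from (eta, phi) separations, form the graph
    Laplacian, and split on the sign of the Fiedler (second-smallest)
    eigenvector."""
    pts = np.stack([eta, phi], axis=1)
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))      # kinematic affinity matrix
    L = np.diag(W.sum(axis=1)) - W            # unnormalised graph Laplacian
    _, vecs = np.linalg.eigh(L)               # eigenvalues ascending
    return (vecs[:, 1] > 0).astype(int)       # sign of the Fiedler vector

# two well-separated groups of particles in the (eta, phi) plane
eta = np.array([0.0, 0.1, -0.1, 3.0, 3.1, 2.9])
phi = np.array([0.0, 0.1, 0.05, 1.0, 1.1, 0.95])
labels = spectral_partition(eta, phi)
```

The two tight groups end up with opposite labels, which is the sense in which the eigenvector space encodes jet membership without a hand-tuned cone size.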
|
Antireflection coatings are an interesting challenge for multijunction solar
cells due to their broadband spectrum absorption and the requirement of current
matching of each subcell. A new design for multijunction solar cell
antireflection coatings is presented in this work, in which alternating high-
and low-index materials are used to minimize the reflection over a broad band
(300 nm-1800 nm). We compared the short-circuit current density of high-low
refractive-index stack designs with optimum double-layer antireflection
coatings by considering two optical materials combinations (MgF2/ZnS and
Al2O3/TiO2) for the AM0 and AM1.5D spectra. The calculations demonstrate that
for lattice-matched triple-junction solar cells and inverted metamorphic
quadruple-junction solar cells, high-low refractive index stacks outperform the
optimum double-layer antireflection coatings. The new design philosophy
requires no new optical materials because only two materials are used and
exhibits excellent performance over a broad spectrum. The angular performance of
these antireflection coatings is slightly better than that of classical double-layers,
whereas the analysis for thickness sensitivity shows that small deviations from
deposition targets only slightly impact the performance of antireflection
coatings. Finally, some technical solutions for depositing these high-low
refractive index multilayers are discussed.
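Reflectance comparisons of this kind are usually done with the standard characteristic-matrix method for thin films. The sketch below, with assumed real non-dispersive indices and a single quarter-wave MgF2 layer on glass at normal incidence, only illustrates the kind of calculation involved, not the paper's multilayer designs.

```python
import numpy as np

def reflectance(n_layers, d_layers, n_in, n_sub, wavelength):
    """Normal-incidence reflectance of a thin-film stack via the
    characteristic (transfer) matrix method; indices are taken real
    and non-dispersive for this illustration."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n * d / wavelength      # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)               # amplitude reflectance
    return float(np.abs(r) ** 2)

wl = 550.0                                            # design wavelength, nm
R_bare = reflectance([], [], 1.0, 1.52, wl)           # uncoated glass
R_arc = reflectance([1.38], [wl / (4 * 1.38)], 1.0, 1.52, wl)  # quarter-wave MgF2
```

At the design wavelength the single quarter-wave layer already cuts the ~4.3 % reflectance of bare glass to roughly 1.3 %; the paper's high-low stacks extend this suppression over the full 300-1800 nm band.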
|
Interaction enables users to navigate large amounts of data effectively,
supports cognitive processing, and broadens the methods available for representing data.
However, there have been few attempts to empirically demonstrate whether adding
interaction to a static visualization improves its function, beyond popular
belief. In this paper, we address this gap. We use a classic Bayesian
reasoning task as a testbed for evaluating whether allowing users to interact
with a static visualization can improve their reasoning. Through two
crowdsourced studies, we show that adding interaction to a static Bayesian
reasoning visualization does not improve participants' accuracy on a Bayesian
reasoning task. In some cases, it can significantly detract from it. Moreover,
we demonstrate that underlying visualization design modulates performance and
that people with high versus low spatial ability respond differently to
different interaction techniques and underlying base visualizations. Our work
suggests that interaction is not as unambiguously good as we often believe; a
well designed static visualization can be as, if not more, effective than an
interactive one.
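For reference, the quantity a classic Bayesian reasoning task asks for is the posterior computed by Bayes' rule. The numbers below are typical task values (hypothetical, not drawn from the studies in the paper):

```python
def bayes_posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test), the quantity a classic Bayesian
    reasoning task asks participants to estimate."""
    true_pos = prior * sensitivity
    false_pos = (1.0 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# typical task values: 1% base rate, 80% hit rate, 9.6% false-alarm rate
p = bayes_posterior(0.01, 0.80, 0.096)  # ~0.078, far below most naive guesses
```

The gap between this normative answer and participants' estimates is what the static and interactive visualizations are meant to close.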
|
As cross-chain technologies make the interactions among different blockchains
(hereinafter "chains") possible, multi-chains consensus is becoming more and
more important in blockchain networks. However, more attention has been paid to
the single-chain consensus schemes. The multi-chains consensus with trusted
miners' participation has not been considered, leaving opportunities for
malicious users to launch Diverse Miners Behaviors (DMB) attacks on different
chains. DMB attackers can be friendly in the consensus process of some chains
called mask-chains to enhance trust value, while on other chains called
kill-chains they engage in destructive network behaviors. In this paper, we
propose a multi-chains consensus scheme named Proof-of-DiscTrust (PoDT) to
defend against DMB attacks. The distinctive trust idea (DiscTrust) is introduced to
evaluate the trust value of each user concerning different chains. A dynamic
behaviors prediction scheme is designed to strengthen DiscTrust to prevent
intensive DMB attackers who maintain high trust by alternately creating true or
false blocks on kill-chains. On this basis, a trusted miners selection
algorithm for multi-chains can be achieved at a round of block creation.
Experimental results show that PoDT is secure against DMB attacks and more
effective than traditional consensus schemes in multi-chains environments.
|
The Eshelby formalism for an inclusion in a solid has significant
theoretical and practical implications in mechanics and other fields of
heterogeneous media. Eshelby's finding that a uniform eigenstrain prescribed in
a solitary ellipsoidal inclusion in an infinite isotropic medium results in a
uniform elastic strain field in the inclusion leads to the conjecture that the
ellipsoid is the only inclusion that possesses the so-called Eshelby uniformity
property. Previously, only the weak version of the conjecture has been proved
for the isotropic medium, whereas the general validity of the conjecture for
anisotropic media in three dimensions is yet to be explored. In this work,
firstly, we present proofs of the weak version of the generalized Eshelby
conjecture for anisotropic media that possess cubic, transversely isotropic,
orthotropic, and monoclinic symmetries. Secondly, we prove that in these
anisotropic media, there exist non-ellipsoidal inclusions that can transform
particular polynomial eigenstrains of even degrees into polynomial elastic
strain fields of the same even degrees in them. These results constitute
counter-examples, in the strong sense, to the generalized high-order Eshelby
conjecture (inverse problem of Eshelby's polynomial conservation theorem) for
polynomial eigenstrains in both anisotropic media and the isotropic medium
(quadratic eigenstrain only). These findings reveal striking richness of the
uniformity between the eigenstrains and the correspondingly induced elastic
strains in inclusions in anisotropic media beyond the canonical ellipsoidal
inclusions. Since the strain fields in embedded and inherently anisotropic
quantum dot crystals are effective tuning knobs of the quality of the emitted
photons by the quantum dots, the results may have implications in the
technology of quantum information, in addition to mechanics and materials
science.
|
Recent years have witnessed an upsurge of interest in the problem of anomaly
detection on attributed networks due to its importance in both research and
practice. Although various approaches have been proposed to solve this problem,
two major limitations exist: (1) unsupervised approaches usually work much less
efficiently due to the lack of supervisory signal, and (2) existing anomaly
detection methods only use local contextual information to detect anomalous
nodes, e.g., one- or two-hop information, but ignore the global contextual
information. Since anomalous nodes differ from normal nodes in structures and
attributes, it is intuitive that the distance between anomalous nodes and their
neighbors should be larger than that between normal nodes and their neighbors
if we remove the edges connecting anomalous and normal nodes. Thus, hop counts
based on both global and local contextual information can serve as
indicators of anomaly. Motivated by this intuition, we propose a hop-count
based model (HCM) to detect anomalies by modeling both local and global
contextual information. To make better use of hop counts for anomaly
identification, we propose to use hop counts prediction as a self-supervised
task. We design two anomaly scores based on the hop-count predictions of the
HCM model to identify anomalies. Besides, we employ Bayesian learning to train
the HCM model, capturing uncertainty in the learned parameters and avoiding overfitting.
Extensive experiments on real-world attributed networks demonstrate that our
proposed model is effective in anomaly detection.
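The hop counts underlying this intuition are ordinary BFS shortest-path distances. A minimal sketch on a toy adjacency list (the HCM model itself learns to predict these distances as a self-supervised task rather than computing them directly):

```python
from collections import deque

def hop_count(adj, src, dst):
    """BFS shortest-path hop count between two nodes of a network;
    returns -1 if dst is unreachable from src."""
    if src == dst:
        return 0
    seen, frontier, hops = {src}, deque([src]), 0
    while frontier:
        hops += 1
        for _ in range(len(frontier)):
            node = frontier.popleft()
            for m in adj[node]:
                if m == dst:
                    return hops
                if m not in seen:
                    seen.add(m)
                    frontier.append(m)
    return -1

# node 4 hangs off the periphery of a tight cluster {0, 1, 2}
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
d = hop_count(adj, 0, 4)  # 0 -> 2 -> 3 -> 4, i.e. 3 hops
```

Large predicted hop counts between a node and its attributed neighbours are exactly the signal the anomaly scores aggregate.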
|
We report on hyperpolarization of quadrupolar (I=3/2) 131Xe via spin-exchange
optical pumping. Observations of the 131Xe polarization dynamics show that the
effective alkali-metal/131Xe spin-exchange cross-sections are large enough to
compete with 131Xe spin relaxation. 131Xe polarization up to 7.6 ± 1.5
percent was achieved in ca. 8.5×10^20 spins--a ca. 100-fold improvement in the
total spin angular momentum--enabling applications including measurement of
spin-dependent neutron-131Xe s-wave scattering and sensitive searches for
time-reversal violation in neutron-131Xe interactions beyond the Standard
Model.
|
A translation from Spanish into French of a paper by N. Cuesta published in
1954. The paper deals mainly with partially and totally ordered sets. Two
subjects receive special attention: the construction of new ordered sets
starting from a family of such sets, and the completion of ordered sets by
tools akin to Dedekind cuts. Curiously enough, the so-called surreal numbers
(later defined by Conway in 1974) are already there, twenty years earlier.
|
We comprehensively study admissible transformations between normal linear
systems of second-order ordinary differential equations with an arbitrary
number of dependent variables under several appropriate gauges of the arbitrary
elements parameterizing these systems. For each class from the constructed
chain of nested gauged classes of such systems, we single out its singular
subclass, which appears to consist of systems being similar to the elementary
(free particle) system whereas the regular subclass is the complement of the
singular one. This allows us to exhaustively describe the equivalence groupoids
of the above classes as well as of their singular and regular subclasses.
Applying various algebraic techniques, we establish principal properties of Lie
symmetries of the systems under consideration and outline ways for completely
classifying these symmetries. In particular, we compute the sharp lower and
upper bounds for the dimensions of the maximal Lie invariance algebras
possessed by systems from each of the above classes and subclasses. We also
show how equivalence transformations and Lie symmetries can be used for order
reduction and integration of such systems. As an illustrative
example of using the theory developed, we solve the complete group
classification problems for all these classes in the case of two dependent
variables.
|
The Hubble parameter inferred from cosmic microwave background observations
is consistently lower than that from local measurements, which could hint
towards new physics. Solutions to the Hubble tension typically require a
sizable amount of extra radiation $\Delta N^{}_{\rm eff}$ during recombination.
However, the amount of $\Delta N^{}_{\rm eff}$ in the early Universe is
unavoidably constrained by Big Bang Nucleosynthesis (BBN), which causes
problems for such solutions. We present a possibility to evade this problem by
introducing neutrino self-interactions via a simple Majoron-like coupling. The
scalar is slightly heavier than $1~{\rm MeV}$ and allowed to be fully
thermalized throughout the BBN era. The rise of neutrino temperature due to the
entropy transfer via $\phi \to \nu\overline{\nu}$ reactions compensates for the
effect of a large $\Delta N^{}_{\rm eff}$ on BBN. Values of $\Delta N^{}_{\rm
eff}$ as large as $0.7$ are in this case compatible with BBN. We perform a fit
to the parameter space of the model.
|
Online educational systems running on smart devices have the advantage of
allowing users to learn online regardless of their location. In
particular, data synchronization enables users to cooperate on contents in real
time anywhere by sharing their files via cloud storage. However, users cannot
collaborate by simultaneously modifying files that are shared with each other.
In this paper, we propose a content collaboration method and a history
management technique that are based on distributed system structure and can
synchronize data shared in the cloud for multiple users and multiple devices.
|
Face recognition and identification is a very important application in
machine learning. Due to the increasing amount of available data, traditional
approaches based on matricization and matrix PCA methods can be difficult to
implement. In this setting, tensorial approaches are a natural choice, due to
the very structure of the databases, for example in the case of color images.
Nevertheless, even though various authors proposed factorization strategies for
tensors, the size of the considered tensors can pose some serious issues. When
only a few features are needed to construct the projection space, there is no
need to compute an SVD of the whole data. Two versions of the tensor Golub-Kahan
algorithm are considered in this manuscript, as an alternative to the classical
use of the tensor SVD which is based on truncated strategies. In this paper, we
consider the Tensor Tubal Golub-Kahan Principal Component Analysis method, whose
purpose is to extract the main features of images using the tensor singular
value decomposition (SVD) based on the tensor cosine product that uses the
discrete cosine transform. This approach is applied for classification and face
recognition and numerical tests show its effectiveness.
|
Extreme near-field heat transfer between metallic surfaces is a subject of
debate as the state-of-the-art theory and experiments are in disagreement on
the energy carriers driving heat transport. In an effort to elucidate the
physics of extreme near-field heat transfer between metallic surfaces, this
Letter presents a comprehensive model combining radiation, acoustic phonon and
electron transport across sub-10-nm vacuum gaps. The results obtained for gold
surfaces show that in the absence of bias voltage, acoustic phonon transport is
dominant for vacuum gaps smaller than ~2 nm. The application of a bias voltage
significantly affects the dominant energy carriers as it increases the phonon
contribution mediated by the long-range Coulomb force and the electron
contribution due to a lower potential barrier. For a bias voltage of 0.6 V,
acoustic phonon transport becomes dominant at a vacuum gap of 5 nm, whereas
electron tunneling dominates at sub-1-nm vacuum gaps. The comparison of the
theory against experimental data from the literature suggests that
well-controlled measurements between metallic surfaces are needed to quantify
the contributions of acoustic phonons and electrons as a function of the bias
voltage.
|
Let $V\subseteq A$ be a conformal inclusion of vertex operator algebras and
let $\mathcal{C}$ be a category of grading-restricted generalized $V$-modules
that admits the vertex algebraic braided tensor category structure of
Huang-Lepowsky-Zhang. We give conditions under which $\mathcal{C}$ inherits
semisimplicity from the category of grading-restricted generalized $A$-modules
in $\mathcal{C}$, and vice versa. The most important condition is that $A$ be a
rigid $V$-module in $\mathcal{C}$ with non-zero categorical dimension, that is,
we assume the index of $V$ as a subalgebra of $A$ is finite and non-zero. As a
consequence, we show that if $A$ is strongly rational, then $V$ is also
strongly rational under the following conditions: $A$ contains $V$ as a
$V$-module direct summand, $V$ is $C_2$-cofinite with a rigid tensor category
of modules, and $A$ has non-zero categorical dimension as a $V$-module. These
results are vertex operator algebra interpretations of theorems proved for
general commutative algebras in braided tensor categories. We also generalize
these results to the case that $A$ is a vertex operator superalgebra.
|
Machine-learning models contain information about the data they were trained
on. This information leaks either through the model itself or through
predictions made by the model. Consequently, when the training data contains
sensitive attributes, assessing the amount of information leakage is paramount.
We propose a method to quantify this leakage using the Fisher information of
the model about the data. Unlike the worst-case a priori guarantees of
differential privacy, Fisher information loss measures leakage with respect to
specific examples, attributes, or sub-populations within the dataset. We
motivate Fisher information loss through the Cram\'{e}r-Rao bound and delineate
the implied threat model. We provide efficient methods to compute Fisher
information loss for output-perturbed generalized linear models. Finally, we
empirically validate Fisher information loss as a useful measure of information
leakage.
|
In this work, considering the background dynamics of flat
Friedmann-Lemaitre-Robertson-Walker(FLRW) model of the universe, we investigate
a non-canonical scalar field model as a dark energy candidate interacting
with pressureless dust as dark matter, from the viewpoint of dynamical systems
analysis. Two phenomenological interactions are chosen: one depends on the
Hubble parameter $H$; the other is local, independent of the Hubble parameter.
In Interaction model 1, an inverse-square form of the potential as well as of
the coupling function associated with the scalar field is chosen, and a
two-dimensional autonomous system is obtained. From the 2D autonomous system, we
obtain scalar field dominated solutions representing late time accelerated
evolution of the universe. Late time scaling solutions are also realized by the
accelerated evolution of the universe attracted in quintessence era. Center
Manifold Theory can provide the sufficient conditions on model parameters such
that the de Sitter like solutions can be stable attractor at late time in this
model. In the Interaction model 2, potential as well as coupling function are
considered to evolve exponentially in the scalar field, as a result of which a
four-dimensional autonomous system is obtained. From the analysis of the 4D
system, we obtain non-hyperbolic sets of critical points which are analyzed by
the Center Manifold Theory. In this model, de Sitter like solutions represent
the transient evolution of the universe.
|
This article focuses on different aspects of pedestrian (crowd) modeling and
simulation. The review covers various modeling criteria, such as granularity,
techniques and factors involved in modeling pedestrian behavior, and
different pedestrian simulation methods, with a more detailed look at two
approaches for simulating pedestrian behavior in traffic scenes. Finally,
benefits and drawbacks of different simulation techniques are discussed and
recommendations are made for future research.
|
We investigate the convergence of the Crouzeix-Raviart finite element method
for variational problems with non-autonomous integrands that exhibit
non-standard growth conditions. While conforming schemes fail due to the
Lavrentiev gap phenomenon, we prove that the solution of the Crouzeix-Raviart
scheme converges to a global minimiser. Numerical experiments illustrate the
performance of the scheme and give additional analytical insights.
|
We present Phrase-Verified Voting, a voter-verifiable remote voting system
assembled from commercial off-the-shelf software for small private elections.
The system is transparent and enables each voter to verify that the tally
includes their ballot selection without requiring any understanding of
cryptography. This paper describes the system and its use in fall 2020, to vote
remotely in promotion committees in a university. Each voter fills out a form
in the cloud with their vote V (YES, NO, ABSTAIN) and a passphrase P (two words
entered by the voter). The system generates a verification table of the (P,V)
pairs and a tally of the votes, organized to help visualize how the votes add
up. After the polls close, each voter verifies that this table lists their
(P,V) pair and that the tally is computed correctly. The system is especially
appropriate for any small group making sensitive decisions. Because the system
would not prevent a coercer from demanding that their victim use a specified
passphrase, it is not designed for applications where such malfeasance would be
likely or go undetected. Results from 43 voters show that the system was
well-accepted, performed effectively for its intended purpose, and introduced
users to the concept of voter-verified elections. Compared to the commonly-used
alternatives of paper ballots or voting by email, voters found the system
easier to use and felt that it provided greater privacy and outcome integrity.
|
In this paper we provide results on using integer programming (IP) and
constraint programming (CP) to search for sets of mutually orthogonal latin
squares (MOLS). Both programming paradigms have previously been used
successfully to search for MOLS, but IP and CP solvers have improved
significantly in recent years, and data on how modern IP and CP solvers perform on
the MOLS problem is lacking. Using state-of-the-art solvers as black boxes we
were able to quickly find pairs of MOLS (or prove their nonexistence) in all
orders up to ten. Moreover, we improve the effectiveness of the solvers by
formulating an extended symmetry breaking method as well as an improvement to
the straightforward CP encoding. We also analyze the effectiveness of using CP
and IP solvers to search for triples of MOLS, compare our timings to those
which have been previously published, and estimate the running time of using
this approach to resolve the longstanding open problem of determining the
existence of a triple of MOLS of order ten.
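The orthogonality property that the IP and CP models encode can be checked directly on candidate squares. A small sketch, with a hand-picked order-3 pair as the example (the solvers of course search for such squares rather than verify given ones):

```python
def is_latin(sq):
    """Every row and every column must be a permutation of 0..n-1."""
    n, sym = len(sq), set(range(len(sq)))
    return (all(set(row) == sym for row in sq) and
            all({sq[r][c] for r in range(n)} == sym for c in range(n)))

def are_orthogonal(a, b):
    """Two Latin squares are orthogonal iff superimposing them yields
    every ordered symbol pair exactly once."""
    n = len(a)
    pairs = {(a[r][c], b[r][c]) for r in range(n) for c in range(n)}
    return len(pairs) == n * n

# a hand-picked orthogonal pair of order 3
A = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
B = [[0, 1, 2], [2, 0, 1], [1, 2, 0]]
ok = is_latin(A) and is_latin(B) and are_orthogonal(A, B)
```

In the IP and CP encodings, the all-pairs condition in `are_orthogonal` becomes a family of constraints over the superimposed squares, which is also where symmetry breaking attacks the search space.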
|
We theoretically investigate the anomalous Hall effect (AHE) that requires
neither a net magnetization nor an external magnetic field in collinear
antiferromagnets. We show that such an emergent AHE is essentially caused by a
ferroic ordering of the anisotropic magnetic dipole (AMD), which provides an
effective coupling between ordered magnetic moments and electronic motion in
the crystal. We demonstrate that the AMD is naturally induced by the
antiferromagnetic ordering, in which the magnetic moments have a quadrupole
spatial distribution. In view of the ferroic AMD ordering, we analyze the
behavior of the AHE in the orthorhombic lattice system, where the AHE is
strongly enhanced by the large coupling between the AMD and the spin-orbit
interaction. These findings indicate that the AMD, which is detectable via
x-ray magnetic circular dichroism, can serve as a general descriptor for
investigating ferromagnetism-related physical quantities in antiferromagnets,
including noncollinear ones.
|
Students develop and test simple models of the spread of COVID-19. Microsoft
Excel is used as the modeling platform because it is non-threatening to students
and widely available. Students develop finite-difference models
and implement them in the cells of preformatted spreadsheets following a
guided-inquiry pedagogy that introduces new model parameters in a scaffolded
step-by-step manner. That approach allows students to investigate the
implications of new model parameters in a systematic way. Students fit the
resulting models to reported cases-per-day data for the United States using
least-squares techniques with Excel's Solver. Using their own spreadsheets,
students discover for themselves that the initial exponential growth of
COVID-19 can be explained by a simplified unlimited growth model and by the SIR
model. They also discover that the effects of social distancing can be modeled
using a Gaussian transition function for the infection rate coefficient and
that the summer surge was caused by prematurely relaxing social distancing and
then reimposing stricter social distancing. Students then model the effect of
vaccinations and validate the resulting SIRV model by showing that it
successfully predicts the reported cases-per-day data from Thanksgiving through
February 14, 2021. The same SIRV model is then extended and successfully fits
the fourth peak up to June 1, 2021, caused by further relaxation of social
distancing measures. Finally, students extend the model up to the present day
and successfully account for the appearance of the delta variant of SARS-CoV-2.
The fitted model also predicts that the delta-variant peak will be
comparatively short, and the cases-per-day data should begin to fall off in
early September 2021 - counter to current expectations. This case study would
make an excellent capstone experience for students interested in scientific
modeling.
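A minimal sketch of the kind of finite-difference SIR update that students implement in spreadsheet cells, written here in Python; the parameter values are illustrative, not the fitted values from the study:

```python
def sir_step(S, I, R, beta, gamma, N, dt=1.0):
    """One forward-Euler (finite-difference) update of the SIR model."""
    new_infections = beta * S * I / N * dt   # S -> I
    new_recoveries = gamma * I * dt          # I -> R
    return (S - new_infections,
            I + new_infections - new_recoveries,
            R + new_recoveries)

# Illustrative parameters, not fitted values: US-scale population,
# R0 = beta/gamma = 3, one infected seed of 100 cases.
N = 330e6
S, I, R = N - 100.0, 100.0, 0.0
beta, gamma = 0.3, 0.1
history = []
for day in range(200):
    S, I, R = sir_step(S, I, R, beta, gamma, N)
    history.append(I)
```

Because each step only moves individuals between compartments, S + I + R stays equal to N, a useful sanity check students can verify directly in their spreadsheets.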
|
We report the discovery of diffuse extended Ly-alpha emission from redshift
3.1 to 4.5, tracing cosmic web filaments on scales of 2.5-4 comoving Mpc. These
structures have been observed in overdensities of Ly-alpha emitters in the MUSE
Extremely Deep Field, a 140 hour deep MUSE observation located in the Hubble
Ultra Deep Field. Among the 22 overdense regions identified, 5 are likely to
harbor very extended Ly-alpha emission at high significance with an average
surface brightness of $\mathrm{5 \times 10^{-20} erg s^{-1} cm^{-2}
arcsec^{-2}}$. Remarkably, 70% of the total Ly-alpha luminosity from these
filaments comes from beyond the circumgalactic medium of any identified
Ly-alpha emitters. Fluorescent Ly-alpha emission powered by the cosmic UV
background can account for less than 34% of this emission at z$\approx$3
and for no more than 10% at higher redshift. We find that the bulk of this
diffuse emission can be reproduced by the unresolved Ly-alpha emission of a
large population of ultra low luminosity Ly-alpha emitters ($\mathrm{<10^{40}
erg s^{-1}}$), provided that the faint end of the Ly-alpha luminosity function
is steep ($\alpha \lessapprox -1.8$), that it extends down to luminosities lower
than $\mathrm{10^{38} - 10^{37} erg s^{-1}}$, and that the clustering of these
Ly-alpha emitters is significant (filling factor $< 1/6$). If these Ly-alpha
emitters are powered by star formation, then this implies their luminosity
function needs to extend down to star formation rates $\mathrm{< 10^{-4}
M_\odot yr^{-1}}$. These observations provide the first detection of the cosmic
web in Ly-alpha emission in typical filamentary environments and the first
observational clue for the existence of a large population of ultra low
luminosity Ly-alpha emitters at high redshift.
|
Recently, the selection-recombination equation with a single selected site
and an arbitrary number of neutral sites was solved by means of the ancestral
selection-recombination graph. Here, we introduce a more accessible approach,
namely the ancestral initiation graph. The construction is based on a
discretisation of the selection-recombination equation. We apply our method to
systematically explain a long-standing observation concerning the dynamics of
linkage disequilibrium between two neutral loci hitchhiking along with a
selected one. In particular, this clarifies the nontrivial dependence on the
position of the selected site.
|
Searching for novel two-dimensional (2D) materials is crucial for the
development of next-generation technologies such as electronics,
optoelectronics, electrochemistry, and biomedicine. In this work, we designed a
series of 2D materials based on endohedral fullerenes, and revealed that many
of them integrate different functions in a single system, such as
ferroelectricity with large electric dipole moments, multiple magnetic phases
with both strong magnetic anisotropy and high Curie temperature, quantum spin
Hall effect or quantum anomalous Hall effect with robust topologically
protected edge states. We further proposed a new type of topological field-effect
transistor. These findings provide a strategy for using fullerenes as building
blocks to synthesize novel 2D materials that can be easily controlled
with a local electric field.
|
Large-scale tissue deformation during biological processes such as
morphogenesis requires cellular rearrangements. The simplest rearrangement in
confluent cellular monolayers involves neighbor exchanges among four cells,
called a T1 transition, in analogy to foams. But unlike foams, cells must
execute a sequence of molecular processes, such as endocytosis of adhesion
molecules, to complete a T1 transition. Such processes could take a long time
compared to other timescales in the tissue. In this work, we incorporate this
idea by augmenting vertex models to require a fixed, finite time for T1
transitions, which we call the "T1 delay time". We study how variations in T1
delay time affect tissue mechanics, by quantifying the relaxation time of
tissues in the presence of T1 delays and comparing that to the cell-shape based
timescale that characterizes fluidity in the absence of any T1 delays. We show
that the molecular-scale T1 delay timescale dominates over the cell shape-scale
collective response timescale when the T1 delay time is the larger of the two.
We extend this analysis to tissues that become anisotropic under convergent
extension, finding similar results. Moreover, we find that increasing the T1
delay time increases the percentage of higher-fold coordinated vertices and
rosettes, and decreases the overall number of successful T1s, contributing to a
more elastic-like -- and less fluid-like -- tissue response. Our work suggests
that molecular mechanisms that act as a brake on T1 transitions could stiffen
global tissue mechanics and enhance rosette formation during morphogenesis.
|
We attempt to reveal the geometry that emerges from the entanglement structure
of a general $N$-party pure quantum many-body state by representing the
entanglement entropies corresponding to all $2^N$ bipartitions of the state by
means of a generalized adjacency matrix. We show that this representation is
often exact and may lead to a geometry very different from that suggested by
the Hamiltonian.
Moreover, in all the cases, it yields a natural entanglement contour, similar
to previous proposals. The formalism is extended for conformal invariant
systems, and a more insightful interpretation of entanglement is presented as a
flow among different parts of the system.
|
In audio processing applications, phase retrieval (PR) is often performed
from the magnitude of short-time Fourier transform (STFT) coefficients.
Although PR performance has been observed to depend on the considered STFT
parameters and audio data, the extent of this dependence has not been
systematically evaluated yet. To address this, we studied the performance of
three PR algorithms for various types of audio content and various STFT
parameters such as redundancy, time-frequency ratio, and the type of window.
The quality of PR was studied in terms of objective difference grade and
signal-to-noise ratio of the STFT magnitude, to provide auditory- and
signal-based quality assessments. Our results show that PR quality improved
with increasing redundancy, with a strong relevance of the time-frequency
ratio. The effect of the audio content was smaller but still observable. The
effect of the window was only significant for one of the PR algorithms.
Interestingly, for a good PR quality, each of the three algorithms required a
different set of parameters, demonstrating the relevance of individual
parameter sets for a fair comparison across PR algorithms. Based on these
results, we developed guidelines for optimizing STFT parameters for a given
application.
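The three PR algorithms are not named here; as one classic example of PR from STFT magnitude, a Griffin-Lim-style iteration can be sketched with a hand-rolled STFT. The Hann window, 50% overlap (redundancy 2), and iteration count below are illustrative assumptions, not the study's parameter grid:

```python
import numpy as np

def stft(x, win, hop):
    """STFT via windowed real FFT frames."""
    n = len(win)
    starts = range(0, len(x) - n + 1, hop)
    return np.array([np.fft.rfft(win * x[i:i + n]) for i in starts])

def istft(S, win, hop, length):
    """Least-squares inverse STFT via windowed overlap-add."""
    n = len(win)
    x, wsum = np.zeros(length), np.zeros(length)
    for k, frame in enumerate(S):
        seg = np.fft.irfft(frame, n=n)
        i = k * hop
        x[i:i + n] += win * seg
        wsum[i:i + n] += win ** 2
    return x / np.maximum(wsum, 1e-12)

def griffin_lim(mag, win, hop, length, iters=100, seed=0):
    """Alternate projections: keep the target magnitude, update the phase."""
    x = np.random.default_rng(seed).standard_normal(length)
    for _ in range(iters):
        S = stft(x, win, hop)
        x = istft(mag * np.exp(1j * np.angle(S)), win, hop, length)
    return x

n, hop = 256, 128                      # Hann window, 50% overlap
win = np.hanning(n)
target = np.sin(2 * np.pi * 0.05 * np.arange(4096))
mag = np.abs(stft(target, win, hop))
rec = griffin_lim(mag, win, hop, len(target))
err = np.linalg.norm(np.abs(stft(rec, win, hop)) - mag) / np.linalg.norm(mag)
```

The relative magnitude error `err` measures spectral consistency (akin to the SNR of the STFT magnitude used in the evaluation); increasing the redundancy generally improves it, consistent with the reported results.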
|
Motion of a test particle plays an important role in understanding the
properties of a spacetime. As a new type of strong-gravity system, boson
stars could mimic black holes located at the centers of galaxies. Studying the
motion of a test particle in the spacetime of a rotating boson star can reveal
astrophysically observable effects if a boson star is located at the
center of a galaxy. In this paper, we investigate the timelike geodesics of a
test particle in the background of a rotating boson star with angular number
$m=(1, 2, 3)$. As the angular number and frequency change, a rotating
boson star passes from a slowly rotating state to a highly relativistic,
rapidly rotating state; the corresponding Lense-Thirring effect becomes
increasingly significant and deserves detailed study. By
solving the four-velocity of a test particle and integrating the geodesics, we
investigate the bound orbits with zero and nonzero angular momentum. We find
that a test particle can stay much longer in the central region of a boson
star as the star passes from a slowly rotating state to a highly relativistic,
rapidly rotating state. Such orbits are quite different from those in a Kerr
black hole, and their observable effects will provide a way to probe the
astrophysical compact objects in the Galactic center.
|
As the medical usage of computed tomography (CT) continues to grow, the
radiation dose should remain at a low level to reduce the health risks.
Therefore, there is an increasing need for algorithms that can reconstruct
high-quality images from low-dose scans. In this regard, most of the recent
studies have focused on iterative reconstruction algorithms, and little
attention has been paid to restoration of the projection measurements, i.e.,
the sinogram. In this paper, we propose a novel sinogram interpolation
algorithm. The proposed algorithm exploits the self-similarity and smoothness
of the sinogram. Sinogram self-similarity is modeled in terms of the similarity
of small blocks extracted from stacked projections. The smoothness is modeled
via second-order total variation. Experiments with simulated and real CT data
show that sinogram interpolation with the proposed algorithm leads to a
substantial improvement in the quality of the reconstructed image, especially
on low-dose scans. The proposed method can result in a significant reduction in
the number of projection measurements. This will reduce the radiation dose and
the amount of data that needs to be stored or transmitted if the
reconstruction is to be performed at a remote site.
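The second-order total-variation smoothness term can be illustrated with one simple discretisation — a sketch of the penalty alone, not the paper's full interpolation algorithm:

```python
import numpy as np

def second_order_tv(img):
    """Second-order total variation: sum of absolute second differences
    along both axes of a 2-D array (one simple discretisation)."""
    d2_rows = img[2:, :] - 2 * img[1:-1, :] + img[:-2, :]
    d2_cols = img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]
    return np.abs(d2_rows).sum() + np.abs(d2_cols).sum()

# A linear ramp has vanishing second differences, so its penalty is zero,
# which is why second-order TV favors smooth (piecewise-linear) sinograms
# without penalizing the gradual angular variation they naturally contain.
ramp = np.outer(np.arange(8.0), np.ones(8))
noisy = ramp + np.random.default_rng(0).standard_normal(ramp.shape)
```

Minimizing such a penalty, subject to agreement with the measured projections, is one way the smoothness prior can be used to fill in missing sinogram rows.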
|
To address fermion mass hierarchy and flavor mixings in the quark and lepton
sectors, a minimal flavor structure without any redundant parameters beyond
phenomenological observables is proposed via decomposition of the Standard
Model Yukawa mass matrix into a bi-unitary form. After reviewing the roles and
parameterization of the factorized matrix ${\bf M}_0^f$ and ${\bf F}_L^f$ in
fermion masses and mixings, we generalize the mechanism to up- and down-type
fermions to unify them into a universal quark/lepton Yukawa interaction. In the
same way, a unified description of the quark and lepton Yukawa interactions is
also proposed, which presents a picture similar to the unification of gauge
interactions.
|
The multi-link operation (MLO) is a new feature proposed to be part of the
IEEE 802.11be Extremely High Throughput (EHT) amendment. Through MLO, access
points and stations will be provided with the capabilities to transmit and
receive data from the same traffic flow over multiple radio interfaces.
However, the question of how traffic flows should be distributed over the
different interfaces to maximize WLAN performance remains unresolved. To
that end, we evaluate in this article different traffic allocation policies,
under a wide variety of scenarios and traffic loads, in order to shed some
light on that question. The obtained results confirm that congestion-aware
policies outperform static ones. However, and more importantly, the results
also reveal that traffic flows become highly vulnerable to the activity of
neighboring networks when they are distributed across multiple links. As a
result, the best performance is obtained when a newly arriving flow is simply
assigned entirely to the emptiest interface.
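The winning "assign the whole flow to the emptiest interface" policy reduces to a one-line argmin over per-link loads; the link names and load units below are hypothetical:

```python
def assign_flow(loads, flow_rate):
    """Assign an arriving flow entirely to the least-loaded link,
    the congestion-aware policy the evaluation favours.
    `loads` maps link id -> current aggregate load (e.g., Mbps)."""
    best = min(loads, key=loads.get)   # emptiest interface
    loads[best] += flow_rate           # the flow is not split across links
    return best

# Hypothetical tri-band MLO device with three radio interfaces
links = {"2.4GHz": 40.0, "5GHz": 10.0, "6GHz": 25.0}
chosen = assign_flow(links, 30.0)      # lands on the 5 GHz link
```

Keeping each flow on a single link, rather than splitting it, avoids exposing the flow to the channel-access activity of neighboring networks on every band at once, which is the vulnerability the evaluation highlights.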
|