We consider a family of character sums as multiplicative analogues of
Kloosterman sums. Using Gauss sums, Jacobi sums and Deligne's bound for
hyper-Kloosterman sums, we establish asymptotic formulae for any real
(positive) moments of the above character sum as the character runs over all
non-trivial multiplicative characters mod $p$. These also allow us to obtain
asymptotic formulae for moments of such character sums weighted by special
$L$-values (at $1/2$ and $1$).
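For reference, the additive objects that the family above is modeled on are the classical Kloosterman sum and Deligne's hyper-Kloosterman sums; the definitions below are standard (the paper's specific multiplicative family is not restated in the abstract):

```latex
% Classical Kloosterman sum modulo a prime p, with e(t) := e^{2\pi i t}:
S(a,b;p) = \sum_{x=1}^{p-1} e\!\left(\frac{ax + b\overline{x}}{p}\right),
\qquad x\overline{x} \equiv 1 \pmod{p};
% hyper-Kloosterman sum in n variables, with Deligne's bound:
\mathrm{Kl}_n(a;p) = \sum_{x_1 \cdots x_n \equiv a \ (\mathrm{mod}\ p)}
e\!\left(\frac{x_1 + \cdots + x_n}{p}\right),
\qquad \left|\mathrm{Kl}_n(a;p)\right| \le n\, p^{(n-1)/2}.
```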
|
One of the challenges in optimizing high-dimensional problems is finding
solutions that are as close as possible to the global optima. A common obstacle
in this regard is the curse of dimensionality, in which a large-scale feature
space generates more parameters that need to be estimated. Heuristic
algorithms, such as the Gravitational Search Algorithm, are among the tools
proposed for optimizing large-scale problems, but on their own they cannot
solve such problems adequately. This paper proposes a novel method for
optimizing large-scale problems by improving the gravitational search
algorithm's performance. To increase the efficiency of the gravitational search
algorithm on large-scale problems, the proposed method combines this algorithm
with cooperative-coevolution methods. We evaluate the performance of the
proposed algorithm in two ways. First, the proposed algorithm is compared with
the original gravitational search algorithm; second, it is compared with some
of the most significant research in this field. In the first comparison, our
method improves the performance of the original gravitational search algorithm
on large-scale problems; in the second, the results indicate more favorable
performance on some benchmark functions compared with other cooperative
methods.
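For context, the baseline being improved can be summarized in a few lines. The sketch below implements one standard Gravitational Search Algorithm loop (following Rashedi et al., 2009) in NumPy; the parameter values, bounds, and the sphere test function are illustrative assumptions, and the cooperative-coevolution extension proposed in the paper is not shown.

```python
# Minimal sketch of the standard Gravitational Search Algorithm (GSA).
import numpy as np

def gsa_minimize(f, dim=30, n_agents=20, iters=500, G0=100.0, alpha=20.0,
                 lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_agents, dim))
    V = np.zeros_like(X)
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        best, worst = fit.min(), fit.max()
        # Map fitness to masses (minimization): better agents get larger mass.
        m = (fit - worst) / (best - worst + 1e-12)
        M = m / (m.sum() + 1e-12)
        G = G0 * np.exp(-alpha * t / iters)          # decaying gravitational constant
        A = np.zeros_like(X)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                R = np.linalg.norm(X[i] - X[j])
                # Randomly weighted pairwise attraction; the agent's own mass
                # cancels when converting total force to acceleration.
                A[i] += rng.random() * G * M[j] * (X[j] - X[i]) / (R + 1e-12)
        V = rng.random((n_agents, dim)) * V + A      # stochastic velocity update
        X = np.clip(X + V, lo, hi)
    return X[np.argmin(np.apply_along_axis(f, 1, X))]

x_star = gsa_minimize(lambda x: np.sum(x**2))        # sphere test function
```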
|
The proportional odds cumulative logit model (POCLM) is a standard regression
model for an ordinal response. Ordinality of predictors can be incorporated by
monotonicity constraints for the corresponding parameters. It is shown that
estimators defined by optimization, such as maximum likelihood estimators, for
an unconstrained model and for parameters in the interior set of the parameter
space of a constrained model are asymptotically equivalent. This is used in
order to derive asymptotic confidence regions and tests for the constrained
model, involving simple modifications for finite samples. The finite sample
coverage probability of the confidence regions is investigated by simulation.
Tests concern the effect of individual variables, monotonicity, and a specified
monotonicity direction. The methodology is applied to real data related to the
assessment of school performance.
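For concreteness, the model and the constraints discussed above can be written as follows; this is the standard POCLM formulation (sign conventions vary across texts), with monotonicity imposed on the dummy-coded coefficients of an ordinal predictor:

```latex
% Proportional odds cumulative logit model for an ordinal response Y
% with categories 1, ..., k and covariate vector x:
\log \frac{P(Y \le j \mid x)}{P(Y > j \mid x)} = \alpha_j - x^\top \beta,
\qquad j = 1, \dots, k-1, \qquad \alpha_1 \le \cdots \le \alpha_{k-1}.
% Ordinality of a predictor with dummy-coded levels 1, ..., m can be
% incorporated via monotonicity constraints on its coefficients:
\beta_1 \le \beta_2 \le \cdots \le \beta_m
\quad \text{(or the reverse direction).}
```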
|
The objective of this work was to determine how much and how often the people
of Arandas in the Altos de Jalisco region use disposable cups, and then to
gauge their willingness to use edible cups made with natural gelatin. In this
regard, it is worth noting that such cups can not only be nutritious for those
who consume them (since gelatin is a fortifying nutrient made from the skin and
bones of pigs and cows), but they can also degrade in a few days or be eaten by
animals. To collect the information, a six-question survey was administered to
31 people by telephone and to another 345 in person (in both cases, respondents
were young people and adults). The results show that the residents of this town
make considerable use of plastic cups at the various events that take place
each week, which become more numerous during the patron saint festivities or at
the end of the year. Even so, these people would be willing to change these
habits, although doing so requires measures that do not harm the companies in
the area, which work mainly with plastics and generate a high percentage of
local jobs.
|
Many mathematical optimization algorithms fail to sufficiently explore the
solution space of high-dimensional nonlinear optimization problems due to the
curse of dimensionality. This paper proposes generative models as a complement
to optimization algorithms to improve performance in problems with high
dimensionality. To demonstrate this method, a conditional generative
adversarial network (C-GAN) is used to augment the solutions produced by a
genetic algorithm (GA) for a 311-dimensional nonconvex multi-objective
mixed-integer nonlinear optimization problem. The C-GAN, composed of two networks with
three fully connected hidden layers, is trained on solutions generated by GA,
and then given sets of desired labels (i.e., objective function values),
generates complementary solutions corresponding to those labels. Six
experiments are conducted to evaluate the capabilities of the proposed method.
The generated complementary solutions are compared to the original solutions in
terms of optimality and diversity. The generative model generates solutions
with objective functions up to 100% better, and with hypervolumes up to 100%
higher, than the original solutions. These findings show that a C-GAN with even
a simple training approach and architecture can, with a much shorter runtime,
substantially improve the diversity and optimality of solutions found by an
optimization algorithm for a high-dimensional nonlinear optimization problem.
[Link to GitHub repository: https://github.com/PouyaREZ/GAN_GA]
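A minimal PyTorch sketch of the architecture described above: a generator and a discriminator, each with three fully connected hidden layers, both conditioned on objective-function values. The layer widths, label dimension, and noise dimension are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

SOL_DIM, LABEL_DIM, NOISE_DIM, HIDDEN = 311, 2, 64, 256  # illustrative sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + LABEL_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, SOL_DIM),
        )
    def forward(self, z, labels):
        # Concatenate noise with desired objective values to steer generation.
        return self.net(torch.cat([z, labels], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SOL_DIM + LABEL_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, 1), nn.Sigmoid(),
        )
    def forward(self, x, labels):
        return self.net(torch.cat([x, labels], dim=1))

# After training on GA solutions, complementary candidates are sampled by
# feeding desired labels: G(z, desired_objectives).
```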
|
Quantum many-body dynamics generically results in increasing entanglement
that eventually leads to thermalization of local observables. This makes the
exact description of the dynamics complex despite the apparent simplicity of
(high-temperature) thermal states. For accurate but approximate simulations one
needs a way to keep track of the essential (quantum) information while
discarding the inessential. To this end, we first introduce the concept of the information
lattice, which supplements the physical spatial lattice with an additional
dimension, and on which a local Hamiltonian gives rise to a well-defined, locally
conserved von Neumann information current. This provides a convenient and
insightful way of capturing the flow, through time and space, of information
during quantum time evolution, and gives a distinct signature of when local
degrees of freedom decouple from long-range entanglement. As an example, we
describe such decoupling of local degrees of freedom for the mixed field
transverse Ising model. Building on this, we secondly construct algorithms to
time-evolve sets of local density matrices without any reference to a global
state. With the notion of information currents, we can motivate algorithms
based on the intuition that information, for statistical reasons, flows from
small to large scales. Using this guiding principle, we construct an algorithm
that, at worst, shows two-digit convergence in time-evolutions up to very late
times for diffusion processes governed by the mixed field transverse Ising
Hamiltonian.
While we focus on dynamics in 1D with nearest-neighbor Hamiltonians, the
algorithms do not essentially rely on these assumptions and can in principle be
generalized to higher dimensions and more complicated Hamiltonians.
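The elementary quantity underlying the information lattice is the von Neumann entropy of a reduced density matrix; the snippet below is a generic sketch of that computation (not the paper's algorithm):

```python
import numpy as np

def von_neumann_entropy(rho, base=2):
    """S(rho) = -Tr[rho log rho], computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]            # discard numerical zeros
    return float(-np.sum(evals * np.log(evals)) / np.log(base))

# Example: one qubit of a Bell pair is locally maximally mixed and thus
# carries one full bit of entropy.
rho_half_bell = np.eye(2) / 2
print(von_neumann_entropy(rho_half_bell))   # -> 1.0
```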
|
There is a growing need for ranking universities, departments, research
groups, and individual scholars. Usually, the scientific community measures the
scientific merits of the researchers by using a variety of indicators that take
into account both the productivity of scholars and the impact of their
publications. We propose the t-index, a new indicator to measure the
scientific merits of individual researchers. The proposed t-index takes into
account the number of citations, the number of coauthors on every published
paper, and career duration. The t-index makes possible the comparison of
researchers at various stages of their careers. We also use Data Envelopment
Analysis (DEA) to measure the scientific merits of the individual researchers
within the observed group of researchers. We chose 15
scholars in the scientific area of transportation engineering and measured
their t-index values, as well as DEA scores.
|
We generalize Bonahon-Wong's $\mathrm{SL}_2$-quantum trace map to the setting
of $\mathrm{SL}_3$. More precisely, given a non-zero complex parameter $q=e^{2
\pi i \hbar}$, we associate to each isotopy class of framed oriented links $K$
in a thickened punctured surface $\mathfrak{S} \times (0, 1)$ a Laurent
polynomial $\mathrm{Tr}_\lambda^q(K) = \mathrm{Tr}_\lambda^q(K)(X_i^q)$ in
$q$-deformations $X_i^q$ of the Fock-Goncharov $\mathcal{X}$-coordinates for
higher Teichm\"{u}ller space. This construction depends on a choice $\lambda$
of ideal triangulation of the surface $\mathfrak{S}$. Along the way, we propose
a definition for an $\mathrm{SL}_n$-version of this invariant.
|
Face recognition systems are extremely vulnerable to morphing attacks, in
which a morphed facial reference image can be successfully verified as two or
more distinct identities. In this paper, we propose a morph attack detection
algorithm that leverages an undecimated 2D Discrete Wavelet Transform (DWT) for
identifying morphed face images. The core of our framework is that artifacts
resulting from the morphing process that are not discernible in the image
domain can be more easily identified in the spatial frequency domain. A
discriminative wavelet sub-band can accentuate the disparity between a real and
a morphed image. To this end, multi-level DWT is applied to all images,
yielding 48 mid and high-frequency sub-bands each. The entropy distributions
for each sub-band are calculated separately for both bona fide and morph
images. For some of the sub-bands, there is a marked difference between the
entropy of the sub-band in a bona fide image and the identical sub-band's
entropy in a morphed image. Consequently, we employ Kullback-Leibler Divergence
(KLD) to exploit these differences and isolate the sub-bands that are the most
discriminative. We measure how discriminative a sub-band is by its KLD value
and the 22 sub-bands with the highest KLD values are chosen for network
training. Then, we train a deep Siamese neural network using these 22 selected
sub-bands for differential morph attack detection. We examine the efficacy of
discriminative wavelet sub-bands for morph attack detection and show that a
deep neural network trained on these sub-bands can accurately identify morph
imagery.
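A minimal sketch of the sub-band selection stage described above: an undecimated 2D DWT per image, an entropy value per detail sub-band, and a Kullback-Leibler ranking across the bona fide and morph classes. The wavelet, the number of levels, and the histogram binning are illustrative assumptions (the paper uses 48 mid- and high-frequency sub-bands and keeps the top 22).

```python
import numpy as np
import pywt
from scipy.stats import entropy as kl_divergence

def subband_entropies(img, wavelet="db4", levels=3):
    """Shannon entropy of each detail sub-band of an undecimated 2D DWT.

    Image sides must be divisible by 2**levels for pywt.swt2.
    """
    coeffs = pywt.swt2(img, wavelet, level=levels)   # stationary (undecimated) DWT
    ents = []
    for _approx, details in coeffs:
        for band in details:                         # horizontal, vertical, diagonal
            hist, _ = np.histogram(band, bins=256)
            p = hist / hist.sum()
            p = p[p > 0]
            ents.append(float(-np.sum(p * np.log2(p))))
    return np.array(ents)

def rank_subbands(bona_fide_imgs, morph_imgs, bins=64):
    """Rank sub-bands by KLD between per-class entropy distributions."""
    E_bf = np.array([subband_entropies(im) for im in bona_fide_imgs])
    E_mo = np.array([subband_entropies(im) for im in morph_imgs])
    scores = []
    for b in range(E_bf.shape[1]):
        lo = min(E_bf[:, b].min(), E_mo[:, b].min())
        hi = max(E_bf[:, b].max(), E_mo[:, b].max())
        p, _ = np.histogram(E_bf[:, b], bins=bins, range=(lo, hi))
        q, _ = np.histogram(E_mo[:, b], bins=bins, range=(lo, hi))
        scores.append(kl_divergence(p + 1e-9, q + 1e-9))   # KL(p || q)
    return np.argsort(scores)[::-1]                  # most discriminative first
```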
|
People ``understand'' the world via vision, hearing, touch, and also past
experience. Human experience can be learned through normal learning (we call it
explicit knowledge) or subconsciously (we call it implicit knowledge). These
experiences, learned through normal learning or subconsciously, are encoded and
stored in the brain. Using this abundant experience as a huge database, human
beings can effectively process data, even data unseen beforehand. In this
paper, we propose a unified network to encode implicit knowledge and explicit
knowledge together, just as the human brain can learn knowledge from normal
learning as well as from subconscious learning. The unified network can
generate a unified representation to simultaneously serve various tasks. We can
perform kernel space alignment, prediction refinement, and multi-task learning
in a convolutional neural network. The results demonstrate that when implicit
knowledge is introduced into the neural network, it benefits the performance of
all tasks. We further analyze the implicit representation learnt from the
proposed unified network, and it shows great capability in capturing the
physical meaning of different tasks. The source code of this work is at:
https://github.com/WongKinYiu/yolor.
|
Water shapes and defines the properties of biological systems. Therefore,
understanding the nature of the mutual interaction between water and biological
systems is of primary importance for a proper assessment of biological activity
and the development of new drugs and vaccines. A handy way to characterize the
interactions between biological systems and water is to analyze their impact on
water density and dynamics in the proximity of the interfaces. It is well
established that water bulk density and dynamical properties are recovered at
distances in the order of $\sim1$~nm from the surface of biological systems.
Such evidence led to the definition of \emph{hydration} water as the thin layer
of water covering the surface of biological systems and affecting, even
defining, their properties and functionality. Here, we review some of our latest
contributions showing that phospholipid membranes affect the structural
properties and the hydrogen bond network of water at greater distances than the
commonly evoked $\sim1$~nm from the membrane surface. Our results imply that
the concept of hydration water should be revised or extended, and pave the way
to a deeper understanding of the mutual interactions between water and
biological systems.
|
The Whitney partition is a very important concept in modern analysis. We
discuss here a quasiconformal version of the Whitney partition that can be
useful for Sobolev spaces.
|
The possibility of carrying out a meaningful forensics analysis on printed
and scanned images plays a major role in many applications. First of all,
printed documents are often associated with criminal activities, such as
terrorist plans, child pornography pictures, and even fake packages.
Additionally, printing and scanning can be used to hide the traces of image
manipulation or the synthetic nature of images, since the artifacts commonly
found in manipulated and synthetic images are gone after the images are printed
and scanned. A problem hindering research in this area is the lack of large
scale reference datasets to be used for algorithm development and benchmarking.
Motivated by this issue, we present a new dataset composed of a large number of
synthetic and natural printed face images. To highlight the difficulties
associated with the analysis of the images of the dataset, we carried out an
extensive set of experiments comparing several printer attribution methods. We
also verified that state-of-the-art methods to distinguish natural and
synthetic face images fail when applied to printed and scanned images. We
envision that the availability of the new dataset and the preliminary
experiments we carried out will motivate and facilitate further research in
this area.
|
There has recently been much activity within the Kardar-Parisi-Zhang
universality class spurred by the construction of the canonical limiting
object, the parabolic Airy sheet $\mathcal{S}:\mathbb{R}^2\to\mathbb{R}$
[arXiv:1812.00309]. The parabolic Airy sheet provides a coupling of parabolic
Airy$_2$ processes -- a universal limiting geodesic weight profile in planar
last passage percolation models -- and a natural goal is to understand this
coupling. Geodesic geometry suggests that the difference of two parabolic
Airy$_2$ processes, i.e., a difference profile, encodes important structural
information. This difference profile $\mathcal{D}$, given by
$\mathbb{R}\to\mathbb{R}:x\mapsto \mathcal{S}(1,x)-\mathcal{S}(-1,x)$, was
first studied by Basu, Ganguly, and Hammond [arXiv:1904.01717], who showed that
it is monotone and almost everywhere constant, with its points of non-constancy
forming a set of Hausdorff dimension $1/2$. Noticing that this is also the
Hausdorff dimension of the zero set of Brownian motion, we adopt a different
approach. Establishing previously inaccessible fractal structure of
$\mathcal{D}$, we prove, on a global scale, that $\mathcal{D}$ is absolutely
continuous on compact sets with respect to Brownian local time (of rate four)
in the sense of increments, which also yields the main result of [arXiv:1904.01717] as a
simple corollary. Further, on a local scale, we explicitly obtain Brownian
local time of rate four as a local limit of $\mathcal{D}$ at a point of
increase, picked by a number of methods, including at a typical point sampled
according to the distribution function $\mathcal{D}$. Our arguments rely on the
representation of $\mathcal{S}$ in terms of a last passage problem through the
parabolic Airy line ensemble and an understanding of geodesic geometry at
deterministic and random times.
|
This paper is about solving polynomial systems. It first recalls how to do
that efficiently with a very high probability of correctness by reconstructing
a rational univariate representation (rur) using Groebner revlex computation,
Berlekamp-Massey algorithm and Hankel linear system solving modulo several
primes in parallel. Then it introduces a new method (theorem \ref{prop:check})
for rur certification that is effective for most polynomial systems. These
algorithms are implemented in Giac
(https://www-fourier.univ-grenoble-alpes.fr/~parisse/giac.html) since version
1.7.0-13 (or 1.7.0-17 for certification); as of July 2021 it has leading
performance on multiple CPUs, at least among open-source software.
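As a toy illustration of the Groebner stage (in SymPy rather than Giac, and without the modular Berlekamp-Massey/Hankel machinery or the certification theorem):

```python
from sympy import symbols, groebner

x, y = symbols("x y")
system = [x**2 + y**2 - 5, x*y - 2]            # zero-dimensional toy system
G = groebner(system, x, y, order="grevlex")    # revlex basis, as in the paper
print(G)
# A lex basis exposes a univariate eliminant, the starting point for a
# rational univariate representation (rur):
G_lex = groebner(system, x, y, order="lex")
print(G_lex)   # contains the univariate polynomial y**4 - 5*y**2 + 4
```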
|
The Burling sequence is a sequence of triangle-free graphs of increasing
chromatic number. Any graph which is an induced subgraph of a graph in this
sequence is called a Burling graph. These graphs have attracted some attention
because they have geometric representations and because they provide
counter-examples to several conjectures about bounding the chromatic number in
classes of graphs.
We recall an equivalent definition of Burling graphs from the first part of
this work: the graphs derived from a tree. We then give several structural
properties of derived graphs.
|
In this paper, we try to create an effective mathematical model for the
well-known slayer exciter transformer circuit. We aim to analyze various aspects of
the slayer exciter circuit, by using physical and computational methods. We use
a computer simulation for data collection of various parameters pertaining to
the circuit. Using this data, we generate plots for various components and
parameters. We also derive an approximate equation to maximize the secondary
output voltage generated by the circuit. We also discuss a possible method to
construct such a circuit using low cost materials.
|
The main challenge of dynamic texture synthesis lies in how to maintain
spatial and temporal consistency in synthesized videos. The major drawback of
existing dynamic texture synthesis models comes from poor treatment of the
long-range texture correlation and motion information. To address this problem,
we incorporate a new loss term, called the Shifted Gram loss, to capture the
structural and long-range correlation of the reference texture video.
Furthermore, we introduce a frame sampling strategy to exploit long-period
motion across multiple frames. With these two new techniques, the application
scope of existing texture synthesis models can be extended. That is, they can
synthesize not only homogeneous but also structured dynamic texture patterns.
Thorough experimental results are provided to demonstrate that our proposed
dynamic texture synthesis model offers state-of-the-art visual performance.
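A sketch of one plausible reading of the Shifted Gram loss in PyTorch: in addition to matching plain Gram statistics, it matches cross-correlations between features and spatially shifted copies of themselves, capturing long-range structure. The shift offsets are illustrative assumptions.

```python
import torch

def gram(f, g):
    """Cross-Gram matrix of two (C, H, W) feature maps, flattened spatially."""
    C, H, W = f.shape
    return (f.reshape(C, -1) @ g.reshape(C, -1).T) / (C * H * W)

def shifted_gram_loss(feat_syn, feat_ref, shifts=((0, 8), (8, 0))):
    """Match plain and shifted Gram statistics of synthesized vs. reference."""
    loss = torch.mean((gram(feat_syn, feat_syn) - gram(feat_ref, feat_ref))**2)
    for dy, dx in shifts:
        s = lambda f: torch.roll(f, shifts=(dy, dx), dims=(1, 2))
        loss = loss + torch.mean(
            (gram(feat_syn, s(feat_syn)) - gram(feat_ref, s(feat_ref)))**2)
    return loss
```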
|
The QAnon conspiracy posits that Satan-worshiping Democrats operate a covert
child sex-trafficking operation, which Donald Trump is destined to expose and
annihilate. Emblematic of the ease with which political misconceptions can
spread through social media, QAnon originated in late 2017 and rapidly grew to
shape the political beliefs of millions. To illuminate the process by which a
conspiracy theory spreads, we report two computational studies examining the
social network structure and semantic content of tweets produced by users
central to the early QAnon network on Twitter. Using data mined in the summer
of 2018, we examined over 800,000 tweets about QAnon made by about 100,000
users. The majority of users disseminated rather than produced information,
serving to create an online echo chamber. Users appeared to hold a simplistic
mental model in which political events are viewed as a struggle between
antithetical forces, both observed and unobserved, of Good and Evil.
|
This paper is concerned with a model for the dynamics of a single species in
a one-dimensional heterogeneous environment. The environment consists of two
kinds of patches, which are periodically alternately arranged along the spatial
axis. We first establish the well-posedness for the Cauchy problem. Next, we
give existence and uniqueness results for the positive steady state and we
analyze the long-time behavior of the solutions to the evolution problem.
Afterwards, based on dynamical systems methods, we investigate the spreading
properties and the existence of pulsating traveling waves in the positive and
negative directions. It is shown that the asymptotic spreading speed, c * ,
exists and coincides with the minimal wave speed of pulsating traveling waves
in positive and negative directions. In particular, we give a variational
formula for c * by using the principal eigenvalues of certain linear periodic
eigenvalue problems.
|
Toward the goal of automatic production for sports broadcasts, a paramount
task consists in understanding the high-level semantic information of the game
in play. For instance, recognizing and localizing the main actions of the game
would allow producers to adapt and automatize the broadcast production,
focusing on the important details of the game and maximizing the spectator
engagement. In this paper, we focus our analysis on action spotting in soccer
broadcast, which consists in temporally localizing the main actions in a soccer
game. To that end, we propose a novel feature pooling method based on NetVLAD,
dubbed NetVLAD++, that embeds temporally-aware knowledge. Different from
previous pooling methods that consider the temporal context as a single set to
pool from, we split the context before and after an action occurs. We argue
that considering the contextual information around the action spot as a single
entity leads to sub-optimal learning for the pooling module. With NetVLAD++,
we disentangle the context from the past and future frames and learn specific
vocabularies of semantics for each subset, avoiding blending and blurring such
vocabulary in time. Injecting such prior knowledge creates more informative
pooling modules and more discriminative pooled features, leading to a better
understanding of the actions. We train and evaluate our methodology on the
recent large-scale dataset SoccerNet-v2, reaching 53.4% Average-mAP for action
spotting, a +12.7% improvement w.r.t. the current state-of-the-art.
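A compact PyTorch sketch of the core NetVLAD++ idea: split the temporal context at the action spot and pool the past and future halves with separately learned vocabularies. A simple soft-assignment VLAD layer stands in for full NetVLAD, and the feature dimension, cluster count, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftVLAD(nn.Module):
    def __init__(self, dim, n_clusters):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_clusters, dim))
        self.assign = nn.Linear(dim, n_clusters)
    def forward(self, x):                                  # x: (B, T, D)
        a = torch.softmax(self.assign(x), dim=-1)          # (B, T, K)
        resid = x.unsqueeze(2) - self.centers              # (B, T, K, D)
        v = (a.unsqueeze(-1) * resid).sum(dim=1)           # (B, K, D)
        return nn.functional.normalize(v.flatten(1), dim=1)

class NetVLADPlusPlus(nn.Module):
    def __init__(self, dim=512, n_clusters=64, n_classes=17):
        super().__init__()
        self.pool_before = SoftVLAD(dim, n_clusters)       # past vocabulary
        self.pool_after = SoftVLAD(dim, n_clusters)        # future vocabulary
        self.head = nn.Linear(2 * n_clusters * dim, n_classes)
    def forward(self, frames):            # frames: (B, T, D), spot at T // 2
        t = frames.shape[1] // 2
        z = torch.cat([self.pool_before(frames[:, :t]),
                       self.pool_after(frames[:, t:])], dim=1)
        return self.head(z)
```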
|
We analyze the Lanczos method for matrix function approximation (Lanczos-FA),
an iterative algorithm for computing $f(\mathbf{A}) \mathbf{b}$ when
$\mathbf{A}$ is a Hermitian matrix and $\mathbf{b}$ is a given vector.
Assuming that $f : \mathbb{C} \rightarrow \mathbb{C}$ is piecewise analytic, we
give a framework, based on the Cauchy integral formula, which can be used to
derive {\em a priori} and \emph{a posteriori} error bounds for Lanczos-FA in
terms of the error of Lanczos used to solve linear systems. Unlike many error
bounds for Lanczos-FA, these bounds account for fine-grained properties of the
spectrum of $\mathbf{A}$, such as clustered or isolated eigenvalues. Our
results are derived assuming exact arithmetic, but we show that they are easily
extended to finite precision computations using existing theory about the
Lanczos algorithm in finite precision. We also provide generalized bounds for
the Lanczos method used to approximate quadratic forms $\mathbf{b}^\textsf{H}
f(\mathbf{A}) \mathbf{b}$, and demonstrate the effectiveness of our bounds with
numerical experiments.
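For reference, the algorithm under analysis is compact enough to sketch in NumPy: run $k$ Lanczos steps on $\mathbf{A}$ with starting vector $\mathbf{b}$, then apply $f$ to the small tridiagonal matrix. This is the textbook exact-arithmetic iteration (without reorthogonalization); the matrix function and test matrix are illustrative.

```python
import numpy as np

def lanczos_fa(A, b, f, k):
    """Approximate f(A) b with k steps of Lanczos (exact-arithmetic sketch)."""
    n = len(b)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    q_prev = np.zeros(n)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j] + (beta[j - 1] * q_prev if j > 0 else 0)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev = Q[:, j]
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, V = np.linalg.eigh(T)
    fT_e1 = V @ (f(evals) * V[0, :])        # f(T) e_1 via eigendecomposition
    return np.linalg.norm(b) * (Q @ fT_e1)

# Example: approximate A^{-1/2} b for a random SPD matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)
b = rng.standard_normal(200)
approx = lanczos_fa(A, b, lambda x: x**-0.5, k=30)
```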
|
We use \textit{ab initio} molecular dynamics simulations to investigate the
properties of the dry surface of pure silica and sodium silicate glasses. The
surface layers are defined based on the atomic distributions along the
direction (the $z$-direction) perpendicular to the surfaces. We show that these
surfaces have a higher concentration of dangling bonds as well as two-membered
(2M) rings than the bulk samples. Increasing concentration of Na$_2$O reduces
the proportion of structural defects.
From the vibrational density of states, one concludes that 2M rings have a
unique vibrational signature at a frequency $\approx850$~cm$^{-1}$, compatible
with experimental findings.
We also find that, due to the presence of surfaces, the atomic vibration in
the $z$-direction is softer than in the two other directions. The electronic
density of states shows clear differences between the surface and the interior,
which we can attribute to specific structural units. Finally, the analysis of
the electron localization function allows us to gain insight into the influence
of the local structure and the presence of Na on the nature of chemical bonding
in the glasses.
|
In recent years, graph neural networks (GNNs) have gained increasing
attention, as they possess an excellent capability for processing graph-related
problems. In practice, hyperparameter optimisation (HPO) is critical for GNNs
to achieve satisfactory results, but this process is costly because evaluating
different hyperparameter settings requires training excessively many GNNs. Many
approaches have been proposed for HPO, which aim to identify promising
hyperparameters efficiently. In particular, the genetic algorithm (GA) for HPO
has been explored; it treats GNNs as a black-box model, of which only the
outputs can be observed given a set of hyperparameters. However, because GNN
models are sophisticated and the evaluations of hyperparameters on GNNs are
expensive, GA requires advanced techniques to balance the exploration and
exploitation of the search and to make the optimisation more effective given
limited computational resources. We therefore propose a tree-structured
mutation strategy for GA to alleviate this issue. We also review recent HPO
works, which leave room for the tree-structured idea to develop, and we hope
our approach can further improve these HPO methods in the future.
We study the cross sections for the inclusive production of $\psi(2S)$ and
$X(3872)$ hadrons in $pp$ collisions at the LHC at two different center-of-mass
energies and compare with experimental data obtained by the ATLAS, CMS, and
LHCb collaborations.
|
We present sufficient conditions to have global hypoellipticity for a class
of Vekua-type operators defined on a compact Lie group. When the group has the
property that no non-trivial representation is self-dual, we show that these
sufficient conditions are also necessary. We also present results about
the global solvability for this class of operators.
|
This paper unapologetically reflects on the critical role that Black feminism
can and should play in abolishing algorithmic oppression. Positioning
algorithmic oppression in the broader field of feminist science and technology
studies, I draw upon feminist philosophical critiques of science and technology
and discuss histories and continuities of scientific oppression against
historically marginalized people. Moreover, I examine the concepts of
invisibility and hypervisibility in oppressive technologies \`a la the
canonical double bind. Furthermore, I discuss what it means to call for
diversity as a solution to algorithmic violence, and I critique dialectics of
the fairness, accountability, and transparency community. I end by inviting you
to envision and imagine the struggle to abolish algorithmic oppression by
abolishing oppressive systems and shifting algorithmic development practices,
including engaging our communities in scientific processes, centering
marginalized communities in design, and consensual data and algorithmic
practices.
|
Recent trends and advancements toward including more diverse and heterogeneous
hardware in High-Performance Computing are challenging software developers in
their pursuit of good performance and numerical stability. The well-known
maxim "software outlives hardware" may no longer necessarily hold true, and
developers are today forced to re-factor their codebases to leverage these
powerful new systems. CFD is one of the many application domains affected. In
this paper, we present Neko, a portable framework for high-order spectral
element flow simulations. Unlike prior works, Neko adopts a modern
object-oriented approach, allowing multi-tier abstractions of the solver stack
and facilitating hardware backends ranging from general-purpose processors down
to exotic vector processors and FPGAs. We show that Neko's performance and
accuracy are comparable to NekRS, and thus on-par with Nek5000's successor on
modern CPU machines. Furthermore, we develop a performance model, which we use
to discuss challenges and opportunities for high-order solvers on emerging
hardware.
|
We introduce a formulation of optimal transport problem for distributions on
function spaces, where the stochastic map between functional domains can be
partially represented in terms of an (infinite-dimensional) Hilbert-Schmidt
operator mapping a Hilbert space of functions to another. For numerous machine
learning tasks, data can be naturally viewed as samples drawn from spaces of
functions, such as curves and surfaces, in high dimensions. Optimal transport
for functional data analysis provides a useful framework of treatment for such
domains. In this work, we develop an efficient algorithm for finding the
stochastic transport map between functional domains and provide theoretical
guarantees on the existence, uniqueness, and consistency of our estimate for
the Hilbert-Schmidt operator. We validate our method on synthetic datasets and
study the geometric properties of the transport map. Experiments on real-world
datasets of robot arm trajectories further demonstrate the effectiveness of our
method on applications in domain adaptation.
|
We derive a factor graph EM (FGEM) algorithm, a technique that permits
combined parameter estimation and statistical inference, to determine hidden
kinetic microstates from patch clamp measurements. Using the cystic fibrosis
transmembrane conductance regulator (CFTR) and nicotinic acetylcholine receptor
(nAChR) as examples, we perform {\em Monte Carlo} simulations to demonstrate
the performance of the algorithm. We show that the performance, measured in
terms of the probability of estimation error, approaches the theoretical
performance limit of maximum {\em a posteriori} estimation. Moreover, the
algorithm provides a reliability score for its estimates, and we demonstrate
that the score can be used to further improve the performance of estimation. We
use the algorithm to estimate hidden kinetic states in lab-obtained CFTR single
channel patch clamp traces.
|
Both coronal holes and active regions are source regions of the solar wind.
The distribution of these coronal structures across both space and time is well
known, but it is unclear how much each source contributes to the solar wind. In
this study we use photospheric magnetic field maps observed over the past four
solar cycles to estimate what fraction of magnetic open solar flux is rooted in
active regions, a proxy for the fraction of all solar wind originating in
active regions. We find that the fractional contribution of active regions to
the solar wind varies between 30% and 80% at any one time during solar maximum
and is negligible at solar minimum, showing a strong correlation with sunspot
number. While active regions are typically confined to latitudes
$\pm$30$^{\circ}$ in the corona, the solar wind they produce can reach
latitudes up to $\pm$60$^{\circ}$. Their fractional contribution to the solar
wind also correlates with coronal mass ejection rate, and is highly variable,
changing by $\pm$20% on monthly timescales within individual solar maxima. We
speculate that these variations could be driven by coronal mass ejections
causing reconfigurations of the coronal magnetic field on sub-monthly
timescales.
|
At large virtuality $Q^2$, the coupling to the $\rho$ meson production
channels provides us with a natural explanation of the surprisingly large cross
section of the $\omega$, as well as the $\pi^+$, meson electroproduction
recently measured at backward angles, without destroying the good agreement
between the Regge pole model and the data at the real photon point. Together
with elastic re-scattering of the outgoing meson it also provides us with a way
to explain why the node that appears at $u\sim -0.15$ GeV$^2$ at the real
photon point disappears at moderate virtuality $Q^2$. Predictions are given
for the electroproduction of the $\pi^0$ meson.
|
In this paper, we use a New Radio (NR) Vehicle-to-Everything (V2X) standard
compliant simulator based on ns-3 to study the impact of NR numerologies on
the end-to-end performance. In particular, we focus on NR V2X Mode 2, used for
autonomous resource selection in out-of-coverage communications, and consider
the two key procedures defined in 3GPP: sensing and non-sensing based resource
selection. We pay particular attention to the interplay between the operational
numerology and the resource selection window length, a key parameter of NR V2X
Mode 2. The results in a standard-compliant, end-to-end simulation platform
show that in all cases, for basic service messages, a higher numerology is
beneficial for different reasons, depending on how the resource selection
window length is established.
|
This is an evolving document describing the meta-theory, the implementation,
and the instantiations of Gillian, a multi-language symbolic analysis platform.
|
Quantum geometry has emerged as a central and ubiquitous concept in quantum
sciences, with direct consequences on quantum metrology and many-body quantum
physics. In this context, two fundamental geometric quantities are known to
play complementary roles: the Fubini-Study metric, which introduces a notion of
distance between quantum states defined over a parameter space, and the Berry
curvature associated with Berry-phase effects and topological band structures.
In fact, recent studies have revealed direct relations between these two
important quantities, suggesting that topological properties can, in special
cases, be deduced from the quantum metric. In this work, we establish general
and exact relations between the quantum metric and the topological invariants
of generic Dirac Hamiltonians. In particular, we demonstrate that topological
indices (Chern numbers or winding numbers) are bounded by the quantum volume
determined by the quantum metric. Our theoretical framework, which builds on
the Clifford algebra of Dirac matrices, is applicable to topological insulators
and semimetals of arbitrary spatial dimensions, with or without chiral
symmetry. This work clarifies the role of the Fubini-Study metric in
topological states of matter, suggesting unexplored topological responses and
metrological applications in a broad class of quantum-engineered systems.
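For reference, the two geometric quantities discussed above are the real and imaginary parts of the quantum geometric tensor, and in two dimensions the metric-curvature inequality bounds the Chern number by the quantum volume; these are standard relations, not the paper's general result for Dirac Hamiltonians:

```latex
% Quantum geometric tensor of a state |u(k)>:
Q_{\mu\nu} = \langle \partial_\mu u \,|\, \big(1 - |u\rangle\langle u|\big) \,|\, \partial_\nu u \rangle,
\qquad g_{\mu\nu} = \operatorname{Re} Q_{\mu\nu}, \qquad
F_{\mu\nu} = -2 \operatorname{Im} Q_{\mu\nu}.
% In 2D, \sqrt{\det g} \ge |F_{xy}|/2 pointwise, hence the Chern number
% is bounded by the quantum volume:
|C| = \frac{1}{2\pi}\left|\int_{\mathrm{BZ}} F_{xy}\, \mathrm{d}^2k\right|
\;\le\; \frac{1}{\pi}\int_{\mathrm{BZ}} \sqrt{\det g}\;\mathrm{d}^2k.
```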
|
We introduce (dual) strongly relative CS-Rickart objects in abelian
categories, as common generalizations of (dual) strongly relative Rickart
objects and strongly extending (lifting) objects. We give general properties,
and we study direct summands, (co)products of (dual) strongly relative
CS-Rickart objects and classes all of whose objects are (dual) strongly
self-CS-Rickart.
|
Graphene supports surface plasmon polaritons with comparatively slow
propagation velocities in the THz region, which becomes increasingly
interesting for future communication technologies. This ability can be used to
realize compact antennas, which are up to two orders of magnitude smaller than
their metallic counterparts. For a proper functionality of these antennas some
minimum material requirements have to be fulfilled, which are presently
difficult to achieve, since the fabrication and transfer technologies for
graphene are still evolving. In this work we analyze available graphene
materials experimentally and extract intrinsic characteristics at THz
frequencies, in order to predict the dependency of the THz signal emission
threshold as a function of the graphene relaxation time $\tau_r$ and the
chemical potential $\mu_c$.
|
We give an unconditional proof that self-dual Artin representations of
$\mathbb{Q}$ of dimension $3$ have density $0$ among all Artin representations
of $\mathbb{Q}$ of dimension $3$. Previously this was known under the
assumption of Malle's Conjecture.
|
We report transport measurements on Josephson junctions consisting of Bi2Te3
topological insulator (TI) thin films contacted by superconducting Nb
electrodes. For a device with junction length L = 134 nm, the critical
supercurrent Ic can be modulated by an electrical gate which tunes the carrier
type and density of the TI film. Ic can reach a minimum when the TI is near the
charge neutrality regime with the Fermi energy lying close to the Dirac point
of the surface state. In the p-type regime the Josephson current can be well
described by a short ballistic junction model. In the n-type regime the
junction is ballistic at 0.7 K < T < 3.8 K while for T < 0.7 K the diffusive
bulk modes emerge and contribute a larger Ic than the ballistic model. We
attribute the lack of diffusive bulk modes in the p-type regime to the
formation of p-n junctions. Our work provides new clues for the search for
Majorana zero modes in TI-based superconducting devices.
|
Let $0<p,q\leq \infty$ and denote by $\mathcal S_p^N$ and $\mathcal S_q^N$
the corresponding Schatten classes of real $N\times N$ matrices. We study
approximation quantities of natural identities $\mathcal S_p^N\hookrightarrow
\mathcal S_q^N$ between Schatten classes and prove asymptotically sharp bounds
up to constants only depending on $p$ and $q$, showing how approximation
numbers are intimately related to the Gelfand numbers and their duals, the
Kolmogorov numbers. In particular, we obtain new bounds for those sequences of
$s$-numbers. Our results improve and complement bounds previously obtained by
B. Carl and A. Defant [J. Approx. Theory, 88(2):228--256, 1997], Y. Gordon, H.
K\"onig, and C. Sch\"utt [J. Approx. Theory, 49(3):219--239, 1987], A. Hinrichs
and C. Michels [Rend. Circ. Mat. Palermo (2) Suppl., (76):395--411, 2005], and
A. Hinrichs, J. Prochno, and J. Vyb\'iral [preprint, 2020]. We also treat the
case of quasi-Schatten norms, which is relevant in applications such as
low-rank matrix recovery.
|
We propose a polarization-independent optical isolator that does not use
walk-off between the two orthogonal polarizations. The design is based on two
Faraday rotators in combination with two half-wave plates in a closed,
Sagnac-interferometer-like, configuration. An experimental prototype is tested
successfully under variation of the input light polarization state with
isolation level between 43 dB and 50 dB for all input polarizations (linear,
circular, or elliptical).
|
Known quantum and classical perturbative long-distance corrections to the
Newton potential are extended into the short-distance regime using evolution
equations for a `running' gravitational coupling, which is used to construct
examples of non-perturbative potentials for the gravitational binding of two
particles. Model-I is based on the complete set of the relevant Feynman
diagrams. Its potential has a singularity at a distance below which it becomes
complex and the system gets black hole-like features. Model-II is based on a
reduced set of diagrams and its coupling approaches a non-Gaussian fixed point
as the distance is reduced. Energies and eigenfunctions are obtained and used
in a study of time-dependent collapse (model-I) and bouncing (both models) of a
spherical wave packet. The motivation for such non-perturbative `toy' models
stems from a desire to elucidate the mass dependence of binding energies found
25 years ago in an explorative numerical simulation within the dynamical
triangulation approach to quantum gravity. Models I \& II indeed suggest an
explanation of this mass dependence, in which the Schwarzschild scale plays a
role. An estimate of the renormalized Newton coupling is made by matching with
the small-mass region. Comparison of the dynamical triangulation results for
mass renormalization with `renormalized perturbation theory' in the continuum
leads to an independent estimate of this coupling, which is used in an improved
analysis of the binding energy data.
|
Contrastive learning has nearly closed the gap between supervised and
self-supervised learning of image representations, and has also been explored
for videos. However, prior work on contrastive learning for video data has not
explored the effect of explicitly encouraging the features to be distinct
across the temporal dimension. We develop a new temporal contrastive learning
framework consisting of two novel losses to improve upon existing contrastive
self-supervised video representation learning methods. The local-local temporal
contrastive loss adds the task of discriminating between non-overlapping clips
from the same video, whereas the global-local temporal contrastive loss aims to
discriminate between timesteps of the feature map of an input clip in order to
increase the temporal diversity of the learned features. Our proposed temporal
contrastive learning framework achieves significant improvement over the
state-of-the-art results in various downstream video understanding tasks such
as action recognition, limited-label action classification, and
nearest-neighbor video retrieval on multiple video datasets and backbones. We
also demonstrate significant improvement in fine-grained action classification
for visually similar classes. With the commonly used 3D ResNet-18 architecture,
we achieve 82.4% (+5.1% increase over the previous best) top-1 accuracy on
UCF101 and 52.9% (+5.4% increase) on HMDB51 action classification, and 56.2%
(+11.7% increase) Top-1 Recall on UCF101 nearest neighbor video retrieval.
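An InfoNCE-style sketch of the local-local idea in PyTorch: embeddings of two non-overlapping clips from the same video form a positive pair, with other videos in the batch as negatives. This illustrates the general mechanism and is not the paper's exact loss; the temperature value is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def local_local_loss(z_a, z_b, temperature=0.1):
    """z_a, z_b: (B, D) embeddings of two non-overlapping clips per video."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.T / temperature          # (B, B) cosine similarities
    targets = torch.arange(z_a.shape[0], device=z_a.device)
    # Matched clips (the diagonal) should score highest in both directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```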
|
Let $X$ be a complex irreducible smooth projective curve, and let ${\mathbb
L}$ be an algebraic line bundle on $X$ with a nonzero section $\sigma_0$. Let
$\mathcal{M}$ denote the moduli space of stable Hitchin pairs $(E,\, \theta)$,
where $E$ is an algebraic vector bundle on $X$ of fixed rank $r$ and degree
$\delta$, and $\theta\, \in\, H^0(X,\, End(E)\otimes K_X\otimes{\mathbb L})$.
Associating to every stable Hitchin pair its spectral data, an isomorphism of
$\mathcal{M}$ with a moduli space $\mathcal{P}$ of stable sheaves of pure
dimension one on the total space of $K_X\otimes{\mathbb L}$ is obtained. Both
the moduli spaces $\mathcal{P}$ and $\mathcal{M}$ are equipped with algebraic
Poisson structures, which are constructed using $\sigma_0$. Here we prove that
the above isomorphism between $\mathcal{P}$ and $\mathcal{M}$ preserves the
Poisson structures.
|
We study the {\em Budgeted Dominating Set} (BDS) problem on uncertain graphs,
namely, graphs with a probability distribution $p$ associated with the edges,
such that an edge $e$ exists in the graph with probability $p(e)$. The input to
the problem consists of a vertex-weighted uncertain graph $\mathcal{G}=(V, E, p,
\omega)$ and an integer {\em budget} (or {\em solution size}) $k$, and the
objective is to compute a vertex set $S$ of size $k$ that maximizes the
expected total domination (or total weight) of vertices in the closed
neighborhood of $S$. We refer to the problem as the {\em Probabilistic Budgeted
Dominating Set}~(PBDS) problem and present the following results.
\begin{enumerate} \item We show that the PBDS problem is NP-complete even
when restricted to uncertain {\em trees} of diameter at most four. This is in
sharp contrast with the well-known fact that the BDS problem is solvable in
polynomial time in trees. We further show that PBDS is W[1]-hard for the
budget parameter $k$, and under the {\em Exponential Time Hypothesis} it cannot
be solved in $n^{o(k)}$ time.
\item We show that if one is willing to settle for $(1-\epsilon)$
approximation, then there exists a PTAS for PBDS on trees. Moreover, for the
scenario of uniform edge-probabilities, the problem can be solved optimally in
polynomial time.
\item We consider the parameterized complexity of the PBDS problem, and show
that Uni-PBDS (where all edge probabilities are identical) is W[1]-hard for
the parameter pathwidth. On the other hand, we show that it is FPT in the
combined parameters of the budget $k$ and the treewidth.
\item Finally, we extend some of our parameterized results to planar and
apex-minor-free graphs. \end{enumerate}
|
Many states of linear real scalar quantum fields (in particular
Reeh-Schlieder states) on flat as well as curved spacetime are entangled on
spacelike separated local algebras of observables. It has been argued that this
entanglement can be "harvested" by a pair of so-called particle detectors, for
example singularly or non-locally coupled quantum mechanical harmonic
oscillator Unruh detectors. In an attempt to avoid such imperfect coupling, we
analyse a model-independent local and covariant entanglement harvesting
protocol based on the local probes of a recently proposed measurement theory of
quantum fields. We then introduce the notion of a local particle detector
concretely given by a local mode of a linear real scalar probe field on
possibly curved spacetime and possibly under the influence of external fields.
In a non-perturbative analysis we find that local particle detectors cannot
harvest entanglement below a critical coupling strength when the corresponding
probe fields are initially prepared in quasi-free Reeh-Schlieder states and are
coupled to a system field prepared in a quasi-free state. This is a consequence
of the fact that Reeh-Schlieder states restrict to truly mixed states on any
local mode.
|
Previously, statistical textbook wisdom has held that interpolating noisy
data will generalize poorly, but recent work has shown that data interpolation
schemes can generalize well. This could explain why overparameterized deep nets
do not necessarily overfit. Optimal data interpolation schemes have been
exhibited that achieve theoretical lower bounds for excess risk in any
dimension for large data (Statistically Consistent Interpolation). These are
non-parametric Nadaraya-Watson estimators with singular kernels. The recently
proposed weighted interpolating nearest neighbors method (wiNN) is in this
class, as is the previously studied Hilbert kernel interpolation scheme, in
which the estimator has the form $\hat{f}(x)=\sum_i y_i w_i(x)$, where $w_i(x)=
\|x-x_i\|^{-d}/\sum_j \|x-x_j\|^{-d}$. This estimator is unique in being
completely parameter-free. While statistical consistency was previously proven,
convergence rates were not established. Here, we comprehensively study the
finite sample properties of Hilbert kernel regression. We prove that the excess
risk is asymptotically equivalent pointwise to $\sigma^2(x)/\ln(n)$ where
$\sigma^2(x)$ is the noise variance. We show that the excess risk of the plugin
classifier is less than $2|f(x)-1/2|^{1-\alpha}\,(1+\varepsilon)^\alpha
\sigma^\alpha(x)(\ln(n))^{-\frac{\alpha}{2}}$, for any $0<\alpha<1$, where $f$
is the regression function $x\mapsto\mathbb{E}[y|x]$. We derive asymptotic
equivalents of the moments of the weight functions $w_i(x)$ for large $n$, for
instance for $\beta>1$, $\mathbb{E}[w_i^{\beta}(x)]\sim_{n\rightarrow
\infty}((\beta-1)n\ln(n))^{-1}$. We derive an asymptotic equivalent for the
Lagrange function and exhibit the nontrivial extrapolation properties of this
estimator. We present heuristic arguments for a universal $w^{-2}$ power-law
behavior of the probability density of the weights in the large $n$ limit.
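The estimator itself is a few lines of NumPy, implementing the weights exactly as defined above; the example data are illustrative:

```python
import numpy as np

def hilbert_kernel_predict(X_train, y_train, x):
    """f_hat(x) = sum_i y_i w_i(x), w_i(x) = ||x-x_i||^{-d} / sum_j ||x-x_j||^{-d}."""
    d = X_train.shape[1]
    dist = np.linalg.norm(X_train - x, axis=1)
    if np.any(dist == 0):                   # exact hit: the scheme interpolates
        return float(y_train[dist == 0].mean())
    w = dist ** (-d)                        # completely parameter-free weights
    return float(np.sum(y_train * w) / np.sum(w))

# Example in d = 2 with noisy labels.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (500, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(500)
print(hilbert_kernel_predict(X, y, np.array([0.2, -0.4])))
```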
|
Estimating the density of a continuous random variable X has been studied
extensively in statistics, in the setting where n independent observations of X
are given a priori and one wishes to estimate the density from that. Popular
methods include histograms and kernel density estimators. In this review paper,
we are interested instead in the situation where the observations are generated
by Monte Carlo simulation from a model. Then, one can take advantage of
variance reduction methods such as stratification, conditional Monte Carlo, and
randomized quasi-Monte Carlo (RQMC), and obtain a more accurate density
estimator than with standard Monte Carlo for a given computing budget. We
discuss several ways of doing this, proposed in recent papers, with a focus on
methods that exploit RQMC. A first idea is to directly combine RQMC with a
standard kernel density estimator. Another one is to adapt a simulation-based
derivative estimation method such as smoothed perturbation analysis or the
likelihood ratio method to obtain a continuous estimator of the cdf, whose
derivative is an unbiased estimator of the density. This can then be combined
with RQMC. We summarize recent theoretical results with these approaches and
give numerical illustrations of how they improve the convergence of the mean
integrated squared error.
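A minimal sketch of the first idea above, combining randomized quasi-Monte Carlo points with a standard Gaussian kernel density estimator; the toy simulation model and bandwidth are illustrative assumptions:

```python
import numpy as np
from scipy.stats import qmc, norm

def model(u):
    """Toy simulation: X = sum of three inverse-transformed uniform inputs."""
    return norm.ppf(u).sum(axis=1)

sobol = qmc.Sobol(d=3, scramble=True, seed=7)    # randomized QMC point set
u = sobol.random_base2(m=12)                     # 2^12 points in [0,1)^3
x = model(u)

def kde(points, data, h):
    """Gaussian kernel density estimator evaluated at `points`."""
    z = (points[:, None] - data[None, :]) / h
    return norm.pdf(z).mean(axis=1) / h

grid = np.linspace(-4, 4, 101)
density = kde(grid, x, h=0.3)                    # compare to the N(0, 3) truth
```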
|
We present an original, generic, and efficient approach for computing the
first and second partial derivatives of ray velocities along ray paths in
general anisotropic elastic media. These derivatives are used in solving
kinematic problems, like two-point ray bending methods and seismic tomography,
and they are essential for evaluating the dynamic properties along the rays
(amplitudes and phases). The traveltime is delivered through an integral over a
given Lagrangian defined at each point along the ray. Although the Lagrangian
cannot be explicitly expressed in terms of the medium properties and the ray
direction components, its derivatives can still be formulated analytically
using the corresponding arclength-related Hamiltonian that can be explicitly
expressed in terms of the medium properties and the slowness vector components.
This first requires inverting for the slowness vector components, given the ray
direction components. Computation of the slowness vector and the ray velocity
derivatives is considerably simplified by using an auxiliary
scaled-time-related Hamiltonian obtained directly from the Christoffel equation
and connected to the arclength-related Hamiltonian by a simple scale factor.
This study consists of two parts. In Part I, we consider general anisotropic
(triclinic) media, and provide the derivatives (gradients and Hessians) of the
ray velocity, with respect to (1) the spatial/directional vectors and (2) the
elastic model parameters. In Part II, we apply the theory of Part I explicitly
to polar anisotropic media (transverse isotropy with tilted axis of symmetry,
TTI), and obtain the explicit ray velocity derivatives for the coupled qP and
qSV waves and for SH waves.
|
With the purpose of defending against lateral movement in today's borderless
networks, Zero Trust Architecture (ZTA) adoption is gaining momentum. With a
full-scale ZTA implementation, it is unlikely that adversaries will be able to
spread through the network starting from a compromised endpoint. However, the
already authenticated and authorised session of a compromised endpoint can be
leveraged to perform limited, though malicious, activities ultimately rendering
the endpoints the Achilles heel of ZTA. To effectively detect such attacks,
distributed collaborative intrusion detection systems with an attack
scenario-based approach have been developed. Nonetheless, Advanced Persistent
Threats (APTs) have demonstrated their ability to bypass this approach with a
high success ratio. As a result, adversaries can pass undetected or potentially
alter the detection logging mechanisms to achieve a stealthy presence.
Recently, blockchain technology has demonstrated solid use cases in the cyber
security domain. In this paper, motivated by the convergence of ZTA and
blockchain-based intrusion detection and prevention, we examine how ZTA can be
augmented onto endpoints. Namely, we perform a state-of-the-art review of ZTA
models, real-world architectures with a focus on endpoints, and
blockchain-based intrusion detection systems. We discuss the potential of
blockchain's immutability fortifying the detection process and identify open
challenges as well as potential solutions and future directions.
|
We investigate the low-temperature charge-density-wave (CDW) state of bulk
TaS$_2$, for which the controversy regarding the out-of-plane metallic band has
remained unresolved, with a fully self-consistent DFT+U approach. By examining
the innate structure of the Hubbard U potential, we reveal that the
conventional use of atomic-orbital basis could seriously misevaluate the
electron correlation in the CDW state. By adopting a generalized basis,
covering the whole David star, we successfully reproduce the Mott insulating
nature with the layer-by-layer antiferromagnetic order. Similar considerations
should apply to the description of the electron correlation in molecular
solids.
|
A stable cut of a graph is a cut whose weight cannot be increased by changing
the side of a single vertex. Equivalently, a cut is stable if all vertices have
the (weighted) majority of their neighbors on the other side. In this paper we
study Min Stable Cut, the problem of finding a stable cut of minimum weight,
which is closely related to the Price of Anarchy of the Max Cut game. Since
this problem is NP-hard, we study its complexity on graphs of low treewidth,
low degree, or both. We show that the problem is weakly NP-hard on severely
restricted trees, so bounding treewidth alone cannot make it tractable. We
match this with a pseudo-polynomial DP algorithm running in time $(\Delta\cdot
W)^{O(tw)}n^{O(1)}$, where $tw$ is the treewidth, $\Delta$ the maximum degree,
and $W$ the maximum weight. On the other hand, bounding $\Delta$ is also not
enough, as the problem is NP-hard for unweighted graphs of bounded degree. We
therefore parameterize Min Stable Cut by the combined parameter $tw+\Delta$ and
obtain an FPT algorithm running in time $2^{O(\Delta tw)}(n+\log W)^{O(1)}$. Our main result
is to provide a reduction showing that both aforementioned algorithms are
essentially optimal, even if we replace treewidth by pathwidth: if there exists
an algorithm running in $(nW)^{o(pw)}$ or $2^{o(\Delta pw)}(n+\log W)^{O(1)}$,
then the ETH is false. Complementing this, we show that we can obtain an FPT
approximation scheme parameterized by treewidth, if we consider almost-stable
solutions.
Motivated by these mostly negative results, we consider Unweighted Min Stable
Cut. Here our results already imply a much faster exact algorithm running in
time $\Delta^{O(tw)}n^{O(1)}$. We show that this is also probably essentially
optimal: an algorithm running in $n^{o(pw)}$ would contradict the ETH.
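The stability condition itself is easy to state in code: a cut is stable iff no vertex has a (weighted) majority of its incident edge weight on its own side. A direct check in Python, with networkx and unit weights as an illustrative default:

```python
import networkx as nx

def is_stable_cut(G, side):
    """side: dict mapping each vertex to 0/1. Edge weights default to 1."""
    for v in G.nodes:
        cross = same = 0.0
        for u in G.neighbors(v):
            w = G[v][u].get("weight", 1.0)
            if side[u] == side[v]:
                same += w
            else:
                cross += w
        if same > cross:      # v could increase the cut by switching sides
            return False
    return True

G = nx.cycle_graph(4)
print(is_stable_cut(G, {0: 0, 1: 1, 2: 0, 3: 1}))   # bipartition -> True
print(is_stable_cut(G, {0: 0, 1: 0, 2: 0, 3: 1}))   # vertex 1 unhappy -> False
```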
|
These are lecture notes based on the first part of a course on 'Mathematical
Data Science' that I taught to final year BSc students in the UK between 2019
and 2020. Topics include: Concentration of measure in high dimensions; Gaussian
random vectors in high dimensions; Random projections; Separation/disentangling
of Gaussian data.
|
Long-range correlated errors can severely impact the performance of NISQ
(noisy intermediate-scale quantum) devices, and fault-tolerant quantum
computation. Characterizing these errors is important for improving the
performance of these devices, via calibration and error correction, and to
ensure correct interpretation of the results. We propose a compressed sensing
method for detecting two-qubit correlated dephasing errors, assuming only that
the correlations are sparse (i.e., at most s pairs of qubits have correlated
errors, where s << n(n-1)/2, and n is the total number of qubits). In
particular, our method can detect long-range correlations between any two
qubits in the system (i.e., the correlations are not restricted to be
geometrically local).
Our method is highly scalable: it requires as few as m = O(s log n)
measurement settings, and efficient classical postprocessing based on convex
optimization. In addition, when m = O(s log^4(n)), our method is highly robust
to noise, and has sample complexity O(max(n,s)^2 log^4(n)), which can be
compared to conventional methods that have sample complexity O(n^3). Thus, our
method is advantageous when the correlations are sufficiently sparse, that is,
when s < O(n^(3/2) / log^2(n)). Our method also performs well in numerical
simulations on small system sizes, and has some resistance to
state-preparation-and-measurement (SPAM) errors. The key ingredient in our
method is a new type of compressed sensing measurement, which works by
preparing entangled Greenberger-Horne-Zeilinger states (GHZ states) on random
subsets of qubits, and measuring their decay rates with high precision.
|
In this paper, we study the role of the symmetry energy on the neutron-drip
transition in both nonaccreting and accreting neutron stars, allowing for the
presence of a strong magnetic field as in magnetars. The density, pressure, and
composition at the neutron-drip threshold are determined using the recent set
of the Brussels-Montreal microscopic nuclear mass models, which mainly differ
in their predictions for the value of the symmetry energy $J$ and its slope $L$
in infinite homogeneous nuclear matter at saturation. Although correlations are
found between the neutron-drip density, pressure, and proton fraction on the
one hand and $J$ (or equivalently $L$) on the other, these correlations are
radically different in nonaccreting and accreting neutron stars. In
particular, the neutron-drip density is found to increase with $L$ in the
former case, but to decrease in the latter case depending on the composition
of ashes from x-ray bursts and superbursts. We have qualitatively
explained these different behaviors using a simple mass formula. We have also
shown that the details of the nuclear structure may play a more important role
than the symmetry energy in accreting neutron-star crusts.
|
Aminopropyl modified mesoporous SiO2 nanoparticles, MCM-41 type, have been
synthesized by the co-condensation method from tetraethylorthosilicate (TEOS)
and aminopropyltriethoxysilane (APTES). By means of modifying TEOS/APTES ratio
we have carried out an in-depth characterization of the nanoparticles as a
function of APTES content. Surface charge and nanoparticle morphology were
strongly influenced by the amount of APTES, and the particles changed from
hexagonal to bean-like morphology as the APTES content increased. The porous
structure was also affected, showing a contraction of the lattice parameter
and pore size together with an increase in wall thickness. These results
provide new insights into nanoparticle formation during the co-condensation
process. The model proposed herein considers that the different interactions
established between TEOS and APTES and the structure-directing agent have
consequences for pore size, wall thickness and particle morphology. Finally,
APTES is an excellent linker to covalently attach active targeting agents
such as folate groups. We have hypothesized that APTES could also play a role
in the biological behavior of the nanoparticles. Accordingly, the
internalization efficiency of
the nanoparticles has been tested with cancerous LNCaP and non-cancerous
preosteoblast-like MC3T3-E1 cells. The results indicate a cooperative effect
between aminopropylsilane presence and folic acid, only for the cancerous LNCaP
cell line.
|
Spoken communication occurs in a "noisy channel" characterized by high levels
of environmental noise, variability within and between speakers, and lexical
and syntactic ambiguity. Given these properties of the received linguistic
input, robust spoken word recognition -- and language processing more generally
-- relies heavily on listeners' prior knowledge to evaluate whether candidate
interpretations of that input are more or less likely. Here we compare several
broad-coverage probabilistic generative language models in their ability to
capture human linguistic expectations. Serial reproduction, an experimental
paradigm where spoken utterances are reproduced by successive participants
similar to the children's game of "Telephone," is used to elicit a sample that
reflects the linguistic expectations of English-speaking adults. When we
evaluate a suite of probabilistic generative language models against the
resulting chains of utterances, we find that those models that make use of
abstract representations of preceding linguistic context (i.e., phrase
structure) best predict the changes made by people in the course of serial
reproduction. A logistic regression model predicting which words in an
utterance are most likely to be lost or changed in the course of spoken
transmission corroborates this result. We interpret these findings in light of
research highlighting the interaction of memory-based constraints and
representations in language processing.
|
This article summarizes the talk given at the LCWS 2021 conference on the
status and news of the WHIZARD Monte Carlo event generator. We present its
features relevant for the physics program of future lepton and especially
linear colliders as well as recent developments towards including NLO
perturbative corrections and a UFO interface to study models beyond the
Standard Model. It takes as reference the version 3.0.0$\beta$ released in
August 2020 and additionally discusses the developments that will be included
in the next major version 3.0.0 to be released in April 2021.
|
It is suggested that many-body quantum chaos appears as spontaneous symmetry
breaking of unitarity in interacting quantum many-body systems. It has been
shown that many-body level statistics, probed by the spectral form factor (SFF)
defined as $K(\beta,t)=\langle|{\rm Tr}\, \exp(-\beta H + itH)|^2\rangle$, is
dominated by a diffusion-type mode in a field theory analysis. The key finding
of this paper is that the "unitary" $\beta=0$ case is different from the $\beta
\to 0^+$ limit, with the latter leading to a finite mass of these modes due to
interactions. This mass suppresses a rapid exponential ramp in the SFF, which
is responsible for the fast emergence of Poisson statistics in the
non-interacting case, and gives rise to a non-trivial random matrix structure
of many-body levels. The interaction-induced mass in the SFF shares
similarities with the dephasing rate in the theory of weak localization and the
Lyapunov exponent of the out-of-time-ordered correlators.
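The SFF in the displayed definition is straightforward to estimate numerically from sampled spectra; a small sketch, with a Gaussian random-matrix ensemble standing in for the many-body Hamiltonians:

```python
import numpy as np

def sff(spectra, beta, ts):
    """K(beta, t) = <|Tr exp(-beta*H + i*t*H)|^2>, averaged over an
    ensemble of spectra (one row of eigenvalues per realization)."""
    K = np.zeros(len(ts))
    for E in spectra:
        z = np.exp((-beta + 1j * ts[:, None]) * E[None, :]).sum(axis=1)
        K += np.abs(z) ** 2
    return K / len(spectra)

# Example with a Gaussian orthogonal ensemble standing in for H
rng = np.random.default_rng(0)
N, samples = 200, 50
spectra = []
for _ in range(samples):
    M = rng.normal(size=(N, N))
    spectra.append(np.linalg.eigvalsh((M + M.T) / 2))
ts = np.linspace(0.1, 50.0, 400)
K = sff(np.array(spectra), beta=0.0, ts=ts)
```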
|
The current literature on intelligent reflecting surface (IRS) focuses on
optimizing the IRS phase shifts to yield coherent beamforming gains, under the
assumption of perfect channel state information (CSI) of individual
IRS-assisted links, which is highly impractical. This work, instead, considers
the random rotations scheme at the IRS in which the reflecting elements only
employ random phase rotations without requiring any CSI. The only CSI then
needed is that of the overall channel at the base station (BS) to implement the
beamforming transmission scheme. Under this framework, we derive the sum-rate
scaling laws in the large number of users regime for the IRS-assisted
multiple-input single-output (MISO) broadcast channel, with optimal dirty paper
coding (DPC) scheme and the lower-complexity random beamforming (RBF) and
deterministic beamforming (DBF) schemes at the BS. The random rotations scheme
increases the sum-rate by exploiting multi-user diversity, but also compromises
the gain to some extent due to correlation. Finally, energy efficiency
maximization problems in terms of the number of BS antennas, IRS elements and
transmit power are solved using the derived scaling laws. Simulation results
show the proposed scheme to improve the sum-rate, with performance becoming
close to that under coherent beamforming for a large number of users.
|
This paper studies numerically the Weeks-Chandler-Andersen (WCA) system,
which is shown to obey hidden scale invariance with a density-scaling exponent
that varies from below 5 to above 500. This unprecedented variation makes it
advantageous to use the fourth-order Runge-Kutta algorithm for tracing out
isomorphs. Good isomorph invariance of structure and dynamics is observed over
more than three orders of magnitude temperature variation. For all state points
studied, the virial potential-energy correlation coefficient and the
density-scaling exponent are controlled mainly by the temperature. Based on the
assumption of statistically independent pair interactions, a mean-field theory
is developed that rationalizes this finding and provides an excellent fit to
data at low temperatures and densities.
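A sketch of the isomorph-tracing step, assuming the standard characterization of isomorphs as solutions of d ln T / d ln rho = gamma(rho, T), with the density-scaling exponent gamma supplied by the user (in practice it is measured from virial/potential-energy fluctuations at each state point):

```python
import numpy as np

def trace_isomorph(gamma, rhos, T0):
    """Trace an isomorph by classical fourth-order Runge-Kutta on
    d ln T / d ln rho = gamma(rho, T) along a density grid `rhos`."""
    f = lambda x, y: gamma(np.exp(x), np.exp(y))   # x = ln rho, y = ln T
    xs = np.log(np.asarray(rhos, dtype=float))
    y = np.log(T0)
    temps = [T0]
    for x, h in zip(xs[:-1], np.diff(xs)):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        temps.append(np.exp(y))
    return np.array(temps)
```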
|
A time-reversal invariant topological insulator occupying a Euclidean
half-space determines a 'Quaternionic' self-adjoint Fredholm family. We show
that the discrete spectrum data for such a family is geometrically encoded in a
non-trivial 'Real' gerbe. The gerbe invariant, rather than a na\"ive counting
of Dirac points, precisely captures how edge states completely fill up the bulk
spectral gap in a topologically protected manner.
|
We study the impact of strong magnetic fields on the pasta phases that are
expected to exist in the inner crust of neutron stars. We employ the
relativistic mean field model to describe the nucleon interaction and use the
self-consistent Thomas-Fermi approximation to calculate the nonuniform matter
in neutron star crust. The properties of pasta phases and crust-core transition
are examined. It is found that as the magnetic field strength $B$ is less than
$10^{17}$ G, the effects of magnetic field are not evident comparing with the
results without magnetic field. As $B$ is stronger than $10^{18}$ G, the onset
densities of pasta phases and crust-core transition density decrease
significantly, and the density distributions of nucleons and electrons are also
changed obviously.
|
Long-term availability of minerals and industrial materials is a necessary
condition for sustainable development as they are the constituents of any
manufacturing product. To enhance the efficiency of material management, we
define a computer-vision-enabled material measurement system and provide a
survey of works relevant to its development with particular emphasis on the
foundations. A network of such systems for wide-area material stock monitoring
is also covered. Finally, challenges and future research directions are
discussed. As the first article bridging industrial ecology and advanced
computer vision, this survey is intended to support both research communities
towards more sustainable manufacturing.
|
We study the regular surface defect in the Omega-deformed four dimensional
supersymmetric gauge theory with gauge group SU(N) and 2N hypermultiplets in
the fundamental representation. We prove that its vacuum expectation value
obeys the
Knizhnik-Zamolodchikov equation for the 4-point conformal block of current
algebra of a two dimensional conformal field theory. The level and the vertex
operators are determined by the parameters of the Omega-background and the
masses of the hypermultiplets, the 4-point cross-ratio is determined by the
complexified gauge coupling. We clarify that in a somewhat subtle way the
branching rule is parametrized by the Coulomb moduli. This is an example of the
BPS/CFT relation.
|
In our previous article with Yukio Kametani, we investigated the geometric
structure underlying a large scale interacting system on infinite graphs by
constructing a suitable cohomology theory, called uniformly local cohomology,
which reflects the geometric property of the microscopic model, using a class
of functions called the uniformly local functions. In this article, we
introduce the co-local functions on the geometric structure associated to a
large scale interacting system. We may similarly define the notion of uniformly
local functions for co-local functions. However, contrary to the functions
appearing in our previous article, the co-local functions reflect the
stochastic property of the model, namely the probability measure on the
configuration space. We then prove a decomposition theorem of Varadhan type for
closed co-local forms. The spaces of co-local functions and forms contain the
spaces of $L^2$-functions and forms. In the last section, we state a conjecture
concerning the decomposition theorem for the $L^2$-case.
|
Data scientists often develop machine learning models to solve a variety of
problems in industry and academia, but not without facing several challenges
in terms of model development. One such problem is that these professionals
do not realize that they usually perform ad-hoc practices that could be
improved by the adoption of activities presented in the Software Engineering
Development Lifecycle. Of
course, since machine learning systems are different from traditional Software
systems, some differences in their respective development processes are to be
expected. In this context, this paper is an effort to investigate the
challenges and practices that emerge during the development of ML models from
the software engineering perspective by focusing on understanding how software
developers could benefit from applying or adapting the traditional software
engineering process to the Machine Learning workflow.
|
For robust GPS-vision navigation in urban areas, we propose an
Integrity-driven Landmark Attention (ILA) technique via stochastic
reachability. Inspired by cognitive attention in humans, we perform convex
optimization to select a subset of landmarks from GPS and vision measurements
that maximizes integrity-driven performance. Given known measurement error
bounds in non-faulty conditions, our ILA follows a unified approach to address
both GPS and vision faults and is compatible with any off-the-shelf estimator.
We analyze measurement deviation to estimate the stochastic reachable set of
expected position for each landmark, which is parameterized via probabilistic
zonotope (p-Zonotope). We apply set union to formulate a p-Zonotopic cost that
represents the size of position bounds based on landmark inclusion/exclusion.
We jointly minimize the p-Zonotopic cost and maximize the number of landmarks
via convex relaxation. For an urban dataset, we demonstrate improved
localization accuracy and robust predicted availability for a pre-defined alert
limit.
|
A quantum magnetic state due to magnetic charges has never been observed, even
though they are treated as quantum mechanical variables in theoretical
calculations. Here, we demonstrate the occurrence of a novel quantum disordered
state of magnetic charges in a nanoengineered magnetic honeycomb lattice of
ultra-small connecting elements. The experimental research, performed using
spin resolved neutron scattering, reveals a massively degenerate ground state,
comprised of low integer and energetically forbidden high integer magnetic
charges, that manifests cooperative paramagnetism at low temperature. The
system tends to preserve the degenerate configuration even under large magnetic
field application, exemplifying the robustness of the disordered correlation
of magnetic charges in the 2D honeycomb lattice. The realization of a quantum
disordered ground state elucidates the dominance of the exchange energy, which
is enabled by the nanoscopic magnetic element size in the nanoengineered
honeycomb.
Consequently, an archetypal platform is envisaged to study quantum mechanical
phenomena due to emergent magnetic charges.
|
One of the most important parts of Artificial Neural Networks is minimizing
the loss function, which tells us how good or bad our model is. To minimize
these losses we need to tune the weights and biases, we need the gradient to
find a function's minimum, and we need gradient descent to update the weights.
But regular gradient descent has some problems: it is quite slow and not that
accurate. This article aims to give an introduction to optimization strategies
for gradient descent. In addition, we shall also discuss the architecture of
these algorithms and further optimization of Neural Networks in general.
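As a concrete example of one such strategy, a minimal gradient-descent-with-momentum loop (illustrative only; the article's catalogue of optimizers is broader):

```python
import numpy as np

def sgd_momentum(grad, w0, lr=0.01, beta=0.9, steps=1000):
    """Gradient descent with momentum, one common remedy for the
    slowness of plain gradient descent."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + (1 - beta) * grad(w)  # exponentially averaged gradient
        w = w - lr * v                       # parameter update
    return w

# Minimize f(w) = ||w||^2 / 2, whose gradient is simply w
w_star = sgd_momentum(lambda w: w, w0=[5.0, -3.0], lr=0.1)
```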
|
In this paper, we develop efficient and accurate algorithms for evaluating
$\varphi(A)$ and $\varphi(A)b$, where $A$ is an $N\times N$ matrix, $b$ is an
$N$-dimensional vector and $\varphi$ is the function defined by
$\varphi(z)\equiv\sum_{k=0}^{\infty}\frac{z^k}{(k+1)!}$. Such a matrix
function (the so-called $\varphi$-function) plays a key role in a class of
numerical methods well-known as exponential integrators. The algorithms use the
scaling and modified squaring procedure combined with truncated Taylor series.
The backward error analysis is presented to find the optimal value of the
scaling and the degree of the Taylor approximation. Some useful techniques are
employed for reducing the computational cost. Numerical comparisons with
state-of-the-art algorithms show that the algorithms perform well in both
accuracy and efficiency.
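A minimal sketch of the scaling-and-modified-squaring idea for $\varphi$, with a fixed scaling parameter and Taylor degree instead of the backward-error-optimal choices derived in the paper; it relies on the doubling identities $\exp(2B)=\exp(B)^2$ and $\varphi(2B)=\varphi(B)(\exp(B)+I)/2$:

```python
import numpy as np

def phi(A, s=8, m=12):
    """Evaluate phi(A) = sum_{k>=0} A^k/(k+1)! by scaling and modified
    squaring with a degree-m truncated Taylor series (fixed s and m,
    for illustration)."""
    n = A.shape[0]
    I = np.eye(n)
    B = A / 2.0 ** s
    E, P, term = I.copy(), I.copy(), I.copy()
    for k in range(1, m + 1):
        term = term @ B / k            # B^k / k!
        E += term                      # Taylor series of exp(B)
        P += term / (k + 1)            # Taylor series of phi(B)
    for _ in range(s):                 # undo the scaling
        P = P @ (E + I) / 2            # phi(2B) = phi(B)(exp(B) + I)/2
        E = E @ E                      # exp(2B) = exp(B)^2
    return P

A = np.diag([1.0, -2.0])
# Check against phi(z) = (exp(z) - 1)/z on the diagonal
print(np.diag(phi(A)), (np.exp([1.0, -2.0]) - 1) / np.array([1.0, -2.0]))
```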
|
Multidisciplinary cooperation is now common in research since social issues
inevitably involve multiple disciplines. In research articles, reference
information, especially citation content, is an important representation of
communication among different disciplines. Analyzing the distribution
characteristics of references from different disciplines in research articles
is fundamental to detecting the sources of cited information and identifying
the contributions of different disciplines. This work takes articles in PLoS
as its data and characterizes the references from different disciplines based on
Citation Content Analysis (CCA). First, we download 210,334 full-text articles
from PLoS and collect the information of the in-text citations. Then, we
identify the discipline of each reference in these academic articles. To
characterize the distribution of these references, we analyze three
characteristics, namely, the number of citations, the average cited intensity
and the average citation length. Finally, we conclude that the distributions of
references from different disciplines are significantly different. Although
most references come from Natural Science, Humanities and Social Sciences play
important roles in the Introduction and Background sections of the articles.
Basic disciplines, such as Mathematics, mainly provide research methods in the
articles in PLoS. Citations mentioned in the Results and Discussion sections of
articles are mainly in-discipline citations, such as citations from Nursing and
Medicine in PLoS.
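The three distribution characteristics can be computed from per-reference citation records; a sketch with illustrative field names (not those of the PLoS corpus):

```python
from collections import defaultdict

def reference_profiles(citations):
    """Per-discipline characteristics of in-text citations.

    citations: records like {"discipline": "Mathematics", "mentions": 3,
    "length_words": 24}, where `mentions` counts how often a reference is
    cited in the text and `length_words` is the total length of its
    citation contexts (field names are illustrative).
    """
    groups = defaultdict(list)
    for c in citations:
        groups[c["discipline"]].append(c)
    return {
        disc: {
            "num_citations": len(refs),
            "avg_cited_intensity": sum(r["mentions"] for r in refs) / len(refs),
            "avg_citation_length": sum(r["length_words"] for r in refs) / len(refs),
        }
        for disc, refs in groups.items()
    }
```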
|
This paper looks into the modeling of hallucination in the human brain.
Hallucinations are known to be causally associated with malfunctions in the
interaction of different areas of the brain involved in perception. Focusing
on visual hallucination and its underlying causes, we identify an adversarial
mechanism between the different parts of the brain that are responsible for
visual perception. We then show how the
characterized adversarial interactions in the brain can be modeled by a
generative adversarial network.
|
The Radiance Enhancement (RE) method was introduced for efficient detection
of clouds from space. Recently, we reported that, owing to the high
reflectance of combustion-originated smoke, this approach can also be
generalized to detect forest fires by retrieving and analyzing datasets
collected from a space-orbiting micro-spectrometer operating in the
near-infrared spectral range. In our previous publication we compared observed
and synthetic radiance spectra by developing a method that computes the
surface reflectance of mixed canopies as a weighted sum based on their areal
coverage. However, this approach should be justified by a method based on the
corresponding proportions of the upwelling radiance. The computations
performed in this study reveal a good match between the areal coverage of
canopies and the corresponding proportions of the upwelling radiance, owing
to the effect of the instrument slit function.
|
Security issues in shipped code can lead to unforeseen device malfunction,
system crashes or malicious exploitation by attackers after deployment. These
vulnerabilities incur a repair cost and, above all, risk the company's
credibility. It pays off when these issues are detected and fixed well ahead
of time, before release. The Common Weakness Enumeration (CWE) is a
nomenclature describing general vulnerability patterns observed in C code. In
this work, we
propose a deep learning model that learns to detect some of the common
categories of security vulnerabilities in source code efficiently. The AI
architecture is an Attention Fusion model, that combines the effectiveness of
recurrent, convolutional and self-attention networks towards decoding the
vulnerability hotspots in code. Utilizing the code AST structure, our model
builds an accurate understanding of code semantics with far fewer learnable
parameters. Besides a novel way of efficiently detecting code
vulnerabilities, an additional novelty of this model is that it points
exactly to the code sections deemed vulnerable, helping a developer to
quickly focus on them; this becomes the "explainable" part of the
vulnerability detection. The proposed AI achieves a 98.40% F1-score on
specific CWEs from the benchmarked NIST SARD dataset and compares well with
state of the art.
|
Off-policy multi-step reinforcement learning algorithms consist of
conservative and non-conservative algorithms: the former actively cut traces,
whereas the latter do not. Recently, Munos et al. (2016) proved the convergence
of conservative algorithms to an optimal Q-function. In contrast,
non-conservative algorithms are thought to be unsafe and have a limited or no
theoretical guarantee. Nonetheless, recent studies have shown that
non-conservative algorithms empirically outperform conservative ones. Motivated
by the empirical results and the lack of theory, we carry out theoretical
analyses of Peng's Q($\lambda$), a representative example of non-conservative
algorithms. We prove that it also converges to an optimal policy provided that
the behavior policy slowly tracks a greedy policy in a way similar to
conservative policy iteration. Such a result has been conjectured to be true
but has not been proven. We also experiment with Peng's Q($\lambda$) in complex
continuous control tasks, confirming that Peng's Q($\lambda$) often outperforms
conservative algorithms despite its simplicity. These results indicate that
Peng's Q($\lambda$), which was thought to be unsafe, is a theoretically-sound
and practically effective algorithm.
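A sketch of the Peng's Q($\lambda$) return computation along a sampled trajectory, in its usual backward-recursive form (bootstrapping at the horizon; the training loop around it is omitted):

```python
import numpy as np

def peng_q_lambda_targets(rewards, q_next, lam, gamma):
    """Peng's Q(lambda) returns along one trajectory (backward recursion):
    G_t = r_t + gamma * ((1 - lam) * max_a Q(s_{t+1}, a) + lam * G_{t+1}),
    with q_next[t] = max_a Q(s_{t+1}, a) and bootstrapping at the horizon."""
    T = len(rewards)
    G = np.empty(T)
    G[-1] = rewards[-1] + gamma * q_next[-1]
    for t in range(T - 2, -1, -1):
        G[t] = rewards[t] + gamma * ((1 - lam) * q_next[t] + lam * G[t + 1])
    return G
```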
|
As the use of algorithmic systems in high-stakes decision-making increases,
the ability to contest algorithmic decisions is being recognised as an
important safeguard for individuals. Yet, there is little guidance on what
`contestability'--the ability to contest decisions--in relation to algorithmic
decision-making requires. Recent research presents different conceptualisations
of contestability in algorithmic decision-making. We contribute to this growing
body of work by describing and analysing the perspectives of people and
organisations who made submissions in response to Australia's proposed `AI
Ethics Framework', the first framework of its kind to include `contestability'
as a core ethical principle. Our findings reveal that while the nature of
contestability is disputed, it is seen as a way to protect individuals, and it
resembles contestability in relation to human decision-making. We reflect on
and discuss the implications of these findings.
|
It is well known that entropy production is a proxy to the detection of
non-equilibrium, i.e. of the absence of detailed balance; however, due to the
global character of this quantity, knowing it does not allow one to identify
spatial currents or fluxes of information among specific elements of the system
under study. In this respect, much more insight can be gained by studying
transfer entropy and response, which allow quantifying the relative influence
of parts of the system and the asymmetry of the fluxes. In order to understand
the relation between the above-mentioned quantities, we investigate spatially
asymmetric extended systems. First, we consider a simplified linear stochastic
model, which can be studied analytically; then, we include nonlinear terms in
the dynamics. Extensive numerical investigation shows the relation between
entropy production and the above-introduced degrees of asymmetry. Finally, we
apply our approach to the highly nontrivial dynamics generated by the Lorenz
'96 model for Earth oceanic circulation.
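A plug-in estimator of transfer entropy for a pair of discrete time series, using one step of history (a minimal sketch; the paper's analysis concerns continuous extended systems):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy T_{X->Y} with one step of history:
    sum over p(y', y, x) * log[ p(y'|y, x) / p(y'|y) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles_y = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_marg = pairs_yy[(y1, y0)] / singles_y[y0]
        te += (c / n) * np.log(p_cond_full / p_cond_marg)
    return te
```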
|
Recurrent neural networks are a powerful means in diverse applications. We
show that, together with so-called conceptors, they also allow fast learning,
in contrast to other deep learning methods. In addition, a relatively small
number of examples suffices to train neural networks with high accuracy. We
demonstrate this with two applications, namely speech recognition and detecting
car driving maneuvers. We improve the state of the art by application-specific
preparation techniques: For speech recognition, we use mel frequency cepstral
coefficients leading to a compact representation of the frequency spectra, and
detecting car driving maneuvers can be done without the commonly used
polynomial interpolation, as our evaluation suggests.
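A sketch of the conceptor computation in Jaeger's formulation, where a conceptor is the regularized map $C = R(R + \alpha^{-2} I)^{-1}$ built from the reservoir state correlation matrix $R$; a test sequence can then be assigned to the class whose conceptor maximizes the evidence $z^{T}Cz$ over its states:

```python
import numpy as np

def conceptor(states, alpha):
    """Conceptor of a driven reservoir: C = R (R + alpha^{-2} I)^{-1},
    where R is the state correlation matrix (states: units x timesteps)
    and alpha is the aperture parameter."""
    R = states @ states.T / states.shape[1]
    return R @ np.linalg.inv(R + np.eye(R.shape[0]) / alpha ** 2)
```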
|
Network filtering is an important form of dimension reduction to isolate the
core constituents of large and interconnected complex systems. We introduce a
new technique to filter large dimensional networks arising out of dynamical
behavior of the constituent nodes, exploiting their spectral properties. As
opposed to the well known network filters that rely on preserving key
topological properties of the realized network, our method treats the spectrum
as the fundamental object and preserves spectral properties. Applying
asymptotic theory for high dimensional data for the filter, we show that it can
be tuned to interpolate between zero filtering to maximal filtering that
induces sparsity and consistency while having the least spectral distance from
a linear shrinkage estimator. We apply our proposed filter to covariance
networks constructed from financial data, to extract the key subnetwork
embedded in the full sample network.
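For intuition only, a toy eigenvalue-shrinkage filter that interpolates between no filtering and a flat, linear-shrinkage-like spectrum; the paper's filter is built on high-dimensional asymptotics rather than this naive rule:

```python
import numpy as np

def spectral_filter(S, eta):
    """Toy spectral filter on a covariance matrix S: eta = 0 leaves S
    unchanged, eta = 1 flattens the spectrum to its mean (maximal
    filtering); intermediate eta shrinks eigenvalues toward the mean."""
    evals, evecs = np.linalg.eigh(S)
    shrunk = (1 - eta) * evals + eta * evals.mean()
    return evecs @ np.diag(shrunk) @ evecs.T
```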
|
Intent Classification (IC) and Slot Labeling (SL) models, which form the
basis of dialogue systems, often encounter noisy data in real-world
environments. In this work, we investigate how robust IC/SL models are to noisy
data. We collect and publicly release a test-suite for seven common noise types
found in production human-to-bot conversations (abbreviations, casing,
misspellings, morphological variants, paraphrases, punctuation and synonyms).
On this test-suite, we show that common noise types substantially degrade the
IC accuracy and SL F1 performance of state-of-the-art BERT-based IC/SL models.
By leveraging cross-noise robustness transfer -- training on one noise type to
improve robustness on another noise type -- we design aggregate
data-augmentation approaches that increase the model performance across all
seven noise types by +10.8% for IC accuracy and +15 points for SL F1 on
average. To the best of our knowledge, this is the first work to present a
single IC/SL model that is robust to a wide range of noise phenomena.
|
The Malmquist-Takenaka system is a perturbation of the classical
trigonometric system, where powers of $z$ are replaced by products of other
M\"obius transforms of the disc. The system is also inherently connected to the
so-called nonlinear phase unwinding decomposition, which has been at the
center of much recent activity. We prove $L^p$ bounds for the maximal partial sum
operator of the Malmquist-Takenaka series under additional assumptions on the
zeros of the M\"obius transforms. We locate the problem in the time-frequency
setting and, in particular, we connect it to the polynomial Carleson theorem.
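For orientation, the Malmquist-Takenaka system attached to a sequence $(a_k)_{k\ge1}$ in the unit disc is usually written as
$$\phi_n(z) = \frac{\sqrt{1-|a_n|^2}}{1-\overline{a_n}\,z}\,\prod_{k=1}^{n-1}\frac{z-a_k}{1-\overline{a_k}\,z},$$
which reduces to the trigonometric system $\phi_n(z)=z^{n-1}$ when all $a_k=0$.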
|
We study the properties of the Nakajima-Zwanzig memory kernel for a qubit
immersed in a many-body localized (i.e.\ disordered and interacting) bath. We
argue that the memory kernel decays as a power law in both the localized and
ergodic regimes, and show how this can be leveraged to extract $t\to\infty$
populations for the qubit from finite time ($J t \leq 10^2$) data in the
thermalizing phase. This allows us to quantify how the long-time values of the
populations approach the expected thermalized state as the bath approaches the
thermodynamic limit. This approach should provide a good complement to
state-of-the-art numerical methods, for which the long-time dynamics with large
baths are impossible to simulate in this phase. Additionally, our numerics on
finite baths reveal the possibility for unbounded exponential growth in the
memory kernel, a phenomenon rooted in the appearance of exceptional points in
the projected Liouvillian governing the reduced dynamics. In small systems
amenable to exact numerics, we find that these pathologies may have some
correlation with delocalization.
|
Human evaluation of modern high-quality machine translation systems is a
difficult problem, and there is increasing evidence that inadequate evaluation
procedures can lead to erroneous conclusions. While there has been considerable
research on human evaluation, the field still lacks a commonly-accepted
standard procedure. As a step toward this goal, we propose an evaluation
methodology grounded in explicit error analysis, based on the Multidimensional
Quality Metrics (MQM) framework. We carry out the largest MQM research study to
date, scoring the outputs of top systems from the WMT 2020 shared task in two
language pairs using annotations provided by professional translators with
access to full document context. We analyze the resulting data extensively,
finding among other results a substantially different ranking of evaluated
systems from the one established by the WMT crowd workers, exhibiting a clear
preference for human over machine output. Surprisingly, we also find that
automatic metrics based on pre-trained embeddings can outperform human crowd
workers. We make our corpus publicly available for further research.
|
We define a multi-group version of the mean-field or Curie-Weiss spin model.
For this model, we show how, analogously to the classical (single-group) model,
the three temperature regimes are defined. Then we use the method of moments to
determine for each regime how the vector of the group magnetisations behaves
asymptotically. Some possible applications to social or political sciences are
discussed.
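A Metropolis sketch of such a model under one common normalization of the Hamiltonian, $H=-\frac{1}{2N}\sum_{i,j}J_{g(i)g(j)}\sigma_i\sigma_j$ with group coupling matrix $J$ (the paper's results are analytic, via the method of moments):

```python
import numpy as np

def group_magnetisations(J, sizes, beta, sweeps=200, seed=0):
    """Metropolis sketch of a multi-group Curie-Weiss model.

    H = -(1/(2N)) * sum_{i,j} J[g(i), g(j)] * s_i * s_j, with g(i) the
    group of spin i. Returns the per-group magnetisations.
    """
    rng = np.random.default_rng(seed)
    N, G = sum(sizes), len(sizes)
    g = np.repeat(np.arange(G), sizes)        # group label of each spin
    s = rng.choice([-1, 1], size=N)
    S = np.array([s[g == k].sum() for k in range(G)], dtype=float)
    for _ in range(sweeps * N):
        i = rng.integers(N)
        k = g[i]
        h = (J[k] @ S - J[k, k] * s[i]) / N   # local mean field at spin i
        dE = 2.0 * s[i] * h                   # energy change of flipping s_i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            S[k] -= 2 * s[i]
            s[i] = -s[i]
    return S / np.asarray(sizes)

J = np.array([[1.0, 0.3], [0.3, 1.0]])        # intra-/inter-group couplings
print(group_magnetisations(J, sizes=[50, 50], beta=2.0))
```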
|
Short pulse lasers are used to characterize the nonlinear response of
amplified photodetectors. Two widely used balanced detectors are characterized
in terms of the amplitude, area, broadening, and balancing mismatch of their
impulse responses. The dynamic impact of pulses on the detector is also
discussed. It is demonstrated that using photodetectors with short pulses
triggers nonlinearities even when the source average power is well below the
detector continuous power saturation threshold.
|
To enable a deep learning-based system to be used in the medical domain as a
computer-aided diagnosis system, it is essential to not only classify diseases
but also present the locations of the diseases. However, collecting
instance-level annotations for various thoracic diseases is expensive.
Therefore, weakly supervised localization methods have been proposed that use
only image-level annotation. While the previous methods presented the disease
location as the most discriminative part for classification, this causes a deep
network to localize wrong areas for indistinguishable X-ray images. To solve
this issue, we propose a spatial attention method using disease masks that
describe the areas where diseases mainly occur. We then apply the spatial
attention to find the precise disease area by highlighting the highest
probability of disease occurrence. Meanwhile, the various sizes, rotations and
noise in chest X-ray images make generating the disease masks challenging. To
reduce the variation among images, we employ an alignment module to transform
an input X-ray image into a generalized image. Through extensive experiments on
the NIH-Chest X-ray dataset with eight kinds of diseases, we show that the
proposed method results in superior localization performance compared to
state-of-the-art methods.
|
Probability is an important question in the ontological interpretation of
quantum mechanics. It has been discussed in some trajectory interpretations
such as Bohmian mechanics and stochastic mechanics. New questions arise when
the probability domain extends to the complex space, including the generation
of complex trajectory, the definition of the complex probability, and the
relation of the complex probability to the quantum probability. The complex
treatment proposed in this article applies the optimal quantum guidance law to
derive the stochastic differential equation governing a particle random motion
in the complex plane. The probability distribution of the particle position
over the complex plane is formed by an ensemble of the complex quantum random
trajectories, which are solved from the complex stochastic differential
equation. Meanwhile, this probability distribution is verified by the solution
of the complex Fokker-Planck equation. It is shown that quantum probability and
classical probability can be integrated under the framework of complex
probability, such that they can both be derived from the same probability
distribution by different statistical ways of collecting spatial points.
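The ensemble construction can be sketched with Euler-Maruyama integration in the complex plane; the drift below is a placeholder for the drift derived from the optimal quantum guidance law, and the noise normalization is an assumption of this sketch:

```python
import numpy as np

def complex_trajectories(drift, z0, dt, steps, n_paths, hbar_over_m=1.0, seed=0):
    """Euler-Maruyama for dz = drift(z) dt + sqrt(hbar/m) dW in the
    complex plane, with dW a complex Wiener increment (variance dt,
    split evenly between real and imaginary parts -- an assumption of
    this sketch). `drift` stands in for the optimal-quantum-guidance
    drift derived in the paper."""
    rng = np.random.default_rng(seed)
    z = np.full(n_paths, z0, dtype=complex)
    for _ in range(steps):
        dW = (rng.normal(scale=np.sqrt(dt / 2), size=n_paths)
              + 1j * rng.normal(scale=np.sqrt(dt / 2), size=n_paths))
        z = z + drift(z) * dt + np.sqrt(hbar_over_m) * dW
    return z  # end points; a 2D histogram gives the complex-plane density
```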
|
We consider a family of Schr\"odinger equations with unbounded Hamiltonian
quadratic nonlinearities on a generic torus of dimension $d\geq1$. We study
the behaviour of high Sobolev norms $H^{s}$, $s\gg1$, of solutions with
initial conditions in $H^{s}$ whose $H^{\rho}$-Sobolev norm, $1\ll\rho\ll s$,
is smaller than $\varepsilon\ll1$. We provide a control of the $H^{s}$-norm
over a time interval of order $O(\varepsilon^{-2})$.
Due to the lack of conserved quantities controlling high Sobolev norms, the
key ingredient of the proof is the construction of a modified energy
equivalent to the "low norm" $H^{\rho}$ (when $\rho$ is sufficiently high)
over a nontrivial time interval $O(\varepsilon^{-2})$. This is achieved by
means of normal form techniques for quasi-linear equations involving
para-differential calculus. The main difficulty is to control the possible
loss of derivatives due to the small divisors arising from three-wave
interactions. By performing "tame" energy estimates we obtain upper bounds
for higher Sobolev norms $H^{s}$.
|
Deepfakes are the result of digital manipulation to obtain credible videos in
order to deceive the viewer. This is done through deep learning techniques
based on autoencoders or GANs that become more accessible and accurate year
after year, resulting in fake videos that are very difficult to distinguish
from real ones. Traditionally, CNNs have been used to perform deepfake
detection, with the best results obtained using methods based on EfficientNet
B7. In this study, we combine various types of Vision Transformers with a
convolutional EfficientNet B0 used as a feature extractor, obtaining comparable
results with some very recent methods that use Vision Transformers. Differently
from the state-of-the-art approaches, we use neither distillation nor ensemble
methods. The best model achieved an AUC of 0.951 and an F1 score of 88.0%, very
close to the state-of-the-art on the DeepFake Detection Challenge (DFDC).
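A schematic of the CNN-features-into-Transformer design in PyTorch, with a small convolutional stack standing in for the EfficientNet B0 backbone and generic hyperparameters (not those of the study):

```python
import torch
import torch.nn as nn

class ConvViTDetector(nn.Module):
    """Sketch: a convolutional backbone produces patch features that a
    Transformer encoder aggregates for real/fake classification."""
    def __init__(self, dim=128, heads=4, layers=4):
        super().__init__()
        self.backbone = nn.Sequential(        # placeholder for EfficientNet B0
            nn.Conv2d(3, dim, kernel_size=7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(dim, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        enc = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)
        self.head = nn.Linear(dim, 1)          # logit: fake vs. real

    def forward(self, x):                      # x: (B, 3, H, W)
        f = self.backbone(x)                   # (B, dim, h, w)
        tokens = f.flatten(2).transpose(1, 2)  # (B, h*w, dim) patch tokens
        tokens = torch.cat([self.cls.expand(x.size(0), -1, -1), tokens], 1)
        z = self.encoder(tokens)
        return self.head(z[:, 0])              # classify from the CLS token
```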
|
The focus of this research is sensor applications including radar and sonar.
Optimal sensing means achieving the best signal quality with the least time and
energy cost, which allows processing more data. This paper presents novel work
that uses integer linear programming to achieve optimal sensing by selecting
the best possible number of signals of one type, or a combination of multiple
types of signals, to ensure the best sensing quality under all given
constraints. A solution based on a heuristic algorithm is implemented to
improve the computing time performance. What is novel in this solution is
synthesis of an optimized signal mix using information such as but not limited
to signal quality, energy and computing time.
|
Mahi-mahi (Coryphaena hippurus) are a highly migratory pelagic fish, but
little is known about what environmental factors drive their broad
distribution. This study examined how temperature influences aerobic scope and
swimming performance in mahi. Mahi were acclimated to four temperatures
spanning their natural range (20, 24, 28, and 32{\deg}C; 5-27 days) and
critical swimming speed (Ucrit), metabolic rates, aerobic scope, and optimal
swim speed were measured. Aerobic scope and Ucrit were highest in
28{\deg}C-acclimated fish. 20{\deg}C-acclimated mahi experienced significantly
decreased aerobic scope and Ucrit relative to 28{\deg}C-acclimated fish (57 and
28% declines, respectively). 32{\deg}C-acclimated mahi experienced increased
mortality and a significant 23% decline in Ucrit, and a trend for a 26% decline
in factorial aerobic scope relative to 28{\deg}C-acclimated fish. Absolute
aerobic scope showed a similar pattern to factorial aerobic scope. Our results
are generally in agreement with previously observed distribution patterns for
wild fish. Although thermal performance can vary across life stages, the
highest tested swim performance and aerobic scope found in the present study
(28{\deg}C), aligns with recently observed habitat utilization patterns for
wild mahi and could be relevant for climate change predictions.
|
The mapped bases or Fake Nodes Approach (FNA), introduced in [10], allows one
to change the set of nodes without the need to resample the function. Such a
scheme has been successfully applied to prevent the appearance of the Gibbs
phenomenon when interpolating discontinuous functions. However, the originally
proposed S-Gibbs map suffers from a subtle instability when the interpolant is
constructed at equidistant nodes, due to Runge's phenomenon. Here, we
propose a novel approach, termed Gibbs-Runge-Avoiding Stable Polynomial
Approximation (GRASPA), where both Runge's and Gibbs phenomena are mitigated.
After providing a theoretical analysis of the Lebesgue constant associated to
the mapped nodes, we test the new approach by performing different numerical
experiments which confirm the theoretical findings.
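The core of the mapped-bases construction fits in a few lines: interpolate the original samples against the mapped nodes $S(x_i)$ and evaluate at $S(x)$. A naive monomial-basis sketch (adequate for small degrees only; stable bases are used in practice):

```python
import numpy as np

def fake_nodes_interp(x_nodes, f_vals, S, x_eval):
    """Fake Nodes interpolation: fit the data against the mapped nodes
    S(x_i) and evaluate at S(x), so f is never resampled."""
    coeffs = np.polyfit(S(x_nodes), f_vals, deg=len(x_nodes) - 1)
    return np.polyval(coeffs, S(x_eval))
```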
|
We prove that the torsion points of an abelian variety are equidistributed
over the corresponding Berkovich space with respect to the canonical measure.
|
We give explicit examples of quaternion-K\"ahler and hypercomplex structures
on bundles over hyperK\"ahler manifolds. We study the infinitesimal symmetries
of these examples and the associated Galicki-Lawson quaternion-K\"ahler moment
map. In particular, this leads us to a new proof of a
hyperK\"ahler/quaternion-K\"ahler type correspondence. We also give examples of
other Einstein metrics and balanced Hermitian structures on these bundles.
|
Code review plays an important role in software quality control. A typical
review process would involve a careful check of a piece of code in an attempt
to find defects and other quality issues/violations. One type of issues that
may impact the quality of the software is code smells - i.e., bad programming
practices that may lead to defects or maintenance issues. Yet, little is known
about the extent to which code smells are identified during code reviews. To
investigate the concept behind code smells identified in code reviews and what
actions reviewers suggest and developers take in response to the identified
smells, we conducted an empirical study of code smells in code reviews using
the two most active OpenStack projects (Nova and Neutron). We manually checked
19,146 review comments obtained by keywords search and random selection, and
got 1,190 smell-related reviews to study the causes of code smells and actions
taken against the identified smells. Our analysis found that 1) code smells
were not commonly identified in code reviews, 2) smells were usually caused by
violation of coding conventions, 3) reviewers usually provided constructive
feedback, including fixing (refactoring) recommendations to help developers
remove smells, and 4) developers generally followed those recommendations and
actioned the changes. Our results suggest that 1) developers should closely
follow coding conventions in their projects to avoid introducing code smells,
and 2) review-based detection of code smells is perceived to be a trustworthy
approach by developers, mainly because reviews are context-sensitive (as
reviewers are more aware of the context of the code given that they are part of
the project's development team).
|
With the popularity of Machine Learning (ML) solutions, algorithms and data
have been released faster than the capacity of processing them. In this
context, the problem of Algorithm Recommendation (AR) has received a great
deal of attention recently. This problem has been addressed in the literature
as a learning task, often as a Meta-Learning problem where the aim is to
recommend the best alternative for a specific dataset. To this end, datasets
encoded by meta-features are explored by ML algorithms that try to learn the
mapping between meta-representations and the best technique to be used. One of
the challenges for the successful use of ML is to define which features are the
most valuable for a specific dataset since several meta-features can be used,
which increases the meta-feature dimension. This paper presents an empirical
analysis of Feature Selection and Feature Extraction in the meta-level for the
AR problem. The present study was focused on three criteria: predictive
performance, dimensionality reduction, and pipeline runtime. As we verified,
applying Dimensionality Reduction (DR) methods did not improve predictive
performances in general. However, DR solutions reduced about 80% of the
meta-features, obtaining essentially the same performance as the original setup
but with lower runtimes. The only exception was PCA, which presented about the
same runtime as the original meta-features. Experimental results also showed
that various datasets have many non-informative meta-features and that it is
possible to obtain high predictive performance using around 20% of the original
meta-features. Therefore, due to their natural trend for high dimensionality,
DR methods should be used for Meta-Feature Selection and Meta-Feature
Extraction.
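For instance, a variance-based PCA extraction at the meta-level can be wired into a recommender in a few lines (a sketch; `X_meta` and `y_best` are hypothetical placeholders for the meta-feature table and the best-algorithm labels):

```python
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# X_meta: one row of meta-features per dataset (hypothetical placeholder);
# y_best: label of the best algorithm for each dataset.
recommender = make_pipeline(
    PCA(n_components=0.95),                  # keep 95% of the variance
    RandomForestClassifier(n_estimators=200, random_state=0),
)
# recommender.fit(X_meta, y_best)
```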
|
Whereas electron-phonon scattering typically relaxes the electron's momentum
in metals, a perpetual exchange of momentum between phonons and electrons
conserves total momentum and can lead to a coupled electron-phonon liquid with
unique transport properties. This theoretical idea was proposed decades ago and
has been revisited recently, but the experimental signatures of an
electron-phonon liquid have rarely been reported. We present evidence of such a
behavior in a transition metal ditetrelide, NbGe$_2$, from three different
experiments. First, quantum oscillations reveal an enhanced quasiparticle mass,
which is unexpected in NbGe$_2$ due to weak electron-electron correlations,
hence pointing at electron-phonon interactions. Second, resistivity
measurements exhibit a discrepancy between the experimental data and calculated
curves within a standard Fermi liquid theory. Third, Raman scattering shows
anomalous temperature dependence of the phonon linewidths which fits an
empirical model based on phonon-electron coupling. We discuss structural
factors, such as chiral symmetry, short metallic bonds, and a low-symmetry
coordination environment as potential sources of coupled electron-phonon
liquids.
|
A variety of techniques have been developed for the approximation of
non-periodic functions. In particular, there are approximation techniques based
on rank-$1$ lattices and transformed rank-$1$ lattices, including methods that
use sampling sets consisting of Chebyshev- and tent-transformed nodes. We
compare these methods with a parameterized transformed Fourier system that
yields similar $\ell_2$-approximation errors.
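For reference, rank-$1$ lattice nodes are generated as $x_j = \{j\,\mathbf{z}/N\}$, $j=0,\dots,N-1$, for a generating vector $\mathbf{z}\in\mathbb{Z}^d$; a two-line sketch:

```python
import numpy as np

def rank1_lattice(z, N):
    """Rank-1 lattice nodes x_j = (j * z / N) mod 1 for j = 0..N-1."""
    return (np.arange(N)[:, None] * np.asarray(z)[None, :] / N) % 1.0

nodes = rank1_lattice(z=[1, 34], N=55)   # a classical Fibonacci lattice in 2D
```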
|
We show that the cobordism class of a polarization of Hodge module defines a
natural transformation from the Grothendieck group of Hodge modules to the
cobordism group of self-dual bounded complexes with real coefficients and
constructible cohomology sheaves in a compatible way with pushforward by proper
morphisms. This implies a new proof of the well-definedness of the natural
transformation from the Grothendieck group of varieties over a given variety to
the above cobordism group (with real coefficients). As a corollary, we get a
slight extension of a conjecture of Brasselet, Sch\"urmann and Yokura, showing
that in the $\bf Q$-homologically isolated singularity case, the homology
$L$-class which is the specialization of the Hirzebruch class coincides with
the intersection complex $L$-class defined by Goresky, MacPherson, and others
if and only if the sum of the reduced Euler-Hodge signatures of the stalks of
the shifted intersection complex vanishes. Here Hodge signature uses a
polarization of Hodge structure, and it does not seem easy to define it by a
purely topological method.
|