The paper describes the construction of entropy-stable discontinuous Galerkin
difference (DGD) discretizations for hyperbolic conservation laws on
unstructured grids. The construction takes advantage of existing theory for
entropy-stable summation-by-parts (SBP) discretizations. In particular, the
paper shows how DGD discretizations -- both linear and nonlinear -- can be
constructed by defining the SBP trial and test functions in terms of
interpolated DGD degrees of freedom. In the case of entropy-stable
discretizations, the entropy variables rather than the conservative variables
must be interpolated to the SBP nodes. A fully-discrete entropy-stable scheme
is obtained by adopting the relaxation Runge-Kutta version of the midpoint
method. In addition, DGD matrix operators for the first derivative are shown to
be dense-norm SBP operators. Numerical results are presented to verify the
accuracy and entropy-stability of the DGD discretization in the context of the
Euler equations. The results suggest that DGD and SBP solution errors are
similar for the same number of degrees of freedom. Finally, an investigation of
the DGD spectra shows that the spectral radius is relatively insensitive to
discretization order; however, the high-order methods do suffer from the linear
instability reported for other entropy-stable discretizations.
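As a rough illustration of the relaxation idea mentioned above, the sketch below applies relaxation to the explicit midpoint Runge-Kutta method for a toy conservative ODE with a quadratic entropy. This is a minimal sketch of the general relaxation Runge-Kutta mechanism, not the paper's DGD solver; the model problem, entropy, and function names are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def f(u):                      # toy example: rigid rotation, entropy-conservative
    return np.array([-u[1], u[0]])

def eta(u):                    # quadratic entropy ||u||^2 / 2
    return 0.5 * np.dot(u, u)

def deta(u):                   # entropy variables eta'(u)
    return u

def relaxed_midpoint_step(u, dt):
    k1 = f(u)
    y2 = u + 0.5 * dt * k1     # midpoint stage value
    k2 = f(y2)
    d = dt * k2                # unrelaxed update direction
    # entropy production estimate of the RK method at the stage value
    prod = dt * np.dot(deta(y2), k2)
    # pick the relaxation parameter gamma so the discrete entropy balance
    # holds exactly for the relaxed update u + gamma * d
    g = lambda gam: eta(u + gam * d) - eta(u) - gam * prod
    gamma = brentq(g, 0.5, 1.5)
    return u + gamma * d

u, dt = np.array([1.0, 0.0]), 0.1
for _ in range(1000):
    u = relaxed_midpoint_step(u, dt)
print(eta(u))   # stays at 0.5 up to root-finder tolerance
```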
|
Segmenting histology images into diagnostically relevant regions is
imperative to support timely and reliable decisions by pathologists. To this
end, computer-aided techniques have been proposed to delineate relevant regions
in scanned histology slides. However, these techniques require large
task-specific datasets of annotated pixels, which are tedious, time-consuming,
expensive, and for many histology tasks infeasible to acquire. Thus,
weakly-supervised semantic segmentation techniques have been proposed to utilize
weak supervision that is cheaper and quicker to acquire. In this paper, we propose
SegGini, a weakly supervised segmentation method using graphs, that can utilize
weak multiplex annotations, i.e. inexact and incomplete annotations, to segment
arbitrary and large images, scaling from tissue microarray (TMA) to whole slide
image (WSI). Formally, SegGini constructs a tissue-graph representation for an
input histology image, where the graph nodes depict tissue regions. Then, it
performs weakly-supervised segmentation via node classification by using
inexact image-level labels, incomplete scribbles, or both. We evaluated SegGini
on two public prostate cancer datasets containing TMAs and WSIs. Our method
achieved state-of-the-art segmentation performance on both datasets for various
annotation settings while being comparable to a pathologist baseline.
|
Over the past decade, HCI researchers, design researchers, and practitioners
have increasingly addressed ethics-focused issues through a range of
theoretical, methodological and pragmatic contributions to the field. While
many forms of design knowledge have been proposed and described, we focus
explicitly on knowledge that has been codified as "methods," which we define as
any supports for everyday work practices of designers. In this paper, we
identify, analyze, and map a collection of 63 existing ethics-focused methods
intentionally designed for ethical impact. We present a content analysis,
providing a descriptive record of how they operationalize ethics, their
intended audience or context of use, their "core" or "script," and the means by
which these methods are formulated, articulated, and languaged. Building on
these results, we provide an initial definition of ethics-focused methods,
identifying potential opportunities for the development of future methods to
support design practice and research.
|
We present a novel neural network Maximum Mean Discrepancy (MMD) statistic by
identifying a new connection between neural tangent kernel (NTK) and MMD. This
connection enables us to develop a computationally efficient and
memory-efficient approach to compute the MMD statistic and perform NTK-based
two-sample tests, addressing the long-standing challenge of the memory and
computational complexity of the MMD statistic, which is essential for online
implementations that assimilate new samples. Theoretically, such a connection
allows us to understand the NTK test statistic properties, such as the Type-I
error and testing power for performing the two-sample test, by adapting
existing theories for kernel MMD. Numerical experiments on synthetic and
real-world datasets validate the theory and demonstrate the effectiveness of
the proposed NTK-MMD statistic.
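For concreteness, here is a minimal sketch of the standard unbiased quadratic-time MMD^2 estimator that the discussion above builds on. The paper's NTK-MMD replaces the fixed kernel with the neural tangent kernel of a network; the Gaussian kernel and toy data below are stand-in assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_unbiased(X, Y, kernel=gaussian_kernel):
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = kernel(X, X), kernel(Y, Y), kernel(X, Y)
    t_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))  # drop diagonal terms
    t_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return t_x + t_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(400, 2))
Y = rng.normal(0.5, 1.0, size=(200, 2))       # mean-shifted distribution
print(mmd2_unbiased(X[:200], X[200:]))        # same law: close to zero
print(mmd2_unbiased(X[:200], Y))              # different law: clearly positive
```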
|
This paper contributes a closed-form linear IV estimator to a class of
estimators that minimises the mean dependence of an error term on a set of
instruments. Subject to a weak uncorrelatedness exclusion restriction, root-n
consistency and asymptotic normality are achieved under a relevance condition
that is significantly weaker in at least two respects: (1) consistent estimation without
excluded instruments is possible provided endogenous covariates are
non-linearly mean-dependent on exogenous covariates, and (2) the endogenous
covariates may be uncorrelated with but mean-dependent on instruments. In
addition, this paper proposes a test of the weak relevance condition in the
case of a single endogenous covariate with exogenous covariates. Monte Carlo simulations
illustrate low bias relative to conventional IV methods when instruments are
very weak. An empirical example illustrates the practical usefulness of the
estimator where, for instance, reasonable estimates are still achieved when no
excluded instrument is used.
|
We propose an algorithm based on Hilbert space-filling curves to reorder mesh
elements in memory for use with the Spectral Element Method, aiming to attain
fewer cache misses, better locality of data reference and faster execution. We
present a technique to numerically simulate acoustic wave propagation in 2D
domains using the Spectral Element Method, and discuss computational
performance aspects of this procedure. We reorder mesh-related data via Hilbert
curves to achieve sizable reductions in execution time under several mesh
configurations in shared-memory systems. Our experiments show that the Hilbert
curve approach works well with meshes of several granularities and also with
small and large variations in element sizes, achieving reductions between 9%
and 25% in execution time when compared to three other ordering schemes.
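The sketch below shows the classic bit-twiddling computation of a Hilbert-curve index and how it could be used to sort elements by quantized centroid position. It is a generic illustration of the reordering idea, under assumed names and a toy mesh, not the authors' implementation.

```python
def xy_to_hilbert(n, x, y):
    """Map cell coordinates (x, y) on an n-by-n grid (n a power of 2) to the
    1-D index along the Hilbert curve (standard bit-manipulation algorithm)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate the quadrant so the curve stays continuous
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Hypothetical usage: sort mesh elements by the Hilbert index of their
# centroids (quantized to a 2^k grid) so that spatially neighboring elements
# also sit close together in memory.
centroids = [(3, 5), (0, 0), (7, 7), (2, 2)]          # toy centroid coords
order = sorted(range(len(centroids)),
               key=lambda i: xy_to_hilbert(8, *centroids[i]))
print(order)
```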
|
We propose a novel approach to lifelong learning, introducing a compact
encapsulated support structure which endows a network with the capability to
expand its capacity as needed to learn new tasks while preventing the loss of
learned tasks. This is achieved by splitting neurons with high semantic drift
and constructing an adjacent network to encode the new tasks at hand. We call
this the Plastic Support Structure (PSS): a compact structure for learning
new tasks that cannot be efficiently encoded in the existing structure of the
network. We validate the PSS on public datasets against existing lifelong
learning architectures, showing that it performs comparably to them without
prior knowledge of the task, in some cases with fewer parameters, and in a more
interpretable fashion, since the PSS is an encapsulated container for features
related to specific tasks. This makes it an ideal "add-on" solution for
endowing a network with the capacity to learn more tasks.
|
Entropy production characterizes irreversibility. This viewpoint allows us to
consider the thermodynamic uncertainty relation, which states that a higher
precision can be achieved at the cost of higher entropy production, as a
relation between precision and irreversibility. Considering the original and
perturbed dynamics, we show that the precision of an arbitrary counting
observable in continuous measurement of quantum Markov processes is bounded
from below by the Loschmidt echo between the two dynamics, which represents the
irreversibility of quantum dynamics. When considering particular perturbed
dynamics, our relation leads to several thermodynamic uncertainty relations,
indicating that our relation provides a unified perspective on classical and
quantum thermodynamic uncertainty relations.
|
We study the repair problem for hyperproperties specified in the temporal
logic HyperLTL. Hyperproperties are system properties that relate multiple
computation traces. This class of properties includes information flow policies
like noninterference and observational determinism. The repair problem is to
find, for a given Kripke structure, a substructure that satisfies a given
specification. We show that the repair problem is decidable for HyperLTL
specifications and finite-state Kripke structures. We provide a detailed
complexity analysis for different fragments of HyperLTL and different system
types: tree-shaped, acyclic, and general Kripke structures.
|
New integrability properties of a family of sequences of ordinary
differential equations, which contains the Riccati and Abel chains as its
simplest sequences, are studied. The determination of n generalized symmetries of
the nth-order equation in each chain provides, without any kind of integration,
n-1 functionally independent first integrals of the equation. A remaining first
integral arises by a quadrature by using a Jacobi last multiplier that is
expressed in terms of the preceding equation in the corresponding sequence. The
complete set of n first integrals is used to obtain the exact general solution
of the nth-order equation of each sequence. The results are applied to derive
directly the exact general solution of any equation in the Riccati and Abel
chains.
|
In a previous paper we presented the results of applying machine learning to
classify whether an HI 21-cm absorption spectrum arises in a source intervening
along the sight-line to a more distant radio source or within the host of the radio
source itself. This is usually determined from an optical spectrum giving the
source redshift. However, not only will this be impractical for the large
number of sources expected to be detected with the Square Kilometre Array, but
bright optical sources are the most ultraviolet-luminous at high redshift and
so bias the sample against the detection of cool, neutral gas. Adding another 44, mostly
newly detected absorbers, to the previous sample of 92, we test four different
machine learning algorithms, again using the line properties (width, depth and
number of Gaussian fits) as features. Of these algorithms, three gave some
improvement over the previous sample, with a logistic regression model giving
the best results. This suggests that the inclusion of further training data, as
new absorbers are detected, will further increase the prediction accuracy above
the current 80%. We use the logistic regression model to classify the z = 0.42
absorption towards PKS 1657-298 and find this to be associated, which is
consistent with a previous study which determined a similar redshift from the
K-band magnitude-redshift relation.
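A schematic of the classifier setup described above is sketched below using scikit-learn, with the same three line properties as features. All data here are synthetic placeholders, including the assumed tendency for associated absorbers to be wider; the real model is trained on the 136 absorbers in the text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 136
width = rng.lognormal(mean=3.0, sigma=0.8, size=n)       # km/s (synthetic)
depth = rng.uniform(0.01, 0.5, size=n)
n_fits = rng.integers(1, 6, size=n).astype(float)
# Toy generative rule standing in for the real feature-label relation:
logit = 0.02 * width + 0.5 * n_fits - 2.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = associated

X = np.column_stack([width, depth, n_fits])
model = LogisticRegression(max_iter=1000)
print(cross_val_score(model, X, y, cv=5).mean())          # toy accuracy estimate
```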
|
We study combinatorial problems with real world applications such as machine
scheduling, routing, and assignment. We propose a method that combines
Reinforcement Learning (RL) and planning. This method can equally be applied to
both the offline and online variants of the combinatorial problem, in
which the problem components (e.g., jobs in scheduling problems) are not known
in advance, but rather arrive during the decision-making process. Our solution
is quite generic, scalable, and leverages distributional knowledge of the
problem parameters. We frame the solution process as an MDP, and take a Deep
Q-Learning approach wherein states are represented as graphs, thereby allowing
our trained policies to deal with arbitrary changes in a principled manner.
Though learned policies work well in expectation, small deviations can have
substantial negative effects in combinatorial settings. We mitigate these
drawbacks by employing our graph-convolutional policies as non-optimal
heuristics in a compatible search algorithm, Monte Carlo Tree Search, to
significantly improve overall performance. We demonstrate our method on two
problems: Machine Scheduling and Capacitated Vehicle Routing. We show that our
method outperforms custom-tailored mathematical solvers, state-of-the-art
learning-based algorithms, and common heuristics, in both computation time and
performance.
|
As weak lensing surveys become deeper, they reveal more non-Gaussian aspects
of the convergence field which can only be extracted using statistics beyond
the power spectrum. In Cheng et al. (2020) we showed that the scattering
transform, a novel statistic borrowing mathematical concepts from convolutional
neural networks, is a powerful tool for cosmological parameter estimation in
the non-Gaussian regime. Here, we extend that analysis to explore its
sensitivity to dark energy and neutrino mass parameters with weak lensing
surveys. We first use image synthesis to show visually that, compared to the
power spectrum and bispectrum, the scattering transform provides a better
statistical vocabulary to characterize the perceptual properties of lensing
mass maps. We then show that it is also better suited for parameter inference:
(i) it provides higher sensitivity in the noiseless regime, and (ii) at the
noise level of Rubin-like surveys, though the constraints are not significantly
tighter than those of the bispectrum, the scattering coefficients have a more
Gaussian sampling distribution, which is an important property for likelihood
parametrization and accurate cosmological inference. We argue that the
scattering coefficients are preferred statistics considering both constraining
power and likelihood properties.
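For orientation, the snippet below computes spatially averaged 2-D scattering coefficients of a stand-in map, assuming the kymatio package is installed. It only illustrates the kind of statistic discussed above; the paper's filters, binning, and normalization will differ, and the random field here is not a real convergence map.

```python
import numpy as np
from kymatio.numpy import Scattering2D  # assumption: kymatio is available

# Stand-in "convergence map": white noise instead of a lensing simulation.
kappa = np.random.default_rng(0).normal(size=(128, 128)).astype(np.float32)
S = Scattering2D(J=4, shape=kappa.shape)   # 4 dyadic scales, default 8 angles
coeffs = S(kappa)                          # (n_coeffs, 128/2^4, 128/2^4)
features = coeffs.mean(axis=(-2, -1))      # spatially averaged descriptors
print(features.shape)
```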
|
This study applies a new approach, the Theory of Functional Connections
(TFC), to solve the two-point boundary-value problem (TPBVP) in non-Keplerian
orbit transfer. The perturbations considered are drag, solar radiation
pressure, higher-order gravitational potential harmonic terms, and multiple
bodies. The proposed approach is applied to Earth-to-Moon transfers, and
achieves exact boundary condition satisfaction with very fast convergence.
Thanks to this highly efficient approach, perturbed pork-chop plots of
Earth-to-Moon transfers are generated, and individual analyses on the
transfers' parameters are easily done at low computational costs. The minimum
fuel analysis is provided in terms of the time of flight, thrust application
points, and relative geometry of the Moon and Sun. The transfer costs obtained
are in agreement with the literature's best solutions, and in some cases are
even slightly better.
|
We propose a fast and flexible method to scale multivariate return volatility
predictions up to high-dimensions using a dynamic risk factor model. Our
approach increases parsimony via time-varying sparsity on factor loadings and
is able to sequentially learn the use of constant or time-varying parameters
and volatilities. We show in a dynamic portfolio allocation problem with 452
stocks from the S&P 500 index that our dynamic risk factor model is able to
produce more stable and sparse predictions, achieving not just considerable
portfolio performance improvements but also higher utility gains for the
mean-variance investor compared to the traditional Wishart benchmark and the
passive investment on the market index.
|
With the increased interest in machine learning and big data problems, the
need for large amounts of labelled data has also grown. However, it is often
infeasible to get experts to label all of this data, which leads many
practitioners to crowdsourcing solutions. In this paper, we present new
techniques to improve the quality of the labels while attempting to reduce the
cost. The naive approach to assigning labels is to adopt a majority vote
method; however, in the context of data labelling, this is not always ideal, as
data labellers are not equally reliable. One might, instead, give higher
priority to certain labellers through some kind of weighted vote based on past
performance. This paper investigates the use of more sophisticated methods,
such as Bayesian inference, to measure the performance of the labellers as well
as the confidence of each label. The methods we propose follow an iterative
improvement algorithm which attempts to use the fewest workers
necessary to achieve the desired confidence in the inferred label. This paper
explores simulated binary classification problems with simulated workers and
questions to test the proposed methods. Our methods outperform the standard
voting methods in both cost and accuracy while maintaining higher reliability
when there is disagreement within the crowd.
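The toy sketch below contrasts a plain majority vote with a weighted vote whose log-odds weights come from iteratively re-estimated worker agreement, a simplified binary Dawid-Skene-style loop. It illustrates the idea discussed above, not the paper's exact Bayesian procedure; all quantities are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_workers = 300, 8
truth = rng.integers(0, 2, n_items)
skill = rng.uniform(0.55, 0.95, n_workers)            # hidden reliabilities
votes = np.where(rng.random((n_items, n_workers)) < skill,
                 truth[:, None], 1 - truth[:, None])
signed = 2 * votes - 1                                 # map {0,1} -> {-1,+1}

w = np.ones(n_workers)                                 # equal trust = majority vote
labels = (signed @ w > 0).astype(int)
majority = labels.copy()
for _ in range(10):
    acc = (votes == labels[:, None]).mean(0)           # agreement with consensus
    acc = np.clip(acc, 1e-3, 1 - 1e-3)
    w = np.log(acc / (1 - acc))                        # log-odds weights
    labels = (signed @ w > 0).astype(int)              # re-weighted vote

print((majority == truth).mean(), (labels == truth).mean())
```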
|
Coded-caching is a promising technique to reduce the peak rate requirement of
backhaul links during high traffic periods. In this letter, we study the effect
of adaptive transmission on the performance of coded-caching based networks.
Particularly, concentrating on the reduction of backhaul peak load during the
high traffic periods, we develop adaptive rate and power allocation schemes
maximizing the network's successful transmission probability, defined as
the probability that all cache nodes decode their intended
signals correctly. Moreover, we study the effect of different message decoding
and buffering schemes on the system performance. As we show, the performance of
coded-caching networks is considerably affected by rate/power allocation as
well as the message decoding/buffering schemes.
|
Time series classification (TSC) has always been an important and challenging
research task. With the wide application of deep learning, more and more
researchers use deep learning models to solve TSC problems. Since time series
often contain substantial noise, which has a negative impact on network
training, practitioners usually filter the original data before training the network.
Existing schemes treat filtering and training as two separate stages, and
the design of the filter requires expert experience, which increases the design
difficulty of the algorithm and is not universal. We note that the essence of
filtering is to filter out the insignificant frequency components and highlight
the important ones, which is similar to the attention mechanism. In this paper,
we propose an attention mechanism that acts on spectrum (SAM). The network can
assign appropriate weights to each frequency component to achieve adaptive
filtering. We use L1 regularization to further enhance the frequency screening
capability of SAM. We also propose a segmented SAM (SSAM) to avoid the loss of
time-domain information caused by using the spectrum of the whole sequence, in
which a tumbling window is introduced to segment the original data. Then SAM
is applied to each segment to generate new features. We propose a heuristic
strategy to search for the appropriate number of segments. Experimental results
show that SSAM can produce better feature representations, make the network
converge faster, and improve the robustness and classification accuracy.
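A rough PyTorch sketch of the spectral-attention idea described above is given below: a learnable non-negative weight per rFFT frequency bin, applied in the frequency domain, with an L1 penalty encouraging sparse frequency selection. This is an assumed minimal form, not the authors' code; the module name and training objective are illustrative.

```python
import torch
import torch.nn as nn

class SpectrumAttention(nn.Module):
    def __init__(self, seq_len):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(seq_len // 2 + 1))

    def forward(self, x):                     # x: (batch, seq_len)
        w = torch.sigmoid(self.logits)        # per-frequency attention in (0, 1)
        X = torch.fft.rfft(x, dim=-1)         # spectrum of each series
        y = torch.fft.irfft(w * X, n=x.shape[-1], dim=-1)  # adaptive filtering
        return y, w.abs().sum()               # filtered signal, L1 penalty term

sam = SpectrumAttention(seq_len=128)
x = torch.randn(4, 128)
y, l1 = sam(x)
loss = ((y - x) ** 2).mean() + 1e-3 * l1      # illustrative training objective
loss.backward()
```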
|
A high-order flux reconstruction implementation of the hyperbolic formulation
for the incompressible Navier-Stokes equation is presented. The governing
equations employ Chorin's classical artificial compressibility (AC) formulation
cast in hyperbolic form. Instead of splitting the second-order conservation law
into two equations, one for the solution and another for the gradient, the
Navier-Stokes equation is cast into a first-order hyperbolic system of
equations. Including the gradients in the AC iterative process results in a
significant improvement in accuracy for the pressure, velocity, and its
gradients. Furthermore, this treatment allows for taking larger time-steps
since the hyperbolic formulation eliminates the restriction due to diffusion.
Tests using the method of manufactured solutions show that solving the
conventional form of the Navier-Stokes equation lowers the order of accuracy
for gradients, while the hyperbolic method is shown to provide equal orders of
accuracy for both the velocity and its gradients which may be beneficial in
several applications. Two- and three-dimensional benchmark tests demonstrate
the superior accuracy and computational efficiency of the developed solver in
comparison to the conventional method and other published works. This study
shows that the developed high-order hyperbolic solver for incompressible flows
is attractive due to its accuracy, stability and efficiency in solving
diffusion dominated problems.
|
In this investigation, force field-based molecular dynamics (MD) simulations
have been employed to generate detailed structural representations for a range
of amorphous quaternary CaO-MgO-Al2O3-SiO2 (CMAS) and ternary CaO-Al2O3-SiO2
(CAS) glasses. Comparison of the simulation results with select experimental
X-ray and neutron total scattering and literature data reveals that the
MD-generated structures have captured the key structural features of these CMAS
and CAS glasses. Based on the MD-generated structural representations, we have
developed two structural descriptors, specifically (i) average metal oxide
dissociation energy (AMODE) and (ii) average self-diffusion coefficient (ASDC)
of all the atoms at melting. Both structural descriptors are seen to more
accurately predict the relative glass reactivity than the commonly used degree
of depolymerization parameter, especially for the eight synthetic CAS glasses
that span a wide compositional range. Hence these descriptors hold great
promise for predicting CMAS and CAS glass reactivity in alkaline environments
from compositional information.
|
We make use of ALMA continuum observations of 15 luminous Lyman-break
galaxies at $z\sim7-8$ to probe their dust-obscured star formation. These
observations are sensitive enough to probe down to obscured SFRs of 20
$M_{\odot}$/yr ($3\sigma$). Six of the targeted galaxies show significant
($\geq3\sigma$) dust continuum detections, more than doubling the number of
known dust-detected galaxies at $z>6.5$. Their IR luminosities range from
$2.7\times10^{11}$ $L_{\odot}$ to $1.1\times10^{12}$ $L_{\odot}$,
equivalent to obscured SFRs of 20 to 105 $M_{\odot}$/yr. We use our
results to quantify the dependence of the infrared excess IRX on the
UV-continuum slope $\beta_{UV}$ and stellar mass. Our results are most
consistent with an SMC attenuation curve for intrinsic UV slopes
$\beta_{UV,intr}$ of $-2.63$, and most consistent with an attenuation curve
in between SMC and Calzetti for $\beta_{UV,intr}$ slopes of $-2.23$, assuming a
dust temperature $T_d$ of 50 K. Our fiducial IRX-stellar mass results at
$z\sim7-8$ are consistent with marginal evolution from $z\sim0$. We
then show how both results depend on $T_d$. For our six dust-detected sources,
we estimate their dust masses and find that they are consistent with dust
production from SNe if the dust destruction is low ($<90\%$). Finally, we
determine the contribution of dust-obscured star formation to the star
formation rate density for UV-luminous ($<-21.5$ mag;
$\gtrsim1.7L_{UV}^*$) $z\sim7-8$ galaxies, finding that the total
SFR density at $z\sim7$ and $z\sim8$ from bright galaxies is
$0.18_{-0.10}^{+0.08}$ dex and $0.20_{-0.09}^{+0.05}$ dex higher, respectively;
i.e. $\sim1/3$ of the star formation in $\gtrsim1.7L_{UV}^*$
galaxies at $z\sim7-8$ is obscured by dust.
|
Cycle polytopes of matroids have been introduced in combinatorial
optimization as a generalization of important classes of polyhedral objects
like cut polytopes and Eulerian subgraph polytopes associated to graphs. Here
we start an algebraic and geometric investigation of these polytopes by
studying their toric algebras, called cycle algebras, and their defining
ideals. Several matroid operations are considered which determine faces of
cycle polytopes that belong again to this class of polyhedral objects. As a key
technique used in this paper, we study certain minors of given matroids which
yield algebra retracts on the level of cycle algebras. In particular, this
allows us to use powerful algebraic machinery. As an application, we study
highest possible degrees in minimal homogeneous systems of generators of
defining ideals of cycle algebras as well as interesting cases of cut polytopes
and Eulerian subgraph polytopes.
|
We construct tame differential calculi coming from toral actions on a class
of C*-algebras. Relying on the existence of a unique Levi-Civita connection on
such calculi, we prove a version of the Bianchi identity. A Gauss-Bonnet
theorem for the canonical rank-2 calculus is also studied.
|
The h-principle (or homotopy principle) is the property that some solutions to a
partial differential equation or inequality can be obtained as a deformation of a
formal solution by a homotopy. Gromov defines the sheaf-theoretic h-principle
in his book and shows the existence of the h-principle in a very abstract
setting. In this paper, we clarify a categorical structure behind Gromov's
h-principle. The main result is that a flexible sheaf can be understood as a
fibrant object in certain categories.
|
We propose a route choice model in which traveler behavior is represented as
a utility maximizing assignment of flow across an entire network under a flow
conservation constraint. Substitution between routes depends on how much they
overlap. The model is estimated considering the full set of route
alternatives, and no choice set generation is required. Nevertheless,
estimation requires only linear regression and is very fast. Predictions from
the model can be computed using convex optimization, and computation is
straightforward even for large networks. We estimate and validate the model
using a large dataset comprising 1,337,096 GPS traces of trips in the Greater
Copenhagen road network.
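A schematic convex-optimization example in the spirit of the model above is sketched below: assign a unit of flow across a small network by maximizing utility (negative cost) plus an entropy-like perturbation, subject to flow conservation. The network, costs, and the entropy form are all illustrative assumptions, not the paper's specification.

```python
import cvxpy as cp
import numpy as np

nodes, edges = 4, [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]
cost = np.array([1.0, 1.2, 1.0, 0.9, 0.3])
A = np.zeros((nodes, len(edges)))            # node-edge incidence matrix
for j, (u, v) in enumerate(edges):
    A[u, j], A[v, j] = 1.0, -1.0
b = np.array([1.0, 0.0, 0.0, -1.0])          # one unit from node 0 to node 3

x = cp.Variable(len(edges), nonneg=True)     # flow on each edge
objective = cp.Maximize(-cost @ x + 0.1 * cp.sum(cp.entr(x)))
problem = cp.Problem(objective, [A @ x == b])
problem.solve()
print(np.round(x.value, 3))                  # flow splits across overlapping routes
```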
|
Context. We investigate the source of a type III radio burst storm during
encounter 2 of NASA's Parker Solar Probe (PSP) mission.
Aims. It was observed that in encounter 2 of NASA's Parker Solar Probe
mission there was a large amount of radio activity, and in particular a noise
storm of frequent, small type III bursts from 31st March to 6th April 2019. Our
aim is to investigate the source of these small and frequent bursts.
Methods. In order to do this, we analysed data from the Hinode EUV Imaging
Spectrometer (EIS), PSP FIELDS, and the Solar Dynamics Observatory (SDO)
Atmospheric Imaging Assembly (AIA). We studied the behaviour of active region
12737, whose emergence and evolution coincide with the timing of the radio
noise storm, and determined the possible origins of the electron beams within
the active region. To do this, we probe the dynamics, Doppler velocity,
non-thermal velocity, FIP bias, and densities, and carry out magnetic modelling.
Results. We demonstrate that although the active region on the disk produces
no significant flares, its evolution indicates it is a source of the electron
beams causing the radio storm. They most likely originate from the area at the
edge of the active region that shows strong blue-shifted plasma. We demonstrate
that as the active region grows and expands, the area of the blue-shifted
region at the edge increases, which is also consistent with the increasing area
where large-scale or expanding magnetic field lines from our modelling are
anchored. This expansion is most significant between 1 and 4 April 2019,
coinciding with the onset of the type III storm and the decrease of the
individual bursts' peak frequencies, indicating that the height at which the peak
radiation is emitted increases as the active region evolves.
|
We provide a simple proof that the partial sums $\sum_{n\leq x}f(n)$ of a
Rademacher random multiplicative function $f$ change sign for an infinite
number of $x>0$, almost surely.
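A quick numerical illustration of this statement is sketched below: draw $f(p)=\pm1$ independently on the primes, extend multiplicatively to squarefree $n$ (with $f(n)=0$ otherwise), and count sign changes of the partial sums. The range and seed are arbitrary choices for demonstration.

```python
import numpy as np
from sympy import factorint, primerange

N = 10_000
rng = np.random.default_rng(3)
eps = {p: rng.choice([-1, 1]) for p in primerange(2, N + 1)}

def f(n):
    fac = factorint(n)
    if any(e > 1 for e in fac.values()):
        return 0                      # Rademacher f vanishes off squarefrees
    prod = 1
    for p in fac:                     # multiplicative extension over primes
        prod *= eps[p]
    return prod

partial = np.cumsum([f(n) for n in range(1, N + 1)])
signs = np.sign(partial)
changes = np.count_nonzero(signs[1:] * signs[:-1] < 0)
print(changes)                        # many sign changes already by N = 10^4
```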
|
We study random digraphs on sequences of expanders with bounded average
degree and weak local limit. The threshold for the existence of a giant
strongly connected component, as well as the asymptotic fraction of nodes with
giant fan-in or giant fan-out are local, in the sense that they are the same
for two sequences with the same weak local limit. The digraph has a bow-tie
structure, with all but a vanishing fraction of nodes lying either in the
unique strongly connected giant and its fan-in and fan-out, or in sets with
small fan-in and small fan-out. All local quantities are expressed in terms of
percolation on the limiting rooted graph, without any structural assumptions on
the limit, allowing, in particular, for non-tree-like limits.
In the course of proving these results, we prove that for unoriented
percolation, there is a unique giant above criticality, whose size and critical
threshold are again local. An application of our methods shows that the
critical threshold for bond percolation and random digraphs on preferential
attachment graphs is $p_c=0$, with an infinite order phase transition at $p_c$.
|
In this paper we propose a novel point cloud generator that is able to
reconstruct and generate 3D point clouds composed of semantic parts. Given a
latent representation of the target 3D model, the generation starts from a
single point and gets expanded recursively to produce the high-resolution point
cloud via a sequence of point expansion stages. During the recursive procedure
of generation, we not only obtain the coarse-to-fine point clouds for the
target 3D model from every expansion stage, but also unsupervisedly discover
the semantic segmentation of the target model according to the
hierarchical/parent-child relation between the points across expansion stages.
Moreover, the expansion modules and other elements used in our recursive
generator mostly share weights, making the overall framework light
and efficient. Extensive experiments are conducted to demonstrate that our
proposed point cloud generator has comparable or even superior performance on
both generation and reconstruction tasks in comparison to various baselines,
and provides consistent co-segmentation among 3D instances of the same
object class.
|
Given (small amounts of) time-series data from a high-dimensional,
fine-grained, multiscale dynamical system, we propose a generative framework
for learning an effective, lower-dimensional, coarse-grained dynamical model
that is predictive of the fine-grained system's long-term evolution but also of
its behavior under different initial conditions. We target fine-grained models
as they arise in physical applications (e.g. molecular dynamics, agent-based
models), the dynamics of which are strongly non-stationary but their transition
to equilibrium is governed by unknown slow processes which are largely
inaccessible by brute-force simulations. Approaches based on domain knowledge
heavily rely on physical insight in identifying temporally slow features and
fail to enforce the long-term stability of the learned dynamics. On the other
hand, purely statistical frameworks lack interpretability and rely on large
amounts of expensive simulation data (long and multiple trajectories) as they
cannot infuse domain knowledge. The generative framework proposed achieves the
aforementioned desiderata by employing a flexible prior on the complex plane
for the latent, slow processes, and an intermediate layer of physics-motivated
latent variables that reduces reliance on data and imbues inductive bias. In
contrast to existing schemes, it does not require the a priori definition of
projection operators from the fine-grained description and addresses
simultaneously the tasks of dimensionality reduction and model estimation. We
demonstrate its efficacy and accuracy in multiscale physical systems of
particle dynamics where probabilistic, long-term predictions of phenomena not
contained in the training data are produced.
|
In the context of large-angle cone-beam tomography (CBCT), we present a
practical iterative reconstruction (IR) scheme designed for rapid convergence
as required for large datasets. The robustness of the reconstruction is
provided by the "space-filling" source trajectory along which the experimental
data is collected. The speed of convergence is achieved by leveraging the
highly isotropic nature of this trajectory to design an approximate
deconvolution filter that serves as a pre-conditioner in a multi-grid scheme.
We demonstrate this IR scheme for CBCT and compare convergence to that of more
traditional techniques.
|
Boundary controlled irreversible port-Hamiltonian systems (BC-IPHS) on
1-dimensional spatial domains are defined by extending the formulation of
reversible BC-PHS to irreversible thermodynamic systems controlled at the
boundaries of their spatial domains. The structure of BC-IPHS has a clear
physical interpretation, characterizing the coupling between energy-storing and
energy dissipating elements. By extending the definition of boundary port
variables of BC-PHS to deal with the dissipative terms, a set of boundary port
variables are defined such that BC-IPHS are passive with respect to a given set
of conjugated inputs and outputs. As for finite-dimensional IPHS, the first and
second principles are satisfied as a structural property. Several examples are
given to illustrate the proposed approach.
|
In this paper some new proposals for method comparison are presented. On the
one hand, two new robust regressions, the M-Deming and the MM-Deming, have been
developed by modifying Linnet's method of the weighted Deming regression. The
M-Deming regression shows superior qualities to the Passing-Bablok regression;
it does not suffer from bias when the data to be validated have a reduced
precision, and therefore turns out to be much more reliable. On the other hand,
a graphical method (box and ellipses) for validations has been developed which
is also equipped with a unified statistical test. In this test the intercept
and slope pairs obtained from a bootstrap process are combined into a
multinomial distribution by robust determination of the covariance matrix. The
Mahalanobis distance from the point representing the null hypothesis is
evaluated using the $\chi^{2}$ distribution. It is emphasized that the
interpretation of the graph is more important than the probability obtained
from the test. The unified test has been evaluated through Monte Carlo
simulations, comparing the theoretical $\alpha$ levels with the empirical rate
of rejections (type-I errors). In addition, a power comparison of the various
(new and old) methods was conducted using the same techniques. This unified
method, regardless of the regression chosen, shows much higher power and allows
a significant reduction in the sample size required for validations.
|
In this paper, we introduce a streaming keyphrase detection system that can
be easily customized to accurately detect any phrase composed of words from a
large vocabulary. The system is implemented with an end-to-end trained
automatic speech recognition (ASR) model and a text-independent speaker
verification model. To address the challenge of detecting these keyphrases
under various noisy conditions, a speaker separation model is added to the
feature frontend of the speaker verification model, and an adaptive noise
cancellation (ANC) algorithm is included to exploit cross-microphone noise
coherence. Our experiments show that the text-independent speaker verification
model largely reduces the false triggering rate of the keyphrase detection,
while the speaker separation model and adaptive noise cancellation largely
reduce false rejections.
|
There are concerns that the ability of language models (LMs) to generate high
quality synthetic text can be misused to launch spam, disinformation, or
propaganda. Therefore, the research community is actively working on developing
approaches to detect whether a given text is organic or synthetic. While this
is a useful first step, it is important to be able to further fingerprint the
author LM to attribute its origin. Prior work on fingerprinting LMs is limited
to attributing synthetic text generated by a handful (usually < 10) of
pre-trained LMs. However, LMs such as GPT2 are commonly fine-tuned in a myriad
of ways (e.g., on a domain-specific text corpus) before being used to generate
synthetic text. It is challenging to fingerprint fine-tuned LMs because the
universe of fine-tuned LMs is much larger in realistic scenarios. To address
this challenge, we study the problem of large-scale fingerprinting of
fine-tuned LMs in the wild. Using a real-world dataset of synthetic text
generated by 108 different fine-tuned LMs, we conduct comprehensive experiments
to demonstrate the limitations of existing fingerprinting approaches. Our
results show that fine-tuning itself is the most effective in attributing the
synthetic text generated by fine-tuned LMs.
|
We report on the analysis of a deep Chandra observation of the high-magnetic
field pulsar (PSR) J1119-6127 and its compact pulsar wind nebula (PWN) taken in
October 2019, three years after the source went into outburst. The 0.5-7 keV
post-outburst (2019) spectrum of the pulsar is best described by a
two-component blackbody plus powerlaw model with a temperature of 0.2\pm0.1
keV, photon index of 1.8\pm0.4 and X-ray luminosity of ~1.9e33 erg s^{-1},
consistent with its pre-burst quiescent phase. We find that the pulsar has gone
back to quiescence. The compact nebula shows a jet-like morphology elongated in
the north-south direction, similar to the pre-burst phase. The post-outburst
PWN spectrum is best fit by an absorbed powerlaw with a photon index of
2.3\pm0.5 and flux of ~3.2e-14 erg cm^{-2} s^{-1} (0.5-7 keV). The PWN spectrum
shows evidence of spectral softening in the post-outburst phase, with the
pre-burst photon index of 1.2\pm0.4 changing to 2.3\pm0.5, and pre-burst
luminosity of ~1.5e32 erg s^{-1} changing to 2.7e32 erg s^{-1} in the 0.5-7 keV
band, suggesting magnetar outbursts can impact PWNe. The observed timescale for
returning to quiescence, of just a few years, implies a rather fast cooling
process and favors a scenario where J1119 is temporarily powered by magnetic
energy following the magnetar outburst, in addition to its spin-down energy.
|
Inspired by biological molecular machines we explore the idea of an active
quantum robot whose purpose is delaying decoherence. A conceptual model capable
of partially protecting arbitrary logical qubit states against single physical
qubit errors is presented. Implementation of an instance of that model - the
entanglement qubot - is proposed using laser-dressed Rydberg atoms. The dynamics
of the system are studied using stochastic wavefunction methods.
|
Experiments dedicated to the measurement of the electric dipole moment of the
neutron require outstanding control of the magnetic field uniformity. The
neutron electric dipole moment (nEDM) experiment at the Paul Scherrer Institute
uses a 199Hg co-magnetometer to precisely monitor magnetic field variations.
This co-magnetometer, in the presence of field non-uniformity, is responsible
for the largest systematic effect of this measurement. To evaluate and correct
that effect, offline measurements of the field non-uniformity were performed
during mapping campaigns in 2013, 2014 and 2017. We present the results of
these campaigns, and the improvement the correction of this effect brings to
the neutron electric dipole moment measurement.
|
Given a nonparametric Hidden Markov Model (HMM) with two states, the question
of constructing efficient multiple testing procedures is considered, treating
one of the states as an unknown null hypothesis. A procedure is introduced,
based on nonparametric empirical Bayes ideas, that controls the False Discovery
Rate (FDR) at a user-specified level. Guarantees on power are also provided,
in the form of a control of the true positive rate. One of the key steps in the
construction requires supremum-norm convergence of preliminary estimators of
the emission densities of the HMM. We provide the existence of such estimators,
with convergence at the optimal minimax rate, for the case of an HMM with $J\ge
2$ states, which is of independent interest.
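As a simplified illustration of the pipeline above, the sketch below computes posterior null probabilities in a two-state HMM by forward-backward and rejects the hypotheses with smallest local FDR while their running average stays below the target level. Unlike the paper, the emission densities here are assumed known Gaussians, and all parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
T, alpha = 2000, 0.05
P = np.array([[0.95, 0.05], [0.10, 0.90]])   # transitions: 0 = null, 1 = alt
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])
x = rng.normal(2.0 * states, 1.0)            # null N(0,1), alternative N(2,1)

em = np.column_stack([norm.pdf(x, 0, 1), norm.pdf(x, 2, 1)])
fwd, bwd = np.zeros((T, 2)), np.ones((T, 2))
fwd[0] = np.array([0.5, 0.5]) * em[0]; fwd[0] /= fwd[0].sum()
for t in range(1, T):                        # scaled forward recursion
    fwd[t] = (fwd[t - 1] @ P) * em[t]; fwd[t] /= fwd[t].sum()
for t in range(T - 2, -1, -1):               # scaled backward recursion
    bwd[t] = P @ (em[t + 1] * bwd[t + 1]); bwd[t] /= bwd[t].sum()
post = fwd * bwd; post /= post.sum(1, keepdims=True)

lfdr = post[:, 0]                            # posterior probability of the null
order = np.argsort(lfdr)
ok = np.cumsum(lfdr[order]) / np.arange(1, T + 1) <= alpha
k = np.max(np.nonzero(ok)[0], initial=-1) + 1
reject = order[:k]
print(k, (states[reject] == 0).mean() if k else 0.0)  # empirical FDP ~ alpha
```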
|
In this paper, we demonstrate a Symbolic Reinforcement Learning (SRL)
architecture for safe control in Radio Access Network (RAN) applications. In
our automated tool, a user can select a high-level safety specification
expressed in Linear Temporal Logic (LTL) to shield an RL agent running in a
given cellular network, with the aim of optimizing network performance as measured
through certain Key Performance Indicators (KPIs). In the proposed
architecture, network safety shielding is ensured through model-checking
techniques over combined discrete system models (automata) that are abstracted
through reinforcement learning. We demonstrate the user interface (UI) helping
the user set intent specifications to the architecture and inspect the
difference in allowed and blocked actions.
|
One of the distinct features of this century has been the constant rise in the
population of older adults. Elderly people have several needs and requirements
due to the physical disabilities, cognitive issues, weakened memory, and
disorganized behavior that they face with increasing age. The extent of
these limitations also differs according to the varying diversities among the
elderly, which include age, gender, background, experience, skills, knowledge,
and so on. These varying needs and challenges with increasing age limit the
ability of older adults to perform Activities of Daily Living (ADLs) in an
independent manner. Moreover, the shortage of caregivers creates a looming need
technology-based services for elderly people, to assist them in performing
their daily routine tasks to sustain their independent living and active aging.
To address these needs, this work consists of making three major contributions
in this field. First, it provides a rather comprehensive review of assisted
living technologies aimed at helping elderly people to perform ADLs. Second,
the work discusses the challenges identified through this review, that
currently exist in the context of implementation of assisted living services
for elderly care in Smart Homes and Smart Cities. Finally, the work also
outlines an approach for implementation, extension and integration of the
existing works in this field for development of a much-needed framework that
can provide personalized assistance and user-centered behavior interventions to
elderly as per their varying and ever-changing needs.
|
In solving optimization problems, objective functions generally need to be
minimized or maximized. However, objective functions cannot always be
formulated explicitly in a mathematical form for complicated problem settings.
Although several regression techniques infer the approximate forms of objective
functions, they are at times expensive to evaluate. In such scenarios, optimal
points of "black-box" objective functions must be computed while making
effective use of a small number of clues. Recently, an efficient method has been
proposed that uses inference with a sparse prior for a black-box objective function
with binary variables. In this method, a surrogate model was proposed in
the form of a quadratic unconstrained binary optimization (QUBO) problem, and
was iteratively solved to obtain the optimal solution of the black-box
objective function. In the present study, we employ the D-Wave 2000Q quantum
annealer, which can solve QUBO by driving the binary variables by quantum
fluctuations. The D-Wave 2000Q quantum annealer does not necessarily output the
ground state at the end of the protocol due to the freezing effect during the
process. We investigate the effects of the D-Wave quantum annealer's output
on black-box optimization. We demonstrate a benchmark test by
employing the sparse Sherrington-Kirkpatrick (SK) model as the black-box
objective function, by introducing a parameter controlling the sparseness of
the interaction coefficients. Comparing the results of the D-Wave quantum
annealer to those of simulated annealing (SA) and semidefinite programming
(SDP), we find that both the D-Wave quantum annealer and SA outperform SDP in
black-box optimization. On the other hand, we did not find any
advantage of the D-Wave quantum annealer over simulated annealing; at least
in our case, no effects of quantum fluctuations are found.
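For reference, a minimal simulated-annealing QUBO solver of the kind used as a baseline above is sketched below. It is illustrative only: the paper runs the D-Wave 2000Q and a tuned SA, while the instance and schedule here are arbitrary.

```python
import numpy as np

def solve_qubo_sa(Q, n_sweeps=2000, T0=2.0, T1=0.01, seed=0):
    """Minimize x^T Q x over binary x with single-bit-flip simulated annealing
    (Q assumed symmetric)."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.integers(0, 2, n)
    for T in np.geomspace(T0, T1, n_sweeps):
        for i in rng.permutation(n):
            # exact energy change from flipping bit i
            dE = (1 - 2 * x[i]) * (Q[i, i] + 2 * np.dot(Q[i], x)
                                   - 2 * Q[i, i] * x[i])
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                x[i] ^= 1
    return x, x @ Q @ x

rng = np.random.default_rng(1)
J = rng.normal(size=(30, 30))
Q = (J + J.T) / 2                     # random symmetric QUBO instance
x, E = solve_qubo_sa(Q)
print(E)
```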
|
We describe a procedure to introduce general dependence structures on a set
of random variables. These include order-$q$ moving average-type structures, as
well as seasonal, periodic, spatial and spatio-temporal dependences. The
invariant marginal distribution can be in any family that is conjugate to an
exponential family with quadratic variance function. Dependence is induced via
a set of suitable latent variables whose conditional distribution mirrors the
sampling distribution in a Bayesian conjugate analysis of such exponential
families. We obtain strict stationarity as a special case.
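A toy instance of this construction is sketched below: a stationary sequence with Gamma(a, b) marginals, where the latent Poisson variable mirrors the conjugate Bayesian update of a Gamma prior, leaving the marginal invariant. This is an assumed order-1, AR-like special case chosen for illustration; the paper's scheme covers far more general dependence structures.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, T = 3.0, 2.0, 5.0, 100_000
x = np.empty(T)
x[0] = rng.gamma(a, 1 / b)                 # start at the invariant marginal
for t in range(1, T):
    y = rng.poisson(c * x[t - 1])          # latent: "data" given current state
    x[t] = rng.gamma(a + y, 1 / (b + c))   # next state: conjugate "posterior"

print(x.mean(), x.var())                   # ~ a/b = 1.5 and a/b^2 = 0.75
```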
|
Consider a geodesic triangle on a surface of constant curvature and subdivide
it recursively into 4 triangles by joining the midpoints of its edges. We show
the existence of a uniform $\delta>0$ such that, at any step of the
subdivision, all the triangle angles lie in the interval $(\delta, \pi
-\delta)$. Additionally, we exhibit stabilising behaviours for both angles and
lengths as this subdivision progresses.
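The subdivision above is easy to reproduce numerically; the sketch below does so on the unit sphere, splitting a geodesic triangle at its edge midpoints, recursing, and tracking the extreme angles at each step. The starting octant triangle and step count are arbitrary illustrative choices.

```python
import numpy as np

def midpoint(u, v):
    m = u + v
    return m / np.linalg.norm(m)            # geodesic midpoint on the sphere

def angles(tri):
    out = []
    for i in range(3):
        a, b, c = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        # angle at vertex a between the geodesics toward b and c
        tb = b - np.dot(a, b) * a
        tc = c - np.dot(a, c) * a
        cosang = np.dot(tb, tc) / (np.linalg.norm(tb) * np.linalg.norm(tc))
        out.append(np.arccos(np.clip(cosang, -1, 1)))
    return out

tris = [[np.array([1, 0, 0.]), np.array([0, 1, 0.]), np.array([0, 0, 1.])]]
for step in range(5):
    tris = [t for a, b, c in tris
            for t in ([a, midpoint(a, b), midpoint(a, c)],
                      [midpoint(a, b), b, midpoint(b, c)],
                      [midpoint(a, c), midpoint(b, c), c],
                      [midpoint(a, b), midpoint(b, c), midpoint(a, c)])]
    all_angles = [ang for t in tris for ang in angles(t)]
    print(step, min(all_angles), max(all_angles))  # stays in (delta, pi - delta)
```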
|
(Abridged) We present a systematic investigation of physical conditions and
elemental abundances in four optically thick Lyman-limit systems (LLSs) at
$z=0.36-0.6$ discovered within the Cosmic Ultraviolet Baryon Survey (CUBS).
CUBS LLSs exhibit multi-component kinematic structure and a complex mix of
multiphase gas, with associated metal transitions from multiple ionization
states that span several hundred km/s in line-of-sight velocity. Specifically,
higher column density components (log N(HI)>16) in all four absorbers comprise
dynamically cool gas with $\langle T \rangle =(2\pm1) \times10^4\,$K and modest
non-thermal broadening of $5\pm3\,$ km/s. The high quality of the QSO
absorption spectra allows us to infer the physical conditions of the gas, using
detailed ionization modeling that takes into account the resolved component
structures of HI and metal transitions. The range of inferred gas densities
indicates that these absorbers consist of spatially compact clouds with a
median line-of-sight thickness of $160^{+140}_{-50}$ pc. While obtaining robust
metallicity constraints for the low-density, highly ionized phase remains
challenging due to the uncertain N(HI), we demonstrate that the cool-phase gas
in LLSs has a median metallicity of
$\mathrm{[\alpha/H]_{1/2}}=-0.7^{+0.1}_{-0.2}$, with a 16-84 percentile range
of $\mathrm{[\alpha/H]}=(-1.3,-0.1)$. Furthermore, the wide range of inferred
elemental abundance ratios ($\mathrm{[C/\alpha]}$, $\mathrm{[N/\alpha]}$, and
$\mathrm{[Fe/\alpha]}$) indicate a diversity of chemical enrichment histories.
Combining the absorption data with deep galaxy survey data characterizing the
galaxy environment of these absorbers, we discuss the physical connection
between star-forming regions in galaxies and diffuse gas associated with
optically thick absorption systems in the $z<1$ circumgalactic medium.
|
Offline reinforcement learning (RL) aims at learning a good policy from a
batch of collected data, without extra interactions with the environment during
training. However, current offline RL benchmarks commonly have a large reality
gap, because they involve large datasets collected by highly exploratory
policies, and the trained policy is directly evaluated in the environment. In
real-world situations, running a highly exploratory policy is prohibited to
ensure system safety, the data is commonly very limited, and a trained policy
should be well validated before deployment. In this paper, we present a near
real-world offline RL benchmark, named NeoRL, which contains datasets from
various domains with controlled sizes, and extra test datasets for policy
validation. We evaluate existing offline RL algorithms on NeoRL and argue that
the performance of a policy should also be compared with the deterministic
version of the behavior policy, instead of the dataset reward. The empirical
results demonstrate that the tested offline RL algorithms become less
competitive with the deterministic policy on many datasets, and the offline
policy evaluation hardly helps. The NeoRL suite can be found at
http://polixir.ai/research/neorl. We hope this work will shed some light on
future research and draw more attention when deploying RL in real-world
systems.
|
In this paper we establish uniqueness in the inverse boundary value problem
for the two coefficients in the inhomogeneous porous medium equation
$\epsilon\partial_tu-\nabla\cdot(\gamma\nabla u^m)=0$, with $m>1$, in dimension
3 or higher, which is a degenerate parabolic type quasilinear PDE. Our approach
relies on using a Laplace transform to turn the original equation into a
coupled family of nonlinear elliptic equations, indexed by the frequency
parameter ($1/h$ in our definition) of the transform. A careful analysis of the
asymptotic expansion in powers of $h$, as $h\to\infty$, of the solutions to the
transformed equation, with special boundary data, allows us to obtain
sufficient information to deduce the uniqueness result.
|
Accurate estimation of cratering asymmetry on the Moon is crucial for
understanding Moon evolution history. Early studies of cratering asymmetry have
omitted the contributions of high lunar obliquity and inclination. Here, we
include lunar obliquity and inclination as new controlling variables to derive
the cratering rate spatial variation as a function of longitude and latitude.
By examining the influence of lunar obliquity and inclination on the
asteroid population encountered by the Moon, we derive general
formulas for the cratering rate spatial variation based on the crater scaling
law. Our formulas, with the addition of lunar obliquity and inclination, can
reproduce the lunar cratering rate asymmetry at the current Earth-Moon distance
and predict the apex/ant-apex ratio and the pole/equator ratio of this lunar
cratering rate to be 1.36 and 0.87, respectively. The apex/ant-apex ratio
decreases as the obliquity and inclination increase. Combined with the
evolution of lunar obliquity and inclination, our model shows that the
apex/ant-apex ratio does not monotonically decrease with Earth-Moon distance,
and hence the influences of obliquity and inclination on the evolution of the
apex/ant-apex ratio are not negligible. This model is generalizable to other planets
and moons, especially for different spin-orbit resonances.
|
We present the N-body simulation techniques in EXP. EXP uses
empirically-chosen basis functions to expand the potential field of an ensemble
of particles. Unlike other basis function expansions, the derived basis
functions are adapted to an input mass distribution, enabling accurate
expansion of highly non-spherical objects, such as galactic discs. We measure
the force accuracy in three models, one based on a spherical or aspherical
halo, one based on an exponential disc, and one based on a bar-based disc
model. We find that EXP is as accurate as a direct-summation or tree-based
calculation, and in some ways is better, while being considerably less
computationally intensive. We discuss optimising the computation of the basis
function representation. We also detail numerical improvements for performing
orbit integrations, including timesteps.
|
An analysis of 1,955 physics graduate students from 19 PhD programs shows
that undergraduate grade point average predicts graduate grades and PhD
completion more effectively than GRE scores. Students' undergraduate GPA (UGPA)
and GRE Physics (GRE-P) scores are small but statistically significant
predictors of graduate course grades, while GRE quantitative and GRE verbal
scores are not. We also find that males and females score equally well in their
graduate coursework despite a statistically significant 18 percentile point gap
in median GRE-P scores between genders. A counterfactual mediation analysis
demonstrates that among admission metrics tested only UGPA is a significant
predictor of overall PhD completion, and that UGPA predicts PhD completion
indirectly through graduate grades. Thus UGPA measures traits linked to
graduate course grades, which in turn predict graduate completion. Although
GRE-P scores are not significantly associated with PhD completion, our results
suggest that any predictive effect they may have is also linked indirectly
through graduate GPA. Overall, our results indicate that among commonly used
quantitative admissions metrics, UGPA offers the most insight into two
important measures of graduate school success, while posing fewer concerns for
equitable admissions practices.
|
We propose a general method for optimizing periodic input waveforms for
global entrainment of weakly forced limit-cycle oscillators based on phase
reduction and nonlinear programming. We derive averaged phase dynamics from the
mathematical model of a limit-cycle oscillator driven by a weak periodic input
and optimize the Fourier coefficients of the input waveform to maximize
prescribed objective functions. In contrast to the optimization methods that
rely on the calculus of variations, the proposed method can be applied to a
wider class of optimization problems including global entrainment objectives.
As an illustration, we consider two optimization problems, one for achieving
fast global convergence of the oscillator to the entrained state and the other
for realizing prescribed global phase distributions in a population of
identical uncoupled noisy oscillators. We show that the proposed method can
successfully yield optimal input waveforms to realize the desired states in
both cases.
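The toy computation below conveys the flavor of the phase-reduction step described above: it evaluates the averaged phase coupling Gamma(phi) = <Z(theta) q(theta - phi)> for a candidate waveform and compares the local convergence rate -Gamma'(phi*) at the stable entrained phase (with zero frequency mismatch) for a sinusoid versus the classical fast-entrainment shape q proportional to -Z', at equal power. The phase-sensitivity function Z is an assumption, and this is not the paper's full nonlinear program.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
dtheta = theta[1] - theta[0]
Z = np.sin(theta) + 0.4 * np.sin(2 * theta)   # assumed phase sensitivity

def gamma(q):
    # Gamma(phi_k) = average over theta of Z(theta) * q(theta - phi_k)
    return np.array([np.mean(Z * np.roll(q, k)) for k in range(len(theta))])

def best_rate(q):
    G = gamma(q)
    dG = np.gradient(G, dtheta)
    # stable fixed points of phi' = Gamma(phi): sign change with negative slope
    stable = [k for k in range(len(G))
              if G[k] * G[(k + 1) % len(G)] < 0 and dG[k] < 0]
    return max(-dG[k] for k in stable)

unit = lambda q: q / np.sqrt(np.mean(q ** 2))   # normalize to equal power
q_sin = unit(np.sin(theta))
q_opt = unit(-np.gradient(Z, dtheta))           # classical fast-entrainment shape
print(best_rate(q_sin), best_rate(q_opt))       # ~0.707 vs ~0.906: faster locking
```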
|
Using a generalized Madelung transformation, we derive the hydrodynamic
representation of the Dirac equation in arbitrary curved space-times coupled to
an electromagnetic field. We obtain Dirac-Euler equations for fermions
involving a continuity equation and a first integral of the Bernoulli equation.
By comparing the Dirac and Klein-Gordon equations, we obtain the
balance equation for fermion particles. We also use the correspondence between
fermions and bosons to derive the hydrodynamic representation of the Weyl
equation which is a chiral form of the Dirac equation.
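For orientation, the flat-space non-relativistic analogue of the transformation used here is the classical Madelung substitution in the Schrodinger equation, which yields the continuity equation and a quantum Hamilton-Jacobi (Bernoulli-type) equation; this standard identity is shown below only to fix notation, not as the paper's curved-space result.

```latex
\psi = \sqrt{\rho}\, e^{iS/\hbar}
\quad\Longrightarrow\quad
\frac{\partial \rho}{\partial t}
  + \nabla\!\cdot\!\Big(\rho\,\frac{\nabla S}{m}\Big) = 0,
\qquad
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V
  - \frac{\hbar^2}{2m}\,\frac{\Delta\sqrt{\rho}}{\sqrt{\rho}} = 0 .
```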
|
Hate speech and profanity detection suffer from data sparsity, especially for
languages other than English, due to the subjective nature of the tasks and the
resulting annotation incompatibility of existing corpora. In this study, we
identify profane subspaces in word and sentence representations and explore
their generalization capability on a variety of similar and distant target
tasks in a zero-shot setting. This is done monolingually (German) and
cross-lingually to closely-related (English), distantly-related (French) and
non-related (Arabic) tasks. We observe that, on both similar and distant target
tasks and across all languages, the subspace-based representations transfer
more effectively than standard BERT representations in the zero-shot setting,
with improvements between F1 +10.9 and F1 +42.9 over the baselines across all
tested monolingual and cross-lingual scenarios.
|
The debate on the role of school closures as a mitigation strategy against
the spread of Covid-19 is gaining relevance due to emerging variants in Europe.
According to WHO, decisions on schools "should be guided by a risk-based
approach". However, risk evaluation requires sound methods, transparent data
and careful consideration of the context at the local level. We review a recent
study by Gandini et al. on the role of school openings as a driver of the
second COVID-19 wave in Italy, which concluded that there was no connection
between school openings/closures and SARS-CoV-2 incidence. This
analysis has been widely commented on in the Italian media as conclusive proof
that "schools are safe". However, the study presents severe oversights and
careless interpretations of data.
|
As autonomous driving and augmented reality evolve, a practical concern is
data privacy. In particular, these applications rely on localization based on
user images. The widely adopted technology uses local feature descriptors,
which are derived from the images, and it was long thought that these could not
be reverted back to images. However, recent work has demonstrated that under certain
conditions reverse engineering attacks are possible and allow an adversary to
reconstruct RGB images. This poses a potential risk to user privacy. We take
this a step further and model potential adversaries using a privacy threat
model. Subsequently, we show under controlled conditions a reverse engineering
attack on sparse feature maps and analyze the vulnerability of popular
descriptors including FREAK, SIFT and SOSNet. Finally, we evaluate potential
mitigation techniques that select a subset of descriptors to carefully balance
privacy reconstruction risk while preserving image matching accuracy; our
results show that similar accuracy can be obtained when revealing less
information.
|
The electric power grid is a critical societal resource connecting multiple
infrastructural domains such as agriculture, transportation, and manufacturing.
The electrical grid as an infrastructure is shaped by human activity and public
policy in terms of demand and supply requirements. Further, the grid is subject
to changes and stresses due to solar weather, climate, hydrology, and ecology.
Emerging interconnected and complex network dependencies make such interactions increasingly dynamic, causing potentially large swings and thus presenting new challenges in managing the coupled human-natural system. This
paper provides a survey of models and methods that seek to explore the
significant interconnected impact of the electric power grid and interdependent
domains. We also provide relevant critical risk indicators (CRIs) across
diverse domains that may influence electric power grid risks, including
climate, ecology, hydrology, finance, space weather, and agriculture. We
discuss the convergence of indicators from individual domains to explore
possible systemic risk, i.e., holistic risk arising from cross-domain interconnections. Our study provides an important first step towards
data-driven analysis and predictive modeling of risks in the coupled
interconnected systems. Further, we propose a compositional approach to risk
assessment that incorporates diverse domain expertise and information, data
science, and computer science to identify domain-specific CRIs and their union
in systemic risk indicators.
|
One or more scalar leptoquarks with masses around a few TeV may provide a
solution to some of the flavor anomalies that have been observed. We discuss
the impact of such new degrees of freedom on baryon number violation when the theory is
embedded in a Pati-Salam model. The Pati-Salam embedding can suppress
renormalizable and dimension-five baryon number violation in some cases. Our
work extends the results of Assad, Grinstein, and Fornal who considered the
same issue for vector leptoquarks.
|
The inclusion of domain (point) sources into a three-dimensional boundary element method for solving the Helmholtz equation is described. The method is
fully desingularized which allows for the use of higher order quadratic
elements on the surfaces of the problem with ease. The effect of the monopole
sources ends up on the right hand side of the resulting matrix system. Several
carefully chosen examples are shown, such as sources near and within a
concentric spherical core-shell scatterer as a verification case, a curved
focusing surface and a multi-scale acoustic lens.
|
Single-crystal inorganic halide perovskites are attracting interest for
quantum device applications. Here we present low-temperature quantum
magnetotransport measurements on thin film devices of epitaxial single-crystal
CsSnBr$_{3}$, which exhibit two-dimensional Mott variable range hopping (VRH)
and giant negative magnetoresistance. These findings are described by a model
for quantum interference between different directed hopping paths and we
extract the temperature-dependent hopping length of charge carriers, their
localization length, and a lower bound for their phase coherence length of ~100
nm at low temperatures. These observations demonstrate that epitaxial halide perovskites are emerging as a material class for low-dimensional, quantum-coherent transport devices.
|
We present observations of quantum depletion in expanding condensates
released from a harmonic trap. We confirm experimental observations of
slowly-decaying tails in the far-field beyond the thermal component, consistent
with the survival of the quantum depletion. Our measurements support the
hypothesis that the depletion survives the expansion, and even appears stronger
in the far-field than expected before release based on the Bogoliubov theory.
This result is in conflict with the hydrodynamic theory which predicts that the
in-situ depletion does not survive when atoms are released from a trap.
Simulations of our experiment show that the depletion should indeed survive
into the far field and become stronger. However, while in qualitative
agreement, the final depletion observed in the experiment is much larger than
in the simulation. In light of the predicted power-law decay of the momentum
density, we discuss general issues inherent in characterizing power laws.
|
Precise theoretical predictions are a key ingredient for an accurate
determination of the structure of the Lagrangian of particle physics,
including its free parameters, which summarizes our understanding of the
fundamental interactions among particles. Furthermore, due to the absence of
clear new-physics signals, precise theoretical calculations are required in
order to pin down possible subtle deviations from the Standard Model
predictions. The error associated with such calculations must be scrutinized,
as non-perturbative power corrections, dubbed infrared renormalons, can limit
the ultimate precision of truncated perturbative expansions in quantum
chromodynamics. In this review we focus on linear power corrections that can
arise in certain kinematic distributions relevant for collider phenomenology
where an operator product expansion is missing, e.g. those obtained from the
top-quark decay products, shape observables and the transverse momentum of
massive gauge bosons. Only the last one is found to be free from such
corrections, while the mass of the system comprising the top decay products has
a larger power correction if the perturbative expansion is expressed in terms
of a short-distance mass instead of the pole mass. Proper modeling of
non-perturbative corrections is crucial in the context of shape observables to
obtain reliable strong coupling constant extractions.
|
Mobile Crowdsourcing (MC) is an effective way of engaging large groups of
smart devices to perform tasks remotely while exploiting their built-in
features. It has drawn great attention in the areas of smart cities and urban
computing communities to provide decentralized, fast, and flexible ubiquitous
technological services. The vast majority of previous studies focused on
non-cooperative MC schemes in Internet of Things (IoT) systems. Advanced
collaboration strategies are expected to leverage the capability of MC services
and enable the execution of more complicated crowdsourcing tasks. In this
context, Collaborative Mobile Crowdsourcing (CMC) enables task requesters to
hire groups of IoT devices' users that must communicate with each other and
coordinate their operational activities in order to accomplish complex tasks.
In this paper, we present and discuss the novel CMC paradigm in IoT. Then, we
provide a detailed taxonomy to classify the different components forming CMC
systems. Afterwards, we investigate the challenges in designing CMC tasks and
discuss different team formation strategies involving the crowdsourcing
platform and selected team leaders. We also analyze and compare the
performances of certain proposed CMC recruitment algorithms. Finally, we shed light on open research directions for improving the design of CMC services.
|
A special place in climatology is taken by the so-called conceptual climate
models. These relatively simple sets of differential equations can
successfully describe single mechanisms of the climate. We focus on one family
of such models based on the global energy balance. This gives rise to a
degenerate nonlocal parabolic nonlinear partial differential equation for the
zonally averaged temperature. We construct a fully discrete numerical method
that has optimal spectral accuracy in space and second-order accuracy in time. Our scheme is based on a Galerkin formulation of the Legendre basis expansion, which is particularly convenient for this setting. By using extrapolation, the
numerical scheme is linear even though the original equation is strongly
nonlinear. We also verify our theoretical results with numerical simulations that confirm the aforementioned accuracy of the scheme. All
implementations are coded in the Julia programming language with the use of parallelization (multi-threading).
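The implementation language is Julia; as a language-agnostic illustration of the spectral accuracy that motivates the Legendre basis, the following Python snippet fits a smooth stand-in temperature profile and shows the error decaying rapidly with the polynomial degree (a least-squares fit, not the paper's Galerkin scheme):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Smooth stand-in profile on x = sin(latitude) in [-1, 1].
x = np.cos(np.pi * (np.arange(400) + 0.5) / 400)
f = np.exp(-3 * x ** 2)

for deg in (4, 8, 16, 32):
    coef = L.legfit(x, f, deg)                 # Legendre expansion coefficients
    err = np.max(np.abs(L.legval(x, coef) - f))
    print(f"degree {deg:2d}: max error {err:.2e}")
```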
|
Effectively parsing facades is essential to 3D building reconstruction, an important computer vision problem with many applications, including high-precision maps for navigation, computer-aided design, and city generation for digital entertainment. To this end, the key is how to
obtain the shape grammars from 2D images accurately and efficiently. Although deep learning methods achieve promising results on semantic parsing, they cannot directly make use of architectural rules, which play an important role in man-made structures. In this paper, we present a novel
translational symmetry-based approach to improving the deep neural networks.
Our method employs deep learning models as the base parser, and a module taking
advantage of translational symmetry is used to refine the initial parsing
results. In contrast to conventional semantic segmentation or bounding box
prediction, we propose a novel scheme to fuse segmentation with anchor-free
detection in a single-stage network, which enables efficient training and
better convergence. After parsing the facades into shape grammars, we employ an
off-the-shelf rendering engine like Blender to reconstruct the realistic
high-quality 3D models using procedural modeling. We conduct experiments on
three public datasets, where our proposed approach outperforms the
state-of-the-art methods. In addition, we illustrate 3D building models reconstructed from 2D facade images.
|
The reflexive completion of a category consists of the Set-valued functors on
it that are canonically isomorphic to their double conjugate. After reviewing
both this construction and Isbell conjugacy itself, we give new examples and
revisit Isbell's main results from 1960 in a modern categorical context. We
establish the sense in which reflexive completion is functorial, and find
conditions under which two categories have equivalent reflexive completions. We
describe the relationship between the reflexive and Cauchy completions,
determine exactly which limits and colimits exist in an arbitrary reflexive
completion, and make precise the sense in which the reflexive completion of a
category is the intersection of the categories of covariant and contravariant
functors on it.
|
Recent work by Jenkins and Sakellariadou claims that cusps on cosmic strings
lead to black hole production. To derive this conclusion they use the hoop
conjecture in the rest frame of the string loop, rather than in the rest frame
of the proposed black hole. Most of the energy they include is the bulk motion
of the string near the cusp. We redo the analysis taking this into account and
find that cusps on cosmic strings with realistic energy scale do not produce
black holes, unless the cusp parameters are extremely fine-tuned.
|
Radio-frequency (14.6 MHz) AC magnetic susceptibility, $\chi^{\prime}_{AC}$,
of Dy$_2$Ti$_2$O$_7$ was measured using a self-oscillating tunnel-diode resonator. Measurements were made with the excitation AC field parallel to the superimposed DC magnetic field up to 5 T in a wide temperature range from 50 mK to
100 K. At 14.6 MHz, the broad peak of $\chi^{\prime}_{AC}(T)$ known from kHz-range audio-frequency measurements around 15~K for both [111] and [110] directions shifts to 45~K, continuing the Arrhenius activated behavior with the same activation energy barrier of $E_a \approx 230$~K. The magnetic field
dependence of $\chi^{\prime}_{AC}$ along [111] reproduces previously reported
low-temperature two-in-two-out to three-in-one-out spin configuration
transition at about 1~T, and an intermediate phase between 1 and 1.5~T. The
boundaries of the intermediate phase show reasonable overlap with the
literature data and connect at a critical endpoint of the first-order
transition line, suggesting that these low-temperature features are frequency
independent. An unusual upturn of magnetic susceptibility at $T \to 0$ was
observed in magnetic fields between 1.5~T and 2~T for both magnetic field
directions, before fully polarized configuration sets in above 2~T.
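As a consistency check on the quoted numbers, the Arrhenius law $f = f_0 \exp(-E_a/T_{\rm peak})$ can be inverted from two (frequency, peak temperature) pairs; the kHz-range frequency below is an assumed representative value:

```python
import numpy as np

# Two (probe frequency, peak temperature) pairs; f1 is an assumed kHz-range value.
f1, T1 = 1e3, 15.0        # audio-frequency measurements (assumed f1)
f2, T2 = 14.6e6, 45.0     # this work
Ea = np.log(f2 / f1) / (1 / T1 - 1 / T2)
print(f"activation energy ~ {Ea:.0f} K")   # ~216 K, of the order of the quoted 230 K
```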
|
Powder-based additive manufacturing techniques provide tools to construct
intricate structures that are difficult to manufacture using conventional
methods. In Laser Powder Bed Fusion, components are built by selectively
melting specific areas of the powder bed to form the two-dimensional cross-section of the part. However, the high occurrence of defects
impacts the adoption of this method for precision applications. Therefore, a
control policy for dynamically altering process parameters to avoid phenomena
that lead to defect occurrences is necessary. A Deep Reinforcement Learning
(DRL) framework that derives a versatile control strategy for minimizing the
likelihood of these defects is presented. The generated control policy alters
the velocity of the laser during the melting process to ensure the consistency
of the melt pool and reduce overheating in the generated product. The control
policy is trained and validated on efficient simulations of the continuum
temperature distribution of the powder bed layer under various laser
trajectories.
|
Canonical Correlation Analysis (CCA) and its regularised versions have been
widely used in the neuroimaging community to uncover multivariate associations
between two data modalities (e.g., brain imaging and behaviour). However, these
methods have inherent limitations: (1) statistical inferences about the
associations are often not robust; (2) the associations within each data
modality are not modelled; (3) missing values need to be imputed or removed.
Group Factor Analysis (GFA) is a hierarchical model that addresses the first
two limitations by providing Bayesian inference and modelling modality-specific
associations. Here, we propose an extension of GFA that handles missing data,
and highlight that GFA can be used as a predictive model. We applied GFA to
synthetic and real data consisting of brain connectivity and non-imaging
measures from the Human Connectome Project (HCP). In synthetic data, GFA
uncovered the underlying shared and specific factors and predicted correctly
the non-observed data modalities in complete and incomplete data sets. In the
HCP data, we identified four relevant shared factors, capturing associations
between mood, alcohol and drug use, cognition, demographics and
psychopathological measures and the default mode, frontoparietal control,
dorsal and ventral networks and insula, as well as two factors describing
associations within brain connectivity. In addition, GFA predicted a set of
non-imaging measures from brain connectivity. These findings were consistent in
complete and incomplete data sets, and replicated previous findings in the
literature. GFA is a promising tool that can be used to uncover associations
between and within multiple data modalities in benchmark datasets (such as HCP), and it can easily be extended to more complex models to solve more challenging
tasks.
|
A general framework for obtaining exact transition rate matrices for
stochastic systems on networks is presented and applied to many well-known
compartmental models of epidemiology. The state of the population is described
as a vector in the tensor product space of $N$ individual probability vector
spaces, whose dimension equals the number of compartments of the
epidemiological model $n_c$. The transition rate matrix for the
$n_c^N$-dimensional Markov chain is obtained by taking suitable linear
combinations of tensor products of $n_c$-dimensional matrices. The resulting
transition rate matrix is a sum over bilocal linear operators, which gives insight into the microscopic dynamics of the system. The more familiar and
non-linear node-based mean-field approximations are recovered by restricting
the exact models to uncorrelated (separable) states. We show how the exact
transition rate matrix for the susceptible-infected (SI) model can be used to
find analytic solutions for SI outbreaks on trees and the cycle graph for
finite $N$.
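A minimal sketch of this construction for the SI model ($n_c = 2$): the exact rate matrix is assembled from Kronecker products of local $2\times2$ operators, one bilocal term per directed edge. The operator names and the toy path graph are ours:

```python
import numpy as np
from functools import reduce

def op_at(ops, N):
    """Tensor product placing given 2x2 operators at given nodes (identity elsewhere)."""
    I2 = np.eye(2)
    return reduce(np.kron, [ops.get(k, I2) for k in range(N)])

# Local building blocks (basis order: S = 0, I = 1).
P_I = np.array([[0., 0.], [0., 1.]])    # projector onto the infected state
T_SI = np.array([[-1., 0.], [1., 0.]])  # generator of the S -> I transition

def si_rate_matrix(adj, beta):
    """Exact 2^N x 2^N transition rate matrix of the SI model (zero-diagonal adj)."""
    N = len(adj)
    Q = np.zeros((2 ** N, 2 ** N))
    for i in range(N):
        for j in range(N):
            if adj[i][j]:
                # bilocal term: an infected node j drives node i's S -> I transition
                Q += beta * op_at({i: T_SI, j: P_I}, N)
    return Q

# Example: path graph on 3 nodes (a small tree).
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
Q = si_rate_matrix(adj, beta=1.0)
print(Q.shape, np.allclose(Q.sum(axis=0), 0))   # rate-matrix columns sum to zero
```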
|
Uncovering how inequality emerges from human interaction is imperative for
just societies. Here we show that the way social groups interact in
face-to-face situations can enable the emergence of degree inequality. We
present a mechanism that integrates group mixing dynamics with individual
preferences, which reproduces group degree inequality found in six empirical
data sets of face-to-face interactions. We uncover the impact of group-size
imbalance on degree inequality, revealing a critical minority group size that
changes social gatherings qualitatively. If the minority group is larger than
this 'critical mass' size, it can be a well-connected, cohesive group; if it is
smaller, minority cohesion widens degree inequality. Finally, we expose the
under-representation of social groups in degree rankings due to mixing dynamics
and propose a way to reduce such biases.
|
We study the Randic index for cactus graphs. It is conjectured to be bounded below by the radius (for graphs other than an even path), and it is known to obey several bounds based on the diameter. We study the radius and diameter of cacti, then verify
the radius bound and strengthen two diameter bounds for cacti. Along the way,
we produce several other bounds for the Randic index in terms of graph size,
order, and valency for several special classes of graphs, including nontrivial chemical cacti and cacti with starlike BC-trees.
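For concreteness, the Randic index is $R(G) = \sum_{uv \in E} 1/\sqrt{\deg(u)\deg(v)}$; a small sanity check of the radius bound on a toy cactus (two triangles sharing a cut vertex):

```python
import networkx as nx

def randic_index(G):
    """Randic index: sum over edges uv of 1/sqrt(deg(u) * deg(v))."""
    return sum(1.0 / (G.degree(u) * G.degree(v)) ** 0.5 for u, v in G.edges())

G = nx.Graph()
nx.add_cycle(G, [0, 1, 2])   # first triangle
nx.add_cycle(G, [2, 3, 4])   # second triangle sharing vertex 2

r, rad, diam = randic_index(G), nx.radius(G), nx.diameter(G)
print(f"R(G) = {r:.4f}, radius = {rad}, diameter = {diam}")  # R = 1 + sqrt(2) >= 1
```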
|
Artificial intelligence (AI) has witnessed a substantial breakthrough in a
variety of Internet of Things (IoT) applications and services, spanning from
recommendation systems to robotics control and military surveillance. This is
driven by the easier access to sensory data and the enormous scale of
pervasive/ubiquitous devices that generate zettabytes (ZB) of real-time data
streams. Designing accurate models using such data streams to predict future insights and revolutionize the decision-making process establishes pervasive systems as a worthy paradigm for a better quality of life. The confluence of
pervasive computing and artificial intelligence, Pervasive AI, expanded the
role of ubiquitous IoT systems from mainly data collection to executing
distributed computations with a promising alternative to centralized learning,
presenting various challenges. In this context, wise cooperation and resource
scheduling should be envisaged among IoT devices (e.g., smartphones, smart
vehicles) and infrastructure (e.g. edge nodes, and base stations) to avoid
communication and computation overheads and ensure maximum performance. In this
paper, we conduct a comprehensive survey of the recent techniques developed to
overcome these resource challenges in pervasive AI systems. Specifically, we
first present an overview of the pervasive computing, its architecture, and its
intersection with artificial intelligence. We then review the background,
applications and performance metrics of AI, particularly Deep Learning (DL) and
online learning, running in a ubiquitous system. Next, we provide a deep
literature review of communication-efficient techniques, from both algorithmic
and system perspectives, of distributed inference, training and online learning
tasks across the combination of IoT devices, edge devices and cloud servers.
Finally, we discuss our future vision and research challenges.
|
In this paper, we disprove the EMSO(FO$^2$) convergence law for the binomial random graph $G(n,p)$ for any constant probability $p$. More specifically, we
prove that there exists an existential monadic second order sentence with 2
first order variables such that, for every $p\in(0,1)$, the probability that it
is true on $G(n,p)$ does not converge.
|
The distributed optimization problem is set up in a collection of nodes
interconnected via a communication network. The goal is to find the minimizer
of a global objective function formed by the addition of partial functions
locally known at each node. A number of methods are available for addressing this problem, each with different advantages. The goal of this work is to achieve
the maximum possible convergence rate. As the first step towards this end, we
propose a new method which we show converges faster than other available
options. As with most distributed optimization methods, convergence rate
depends on a step size parameter. As the second step towards our goal we
complement the proposed method with a fully distributed method for estimating
the optimal step size that maximizes convergence speed. We provide theoretical
guarantees for the convergence of the resulting method in a neighborhood of the
solution. Also, for the case in which the global objective function has a
single local minimum, we provide a different step size selection criterion
together with theoretical guarantees for convergence. We present numerical
experiments showing that, when using the same step size, our method converges
significantly faster than its rivals. Experiments also show that the
distributed step size estimation method achieves an asymptotic convergence rate
very close to the theoretical maximum.
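The paper's method and step-size estimator are not reproduced here; as a baseline illustrating why the step size governs the rate and why a constant step only reaches a neighborhood of the optimum, consider classical decentralized gradient descent on a ring with Metropolis weights (all problem data below is synthetic):

```python
import numpy as np

# Each node i holds f_i(x) = 0.5 * (x - b_i)^2, so the global minimizer is mean(b).
rng = np.random.default_rng(1)
n_nodes, dim = 5, 1
b = rng.normal(size=(n_nodes, dim))

# Ring network with Metropolis weights (doubly stochastic mixing matrix W).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in ((i - 1) % n_nodes, (i + 1) % n_nodes):
        W[i, j] = 1 / 3
    W[i, i] = 1 - W[i].sum()

x = np.zeros((n_nodes, dim))
alpha = 0.3                           # step size: the convergence rate hinges on it
for _ in range(200):
    grad = x - b                      # local gradients of the quadratics
    x = W @ x - alpha * grad          # mix with neighbors, then descend

print("consensus estimate:", x.mean(), "true minimizer:", b.mean())
```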
|
We investigate the phenomenology of light GeV-scale fermionic dark matter in
$U(1)_{L_\mu - L_{\tau}}$ gauge extension of the Standard Model. Heavy neutral
fermions, alongside an $S_1(\overline{3},1,1/3)$ scalar leptoquark and an inert scalar doublet, are added to address the flavor anomalies and light neutrino mass, respectively. The light gauge boson associated with
$U(1)_{L_\mu-L_\tau}$ gauge group mediates dark to visible sector and helps to
obtain the correct relic density. Aided with a colored scalar, we constrain the
new model parameters by using the branching ratios of various $b \to sll$ and
$b \to s \gamma$ decay processes as well as the lepton flavour non-universality
observables $R_{K^{(*)}}$, and then show the implications for the branching ratios of some rare semileptonic $B \to (K^{(*)}, \phi)\,+$ missing energy processes.
|
In recent years, the implications of the generalized (GUP) and extended (EUP)
uncertainty principles on the Maxwell-Boltzmann distribution have been widely
investigated. However, at high energy regimes, the validity of
Maxwell-Boltzmann statistics is under debate and, instead, the J\"{u}ttner distribution is proposed as the distribution function in the relativistic limit.
Motivated by these considerations, in the present work, our aim is to study the
effects of GUP and EUP on a system that obeys the J\"{u}ttner distribution. To
achieve this goal, we present a method for obtaining the distribution function, starting from the partition function and its relation to the thermal energy, which finally helps us find the corresponding density of energy states.
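For reference, the unmodified (GUP/EUP-free) Maxwell-J\"{u}ttner distribution over the Lorentz factor $\gamma$, which the above program deforms, is the standard result
$$ f(\gamma) = \frac{\gamma^{2}\beta}{\theta\, K_{2}(1/\theta)}\, e^{-\gamma/\theta}, \qquad \theta = \frac{k_B T}{mc^{2}}, \quad \beta = \sqrt{1-\gamma^{-2}}, $$
where $K_2$ is a modified Bessel function of the second kind.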
|
Supervised machine learning has several drawbacks that make it difficult to
use in many situations. Drawbacks include: heavy reliance on massive training
data, limited generalizability and poor expressiveness of high-level semantics.
Low-shot Learning attempts to address these drawbacks. Low-shot learning allows
the model to obtain good predictive power with very little or no training data,
where structured knowledge plays a key role as a high-level semantic representation of human knowledge. This article reviews the fundamental factors of
low-shot learning technologies, with a focus on the operation of structured
knowledge under different low-shot conditions. We also introduce other
techniques relevant to low-shot learning. Finally, we point out the limitations
of low-shot learning, the prospects and gaps of industrial applications, and
future research directions.
|
Relation extraction is a type of information extraction task that recognizes
semantic relationships between entities in a sentence. Many previous studies
have focused on extracting only one semantic relation between two entities in a
single sentence. However, multiple entities in a sentence are associated
through various relations. To address this issue, we propose a relation
extraction model based on a dual pointer network with a multi-head attention
mechanism. The proposed model finds n-to-1 subject-object relations using a
forward object decoder. Then, it finds 1-to-n subject-object relations using a
backward subject decoder. Our experiments confirmed that the proposed model
outperformed previous models, with an F1-score of 80.8% for the ACE-2005 corpus
and an F1-score of 78.3% for the NYT corpus.
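The pointing step can be sketched with off-the-shelf multi-head attention: a decoder query attends over the encoded tokens, and the attention weights act as a distribution over candidate argument positions (dimensions and tensors below are illustrative; the forward object decoder and backward subject decoder would each run such a step in opposite directions):

```python
import torch
import torch.nn as nn

d_model, n_heads, seq_len = 64, 4, 12
enc = torch.randn(seq_len, 1, d_model)      # encoded tokens (seq, batch, dim)
query = torch.randn(1, 1, d_model)          # current decoder state

attn = nn.MultiheadAttention(d_model, n_heads)
_, weights = attn(query, enc, enc)          # weights: (batch, 1, seq)
pointer = weights.squeeze()                 # distribution over token positions
print(pointer.shape, float(pointer.sum()))  # sums to 1 across the sequence
```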
|
Actor-critic style two-time-scale algorithms are very popular in
reinforcement learning, and have seen great empirical success. However, their
performance is not completely understood theoretically. In this paper, we
characterize the global convergence of an online natural actor-critic algorithm
in the tabular setting using a single trajectory. Our analysis applies to very
general settings, as we only assume that the underlying Markov chain is ergodic
under all policies (the so-called Recurrence assumption). We employ
$\epsilon$-greedy sampling in order to ensure enough exploration.
For a fixed exploration parameter $\epsilon$, we show that the natural actor
critic algorithm is $\mathcal{O}(\frac{1}{\epsilon T^{1/4}}+\epsilon)$ close to
the global optimum after $T$ iterations of the algorithm.
By carefully diminishing the exploration parameter $\epsilon$ as the
iterations proceed, we also show convergence to the global optimum at a rate of
$\mathcal{O}(1/T^{1/6})$.
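A tabular sketch of the analyzed scheme under stated assumptions: softmax logits updated by the advantage (the natural-gradient step for tabular softmax policies), a TD(0) critic on a faster time scale, and $\epsilon$-greedy sampling along a single trajectory. The toy MDP and step-size schedules are our choices:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma, eps = 2, 2, 0.95, 0.1
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] = next-state distribution
R = rng.normal(size=(nS, nA))                  # reward table

theta = np.zeros((nS, nA))                     # actor: softmax logits
Q = np.zeros((nS, nA))                         # critic: action values
softmax = lambda v: np.exp(v - v.max()) / np.exp(v - v.max()).sum()

s = 0
for t in range(1, 100_000):
    pi = softmax(theta[s])
    a = rng.choice(nA, p=(1 - eps) * pi + eps / nA)  # epsilon-greedy exploration
    s2 = rng.choice(nS, p=P[s, a])
    # critic: TD(0) update on the faster time scale
    target = R[s, a] + gamma * softmax(theta[s2]) @ Q[s2]
    Q[s, a] += t ** -0.6 * (target - Q[s, a])
    # actor: natural gradient step = add the advantage to the logits (slower scale)
    theta[s] += t ** -0.9 * (Q[s] - pi @ Q[s])
    s = s2

print("greedy policy:", Q.argmax(axis=1))
```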
|
Human-machine interaction systems are much needed in emerging technology to make users aware of what is happening around them. This is a vast domain in which smart materials enable convergence. One such material is the piezoelectric crystal, a class of smart material with the remarkable property of self-sensing actuation (SSA). SSA offers a considerable advantage to the robotics field by providing both sensing and actuating functionality with fewer devices, less space, and lower power. This paper focuses on using SSA to drive an unmanned ground vehicle (UGV) with a wireless radio control system, which can be of great use across the automation field. A piezoelectric plate is used as an input device to send the signal that moves the UGV in a certain direction; the same plate is then used as an actuator for haptic feedback, via a drive circuit, whenever the UGV encounters obstacles or danger.
|
The COVID-19 pandemic, which spread rapidly in late 2019, has revealed that
the use of computing and communication technologies provides significant aid in
preventing, controlling, and combating infectious diseases. With the ongoing
research in next-generation networking (NGN), the use of secure and reliable
communication and networking is of utmost importance when dealing with users'
health records and other sensitive information. Through the adaptation of
Artificial Intelligence (AI)-enabled NGN, the shape of healthcare systems can
be altered to achieve smart and secure healthcare capable of coping with
epidemics that may emerge at any given moment. In this article, we envision a
cooperative and distributed healthcare framework that relies on
state-of-the-art computing, communication, and intelligence capabilities,
namely, Federated Learning (FL), mobile edge computing (MEC), and Blockchain,
to enable epidemic (or suspicious infectious disease) discovery, remote
monitoring, and fast health-authority response. The introduced framework can
also enable secure medical data exchange at the edge and between different
health entities. Such a technique, coupled with the low latency and high
bandwidth functionality of 5G and beyond networks, would enable mass
surveillance, monitoring and analysis to occur at the edge. Challenges, issues,
and design guidelines are also discussed in this article with highlights on
some trending solutions.
|
Let $\mathcal{P}$ be a finite set of points in the plane in general position.
For any spanning tree $T$ on $\mathcal{P}$, we denote by $|T|$ the Euclidean
length of $T$. Let $T_{\text{OPT}}$ be a plane (that is, noncrossing) spanning
tree of maximum length for $\mathcal{P}$. It is not known whether such a tree
can be found in polynomial time. Past research has focused on designing
polynomial time approximation algorithms, using low diameter trees. In this
work we initiate a systematic study of the interplay between the approximation
factor and the diameter of the candidate trees. Specifically, we show three
results. First, we construct a plane tree $T_{\text{ALG}}$ with diameter at
most four that satisfies $|T_{\text{ALG}}|\ge \delta\cdot |T_{\text{OPT}}|$ for
$\delta>0.546$, thereby substantially improving the currently best known
approximation factor. Second, we show that the longest plane tree among those
with diameter at most three can be found in polynomial time. Third, for any
$d\ge 3$ we provide upper bounds on the approximation factor achieved by a
longest plane tree with diameter at most $d$ (compared to a longest general
plane tree).
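For context, the classical baseline is a star (diameter two), which is automatically noncrossing for points in general position and is known to give a 1/2-approximation of the longest plane tree; computing the best star is immediate:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((50, 2))                                       # toy point set

D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)  # pairwise distances
star_lengths = D.sum(axis=1)            # length of the star centered at each point
center = int(np.argmax(star_lengths))
print(f"best star center: {center}, length = {star_lengths[center]:.3f}")
```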
|
Jerky flow in solids results from collective dynamics of dislocations which
gives rise to serrated deformation curves and a complex evolution of the strain
heterogeneity. A rich example of this phenomenon is the Portevin-Le Chatelier
effect in alloys. The corresponding spatiotemporal patterns showed some
universal features which provided a basis for a well-known phenomenological
classification. Recent studies revealed peculiar features in both the stress
serration sequences and the kinematics of deformation bands in Al-based alloys
containing fine microstructure elements, such as nanosize precipitates and (or)
submicron grains. In the present work, jerky flow of an AlMgScZr alloy is
studied using statistical analysis of stress serrations and the accompanying
acoustic emission. As in the case of coarse-grained binary AlMg alloys, the
amplitude distributions of acoustic events obey a power-law scaling which is
usually considered as evidence of avalanchelike dynamics. However, the scaling
exponents display specific dependences on the strain and strain rate for the
investigated materials. The observed effects bear evidence of a competition
between the phenomena of synchronization and randomization of dislocation
avalanches, which may shed light on the mechanisms leading to a high variety of
jerky flow patterns observed in applied alloys.
|
Considering a common case where measurements are obtained from independent
sensors, we present a novel outlier-robust filter for nonlinear dynamical
systems in this work. The proposed method is devised by modifying the
measurement model and subsequently using the theory of Variational Bayes and
general Gaussian filtering. We treat the measurement outliers independently for
independent observations leading to selective rejection of the corrupted data
during inference. By carrying out simulations for a variable number of sensors, we verify that an implementation of the proposed filter is computationally more efficient than comparably modified baseline methods while still yielding similar estimation quality. In addition, experimental results
for various real-time indoor localization scenarios using Ultra-wide Band (UWB)
sensors demonstrate the practical utility of the proposed method.
|
We present Magic Layouts; a method for parsing screenshots or hand-drawn
sketches of user interface (UI) layouts. Our core contribution is to extend
existing detectors to exploit a learned structural prior for UI designs,
enabling robust detection of UI components: buttons, text boxes, and similar.
Specifically we learn a prior over mobile UI layouts, encoding common spatial
co-occurrence relationships between different UI components. Conditioning
region proposals using this prior leads to performance gains on UI layout
parsing for both hand-drawn UIs and app screenshots, which we demonstrate
within the context of an interactive application for rapidly acquiring digital
prototypes of user experience (UX) designs.
|
The synthesis of 3 and 4 abstract polymer chains divided into two sexes is
considered, where the degree of kinship of the chains is determined by their
overlap. It is shown that the use of some types of entangled bi-photon states in one-way control gives a difference in the degree of kinship between legal and non-legal pairs that is unattainable with classical control. This example
demonstrates the quantum superiority in distributed computing, coming from the
violation of the Bell inequality. It may be of interest for revealing the
quantum mechanisms of synthesis of real biopolymers with directional
properties.
|
Autonomous multi-robot optical inspection systems are increasingly applied
for obtaining inline measurements in process monitoring and quality control.
Numerous methods for path planning and robotic coordination have been developed
for static and dynamic environments and applied to different fields. However,
these approaches may not work for the autonomous multi-robot optical inspection
system due to fast computation requirements of inline optimization, unique
characteristics on robotic end-effector orientations, and complex large-scale
free-form product surfaces. This paper proposes a novel task allocation
methodology for coordinated motion planning of multi-robot inspection.
Specifically, (1) a local robust inspection task allocation is proposed to
achieve efficient and well-balanced measurement assignment among robots; (2)
collision-free path planning and coordinated motion planning are developed via
dynamic searching in robotic coordinate space and perturbation of probe poses
or local paths in the conflicting robots. A case study shows that the proposed
approach can mitigate the risk of collisions between robots and environments,
resolve conflicts among robots, and reduce the inspection cycle time
significantly and consistently.
|
We report on high-power operation of an Er:Y2O3 ceramic laser at ~1.6 {\mu}m using a low-scattering-loss, 0.25 at.% Er3+-doped ceramic sample fabricated in-house via a co-precipitation process. The laser is in-band pumped by an Er,Yb
fiber laser at 1535.6 nm and generates 10.2 W of continuous-wave (CW) output
power at 1640.4 nm with a slope efficiency of 25% with respect to the absorbed
pump power. To the best of our knowledge, this is the first demonstration of
~1.6 {\mu}m Er:Y2O3 laser at room temperature. The prospects for further
scaling in output power and lasing efficiency via low Er3+ doping and reduced
energy-transfer upconversion are discussed.
|
In this work, the thermodynamic properties of the organic superconductor
$\lambda$-(BETS)$_2$GaCl$_4$ are investigated to study a high-field
superconducting state known as the putative Fulde-Ferrell-Larkin-Ovchinnikov
(FFLO) phase. We observed a small thermodynamic anomaly at the field $H_{\rm FFLO} \sim 10$~T, which corresponds to the Pauli limiting field $H_{\rm P}$.
This anomaly probably originates from a transition from a uniform
superconducting state to the FFLO state. $H_{\rm FFLO}$ does not show a strong
field-angular dependence due to a quasi-isotropic paramagnetic effect in
$\lambda$-(BETS)$_2$GaCl$_4$. The thermodynamic anomaly at $H_{\rm FFLO}$ is
smeared out and low-temperature upper critical field $H_{\rm c2}$ changes
significantly if fields are not parallel to the conducting plane even for a
deviation of $\sim$0.5$^{\circ}$. This behavior indicates that the high-field
state is very unstable, as it is influenced by the strongly anisotropic orbital
effect. Our results are consistent with the theoretical predictions on the FFLO
state, and show that the high-field superconductivity is probably an FFLO state
in $\lambda$-(BETS)$_2$GaCl$_4$ from a thermodynamic point of view.
|
Given the recent development of rotating black-bounce-Kerr spacetimes, for
both theoretical and observational purposes it becomes interesting to see
whether it might be possible to construct black-bounce variants of the entire
Kerr-Newman family. Specifically, herein we shall consider
black-bounce-Reissner-Nordstr\"om and black-bounce-Kerr-Newman spacetimes as
particularly simple and clean everywhere-regular black hole "mimickers" that
deviate from the Kerr-Newman family in a precisely controlled and minimal
manner, and smoothly interpolate between regular black holes and traversable
wormholes. While observationally the electric charges on astrophysical black
holes are likely to be extremely low, $|Q|/m \ll 1$, introducing any non-zero
electric charge has a significant theoretical impact. In particular, we verify
the existence of a Killing tensor (and associated Carter-like constant) but
without the full Killing tower of principal tensor and Killing-Yano tensor; we also discuss how, assuming general relativity, the black-bounce-Kerr-Newman solution requires an interesting, non-trivial matter/energy content.
|
In this manuscript we prove the Bernstein inequality and develop the theory
of holonomic D-modules for rings of invariants of finite groups in
characteristic zero, and for strongly F-regular finitely generated graded
algebras with FFRT in prime characteristic. In each of these cases, the ring
itself, its localizations, and its local cohomology modules are holonomic. We
also show that holonomic D-modules, in this context, have finite length. We
obtain these results using a more general version of Bernstein filtrations.
|
Photoelectron circular dichroism (PECD) is a fascinating phenomenon, both from a fundamental science perspective and due to its emerging role as a highly sensitive analytic tool for chiral recognition in the gas phase. PECD has been
studied with single-photon as well as multi-photon ionization. The latter has
been investigated in the short pulse limit with femtosecond laser pulses, where
ionization can be thought of as an instantaneous process.
In this contribution, we demonstrate that multiphoton PECD still can be
observed when using an ultraviolet nanosecond pulse to ionize the chiral showcase molecule fenchone. Compared to femtosecond ionization, the magnitude of PECD
is similar, but the lifetime of intermediate molecular states imprints itself
in the photoelectron spectra. Being able to use an industrial nanosecond laser
to investigate PECD furthermore reduces the technical requirements to apply
PECD in analytical chemistry.
|
Inter-beat interval (IBI) measurement enables estimation of heart-rate
variability (HRV) which, in turn, can provide early indication of potential
cardiovascular diseases. However, extracting IBIs from noisy signals is
challenging since the morphology of the signal is distorted in the presence of
the noise. Electrocardiogram (ECG) of a person in heavy motion is highly
corrupted with noise, known as motion-artifact, and IBI extracted from it is
inaccurate. As a part of remote health monitoring and wearable system
development, denoising ECG signals and estimating IBIs correctly from them have
become an emerging topic among signal-processing researchers. Apart from
conventional methods, deep-learning techniques have been successfully used in
signal denoising recently, and diagnosis process has become easier, leading to
accuracy levels that were previously unachievable. We propose a deep-learning
approach leveraging tiramisu autoencoder model to suppress motion-artifact
noise and make the R-peaks of the ECG signal prominent even in the presence of
high-intensity motion. After denoising, IBIs are estimated more accurately
expediting diagnosis tasks. Results illustrate that our method enables IBI
estimation from noisy ECG signals with SNR up to -30dB with average root mean
square error (RMSE) of 13 milliseconds for estimated IBIs. At this noise level,
our error percentage remains below 8%, outperforming other state-of-the-art techniques.
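Downstream of the denoiser, IBI extraction itself is straightforward; a sketch with a toy quasi-periodic trace standing in for a cleaned ECG (sampling rate, thresholds, and the reference rhythm are assumptions):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63    # sharp peaks at a 1.2 Hz "heart rate"

peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))  # ~400 ms refractory
ibi_ms = np.diff(peaks) / fs * 1000.0
print("IBIs (ms):", np.round(ibi_ms, 1))

# The reported metric: RMSE between estimated and reference IBIs.
ref = np.full_like(ibi_ms, 1000 / 1.2)
print(f"RMSE = {np.sqrt(np.mean((ibi_ms - ref) ** 2)):.2f} ms")
```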
|
Recommender Systems (RS) have employed knowledge distillation which is a
model compression technique training a compact student model with the knowledge
transferred from a pre-trained large teacher model. Recent work has shown that
transferring knowledge from the teacher's intermediate layer significantly
improves the recommendation quality of the student. However, they transfer the knowledge of individual representations point-wise, which is a limitation because the primary information of RS lies in the relations within the representation space. This paper proposes a new topology distillation approach that guides the
student by transferring the topological structure built upon the relations in
the teacher space. We first observe that simply making the student learn the
whole topological structure is not always effective and even degrades the
student's performance. We demonstrate that because the capacity of the student
is highly limited compared to that of the teacher, learning the whole
topological structure is daunting for the student. To address this issue, we
propose a novel method named Hierarchical Topology Distillation (HTD) which
distills the topology hierarchically to cope with the large capacity gap. Our
extensive experiments on real-world datasets show that the proposed method
significantly outperforms the state-of-the-art competitors. We also provide
in-depth analyses to ascertain the benefit of distilling the topology for RS.
|
Evidence-based fact checking aims to verify the truthfulness of a claim
against evidence extracted from textual sources. Learning a representation that
effectively captures relations between a claim and evidence can be challenging.
Recent state-of-the-art approaches have developed increasingly sophisticated
models based on graph structures. We present a simple model that can be trained
on sequence structures. Our model enables inter-sentence attentions at
different levels and can benefit from joint training. Results on a large-scale
dataset for Fact Extraction and VERification (FEVER) show that our model
outperforms the graph-based approaches and yields 1.09% and 1.42% improvements
in label accuracy and FEVER score, respectively, over the best published model.
|
A simple vibrational model of heat transfer in two-dimensional (2D) fluids
relates the heat conductivity coefficient to the longitudinal and transverse
sound velocities, specific heat, and the mean interatomic separation. This
model is demonstrated not to contradict the available experimental and
numerical data on heat transfer in 2D complex plasma layers. Additionally, the
heat conductivity coefficient of a 2D one-component plasma with a logarithmic
interaction is evaluated.
|
Neural Architecture Search (NAS) has enabled automated machine learning by streamlining the manual development of deep neural network architectures through the definition of a search space, a search strategy, and a performance estimation strategy. To address the need for multi-platform deployment of
Convolutional Neural Network (CNN) models, Once-For-All (OFA) proposed to
decouple Training and Search to deliver a one-shot model of sub-networks that
are constrained to various accuracy-latency tradeoffs. We find that the
performance estimation strategy for OFA's search severely lacks generalizability across different hardware deployment platforms due to single-hardware latency lookup tables that require a significant amount of time and manual effort to build beforehand. In this work, we demonstrate a framework
for building latency predictors for neural network architectures to address the
need for heterogeneous hardware support and reduce the overhead of lookup
tables altogether. We introduce two generalizability strategies which include
fine-tuning using a base model trained on a specific hardware and NAS search
space, and GPU-generalization which trains a model on GPU hardware parameters
such as Number of Cores, RAM Size, and Memory Bandwidth. With this, we provide
a family of latency prediction models that achieve over 50% lower RMSE loss as compared to ProxylessNAS. We also show that the use of these latency predictors matches the NAS performance of the lookup-table baseline approach, if not exceeding it in certain cases.
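A sketch of the GPU-generalization idea under stated assumptions: a regressor maps architecture encodings plus hardware descriptors (cores, RAM, memory bandwidth) to latency. All data below is a synthetic stand-in, not OFA's measurements:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 2000
arch = rng.integers(0, 4, size=(n, 10))   # toy architecture encoding
hw = rng.uniform([1000, 8, 200], [10000, 48, 2000], size=(n, 3))  # cores, GB, GB/s
X = np.hstack([arch, hw])
# synthetic latency: grows with depth/width, shrinks with compute resources
latency = arch.sum(1) * 50.0 / hw[:, 0] ** 0.5 + rng.normal(0, 0.05, n)

tr, te = slice(0, 1600), slice(1600, None)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[tr], latency[tr])
rmse = mean_squared_error(latency[te], model.predict(X[te])) ** 0.5
print(f"held-out RMSE: {rmse:.4f}")
```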
|
We give explicit formulas for the geodesic growth series of a Right-Angled Coxeter Group (RACG) based on a link-regular graph that is 4-clique free, i.e., without tetrahedra.
|
We consider some supervised binary classification tasks and a regression task for which SVM and Deep Learning currently exhibit the best generalization performances. We extend the work [3] on a generalized quadratic loss for learning problems, which examines pattern correlations in order to concentrate the learning problem in input-space regions where patterns are more densely distributed. From a shallow-methods point of view (e.g., SVM), since the mathematical derivation of problem (9) in [3] is incorrect, we restart from problem (8) in [3] and try to solve it with a procedure that iterates over the dual variables until the primal and dual objective functions converge. In addition, we propose another algorithm that tries to
solve the classification problem directly from the primal problem formulation.
We also make use of Multiple Kernel Learning to improve generalization performance. Moreover, we introduce for the first time a custom loss that takes pattern correlation into consideration for a shallow and a Deep Learning task. We propose some pattern selection criteria and report results on 4 UCI
data-sets for the SVM method. We also report the results on a larger binary
classification data-set based on Twitter, again drawn from UCI, combined with
shallow neural networks, with and without the generalized quadratic loss. Finally, we test our loss with a Deep Neural Network on a larger
regression task taken from UCI. We compare the results of our optimizers with
the well known solver SVMlight and with Keras Multi-Layers Neural Networks with
standard losses and with a parameterized generalized quadratic loss, and we
obtain comparable results.
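One plausible reading of such a correlation-aware loss (an illustrative guess, not the exact formulation of [3]) weights the residuals by a kernel similarity matrix over the input patterns:

```python
import numpy as np

def generalized_quadratic_loss(y_true, y_pred, X, gamma=1.0):
    """Illustrative r^T S r loss, with S an RBF similarity over patterns X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    S = np.exp(-gamma * sq)                              # similarity matrix
    r = (y_true - y_pred).reshape(-1, 1)                 # residual column vector
    return float(r.T @ S @ r) / len(r)

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = rng.normal(size=8)
print(generalized_quadratic_loss(y, np.zeros(8), X))
```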
|