The use of few-femtosecond, extreme ultraviolet (XUV) pulses, produced by
high-order harmonic generation, in combination with few-femtosecond infrared
(IR) pulses in pump-probe experiments has great potential to reveal ultrafast
dynamics in molecules, nanostructures and solids. A crucial prerequisite is a
reliable characterization of the temporal properties of the XUV and IR pulses.
Several techniques have been developed. Most of them apply phase
reconstruction algorithms to a photoelectron spectrogram obtained by ionizing
an atomic target in a pump-probe fashion. If the ionizing radiation is a single
harmonic, all the information is encoded in a two-color two-photon signal
called sideband (SB). In this work, we present a simplified model to interpret
the time-frequency mapping of the SB signal and we show that the temporal
dispersion of the pulses directly maps onto the shape of its spectrogram.
Finally, we derive an analytical solution, which allows us to propose a novel
procedure to estimate the second-order dispersion of the XUV and IR pulses in
real time and with no need for iterative algorithms.
|
We prove a slab theorem for convex ancient solutions to mean curvature flow
without any additional hypotheses (such as concavity of the arrival time,
bounded curvature on compact time intervals, or noncollapsing in the sense of
inscribed radii). By carefully exploiting this, we are able to obtain a
Harnack-type inequality and a corresponding compactness theorem for the entire
solutions.
Using the compactness theorem, we are then able to give a short proof of the
equivalence of noncollapsing in the sense of inscribed radii and entire arrival
times for convex ancient mean curvature flows.
Finally, we provide a useful characterization of the non-entire solutions. In
particular, we prove that they are necessarily asymptotic to at least one Grim
hyperplane.
As a consequence, we rule out collapsing singularity models in (not
necessarily embedded) compact $(n-1)$-convex mean curvature flow. We also
remove the noncollapsing hypothesis from certain recent classification results
for ancient solutions.
|
*The following abbreviates the abstract. Please refer to the thesis for the
full abstract.*
After a disaster, locating and extracting victims quickly is critical because
mortality rises rapidly after the first two days. To assist search and rescue
teams and improve response times, teams of camera-equipped aerial robots can
engage in tasks such as mapping buildings and locating victims.
These sensing tasks encapsulate difficult (NP-Hard) problems. One way to
simplify planning for these tasks is to focus on maximizing sensing performance
over a short time horizon. Specifically, consider the problem of how to select
motions for a team of robots to maximize a notion of sensing quality (the
sensing objective) over the near future, say by maximizing the amount of
unknown space in a map that robots will observe over the next several seconds.
By repeating this process regularly, the team can react quickly to new
observations as they work to complete the sensing task. In technical terms,
this planning and control process forms an example of receding-horizon control.
Fortunately, common sensing objectives benefit from well-known monotonicity
properties (e.g., submodularity), and greedy algorithms can exploit these
properties to solve the receding-horizon optimization problems that we study
to near-optimality.
However, greedy algorithms typically force robots to make decisions
sequentially, so planning time grows with the number of robots. Further,
recent works investigating sequential greedy planning have demonstrated
that reducing the number of sequential steps while retaining suboptimality
guarantees can be hard or impossible.
We demonstrate that halting growth in planning time is sometimes possible. To
do so, we introduce novel greedy algorithms involving fixed numbers of
sequential steps.
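
To make the structure of such planners concrete, the following minimal sketch
(an illustration of the standard sequential greedy rule, not the thesis's
implementation) maximizes a monotone submodular coverage objective; the names
`candidate_motions` and `cells_observed` are hypothetical stand-ins for each
robot's motion primitives and the map cells those motions would observe:

```python
# Minimal sketch of sequential greedy planning for a monotone submodular
# sensing objective (number of unknown map cells observed). Inputs are
# hypothetical: candidate_motions[r] lists robot r's motion primitives, and
# cells_observed(r, m) returns the set of cells robot r would observe
# when executing motion m.
def sequential_greedy(robots, candidate_motions, cells_observed):
    covered = set()   # cells already claimed by earlier robots' choices
    plan = {}
    for r in robots:  # one sequential decision step per robot
        best_motion, best_gain = None, -1
        for m in candidate_motions[r]:
            gain = len(cells_observed(r, m) - covered)  # marginal gain
            if gain > best_gain:
                best_motion, best_gain = m, gain
        plan[r] = best_motion
        covered |= cells_observed(r, best_motion)
    return plan

# Example: two robots, two candidate motions each.
obs = {(0, "a"): {1, 2}, (0, "b"): {3},
       (1, "a"): {1, 2}, (1, "b"): {4, 5}}
plan = sequential_greedy([0, 1], {0: ["a", "b"], 1: ["a", "b"]},
                         lambda r, m: obs[(r, m)])
print(plan)  # {0: 'a', 1: 'b'}
```

For monotone submodular objectives this fully sequential rule retains the
classical factor-1/2 guarantee; the contribution described above concerns
fixing the number of such sequential steps as the team grows.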
|
Backdoor attacks are a kind of insidious security threat against machine
learning models. After being injected with a backdoor in training, the victim
model will produce adversary-specified outputs on the inputs embedded with
predesigned triggers but behave properly on normal inputs during inference. As
an emergent kind of attack, backdoor attacks in natural language processing
(NLP) have been insufficiently investigated. As far as we know, almost all existing
textual backdoor attack methods insert additional content into normal samples
as triggers, which makes the trigger-embedded samples easy to detect and the
backdoor attacks easy to block. In this paper, we propose
to use the syntactic structure as the trigger in textual backdoor attacks. We
conduct extensive experiments to demonstrate that the syntactic trigger-based
attack method can achieve comparable attack performance (almost 100% success
rate) to the insertion-based methods but possesses much higher invisibility and
stronger resistance to defenses. These results also reveal the significant
insidiousness and harmfulness of textual backdoor attacks. All the code and
data of this paper can be obtained at https://github.com/thunlp/HiddenKiller.
|
Symmetries have proven to be important ingredients in the analysis of neural
networks. So far their use has mostly been implicit or seemingly coincidental.
We undertake a systematic study of the role that symmetry plays. In
particular, we clarify how symmetry interacts with the learning algorithm. The
key role in our study is played by Noether's celebrated theorem which,
informally speaking, states that symmetry leads to conserved quantities (e.g.,
conservation of energy or conservation of momentum). In the realm of neural
networks under gradient descent, model symmetries imply restrictions on the
gradient path. E.g., we show that symmetry of activation functions leads to
boundedness of weight matrices; for the specific case of linear activations it
leads to balance equations between consecutive layers; data augmentation leads
to gradient paths that have "momentum"-type restrictions; and time symmetry
leads to a version of the Neural Tangent Kernel.
Symmetry alone does not specify the optimization path, but the more
symmetries are contained in the model the more restrictions are imposed on the
path. Since symmetry also implies over-parametrization, this in effect implies
that some part of this over-parametrization is cancelled out by the existence
of the conserved quantities.
Symmetry can therefore be thought of as one further important tool in
understanding the performance of neural networks under gradient descent.
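
As a concrete illustration of such a conserved quantity, the following small
numerical check (my own example, not the paper's code) trains a two-layer
linear network by gradient descent with a small step size, approximating
gradient flow, and verifies that the balance quantity $W_1 W_1^\top - W_2^\top
W_2$ stays essentially constant:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, o, n = 5, 4, 3, 50
X = rng.normal(size=(d, n))            # inputs
T = rng.normal(size=(o, n))            # targets
W1 = 0.1 * rng.normal(size=(h, d))
W2 = 0.1 * rng.normal(size=(o, h))

def balance(W1, W2):
    # Conserved under gradient flow for linear networks (balance equations).
    return W1 @ W1.T - W2.T @ W2

Q0 = balance(W1, W2)
lr = 1e-3                              # small step approximates gradient flow
for _ in range(5000):
    R = W2 @ W1 @ X - T                # residual of the squared loss
    g1 = W2.T @ R @ X.T                # dL/dW1
    g2 = R @ (W1 @ X).T                # dL/dW2
    W1 -= lr * g1
    W2 -= lr * g2

print(np.linalg.norm(balance(W1, W2) - Q0))  # drift is O(lr), near zero
```

Conservation is exact in continuous time; the discrete steps introduce only a
small drift that vanishes as the step size goes to zero.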
|
Sensors are being extensively deployed and are expected to expand at
significant rates in the coming years. They typically generate a large volume
of data on the internet of things (IoT) application areas like smart cities,
intelligent traffic systems, smart grid, and e-health. Cloud, edge and fog
computing are potential and competitive strategies for collecting, processing,
and distributing IoT data. However, cloud, edge, and fog-based solutions need
to tackle the distribution of a high volume of IoT data efficiently through
constrained and limited resource network infrastructures. This paper addresses
the issue of conveying a massive volume of IoT data through a network with
limited communications resources (bandwidth) using a cognitive communications
resource allocation based on Reinforcement Learning (RL) with SARSA algorithm.
The proposed network infrastructure (PSIoTRL) uses a Publish/Subscribe
architecture to access massive and highly distributed IoT data. It is
demonstrated that the PSIoTRL bandwidth allocation for buffer flushing based on
SARSA enhances the IoT aggregator buffer occupation and network link
utilization. The PSIoTRL dynamically adapts the IoT aggregator traffic flushing
according to the Pub/Sub topic's priority and network constraint requirements.
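
For reference, the on-policy SARSA rule at the core of such a scheme has the
standard tabular form sketched below; the state and action encodings (buffer
occupancy levels, bandwidth shares for flushing) are hypothetical stand-ins for
the paper's actual design:

```python
import random

def sarsa(env, states, actions, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Standard tabular SARSA with an epsilon-greedy behavior policy."""
    Q = {(s, a): 0.0 for s in states for a in actions}

    def policy(s):
        if random.random() < eps:                     # explore
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])  # exploit

    for _ in range(episodes):
        s = env.reset()
        a = policy(s)
        done = False
        while not done:
            s2, r, done = env.step(a)  # e.g., apply a bandwidth share, observe
            a2 = policy(s2)            # buffer occupancy and a utilization reward
            # on-policy update: bootstraps on the action actually taken next
            Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
            s, a = s2, a2
    return Q
```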
|
We study odd parity perturbations of spherically symmetric black holes with
time-dependent scalar hair in shift-symmetric higher-order scalar-tensor
theories. The analysis is performed in a general way without assuming the
degeneracy conditions. Nevertheless, we end up with second-order equations for
a single master variable, similarly to cosmological tensor modes. We thus
identify the general form of the quadratic Lagrangian for the odd parity
perturbations, leading to a generalization of the Regge-Wheeler equation. We
also investigate the structure of the effective metric for the master variable
and refine the stability conditions. As an application of our generalized
Regge-Wheeler equation, we compute the quasi-normal modes of a certain
nontrivial black hole solution. Finally, our result is extended to include the
matter energy-momentum tensor as a source term.
|
Superconducting qubits are a promising platform for building a larger-scale
quantum processor capable of solving otherwise intractable problems. In order
for the processor to reach practical viability, the gate errors need to be
further suppressed and remain stable for extended periods of time. With recent
advances in qubit control, both single- and two-qubit gate fidelities are now
in many cases limited by the coherence times of the qubits. Here we
experimentally employ closed-loop feedback to stabilize the frequency
fluctuations of a superconducting transmon qubit, thereby increasing its
coherence time by 26\% and reducing the single-qubit error rate from $(8.5 \pm
2.1)\times 10^{-4}$ to $(5.9 \pm 0.7)\times 10^{-4}$. Importantly, the
resulting high-fidelity operation remains effective even away from the qubit
flux-noise insensitive point, significantly increasing the frequency bandwidth
over which the qubit can be operated with high fidelity. This approach is
helpful in large qubit grids, where frequency crowding and parasitic
interactions between the qubits limit their performance.
|
In this paper, we develop a method to assess the sensitivity of local average
treatment effect estimates to potential violations of the monotonicity
assumption of Imbens and Angrist (1994). We parameterize the degree to which
monotonicity is violated using two sensitivity parameters: the first one
determines the share of defiers in the population, and the second one measures
differences in the distributions of outcomes between compliers and defiers. For
each pair of values of these sensitivity parameters, we derive sharp bounds on
the outcome distributions of compliers in the first-order stochastic dominance
sense. We identify the robust region that is the set of all values of
sensitivity parameters for which a given empirical conclusion, e.g. that the
local average treatment effect is positive, is valid. Researchers can assess
the credibility of their conclusion by evaluating whether all the plausible
sensitivity parameters lie in the robust region. We obtain confidence sets for
the robust region through a bootstrap procedure and illustrate the sensitivity
analysis in an empirical application. We also extend this framework to analyze
treatment effects for the entire population.
|
We report the discovery of an extended very-high-energy (VHE) gamma-ray
source around the location of the middle-aged (207.8 kyr) pulsar PSR J0622+3749
with the Large High Altitude Air Shower Observatory (LHAASO). The source is
detected with a significance of $8.2\sigma$ for $E>25$~TeV assuming a Gaussian
template. The best-fit location is (R.A.,
Dec.)$=(95^{\circ}\!.47\pm0^{\circ}\!.11,\,37^{\circ}\!.92 \pm0^{\circ}\!.09)$,
and the extension is $0^{\circ}\!.40\pm0^{\circ}\!.07$. The energy spectrum can
be described by a power-law spectrum with an index of ${-2.92 \pm 0.17_{\rm
stat} \pm 0.02_{\rm sys} }$. No clear extended multi-wavelength counterpart of
the LHAASO source has been found from the radio to sub-TeV bands. The LHAASO
observations are consistent with the scenario that VHE electrons escaped from
the pulsar, diffused in the interstellar medium, and scattered the interstellar
radiation field. If interpreted as the pulsar halo scenario, the diffusion
coefficient, inferred for electrons with median energies of $\sim160$~TeV, is
consistent with those obtained from the extended halos around Geminga and
Monogem and much smaller than that derived from cosmic ray secondaries. The
LHAASO discovery of this source thus likely enriches the class of so-called
pulsar halos and confirms that high-energy particles generally diffuse very
slowly in the disturbed medium around pulsars.
|
This paper develops probabilistic PV forecasters by taking advantage of
recent breakthroughs in deep learning. A tailored forecasting tool, named
encoder-decoder, is implemented to compute intraday multi-output PV quantile
forecasts that efficiently capture the time correlation. The case study is
using quantile regression, a non-parametric approach that assumes no prior
knowledge of the probabilistic forecasting distribution. The case study is
composed of PV production monitored on-site at the University of Li\`ege
(ULi\`ege), Belgium. The weather forecasts from the regional climate model
provided by the Laboratory of Climatology are used as inputs of the deep
learning models. The forecast quality is quantitatively assessed by the
continuous ranked probability and interval scores. The results indicate this
architecture improves the forecast quality and is computationally efficient to
be incorporated in an intraday decision-making tool for robust optimization.
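
The quantile-regression training mentioned here minimizes, for each quantile
level, the standard pinball loss; a minimal sketch (the textbook formulation,
not the paper's exact code) follows:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    # Pinball (quantile) loss for level q in (0, 1): under-predictions are
    # weighted by q and over-predictions by (1 - q), so its minimizer is the
    # q-th conditional quantile -- no distributional assumption needed.
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Example: score a 0.9-quantile forecast of PV production (arbitrary units).
y_true = np.array([3.1, 4.0, 2.5])
y_pred = np.array([3.5, 4.2, 2.0])
print(pinball_loss(y_true, y_pred, q=0.9))
```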
|
Nuclear Magnetic Resonance (NMR) shielding constants of transition metals in
solvated complexes are computed at the relativistic density functional theory
(DFT) level. The solvent effects evaluated with subsystem-DFT approaches are
compared with the reference solvent shifts predicted from supermolecular
calculations. Two subsystem-DFT approaches are analyzed -- in the standard
frozen density embedding (FDE) scheme the transition metal complexes are
embedded in an environment of solvent molecules whose density is kept frozen,
in the second approach the densities of the complex and of its environment are
relaxed in the "freeze-and-thaw" procedure. The latter approach improves the
description of the solvent effects in most cases; nevertheless, the FDE
deficiencies remain rather large in some cases.
|
Static Application Security Testing (SAST) is a popular quality assurance
technique in software engineering. However, integrating SAST tools into
industry-level product development and security assessment poses various
technical and managerial challenges. In this work, we report a longitudinal
case study of adopting SAST as a part of a human-driven security assessment for
an open-source e-government project. We describe how SAST tools were selected,
evaluated, and combined into a novel approach for software security assessment.
The approach was preliminarily evaluated using semi-structured interviews. Our
results show that (1) while some SAST tools outperform others, it is possible
to achieve better performance by combining more than one SAST tool, and (2)
SAST tools should be used with realistic performance expectations and in
combination with triangulated approaches for human-driven vulnerability
assessment in real-world projects.
|
This is the opening article of the abstract book of conference "Set-Theoretic
Topology and Topological Algebra" in honor of professor Alexander Arhangelskii
on the occasion of his 80th birthday held in 2018 at Moscow State University.
|
For a non-negative separable random field $Z(t), t\in \mathbb{R}^d$
satisfying some mild assumptions we show that
\[
H_Z^\delta = \lim_{T\to\infty} \frac{1}{T^d}\, \mathbb{E}\Big\{ \sup_{t\in [0,T]^d \cap \delta \mathbb{Z}^d} Z(t) \Big\} < \infty
\]
for $\delta \ge 0$, where $0\,\mathbb{Z}^d := \mathbb{R}^d$, and prove that $H_Z^0$ can be approximated by
$H_Z^\delta$ if $\delta$ tends to 0. These results extend the classical
findings for the Pickands constants $H_{Z}^\delta$, defined for $Z(t)=
\exp\left( \sqrt{ 2} B_\alpha (t)- |t|^{2\alpha }\right), t\in \mathbb{R}$ with
$B_\alpha$ a standard fractional Brownian motion with Hurst parameter $\alpha
\in (0,1]$.
The continuity of $H_{Z}^\delta$ at $\delta=0$ is additionally shown for two
particular extensions of Pickands constants.
|
Characterizing plasticity mechanisms below the ductile-to-brittle transition
temperature is traditionally difficult to accomplish in a systematic fashion.
Here, we use a new experimental setup to perform in situ cryogenic mechanical
testing of pure Sn micropillars at room temperature and at -142{\deg}C.
Subsequent electron microscopy characterization of the micropillars shows a
clear difference in the deformation mechanisms at room temperature and at
cryogenic temperatures. At room temperature, the Sn micropillars deformed
through dislocation plasticity, while at -142{\deg}C they exhibited both higher
strength and deformation twinning. Two different orientations were tested, a
symmetric (100) orientation and a non-symmetric (451) orientation. The
deformation mechanisms were found to be the same for both orientations.
A semiconductor emitter can potentially achieve a sharp cutoff wavelength due
to its intrinsic bandgap absorption and almost zero sub-bandgap emission
without doping. A germanium wafer based selective emitter with front-side
antireflection and backside metal coating is studied here for
thermophotovoltaic (TPV) energy conversion. Optical simulation predicts the
spectral emittance above 0.9 in the wavelengths from 1 to 1.85 um and below 0.2
in the sub-bandgap range with sharp cutoff around the bandgap, indicating
superior spectral selectivity behavior. This is confirmed by excellent
agreement with indirectly measured spectral emittance of the fabricated
Ge-based selective emitter sample. Furthermore, the TPV efficiency of pairing
the Ge-based selective emitter with a GaSb cell is theoretically analyzed at
different temperatures. This work will facilitate the development of
semiconductor-based selective emitters for enhancing TPV performance.
|
We prove the equivalence in the covariant phase space of the metric and
connection formulations for Palatini gravity, with nonmetricity and torsion, on
a spacetime manifold with boundary. To this end, we will rely on the
cohomological approach provided by the relative bicomplex framework. Finally,
we discuss some of the physical implications derived from this equivalence in
the context of singularity identification through curvature invariants.
|
A common observation in data-driven applications is that high dimensional
data has a low intrinsic dimension, at least locally. In this work, we consider
the problem of estimating a $d$ dimensional sub-manifold of $\mathbb{R}^D$ from
a finite set of noisy samples. Assuming that the data was sampled uniformly
from a tubular neighborhood of $\mathcal{M}\in \mathcal{C}^k$, a compact
manifold without boundary, we present an algorithm that takes a point $r$ from
the tubular neighborhood and outputs $\hat p_n\in \mathbb{R}^D$, and
$\widehat{T_{\hat p_n}\mathcal{M}}$ an element in the Grassmannian $Gr(d, D)$.
We prove that as the number of samples $n\to\infty$ the point $\hat p_n$
converges to $p\in \mathcal{M}$ and $\widehat{T_{\hat p_n}\mathcal{M}}$
converges to $T_p\mathcal{M}$ (the tangent space at that point) with high
probability. Furthermore, we show that the estimation yields asymptotic rates
of convergence of $n^{-\frac{k}{2k + d}}$ for the point estimation and
$n^{-\frac{k-1}{2k + d}}$ for the estimation of the tangent space. These rates
are known to be optimal for the case of function estimation.
|
Understanding the limits of list-decoding and list-recovery of Reed-Solomon
(RS) codes is of prime interest in coding theory and has attracted a lot of
attention in recent decades. However, the best possible parameters for these
problems are still unknown, and in this paper, we take a step in this
direction. We show the existence of RS codes that are list-decodable or
list-recoverable beyond the Johnson radius for \emph{any} rate, with field
size polynomial in the block length. In particular, we show that for any
$\epsilon\in (0,1)$ there exist RS codes that are list-decodable from radius
$1-\epsilon$ and rate less than $\frac{\epsilon}{2-\epsilon}$, with constant
list size. We deduce our results by extending and strengthening a recent result
of Ferber, Kwan, and Sauermann on puncturing codes with large minimum distance
and by utilizing the underlying code's linearity.
|
In this research, we propose a new low-precision framework, TENT, to leverage
the benefits of a tapered fixed-point numerical format in TinyML models. We
introduce a tapered fixed-point quantization algorithm that matches the
numerical format's dynamic range and distribution to that of the deep neural
network model's parameter distribution at each layer. An accelerator
architecture for the tapered fixed-point with TENT framework is proposed.
Results show that the accuracy on classification tasks improves by up to ~31%
with an energy overhead of ~17-30% as compared to fixed-point, for ConvNet and
ResNet-18 models.
|
Determining the clumpiness of matter around galaxies is pivotal to a full
understanding of the spatially inhomogeneous, multi-phase gas in the
circumgalactic medium (CGM). We combine high spatially resolved 3D observations
with hydrodynamical cosmological simulations to measure the cold circumgalactic
gas clumpiness. We present new adaptive-optics-assisted VLT/MUSE observations
of a quadruply lensed quasar, targeting the CGM of 2 foreground $z\sim$1
galaxies observed in absorption. We additionally use zoom-in FOGGIE simulations
with exquisite resolution ($\sim$0.1 kpc scales) in the CGM of galaxies to
compute the physical properties of cold gas traced by Mg\,II absorbers. By
contrasting these mock-observables with the VLT/MUSE observations, we find a
large spread of fractional variations of Mg\,II equivalent widths with physical
separation, both in observations and simulations. The simulations indicate a
dependence of the Mg\,II coherence length on the underlying gas morphology
(filaments vs clumps). The $z_{\rm abs}$=1.168 Mg\,II system shows coherence
over $\gtrsim$ 6 kpc and is associated with an [O\,II] emitting galaxy situated
89 kpc away, with SFR $\geq$ 4.6 $\pm$ {1.5} $\rm M_{\odot}$/yr and
$M_{*}=10^{9.6\pm0.2} M_{\odot}$. Based on this combined analysis, we determine
that the absorber is consistent with being an inflowing filament. The $z_{\rm
abs}$=1.393 Mg\,II system traces dense CGM gas clumps varying in strength over
$\lesssim$ 2 kpc physical scales. Our findings suggest that this absorber is
likely related to an outflowing clump. Our joint approach combining
3D-spectroscopy observations of lensed systems and simulations with extreme
resolution in the CGM puts new constraints on the clumpiness of cold CGM gas, a
key diagnostic of the baryon cycle.
|
We investigate how the addition of quantum resources changes the statistical
complexity of quantum circuits by utilizing the framework of quantum resource
theories. Measures of statistical complexity that we consider include the
Rademacher complexity and the Gaussian complexity, which are well-known
measures in computational learning theory that quantify the richness of classes
of real-valued functions. We derive bounds for the statistical complexities of
quantum circuits that have limited access to certain resources and apply our
results to two special cases: (1) stabilizer circuits that are supplemented
with a limited number of T gates and (2) instantaneous quantum polynomial-time
Clifford circuits that are supplemented with a limited number of CCZ gates. We
show that the increase in the statistical complexity of a quantum circuit when
an additional quantum channel is added to it is upper bounded by the free
robustness of the added channel. Finally, we derive bounds for the
generalization error associated with learning from training data arising from
quantum circuits.
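
For reference, the empirical Rademacher complexity of a real-valued function
class $\mathcal{F}$ on a sample $(x_1,\dots,x_n)$, in its standard
learning-theoretic form, is
\[
\widehat{\mathcal{R}}_n(\mathcal{F}) \;=\; \mathbb{E}_{\sigma}\left[\,\sup_{f\in\mathcal{F}}\ \frac{1}{n}\sum_{i=1}^{n}\sigma_i\, f(x_i)\right],
\]
where the $\sigma_i$ are independent Rademacher variables taking values $\pm 1$
with equal probability; the Gaussian complexity is defined analogously with
independent standard Gaussian variables in place of the $\sigma_i$.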
|
We analyze the interaction with uniform external fields of nematic liquid
crystals within a recent generalized free-energy posited by Virga and falling
in the class of quartic functionals in the spatial gradients of the nematic
director. We review some known interesting solutions, i.e., uniform
heliconical structures, which correspond to the so-called twist-bend nematic
phase and we also study the transition between this phase and the standard
uniform nematic one. Moreover, we find liquid crystal configurations, which
closely resemble some novel, experimentally detected, structures called
Skyrmion Tubes. Skyrmion Tubes are characterized by a localized
cylindrically-symmetric pattern surrounded by either twist-bend or uniform
nematic phase. We study the equilibrium differential equations and find
numerical solutions and analytical approximations.
|
In the pursuit of natural language understanding, there has been a
long-standing interest in tracking state changes throughout narratives. Impressive
progress has been made in modeling the state of transaction-centric dialogues
and procedural texts. However, this problem has been less intensively studied
in the realm of general discourse where ground truth descriptions of states may
be loosely defined and state changes are less densely distributed over
utterances. This paper proposes to turn to simplified, fully observable systems
that show some of these properties: Sports events. We curated 2,263 soccer
matches including time-stamped natural language commentary accompanied by
discrete events such as a team scoring goals, switching players or being
penalized with cards. We propose a new task formulation where, given paragraphs
of commentary of a game at different timestamps, the system is asked to
recognize the occurrence of in-game events. This domain allows for rich
descriptions of state while avoiding the complexities of many other real-world
settings. As an initial point of performance measurement, we include two
baseline methods, from the perspectives of sentence classification with temporal
dependence and a current state-of-the-art generative model, respectively, and
demonstrate that even sophisticated existing methods struggle on the state
tracking task when the definition of state broadens or non-event chatter
becomes prevalent.
|
We consider Sobolev mappings $f\in W^{1,q}(\Omega,\mathbb{C})$, $1<q<\infty$,
between planar domains $\Omega\subset \mathbb{C}$. We analyse the Radon-Riesz property
for convex functionals of the form \[f\mapsto \int_\Omega \Phi(|Df(z)|,J(z,f))
\; dz \]
and show that under certain criteria, which hold in important cases, weak
convergence in $W_{loc}^{1,q}(\Omega)$ of (for instance) a minimising sequence
can be improved to strong convergence. This finds important applications in the
minimisation problems for mappings of finite distortion and the $L^p$ and
$Exp$\,-Teichm\"uller theories.
|
Athreya, Bufetov, Eskin and Mirzakhani have shown the number of mapping class
group lattice points intersecting a closed ball of radius $R$ in
Teichm\"{u}ller space is asymptotic to $e^{hR}$, where $h$ is the dimension of
the Teichm\"{u}ller space. We show for any pseudo-Anosov mapping class $f$,
there exists a power $n$, such that the number of lattice points of the $f^n$
conjugacy class intersecting a closed ball of radius $R$ is coarsely asymptotic
to $e^{\frac{h}{2}R}$.
|
Clinical diagnostic and treatment decisions rely upon the integration of
patient-specific data with clinical reasoning. Cancer presents a unique context
that influences treatment decisions, given its diverse forms of disease
evolution. Biomedical imaging allows noninvasive assessment of disease based on
visual evaluations leading to better clinical outcome prediction and
therapeutic planning. Early methods of brain cancer characterization
predominantly relied upon statistical modeling of neuroimaging data. Driven by
the breakthroughs in computer vision, deep learning became the de facto
standard in the domain of medical imaging. Integrated statistical and deep
learning methods have recently emerged as a new direction in the automation of
the medical practice unifying multi-disciplinary knowledge in medicine,
statistics, and artificial intelligence. In this study, we critically review
major statistical and deep learning models and their applications in brain
imaging research with a focus on MRI-based brain tumor segmentation. The
results highlight that model-driven classical statistics and data-driven
deep learning form a potent combination for developing automated systems in
clinical oncology.
|
Multi-Schur functions are symmetric functions that generalize the
supersymmetric Schur functions, the flagged Schur functions, and the refined
dual Grothendieck functions, which have been intensively studied by Lascoux. In
this paper, we give a new free-fermionic presentation of them. The multi-Schur
functions are indexed by a partition and two ``tuples of tuples'' of
indeterminates. We construct a family of linear bases of the fermionic Fock
space that are indexed by such data and prove that they correspond to the
multi-Schur functions through the boson-fermion correspondence. By focusing on
some special bases, which we call refined bases, we give a straightforward
method of expanding a multi-Schur function in the refined dual Grothendieck
polynomials. We also present a sufficient condition for a multi-Schur function
to have its Hall-dual function in the completed ring of symmetric functions.
|
Real-time sensing of ultra-wideband radio-frequency signals with high
frequency resolution is challenging, as it is constrained by the sampling rate
of electronic analog-to-digital converters and the capability of digital signal
processing. By combining quantum mechanics with compressed sensing, quantum
compressed sensing is proposed for wideband radio-frequency signal frequency
measurement. An electro-optical crystal serves as a sensor that modulates the
wave function of coherent photons with the signal to be measured. The
frequency spectrum can be recovered by detecting the modulated sparse photons
with a low time-jitter single-photon detector and a time-to-digital converter.
More than 50 GHz of real-time analysis bandwidth is demonstrated with
Fourier-transform-limited resolution. Further simulations show it can be
extended to more than 300 GHz with present technologies.
|
The COVID-19 pandemic has inspired unprecedented data collection and computer
vision modelling efforts worldwide, focusing on diagnosis and stratification of
COVID-19 from medical images. Despite this large-scale research effort, these
models have found limited practical application due in part to unproven
generalization of these models beyond their source study. This study
investigates the generalizability of key published models using the publicly
available COVID-19 Computed Tomography data through cross dataset validation.
We then assess the predictive ability of these models for COVID-19 severity
using an independent new dataset that is stratified for COVID-19 lung
involvement. Each inter-dataset study is performed using histogram
equalization, and contrast limited adaptive histogram equalization with and
without a learning Gabor filter. The study shows high variability in the
generalization of models trained on these datasets due to varied sample image
provenances and acquisition processes amongst other factors. We show that under
certain conditions, an internally consistent dataset can generalize well to an
external dataset despite structural differences between these datasets, with F1
scores up to 86%. Our best performing model shows high predictive accuracy for
lung involvement score for an independent dataset for which expertly labelled
lung involvement stratification is available. Creating an ensemble of our best
model for disease positive prediction with our best model for disease negative
prediction using a min-max function resulted in a superior model for lung
involvement prediction, with average predictive accuracy of 75% for zero lung
involvement and 96% for 75-100% lung involvement, and an almost linear
relationship between these stratifications.
|
It is still nontrivial to develop a new fast COVID-19 screening method with
easier access and lower cost, given the technical and cost limitations of
current testing methods in medical resource-poor districts. On the other hand,
a growing body of clinical evidence reports ocular manifestations in COVID-19
patients [1], which inspired this project. We have conducted joint clinical
research since January 2021 in ShiJiaZhuang City, Hebei province, China,
approved by the ethics committee of The fifth hospital of ShiJiaZhuang of
Hebei Medical University. We undertook several blind tests of COVID-19
patients with Union Hospital, Tongji Medical College, Huazhong University of
Science and Technology, Wuhan, China. Meanwhile, as an important part of the
ongoing global COVID-19 eye test program by AIMOMICS since February 2020, we
propose a new fast screening method that analyzes eye-region images captured
by common CCD and CMOS cameras. This could reliably provide rapid risk
screening of COVID-19 with sustained, stable, high performance across
different countries and races. Our model for COVID-19 rapid prescreening has
the merits of lower cost, being fully self-performed, non-invasive, and,
importantly, real-time, and thus enables continuous health surveillance. We
further implement it as openly accessible APIs and provide a public service to
the world. Our pilot experiments show that our model is ready for all kinds of
surveillance scenarios, such as infrared temperature measurement devices at
airports and stations, or direct deployment to target groups' smartphones as a
packaged application.
|
Approximately 75% of the energy used in the petrochemical and refining
industries is consumed by furnaces. Operating furnaces at optimal conditions
results in substantial savings. In this paper, we model furnace efficiency
optimization as a multi-objective problem involving multiple interactions among
the controlled variables and propose a cooperative game based formulation for
the factory of the future. The controlled variables are Absorbed Duty and Coil
Outlet Temperature.
of manipulated variables (fired duty, throughput and coil inlet temperature)
satisfying multiple criteria using a cooperative game theory approach. We
compare this approach with the standard multi-objective optimization using
NSGA-II and RNSGA-II algorithms.
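
To illustrate what a cooperative game formulation can look like here, the
sketch below applies Nash bargaining, one standard cooperative solution
concept, to pick an operating point; this is a generic illustration under
assumed toy objectives, not necessarily the paper's exact formulation:

```python
import numpy as np

def nash_bargaining(candidates, objectives, disagreement):
    # Pick the candidate maximizing the product of each objective's gain over
    # its disagreement (worst acceptable) value -- the Nash product -- among
    # individually rational candidates.
    best, best_score = None, -np.inf
    for x in candidates:
        gains = np.array([f(x) for f in objectives]) - disagreement
        if np.all(gains > 0):
            score = np.prod(gains)
            if score > best_score:
                best, best_score = x, score
    return best

# Toy example: one manipulated variable, two competing efficiency criteria.
candidates = np.linspace(0.0, 1.0, 101)
objectives = [lambda x: 1 - (x - 0.3) ** 2,   # stand-in for one criterion
              lambda x: 1 - (x - 0.7) ** 2]   # stand-in for a competing one
print(nash_bargaining(candidates, objectives, disagreement=np.array([0.5, 0.5])))
```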
|
One of the long-standing goals of quantum transport is to use the noise,
rather than the average current, for information processing. However, achieving
this requires on-demand control of quantum fluctuations in the electric
current. In this paper, we demonstrate theoretically that transport through a
molecular spin-valve provides access to many different statistics of electron
tunneling events. Simply by changing highly tunable parameters, such as
electrode spin-polarization, magnetization angle, and voltage, one is able to
switch between Poisson behavior, bunching and anti-bunching of electron
tunnelings, and positive and negative temporal correlations. The molecular
spin-valve is modeled by a single spin-degenerate molecular orbital with local
electronic repulsion coupled to two ferromagnetic leads with magnetization
orientations allowed to rotate relative to each other. The electron transport
is described via Born-Markov master equation and fluctuations are studied with
higher-order waiting time distributions. For highly magnetized parallel-aligned
electrodes, we find that strong positive temporal correlations emerge in the
voltage range where the second transport channel is partially open. These are
caused by a spin-induced electron-bunching, which does not manifest in the
stationary current alone.
|
Recent technological advancements have proliferated the use of small embedded
devices for collecting, processing, and transferring the security-critical
information. The Internet of Things (IoT) has enabled remote access and control
of these network-connected devices. Consequently, an attacker can exploit
security vulnerabilities and compromise these devices. In this context, the
secure boot becomes a useful security mechanism to verify the integrity and
authenticity of the software state of the devices. However, the current secure
boot schemes focus on detecting the presence of potential malware on the device
but not on disinfecting and restoring the software to a benign state. This
manuscript presents CARE, the first secure boot framework that provides
detection, resilience, and an onboard recovery mechanism for compromised
devices. The framework uses a prototype hybrid CARE: Code Authentication and
Resilience Engine to verify the software state and restore it to a benign
state. It uses Physical Memory Protection (PMP) and other security-enhancing
techniques of the RISC-V processor to provide resilience against modern
attacks. The state-of-the-art comparison and performance analysis results
indicate that the proposed secure boot framework provides a promising
resilience and recovery mechanism with very little (8%) performance and
resource overhead.
|
Rigid electron rotation of a fully penetrated Rotamak-FRC produces a pressure
flux function that is more peaked than the Solov'ev flux function. This paper
explores the implications of this peaked pressure flux function, including the
isothermal case, which appears when the temperature profile is broader than the
density profile, creating both benefits and challenges for a Rotamak-FRC based
fusion reactor. In this regime, the density distribution becomes very peaked,
enhancing the fusion power. The separatrix has a tendency to become oblate,
which can be mitigated by flux conserving current loops. Plasma extends outside
the separatrix, notably in the open field line region. This model does not
apply to very kinetic FRCs or FRCs in which there are significant ion flows,
but it may have some applicability to their outer layers.
|
We apply unsupervised learning techniques to classify the different phases of
the $J_1-J_2$ antiferromagnetic Ising model on the honeycomb lattice. We
construct the phase diagram of the system using convolutional autoencoders.
These neural networks can detect phase transitions in the system via `anomaly
detection', without the need for any label or a priori knowledge of the phases.
We present different ways of training these autoencoders and we evaluate them
to discriminate between distinct magnetic phases. In this process, we highlight
the case of high temperature or even random training data. Finally, we analyze
the capability of the autoencoder to detect the ground state degeneracy through
the reconstruction error.
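
A minimal sketch of the anomaly-detection idea (a generic convolutional
autoencoder in PyTorch; the architecture is an assumption, not the paper's):
train on configurations from one phase, then flag configurations with large
reconstruction error as belonging to a different phase.

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """Small convolutional autoencoder for L x L spin configurations (+-1)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, x):
        return self.dec(self.enc(x))

def reconstruction_error(model, configs):
    # Large error = configuration unlike the training phase ("anomaly").
    with torch.no_grad():
        return ((model(configs) - configs) ** 2).mean(dim=(1, 2, 3))

# Train on stand-in data (replace with Monte Carlo spin configurations).
model = ConvAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
train = torch.sign(torch.randn(64, 1, 16, 16))
for _ in range(10):
    opt.zero_grad()
    loss = ((model(train) - train) ** 2).mean()
    loss.backward()
    opt.step()
print(reconstruction_error(model, train[:4]))
```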
|
A key assumption in the theory of nonlinear adaptive control is that the
uncertainty of the system can be expressed in the linear span of a set of known
basis functions. While this assumption leads to efficient algorithms, it limits
applications to very specific classes of systems. We introduce a novel
nonparametric adaptive algorithm that learns an infinite-dimensional density
over parameters to cancel an unknown disturbance in a reproducing kernel
Hilbert space. Surprisingly, the resulting control input admits an analytical
expression that enables its implementation despite its underlying
infinite-dimensional structure. While this adaptive input is rich and
expressive -- subsuming, for example, traditional linear parameterizations --
its computational complexity grows linearly with time, making it comparatively
more expensive than its parametric counterparts. Leveraging the theory of
random Fourier features, we provide an efficient randomized implementation that
recovers the complexity of classical parametric methods while provably
retaining the expressivity of the nonparametric input. In particular, our
explicit bounds only depend polynomially on the underlying parameters of the
system, allowing our proposed algorithms to efficiently scale to
high-dimensional systems. As an illustration of the method, we demonstrate the
ability of the randomized approximation algorithm to learn a predictive model
of a 60-dimensional system consisting of ten point masses interacting through
Newtonian gravitation.
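
A minimal sketch of the random Fourier feature idea that underlies the
efficient implementation (the standard Rahimi-Recht construction for a Gaussian
kernel, shown on its own rather than inside the full adaptive controller):

```python
import numpy as np

def rff(X, num_features, sigma, rng):
    # Random Fourier features: z(x)^T z(y) approximates the Gaussian kernel
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2))  (Rahimi & Recht, 2007).
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
x, y = rng.normal(size=(1, 5)), rng.normal(size=(1, 5))
z = rff(np.vstack([x, y]), num_features=2000, sigma=1.0, rng=rng)
print(z[0] @ z[1])                          # feature-space inner product
print(np.exp(-np.sum((x - y) ** 2) / 2.0))  # exact kernel value, close to above
```

The key point for the complexity claim is that the feature dimension is fixed
in advance, so the control input no longer grows with the amount of data.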
|
Framing involves the positive or negative presentation of an argument or
issue depending on the audience and goal of the speaker (Entman 1983).
Differences in lexical framing, the focus of our work, can have large effects
on people's opinions and beliefs. To make progress towards reframing arguments
for positive effects, we create a dataset and method for this task. We use a
lexical resource for "connotations" to create a parallel corpus and propose a
method for argument reframing that combines controllable text generation
(positive connotation) with a post-decoding entailment component (same
denotation). Our results show that our method is effective compared to strong
baselines along the dimensions of fluency, meaning, and
trustworthiness/reduction of fear.
|
Recent observational missions have uncovered a significant number of compact
multi-exoplanet systems. The tight orbital spacing of these systems has led to
much effort being applied to the understanding of their stability; however, a
key limitation of the majority of these studies is the termination of
simulations as soon as the orbits of two planets cross. In this work we explore
the stability of compact, three-planet systems and continue our simulations all
the way to the first collision of planets to yield a better understanding of
the lifetime of these systems. We perform over $25,000$ integrations of a
Sun-like star orbited by three Earth-like secondaries for up to a billion
orbits to explore a wide parameter space of initial conditions in both the
co-planar and inclined cases, with a focus on the initial orbital spacing. We
calculate the probability of collision over time and determine the probability
of collision between specific pairs of planets. We find systems that persist
for over $10^8$ orbits after an orbital crossing and show how the
post-instability survival time of systems depends upon the initial orbital
separation, mutual inclination, planetary radius, and the closest encounter
experienced. Additionally, we examine the effects of very small changes in the
initial positions of the planets upon the time to collision and show the effect
that the choice of integrator can have upon simulation results. We generalise
our results throughout to show the behaviour of systems with an inner planet
initially located at both $1$ AU and $0.25$ AU.
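
A minimal sketch of this type of integration using the REBOUND package (an
illustrative setup with assumed masses, spacings, and integrator choice, not
the paper's actual configuration):

```python
import rebound

sim = rebound.Simulation()
sim.units = ("yr", "AU", "Msun")
sim.add(m=1.0)                          # Sun-like star
for i in range(3):                      # three Earth-like secondaries
    sim.add(m=3e-6, a=1.0 + 0.05 * i,   # assumed tight spacing
            r=4.26e-5,                  # ~Earth radius in AU
            f=2.0 * i)                  # spread initial true anomalies
sim.move_to_com()
sim.integrator = "ias15"                # integrator choice matters (see text)
sim.collision = "direct"
sim.collision_resolve = "halt"          # stop at the first collision

try:
    sim.integrate(1e6)                  # integrate up to 10^6 yr
except rebound.Collision:
    print(f"first collision at t = {sim.t:.1f} yr")
```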
|
Visual information extraction (VIE) has attracted considerable attention
recently owing to its various advanced applications such as document
understanding, automatic marking and intelligent education. Most existing works
decoupled this problem into several independent sub-tasks of text spotting
(text detection and recognition) and information extraction, which completely
ignored the high correlation among them during optimization. In this paper, we
propose a robust visual information extraction system (VIES) towards real-world
scenarios, which is a unified end-to-end trainable framework for simultaneous
text detection, recognition and information extraction by taking a single
document image as input and outputting the structured information.
Specifically, the information extraction branch collects abundant visual and
semantic representations from text spotting for multimodal feature fusion and
conversely, provides higher-level semantic clues to contribute to the
optimization of text spotting. Moreover, regarding the shortage of public
benchmarks, we construct a fully-annotated dataset called EPHOIE
(https://github.com/HCIILAB/EPHOIE), which is the first Chinese benchmark for
both text spotting and visual information extraction. EPHOIE consists of 1,494
images of examination paper head with complex layouts and background, including
a total of 15,771 Chinese handwritten or printed text instances. Compared with
the state-of-the-art methods, our VIES shows significant superior performance
on the EPHOIE dataset and achieves a 9.01% F-score gain on the widely used
SROIE dataset under the end-to-end scenario.
|
We prove a Liouville type theorem for the linearly perturbed Paneitz
equation: For $\epsilon>0$ small enough, if $u_\epsilon$ is a positive smooth
solution of
$$P_{S^3} u_\epsilon+\epsilon u_\epsilon=-u_\epsilon^{-7} \qquad \text{on } S^3,$$ where $P_{S^3}$ is the Paneitz operator of the round
metric $g_{S^3}$, then $u_\epsilon$ is constant. This confirms a conjecture
proposed by Fengbo Hang and Paul Yang in [Int. Math. Res. Not. IMRN, 2020 (11)].
|
This paper proposes an $SE_2(3)$ based extended Kalman filtering (EKF)
framework for the inertial-integrated state estimation problem. The error
representation using the straight difference of two vectors in the inertial
navigation system may not be reasonable as it does not take the direction
difference into consideration.
Therefore, we choose to use the $SE_2(3)$ matrix Lie group to represent the
state of the inertial-integrated navigation system which consequently leads to
the common frame error representation.
With the new velocity and position error definitions, we leverage the group
affine dynamics with the autonomous error properties and derive the error-state
differential equations for inertial-integrated navigation on the
north-east-down (NED) navigation frame and the earth-centered earth-fixed
(ECEF) frame, respectively; the corresponding EKF, termed the $SE_2(3)$ based
EKF, is also derived. It provides a new perspective on the geometric EKF with a
more sophisticated formula for the inertial-integrated navigation system.
Furthermore, we design two new modified error dynamics on the NED frame and the
ECEF frame respectively by introducing new auxiliary vectors. Finally, the
equivalence of the left-invariant EKF and the left $SE_2(3)$ based EKF is shown
in the navigation frame and the ECEF frame.
|
Fine-tuning pre-trained cross-lingual language models can transfer
task-specific supervision from one language to the others. In this work, we
propose to improve cross-lingual fine-tuning with consistency regularization.
Specifically, we use example consistency regularization to penalize the
prediction sensitivity to four types of data augmentations, i.e., subword
sampling, Gaussian noise, code-switch substitution, and machine translation. In
addition, we employ model consistency to regularize the models trained with two
augmented versions of the same training set. Experimental results on the XTREME
benchmark show that our method significantly improves cross-lingual fine-tuning
across various tasks, including text classification, question answering, and
sequence labeling.
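
A minimal sketch of the example-consistency idea (a generic symmetric-KL
consistency penalty in PyTorch; the augmentation pipeline and the weighting
`lam` are stand-ins, not the paper's exact recipe):

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_orig, logits_aug):
    # Symmetric KL between the predictions on an example and on its augmented
    # version (subword sampling, Gaussian noise, code-switch, or translation).
    p = F.log_softmax(logits_orig, dim=-1)
    q = F.log_softmax(logits_aug, dim=-1)
    return 0.5 * (F.kl_div(q, p, log_target=True, reduction="batchmean") +
                  F.kl_div(p, q, log_target=True, reduction="batchmean"))

def total_loss(model, batch, batch_aug, labels, lam=1.0):
    # Task loss on labeled (source-language) data plus the consistency penalty.
    logits, logits_aug = model(batch), model(batch_aug)
    return F.cross_entropy(logits, labels) + lam * consistency_loss(logits, logits_aug)
```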
|
Image denoising is of great importance for medical imaging system, since it
can improve image quality for disease diagnosis and downstream image analyses.
In a variety of applications, dynamic imaging techniques are utilized to
capture the time-varying features of the subject, where multiple images are
acquired for the same subject at different time points. Although
signal-to-noise ratio of each time frame is usually limited by the short
acquisition time, the correlation among different time frames can be exploited
to improve denoising results with shared information across time frames. With
the success of neural networks in computer vision, supervised deep learning
methods show prominent performance in single-image denoising, which rely on
large datasets with clean-vs-noisy image pairs. Recently, several
self-supervised deep denoising models have been proposed, achieving promising
results without needing the pairwise ground truth of clean images. In the field
of multi-image denoising, however, very few works have been done on extracting
correlated information from multiple slices for denoising using self-supervised
deep learning methods. In this work, we propose Deformed2Self, an end-to-end
self-supervised deep learning framework for dynamic imaging denoising. It
combines single-image and multi-image denoising to improve image quality and
use a spatial transformer network to model motion between different slices.
Further, it only requires a single noisy image with a few auxiliary
observations at different time frames for training and inference. Evaluations
on phantom and in vivo data with different noise statistics show that our
method has comparable performance to other state-of-the-art unsupervised or
self-supervised denoising methods and outperforms them under high noise levels.
|
In an intelligent transportation system, the key problem of traffic
forecasting is how to extract the periodic temporal dependencies and complex
spatial correlation. Current state-of-the-art methods for traffic flow
forecasting are based on graph architectures and sequence learning models, but
they do not fully exploit spatial-temporal dynamic information in the traffic
system. Specifically, the temporal dependence of the short-range is diluted by
recurrent neural networks, and the existing sequence model ignores local
spatial information because the convolution operation uses global average
pooling. Besides, traffic accidents occurring as objects move through the
network cause real-world congestion that triggers increased prediction
deviation. To overcome these challenges, we propose
Spatial-Temporal Conv-sequence Learning (STCL), in which a focused temporal
block uses unidirectional convolution to effectively capture short-term
periodic temporal dependence, and a spatial-temporal fusion module is able to
extract the dependencies of both interactions and decrease the feature
dimensions. Moreover, since accident features impact local traffic
congestion, position encoding is employed to detect anomalies in complex
traffic situations. We conduct a large number of experiments on real-world
tasks and verify the effectiveness of our proposed method.
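
For reference, a standard sinusoidal position encoding of the kind commonly
added to convolutional sequence models (the Transformer formulation; the exact
variant used by STCL may differ):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # PE[t, 2i] = sin(t / 10000^(2i/d)), PE[t, 2i+1] = cos(t / 10000^(2i/d)).
    # Each time step gets a unique, smoothly varying signature that the model
    # can use to localize events such as accident-induced anomalies.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(0, d_model, 2)[None, :]
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

print(positional_encoding(seq_len=4, d_model=8).round(3))
```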
|
In the segmentation of fine-scale structures from natural and biomedical
images, per-pixel accuracy is not the only metric of concern. Topological
correctness, such as vessel connectivity and membrane closure, is crucial for
downstream analysis tasks. In this paper, we propose a new approach to train
deep image segmentation networks for better topological accuracy. In
particular, leveraging the power of discrete Morse theory (DMT), we identify
global structures, including 1D skeletons and 2D patches, which are important
for topological accuracy. Trained with a novel loss based on these global
structures, the network performance is significantly improved especially near
topologically challenging locations (such as weak spots of connections and
membranes). On diverse datasets, our method achieves superior performance on
both the DICE score and topological metrics.
|
This paper presents a one-sided immersed boundary (IB) method using kernel
functions constructed via a moving least squares (MLS) method. The resulting
kernels effectively couple structural degrees of freedom to fluid variables on
only one side of the fluid-structure interface. This reduces spurious feedback
forcing and internal flows that are typically observed in IB models that use
isotropic kernel functions to couple the structure to fluid degrees of freedom
on both sides of the interface. The method developed here extends the original
MLS methodology introduced by Vanella and Balaras (J Comput Phys, 2009). Prior
IB/MLS methods have used isotropic kernel functions that coupled fluid
variables on both sides of the boundary to the interfacial degrees of freedom.
The original IB/MLS approach converts the cubic spline weights typically
employed in MLS reconstruction into an IB kernel function that satisfies
particular discrete moment conditions. This paper shows that the same approach
can be used to construct one-sided kernel functions (kernel functions are
referred to as generating functions in the MLS literature). We also examine the
performance of the new approach for a family of kernel functions introduced by
Peskin. It is demonstrated that the one-sided MLS construction tends to
generate non-monotone interpolation kernels with large over- and undershoots.
We present two simple weight shifting strategies to construct generating
functions that are positive and monotone, which enhances the stability of the
resulting IB methodology. Benchmark cases are used to test the order of
accuracy and verify the one-sided IB/MLS simulations in both two and three
spatial dimensions. This new IB/MLS method is also used to simulate flow over
the Ahmed car model, which highlights the applicability of this methodology for
modeling complex engineering flows.
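
A minimal one-dimensional sketch of the MLS kernel construction described above
(a generic implementation with a linear basis and cubic spline weight,
following the general recipe rather than the paper's code; the one-sided
variant simply restricts the support points to one side of the interface):

```python
import numpy as np

def cubic_spline_weight(r):
    # Standard cubic B-spline weight with support |r| <= 2 (r in grid units).
    r = abs(r)
    if r <= 1.0:
        return 2.0 / 3.0 - r**2 + 0.5 * r**3
    if r <= 2.0:
        return (2.0 - r) ** 3 / 6.0
    return 0.0

def mls_kernel(X, pts, h):
    # MLS shape functions with linear basis p(x) = [1, x]:
    #   phi_i(X) = w_i p(x_i)^T A^{-1} p(X),  A = sum_i w_i p(x_i) p(x_i)^T,
    # which satisfy the discrete moment conditions sum_i phi_i = 1 and
    # sum_i phi_i x_i = X regardless of which side of X the points lie on.
    w = np.array([cubic_spline_weight((X - x) / h) for x in pts])
    P = np.stack([np.ones_like(pts), pts], axis=1)
    A = P.T @ (w[:, None] * P)
    return w * (P @ np.linalg.solve(A, np.array([1.0, X])))

h, X = 0.1, 0.33
grid = np.arange(0.0, 1.0 + h / 2, h)
both = grid[np.abs(grid - X) < 2 * h]   # isotropic (two-sided) support
one_sided = both[both >= X]             # one-sided support
for pts in (both, one_sided):
    phi = mls_kernel(X, pts, h)
    print(phi.sum(), phi @ pts)         # ~1.0 and ~0.33: moments still hold
```

With one-sided points the shape functions must extrapolate across the
interface, which is precisely why they become non-monotone with over- and
undershoots, motivating the weight-shifting strategies discussed above.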
|
Most of the existing spoken language understanding systems can perform only
semantic frame parsing based on a single-round user query. They cannot take
users' feedback to update/add/remove slot values through multiround
interactions with users. In this paper, we introduce a novel multi-step spoken
language understanding system based on adversarial learning that can leverage
multiround user feedback to update slot values. We perform two
experiments on the benchmark ATIS dataset and demonstrate that the new system
can improve parsing performance by at least $2.5\%$ in terms of F1, with only
one round of feedback. The improvement becomes even larger when the number of
feedback rounds increases. Furthermore, we also compare the new system with
state-of-the-art dialogue state tracking systems and demonstrate that the new
interactive system can perform better on multiround spoken language
understanding tasks in terms of slot- and sentence-level accuracy.
|
While compressing a colloidal state by optical means alone has been
previously achieved through a specific time-dependence of the trap stiffness,
realizing quickly the reverse transformation stumbles upon the necessity of a
transiently expulsive trap. To circumvent this difficulty, we propose to drive
the colloids by a combination of optical trapping and diffusiophoretic forces,
both time-dependent. Forcing via diffusiophoresis is enforced by controlling
the salt concentration at the boundary of the domain where the colloids are
confined. The method takes advantage of the separation of time scales between
salt and colloidal dynamics, and realizes a fast decompression in an optical
trap that remains confining at all times. We thereby obtain a so-called
shortcut to adiabaticity protocol where colloidal dynamics, enslaved to salt
dynamics, can nevertheless be controlled as desired.
|
Classical scaling relationships for rheological quantities such as the
$\mu(J)$-rheology have become increasingly popular for closures of two-phase
flow modeling. However, these frameworks have been derived for monodisperse
particles. We aim to extend these considerations to sediment transport modeling
by using a more realistic sediment composition. We investigate the rheological
behavior of sheared sediment beds composed of polydisperse spherical particles
in a laminar Couette-type shear flow. The sediment beds consist of particles
with a diameter size ratio of up to ten, which corresponds to grains ranging
from fine to coarse sand. The data was generated using fully coupled, grain
resolved direct numerical simulations using a combined lattice Boltzmann -
discrete element method. These highly-resolved data yield detailed
depth-resolved profiles of the relevant physical quantities that determine the
rheology, i.e., the local shear rate of the fluid, particle volume fraction,
total shear, and granular pressure. A comparison against experimental data
shows excellent agreement for the monodisperse case. We improve upon the
parameterization of the $\mu(J)$-rheology by expressing its empirically derived
parameters as a function of the maximum particle volume fraction. Furthermore,
we extend these considerations by exploring the creeping regime for viscous
numbers much lower than used by previous studies to calibrate these
correlations. Considering the low viscous numbers of our data, we found that
the friction coefficient governing the quasi-static state in the creeping
regime tends to a finite value for vanishing shear, which decreases the
critical friction coefficient by a factor of three for all cases investigated.
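
For context, one widely used form of the $\mu(J)$-rheology for dense
suspensions is the correlation of Boyer et al. (2011) (the paper's modified
parameterization re-expresses such empirical constants through the maximum
particle volume fraction $\phi_m$):
\[
\mu(J) \;=\; \mu_1 \;+\; \frac{\mu_2-\mu_1}{1+J_0/J} \;+\; J \;+\; \frac{5}{2}\,\phi_m\, J^{1/2},
\qquad J=\frac{\eta_f\dot\gamma}{P},
\]
where $\eta_f$ is the fluid viscosity, $\dot\gamma$ the local shear rate, $P$
the granular pressure, and $\mu_1$, $\mu_2$, $J_0$ are empirically fitted
constants.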
|
There has been increasing attention to semi-supervised learning (SSL)
approaches in machine learning for forming a classifier in situations where the
training data consists of a limited number of classified
observations but a much larger number of unclassified observations. This is
because the procurement of classified data can be quite costly due to high
acquisition costs and subsequent financial, time, and ethical issues that can
arise in attempts to provide the true class labels for the unclassified data
that have been acquired. We provide here a review of statistical SSL approaches
to this problem, focussing on the recent result that a classifier formed from a
partially classified sample can actually have a smaller expected error rate than
if the sample were completely classified.
|
We study the effects caused by Rashba and Dresselhaus spin-orbit coupling
on the thermoelectric transport properties of a single-electron transistor,
viz., a quantum dot connected to one-dimensional leads. Using linear response
theory and employing the numerical renormalization group method, we calculate
the thermopower, electrical and thermal conductances, dimensionless
thermoelectric figure of merit, and study the Wiedemann-Franz law, showing
their temperature maps. Our results for all those properties indicate that
spin-orbit coupling drives the system into the Kondo regime. We show that the
thermoelectric transport properties, in the presence of spin-orbit coupling,
obey the expected universality of the Kondo strong coupling fixed point. In
addition, our results show a notable increase in the thermoelectric figure of
merit, caused by the spin-orbit coupling in the one-dimensional quantum dot
leads.
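For orientation, the reported quantities combine through standard definitions (stated here for context, not code from this work): the dimensionless figure of merit is $ZT = S^2 G T/\kappa$, and the Wiedemann-Franz law states $\kappa/(GT)\approx L_0$.

    import numpy as np

    L0 = (np.pi**2 / 3) * (1.380649e-23 / 1.602176634e-19)**2  # ~2.44e-8 W*Ohm/K^2

    def figure_of_merit(S, G, K, T):
        # ZT = S^2 G T / K with thermopower S, electrical conductance G,
        # thermal conductance K, and temperature T
        return S**2 * G * T / K

    def wiedemann_franz_ratio(K, G, T):
        # K / (G T L0); deviations from 1 flag violations of the law
        return K / (G * T * L0)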
|
In the class of reduced Abelian torsion-free groups $G$ of finite rank, we
describe TI-groups, i.e., groups on which every associative ring is filial. If
every associative multiplication on $G$ is the zero multiplication, then $G$ is
called a $nil_a$-group. It is proved that a reduced Abelian torsion-free group
$G$ of finite rank is a $TI$-group if and only if $G$ is a homogeneous Murley
group or $G$ is a $nil_a$-group. We also study the interrelations between the
class of homogeneous Murley groups and the class of $nil_a$-groups. For any
type $t\ne (\infty,\infty,\ldots)$ and every integer $n>1$, there exist
$2^{\aleph_0}$ pairwise non-quasi-isomorphic homogeneous Murley groups of type
$t$ and rank $n$ which are $nil_a$-groups. We describe types $t$ such that
there exists a homogeneous Murley group of type $t$ which is not a
$nil_a$-group. This paper will be published in Beitr\"{a}ge zur Algebra und
Geometrie / Contributions to Algebra and Geometry.
|
The present paper is devoted to an algebraic treatment of the joint spectral
theory within the framework of Noetherian modules over an algebra finite
extension of an algebraically closed field. We prove the spectral mapping
theorem and analyze the index of tuples in purely algebraic case. The index
function over tuples from the coordinate ring of a variety is naturally
extended to a numerical Tor-polynomial. Based on Serre's multiplicity
formula, we deduce that the Tor-polynomial is just the Samuel polynomial of the
local algebra.
|
We address aspects of coarse-graining in classical Statistical Physics from
the viewpoint of the symplectic non-squeezing theorem. We make some comments
regarding the implications of the symplectic non-squeezing theorem for the
BBGKY hierarchy. We also see the cubic cells appearing in coarse-graining as a
direct consequence of the uniqueness of Hofer's metric on the group of
Hamiltonian diffeomorphisms of the phase space.
|
Next generation wireless networks are expected to be extremely complex due to
their massive heterogeneity in terms of the types of network architectures they
incorporate, the types and numbers of smart IoT devices they serve, and the
types of emerging applications they support. In such large-scale and
heterogeneous networks (HetNets), radio resource allocation and management
(RRAM) becomes one of the major challenges encountered during system design and
deployment. In this context, emerging Deep Reinforcement Learning (DRL)
techniques are expected to be one of the main enabling technologies to address
the RRAM in future wireless HetNets. In this paper, we conduct a systematic,
in-depth, and comprehensive survey of the applications of DRL techniques in
RRAM for next generation wireless networks. Towards this, we first overview the
existing traditional RRAM methods and identify their limitations that motivate
the use of DRL techniques in RRAM. Then, we provide a comprehensive review of
the most widely used DRL algorithms to address RRAM problems, including the
value- and policy-based algorithms. The advantages, limitations, and use-cases
for each algorithm are provided. We then conduct a comprehensive and in-depth
literature review and classify existing related works based on both the radio
resources they are addressing and the type of wireless networks they are
investigating. To this end, we carefully identify the types of DRL algorithms
utilized in each related work, the elements of these algorithms, and the main
findings of each related work. Finally, we highlight important open challenges
and provide insights into several future research directions in the context of
DRL-based RRAM. This survey is intentionally designed to guide and stimulate
more research endeavors towards building efficient and fine-grained DRL-based
RRAM schemes for future wireless networks.
|
Quantum Computing has been evolving in recent years. Although quantum
algorithms have shown performance superior to their classical counterparts,
quantum decoherence and the additional auxiliary qubits needed for
error-tolerance routines have been huge barriers to the efficient use of
quantum algorithms. These restrictions lead us to search for ways to minimize
algorithm costs, i.e., the number of quantum logic gates and the depth of the
circuit. To this end, quantum circuit synthesis and quantum circuit
optimization techniques are explored. We studied the viability of using
Projective Simulation, a reinforcement learning technique, to tackle the
problem of quantum circuit synthesis for noisy quantum computers with a
limited number of qubits. The agent had the task of creating quantum circuits
of up to 5 qubits to
generate GHZ states in the IBM Tenerife (IBM QX4) quantum processor. Our
simulations demonstrated that the agent had a good performance but its capacity
for learning new circuits decreased as the number of qubits increased.
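For concreteness, a minimal sketch (using Qiskit, assuming it is available) of the textbook 5-qubit GHZ construction that the agent must discover; on real hardware such as IBM QX4 the CNOTs must additionally respect the device coupling map:

    from qiskit import QuantumCircuit

    def ghz_circuit(n=5):
        qc = QuantumCircuit(n)
        qc.h(0)              # put qubit 0 into (|0> + |1>)/sqrt(2)
        for i in range(1, n):
            qc.cx(0, i)      # entangle the remaining qubits
        return qc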
|
In this era of Gaia and ALMA, dynamical stellar mass measurements provide
benchmarks that are independent of observations of stellar characteristics and
their uncertainties. These benchmarks can then be used to validate and improve
stellar evolutionary models, which can otherwise yield both imprecise and inaccurate
mass predictions for pre-main-sequence, low-mass stars. We present the
dynamical stellar masses derived from disks around three M-stars (FP Tau,
J0432+1827, and J1100-7619) using ALMA observations of $^{12}$CO (J=2--1) and
$^{13}$CO (J=2--1) emission. These are the first dynamical stellar mass
measurements for J0432+1827 and J1100-7619 and the most precise measurement for
FP Tau. Fiducial stellar evolutionary model tracks, which do not include any
treatment of magnetic activity, agree with the dynamical measurement of
J0432+1827 but underpredict the mass by $\sim$60\% for FP Tau and $\sim$80\%
for J1100-7619. Possible explanations for the underpredictions include
inaccurate assumptions of stellar effective temperature, undetected binarity
for J1100-7619, and that fiducial stellar evolutionary models are not complex
enough to represent these stars. In the former case, the stellar effective
temperatures would need to be increased by $\sim$40K to $\sim$340K to reconcile
the fiducial model predictions with the dynamically-measured masses. In the
latter case, we show that the dynamical masses can be reproduced using results
from stellar evolutionary models with starspots, which incorporate fractional
starspot coverage to represent the manifestation of magnetic activity. Folding
in low-mass M-stars from the literature and assuming that the stellar effective
temperatures are imprecise but accurate, we find tentative evidence of a
relationship between fractional starspot coverage and observed effective
temperature for these young, cool stars.
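For context, CO-based dynamical masses rest on the Keplerian rotation of the disk gas; a hedged sketch of the underlying relation with example numbers (not the measurements reported above):

    from astropy import units as u
    from astropy.constants import G

    def dynamical_mass(v_kep, r):
        # Keplerian rotation at radius r gives M* = v^2 r / G
        return (v_kep**2 * r / G).to(u.Msun)

    dynamical_mass(2.2 * u.km / u.s, 100 * u.au)  # ~0.55 Msun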
|
Understanding the physical processes involved in interfacial heat transfer is
critical for the interpretation of thermometric measurements and the
optimization of heat dissipation in nanoelectronic devices that are based on
transition metal dichalcogenide (TMD) semiconductors. We model the phononic and
electronic contributions to the thermal boundary conductance (TBC) variability
for the MoS$_{2}$-SiO$_{2}$ and WS$_{2}$-SiO$_{2}$ interface. A
phenomenological theory to model diffuse phonon transport at disordered
interfaces is introduced and yields $G$=13.5 and 12.4 MW/K/m$^{2}$ at 300 K for
the MoS$_{2}$-SiO$_{2}$ and WS$_{2}$-SiO$_{2}$ interfaces, respectively. We
compare its predictions to those of the coherent phonon model and find that the
former fits the MoS$_{2}$-SiO$_{2}$ data from experiments and simulations
significantly better. Our analysis suggests that heat dissipation at the
TMD-SiO$_{2}$ interface is dominated by phonons scattered diffusely by the
rough interface although the electronic TBC contribution can be significant
even at low electron densities ($n\leq10^{12}$ cm$^{-2}$) and may explain some
of the variation in the experimental TBC data from the literature. The physical
insights from our study can be useful for the development of thermally aware
designs in TMD-based nanoelectronics.
|
Large-scale finite element simulations of complex physical systems governed
by partial differential equations crucially depend on adaptive mesh refinement
(AMR) to allocate computational budget to regions where higher resolution is
required. Existing scalable AMR methods make heuristic refinement decisions
based on instantaneous error estimation and thus do not aim for long-term
optimality over an entire simulation. We propose a novel formulation of AMR as
a Markov decision process and apply deep reinforcement learning (RL) to train
refinement policies directly from simulation. AMR poses a new problem for RL in
that both the state dimension and available action set change at every step,
which we solve by proposing new policy architectures with differing generality
and inductive bias. The model sizes of these policy architectures are
independent of the mesh size and hence scale to arbitrarily large and complex
simulations. We demonstrate in comprehensive experiments on static function
estimation and the advection of different fields that RL policies can be
competitive with a widely-used error estimator and generalize to larger, more
complex, and unseen test problems.
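To make the MDP formulation concrete, a toy sketch of our own choosing (1D piecewise-linear interpolation rather than the finite element setting used above): the action set contains one bisection choice per current cell, so it changes size at every step.

    import numpy as np

    class AMR1D:
        def __init__(self, f, n0=4, budget=32):
            self.f, self.budget = f, budget
            self.edges = np.linspace(0.0, 1.0, n0 + 1)

        def error(self):
            x = np.linspace(0.0, 1.0, 1001)
            return np.abs(self.f(x) - np.interp(x, self.edges, self.f(self.edges))).max()

        def step(self, cell):
            # action = index of the cell to bisect; reward = error reduction
            mid = 0.5 * (self.edges[cell] + self.edges[cell + 1])
            before = self.error()
            self.edges = np.insert(self.edges, cell + 1, mid)
            done = len(self.edges) - 1 >= self.budget
            return self.edges.copy(), before - self.error(), done

    env = AMR1D(lambda x: np.tanh(20 * (x - 0.3)))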
|
We investigate wave-vortex interaction emerging from an expanding compact
vortex cluster in a two-dimensional Bose-Einstein condensate. We adapt
techniques developed for compact gravitational objects to derive the
characteristic modes of the wave-vortex interaction perturbatively around an
effective vortex flow field. We demonstrate the existence of orbits, or
sound-rings, in analogy to gravitational light-rings, and compute the
characteristic spectrum for the out-of-equilibrium vortex cluster. The spectrum
obtained from numerical simulations of a stochastic Gross-Pitaevskii equation
exhibiting an expanding vortex cluster is in excellent agreement with
analytical predictions. Our findings are relevant for 2D quantum turbulence,
the semi-classical limit around fluid flows, and rotating compact objects
exhibiting discrete circulation.
|
A $C^*$-algebra satisfies the Universal Coefficient Theorem (UCT) of
Rosenberg and Schochet if it is equivalent in Kasparov's $KK$-theory to a
commutative $C^*$-algebra. This paper is motivated by the problem of
establishing the range of validity of the UCT, and in particular, whether the
UCT holds for all nuclear $C^*$-algebras.
We introduce the idea of a $C^*$-algebra that "decomposes" over a class
$\mathcal{C}$ of $C^*$-algebras. Roughly, this means that locally, there are
approximately central elements that approximately cut the $C^*$-algebra into
two $C^*$-subalgebras from $\mathcal{C}$ that have well-behaved intersection.
We show that if a $C^*$-algebra decomposes over the class of nuclear, UCT
$C^*$-algebras, then it satisfies the UCT. The argument is based on controlled
$KK$-theory, as introduced by the authors in earlier work. Nuclearity is used
via Kasparov's Hilbert module version of Voiculescu's theorem, and Haagerup's
theorem that nuclear $C^*$-algebras are amenable.
We say that a $C^*$-algebra has finite complexity if it is in the smallest
class of $C^*$-algebras containing the finite-dimensional $C^*$-algebras, and
closed under decomposability; our main result implies that all $C^*$-algebras
in this class satisfy the UCT. The class of $C^*$-algebras with finite
complexity is large, and comes with an ordinal-number invariant measuring the
complexity level. We conjecture that a $C^*$-algebra of finite nuclear
dimension and real rank zero has finite complexity; this (and several other
related conjectures) would imply the UCT for all separable nuclear
$C^*$-algebras. We also give new local formulations of the UCT, and some other
necessary and sufficient conditions for the UCT to hold for all nuclear
$C^*$-algebras.
|
How to effectively remove the noise while preserving the image structure
features is a challenging issue in the field of image denoising. In recent
years, fractional PDE based methods have attracted more and more research
efforts due to the ability to balance the noise removal and the preservation of
image edges and textures. Among the existing fractional PDE algorithms, there
are only a few using spatial fractional order derivatives, and all the
fractional derivatives involved are one-sided derivatives. In this paper, an
efficient feature-preserving fractional PDE algorithm is proposed for image
denoising based on a nonlinear spatial-fractional anisotropic diffusion
equation. Two-sided Gr\"unwald-Letnikov fractional derivatives are used in the
PDE model, which are well suited to depicting the local self-similarity of images. The
Short Memory Principle is employed to simplify the approximation scheme.
Experimental results show that the proposed method performs satisfactorily,
i.e., it strikes a remarkable balance between noise removal and feature
preservation, and has extremely high structural retention.
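A hedged sketch of the left-sided Gr\"unwald-Letnikov approximation with the short-memory truncation (the scheme above is two-sided; this one-sided version only shows the mechanics):

    import numpy as np
    from scipy.special import binom

    def gl_derivative(f, alpha, h, memory=50):
        # D^alpha f(x_i) ~ h**(-alpha) * sum_k (-1)**k * C(alpha, k) * f(x_{i-k}),
        # truncated to `memory` terms (Short Memory Principle)
        f = np.asarray(f, dtype=float)
        k = np.arange(memory)
        w = (-1.0) ** k * binom(alpha, k)
        out = np.zeros_like(f)
        for i in range(len(f)):
            m = min(memory, i + 1)
            out[i] = np.dot(w[:m], f[i - np.arange(m)]) / h**alpha
        return out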
|
The electronic and superconducting properties of Fe$_{1-x}$Se single-crystal
flakes grown hydrothermally are studied by the transport measurements under
zero and high magnetic fields up to 38.5 T. The results contrast sharply with
those previously reported for nematically ordered FeSe by
chemical-vapor-transport (CVT) growth. No signature of the electronic
nematicity, but an evident metal-to-nonmetal crossover with increasing
temperature, is detected in the normal state of the present hydrothermal
samples. Interestingly, a higher superconducting critical temperature Tc of
13.2 K is observed compared to a suppressed Tc of 9 K in the presence of the
nematicity in the CVT FeSe. Moreover, the upper critical field in the
zero-temperature limit is found to be isotropic with respect to the field
direction and to reach a higher value of ~42 T, which breaks the Pauli limit by
a factor of 1.8.
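As a quick arithmetic check of the quoted violation, using the standard weak-coupling (Clogston-Chandrasekhar) estimate of the Pauli limit, $\mu_0 H_{\rm P} \approx 1.84\,T_c$ in tesla per kelvin:

    Tc = 13.2              # K, critical temperature reported above
    Hp = 1.84 * Tc         # T, Pauli limit estimate (~24 T)
    print(42 / Hp)         # ~1.7, consistent with the quoted factor of ~1.8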
|
We establish limit laws for the distribution in small intervals of the roots
of the quadratic congruence $\mu^2 \equiv D \bmod m$, with $D > 0$ square-free
and $D\not\equiv 1 \bmod 4$. This is achieved by translating the problem to
convergence of certain geodesic random line processes in the hyperbolic plane.
This geometric interpretation allows us in particular to derive an explicit
expression for the pair correlation density of the roots.
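A brute-force sketch of the point set whose fine-scale statistics are at stake (direct search, suitable only for small moduli):

    def roots(D, m):
        # all solutions of mu**2 = D (mod m)
        return [mu for mu in range(m) if (mu * mu - D) % m == 0]

    D = 2  # square-free and not congruent to 1 mod 4, as assumed above
    points = sorted(mu / m for m in range(2, 3000) for mu in roots(D, m))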
|
Cathodes are critical components of rechargeable batteries. Conventionally,
the search for cathode materials relies on experimental trial-and-error and a
traversing of existing computational/experimental databases. While these
methods have led to the discovery of several commercially-viable cathode
materials, the chemical space explored so far is limited and many phases will
have been overlooked, in particular those that are metastable. We describe a
computational framework for battery cathode exploration, based on ab initio
random structure searching (AIRSS), an approach that samples local minima on
the potential energy surface to identify new crystal structures. We show that,
by delimiting the search space using a number of constraints, including
chemically aware minimum interatomic separations, cell volumes, and space group
symmetries, AIRSS can efficiently predict both thermodynamically stable and
metastable cathode materials. Specifically, we investigate LiCoO$_2$, LiFePO$_4$, and
Li$_x$Cu$_y$F$_z$ to demonstrate the efficiency of the method by rediscovering the known
crystal structures of these cathode materials. The effect of parameters, such
as minimum separations and symmetries, on the efficiency of the sampling is
discussed in detail. The adaptation of the minimum interatomic distances, on a
species-pair basis, from low-energy optimized structures to efficiently capture
the local coordination environment of atoms, is explored. A family of novel
cathode materials based on transition-metal oxalates is proposed. They
demonstrate superb energy density, oxygen-redox stability, and lithium
diffusion properties. This article serves both as an introduction to the
computational framework, and as a guide to battery cathode material discovery
using AIRSS.
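A hedged sketch of the core sampling step, random positions filtered by a minimum-separation constraint; the actual AIRSS code additionally handles periodic images, space-group symmetries, and species-pair-resolved separations:

    import numpy as np

    def random_structure(n_atoms, cell, min_sep, max_tries=1000):
        for _ in range(max_tries):
            pos = np.random.rand(n_atoms, 3) @ cell   # fractional -> Cartesian
            d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
            d[np.diag_indices(n_atoms)] = np.inf
            if d.min() >= min_sep:                    # images ignored in this sketch
                return pos
        raise RuntimeError("cell too small for the requested separation")

    positions = random_structure(8, 5.0 * np.eye(3), min_sep=1.6)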
|
For a given Finsler-Minkowski norm $\mathcal{F}$ in $\mathbb{R}^N$ and a
bounded smooth domain $\Omega\subset\mathbb{R}^N$ $\big(N\geq 2\big)$, we
establish the following weighted anisotropic Sobolev inequality $$
S\left(\int_{\Omega}|u|^q
f\,dx\right)^\frac{1}{q}\leq\left(\int_{\Omega}\mathcal{F}(\nabla u)^p
w\,dx\right)^\frac{1}{p},\quad\forall\,u\in
W_0^{1,p}(\Omega,w)\leqno{\mathcal{(P)}} $$ where $W_0^{1,p}(\Omega,w)$ is the
weighted Sobolev space under a class of $p$-admissible weights $w$ and $f$
is some nonnegative integrable function in $\Omega$. We discuss the case
$0<q<1$ and observe that $$ \mu(\Omega):=\inf_{u\in
W_{0}^{1,p}(\Omega,w)}\Bigg\{\int_{\Omega}\mathcal{F}(\nabla u)^p
w\,dx:\int_{\Omega}|u|^{q}f\,dx=1\Bigg\}\leqno{\mathcal{(Q)}} $$ is associated
with singular weighted anisotropic $p$-Laplace equations. To this end, we also
study existence and regularity properties of solutions for weighted anisotropic
$p$-Laplace equations under the mixed and exponential singularities.
|
Neural network (NN) training and generalization in the infinite-width limit
are well-characterized by kernel methods with a neural tangent kernel (NTK)
that is stationary in time. However, finite-width NNs consistently outperform
corresponding kernel methods, suggesting the importance of feature learning,
which manifests as the time evolution of NTKs. Here, we analyze the phenomenon
of kernel alignment of the NTK with the target functions during gradient
descent. We first provide a mechanistic explanation for why alignment between
task and kernel occurs in deep linear networks. We then show that this behavior
occurs more generally if one optimizes the feature map over time to accelerate
learning while constraining how quickly the features evolve. Empirically,
gradient descent undergoes a feature learning phase, during which top
eigenfunctions of the NTK quickly align with the target function and the loss
decreases faster than power law in time; it then enters a kernel gradient
descent (KGD) phase where the alignment does not improve significantly and the
training loss decreases in power law. We show that feature evolution is faster
and more dramatic in deeper networks. We also find that networks with multiple
output nodes develop separate, specialized kernels for each output channel, a
phenomenon we term kernel specialization. We show that this class-specific
alignment does not occur in linear networks.
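One common way to quantify the task-kernel alignment tracked above (a standard definition; the precise normalization used in the paper may differ):

    import numpy as np

    def kernel_alignment(K, y):
        # A = <K, y y^T>_F / (||K||_F * ||y y^T||_F), a value in [-1, 1]
        Y = np.outer(y, y)
        return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))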
|
Let $\mathcal{A}$ be a generalized q-Weyl algebra; it is generated by
$u,v,Z,Z^{-1}$ with relations $ZuZ^{-1}=q^2u$, $ZvZ^{-1}=q^{-2}v$,
$uv=P(q^{-1}Z)$, $vu=P(qZ)$, where $P$ is a Laurent polynomial. A Hermitian
form $(\cdot,\cdot)$ on $\mathcal{A}$ is called invariant if
$(Za,b)=(a,bZ^{-1})$, $(ua,b)=(a,sbv)$, $(va,b)=(a,s^{-1}bu)$ for some $s\in
\mathbb{C}$ with $|s|=1$ and all $a,b\in \mathcal{A}$. In this paper we
classify positive definite invariant Hermitian forms on generalized q-Weyl
algebras.
|
Forest roads in Romania are unique natural wildlife sites used for recreation
by countless tourists. In order to protect and maintain these roads, we propose
RovisLab AMTU (Autonomous Mobile Test Unit), which is a robotic system designed
to autonomously navigate off-road terrain and inspect if any deforestation or
damage occurred along the tracked route. AMTU's core component is its embedded
vision module, optimized for real-time environment perception. For achieving a
high computation speed, we use a learning system to train a multi-task Deep
Neural Network (DNN) for scene and instance segmentation of objects, while the
keypoints required for simultaneous localization and mapping are calculated
using a handcrafted FAST feature detector and the Lucas-Kanade tracking
algorithm. Both the DNN and the handcrafted backbone are run in parallel on the
GPU of an NVIDIA AGX Xavier board. We show experimental results on the test
track of our research facility.
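A hedged sketch of the handcrafted front-end described above, using OpenCV's FAST detector and pyramidal Lucas-Kanade tracker (parameter values are illustrative, not those tuned for AMTU):

    import cv2
    import numpy as np

    fast = cv2.FastFeatureDetector_create(threshold=25)

    def track(prev_gray, gray):
        kps = fast.detect(prev_gray, None)
        p0 = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
        p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
        good = status.reshape(-1) == 1
        return p0[good], p1[good]   # matched keypoints for the SLAM back-end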
|
Tokenization is fundamental to pretrained language models (PLMs). Existing
tokenization methods for Chinese PLMs typically treat each character as an
indivisible token. However, they ignore the unique feature of the Chinese
writing system where additional linguistic information exists below the
character level, i.e., at the sub-character level. To utilize such information,
we propose sub-character (SubChar for short) tokenization. Specifically, we
first encode the input text by converting each Chinese character into a short
sequence based on its glyph or pronunciation, and then construct the vocabulary
based on the encoded text with sub-word tokenization. Experimental results show
that SubChar tokenizers have two main advantages over existing tokenizers: 1)
They can tokenize inputs into much shorter sequences, thus improving the
computational efficiency. 2) Pronunciation-based SubChar tokenizers can encode
Chinese homophones into the same transliteration sequences and produce the same
tokenization output, hence being robust to all homophone typos. At the same
time, models trained with SubChar tokenizers perform competitively on
downstream tasks. We release our code at
https://github.com/thunlp/SubCharTokenization to facilitate future work.
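A toy illustration of the pronunciation-based encoding step: characters are transliterated before sub-word tokenization, so homophones collapse to identical sequences. The two-entry mapping below is a hypothetical stub; the released code derives it from real pronunciation data.

    # '\u4e2d' and '\u56fd' are the characters for 'zhong1 guo2' (China)
    PINYIN = {"\u4e2d": "zhong1", "\u56fd": "guo2"}

    def encode(text):
        return "".join(PINYIN.get(ch, ch) for ch in text)

    print(encode("\u4e2d\u56fd"))  # 'zhong1guo2'; BPE/unigram is applied next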
|
Space weather phenomena such as solar flares have massive destructive power
when they reach a certain magnitude. Such high-magnitude solar flare events
can interfere with space-Earth radio communications and neutralize space-Earth
electronic equipment. In the current study, we explore the deep learning
approach to building a solar flare forecasting model and examine its
limitations, along with its feature-extraction ability, based on the available
time-series data. For that purpose, we present a multi-layer 1D Convolutional
Neural Network (CNN) to forecast the occurrence probability of M- and X-class
solar flare events at time frames of 1, 3, 6, 12, 24, 48, 72, and 96 hours. In
order to train and evaluate the performance of the model, we utilised the
available Geostationary Operational Environmental Satellite (GOES) X-ray
time-series data, ranging between July 1998 and January 2019 and covering
almost the entirety of solar cycles 23 and 24. The forecasting model was
trained and evaluated in two different scenarios, (1) random selection and (2)
chronological selection, which were compared afterward. Moreover, we compare
our results to those considered state-of-the-art flare forecasting models,
both with similar approaches and different ones. The majority of the results
indicate that (1) chronological selection incurs a degradation of 3\% versus
random selection for the M-class model and an elevation of 2\% for the X-class
model; (2) when utilizing only X-ray time-series data, the suggested model
achieves high scores compared to other studies; (3) the suggested model,
combined with solely X-ray time-series data, fails to distinguish between
M-class and X-class solar flare events. All source code is available at
https://github.com/vladlanda/Low-Dimensional-Convolutional-Neural-Network-For-Solar-Flares-GOES-Time-Series-Classification
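For concreteness, a minimal PyTorch sketch of a multi-layer 1D CNN of this kind; layer counts and sizes are placeholders, not the architecture of the paper:

    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
        nn.Conv1d(16, 32, kernel_size=5), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
        nn.Linear(32, 1), nn.Sigmoid(),  # occurrence probability for one class/time frame
    )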
|
Powerful new, high resolution, high sensitivity, multi-frequency, wide-field
radio surveys such as the Australian Square Kilometre Array Pathfinder (ASKAP)
Evolutionary Map of the Universe (EMU) are emerging. They will offer fresh
opportunities to undertake new determinations of useful parameters for various
kinds of extended astrophysical phenomena. Here, we consider specific
application to angular size determinations of Planetary Nebulae (PNe) via a new
radio continuum Spectral Energy Distribution (SED) fitting technique. We show
that robust determinations of angular size can be obtained, comparable to the
best optical and radio observations but with the potential for consistent
application across the population. This includes unresolved and/or heavily
obscured PNe that are extremely faint or even non-detectable in the optical.
|
Plant reflectance spectra - the profile of light reflected by leaves across
different wavelengths - supply the spectral signature for a species at a
spatial location to enable estimation of functional and taxonomic diversity for
plants. We consider leaf spectra as "responses" to be explained spatially.
These spectra/reflectances are functions over a wavelength band that respond to
the environment.
Our motivating data are gathered for several families from the Cape Floristic
Region (CFR) in South Africa and lead us to develop rich novel spatial models
that can explain spectra for genera within families. Wavelength responses for
an individual leaf are viewed as a function of wavelength, leading to
functional data modeling. Local environmental features become covariates. We
introduce wavelength-covariate interaction since the response to
environmental regressors may vary with wavelength, as may the variance. Formal
spatial modeling enables prediction of reflectances for genera at unobserved
locations with known environmental features. We incorporate spatial dependence,
wavelength dependence, and space-wavelength interaction (in the spirit of
space-time interaction). We implement out-of-sample validation to select a best
model, discovering that the model features listed above are all informative for
the functional data analysis. We then supply interpretation of the results
under the selected model.
|
Using the Yebes 40m and IRAM 30m radiotelescopes, we detected two series of
harmonically related lines in space that can be fitted to a symmetric rotor.
The lines have been seen towards the cold dense cores TMC-1, L483, L1527, and
L1544. High level of theory ab initio calculations indicate that the best
possible candidate is the acetyl cation, CH3CO+, which is the most stable
product resulting from the protonation of ketene. We have produced this species
in the laboratory and observed its rotational transitions from $J_u$ = 10 up to
$J_u$ = 27. Hence, we report the discovery of CH3CO+ in space based on our
observations, theoretical calculations, and laboratory experiments. The derived
rotational and distortion constants allow us to predict the spectrum of CH3CO+
with high accuracy up to 500 GHz. We derive an abundance ratio
N(H2CCO)/N(CH3CO+) = 44. The high abundance of the protonated form of H2CCO is
due to the high proton affinity of the neutral species. The other isomer,
H2CCOH+, is found to be 178.9 kJ/mol above CH3CO+. The observed intensity ratio
between the K=0 and K=1 lines, 2.2, strongly suggests that the A and E symmetry
states have suffered interconversion processes due to collisions with H and/or
H2, or during their formation through the reaction of H3+ with H2CCO.
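For reference, harmonically related lines of a symmetric rotor in its K = 0 ladder follow the standard progression below; B and D here are placeholders, not the constants fitted in this work:

    B, D = 9100.0, 0.003   # MHz; hypothetical rotational and distortion constants

    def line_mhz(J_up):
        # transition J_up <- J_up - 1 (K = 0): nu = 2*B*J_up - 4*D*J_up**3
        return 2 * B * J_up - 4 * D * J_up**3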
|
Modular symmetry offers the possibility to provide an origin of discrete
flavour symmetry and to break it along particular symmetry preserving
directions without introducing flavons or driving fields. It is also possible
to use a weighton field to account for charged fermion mass hierarchies rather
than a Froggatt-Nielsen mechanism. Such an approach can be applied to flavoured
Grand Unified Theories (GUTs) which can be greatly simplified using modular
forms. As an example, we consider a modular version of a previously proposed
$S_4\times SU(5)$ GUT, with Gatto-Sartori-Tonin and Georgi-Jarlskog relations,
in which all flavons and driving fields are removed, with their effect replaced
by modular forms with moduli assumed to be at various fixed points, rendering
the theory much simpler. In the neutrino sector there are two right-handed
neutrinos constituting a Littlest Seesaw model satisfying Constrained
Sequential Dominance (CSD) where the two columns of the Dirac neutrino mass
matrix are proportional to $(0,1, -1)$ and $(1, n, 2-n)$ respectively, and
$n=1+\sqrt{6}\approx 3.45$ is prescribed by the modular symmetry, with
predictions subject to charged lepton mixing corrections. We perform a
numerical analysis, showing quark and lepton mass and mixing correlations
around the best fit points.
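A hedged numerical sketch of the Littlest Seesaw structure quoted above; the overall mass scales and the relative phase are free inputs here, not the best-fit values of the paper:

    import numpy as np

    n = 1 + np.sqrt(6)
    a = np.array([0.0, 1.0, -1.0])
    b = np.array([1.0, n, 2 - n])
    ma, mb, eta = 0.027, 0.003, 2 * np.pi / 3   # illustrative inputs (eV, rad)
    m_nu = ma * np.outer(a, a) + mb * np.exp(1j * eta) * np.outer(b, b)
    masses = np.sqrt(np.linalg.eigvalsh(m_nu @ m_nu.conj().T))
    # rank-2 matrix: the lightest mass vanishes, as expected with only
    # two right-handed neutrinos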
|
The security of code-based cryptography usually relies on the hardness of the
syndrome decoding (SD) problem for the Hamming weight. The best generic
algorithms are all improvements of an old algorithm by Prange, and they are
known under the name of Information Set Decoding (ISD) algorithms. This work
aims to extend ISD algorithms' scope by changing the underlying weight function
and alphabet size of SD. More precisely, we show how to use Wagner's algorithm
in the ISD framework to solve SD for a wide range of weight functions. We also
calculate the asymptotic complexities of ISD algorithms both in the classical
and quantum case. We then apply our results to the Lee metric, which currently
receives a significant amount of attention. By providing the parameters of SD
for which decoding in the Lee weight seems to be the hardest, our study could
have several applications for designing code-based cryptosystems and their
security analysis, especially against quantum adversaries.
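For reference, the weight function that replaces the Hamming weight in this setting:

    def lee_weight(vec, q):
        # Lee weight over Z_q: each coordinate contributes min(x, q - x)
        return sum(min(x % q, q - (x % q)) for x in vec)

    assert lee_weight([1, 3, 6], 7) == 5   # 1 + 3 + 1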
|
Identifying universal properties of non-equilibrium quantum states is a major
challenge in modern physics. A fascinating prediction is that classical
hydrodynamics emerges universally in the evolution of any interacting quantum
system. Here, we experimentally probe the quantum dynamics of 51 individually
controlled ions, realizing a long-range interacting spin chain. By measuring
space-time resolved correlation functions in an infinite temperature state, we
observe a whole family of hydrodynamic universality classes, ranging from
normal diffusion to anomalous superdiffusion, that are described by L\'evy
flights. We extract the transport coefficients of the hydrodynamic theory,
reflecting the microscopic properties of the system. Our observations
demonstrate the potential for engineered quantum systems to provide key
insights into universal properties of non-equilibrium states of quantum matter.
|
We report results from searches for anisotropic stochastic gravitational-wave
backgrounds using data from the first three observing runs of the Advanced LIGO
and Advanced Virgo detectors. For the first time, we include Virgo data in our
analysis and run our search with a new efficient pipeline called {\tt PyStoch}
on data folded over one sidereal day. We use gravitational-wave radiometry
(broadband and narrow band) to produce sky maps of stochastic
gravitational-wave backgrounds and to search for gravitational waves from point
sources. A spherical harmonic decomposition method is employed to look for
gravitational-wave emission from spatially-extended sources. Neither technique
found evidence of gravitational-wave signals. Hence we derive 95\%
confidence-level upper limit sky maps on the gravitational-wave energy flux
from broadband point sources, ranging from $F_{\alpha, \Theta} < {\rm (0.013 -
7.6)} \times 10^{-8} {\rm erg \, cm^{-2} \, s^{-1} \, Hz^{-1}},$ and on the
(normalized) gravitational-wave energy density spectrum from extended sources,
ranging from $\Omega_{\alpha, \Theta} < {\rm (0.57 - 9.3)} \times 10^{-9} \,
{\rm sr^{-1}}$, depending on direction ($\Theta$) and spectral index
($\alpha$). These limits improve upon previous limits by factors of $2.9 -
3.5$. We also set 95\% confidence level upper limits on the frequency-dependent
strain amplitudes of quasimonochromatic gravitational waves coming from three
interesting targets, Scorpius X-1, SN 1987A, and the Galactic Center, with best
upper limits of $h_0 < {\rm (1.7-2.1)} \times 10^{-25},$ a factor of
$\geq 2.0$ improvement compared to previous stochastic radiometer searches.
|
Advances in differentiable numerical integrators have enabled the use of
gradient descent techniques to learn ordinary differential equations (ODEs). In
the context of machine learning, differentiable solvers are central for Neural
ODEs (NODEs), a class of deep learning models with continuous depth, rather
than discrete layers. However, these integrators can be unsatisfactorily slow
and inaccurate when learning systems of ODEs from long sequences, or when
solutions of the system vary at widely different timescales in each dimension.
In this paper we propose an alternative approach to learning ODEs from data: we
represent the underlying ODE as a vector field that is related to another base
vector field by a differentiable bijection, modelled by an invertible neural
network. By restricting the base ODE to be amenable to integration, we can
drastically speed up and improve the robustness of integration. We demonstrate
the efficacy of our method in training and evaluating continuous neural
network models, as well as in learning benchmark ODE systems. We observe
improvements of up to two orders of magnitude when integrating learned ODEs
with GPU computation.
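A minimal sketch of the idea with a fixed analytic bijection standing in for the trained invertible network: the learned ODE's flow is the base flow conjugated by the bijection, so integration reduces to a cheap (here closed-form) base solve.

    import numpy as np

    def f(z):     return np.sinh(z)       # bijection R -> R (stand-in for an INN)
    def f_inv(x): return np.arcsinh(x)

    def solve(x0, t, lam=-1.0):
        # base ODE dz/dt = lam * z has the closed-form flow z0 * exp(lam * t)
        return f(f_inv(x0) * np.exp(lam * t))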
|
We examine the structure of global conformal multiplets in 2D celestial CFT.
For a 4D bulk theory containing massless particles of spin
$s=\{0,\frac{1}{2},1,\frac{3}{2},2\}$ we classify and construct all
SL(2,$\mathbb{C}$) primary descendants which are organized into 'celestial
diamonds'. This explicit construction is achieved using a wavefunction-based
approach that allows us to map 4D scattering amplitudes to celestial CFT
correlators of operators with SL(2,$\mathbb{C}$) conformal dimension $\Delta$
and spin $J$. Radiative conformal primary wavefunctions have $J=\pm s$ and give
rise to conformally soft theorems for special values of $\Delta \in
\frac{1}{2}\mathbb{Z}$. They are located either at the top of celestial
diamonds, where they descend to trivial null primaries, or at the left and
right corners, where they descend both to and from generalized conformal
primary wavefunctions which have $|J|\leq s$. Celestial diamonds naturally
incorporate degeneracies of opposite helicity particles via the 2D shadow
transform relating radiative primaries and account for the global and
asymptotic symmetries in gauge theory and gravity.
|
Many studies on machine learning (ML) for computer-aided diagnosis have so
far been mostly restricted to high-quality research data. Clinical data
warehouses, gathering routine examinations from hospitals, offer great promises
for training and validation of ML models in a realistic setting. However, the
use of such clinical data warehouses requires quality control (QC) tools.
Visual QC by experts is time-consuming and does not scale to large datasets. In
this paper, we propose a convolutional neural network (CNN) for the automatic
QC of 3D T1-weighted brain MRI for a large heterogeneous clinical data
warehouse. To that purpose, we used the data warehouse of the hospitals of the
Greater Paris area (Assistance Publique-H\^opitaux de Paris [AP-HP]).
Specifically, the objectives were: 1) to identify images which are not proper
T1-weighted brain MRIs; 2) to identify acquisitions for which gadolinium was
injected; 3) to rate the overall image quality. We used 5000 images for
training and validation and a separate set of 500 images for testing. In order
to train/validate the CNN, the data were annotated by two trained raters
according to a visual QC protocol that we specifically designed for application
in the setting of a data warehouse. For objectives 1 and 2, our approach
achieved excellent accuracy (balanced accuracy and F1-score \textgreater 90\%),
similar to the human raters. For objective 3, the performance was good but
substantially lower than that of human raters. Nevertheless, the automatic
approach accurately identified (balanced accuracy and F1-score \textgreater
80\%) low quality images, which would typically need to be excluded. Overall,
our approach shall be useful for exploiting hospital data warehouses in medical
image computing.
|
Although depth extraction with passive sensors has seen remarkable
improvement with deep learning, these approaches may fail to obtain correct
depth if they are exposed to environments not observed during training. Online
adaptation, where the neural network trains while deployed, with unsupervised
learning provides a convenient solution. However, online adaptation causes a
neural network to forget the past. Thus, past training is wasted and the
network is not able to provide good results if it observes past scenes. This
work deals with practical online-adaptation where the input is online and
temporally-correlated, and training is completely unsupervised. Regularization
and replay-based methods without task boundaries are proposed to avoid
catastrophic forgetting while adapting to online data. Experiments are
performed on different datasets with both structure-from-motion and stereo.
Results of forgetting as well as adaptation are provided, which are superior to
recent methods. The proposed approach is more in line with the artificial
general intelligence paradigm as the neural network learns the scene where it
is deployed without any supervision (target labels and tasks) and without
forgetting about the past. Code is available at github.com/umarKarim/cou_stereo
and github.com/umarKarim/cou_sfm.
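A generic sketch of the replay component, reservoir sampling without task boundaries (names are illustrative, not the repositories' classes):

    import random

    class ReplayBuffer:
        def __init__(self, capacity=1000):
            self.data, self.capacity, self.seen = [], capacity, 0

        def add(self, sample):
            # reservoir sampling keeps a uniform subsample of the stream
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append(sample)
            elif random.random() < self.capacity / self.seen:
                self.data[random.randrange(self.capacity)] = sample

        def sample(self, k):
            return random.sample(self.data, min(k, len(self.data)))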
|
Consider the setting where there are B>1 candidate statistical models, and
one is interested in model selection. Two common approaches to solve this
problem are to select a single model or to combine the candidate models through
model averaging. Instead, we select a subset of the combined parameter space
associated with the models. Specifically, a model averaging perspective is used
to increase the parameter space, and a model selection criterion is used to
select a subset of this expanded parameter space. We account for the
variability of the criterion by adapting Yekutieli (2012)'s method to Bayesian
model averaging (BMA). Yekutieli (2012)'s method treats model selection as a
truncation problem. We truncate the joint support of the data and the parameter
space to only include small values of the covariance penalized error (CPE)
criterion. The CPE is a general expression that contains several information
criteria as special cases. Simulation results show that as long as the
truncated set does not have near zero probability, we tend to obtain lower mean
squared error than BMA. Additional theoretical results are given that provide
the foundation for these observations. We apply our approach to a
dataset consisting of American Community Survey (ACS) period estimates to
illustrate that this perspective can lead to improvements of a single model.
|
Deep learning classifiers are now known to have flaws in their class
representations. Adversarial attacks can find a human-imperceptible perturbation
for a given image that will mislead a trained model. The most effective methods
to defend against such attacks train on generated adversarial examples to
learn their distribution. Previous work aimed to align original and adversarial
image representations in the same way as domain adaptation to improve
robustness. Yet, they partially align the representations using approaches that
do not reflect the geometry of space and distribution. In addition, it is
difficult to accurately compare robustness between defended models. Until now,
they have been evaluated using a fixed perturbation size. However, defended
models may react differently to variations of this perturbation size. In this
paper, the analogy of domain adaptation is taken a step further by exploiting
optimal transport theory. We propose to use a loss between distributions that
faithfully reflect the ground distance. This leads to SAT (Sinkhorn Adversarial
Training), a more robust defense against adversarial attacks. Then, we propose
to quantify more precisely the robustness of a model to adversarial attacks
over a wide range of perturbation sizes using a different metric, the Area
Under the Accuracy Curve (AUAC). We perform extensive experiments on both
CIFAR-10 and CIFAR-100 datasets and show that our defense is globally more
robust than the state-of-the-art.
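A hedged sketch of the entropy-regularized optimal transport computation at the heart of such a loss (textbook Sinkhorn iterations; the actual SAT training loop is more involved):

    import numpy as np

    def sinkhorn_plan(a, b, C, eps=0.1, iters=200):
        # a, b: histograms; C: ground-cost matrix; returns the transport plan
        K = np.exp(-C / eps)
        u = np.ones_like(a)
        for _ in range(iters):
            v = b / (K.T @ u)
            u = a / (K @ v)
        return u[:, None] * K * v[None, :]

    # the loss is the transport cost <plan, C>, which respects the ground distance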
|
As governments around the world decide to deploy digital health passports as
a tool to curb the spread of Covid-19, it becomes increasingly important to
consider how these can be constructed with privacy-by-design.
In this paper we discuss the privacy and security issues of common approaches
for constructing digital health passports. We then show how to construct, and
deploy, secure and private digital health passports, in a simple and efficient
manner. We do so by using a protocol for distributed password-based token
issuance, secret sharing and by leveraging modern smart phones' secure
hardware.
Our solution only requires a constant amount of asymmetric cryptographic
operations and a single round of communication between the user and the party
verifying the user's digital health passport, and only two rounds between the
user and the server issuing the digital health passport.
|
According to Freud "words were originally magic and to this day words have
retained much of their ancient magical power". By words, behaviors are
transformed and problems are solved. The way we use words reveals our
intentions, goals and values. Novel tools for text analysis help understand the
magical power of words. This power is multiplied, if it is combined with the
study of social networks, i.e. with the analysis of relationships among social
units. This special issue of the International Journal of Information
Management, entitled "Combining Social Network Analysis and Text Mining: from
Theory to Practice", includes heterogeneous and innovative research at the
nexus of text mining and social network analysis. It aims to enrich work at the
intersection of these fields, which still lags behind in theoretical,
empirical, and methodological foundations. The nine articles accepted for
inclusion in this special issue all present methods and tools that have
business applications. They are summarized in this editorial introduction.
|
Neural networks for stock price prediction (NNSPP) have been popular for
decades. However, most of their study results remain in research papers and
cannot truly play a role in the securities market. One of the main reasons
for this situation is that prediction error (PE) based evaluation
results have statistical flaws. PE-based results cannot represent the
most critical financial attribute, the direction of price movement. So they cannot provide investors
with convincing, interpretable, and consistent model performance evaluation
results for practical applications in the securities market. To illustrate, we
have used data selected from 20 stock datasets over six years from the Shanghai
and Shenzhen stock market in China, and 20 stock datasets from NASDAQ and NYSE
in the USA. We implement six shallow and deep neural networks to predict stock
prices and use four prediction error measures for evaluation. The results show
that the prediction error value only partially reflects the model accuracy of
the stock price prediction, and cannot reflect the change in the direction of
the model-predicted stock price. This means that PE is not suitable as an
evaluation indicator for NNSPP, and relying on it would expose investors to
huge potential risks. Therefore, this paper establishes an experimental
platform to confirm that the PE method is not suitable for the NNSPP
evaluation, and provides a theoretical basis for the necessity of creating a
new NNSPP evaluation method in the future.
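To illustrate the direction attribute that PE metrics miss, a simple directional-accuracy measure (our illustrative stand-in, not a metric proposed in the paper):

    import numpy as np

    def directional_accuracy(y_true, y_pred):
        # fraction of steps where predicted and actual price moves share a sign
        return np.mean(np.sign(np.diff(y_true)) == np.sign(np.diff(y_pred)))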
|
The Radar Echo Telescope for Cosmic Rays (RET-CR) is a recently initiated
experiment designed to detect the englacial cascade of a cosmic-ray initiated
air shower via in-ice radar, toward the goal of a full-scale, next-generation
experiment to detect ultra high energy neutrinos in polar ice. For cosmic rays
with a primary energy greater than 10 PeV, roughly 10% of an air-shower's
energy reaches the surface of a high elevation ice-sheet ($\gtrsim$2 km)
concentrated into a radius of roughly 10 cm. This penetrating shower core
creates an in-ice cascade many orders of magnitude more dense than the
preceding in-air cascade. This dense cascade can be detected via the radar echo
technique, where transmitted radio is reflected from the ionization deposit
left in the wake of the cascade. RET-CR will test the radar echo method in
nature, with the in-ice cascade of a cosmic-ray initiated air-shower serving as
a test beam. We present the projected event rate and sensitivity based upon a
three part simulation using CORSIKA, GEANT4, and RadioScatter. RET-CR expects
$\sim$1 radar echo event per day.
|
The near-infinite chemical diversity of natural and artificial macromolecules
arises from the vast range of possible component monomers, linkages, and
polymer topologies. This enormous variety contributes to the ubiquity and
indispensability of macromolecules but hinders the development of general
machine learning methods with macromolecules as input. To address this, we
developed GLAMOUR, a framework for chemistry-informed graph representation of
macromolecules that enables quantifying structural similarity, and
interpretable supervised learning for macromolecules.
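A hedged sketch of the representation idea using networkx; the attribute names are hypothetical, not GLAMOUR's actual schema:

    import networkx as nx

    g = nx.Graph()
    g.add_node(0, monomer="glucose")        # monomers as attributed nodes
    g.add_node(1, monomer="glucose")
    g.add_edge(0, 1, linkage="alpha-1,4")   # linkages as attributed edges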
|
Understanding the formation of lead halide (LH) perovskite solution
precursors is crucial to gain insight into the evolution of these materials to
thin films for solar cells. Using density-functional theory in conjunction with
the polarizable continuum model, we investigate 18 complexes with chemical
formula PbX$_2$M$_4$, where X = Cl, Br, I and M are common solvent molecules.
Through the analysis of structural properties, binding energies, and charge
distributions, we clarify the role of halogen species and solvent molecules in
the formation of LH perovskite precursors. We find that interatomic distances
are critically affected by the halogen species, while the energetic stability
is driven by the solvent coordination to the backbones. Regardless of the
solvent, lead iodide complexes are more strongly bound than the others. Based
on the charge distribution analysis, we find that all solvent molecules bind
covalently with the LH backbones and that Pb-I and Pb-Br bonds lose ionicity in
solution. Our results contribute to clarify the physical properties of LH
perovskite solution precursors and offer a valuable starting point for further
investigations on their crystalline intermediates.
|
We investigate the effect of population III (Pop III) star supernova
explosions~(SNe) on the reionization history at high redshifts using the latest
Planck data. It is predicted that massive Pop~III stars~($130M_\odot\leq M\leq
270M_\odot$) explode energetically at the end of their stellar life as
pair-instability supernovae (PISNe). In the explosion, supernova remnants grow
as hot ionized bubbles and enhance the ionization fraction in the early stage
of the reionization history. This enhancement affects the optical depth of the
cosmic microwave background~(CMB) and generates the additional anisotropy of
the CMB polarization on large scales. Therefore, analyzing the Planck
polarization data allows us to examine the Pop III star SNe and the abundance
of their progenitors, massive Pop III stars. In order to model the SN
contribution to reionization, we introduce a new parameter $\zeta$, which
relates the abundance of the SNe to the collapse fraction of the Universe.
Using the Markov chain Monte Carlo method with the latest Planck polarization
data, we obtain the constraint on our model parameter, $\zeta$. Our constraint
tells us that observed CMB polarization is consistent with the abundance of
PISNe predicted from the star formation rate and initial mass function of Pop
III stars in recent cosmological simulations. We also suggest that combining
further observations on the late reionization history such as high redshift
quasi-stellar object~(QSO) observations can provide tighter constraints and
important information on the nature of Pop III stars.
|
Neural Networks (NNs) can be used to solve Ordinary and Partial Differential
Equations (ODEs and PDEs) by redefining the question as an optimization
problem. The objective function to be optimized is the sum of the squares of
the PDE to be solved and of the initial/boundary conditions. A feed-forward NN
is trained to minimise this loss function evaluated on a set of collocation
points sampled from the domain where the problem is defined. A compact and
smooth solution, that only depends on the weights of the trained NN, is then
obtained. This approach is often referred to as PINN, from Physics Informed
Neural Network~\cite{raissi2017physics_1, raissi2017physics_2}. Despite the
success of the PINN approach in solving various classes of PDEs, an
implementation of this idea that is capable of solving a large class of ODEs
and PDEs with good accuracy and without the need to finely tune the
hyperparameters of the network, is not available yet. In this paper, we
introduce a new implementation of this concept - called dNNsolve - that makes
use of dual Neural Networks to solve ODEs/PDEs. These include: i) sine and
sigmoidal activation functions, that provide a more efficient basis to capture
both secular and periodic patterns in the solutions; ii) a newly designed
architecture, that makes it easy for the NN to approximate the solution
using the basis functions mentioned above. We show that dNNsolve is capable of
solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions, without
the need for hyperparameter fine-tuning.
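For readers new to the approach, a minimal PINN-style loss for the toy problem $u'(x)=\cos(x)$, $u(0)=0$ (an illustrative ODE, not one of the dNNsolve benchmarks):

    import torch

    def pinn_loss(net, x, x0=torch.zeros(1, 1)):
        x = x.requires_grad_(True)
        u = net(x)
        du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
        residual = du - torch.cos(x)            # ODE residual on collocation points
        return (residual**2).mean() + (net(x0)**2).mean()  # + initial condition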
|
To date, dementia remains incurable. Precise diagnosis prior to
the onset of the symptoms can prevent the rapid progression of the emerging
cognitive impairment. Recent progress has shown that Electroencephalography
(EEG) is the promising and cost-effective test to facilitate the detection of
neurocognitive disorders. However, most of the existing works have been using
only resting-state EEG. The efficiencies of EEG signals from various cognitive
tasks, for dementia classification, have yet to be thoroughly investigated. In
this study, we designed four cognitive tasks that engage different cognitive
performances: attention, working memory, and executive function. We
investigated these tasks by using statistical analysis on both time and
frequency domains of EEG signals from three classes of human subjects: Dementia
(DEM), Mild Cognitive Impairment (MCI), and Normal Control (NC). We also
further evaluated the classification performances of two feature extraction
methods: Principal Component Analysis (PCA) and Filter Bank Common Spatial
Pattern (FBCSP). We found that the working memory related tasks yielded good
performances for dementia recognition in both cases using PCA and FBCSP.
Moreover, FBCSP with features combination from four tasks revealed the best
sensitivity of 0.87 and the specificity of 0.80. To the best of our knowledge,
this is the first work that concurrently investigated several cognitive tasks
for dementia recognition using both statistical analysis and classification
scores. Our results yield essential information for designing and conducting
further experimental tasks for the early diagnosis of dementia.
|
Frailty models are survival analysis models which account for heterogeneity
and random effects in the data. In these models, the random effect (the
frailty) is assumed to have a multiplicative effect on the hazard. In this
paper, we present frailty models using phase-type distributions as the
frailties. We explore the properties of the proposed frailty models and derive
expectation-maximization algorithms for maximum-likelihood estimation. The
algorithms' performance is illustrated in several numerical examples of
practical significance.
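For orientation, the construction rests on standard identities (stated here for context, not taken verbatim from the paper): with frailty $Z$ multiplying the baseline hazard, $$ S(t)=\mathbb{E}\big[e^{-Z H_0(t)}\big]=\mathcal{L}_Z\big(H_0(t)\big),\qquad H_0(t)=\int_0^t h_0(s)\,ds, $$ and for a phase-type frailty $Z\sim \mathrm{PH}(\boldsymbol{\pi},\mathbf{T})$ the Laplace transform has the closed rational form $\mathcal{L}_Z(s)=\boldsymbol{\pi}(s\mathbf{I}-\mathbf{T})^{-1}\mathbf{t}$ with exit vector $\mathbf{t}=-\mathbf{T}\mathbf{1}$, which makes the marginal survival function explicit.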
|
Is it possible to use convolutional neural networks pre-trained without any
natural images to assist natural image understanding? The paper proposes a
novel concept, Formula-driven Supervised Learning. We automatically generate
image patterns and their category labels by assigning fractals, which are based
on a natural law existing in the background knowledge of the real world.
Theoretically, the use of automatically generated images instead of natural
images in the pre-training phase allows us to generate an infinite scale
dataset of labeled images. Although the models pre-trained with the proposed
Fractal DataBase (FractalDB), a database without natural images, do not
necessarily outperform models pre-trained with human-annotated datasets at all
settings, we are able to partially surpass the accuracy of ImageNet/Places
pre-trained models. The image representation with the proposed FractalDB
captures a unique feature in the visualization of convolutional layers and
attentions.
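A hedged sketch of how fractal patterns can be rendered from an Iterated Function System via the chaos game; FractalDB's actual parameter sampling and rasterization differ in detail:

    import numpy as np

    def ifs_points(maps, n=10000, seed=0):
        # maps: list of (A, t) affine contractions; returns n attractor points
        rng = np.random.default_rng(seed)
        p, pts = np.zeros(2), np.empty((n, 2))
        for i in range(n):
            A, t = maps[rng.integers(len(maps))]
            p = A @ p + t
            pts[i] = p
        return pts

    # Sierpinski-like example: three contractions toward triangle corners
    maps = [(0.5 * np.eye(2), np.array(c)) for c in [(0, 0), (0.5, 0), (0.25, 0.5)]]
    pts = ifs_points(maps)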
|
Node classification and link prediction are widely studied in graph
representation learning. While both transductive node classification and link
prediction operate over a single input graph, they have so far been studied
separately. Node classification models take an input graph with node features
and incomplete node labels, and implicitly assume that the graph is
relationally complete, i.e., no edges are missing. By contrast, link prediction
models are solely motivated by relational incompleteness of the input graphs,
and do not typically leverage node features or classes. We propose a unifying
perspective and study the problems of (i) transductive node classification over
incomplete graphs and (ii) link prediction over graphs with node features,
introduce a new dataset for this setting, WikiAlumni, and conduct an extensive
benchmarking study.
|
Fourier phase retrieval is the problem of reconstructing a signal given only
the magnitude of its Fourier transform. Optimization-based approaches,
like the well-established Gerchberg-Saxton or the hybrid input output
algorithm, struggle at reconstructing images from magnitudes that are not
oversampled. This motivates the application of learned methods, which allow
reconstruction from non-oversampled magnitude measurements after a learning
phase. In this paper, we want to push the limits of these learned methods by
means of a deep neural network cascade that reconstructs the image successively
on different resolutions from its non-oversampled Fourier magnitude. We
evaluate our method on four different datasets (MNIST, EMNIST, Fashion-MNIST,
and KMNIST) and demonstrate that it yields improved performance over other
non-iterative methods and optimization-based methods.
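For reference, the classic alternating-projection loop mentioned above, here in its error-reduction flavor with a real-space support constraint (a baseline sketch, not the proposed network cascade):

    import numpy as np

    def error_reduction(mag, support, iters=200):
        x = np.random.rand(*mag.shape) * support
        for _ in range(iters):
            X = np.fft.fft2(x)
            X = mag * np.exp(1j * np.angle(X))        # measured Fourier magnitude
            x = np.real(np.fft.ifft2(X)) * support    # support / realness constraint
        return x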
|
Let $(R,\frak m)$ be a Noetherian local ring of prime characteristic $p>0$,
and $t$ an integer such that $H_{\frak m}^j(R)/0^F_{H^j_{\frak m}(R)}$ has
finite length for all $j<t$. The aim of this paper is to show that there exists
a uniform bound for Frobenius test exponents of ideals generated by filter
regular sequences of length at most $t$.
|