Erosion, as a key control of landslide dynamics, significantly increases a landslide's destructive power by rapidly amplifying its volume, mobility and impact energy. Mobility is directly linked to the threat posed by an erosive landslide. No
clear-cut mechanical condition has been presented so far for when, how and how
much energy the erosive landslide gains or loses, resulting in enhanced or
reduced mobility. We pioneer a mechanical model for the energy budget of an
erosive landslide that controls its mobility. A fundamentally new understanding
is that the increased inertia due to the increased mass is related to an
entrainment velocity. With this, the true inertia of an erosive landslide can
be ascertained, making a breakthrough in correctly determining the mobility of
the erosive landslide. Remarkably, the erosion velocity regulates the energy budget and determines whether the landslide mobility will be enhanced or reduced.
This provides the first-ever explicit mechanical quantification of the state of
erosional energy and a precise description of mobility. This addresses the
long-standing question of why many erosive landslides generate higher mobility,
while others reduce mobility. By introducing three key concepts:
erosion-velocity, entrainment-velocity and energy-velocity, we demonstrate that
erosion and entrainment are essentially different processes. Landslides gain
energy and enhance mobility if the erosion velocity is greater than the
entrainment velocity. We introduce two dimensionless numbers, the mobility scaling and the erosion number, delivering an explicit measure of mobility. We establish a mechanism of landslide propulsion in which erosion provides thrust to the landslide. The analytically obtained velocity indicates that erosion controls the
landslide dynamics. We also present a full set of dynamical equations in
conservative form which correctly includes the erosion-induced net momentum
production.
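As a purely illustrative summary (the notation below is assumed for this sketch and is not taken from the paper): writing $\dot m$ for the rate of mass gain, $u^{\mathrm{e}}$ for the erosion velocity and $u^{\mathrm{m}}$ for the entrainment velocity, the momentum balance with erosion-induced production and the stated mobility criterion take the schematic form

$$\frac{d(mv)}{dt} = F + \dot m\, u^{\mathrm{e}}, \qquad \text{mobility enhanced} \iff u^{\mathrm{e}} > u^{\mathrm{m}}.$$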
|
We report the fabrication of EuSb$_2$ single-crystalline films and an investigation
of their quantum transport. First-principles calculations demonstrate that
EuSb$_2$ is a magnetic topological nodal-line semimetal protected by
nonsymmorphic symmetry. Observed Shubnikov-de Haas oscillations with multiple
frequency components exhibit small effective masses and two-dimensional
field-angle dependence even in a 250 nm thick film, further suggesting possible
contributions of surface states. This finding of a high-mobility magnetic topological semimetal should stimulate further investigation of exotic quantum transport phenomena controlled by magnetic order in topological semimetal films.
|
Self-organization is frequently observed in active collectives, from ant
rafts to molecular motor assemblies. General principles describing
self-organization away from equilibrium have been challenging to identify. We
offer a unifying framework that models the behavior of complex systems as
largely random, while capturing their configuration-dependent response to
external forcing. This allows derivation of a Boltzmann-like principle for
understanding and manipulating driven self-organization. We validate our
predictions experimentally in shape-changing robotic active matter, and outline
a methodology for controlling collective behavior. Our findings highlight how
emergent order depends sensitively on the matching between external patterns of
forcing and internal dynamical response properties, pointing towards future
approaches for design and control of active particle mixtures and
metamaterials.
|
This study examines how increasingly complex digital platforms in their early stages in the two-sided market can produce powerful network effects. In this study, I use Transfer Entropy to look for super users who connect people across different networks to achieve higher network effects in the digital platform in the two-sided market, which has recently become more complex. This study also aims to redefine the decision criteria of product managers by helping them identify users with stronger network effects. With the development of technology, the structure of the industry is becoming more difficult to interpret and the complexity of business logic is increasing. This phenomenon is the biggest obstacle making it difficult for start-ups to take on the challenge. I hope this study will help product managers create new digital economic networks, enable them to make prioritized, data-driven decisions, and find users who can become the hub of the network even in small products.
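Since transfer entropy is the core tool here, a minimal plug-in (histogram) estimator of $T_{Y\to X}$ between two users' activity series may help make the approach concrete; the discretization scheme and function name are illustrative assumptions, not the study's implementation.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=2):
    """Plug-in estimate of T_{Y->X} in bits: how much the past of y reduces
    uncertainty about the next value of x beyond x's own past."""
    x = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    y = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))         # (x_t, y_t)
    pairs_xx = Counter(zip(x[1:], x[:-1]))          # (x_{t+1}, x_t)
    singles = Counter(x[:-1])                       # x_t
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n                             # p(x_{t+1}, x_t, y_t)
        p_cond_xy = c / pairs_xy[(x0, y0)]          # p(x_{t+1} | x_t, y_t)
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0] # p(x_{t+1} | x_t)
        te += p_joint * np.log2(p_cond_xy / p_cond_x)
    return te
```

Ranking users by the total transfer entropy they exert on others would be one way to surface the "super users" described above.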
|
The time division multiple access (TDMA) technique has been applied in
automotive multiple-input multiple-output (MIMO) radar. However, it suffers from transmit energy loss and, as a result, degraded parameter estimation performance as the number of transmit elements increases. To tackle this problem, a transmit beamspace (TB) Doppler division multiple
access (DDMA) approach is proposed. First, a phase modulation matrix with empty
Doppler spectrum is introduced. By exploiting the empty Doppler spectrum, a
test function based on sequential detection is developed to mitigate the
Doppler ambiguity in DDMA waveform. Then, a discrete Fourier transform
(DFT)-based TB in slow-time is formed. The proposed method can achieve waveform
diversity in Doppler domain and generate a TB in slow-time that concentrates
the transmitted power in a fixed spatial region to improve the transmit energy
distribution for automotive MIMO radar, which is favored by medium/long range
radar (MRR/LRR) applications. As compared to the conventional TDMA technique,
the proposed TB DDMA approach can fully exploit the transmission capabilities
of all transmit elements to ensure that the emitted power is efficiently used
while remaining easy to implement. Moreover, the proposed TB DDMA method avoids
the trade-off between the active time for each transmit antenna and the frame
time. Simulation results verify the effectiveness of the proposed TB DDMA
approach for automotive MIMO radar.
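A hedged sketch of the slow-time code construction described above; the element count, sub-band layout, spacing and steering direction are illustrative assumptions, not the paper's exact phase modulation matrix.

```python
import numpy as np

M = 4          # transmit elements
K = 64         # pulses in the coherent processing interval (slow time)
P = M + 1      # Doppler sub-bands; P > M leaves one band empty, the kind of
               # "empty Doppler spectrum" exploited to resolve Doppler ambiguity

m = np.arange(M)[:, None]              # element index, shape (M, 1)
k = np.arange(K)[None, :]              # pulse index,   shape (1, K)
ddma = np.exp(2j * np.pi * m * k / P)  # (M, K) slow-time DDMA phase ramps

# A DFT-style taper across the elements concentrates transmit energy in a
# chosen spatial sector (the slow-time "transmit beamspace").
d_over_lambda = 0.5                    # assumed half-wavelength spacing
theta = np.deg2rad(10.0)               # assumed steering direction
steer = np.exp(2j * np.pi * d_over_lambda * np.sin(theta) * m)  # (M, 1)
tx_code = steer * ddma                 # combined slow-time transmit weights
```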
|
The pore-solid interface and its characteristics play a key role in chemical
interactions between minerals in the solid soil matrix and the liquid in pore
space and, consequently, solute transport in soils. Specific surface area
(SSA), typically measured to characterize the pore-solid interface, depends not only on the particle size distribution, but also on particle shape and surface
roughness. In this note, we investigate the effects of surface roughness and
probing molecule size on SSA estimation, employ concepts from fractals, and
theoretically estimate specific surface area from particle size distribution
and water retention curve (WRC). The former is used to characterize the
particle sizes and the latter to approximately quantify the pore-solid
interface roughness by determining the surface fractal dimension $D_s$. To
evaluate our approach, we use five Washington and twenty-one Arizona soils for
which both particle size distributions and water retention curves were
accurately measured over a wide range of particle sizes and matric potentials.
Comparison with the experiments shows that the proposed method estimates the SSA reasonably well, with root mean square errors (RMSE) of 16.8 and 30.1 m$^2$/g for the
Washington and Arizona datasets, respectively.
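For orientation, the standard surface-fractal scaling underlying probing-size effects on SSA (conventional notation: probe molecule size $r$, surface fractal dimension $D_s$) is

$$\mathrm{SSA}(r) \propto r^{2 - D_s}, \qquad 2 \le D_s < 3,$$

so rougher interfaces (larger $D_s$) yield larger SSA estimates for smaller probing molecules.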
|
The present work proposes a deep-learning-based approach for distinguishing COVID-19 coughs from non-COVID-19 coughs, which can be used as a low-resource tool for early detection of the onset of such respiratory diseases. The proposed system uses the ResNet-50 architecture, a well-known Convolutional Neural Network (CNN) for image recognition tasks, fed with log-Mel spectrograms of the audio data to discriminate between the
two types of coughs. For the training and validation of the proposed deep
learning model, this work utilizes the Track-1 dataset provided by the DiCOVA
Challenge 2021 organizers. Additionally, to increase the number of COVID-positive samples and to enhance variability in the training data, this work also utilizes a large open-source database of COVID-19 coughs collected by the EPFL CoughVid team. Our model achieved an average validation AUC
of 98.88%. Also, applying this model on the Blind Test Set released by the
DiCOVA Challenge, the system has achieved a Test AUC of 75.91%, Test
Specificity of 62.50%, and Test Sensitivity of 80.49%. Consequently, this
submission secured 16th position on the DiCOVA Challenge 2021 leaderboard.
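A hedged sketch of the described pipeline, from audio to ResNet-50 logits; the sampling rate, mel-band count and input file name are illustrative assumptions (recent librosa and torchvision APIs assumed), not the paper's exact settings.

```python
import librosa
import torch
import torchvision.models as models

def log_mel(path, sr=16000, n_mels=64):
    """Load audio and return a log-Mel spectrogram (n_mels x frames)."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel)

model = models.resnet50(weights=None, num_classes=2)   # COVID vs non-COVID
spec = log_mel("cough.wav")                            # hypothetical input file
x = torch.tensor(spec, dtype=torch.float32)
x = x.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)        # tile to 1x3xHxW
logits = model(x)                                      # class scores
```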
|
We introduce the generalized Heisenberg algebra appropriate for realizations
of the $\mathfrak{gl}(n)$ algebra. Linear realizations of the
$\mathfrak{gl}(n)$ algebra are presented and the corresponding star product,
coproduct of momenta and twist are constructed. The dual realization and dual
$\mathfrak{gl}(n)$ algebra are considered. Finally, we present a general
realization of the $\mathfrak{gl}(n)$ algebra, the corresponding coproduct of
momenta and two classes of twists. These results can be applied to physical
theories on noncommutative spaces of the $\mathfrak{gl}(n)$ type.
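For reference, in the standard basis $\{M_{ij}\}$ the $\mathfrak{gl}(n)$ commutation relations whose realizations are constructed here read

$$[M_{ij}, M_{kl}] = \delta_{jk}\,M_{il} - \delta_{il}\,M_{kj}.$$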
|
This paper investigates the transient stability of power systems co-dominated
by different types of grid-forming (GFM) devices. Synchronous generators (SGs), virtual synchronous generators (VSGs) and droop-controlled inverters are typical GFM devices in modern power systems. SGs and VSGs are able to provide inertia, while droop-controlled inverters are generally inertialess. The transient stability of power systems dominated by homogeneous GFM devices has been extensively studied. In contrast, the transient stability of hybrid systems jointly dominated by heterogeneous GFM devices has rarely been reported. This paper aims to fill this gap. It is found
that the synchronization behavior of the hybrid system can be described by a
second-order motion equation, resembling the swing equation of SGs. Moreover,
two significant differences from conventional power systems are discovered. The
first is that the droop control dramatically enhances the damping effect,
greatly affecting the transient stability region. The second is that the
frequency state variable exhibits a jump at the moment of fault disturbances,
thus impacting the post-fault initial-state location and stability assessment.
The underlying mechanism behind the two new characteristics is clarified and
the impact on the transient stability performance is analyzed and verified. The
findings provide new insights into transient stability of power systems hosting
heterogeneous devices.
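For reference, the classical swing equation that the hybrid system's second-order motion equation is said to resemble is, in standard textbook notation,

$$M\,\ddot{\delta} + D\,\dot{\delta} = P_m - P_{\max}\sin\delta,$$

with inertia $M$, damping $D$, mechanical power $P_m$ and rotor angle $\delta$; in the hybrid system described above, the droop control contributes strongly to the effective damping $D$.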
|
We present DeepMVI, a deep learning method for missing value imputation in
multidimensional time-series datasets. Missing values are commonplace in
decision support platforms that aggregate data over long time stretches from
disparate sources, and reliable data analytics calls for careful handling of
missing data. One strategy is imputing the missing values, and a wide variety
of algorithms exist spanning simple interpolation, matrix factorization methods
like SVD, statistical models like Kalman filters, and recent deep learning
methods. We show that often these provide worse results on aggregate analytics
compared to just excluding the missing data. DeepMVI uses a neural network to
combine fine-grained and coarse-grained patterns along a time series, and
trends from related series across categorical dimensions. After finding off-the-shelf neural architectures inadequate, we design our own network that includes a
temporal transformer with a novel convolutional window feature, and kernel
regression with learned embeddings. The parameters and their training are
designed carefully to generalize across different placements of missing blocks
and data characteristics. Experiments across nine real datasets and four different missing-data scenarios, comparing against seven existing methods, show that DeepMVI is significantly more accurate, reducing error by more than 50% in more than half the cases compared to the best existing method. Although slower than simpler
matrix factorization methods, we justify the increased time overheads by
showing that DeepMVI is the only option that provided overall more accurate
analytics than dropping missing values.
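One of the named components, kernel regression with learned embeddings across a categorical dimension, can be sketched as follows; the sizes and the dot-product kernel are illustrative assumptions, not DeepMVI's actual design.

```python
import torch
import torch.nn as nn

class KernelRegression(nn.Module):
    """Impute a missing value from related series via learned similarities."""
    def __init__(self, n_series, dim=16):
        super().__init__()
        self.emb = nn.Embedding(n_series, dim)  # one embedding per series

    def forward(self, values, target_idx):
        # values: (n_series,) observations at one timestep;
        # target_idx: index of the series whose missing value we impute.
        e = self.emb.weight                     # (n_series, dim)
        sim = e[target_idx] @ e.T               # kernel = embedding dot product
        mask = torch.arange(len(values)) == target_idx
        w = torch.softmax(sim.masked_fill(mask, float("-inf")), dim=-1)
        return (w * values).sum()               # weighted-average imputation
```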
|
The quantum relative entropy is a measure of the distinguishability of two
quantum states, and it is a unifying concept in quantum information theory:
many information measures such as entropy, conditional entropy, mutual
information, and entanglement measures can be realized from it. As such, there
has been broad interest in generalizing the notion to further understand its
most basic properties, one of which is the data processing inequality. The
quantum f-divergence of Petz is one generalization of the quantum relative
entropy, and it also leads to other relative entropies, such as the Petz--Renyi
relative entropies. In this contribution, I introduce the optimized quantum
f-divergence as a related generalization of quantum relative entropy. I prove
that it satisfies the data processing inequality, and the method of proof
relies upon the operator Jensen inequality, similar to Petz's original
approach. Interestingly, the sandwiched Renyi relative entropies are particular
examples of the optimized f-divergence. Thus, one benefit of this approach is
that there is now a single, unified approach for establishing the data
processing inequality for both the Petz--Renyi and sandwiched Renyi relative
entropies, for the full range of parameters for which it is known to hold.
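For concreteness, the sandwiched Renyi relative entropy mentioned above has the standard form

$$\widetilde{D}_{\alpha}(\rho\|\sigma) = \frac{1}{\alpha-1}\,\log \operatorname{Tr}\!\Big[\Big(\sigma^{\frac{1-\alpha}{2\alpha}}\,\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\Big)^{\alpha}\Big],$$

and these quantities arise as particular examples of the optimized f-divergence.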
|
Dynamics and textures of magnetic domain walls (DWs) may largely alter the
electronic behaviors in a Weyl semimetal system via emergent gauge fields.
However, very little is known about even the basic properties of these domain
walls in Weyl materials. In this work, we imaged the spontaneous magnetization
and magnetic susceptibility of a ferromagnetic (FM) Weyl semimetal CeAlSi using
scanning SQUID microscopy. We observed ferromagnetic DWs aligned with the [100] direction (or other degenerate directions). We also discovered the coexistence of stable and metastable domain phases, which likely arise from magnetoelastic and magnetostriction effects and are expected to be highly tunable with small strains. We applied an in-plane external field as the CeAlSi sample was cooled down below the magnetic phase transition at 8.3 K, showing
that the pattern of FM domains is strongly correlated with both the amplitude
and the orientation of the external field even for weak fields of a few Gauss.
The area of stable domains increases with field and reaches a maximum when the
field is parallel to the main crystallographic axes of the CeAlSi crystal. Our
results suggest that the manipulation of these heterogeneous phases can provide
a practical way to study the interplay between magnetism and electronic
properties in Weyl systems, and that these systems can even serve as a new
platform for magnetic sensors.
|
This research focuses on predicting the demand for air taxi urban air
mobility (UAM) services during different times of the day in various geographic
regions of New York City using machine learning algorithms (MLAs). Several
ride-related factors (such as month of the year, day of the week and time of
the day) and weather-related variables (such as temperature, weather conditions
and visibility) are used as predictors for four popular MLAs, namely, logistic
regression, artificial neural networks, random forests, and gradient boosting.
Experimental results suggest that gradient boosting consistently provides the highest prediction performance. Specific locations, certain time periods and weekdays
consistently emerged as critical predictors.
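For illustration only, a minimal version of the best-performing setup with scikit-learn; the file name, feature names and label are hypothetical placeholders, not the paper's data schema.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("uam_demand.csv")   # hypothetical rides + weather table
X = df[["month", "day_of_week", "hour", "temperature", "visibility"]]
y = df["high_demand"]                # hypothetical binary demand label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)   # default hyperparameters
print("held-out accuracy:", clf.score(X_te, y_te))
```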
|
A long-standing belief holds that gravitational stress exerted by the Moon is responsible for earthquakes, since it causes a tidal deformation of Earth's crust. Even worse, earthquakes are sometimes said to be correlated with eclipses. We review the origin of this erroneous claim and show that the idea stems from a fallacious perception of coincidence. In ancient times the two catastrophes were linked and interpreted as announcements of Doomsday, while in modern times a quasi-scientific essay disseminated such an interrelation shortly before the advent of the theory of plate tectonics.
|
The need for open scientific knowledge graphs is ever increasing. While there
are large repositories of open access articles and free publication indexes,
there are still few free knowledge graphs exposing citation networks, and often
their coverage is partial. Consequently, most evaluation processes based on
citation counts rely on commercial citation databases. Things are changing
thanks to the Initiative for Open Citations (I4OC, https://i4oc.org) and the
Initiative for Open Abstracts (I4OA, https://i4oa.org), whose goal is to
campaign for scholarly publishers to open the reference lists and the other
metadata of their articles. This paper investigates the growth of the open
bibliographic metadata and open citations in two scientific knowledge graphs,
OpenCitations' COCI and Crossref, with an experiment on the Italian National Scientific Qualification (NSQ), the national process for university professor qualification, which uses data from commercial indexes. We simulated the procedure using only such open data and explored similarities and
differences with the official results. The outcomes of the experiment show that
the amount of open bibliographic metadata and open citation data currently
available in the two scientific knowledge graphs adopted is not yet enough for
obtaining results similar to those provided using commercial databases.
|
In this work, we analyze the creation of the discharge asymmetry and the
concomitant formation of the DC self-bias voltage in capacitively coupled radio
frequency plasmas driven by multi-frequency waveforms, as a function of the
electrode surface characteristics. For the latter, we consider and vary the
coefficients that characterize the elastic reflection of the electrons from the
surfaces and the ion-induced secondary electron yield. Our investigations are
based on Particle-in-Cell/Monte Carlo Collision simulations of the plasma and
on a model that aids the understanding of the computational results. Electron reflection from the electrodes is found to slightly affect the discharge
asymmetry in the presence of multi-frequency excitation, whereas secondary
electrons cause distinct changes to the asymmetry of the plasma as a function
of the phase angle between the harmonics of the driving voltage waveform and as a function of the number of these harmonics.
|
End-to-end (E2E) modeling is advantageous for automatic speech recognition
(ASR) especially for Japanese since word-based tokenization of Japanese is not
trivial, and E2E modeling is able to model character sequences directly. This
paper focuses on the latest E2E modeling techniques, and investigates their
performances on character-based Japanese ASR by conducting comparative
experiments. The results are analyzed and discussed in order to understand the
relative advantages of long short-term memory (LSTM), and Conformer models in
combination with connectionist temporal classification, transducer, and
attention-based loss functions. Furthermore, the paper investigates the effectiveness of recent training techniques such as data augmentation (SpecAugment), variational noise injection, and exponential moving average. The
best configuration found in the paper achieved the state-of-the-art character
error rates of 4.1%, 3.2%, and 3.5% for Corpus of Spontaneous Japanese (CSJ)
eval1, eval2, and eval3 tasks, respectively. The system is also shown to be
computationally efficient thanks to the efficiency of Conformer transducers.
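As an illustration of the SpecAugment-style augmentation mentioned above, here is a minimal masking routine; the mask counts and maximum widths are assumed defaults, not the paper's settings.

```python
import numpy as np

def spec_augment(spec, n_freq_masks=2, F=8, n_time_masks=2, T=20, rng=None):
    """Zero out random frequency and time bands of a log-Mel spectrogram.
    spec: (n_mels, frames); assumes F < n_mels and T < frames."""
    rng = rng or np.random.default_rng()
    spec = spec.copy()
    for _ in range(n_freq_masks):               # frequency masking
        f = rng.integers(0, F + 1)
        f0 = rng.integers(0, spec.shape[0] - f + 1)
        spec[f0:f0 + f, :] = 0.0
    for _ in range(n_time_masks):               # time masking
        t = rng.integers(0, T + 1)
        t0 = rng.integers(0, spec.shape[1] - t + 1)
        spec[:, t0:t0 + t] = 0.0
    return spec
```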
|
We propose a new sampling algorithm combining two quite powerful ideas in the
Markov chain Monte Carlo literature -- adaptive Metropolis sampler and
two-stage Metropolis-Hastings sampler. The proposed sampling method will be particularly useful for high-dimensional posterior sampling in Bayesian models with expensive likelihoods. In the first stage of the proposed
algorithm, an adaptive proposal is used based on the previously sampled states
and the corresponding acceptance probability is computed based on an
approximated inexpensive target density. The true expensive target density is
evaluated while computing the second stage acceptance probability only if the
proposal is accepted in the first stage. The adaptive nature of the algorithm
guarantees faster convergence of the chain and very good mixing properties. On
the other hand, the two-stage approach helps in rejecting the bad proposals in
the inexpensive first stage, making the algorithm computationally efficient. As the proposals depend on the previous states, the chain loses its Markov property, but we prove that it retains the desired ergodicity property. The
performance of the proposed algorithm is compared with the existing algorithms
in two simulated and two real data examples.
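The two-stage step itself is easy to sketch. Below is a minimal delayed-acceptance-style version assuming a symmetric Gaussian proposal; the function names are hypothetical and the adaptation of `cov` from the chain's history (the adaptive-Metropolis part) is omitted, so this is not the paper's exact algorithm.

```python
import numpy as np

def two_stage_mh_step(x, log_cheap, log_expensive, cov, rng):
    """One two-stage MH step: screen with a cheap surrogate target, then
    correct with the expensive target only if the proposal survives stage 1."""
    prop = rng.multivariate_normal(x, cov)        # symmetric Gaussian proposal
    # Stage 1: acceptance under the inexpensive approximate target.
    a1 = min(1.0, np.exp(log_cheap(prop) - log_cheap(x)))
    if rng.random() >= a1:
        return x                                  # rejected cheaply
    # Stage 2: the cheap-target ratio reappears to undo the screening bias,
    # leaving the expensive target as the chain's invariant distribution.
    a2 = min(1.0, np.exp(log_expensive(prop) - log_expensive(x)
                         + log_cheap(x) - log_cheap(prop)))
    return prop if rng.random() < a2 else x
```

In the full algorithm, `cov` would be re-estimated from previously sampled states after each iteration, which is what yields the adaptivity discussed above.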
|
The integration of behavioral phenomena into mechanistic models of cognitive
function is a fundamental staple of cognitive science. Yet, researchers are
beginning to accumulate increasing amounts of data without having the temporal
or monetary resources to integrate these data into scientific theories. We seek
to overcome these limitations by incorporating existing machine learning
techniques into an open-source pipeline for the automated construction of
quantitative models. This pipeline leverages the use of neural architecture
search to automate the discovery of interpretable model architectures, and
automatic differentiation to automate the fitting of model parameters to data.
We evaluate the utility of these methods based on their ability to recover
quantitative models of human information processing from synthetic data. We
find that these methods are capable of recovering basic quantitative motifs
from models of psychophysics, learning and decision making. We also highlight
weaknesses of this framework and discuss future directions for their
mitigation.
|
Many advances that have improved the robustness and efficiency of deep
reinforcement learning (RL) algorithms can, in one way or another, be
understood as introducing additional objectives, or constraints, in the policy
optimization step. This includes ideas as far ranging as exploration bonuses,
entropy regularization, and regularization toward teachers or data priors when
learning from experts or in offline RL. Often, task reward and auxiliary
objectives are in conflict with each other and it is therefore natural to treat
these examples as instances of multi-objective (MO) optimization problems. We study the principles underlying multi-objective RL (MORL) and introduce a new algorithm, Distillation of a Mixture of Experts (DiME), that is intuitive and scale-invariant under some conditions. We highlight its strengths on standard
MO benchmark problems and consider case studies in which we recast offline RL
and learning from experts as MO problems. This leads to a natural algorithmic
formulation that sheds light on the connection between existing approaches. For offline RL, we use the MO perspective to derive a simple algorithm that optimizes the standard RL objective plus a behavioral cloning term. This outperforms the state of the art on two established offline RL benchmarks.
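The offline RL recipe in the last sentences can be sketched directly; the loss forms and weighting below are illustrative assumptions rather than DiME's exact formulation.

```python
import torch

def offline_rl_loss(policy, q_fn, states, dataset_actions, bc_weight=1.0):
    """Standard policy-improvement loss plus a behavioral-cloning term.
    Assumes policy(states) returns a reparameterizable torch.distributions
    object and q_fn is a learned state-action value function."""
    dist = policy(states)
    rl_term = -q_fn(states, dist.rsample()).mean()     # maximize learned Q
    bc_term = -dist.log_prob(dataset_actions).mean()   # stay close to the data
    return rl_term + bc_weight * bc_term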
|
The first attempt is made to provide a quantitative theoretical
interpretation of the WASA-at-COSY experimental data on the basic double-pion
production reactions $pn \to d \pi^0\pi^0$ and $pn \to d \pi^+\pi^-$ in the
energy region $T_p =1$ - $1.3$ GeV [P. Adlarson et al., Phys. Lett. B 721, 229
(2013)]. The data are analyzed within a model based on production and decay of
an intermediate $I(J^P)=0(3^+)$ dibaryon resonance $\mathcal{D}_{03}$ (denoted
also as $d^*(2380)$). The observed decrease of the near-threshold enhancement
(the so-called ABC effect) in the reaction $pn \to d \pi^+\pi^-$ in comparison
to that in the reaction $pn \to d \pi^0\pi^0$ is explained (at least partially)
to be due to isospin symmetry violation in the two-pion decay of an
intermediate near-threshold scalar $\sigma$ meson emitted from the
$\mathcal{D}_{03}$ dibaryon resonance under conditions of partial chiral symmetry restoration.
|
Visual navigation is often cast as a reinforcement learning (RL) problem.
Current methods typically result in a suboptimal policy that learns general
obstacle avoidance and search behaviours. For example, in the target-object
navigation setting, the policies learnt by traditional methods often fail to
complete the task, even when the target is clearly within reach from a human
perspective. In order to address this issue, we propose to learn to imagine a
latent representation of the successful (sub-)goal state. To do so, we have
developed a module which we call Foresight Imagination (ForeSIT). ForeSIT is
trained to imagine the recurrent latent representation of a future state that
leads to success, e.g. either a sub-goal state that is important to reach
before the target, or the goal state itself. By conditioning the policy on the
generated imagination during training, our agent learns how to use this
imagination to achieve its goal robustly. Our agent is able to imagine what the
(sub-)goal state may look like (in the latent space) and can learn to navigate
towards that state. We develop an efficient learning algorithm to train ForeSIT
in an on-policy manner and integrate it into our RL objective. The integration
is not trivial due to the constantly evolving state representation shared
between both the imagination and the policy. We empirically observe that our method outperforms the state-of-the-art methods by a large margin in the
commonly accepted benchmark AI2THOR environment. Our method can be readily
integrated or added to other model-free RL navigation frameworks.
|
We present a general theory of interpolation error estimates for smooth
functions and inverse inequalities on anisotropic meshes. In our theory, the interpolation error is bounded in terms of the diameter of a simplex and a geometric parameter. In the two-dimensional case, our geometric parameter is
equivalent to the circumradius of a triangle. In the three-dimensional case,
our geometric parameter also represents the flatness of a tetrahedron. This
paper also includes corrections to an error in "General theory of interpolation
error estimates on anisotropic meshes" (Japan Journal of Industrial and Applied
Mathematics, 38 (2021) 163-191), in which Theorem 2 was incorrect.
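As a point of reference (a known two-dimensional estimate of this type, with standard notation assumed here): for the linear interpolant $I_T v$ on a triangle $T$ with circumradius $R_T$,

$$\| v - I_T v \|_{H^1(T)} \le C\, R_T\, | v |_{H^2(T)},$$

which illustrates why a circumradius-type geometric parameter, rather than a shape-regularity condition, governs the error on anisotropic meshes.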
|
A nonzero bulk viscosity signals the breaking of scale invariance. We demonstrate that disorder in a two-dimensional noninteracting electron gas in a perpendicular magnetic field results in a nonzero disorder-averaged bulk viscosity. We derive an analytic expression for the bulk viscosity within the self-consistent Born approximation. This residual bulk viscosity provides a lower bound for the bulk viscosity of 2D interacting electrons at low enough temperatures.
|
Cochlear implants (CIs) are implantable medical devices that can restore the
hearing sense of people suffering from profound hearing loss. The CI uses a set
of electrode contacts placed inside the cochlea to stimulate the auditory nerve
with current pulses. The exact location of these electrodes may be an important
parameter for improving and predicting performance with these devices. Currently, the methods used in clinics to characterize the geometry of the cochlea and to estimate the electrode positions are manual, error-prone and time-consuming. We propose a Markov random field (MRF) model for CI electrode
localization for cone beam computed tomography (CBCT) data-sets. Intensity and
shape of electrodes are included as prior knowledge as well as distance and
angles between contacts. MRF inference is based on slice sampling particle
belief propagation and guided by several heuristics. A stochastic search finds
the best maximum a posteriori estimation among sampled MRF realizations. We
evaluate our algorithm on synthetic and real CBCT data-sets and compare its
performance with two state-of-the-art algorithms. An increase in localization precision of up to 31.5% (mean) and 48.6% (median) on real CBCT data-sets is shown.
|
We investigate the $\Lambda$-polytopes, a convex-linear structure recently
defined and applied to the classical simulation of quantum computation with
magic states by sampling. There is one such polytope, $\Lambda_n$, for every
number $n$ of qubits. We establish two properties of the family $\{\Lambda_n,
n\in \mathbb{N}\}$, namely (i) Any extremal point (vertex) $A_\alpha \in
\Lambda_m$ can be used to construct vertices in $\Lambda_n$, for all $n>m$.
(ii) For vertices obtained through this mapping, the classical simulation of
quantum computation with magic states can be efficiently reduced to the
classical simulation based on the preimage $A_\alpha$. In addition, we describe
a new class of vertices in $\Lambda_2$ which is outside the known
classification. While the hardness of classical simulation remains an open
problem for most extremal points of $\Lambda_n$, the above results extend
efficient classical simulation of quantum computations beyond the presently
known range.
|
GAN inversion aims to invert a given image back into the latent space of a
pretrained GAN model, for the image to be faithfully reconstructed from the
inverted code by the generator. As an emerging technique to bridge the real and
fake image domains, GAN inversion plays an essential role in enabling the
pretrained GAN models such as StyleGAN and BigGAN to be used for real image
editing applications. Meanwhile, GAN inversion also provides insights into the interpretation of the GAN's latent space and how realistic images can be generated. In this paper, we provide an overview of GAN inversion with a focus
on its recent algorithms and applications. We cover important techniques of GAN
inversion and their applications to image restoration and image manipulation.
We further elaborate on some trends and challenges for future directions.
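For concreteness, the generic optimization-based formulation common in this literature (notation standard, not specific to any one surveyed method): given a target image $x$ and a pretrained generator $G$, one seeks

$$z^{*} = \operatorname*{arg\,min}_{z}\; \ell\big(G(z),\, x\big),$$

where $\ell$ is a reconstruction distance (e.g., pixel-wise plus perceptual); learning-based methods instead train an encoder to predict $z^{*}$ directly, and hybrid methods combine the two.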
|
Non-Maximum Suppression (NMS) is essential for object detection and affects
the evaluation results by incorporating False Positives (FP) and False
Negatives (FN), especially in crowd occlusion scenes. In this paper, we raise the problem of the weak connection between the training targets and the evaluation metrics caused by NMS, and propose a novel NMS-Loss that makes the NMS procedure trainable end-to-end without any additional network parameters. Our NMS-Loss penalizes two cases: when an FP is not suppressed and when an FN is wrongly eliminated by NMS. Specifically, we propose a pull loss to pull predictions with the same
target close to each other, and a push loss to push predictions with different
targets away from each other. Experimental results show that with the help of
NMS-Loss, our detector, namely NMS-Ped, achieves impressive results with Miss
Rate of 5.92% on Caltech dataset and 10.08% on CityPersons dataset, which are
both better than state-of-the-art competitors.
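A hedged sketch of the pull/push idea (the actual NMS-Loss is defined through the NMS procedure itself with IoU-based distances; the center-distance proxy and margin below are illustrative assumptions):

```python
import torch

def pull_push_loss(pred_boxes, target_ids, margin=1.0):
    """pred_boxes: (N, 4) boxes; target_ids: (N,) ground-truth assignment.
    Assumes both same-target and different-target pairs exist in the batch."""
    centers = pred_boxes[:, :2]                    # box centers as a proxy
    d = torch.cdist(centers, centers)              # (N, N) pairwise distances
    same = target_ids[:, None] == target_ids[None, :]
    off_diag = ~torch.eye(len(d), dtype=torch.bool)
    pull = d[same & off_diag].mean()               # same target: pull together
    push = (margin - d[~same]).clamp(min=0).mean() # different: push apart
    return pull + push
```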
|
For abelian surfaces of Picard rank 1, we perform explicit computations of
the cohomological rank functions of the ideal sheaf of one point, and in
particular of the basepoint-freeness threshold. Our main tool is the relation
between cohomological rank functions and Bridgeland stability. By virtue of
recent results of Caucci and Ito, these computations provide new information on
the syzygies of polarized abelian surfaces.
|
The Zakharov system in dimension $d\leqslant 3$ is shown to be locally
well-posed in Sobolev spaces $H^s \times H^l$, extending the previously known
result. We construct new solution spaces by modifying the $X^{s,b}$ spaces,
specifically by introducing temporal weights. We use the contraction mapping principle to prove local well-posedness in these spaces. The result obtained is sharp up to endpoints.
|
A common trait of opinion dynamics in social networks is reliance on an interacting network to characterize the opinion formation process among participating social actors, capturing information flow, cooperative and antagonistic influence, etc. Nevertheless, interacting networks are generally public to social groups, as well as to other individuals who may be interested in them. This blocks a more precise interpretation of the opinion formation process, since social actors always have complex feelings, motivations and behaviors, even beliefs, that are personally private. In this paper, we formulate a general configuration describing how an individual's opinion evolves in a distinct fashion. It consists of two functional networks: an interacting network and an
appraisal network. The interacting network inherits the operational properties of the DeGroot iterative opinion pooling scheme, while the appraisal network, forming a belief system, quantifies a certain cognitive orientation toward interested individuals' beliefs, over which the adhered attitudes may potentially be antagonistic. We explicitly show that a cooperative appraisal network always leads to consensus in opinions, whereas an antagonistic appraisal network causes opinion clustering. We verify that an antagonistic appraisal network can still guarantee consensus under some extra restrictions. These results hence bridge a gap between consensus and clustering in opinion dynamics. We further obtain a gauge on the appraisal network by means of a random convex optimization approach. Moreover, we extend our results to the case of mutually interdependent issues.
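For reference, the DeGroot iterative opinion pooling scheme that the interacting network inherits is, in standard notation,

$$x(t+1) = W\,x(t), \qquad W_{ij} \ge 0, \quad \sum_{j} W_{ij} = 1,$$

where $x(t)$ stacks the actors' opinions and the row-stochastic matrix $W$ encodes the interacting network's weights.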
|
The multiple Birkhoff recurrence theorem states that for any $d\in\mathbb N$,
every system $(X,T)$ has a multiply recurrent point $x$, i.e. $(x,x,\ldots, x)$
is recurrent under $\tau_d:=T\times T^2\times \cdots \times T^d$. It is natural
to ask if there always is a multiply minimal point, i.e. a point $x$ such that
$(x,x,\ldots,x)$ is $\tau_d$-minimal. A negative answer is presented in this
paper via studying the horocycle flows.
However, it is shown that for any minimal system $(X,T)$ and any non-empty
open set $U$, there is $x\in U$ such that $\{n\in{\mathbb Z}: T^nx\in U,
\ldots, T^{dn}x\in U\}$ is piecewise syndetic; and that for a PI minimal
system, any $M$-subsystem of $(X^d, \tau_d)$ is minimal.
|
We generalize the Hasse invariant of local class field theory to the tame
Brauer group of a higher dimensional local field, and use it to study the
arithmetic of central simple algebras over such fields, which are given {\it a
priori} as tensor products of standard cyclic algebras.
We also compute the tame Brauer dimension (or {\it period-index bound}) and
the cyclic length of a general henselian-valued field of finite rank and finite
residue field.
|
A waveform model for eccentric binary black holes named SEOBNRE has been used by several groups to analyze the LIGO-Virgo gravitational wave data. The
accuracy of this model has been validated by comparing it with numerical
relativity. However, SEOBNRE is a time-domain model, and the efficiency for
generating waveforms is a bottleneck in data analysis. To overcome this
disadvantage, we offer a reduced-order surrogate model for eccentric binary
black holes based on the SEOBNRE waveforms. This surrogate model (SEOBNRE\_S)
can simulate the complete inspiral-merger-ringdown waves with enough accuracy,
covering eccentricities from 0 to 0.25 (0.1), and mass ratio from 1:1 to 5:1
(2:1) for nonspinning (spinning) binaries. Waveform generation is accelerated by a factor of about $10^2 \sim 10^3$ relative to the original SEOBNRE model. Therefore SEOBNRE\_S could be helpful in the analysis of LIGO data to find potential eccentricities.
|
In the recent paper arXiv:1807.02721, B. Lawrence and A. Venkatesh develop a
method of proving finiteness theorems in arithmetic geometry by studying the
geometry of families over a base variety. Their results include a new proof of
both the $S$-unit theorem and Faltings' theorem, obtained by constructing and
studying suitable abelian-by-finite families over
$\mathbb{P}^1\setminus\{0,1,\infty\}$ and over an arbitrary curve of genus
$\geq 2$ respectively. In this paper, we apply this strategy to reprove
Siegel's theorem: we construct an abelian-by-finite family on a punctured
elliptic curve to prove finiteness of $S$-integral points on elliptic curves.
|
We have adapted the Vera C. Rubin Observatory Legacy Survey of Space and Time
(LSST) Science Pipelines to process data from the Gravitational-Wave Optical
Transient Observer (GOTO) prototype. In this paper, we describe how we used the
Rubin Observatory LSST Science Pipelines to conduct forced photometry
measurements on nightly GOTO data. By comparing the photometry measurements of
sources taken on multiple nights, we find that the precision of our photometry
is typically better than 20~mmag for sources brighter than 16 mag. We also
compare our photometry measurements against colour-corrected PanSTARRS
photometry, and find that the two agree to within 10~mmag (1$\sigma$) for
bright (i.e., $\sim14^{\rm th}$~mag) sources to 200~mmag for faint (i.e.,
$\sim18^{\rm th}$~mag) sources. Additionally, we compare our results to those
obtained by GOTO's own in-house pipeline, {\sc gotophoto}, and obtain similar
results. Based on repeatability measurements, we measure a $5\sigma$ L-band
survey depth of between 19 and 20 magnitudes, depending on observing
conditions. We assess, using repeated observations of non-varying standard SDSS
stars, the accuracy of our uncertainties, which we find are typically
overestimated by roughly a factor of two for bright sources (i.e., $<15^{\rm
th}$~mag), but slightly underestimated (by roughly a factor of 1.25) for
fainter sources ($>17^{\rm th}$~mag). Finally, we present lightcurves for a
selection of variable sources, and compare them to those obtained with the
Zwicky Transient Facility and Gaia. Despite the Rubin Observatory LSST Science
Pipelines still undergoing active development, our results show that they are
already delivering robust forced photometry measurements from GOTO data.
|
We consider the problem of an inextensible but flexible fiber advected by a
steady chaotic flow, and ask the simple question whether the fiber can
spontaneously knot itself. Using a 1D Cosserat model, a simple local viscous
drag model and discrete contact forces, we explore the probability of finding
knots at any given time when the fiber is interacting with the ABC class of
flows. The bending rigidity is shown to have a marginal effect compared to that
of increasing the fiber length. Complex knots are formed up to 11 crossings,
but some knots are more probable than others. The finite-time Lyapunov exponent
of the flow is shown to have a positive effect on the knot probability.
Finally, contact forces appear to be crucial since knotted configurations can
remain stable for times much longer than the turnover time of the flow,
something that is not observed when the fiber can freely cross itself.
|
We give a survey of the results in [Yeu20a, Yeu20b, Yeu20c], which attempt to relate the derived categories under general classes of flips and flops. We indicate how the approach fails because of what appears to be a formal problem.
We give some ideas, and record some failed attempts, to fix this problem. We
also present some new examples.
|
Motivated by the need for decentralized learning, this paper aims at
designing a distributed algorithm for solving nonconvex problems with general
linear constraints over a multi-agent network. In the considered problem, each
agent owns some local information and a local variable for jointly minimizing a
cost function, but local variables are coupled by linear constraints. Most of
the existing methods for such problems are only applicable to convex problems or problems with specific linear constraints. A distributed algorithm for such problems with general linear constraints in the nonconvex setting is still lacking. In this paper, to tackle this problem, we propose a new algorithm,
called "proximal dual consensus" (PDC) algorithm, which combines a proximal
technique and a dual consensus method. We establish theoretical convergence conditions and show that the proposed PDC algorithm can converge to an
$\epsilon$-Karush-Kuhn-Tucker solution within $\mathcal{O}(1/\epsilon)$
iterations. For computation reduction, the PDC algorithm can choose to perform
cheap gradient descent per iteration while preserving the same order of
$\mathcal{O}(1/\epsilon)$ iteration complexity. Numerical results are presented
to demonstrate the good performance of the proposed algorithms for solving a
regression problem and a classification problem over a network where agents
have only partial observations of data features.
|
Fall prevalence is high among elderly people, and falls are challenging due to their severe consequences, which is why rapid assistance is a critical task. Ambient assisted living (AAL) uses recent technologies such as 5G
networks and the internet of medical things (IoMT) to address this research
area. Edge computing can reduce the cost of cloud communication, including high
latency and bandwidth use, by moving conventional healthcare services and
applications closer to end-users. Artificial intelligence (AI) techniques such
as deep learning (DL) have been used recently for automatic fall detection, as
well as supporting healthcare services. However, DL requires a vast amount of
data and substantial processing power to improve its performance for the IoMT
linked to the traditional edge computing environment. This research proposes an
effective fall detection framework based on DL algorithms and mobile edge
computing (MEC) within 5G wireless networks, the aim being to empower
IoMT-based healthcare applications. We also propose the use of a deep gated
recurrent unit (DGRU) neural network to improve the accuracy of existing
DL-based fall detection methods. DGRU has the advantage of dealing with
time-series IoMT data, and it can reduce the number of parameters and avoid the
vanishing gradient problem. The experimental results on two public datasets
show that the DGRU model of the proposed framework achieves higher accuracy
rates compared to the current related works on the same datasets.
|
Semantic image segmentation aims to obtain object labels with precise
boundaries, which usually suffers from overfitting. Recently, various data
augmentation strategies like regional dropout and mix strategies have been
proposed to address the problem. These strategies have proved to be effective
for guiding the model to attend to less discriminative parts. However, current
strategies operate at the image level, and objects and the background are
coupled. Thus, the boundaries are not well augmented due to the fixed semantic
scenario. In this paper, we propose ObjectAug to perform object-level
augmentation for semantic image segmentation. ObjectAug first decouples the
image into individual objects and the background using the semantic labels.
Next, each object is augmented individually with commonly used augmentation
methods (e.g., scaling, shifting, and rotation). Then, the black area brought
by object augmentation is further restored using image inpainting. Finally, the
augmented objects and background are assembled as an augmented image. In this
way, the boundaries can be fully explored in the various semantic scenarios. In
addition, ObjectAug can support category-aware augmentation that gives various
possibilities to objects in each category, and can be easily combined with
existing image-level augmentation methods to further boost performance.
Comprehensive experiments are conducted on both natural image and medical image
datasets. Experiment results demonstrate that our ObjectAug can evidently
improve segmentation performance.
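The pipeline above maps naturally to a short sketch; `augment_object` and `inpaint` below are hypothetical stand-ins for the actual transforms and inpainting model, so this only illustrates the decouple/augment/restore/assemble flow.

```python
import numpy as np

def object_aug(image, mask, augment_object, inpaint):
    """image: (H, W, 3) array; mask: (H, W) integer semantic labels."""
    out_img, out_mask = image.copy(), mask.copy()
    for obj_id in np.unique(mask):
        if obj_id == 0:                        # assume label 0 is background
            continue
        region = mask == obj_id
        # Decouple: remove the object and restore the hole by inpainting.
        out_img = inpaint(np.where(region[..., None], 0, out_img), region)
        out_mask = np.where(region, 0, out_mask)
        # Augment the object independently (e.g., scale, shift, rotation).
        obj_pixels, new_region = augment_object(image, region)
        # Assemble: paste the augmented object over the background.
        out_img = np.where(new_region[..., None], obj_pixels, out_img)
        out_mask = np.where(new_region, obj_id, out_mask)
    return out_img, out_mask
```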
|
An approximate but straightforward projection method onto molecular many-alpha-particle states is proposed, and the overlap with the shell model space is determined. The resulting space is in accordance with the shell model, but
still contains states which are not completely symmetric under permutations of
the alpha-particles, which is one reason to call the construction
semi-microscopic. A new contribution is the construction of the 6- and
7-$\alpha$-particle spaces. The errors of the method propagate toward larger numbers of alpha-particles and larger shell excitations. In order to show the effectiveness of the proposed construction, the spaces so obtained are applied, within an algebraic cluster model, to $^{20}$Ne, $^{24}$Mg and $^{28}$Si, each treated as a many-alpha-particle system. Earlier results on $^{12}$C and $^{16}$O are summarized.
|
We report the discovery of a sextuply-eclipsing sextuple star system from
TESS data, TIC 168789840, also known as TYC 7037-89-1, the first known sextuple
system consisting of three eclipsing binaries. The target was observed in
Sectors 4 and 5 during Cycle 1, with lightcurves extracted from TESS Full Frame
Image data. It was also previously observed by the WASP survey and ASAS-SN. The
system consists of three gravitationally-bound eclipsing binaries in a
hierarchical structure of an inner quadruple system with an outer binary
subsystem. Follow-up observations from several different observatories were
conducted as a means of determining additional parameters. The system was
resolved by speckle interferometry with a 0.42 arcsec separation between the inner quadruple and the outer binary, implying an estimated outer period of ~2 kyr. It
was determined that the fainter of the two resolved components is an 8.217 day
eclipsing binary, which orbits the inner quadruple that contains two eclipsing
binaries with periods of 1.570 days and 1.306 days. MCMC analysis of the
stellar parameters has shown that the three binaries of TIC 168789840 are
"triplets", as each binary is quite similar to the others in terms of mass,
radius, and Teff. As a consequence of its rare composition, structure, and
orientation, this object can provide important new insight into the formation,
dynamics, and evolution of multiple star systems. Future observations could
reveal if the intermediate and outer orbital planes are all aligned with the
planes of the three inner eclipsing binaries.
|
An $L^2$ version of the classical Denjoy-Carleman theorem regarding
quasi-analytic functions was proved by P. Chernoff on $\mathbb R^n$ using
iterates of the Laplacian. We give a simple proof of this theorem which
generalizes the result on $\mathbb R^n$ for any $p\in [1, 2]$. We then extend
this result to Riemannian symmetric spaces of compact and noncompact type for
$K$-biinvariant functions.
|
Consider the complete graph on $n$ vertices. To each vertex assign an Ising
spin that can take the values $-1$ or $+1$. Each spin $i \in [n]=\{1,2,\dots,
n\}$ interacts with a magnetic field $h \in [0,\infty)$, while each pair of
spins $i,j \in [n]$ interact with each other at coupling strength $n^{-1}
J(i)J(j)$, where $J=(J(i))_{i \in [n]}$ are i.i.d. non-negative random
variables drawn from a prescribed probability distribution $\mathcal{P}$. Spins
flip according to a Metropolis dynamics at inverse temperature $\beta \in
(0,\infty)$. We show that there are critical thresholds $\beta_c$ and
$h_c(\beta)$ such that, in the limit as $n\to\infty$, the system exhibits
metastable behaviour if and only if $\beta \in (\beta_c, \infty)$ and $h \in
[0,h_c(\beta))$. Our main result is sharp asymptotics, up to a multiplicative error $1+o_n(1)$, of the average crossover time from any metastable state to
the set of states with lower free energy. We use standard techniques of the
potential-theoretic approach to metastability. The leading order term in the
asymptotics does not depend on the realisation of $J$, while the correction
terms do. The leading order of the correction term is $\sqrt{n}$ times a
centred Gaussian random variable with a complicated variance depending on
$\beta,h,\mathcal{P}$ and on the metastable state. The critical thresholds
$\beta_c$ and $h_c(\beta)$ depend on $\mathcal{P}$, and so does the number of
metastable states. We derive an explicit formula for $\beta_c$ and identify
some properties of $\beta \mapsto h_c(\beta)$. Interestingly, the latter is not
necessarily monotone, meaning that the metastable crossover may be re-entrant.
|
Many numerical schemes for hyperbolic systems require a piecewise polynomial
reconstruction of the cell averaged values, and to simulate perturbed steady
states accurately we require a so-called 'well-balanced' reconstruction scheme.
For the shallow water system this involves reconstructing in surface elevation,
to which modifications must be made as the fluid depth becomes small to ensure
positivity.
We investigate the scheme proposed in Skevington (2021) through numerical experiments, demonstrating its ability to resolve steady and near-steady states
at high accuracy. We also present a modification to the scheme which enables
the resolution of slowly moving shocks and dam break problems without
compromising the well balanced property.
|
Adaptable, reconfigurable and programmable are key functionalities for the
next generation of silicon-based photonic processors, neural and quantum
networks. Phase change technology offers proven non-volatile electronic
programmability, however the materials used to date have shown prohibitively
high optical losses which are incompatible with integrated photonic platforms.
Here, we demonstrate the capability of the previously unexplored material
Sb$_2$Se$_3$ for ultralow-loss programmable silicon photonics. The favorable
combination of large refractive index contrast and ultralow losses seen in
Sb$_2$Se$_3$ facilitates an unprecedented optical phase control exceeding
10$\pi$ radians in a Mach-Zehnder interferometer. To demonstrate full control
over the flow of light, we introduce nanophotonic digital patterning as a
conceptually new approach at a footprint orders of magnitude smaller than state
of the art interferometer meshes. Our approach enables a wealth of
possibilities in high-density reconfiguration of optical functionalities on
silicon chip.
|
We present a new stochastic particle system on networks which describes the
flocking behavior and pattern formation. More precisely, we consider
Cucker-Smale particles with decentralized formation control and multiplicative
noises on symmetric and connected networks. Under suitable assumptions on the
initial configurations and the network structure, we establish time-asymptotic
stochastic flocking behavior and pattern formation of solutions for the
proposed stochastic particle system. Our approach is based on the Lyapunov
functional energy estimates, and it does not require any spectral information
of the graph associated with the network structure.
|
This article studies descent theory in the setting of Berkovich spaces. We
give sufficient conditions for a given fibered category over the category of
k-affinoid algebras to be a stack for the Berkovich analogue of the
faithfully-flat topology. We give some applications to the faithfully flat
descent of morphisms and show that some descent data are always effective. We
also show that the property of being algebraic for a morphism between the
analytification of two schemes is a local property for the faithfully-flat
topology.
|
We prove that a compact polyhedron $P$ collapses to a subpolyhedron $Q$ if
and only if there exists a piecewise linear free deformation retraction of $P$
onto $Q$.
|
In this paper we give an explicit expression for a star product on the super
Minkowski space written in the supertwistor formalism. The big cell of the super Grassmannian Gr(2|0, 4|1) is identified with the chiral super Minkowski space. The super Grassmannian is a homogeneous space under the action of the complexification SL(4|1) of SU(2,2|1), the superconformal group in dimension 4,
signature (1,3) and supersymmetry N=1. The quantization is done by substituting
the groups and homogeneous spaces by their quantum deformed counterparts. The
calculations are done in Manin's formalism. When we restrict to the big cell we
can compute explicitly an expression for the super star product in the
Minkowski superspace associated to this deformation and the choice of a certain
basis of monomials.
|
We prove that the topological type of a normal surface singularity $(X,0)$
provides finite bounds for the multiplicity and polar multiplicity of $(X,0)$,
as well as for the combinatorics of the families of generic hyperplane sections
and of polar curves of the generic plane projections of $(X,0)$. A key
ingredient in our proof is a topological bound of the growth of the Mather
discrepancies of $(X,0)$, which allows us to bound the number of point blowups
necessary to achieve factorization of any resolution of $(X,0)$ through its
Nash transform. This fits in the program of polar explorations, the quest to
determine the generic polar variety of a singular surface germ, to which the
final part of the paper is devoted.
|
Fast radio bursts (FRBs) are very short and bright transients visible over
extragalactic distances. The radio pulse undergoes dispersion caused by free
electrons along the line of sight, most of which are associated with the
large-scale structure (LSS). The total dispersion measure therefore increases
with the line of sight and provides a distance estimate to the source. We
present the first measurement of the Hubble constant using the dispersion
measure -- redshift relation of FRBs with identified host counterpart and
corresponding redshift information. A sample of nine currently available FRBs
yields a constraint of $H_0 = 62.3 \pm 9.1 \,\rm{km}
\,\rm{s}^{-1}\,\rm{Mpc}^{-1}$, accounting for uncertainty stemming from the
LSS, host halo and Milky Way contributions to the observed dispersion measure.
The main current limitation is statistical, and we estimate that a few hundred
events with corresponding redshifts are sufficient for a per cent measurement
of $H_0$. This is a number well within reach of ongoing FRB searches. We
perform a forecast using a realistic mock sample to demonstrate that a
high-precision measurement of the expansion rate is possible without relying on
other cosmological probes. FRBs can therefore arbitrate the current tension
between early and late time measurements of $H_0$ in the near future.
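For orientation, the LSS contribution to the dispersion measure takes the standard (Macquart-relation) form, in conventional notation,

$$\mathrm{DM}_{\rm LSS}(z) = \frac{3\,c\,\Omega_b H_0}{8\pi G m_p} \int_0^z \frac{f_{\rm IGM}(z')\,\chi_e(z')\,(1+z')}{E(z')}\,dz',$$

where $E(z) = H(z)/H_0$, $f_{\rm IGM}$ is the fraction of baryons in the intergalactic medium and $\chi_e$ the electron fraction; the explicit $H_0$ dependence is what allows the dispersion measure-redshift relation of localized FRBs to constrain the expansion rate.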
|
Sentence-level relation extraction (RE) aims at identifying the relationship
between two entities in a sentence. Many efforts have been devoted to this
problem, while the best performing methods are still far from perfect. In this
paper, we revisit two problems that affect the performance of existing RE
models, namely entity representation and noisy or ill-defined labels. Our
improved baseline model, which incorporates entity representations with typed markers, achieves an F1 of 74.6% on TACRED, significantly outperforming previous SOTA methods. Furthermore, the presented new baseline achieves an F1 of 91.1%
on the refined Re-TACRED dataset, demonstrating that the pre-trained language
models achieve unexpectedly high performance on this task. We release our code
to the community for future research.
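To make the "typed markers" idea concrete, here is one common variant of marker insertion; the exact marker format is an assumption, as implementations differ.

```python
# Wrap subject/object entity spans with type-annotated markers before feeding
# the sentence to a pretrained language model.
sentence = "Bill Gates founded Microsoft ."
subj, subj_type = "Bill Gates", "PERSON"
obj, obj_type = "Microsoft", "ORG"

marked = (sentence
          .replace(subj, f"<S:{subj_type}> {subj} </S:{subj_type}>")
          .replace(obj, f"<O:{obj_type}> {obj} </O:{obj_type}>"))
print(marked)
# <S:PERSON> Bill Gates </S:PERSON> founded <O:ORG> Microsoft </O:ORG> .
```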
|
MnBi$_2$Te$_4$ (MBT) is a promising antiferromagnetic topological insulator
whose films provide access to novel and technologically important topological
phases, including quantum anomalous Hall states and axion insulators. MBT
device behavior is expected to be sensitive to the various collinear and
non-collinear magnetic phases that are accessible in applied magnetic fields.
Here, we use classical Monte Carlo simulations and electronic structure models
to calculate the ground state magnetic phase diagram as well as topological and
optical properties for few layer films with thicknesses up to six septuple
layers. Using magnetic interaction parameters appropriate for MBT, we find that
it is possible to prepare a variety of different magnetic stacking sequences,
some of which have sufficient symmetry to disallow non-reciprocal optical
response and Hall transport coefficients. Other stacking arrangements do yield
large Faraday and Kerr signals, even when the ground state Chern number
vanishes.
|
In experiments that study social phenomena, such as peer influence or herd
immunity, the treatment of one unit may influence the outcomes of others. Such
"interference between units" violates traditional approaches for causal
inference, so that additional assumptions are required to model the underlying
social mechanism. We propose an approach that requires no such assumptions,
allowing for interference that is both unmodeled and strong, with confidence
intervals found using only the randomization of treatment. Additionally, the
approach allows for the usage of regression, matching, or weighting, as may
best fit the application at hand. Inference is done by bounding the
distribution of the estimation error over all possible values of the unknown
counterfactual, using an integer program. Examples are shown using a vaccine
trial and two experiments investigating social influence.
|
In this Letter, we explore nonrelativistic string solutions in various
subsectors of the $ SU(1,2|3) $ SMT strings that correspond to different spin
groups and satisfy the respective BPS bounds. In particular, we carry out an
explicit analysis on rotating string solutions in the light of recently
proposed SMT limits. We explore newly constructed SMT limits of type IIB
(super) strings on $ AdS_5 \times S^5 $ and estimate the corresponding leading
order stringy corrections near the respective BPS bounds.
|
We introduce a sequential learning algorithm to address a robust controller
tuning problem which, in effect, finds (with high probability) a candidate
solution satisfying the internal performance constraint of a chance-constrained
program with black-box functions. The algorithm leverages ideas from the
areas of randomised algorithms and ordinal optimisation, and also draws
comparisons with the scenario approach; these have all been previously applied
to finding approximate solutions for difficult design problems. By exploiting
statistical correlations through black-box sampling, we formally prove that our
algorithm yields a controller meeting the prescribed probabilistic performance
specification. Additionally, we characterise the computational requirement of
the algorithm with a probabilistic lower bound on the algorithm's stopping
time. To validate our work, the algorithm is then demonstrated for tuning model
predictive controllers on a diesel engine air-path across a fleet of vehicles.
The algorithm successfully tuned a single controller to meet a desired tracking
error performance, even in the presence of the plant uncertainty inherent
across the fleet. Moreover, the algorithm was shown to exhibit a sample
complexity comparable to the scenario approach.
|
We classify all fundamental electrically charged thin shells in general
relativity, i.e., static spherically symmetric perfect fluid thin shells with a
Minkowski spacetime interior and a Reissner-Nordstr\"om spacetime exterior,
characterized by the spacetime mass and electric charge. The fundamental shell
can exist in three states, nonextremal, extremal, and overcharged. The
nonextremal state allows the shell to be located such that its radius can be
outside its own gravitational radius, or can be inside its own Cauchy radius.
The extremal state allows the shell to be located such that its radius can be
outside its own gravitational radius, or can be inside it. The overcharged
state allows the shell to be located anywhere. There is a further division, one
has to specify the orientation of the shell, i.e., whether the normal out of
the shell points toward increasing or decreasing radii. There is still a
subdivision in the extremal state when the shell is at the gravitational
radius, in that the shell can approach it from above or from below. The shell
is assumed to be composed of an electrically charged perfect fluid, and the
energy conditions are tested. Carter-Penrose diagrams are drawn for the shell
spacetimes. There are fourteen cases in the classification of the fundamental
shells, namely, nonextremal star shells, nonextremal tension shell black holes,
nonextremal tension shell regular and nonregular black holes, nonextremal
compact shell naked singularities, Majumdar-Papapetrou star shells, extremal
tension shell singularities, extremal tension shell regular and nonregular
black holes, Majumdar-Papapetrou compact shell naked singularities,
Majumdar-Papapetrou shell quasiblack holes, extremal null shell quasinonblack
holes, extremal null shell singularities, Majumdar-Papapetrou null shell
singularities, overcharged star shells, and overcharged compact shell naked
singularities.
|
The stochastic Loewner equation, introduced by Schramm, gives us a powerful
way to study and classify critical random curves and interfaces in
two-dimensional statistical mechanics. A new kind of stochastic Loewner
equation, called fractional stochastic Loewner evolution (FSLE), is proposed
here for the first time. Using a fractional time series as the driving
function of the Loewner equation, together with local fractional
integrodifferential operators, we introduce a large class of fractal curves.
We argue that the FSLE curves, beyond their fractal dimensions, exhibit
crucial differences caused by the Hurst index of the driving function. This
extension opens a new way to classify different types of scaling curves based
on the Hurst index of the corresponding driving function. Such a formalism
appears well suited to the study of a wide range of two-dimensional curves
appearing in statistical mechanics and natural phenomena.
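For intuition, a minimal numerical sketch follows: it generates a discrete trace of the ordinary chordal Loewner equation driven by sampled fractional Brownian motion with Hurst index H. This is a simplified stand-in for the construction (the local fractional integrodifferential operators are not reproduced); the fBm generator, step sizes, and the overall driving scale are assumptions.

```python
# Sketch: chordal Loewner trace from an fBm driving function.
import numpy as np

def fbm(n, hurst, T=1.0, seed=0):
    """Sample fBm at n+1 grid points via Cholesky of its covariance."""
    t = np.linspace(0.0, T, n + 1)[1:]
    cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    rng = np.random.default_rng(seed)
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])

def loewner_trace(xi, dt):
    """Backward (zipper-type) scheme for dg_t/dt = 2/(g_t - xi_t),
    with the driving function held constant on each time step."""
    trace = []
    for k in range(1, len(xi)):
        z = xi[k] + 2j * np.sqrt(dt)          # image of the current tip
        for j in range(k - 1, 0, -1):         # undo the earlier slit maps
            s = np.sqrt((z - xi[j]) ** 2 - 4 * dt)
            z = xi[j] + (s if s.imag >= 0 else -s)  # upper-half-plane branch
        trace.append(z)
    return np.array(trace)

n, H = 400, 0.7
drive = np.sqrt(2.0) * fbm(n, H)              # driving scale is illustrative
gamma = loewner_trace(drive, 1.0 / n)         # complex trace points
```

Varying H in such experiments is the natural numerical probe of how the Hurst index reshapes the resulting curves.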
|
We present the photometric and spectroscopic analysis of three Type II SNe:
2014cx, 2014cy and 2015cz. SN 2014cx is a conventional Type IIP with a shallow
slope (0.2 mag/50d) and an atypically short plateau ($\sim$86 d). SNe 2014cy and
2015cz show relatively large decline rates (0.88 and 1.64 mag/50d,
respectively) at early times before settling to the plateau phase, unlike the
canonical Type IIP/L SN light curves. All of them are normal-luminosity SNe II
with an absolute magnitude at mid-plateau of
M$_{V,14cx}^{50}$=$-$16.6$\pm$0.4$\,\rm{mag}$,
M$_{V,14cy}^{50}$=$-$16.5$\,\pm\,$0.2$\,\rm{mag}$ and
M$_{V,15cz}^{50}$=$-$17.4$\,\pm\,$0.3$\,\rm{mag}$. A relatively broad range of
$^{56}$Ni masses is ejected in these explosions (0.027-0.070 M$_\odot$). The
spectra show the classical evolution of Type II SNe, dominated by a blue
continuum with broad H lines at early phases and narrower metal lines with P
Cygni profiles during the plateau. High-velocity H I features are identified in
the plateau spectra of SN 2014cx at 11600 km s$^{-1}$, possibly a sign of
ejecta-circumstellar interaction. The spectra of SN 2014cy exhibit strong H I
absorption profiles similar to those of normal-luminosity events, whereas the
metal lines are strong, akin to those of sub-luminous SNe. The analytical modelling of the
bolometric light curve of the three events yields similar radii for the three
objects within errors (478, 507 and 608 R$_\odot$ for SNe 2014cx, 2014cy and
2015cz, respectively) and a range of ejecta masses (15.0, 22.2 and 18.7
M$_\odot$ for SNe 2014cx, 2014cy and 2015cz), and a modest range of explosion
energies (3.3 - 6.0 foe where 1 foe = 10$^{51}$ erg).
|
Compton scattering imaging using high-energy synchrotron x-rays allows the
visualization of the spatio-temporal lithiation state in lithium-ion batteries
probed in operando. Here, we apply this imaging technique to the commercial
18650-type cylindrical lithium-ion battery. Our analysis of the lineshapes of
the Compton scattering spectra taken from different electrode layers reveals
the emergence of inhomogeneous lithiation patterns during the charge-discharge
cycles. Moreover, these patterns exhibit oscillations in time where the
dominant period corresponds to the time scale of the charging curve.
|
Classifiers tend to propagate biases present in the data on which they are
trained. Hence, it is important to understand how the demographic identities of
the annotators of comments affect the fairness of the resulting model. In this
paper, we focus on the differences in the ways men and women annotate comments
for toxicity, investigating how these differences result in models that amplify
the opinions of male annotators. We find that the BERT model associates toxic
comments containing offensive words with male annotators, causing the model to
predict 67.7% of toxic comments as having been annotated by men. We show that
this disparity between gender predictions can be mitigated by removing
offensive words and highly toxic comments from the training data. We then apply
the learned associations between gender and language to toxic language
classifiers, finding that models trained exclusively on female-annotated data
perform 1.8% better than those trained solely on male-annotated data and that
training models on data after removing all offensive words reduces bias in the
model by 55.5% while increasing the sensitivity by 0.4%.
|
We study leptonic CP and flavor violations in supersymmetric (SUSY) grand
unified theory (GUT) with right-handed neutrinos, paying attention to the
renormalization group effects on the slepton mass matrices due to the neutrino
and GUT Yukawa interactions. In particular, we study in detail the impacts of
the so-called Casas-Ibarra parameters on CP and flavor violating observables.
The renormalization group effects induce CP and flavor violating elements of
the SUSY breaking scalar mass squared matrices, which may result in sizable
leptonic CP and flavor violating signals. Assuming the seesaw formula for the
active neutrino masses, the renormalization group effects have often been
thought to be negligible as the right-handed neutrino masses become small. With
the most general form of the neutrino Yukawa matrix, i.e., taking into account
the Casas-Ibarra parameters, however, this is not the case. We find that the
maximal possible sizes of leptonic CP and flavor violating signals are
insensitive to the mass scale of the right-handed neutrinos and
that they are as large as (or larger than) the present experimental bounds
irrespective of the right-handed neutrino masses.
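For reference, in one common convention the Casas-Ibarra parametrization writes the neutrino Yukawa matrix compatible with the seesaw formula as

\[
Y_\nu \, v \;=\; i\,\sqrt{\hat M_R}\; R\,\sqrt{\hat m_\nu}\; U^{\dagger},
\qquad R\,R^{T}=\mathbb{1},
\]

where $\hat m_\nu$ and $\hat M_R$ are the diagonal light and heavy neutrino mass matrices, $U$ is the PMNS matrix, and the complex orthogonal matrix $R$ carries the Casas-Ibarra parameters. Large imaginary parts in $R$ can keep $Y_\nu$, and hence the radiatively induced slepton-mass elements, sizable even as $\hat M_R$ is lowered, which is the mechanism behind the insensitivity described above.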
|
Resonant transmission of light is a surface-wave assisted phenomenon that
enables funneling light through subwavelength apertures milled in otherwise
opaque metallic screens. In this work, we introduce a deep learning approach to
efficiently compute and design the optical response of a single subwavelength
slit perforated in a metallic screen and surrounded by periodic arrangements of
indentations. First, we show that a semi-analytical framework based on a
coupled-mode theory formalism is a robust and efficient method to generate the
large training datasets required in the proposed approach. Second, we discuss
how simple, densely connected artificial neural networks can accurately learn
the mapping from the geometrical parameters defining the topology of the system
to its corresponding transmission spectrum. Finally, we report on a deep
learning tandem architecture able to perform inverse design tasks for the
considered class of systems. We expect this work to stimulate further work on
the application of deep learning to the analysis of light-matter interaction in
nanostructured metallic films.
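A minimal sketch of the two ingredients follows: a dense forward network from geometry parameters to a transmission spectrum, and the tandem trick for inverse design. The layer widths, input dimension and spectrum length are assumptions, not the architecture used in the work.

```python
# Sketch: dense forward surrogate + tandem inverse-design network.
import torch
import torch.nn as nn

class ForwardNet(nn.Module):
    def __init__(self, n_geom=6, n_wavelengths=200, width=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_geom, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, n_wavelengths), nn.Sigmoid())  # transmission in [0, 1]

    def forward(self, g):
        return self.net(g)

# Tandem step: with the trained forward net frozen, train an inverse net so
# that forward(inverse(T_target)) reproduces the target spectrum.
forward_net = ForwardNet()
inverse_net = nn.Sequential(nn.Linear(200, 256), nn.ReLU(), nn.Linear(256, 6))
t_target = torch.rand(8, 200)                    # dummy target spectra
pred = forward_net(inverse_net(t_target))
loss = nn.functional.mse_loss(pred, t_target)    # tandem training objective
```

The tandem arrangement sidesteps the one-to-many ambiguity of direct inverse regression: the inverse net is only asked to produce *a* geometry whose predicted spectrum matches the target.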
|
Given an on-board diagnostics (OBD) dataset and a physics-based emissions
prediction model, this paper aims to develop an accurate and
computationally efficient AI (Artificial Intelligence) method that predicts
vehicle emissions. The problem is of societal importance because vehicular
emissions lead to climate change and impact human health. This problem is
challenging because the OBD data does not contain enough parameters needed by
high-order physics models. Conversely, related work has shown that low-order
physics models have poor predictive accuracy when using available OBD data.
This paper uses a divergent window co-occurrence pattern detection method to
develop a spatiotemporal variability-aware AI model for predicting emission
values from the OBD datasets. We conducted a case study using real-world OBD
data from a local public transportation agency. Results show that the proposed
AI method is approximately 65% more accurate than a non-AI low-order physics
model and approximately 35% more accurate than a baseline
model.
|
The present work demonstrates a robust protocol for probing localized
electronic structure in condensed-phase systems, operating in terms of a
recently proposed theory for decomposing the results of Kohn-Sham density
functional theory in a basis of spatially localized molecular orbitals
[Eriksen, J. Chem. Phys. 153, 214109 (2020)]. In an initial application to
liquid, ambient water and the assessment of the solvation energy and the
embedded dipole moment of H$_2$O in solution, we find that both properties are
amplified on average -- in accordance with expectation -- and that correlations
are indeed observed to exist between them. However, the simulated
solvent-induced shift to the dipole moment of water is found to be
significantly dampened with respect to typical literature values. The local
nature of our methodology has further allowed us to evaluate the convergence of
bulk properties with respect to the extent of the underlying one-electron basis
set, ranging from single-$\zeta$ to full (augmented) quadruple-$\zeta$ quality.
Although only a pilot example, our work paves the way towards future studies of local
effects and defects in more complex phases, e.g., liquid mixtures and even
solid-state crystals.
|
In a complex community, species continuously adapt to each other. On rare
occasions, the adaptation of a species can lead to the extinction of others,
and even its own. "Adaptive dynamics" is the standard mathematical framework to
describe evolutionary changes in community interactions, and in particular,
predict adaptation-driven extinction. Unfortunately, most authors implement the
equations of adaptive dynamics through computer simulations, which require
assuming a large number of questionable parameters and fitness functions. In
this study we present analytical solutions to the adaptive dynamics equations,
thereby clarifying how outcomes depend on any computational input. We develop
general formulas that predict equilibrium abundances over evolutionary time
scales. Additionally, we predict which species will go extinct next, and when
this will happen.
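For orientation, the standard starting point of this framework is the canonical equation of adaptive dynamics (Dieckmann and Law), which for a scalar trait $x$ reads

\[
\frac{dx}{dt} \;=\; \tfrac{1}{2}\,\mu\,\sigma^{2}\,\hat N(x)\,
\left.\frac{\partial f(y,x)}{\partial y}\right|_{y=x},
\]

where $\mu$ is the mutation rate, $\sigma^{2}$ the mutational variance, $\hat N(x)$ the resident equilibrium abundance, and $f(y,x)$ the invasion fitness of a rare mutant with trait $y$ in a resident population with trait $x$. The analytical solutions presented here concern equations of this general type.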
|
We obtain an estimate for the cubic Weyl sum which improves the bound
obtained from Weyl differencing for short ranges of summation. In particular,
we show that for any $\varepsilon>0$ there exists some $\delta>0$ such that for
any coprime integers $a,q$ and real number $\gamma$ we have \begin{align*}
\sum_{1\le n \le N}e\left(\frac{an^3}{q}+\gamma n\right)\ll (qN)^{1/4}
q^{-\delta}, \end{align*} provided $q^{1/3+\varepsilon}\le N \le
q^{1/2-\varepsilon}$. Our argument builds on some ideas of Enflo.
|
We show that every irreducible integral Apollonian packing can be set in the
Euclidean space so that all of its tangency spinors and all reduced coordinates
and co-curvatures are integral. As a byproduct, we prove that in any integral
Descartes configuration, the sum of the curvatures of two adjacent disks can be
written as a sum of two squares. Descartes groups are defined, and an
interesting occurrence of the Fibonacci sequence is found.
|
Neural networks are prone to learning shortcuts -- they often model simple
correlations, ignoring more complex ones that potentially generalize better.
Prior works on image classification show that instead of learning a connection
to object shape, deep classifiers tend to exploit spurious correlations with
low-level texture or the background for solving the classification task. In
this work, we take a step towards more robust and interpretable classifiers
that explicitly expose the task's causal structure. Building on current
advances in deep generative modeling, we propose to decompose the image
generation process into independent causal mechanisms that we train without
direct supervision. By exploiting appropriate inductive biases, these
mechanisms disentangle object shape, object texture, and background; hence,
they allow for generating counterfactual images. We demonstrate the ability of
our model to generate such images on MNIST and ImageNet. Further, we show that
the counterfactual images can improve out-of-distribution robustness with a
marginal drop in performance on the original classification task, despite being
synthetic. Lastly, our generative model can be trained efficiently on a single
GPU, exploiting common pre-trained models as inductive biases.
|
The transference principle of Green and Tao enabled various authors to
transfer Szemer\'edi's theorem on long arithmetic progressions in dense sets to
various sparse sets of integers, mostly sparse sets of primes. In this paper,
we provide a transference principle which applies to general affine-linear
configurations of finite complexity. We illustrate the broad applicability of
our transference principle with the case of almost twin primes, by which we
mean either Chen primes or ``bounded gap primes'', as well as with the case of
primes of the form $x^2+y^2+1$. Thus, we show that in these sets of primes the
existence of solutions to finite complexity systems of linear equations is
determined by natural local conditions. These applications rely on a recent
work of the last two authors on Bombieri-Vinogradov type estimates for
nilsequences.
|
In the context of a global economy, addressing SME performance within a local
framework appears to be a rather naive approach. The key drawback of such an
approach stems from its restriction to socio-economic factors, which might
lead to biased decisions regarding potential avenues for performance
improvement. In practice, the key objective of performance analysis is to
identify benchmarks of best managerial practice with respect to resource
allocation as well as production level setting. Conducting the analysis within
a specific country, say a developing country, may be misleading. Although the
best of the class (the benchmark) can be a valid reference for its peers
within the same class, its status might not be preserved if the analysis is
projected outside the borders of the class. Indeed, the likelihood that it
will be outperformed is high. In order to set targets for global competition,
decision makers ought to look at the concept of performance from a broader
geographical perspective, instead of confining it to a local scope. Here, we
analyze, through a case study, SME
performance within local and global production technology frameworks and we
highlight the impact of the economy scope on various decisions. Data
envelopment analysis (DEA) is used as a mathematical tool to support such
decisions.
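For concreteness, a standard input-oriented CCR envelopment model of the kind used in such DEA benchmarking reads

\[
\min_{\theta,\;\lambda\ge 0}\ \theta
\quad\text{s.t.}\quad
\sum_{j=1}^{n}\lambda_j x_{ij}\le \theta\, x_{i0}\ \ (i=1,\dots,m),
\qquad
\sum_{j=1}^{n}\lambda_j y_{rj}\ge y_{r0}\ \ (r=1,\dots,s),
\]

where unit $0$ is the SME under evaluation, $x_{ij}$ and $y_{rj}$ denote its peers' inputs and outputs, and $\theta\le 1$ is the efficiency score. Moving from a local to a global analysis amounts to enlarging the set of peer units $j$, which reshapes the production technology frontier against which $\theta$ is measured.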
|
The spread of a matrix is defined as the maximum distance between any two
eigenvalues of that matrix. In this paper we investigate spread maximization
as a function on a compact convex subset of the set of real symmetric
matrices. We provide some general results and, further, we study the spread
maximization problem on the set of symmetric matrices whose entries are
restricted to a given interval. In particular, we extend some results of X.
Zhan, S. M. Fallat and J. J. Xing.
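For reference, for a symmetric matrix $A$ with eigenvalues $\lambda_1,\dots,\lambda_n$ the spread is

\[
s(A)\;=\;\max_{i,j}\,\bigl|\lambda_i-\lambda_j\bigr|,
\]

and Mirsky's classical bound gives $s(A)\le\sqrt{2\|A\|_F^{2}-\tfrac{2}{n}(\operatorname{tr}A)^{2}}$; bounds of this type motivate the maximization problems studied here.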
|
We discuss a new mass matrix with specific texture zeros for the quarks. The
three flavor mixing angles for the quarks are functions of the quark masses
and can be calculated. The following ratios among CKM matrix elements are
given by ratios of quark masses: $|V_{td}/V_{ts}| \simeq \sqrt{m_d/m_s}$ and
$|V_{ub}/V_{cb}| \simeq \sqrt{m_u/m_c}$. Also we can calculate two CKM matrix
elements: $|V_{cb}| \simeq |V_{ts}| \simeq \sqrt{2}\,(m_s/m_b)$. This relation
as well as the relation $|V_{td}/V_{ts}| \simeq \sqrt{m_d/m_s}$ are in good
agreement with the experimental data. There is a problem with the relation
$|V_{ub}/V_{cb}| \simeq \sqrt{m_u/m_c}$, probably due to wrong estimates of
the quark masses $m_u$ and $m_c$.
|
Context. Solar magnetic pores are, due to their concentrated magnetic fields,
suitable guides for magnetoacoustic waves. Recent observations have shown that
propagating energy flux in pores is subject to strong damping with height;
however, the reason is still unclear. Aims. We investigate possible damping
mechanisms numerically to explain the observations. Methods. We performed 2D
numerical magnetohydrodynamic (MHD) simulations, starting from an equilibrium
model of a single pore inspired by the observed properties. Energy was inserted
into the bottom of the domain via different vertical drivers with a period of
30 s. Simulations were performed both in ideal MHD and including non-ideal
effects. Results. While the analysis of the energy flux in ideal and non-ideal
MHD simulations with a plane driver cannot reproduce the observed damping, the
numerically predicted damping for a localized driver corresponds closely to
the observations. The strong damping in simulations with a localized driver
was caused by two geometric effects: geometric spreading due to diverging
field lines, and lateral wave leakage.
|
In the conventional robust $T$-colluding private information retrieval (PIR)
system, the user needs to retrieve one of the possible messages while keeping
the identity of the requested message private from any $T$ colluding servers.
Motivated by the possible heterogeneous privacy requirements for different
messages, we consider the $(N, T_1:K_1, T_2:K_2)$ two-level PIR system with a
total of $K_2$ messages in the system, where $T_1\geq T_2$ and $K_1\leq K_2$.
Any one of the $K_1$ messages needs to be retrieved privately against $T_1$
colluding servers, and any one of the full set of $K_2$ messages needs to be
retrieved privately against $T_2$ colluding servers. We obtain lower bounds on
the capacity by proposing two novel coding schemes, namely the non-uniform
successive cancellation scheme and the non-uniform block cancellation scheme. A
capacity upper bound is also derived. The gap between the upper bound and the
lower bounds is analyzed, and shown to vanish when $T_1=T_2$. Lastly, we show
that the upper bound is in general not tight by providing a stronger bound for
a special setting.
|
By implicitly recognizing a user based on his/her speech input, speaker
identification enables many downstream applications, such as personalized
system behavior and expedited shopping checkouts. Based on whether the speech
content is constrained or not, both text-dependent (TD) and text-independent
(TI) speaker recognition models may be used. We wish to combine the advantages
of both types of models through an ensemble system to make more reliable
predictions. However, any such combined approach has to be robust to incomplete
inputs, i.e., when either the TD or the TI input is missing. As a solution, we
propose a fusion-of-embeddings network (foenet) architecture, combining joint
learning with
neural attention. We compare foenet with four competitive baseline methods on a
dataset of voice assistant inputs, and show that it achieves higher accuracy
than the baseline and score fusion methods, especially in the presence of
incomplete inputs.
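A minimal sketch of attention-based embedding fusion that stays robust to a missing branch follows; the module name, dimensions and masking scheme are illustrative assumptions, not the foenet specification.

```python
# Sketch: attention-weighted fusion of TD and TI embeddings, masking the
# TD branch whenever its input is absent.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, td_emb, ti_emb, td_present):
        embs = torch.stack([td_emb, ti_emb], dim=1)       # (B, 2, D)
        logits = self.score(embs).squeeze(-1)             # (B, 2)
        # zero attention on the TD branch when its input is missing
        logits[:, 0] = logits[:, 0].masked_fill(~td_present, float("-inf"))
        w = torch.softmax(logits, dim=1).unsqueeze(-1)    # (B, 2, 1)
        return (w * embs).sum(dim=1)                      # fused (B, D)

fuse = AttentionFusion()
td, ti = torch.randn(4, 256), torch.randn(4, 256)
present = torch.tensor([True, True, False, True])
fused = fuse(td, ti, present)                             # (4, 256)
```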
|
Explaining the decisions of models is becoming pervasive in the image
processing domain, whether it is by using post-hoc methods or by creating
inherently interpretable models. While the widespread use of surrogate
explainers is a welcome addition to inspect and understand black-box models,
assessing the robustness and reliability of the explanations is key for their
success. Additionally, whilst existing work in the explainability field
proposes various strategies to address this problem, the challenges of working
with data in the wild are often overlooked. For instance, in image
classification, distortions to images can not only affect the predictions
assigned by the model, but also the explanation. Given a clean and a distorted
version of an image, even if the prediction probabilities are similar, the
explanation may still be different. In this paper we propose a methodology to
evaluate the effect of distortions on explanations by embedding perceptual
distances that tailor the neighbourhoods used to train surrogate explainers.
We also show that, by operating in this way, we can make the explanations more
robust to distortions. We generate explanations for images in the ImageNet-C
dataset and demonstrate how using a perceptual distance in the surrogate
explainer creates more coherent explanations for the distorted and reference
images.
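A minimal sketch of the weighting step follows: perturbed samples are weighted by a perceptual distance to the reference image rather than the usual cosine/L2 kernel. The stand-in metric and kernel width below are assumptions; a real setup might use a learned metric such as LPIPS.

```python
# Sketch: perceptual-distance kernel weights for surrogate training samples.
import numpy as np

def perceptual_distance(img_a, img_b):
    """Illustrative stand-in metric (RMSE in pixel space)."""
    return np.sqrt(np.mean((img_a.astype(float) - img_b.astype(float)) ** 2))

def surrogate_weights(reference, perturbed_images, kernel_width=0.25):
    d = np.array([perceptual_distance(reference, p) for p in perturbed_images])
    # exponential kernel: nearby (perceptually similar) samples weigh more
    return np.exp(-(d / (kernel_width * d.max() + 1e-12)) ** 2)
```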
|
We develop a microscopic and atomistic theory of electron spin-based qubits
in gated quantum dots in a single layer of transition metal dichalcogenides.
The qubits are identified with two degenerate locked spin and valley states in
a gated quantum dot. The two-qubit states are accurately described using a
multi-million atom tight-binding model solved in wavevector space. The
spin-valley locking and strong spin-orbit coupling result in two degenerate
states, one of the qubit states being spin-down located at the $+K$ valley of
the Brillouin zone, and the other state located at the $-K$ valley with spin
up. We describe the qubit operations necessary to rotate the spin-valley qubit
as a combination of the applied vertical electric field, enabling spin-orbit
coupling in a single valley, with a lateral strongly localized valley-mixing
gate.
|
Automated systems that negotiate with humans have broad applications in
pedagogy and conversational AI. To advance the development of practical
negotiation systems, we present CaSiNo: a novel corpus of over a thousand
negotiation dialogues in English. Participants take the role of campsite
neighbors and negotiate for food, water, and firewood packages for their
upcoming trip. Our design results in diverse and linguistically rich
negotiations while maintaining a tractable, closed-domain environment. Inspired
by the literature in human-human negotiations, we annotate persuasion
strategies and perform correlation analysis to understand how the dialogue
behaviors are associated with the negotiation performance. We further propose
and evaluate a multi-task framework to recognize these strategies in a given
utterance. We find that multi-task learning substantially improves the
performance for all strategy labels, especially for the ones that are the most
skewed. We release the dataset, annotations, and the code to propel future work
in human-machine negotiations: https://github.com/kushalchawla/CaSiNo
|
We introduce codimension three magnetically charged surface operators in
five-dimensional (5d) $\mathcal{N}=1$ supersymmetric gauge theories on $T^2 \times
\mathbb{R}^3$. We evaluate the vacuum expectation values (vevs) of surface
operators by supersymmetric localization techniques. Contributions of monopole
bubbling effects to the path integral are given by elliptic genera of world
volume theories on D-branes. Our result gives an elliptic deformation of the
SUSY localization formula \cite{Ito:2011ea} (resp. \cite{Okuda:2019emk,
Assel:2019yzd}) of BPS 't Hooft loops (resp. bare monopole operators) in 4d
$\mathcal{N}=2$ (resp. 3d $\mathcal{N}=4$) gauge theories. We define
deformation quantizations of vevs of surface operators in terms of the
Weyl-Wigner transform, where the $\Omega$-background parameter plays the role
of the Planck constant. For 5d $\mathcal{N}=1^*$ gauge theory, we find that the
deformation quantization of the surface operators in the anti-symmetric
representations agrees with the type A elliptic Ruijsenaars operators. The
mutual commutativity of these difference operators is related to the
commutativity of products of 't Hooft surface operators.
|
Squeezed, nonclassical states are an integral tool of quantum metrology due
to their ability to push the sensitivity of a measurement apparatus beyond the
limits of classical states. While their creation in light has become a standard
technique, the production of squeezed states of the collective excitations in
gases of ultracold atoms, the phonons of a Bose-Einstein condensate (BEC), is a
comparably recent problem. This task is continuously gaining relevance with a
growing number of proposals for BEC-based quantum metrological devices and the
possibility to apply them in the detection of gravitational waves. The
objective of this thesis is to find whether the recently described effect of an
oscillating external potential on a uniform BEC can be exploited to generate
two-mode squeezed phonon states, given present day technology. This question
brings together elements of a range of fields beyond cold atoms, such as
general relativity and Efimov physics. To answer it, the full transformation
caused by the oscillating potential on an initially thermal phononic state is
considered, allowing us to find an upper bound on the magnitude of this
perturbation as well as to quantify the quality of the final state with respect
to its use in metrology. These findings are then applied to existing
experiments to judge the feasibility of the squeezing scheme; while the
results indicate that they are not well suited for it, a setup is proposed that
allows for its efficient implementation and seems within experimental reach. In
view of the vast parameter space leaving room for optimization, the considered
mechanism could find applications not only in the gravitational wave detector
that originally motivated this work, but more generally in the field of quantum
metrology based on ultracold atoms.
|
Artificial intelligence (AI) is supposed to help us make better choices. Some
of these choices are small, like what route to take to work, or what music to
listen to. Others are big, like what treatment to administer for a disease or
how long to sentence someone for a crime. If AI can assist with these big
decisions, we might think it can also help with hard choices, cases where
alternatives are neither better, worse nor equal but on a par. The aim of this
paper, however, is to show that this view is mistaken: the fact of parity shows
that there are hard limits on AI in decision making and choices that AI cannot,
and should not, resolve.
|
This paper presents a simple parameter substitution that makes use of the
reciprocal relation between typical objective functions and typical random
parameters. Thereby, the accuracy of first-order probabilistic analysis
improves significantly at almost no additional computational cost. The
parameter substitution requires a transformation of the stochastic distribution
of the substituted parameter, which is explained for different cases.
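A tiny numerical illustration of the idea follows (all distributions and values are assumptions): for a response $f(X)=a/X$, a first-order treatment about the mean of $X$ is biased, while the same first-order treatment in the substituted parameter $U=1/X$, using $U$'s transformed distribution, is exact because $f$ is linear in $U$.

```python
# Sketch: first-order moments with and without the reciprocal substitution.
import numpy as np

rng = np.random.default_rng(1)
a = 10.0
x = rng.lognormal(mean=0.0, sigma=0.3, size=1_000_000)  # assumed input model

mc = np.mean(a / x)          # Monte Carlo reference for E[a/X]
fosm_x = a / np.mean(x)      # first order in X: biased (Jensen's inequality)
u = 1.0 / x                  # substituted parameter with transformed samples
fosm_u = a * np.mean(u)      # first order in U: exact, since a*U is linear
print(mc, fosm_x, fosm_u)    # fosm_u matches mc; fosm_x underestimates it
```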
|
The Covering Salesman Problem (CSP) is a generalization of the Traveling
Salesman Problem in which the tour is not required to visit all vertices, as
long as all vertices are covered by the tour. The objective of the CSP is to
find a minimum-length Hamiltonian cycle over a subset of vertices that covers
an undirected graph. In this paper, valid inequalities from the generalized
traveling salesman problem are applied to the CSP in addition to new valid
inequalities that explore distinct aspects of the problem. A branch-and-cut
framework assembles exact and heuristic separation routines for integer and
fractional CSP solutions. Computational experiments show that the proposed
framework outperformed methodologies from the literature with respect to optimality
gaps. Moreover, optimal solutions were proven for several previously unsolved
instances.
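For context, one common integer-programming formulation of the CSP uses visit variables $y_i$ and edge variables $x_e$:

\[
\min \sum_{e\in E} c_e x_e
\quad\text{s.t.}\quad
\sum_{e\in\delta(i)} x_e = 2\,y_i \ \ (i\in V),\qquad
\sum_{j\,:\,j \text{ covers } i} y_j \ge 1 \ \ (i\in V),\qquad
\sum_{e\in\delta(S)} x_e \ge 2\,(y_u + y_v - 1) \ \ (S\subset V,\ u\in S,\ v\notin S),
\]

where the degree constraints force a cycle through visited vertices, the second family enforces coverage, and the last family are connectivity (subtour-elimination) cuts, typically separated dynamically inside a branch-and-cut, exactly the place where the exact and heuristic separation routines mentioned above operate.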
|
In this paper, we investigate the complexity of the central path of
semidefinite optimization through the lens of real algebraic geometry. To that
end, we propose an algorithm to compute real univariate representations
describing the central path and its limit point, where the limit point is
described by taking the limit of central solutions, as bounded points in the
field of algebraic Puiseux series. As a result, we derive an upper bound
$2^{O(m+n^2)}$ on the degree of the Zariski closure of the central path, when
$\mu$ is sufficiently small, and for the complexity of describing the limit
point, where $m$ and $n$ denote the number of affine constraints and size of
the symmetric matrix, respectively. Furthermore, by the application of the
quantifier elimination to the real univariate representations, we provide a
lower bound $1/\gamma$, with $\gamma =2^{O(m+n^2)}$, on the convergence rate of
the central path.
|
We take a broad look at the problem of identifying the magnetic solar causes
of space weather. With the lackluster performance of extrapolations based upon
magnetic field measurements in the photosphere, we identify a region in the
near UV part of the spectrum as optimal for studying the development of
magnetic free energy over active regions. Using data from SORCE, Hubble Space
Telescope, and SKYLAB, along with 1D computations of the near-UV (NUV) spectrum
and numerical experiments based on the MURaM radiation-MHD and HanleRT
radiative transfer codes, we address multiple challenges. These challenges are
best met through a combination of near UV lines of bright \ion{Mg}{2}, and
lines of \ion{Fe}{2} and \ion{Fe}{1} (mostly within the $4s-4p$ transition
array) which form in the chromosphere up to $2\times10^4$ K. Both Hanle and
Zeeman effects can in principle be used to derive vector magnetic fields.
However, for any given spectral line the $\tau=1$ surfaces are generally
geometrically corrugated owing to fine structure such as fibrils and spicules.
By using multiple spectral lines spanning different optical depths, magnetic
fields across nearly-horizontal surfaces can be inferred in regions of low
plasma $\beta$, from which free energies, magnetic topology and other
quantities can be derived.
Based upon the recently-reported successful suborbital space measurements of
magnetic fields with the CLASP2 instrument,
we argue that a modest space-borne telescope will be able to make significant
advances in the attempts to predict solar eruptions. Difficulties associated
with blended lines are shown to be minor in an Appendix.
|
To investigate the influence of the orifice geometry on near-field coherent
structures in a jet, Fourier-POD is applied. Velocity and vorticity snapshots
obtained from tomographic particle image velocimetry at the downstream distance
of two equivalent orifice diameters are analysed. Jets issuing from a circular
orifice and from a fractal orifice are examined, where the fractal geometry is
obtained from a repeating fractal pattern applied to a base square shape. While
in the round jet energy is mostly contained at wavenumber m=0, associated with
the characteristic Kelvin-Helmholtz vortex rings, in the fractal jet modal
structures at the fundamental azimuthal wavenumber m=4 capture the largest
amount of energy. The second part of the study focuses on the relationship
between streamwise vorticity and streamwise velocity, to characterise the role
of the orifice geometry on the lift-up mechanism recently found to be active in
turbulent jets. The averaging of the streamwise vorticity conditioned on
intense positive fluctuations of streamwise velocity reveals a pair of
vorticity structures of opposite sign flanking the conditioning point, inducing
a radial flow towards the jet periphery. This pair of structures is observed in
both jets, even if the azimuthal extent of this pattern is 30% larger in the
jet issuing from the circular orifice. This evidences that the orifice geometry
directly influences the interaction between velocity and vorticity.
|
Purpose: Radiation therapy treatment planning is a trial-and-error, often
time-consuming process. An optimal dose distribution based on a specific
anatomy can be predicted by pre-trained deep learning (DL) models. However,
dose distributions are often optimized based on not only patient-specific
anatomy but also physician preferred trade-offs between planning target volume
(PTV) coverage and organ at risk (OAR) sparing. Therefore, it is desirable to
allow physicians to fine-tune the dose distribution predicted based on patient
anatomy. In this work, we developed a DL model to predict the individualized 3D
dose distributions by using not only the anatomy but also the desired PTV/OAR
trade-offs, as represented by a dose volume histogram (DVH), as inputs.
Methods: The desired DVH, fine-tuned by physicians from the initially predicted
DVH, is first projected onto the Pareto surface, then converted into a vector,
and then concatenated with mask feature maps. The network output for training
is the dose distribution corresponding to the Pareto optimal DVH. The
training/validation datasets contain 77 prostate cancer patients, and the
testing dataset has 20 patients. Results: The trained model can predict a 3D
dose distribution that is approximately Pareto optimal. We calculated the
difference between the predicted and the optimized dose distribution for the
PTV and all OARs as a quantitative evaluation. The largest average error in
mean dose was about 1.6% of the prescription dose, and the largest average
error in the maximum dose was about 1.8%. Conclusions: In this feasibility
study, we have developed a 3D U-Net model with the anatomy and desired DVH as
inputs to predict an individualized 3D dose distribution. The predicted dose
distributions can be used as references for dosimetrists and physicians to
rapidly develop a clinically acceptable treatment plan.
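A minimal sketch of the input-conditioning step follows (all shapes and channel counts are illustrative assumptions): the desired-DVH vector is broadcast to spatial feature maps and concatenated with the anatomy mask channels before entering the 3D U-Net.

```python
# Sketch: conditioning a 3D dose-prediction network on a desired DVH.
import torch

masks = torch.rand(1, 8, 32, 64, 64)             # PTV/OAR mask channels (assumed)
dvh_vec = torch.rand(1, 16)                      # Pareto-projected DVH vector
dvh_maps = dvh_vec.view(1, 16, 1, 1, 1).expand(-1, -1, 32, 64, 64)
net_input = torch.cat([masks, dvh_maps], dim=1)  # (1, 24, 32, 64, 64) U-Net input
```

Broadcasting the trade-off vector to every voxel lets a standard convolutional encoder see the desired DVH alongside the local anatomy at each spatial location.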
|
Under the environment of big data streams, it is a common situation where the
variable set of a model may change according to the condition of data streams.
In this paper, we propose a homogenization strategy to represent the
heterogeneous models that are gradually updated in the process of data streams.
With the homogenized representations, we can easily construct various online
updating statistics such as parameter estimation, residual sum of squares and
$F$-statistic for the heterogeneous updating regression models. The main
difference from the classical scenarios is that the artificial covariates in
the homogenized models are not identically distributed as the natural
covariates in the original models; consequently, the related theoretical
properties are distinct from the classical ones. The asymptotic properties of
the online updating statistics are established, which show that the new method
can achieve estimation efficiency and oracle property, without any constraint
on the number of data batches. The behavior of the method is further
illustrated by various numerical examples from simulation experiments.
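A minimal sketch of the online-updating backbone follows: only the running summaries $(X^\top X,\ X^\top y)$ are kept across batches, so no raw batch is revisited. The homogenization step itself (aligning heterogeneous variable sets via artificial covariates) is not reproduced here, and all data below are simulated assumptions.

```python
# Sketch: online updating least squares over a stream of data batches.
import numpy as np

p = 5
xtx, xty = np.zeros((p, p)), np.zeros(p)
rng = np.random.default_rng(0)
beta_true = np.arange(1.0, p + 1.0)

for batch in range(20):                      # simulated data stream
    X = rng.standard_normal((100, p))
    y = X @ beta_true + rng.standard_normal(100)
    xtx += X.T @ X                           # update sufficient statistics
    xty += X.T @ y
    beta_hat = np.linalg.solve(xtx, xty)     # current online estimate

print(beta_hat)                              # matches full-data OLS
```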
|
Infrared nanospectroscopy based on Fourier transform infrared near-field
spectroscopy (nano-FTIR) is an emerging nanoanalytical tool with large
application potential for label-free mapping and identification of organic and
inorganic materials with nanoscale spatial resolution. However, the detection
of thin molecular layers and nanostructures on standard substrates is still
challenged by weak signals. Here, we demonstrate a significant enhancement of
nano-FTIR signals of a thin organic layer by exploiting polariton-resonant
tip-substrate coupling and surface polariton illumination of the probing tip.
When the molecular vibration matches the tip-substrate resonance, we achieve up
to nearly one order of magnitude signal enhancement on a phonon-polaritonic
quartz (c-SiO2) substrate, as compared to nano-FTIR spectra obtained on metal
(Au) substrates, and up to two orders of magnitude when compared to the
standard infrared spectroscopy substrate CaF2. Our results will be of critical
importance for boosting nano-FTIR spectroscopy towards the routine detection of
monolayers and single molecules.
|
We define an attractive gravity probe surface (AGPS) as a compact 2-surface
$S_\alpha$ with positive mean curvature $k$ satisfying $r^a D_a k / k^2 \ge
\alpha$ (for a constant $\alpha>-1/2$) in the local inverse mean curvature
flow, where $r^a D_a k$ is the derivative of $k$ in the outward unit normal
direction. For asymptotically flat spaces, any AGPS is proved to satisfy the
areal inequality $A_\alpha \le 4\pi [ ( 3+4\alpha)/(1+2\alpha) ]^2(Gm)^2$,
where $A_{\alpha}$ is the area of $S_\alpha$ and $m$ is the
Arnowitt-Deser-Misner (ADM) mass. Equality is realized when the space is
isometric to the $t=$constant hypersurface of the Schwarzschild spacetime and
$S_\alpha$ is an $r=\mathrm{constant}$ surface with $r^a D_a k / k^2 = \alpha$.
We adopt two methods: the inverse mean curvature flow and the conformal flow.
The latter makes our result applicable to the case where $S_\alpha$ has
multiple components. For anti-de Sitter (AdS) spaces, a similar inequality is
derived, but the proof is performed only by using the inverse mean curvature
flow. We also discuss the cases with asymptotically locally AdS spaces.
|
The recent surge of complex attention-based deep learning architectures has
led to extraordinary results in various downstream NLP tasks in the English
language. However, such research for resource-constrained and morphologically
rich Indian vernacular languages has been relatively limited. This paper
proffers team SPPU\_AKAH's solution for the TechDOfication 2020 subtask-1f:
which focuses on the coarse-grained technical domain identification of short
text documents in Marathi, a Devanagari script-based Indian language.
Leveraging the large dataset at hand, a hybrid CNN-BiLSTM attention ensemble model is
proposed that competently combines the intermediate sentence representations
generated by the convolutional neural network and the bidirectional long
short-term memory, leading to efficient text classification. Experimental
results show that the proposed model outperforms various baseline machine
learning and deep learning models in the given task, giving the best validation
accuracy of 89.57\% and f1-score of 0.8875. Furthermore, the solution resulted
in the best system submission for this subtask, giving a test accuracy of
64.26\% and f1-score of 0.6157, surpassing the performances of other teams as
well as the baseline system provided by the organizers of the shared task.
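A rough sketch of such a CNN-BiLSTM attention ensemble follows; the vocabulary size, embedding and hidden dimensions, and the way the two branches are combined are illustrative assumptions, not the team's exact configuration.

```python
# Sketch: CNN and BiLSTM-attention branches combined for text classification.
import torch
import torch.nn as nn

class CnnBilstmAttn(nn.Module):
    def __init__(self, vocab=30000, emb=300, classes=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, 128, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(emb, 64, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(128, 1)
        self.out = nn.Linear(128 + 128, classes)

    def forward(self, ids):
        e = self.emb(ids)                                           # (B, T, E)
        c = torch.relu(self.conv(e.transpose(1, 2))).max(-1).values # CNN branch
        h, _ = self.lstm(e)                                         # (B, T, 128)
        a = torch.softmax(self.attn(h), dim=1)                      # attention over T
        s = (a * h).sum(1)                                          # BiLSTM branch
        return self.out(torch.cat([c, s], dim=-1))                  # ensemble head

model = CnnBilstmAttn()
logits = model(torch.randint(0, 30000, (2, 50)))                    # (2, 4)
```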
|
Let $P$ be a bounded convex subset of $\mathbb R^n$ of positive volume.
Denote the smallest degree of a polynomial $p(X_1,\dots,X_n)$ vanishing on
$P\cap\mathbb Z^n$ by $r_P$ and denote the smallest number $u\geq0$ such that
every function on $P\cap\mathbb Z^n$ can be interpolated by a polynomial of
degree at most $u$ by $s_P$. We show that the values $(r_{d\cdot P}-1)/d$ and
$s_{d\cdot P}/d$ for dilates $d\cdot P$ converge from below to some numbers
$v_P,w_P>0$ as $d$ goes to infinity. The limits satisfy $v_P^{n-1}w_P \leq
n!\cdot\operatorname{vol}(P)$. When $P$ is a triangle in the plane, we show
equality: $v_Pw_P = 2\operatorname{vol}(P)$. These results are obtained by
looking at the set of standard monomials of the vanishing ideal of $d\cdot
P\cap\mathbb Z^n$ and by applying the Bernstein--Kushnirenko theorem. Finally,
we study irreducible Laurent polynomials that vanish with large multiplicity at
a point. This work is inspired by questions about Seshadri constants.
|
Neural cellular automata (Neural CA) are a recent framework used to model
biological phenomena emerging from multicellular organisms. In these systems,
artificial neural networks are used as update rules for cellular automata.
Neural CA are end-to-end differentiable systems where the parameters of the
neural network can be learned to achieve a particular task. In this work, we
used neural CA to control a cart-pole agent. The observations of the
environment are transmitted to input cells, while the values of output cells
are used as a readout of the system. We trained the model using deep-Q
learning, where the states of the output cells were used as the Q-value
estimates to be optimized. We found that the computing abilities of the
cellular automata were maintained over several hundreds of thousands of
iterations, producing an emergent stable behavior in the environment it
controls for thousands of steps. Moreover, the system demonstrated life-like
phenomena such as a developmental phase, regeneration after damage, stability
despite a noisy environment, and robustness to unseen disruption such as input
deletion.
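A toy sketch of one neural-CA step with input and output cells follows; the grid size, channel count, cell wiring and rule network are illustrative assumptions, and the actual architecture may differ.

```python
# Sketch: neural CA step with observation injection and Q-value readout.
import torch
import torch.nn as nn

GRID, CH = 16, 8
rule = nn.Sequential(nn.Conv2d(CH, 32, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(32, CH, 1))        # learned local update rule

def ca_step(state, observation):
    state = state.clone()
    state[:, 0, 0, :4] = observation              # write cart-pole obs to input cells
    state = state + rule(state)                   # residual neural-CA update
    q_values = state[:, 1, -1, :2]                # read two output cells as Q-values
    return state, q_values

state = torch.zeros(1, CH, GRID, GRID)
obs = torch.tensor([0.0, 0.1, -0.02, 0.3])        # cart position/velocity, pole angle/rate
state, q = ca_step(state, obs)                    # q used by the DQN objective
```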
|
Linear regression is a fundamental modeling tool in statistics and related
fields. In this paper, we study an important variant of linear regression in
which the predictor-response pairs are partially mismatched. We use an
optimization formulation to simultaneously learn the underlying regression
coefficients and the permutation corresponding to the mismatches. The
combinatorial structure of the problem leads to computational challenges. We
propose and study a simple greedy local search algorithm for this optimization
problem that enjoys strong theoretical guarantees and appealing computational
performance. We prove that under a suitable scaling of the number of mismatched
pairs compared to the number of samples and features, and under certain
assumptions on the problem data, our local search algorithm converges to a nearly-optimal
solution at a linear rate. In particular, in the noiseless case, our algorithm
converges to the global optimal solution with a linear convergence rate. We
also propose an approximate local search step that allows us to scale our
approach to much larger instances. We conduct numerical experiments to gather
further insights into our theoretical results and show promising performance
gains compared to existing approaches.
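A simplified sketch of the greedy local-search idea follows: alternately refit the regression and apply the single swap of two response positions that most reduces the residual sum of squares. This is an illustrative variant, not the exact algorithm or its theoretical setting.

```python
# Sketch: greedy local search for mismatched linear regression.
import numpy as np

def greedy_local_search(X, y, max_iters=100):
    perm = np.arange(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(max_iters):
        beta, *_ = np.linalg.lstsq(X, y[perm], rcond=None)
        yhat = X @ beta
        yp = y[perm]
        # RSS reduction from swapping (i, j) is -2 (yp_i - yp_j)(yhat_i - yhat_j)
        best_gain, best_pair = 1e-10, None
        for i in range(len(y)):
            for j in range(i + 1, len(y)):
                gain = -2 * (yp[i] - yp[j]) * (yhat[i] - yhat[j])
                if gain > best_gain:
                    best_gain, best_pair = gain, (i, j)
        if best_pair is None:
            break                                  # local optimum reached
        i, j = best_pair
        perm[i], perm[j] = perm[j], perm[i]
    return perm, beta

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 3))
y = X @ np.array([1.0, -2.0, 0.5])
y[[3, 17]] = y[[17, 3]]                            # introduce a mismatch
perm, beta = greedy_local_search(X, y)             # recovers it in the noiseless case
```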
|
This is a PhD Thesis on the connection between subfactors (more precisely,
their corresponding fusion categories) and Conformal Field Theory (CFT).
Besides being a mathematically interesting topic on its own, subfactors have
also attracted the attention of physicists, since there is a conjectured
correspondence between these and CFTs. Although there is quite a persuasive
body of evidence for this conjecture, there are some gaps: there exists a set
of exceptional subfactors with no known counterpart CFT. Hence, it is necessary
to develop new techniques for building a CFT from a subfactor. Here, it is
useful to study the underlying mathematical structure in more detail: The even
parts of every subfactor give rise to two Unitary Fusion Categories (UFCs), and
it is a promising direction to study quantum spin systems constructed from
these categories to find a connection to CFTs. The simplest example that
requires new techniques for building a CFT is the Haagerup subfactor, since it
is the smallest subfactor with index larger than 4. In this thesis, we
investigate the question of whether there is a CFT corresponding to the Haagerup
subfactor via lattice models in one and two dimensions. The first task here is
to find the F-symbols of the fusion category since these are crucial
ingredients for the construction of a physical model in all of the models we
consider in this thesis. We then investigate microscopic models such as the
golden chain model and the Levin-Wen model in order to find evidence for a
corresponding CFT. We find that there is no evidence for a corresponding CFT
from the investigation of the UFCs directly and it is necessary to expand these
studies to the corresponding unitary modular tensor category, which can, for
instance, be obtained via the excitations of the Levin-Wen model.
|
We study a mathematical model capturing the support/resistance line method (a
technique in technical analysis) where the underlying stock price transitions
between two states of nature in a path-dependent manner. For optimal stopping
problems with respect to a general class of reward functions and dynamics,
using probabilistic methods, we show that the value function is $C^1$ and
solves a general free boundary problem. Moreover, for a wide range of
utilities, we prove that the best time to buy and sell the stock is obtained by
solving free boundary problems corresponding to two linked optimal stopping
problems. We use this to numerically compute optimal trading strategies for
several types of dynamics and varying degrees of relative risk aversion. We
then compare the strategies with the standard trading rule to investigate the
viability of this form of technical analysis.
|
We give a new criterion guaranteeing existence of model structures
left-induced along a functor admitting both adjoints. This works under the
hypothesis that the functor induces idempotent adjunctions at the homotopy
category level. As an application, we construct new model structures on cubical
sets, prederivators, marked simplicial sets and simplicial spaces modeling
$\infty$-categories and $\infty$-groupoids.
|