In a previous paper, second- and fourth-order explicit symplectic integrators
were designed for a Hamiltonian of the Schwarzschild black hole. Following this
work, we explore the possibility of constructing explicit symplectic
integrators for a Hamiltonian of charged particles moving around a
Reissner-Nordstrom black hole with an external magnetic field. Such explicit
symplectic methods are still available when the Hamiltonian is separated into
five independently integrable parts with analytical solutions as explicit
functions of proper time. Numerical tests show that the proposed algorithms
share the desirable properties in their long-term stability, precision and
efficiency for appropriate choices of step sizes. For the applicability of one
of the new algorithms, the effects of the black hole's charge, the Coulomb part
of the electromagnetic potential and the magnetic parameter on the dynamical
behavior are surveyed. Under some circumstances, the extent of chaos becomes
stronger as the magnetic parameter increases, as seen from the global
phase-space structure. The regular and chaotic dynamics of the particles'
orbits are considerably more sensitive to variations of the Coulomb part than
to variations of the black hole's charge. A positive Coulomb part induces
chaos more easily than a negative one.
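For illustration of the splitting idea only (a generic separable Hamiltonian, not the Reissner-Nordstrom Hamiltonian of this paper), a second-order explicit symplectic integrator can be built by composing the exact flows of the integrable parts:

```python
# Minimal sketch (not the paper's black-hole Hamiltonian): a second-order
# explicit symplectic integrator built by composing the exact flows of the two
# integrable parts of a separable Hamiltonian H(q, p) = p**2/2 + V(q).
import numpy as np

def flow_T(q, p, h):
    """Exact flow of H1 = p^2/2: drift in position."""
    return q + h * p, p

def flow_V(q, p, h, dVdq):
    """Exact flow of H2 = V(q): kick in momentum."""
    return q, p - h * dVdq(q)

def strang_step(q, p, h, dVdq):
    """Second-order symmetric composition (leapfrog/Strang splitting)."""
    q, p = flow_V(q, p, h / 2, dVdq)
    q, p = flow_T(q, p, h)
    q, p = flow_V(q, p, h / 2, dVdq)
    return q, p

# Example: pendulum V(q) = -cos(q); the energy error stays bounded over long times.
dVdq = np.sin
q, p, h = 1.0, 0.0, 0.01
for _ in range(100_000):
    q, p = strang_step(q, p, h, dVdq)
print(q, p)
```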
|
Heterogeneous teams of robots, leveraging a balance between autonomy and
human interaction, bring powerful capabilities to the problem of exploring
dangerous, unstructured subterranean environments. Here we describe the
solution developed by Team CSIRO Data61, consisting of CSIRO, Emesent and
Georgia Tech, during the DARPA Subterranean Challenge. The systems presented here
were fielded in the Tunnel Circuit in August 2019, the Urban Circuit in
February 2020, and in our own Cave event, conducted in September 2020. A unique
capability of the fielded team is the homogeneous sensing of the platforms
utilised, which is leveraged to obtain a decentralised multi-agent SLAM
solution on each platform (both ground agents and UAVs) using peer-to-peer
communications. This enabled a shift in focus from constructing a pervasive
communications network to relying on multi-agent autonomy, motivated by
experiences in early circuit events. These experiences also showed the
surprising capability of rugged tracked platforms for challenging terrain,
which in turn led to the heterogeneous team structure based on a BIA5 OzBot
Titan ground robot and an Emesent Hovermap UAV, supplemented by smaller tracked
or legged ground robots. The ground agents use a common CatPack perception
module, which allowed reuse of the perception and autonomy stack across all
ground agents with minimal adaptation.
|
Tin(II) sulfide (SnS), an analogue of black phosphorus (BP), has recently
emerged as an attractive building block for electronic devices due to its
highly anisotropic response. Two-dimensional (2D) SnS has been shown to exhibit
in-plane anisotropy in optical and electrical properties. However, the
limitations in growing ultrasmall structures of SnS hinder the experimental
exploration of anisotropic behavior in low dimensions. Here, we present an
elegant approach to
synthesizing highly crystalline nanometer-sized SnS sheets. Ultrasmall SnS
exhibits two distinct valleys along armchair and zig-zag directions due to
in-plane structural anisotropy like bulk SnS. We show that in such SnS
nanosheet dots, the band gaps corresponding to two valleys are increased due to
quantum confinement effect. We particularly observe that SnS quantum dots (QDs)
show excitation energy dependent photoluminescence (PL), which originates from
the two nondegenerate valleys. Our work may open up an avenue to show the
potential of SnS QDs for new functionalities in electronics and
optoelectronics.
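For orientation only (not from the paper), the size dependence of a confined band gap is often estimated with the effective-mass (Brus-type) expression for a spherical dot of radius $R$, where the effective masses $m_e^*$, $m_h^*$ and dielectric constant $\varepsilon$ are assumed inputs:
$$E_g^{\mathrm{QD}}(R) \approx E_g^{\mathrm{bulk}} + \frac{\hbar^2\pi^2}{2R^2}\left(\frac{1}{m_e^*}+\frac{1}{m_h^*}\right) - \frac{1.8\,e^2}{4\pi\varepsilon\varepsilon_0 R},$$
so smaller dots exhibit larger gaps for both the armchair and zig-zag valleys.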
|
We consider the typical behaviour of random dynamical systems of
order-preserving interval homeomorphisms with a positive Lyapunov exponent
condition at the endpoints. Our study removes any requirement for continuous
differentiability save the existence of finite derivatives of the
homeomorphisms at the endpoints of the interval. We construct a suitable Baire
space structure for this class of systems. Generically within our Baire space,
we show that the stationary measure is singular with respect to the Lebesgue
measure, but has full support on $[0,1]$. This provides an answer to a question
raised by Alsed\`a and Misiurewicz.
|
Consumer applications are becoming increasingly smarter and most of them have
to run on device ecosystems. Potential benefits include, for example, enabling
cross-device interaction and seamless user experiences. Essential for today's
smart solutions with high performance are machine learning models. However,
these models are often developed separately by AI engineers for one specific
device and do not consider the challenges and potentials associated with a
device ecosystem in which their models have to run. We believe that there is a
need for tool-support for AI engineers to address the challenges of
implementing, testing, and deploying machine learning models for a next
generation of smart interactive consumer applications. This paper presents
preliminary results of a series of inquiries, including interviews with AI
engineers and experiments for an interactive machine learning use case with a
Smartwatch and Smartphone. We identified themes through the interviews and
hands-on experience working on our use case, and we propose features, such as
data collection from sensors and easy testing of the resource consumption of
pre-processing code running on the target device, which will serve as
tool-support for AI engineers.
|
The aim of this article is to show the global existence of both martingale
and pathwise solutions of stochastic equations with a monotone operator, of the
Ladyzenskaya-Smagorinsky type, driven by a general Levy noise. The classical
approach based on using directly the Galerkin approximation is not valid.
Instead, our approach is based on using appropriate approximations for the
monotone operator, Galerkin approximations and on the theory of martingale
solutions.
|
Machine learning has been proven to be effective in various application
areas, such as object and speech recognition on mobile systems. Since a
critical key to machine learning success is the availability of large training
data, many datasets are being disclosed and published online. From a data
consumer or manager point of view, measuring data quality is an important first
step in the learning process. We need to determine which datasets to use,
update, and maintain. However, not many practical ways to measure data quality
are available today, especially when it comes to large-scale high-dimensional
data, such as images and videos. This paper proposes two data quality measures
that can compute class separability and in-class variability, the two important
aspects of data quality, for a given dataset. Classical data quality measures
tend to focus only on class separability; however, we suggest that in-class
variability is another important data quality factor. We provide efficient
algorithms to compute our quality measures based on random projections and
bootstrapping with statistical benefits on large-scale high-dimensional data.
In experiments, we show that our measures are compatible with classical
measures on small-scale data and can be computed much more efficiently on
large-scale high-dimensional datasets.
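As a rough illustration of the idea (the statistics and function names below are assumptions for illustration, not the paper's definitions), separability and in-class variability proxies can be estimated cheaply on random projections with bootstrapping:

```python
# Illustrative sketch only: simple separability / in-class variability proxies
# computed on random projections with bootstrapping. The statistics below are
# assumptions, not the paper's actual measures.
import numpy as np

def random_projection(X, k, rng):
    """Project d-dimensional rows of X down to k dimensions."""
    P = rng.normal(size=(X.shape[1], k)) / np.sqrt(k)
    return X @ P

def separability(X, y):
    """Between-class vs. within-class scatter (higher = more separable)."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    between = sum(np.sum(y == c) * np.sum((X[y == c].mean(axis=0) - mu) ** 2)
                  for c in classes)
    within = sum(np.sum((X[y == c] - X[y == c].mean(axis=0)) ** 2)
                 for c in classes)
    return between / within

def in_class_variability(X, y):
    """Average per-class variance (higher = more diverse samples per class)."""
    return np.mean([X[y == c].var(axis=0).mean() for c in np.unique(y)])

def bootstrap_measures(X, y, k=32, n_boot=20, seed=0):
    rng = np.random.default_rng(seed)
    sep, var = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
        Z = random_projection(X[idx], k, rng)        # cheap low-dimensional sketch
        sep.append(separability(Z, y[idx]))
        var.append(in_class_variability(Z, y[idx]))
    return np.mean(sep), np.mean(var)

# Example on synthetic data
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 512)), rng.normal(2, 1, (200, 512))])
y = np.array([0] * 200 + [1] * 200)
print(bootstrap_measures(X, y))
```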
|
The population of elderly people has been increasing rapidly around the world
over the past decades. Solutions that effectively support elderly people in
living independently at home are thus urgently needed.
Ambient assisted living (AAL) aims to provide products and services with
ambient intelligence to build a safe environment around people in need. With
the high prevalence of multiple chronic diseases, the elderly people often need
different levels of care management to prolong independent living at home. An
effective AAL system should provide the required clinical support as an
extension to the services provided in hospitals. Following the rapid growth of
available data, together with the wide application of machine learning
technologies, we are now able to build intelligent ambient assisted systems to
fulfil such a request. This paper discusses different levels of intelligence in
AAL. We also introduce our solution for building an intelligent AAL system with
the discussed technologies. Taking semantic web technology as its backbone,
such an AAL system is able to aggregate information from different sources,
solve the semantic gap between different data sources, and perform adaptive and
personalized carepath management based on the ambient environment.
|
Following the theory of information measures based on the cumulative
distribution function, we propose the fractional generalized cumulative
entropy, and its dynamic version. These entropies are particularly suitable to
deal with distributions satisfying the proportional reversed hazard model. We
study the connection with fractional integrals, and some bounds and comparisons
based on stochastic orderings, which allow us to show that the proposed measure is
actually a variability measure. The investigation also involves various notions
of reliability theory, since the considered dynamic measure is a suitable
extension of the mean inactivity time. We also introduce the empirical
generalized fractional cumulative entropy as a non-parametric estimator of the
new measure. It is shown that the empirical measure converges to the proposed
notion almost surely. Then, we address the stability of the empirical measure
and provide an example of application to real data. Finally, a central limit
theorem is established under the exponential distribution.
|
The most commonly used interface between a video game and the human user is a
handheld "game controller", "game pad", or in some occasions an "arcade stick."
Directional pads, analog sticks and buttons - both digital and analog - are
linked to in-game actions. One or multiple simultaneous inputs may be necessary
to communicate the intentions of the user. Activating controls may be more or
less convenient depending on their position and size. In order to enable the
user to perform all inputs which are necessary during gameplay, it is thus
imperative to find a mapping between in-game actions and buttons, analog
sticks, and so on. We present simple formats for such mappings as well as for
the constraints on possible inputs, which are either determined by a physical
game controller or required to be met by a piece of game software, along with
methods to transform said constraints via a button-action mapping and to check
one constraint set against another, i.e., to check whether a button-action
mapping allows a controller to be used in conjunction with a given game while
preserving all desired properties.
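A minimal sketch of such a format and check, using assumed data structures rather than the paper's exact formats:

```python
# Illustrative sketch with assumed data structures (not the paper's formats):
# a controller declares which inputs can be active simultaneously, a game
# declares which action combinations it needs, and a button-action mapping is
# checked by translating the game's requirements onto the controller.
controller_constraints = {
    # maximal sets of inputs that can physically be pressed at the same time
    "simultaneous": [{"A", "B", "LS"}, {"A", "DPAD_UP"}],
}

game_requirements = {
    # action combinations the game may demand during play
    "needed": [{"jump"}, {"jump", "move"}, {"attack", "move"}],
}

mapping = {"jump": "A", "attack": "B", "move": "LS"}

def supported(button_set, constraints):
    """A set of buttons is usable if some allowed simultaneous set covers it."""
    return any(button_set <= allowed for allowed in constraints["simultaneous"])

def mapping_is_valid(mapping, game, controller):
    for actions in game["needed"]:
        buttons = {mapping[a] for a in actions}
        if len(buttons) < len(actions):          # two actions share a button
            return False
        if not supported(buttons, controller):   # combination not pressable
            return False
    return True

print(mapping_is_valid(mapping, game_requirements, controller_constraints))
```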
|
Safety constraints and optimality are important, but sometimes conflicting
criteria for controllers. Although these criteria are often solved separately
with different tools to maintain formal guarantees, it is also common practice
in reinforcement learning to simply modify reward functions by penalizing
failures, with the penalty treated as a mere heuristic. We rigorously examine
the relationship of both safety and optimality to penalties, and formalize
sufficient conditions for safe value functions: value functions that are both
optimal for a given task, and enforce safety constraints. We reveal the
structure of this relationship through a proof of strong duality, showing that
there always exists a finite penalty that induces a safe value function. This
penalty is not unique, but upper-unbounded: larger penalties do not harm
optimality. Although it is often not possible to compute the minimum required
penalty, we reveal clear structure of how the penalty, rewards, discount
factor, and dynamics interact. This insight suggests practical, theory-guided
heuristics to design reward functions for control problems where safety is
important.
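A toy sketch of the penalty mechanism on an assumed four-state MDP (not from the paper): once the failure penalty is large enough, the optimal policy becomes safe, and increasing the penalty further leaves it unchanged.

```python
# Toy sketch (assumed MDP, not from the paper): value iteration on a four-state
# problem where an unsafe shortcut pays an immediate reward. Penalizing the
# failure enough flips the optimal policy to the safe path, and increasing the
# penalty further leaves that safe policy unchanged (the penalty is upper-unbounded).
import numpy as np

gamma = 0.9
# transitions[s][a] = (next_state, reward); states 2 (goal) and 3 (unsafe) are terminal
transitions = {
    0: {0: (1, 0.0), 1: (3, 2.0)},   # action 1 is a tempting but unsafe shortcut
    1: {0: (2, 1.0), 1: (2, 1.0)},
}

def value_iteration(penalty, iters=200):
    V = np.zeros(4)
    policy = {}
    for _ in range(iters):
        for s, acts in transitions.items():
            q = {a: (r - penalty * (s2 == 3)) + gamma * V[s2]
                 for a, (s2, r) in acts.items()}
            policy[s] = max(q, key=q.get)
            V[s] = q[policy[s]]
    return policy

for p in (0.0, 2.0, 1000.0):
    print(f"penalty={p}: policy={value_iteration(p)}")
# penalty=0.0 chooses the unsafe shortcut; any sufficiently large penalty
# yields the same safe policy.
```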
|
This paper critically examines arguments against independence, a measure of
group fairness also known as statistical parity and as demographic parity. In
recent discussions of fairness in computer science, some have maintained that
independence is not a suitable measure of group fairness. This position is at
least partially based on two influential papers (Dwork et al., 2012, Hardt et
al., 2016) that provide arguments against independence. We revisit these
arguments, and we find that the case against independence is rather weak. We
also give arguments in favor of independence, showing that it plays a
distinctive role in considerations of fairness. Finally, we discuss how to
balance different fairness considerations.
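For reference, independence is commonly operationalized on a finite sample as equality of positive prediction rates across groups; a minimal sketch on synthetic data:

```python
# Minimal sketch: independence (statistical/demographic parity) asks that the
# predicted positive rate be the same across groups; the "gap" below is a
# common way to quantify violations on a finite sample.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Synthetic example: a classifier accepts ~50% of group 0 but only ~30% of group 1.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)
y_pred = np.where(group == 0, rng.random(10_000) < 0.5, rng.random(10_000) < 0.3)
print(demographic_parity_gap(y_pred, group))   # roughly 0.2
```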
|
The International Olympiad in Cryptography NSUCRYPTO is the unique Olympiad
containing scientific mathematical problems for professionals, school and
university students from any country. Its aim is to involve young researchers
in solving curious and tough scientific problems of modern cryptography. In
2020, it was held for the seventh time. Prizes and diplomas were awarded to 84
participants in the first round and 49 teams in the second round from 32
countries. In this paper, problems and their solutions of NSUCRYPTO'2020 are
presented. We consider problems related to attacks on ciphers and hash
functions, protocols, permutations, primality tests, etc. We discuss several
open problems on JPEG encoding, Miller -- Rabin primality test, special bases
in the vector space, AES-GCM. The problem of a modified Miller -- Rabin
primality test was solved during the Olympiad. The problem for finding special
bases was partially solved.
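For reference, the standard (unmodified) Miller-Rabin test reads as follows; the Olympiad problem concerns a modified variant whose details are not given here.

```python
# Standard Miller-Rabin probabilistic primality test (the Olympiad problem
# concerns a *modified* variant, not reproduced here).
import random

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses compositeness
    return True                   # probably prime

print(is_probable_prime(2**61 - 1))   # True (a Mersenne prime)
print(is_probable_prime(3215031751))  # False with overwhelming probability (composite)
```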
|
We give two examples which show that rational nef and anti-nef polytopes are
not uniform even for klt surface pairs, answering a question of Chen-Han. We
also show that rational nef polytopes are uniform when the Cartier indices are
uniformly bounded.
|
Let $G=(V,E)$ be a simple graph. A dominating set of $G$ is a subset
$D\subseteq V$ such that every vertex not in $D$ is adjacent to at least one
vertex in $D$. The cardinality of a smallest dominating set of $G$, denoted by
$\gamma(G)$, is the domination number of $G$. A dominating set $D$ is an
accurate dominating set of $G$, if no $|D|$-element subset of $V\setminus D$ is
a dominating set of $G$. The accurate domination number, $\gamma_a(G)$, is the
cardinality of a smallest accurate dominating set $D$. In this paper, after
presenting preliminaries, we count the number of accurate dominating sets of
some specific graphs.
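A brute-force sketch of these definitions, practical only for small graphs:

```python
# Brute-force sketch of the definitions above (small graphs only): D is
# dominating if every vertex outside D has a neighbour in D, and accurate
# dominating if no |D|-element subset of V \ D is itself dominating.
from itertools import combinations

def is_dominating(adj, D):
    return all(v in D or any(u in D for u in adj[v]) for v in adj)

def is_accurate_dominating(adj, D):
    if not is_dominating(adj, D):
        return False
    rest = [v for v in adj if v not in D]
    return not any(is_dominating(adj, set(S)) for S in combinations(rest, len(D)))

def accurate_domination_number(adj):
    V = list(adj)
    for k in range(1, len(V) + 1):
        if any(is_accurate_dominating(adj, set(S)) for S in combinations(V, k)):
            return k

# Example: for the 4-cycle C4, gamma(C4) = 2 but gamma_a(C4) = 3, because the
# complement of every 2-element dominating set is itself dominating.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(accurate_domination_number(C4))   # 3
```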
|
We construct conforming finite elements for the spaces
$H(\text{sym}\,\text{Curl})$ and $H(\text{dev}\,\text{sym}\,\text{Curl})$.
Those are spaces of matrix-valued functions with symmetric or
deviatoric-symmetric $\text{Curl}$ in a Lebesgue space, and they appear in
various models of nonstandard solid mechanics. The finite elements are not
$H(\text{Curl})$-conforming. We show the construction, prove conformity and
unisolvence, and point out optimal approximation error bounds.
|
Assessment of replicability is critical to ensure the quality and rigor of
scientific research. In this paper, we discuss inference and modeling
principles for replicability assessment. Targeting distinct application
scenarios, we propose two types of Bayesian model criticism approaches to
identify potentially irreproducible results in scientific experiments. They are
motivated by established Bayesian prior and posterior predictive model-checking
procedures and generalize many existing replicability assessment methods.
Finally, we discuss the statistical properties of the proposed replicability
assessment approaches and illustrate their usages by simulations and examples
of real data analysis, including the data from the Reproducibility Project:
Psychology and a systematic review of impacts of pre-existing cardiovascular
disease on COVID-19 outcomes.
|
This paper proposes a macroscopic model to describe the equilibrium
distribution of passenger arrivals for the morning commute problem in a
congested urban rail transit system. We use a macroscopic train operation
sub-model developed by Seo et al. (2017a,b) to express the interaction between
the dynamics of passengers and trains in a simplified manner while maintaining
their essential physical relations. The equilibrium conditions of the proposed
model are derived and a solution method is provided. The characteristics of the
equilibrium are then examined through analytical discussion and numerical
examples. As an application of the proposed model, we analyze a simple
time-dependent timetable optimization problem with equilibrium constraints and
reveal that a "capacity increasing paradox" exists such that a higher dispatch
frequency can increase the equilibrium cost. Furthermore, insights into the
design of the timetable are obtained and the timetable influence on passengers'
equilibrium travel costs are evaluated.
|
5G D2D Communication promises improvements in energy and spectral efficiency,
overall system capacity, and higher data rates. However, to achieve optimum
results it is important to select wisely the Transmission mode of the D2D
Device to form clusters in the most fruitful positions in terms of Sum Rate and
Power Consumption. Towards this end, this paper investigates the use of
Distributed Artificial Intelligence (DAI) and of Machine Learning (ML)
approaches that are new to D2D, in order to achieve satisfactory results in
terms of Spectral Efficiency (SE), Power Consumption (PC) and execution time,
through the creation of clusters and a backhauling D2D network under an
existing Base Station/Small Cell. Additionally, one of the major factors that
affects the creation of high-quality clusters in a D2D network is the number of
Devices. Therefore, this paper focuses on a small number of Devices (<=200),
with the purpose of identifying the limits of each approach in terms of the
number of devices: specifically, to identify where it is beneficial to form a
cluster, to investigate the critical point at which the gains increase rapidly,
and finally to examine whether the 5G requirements can be satisfied.
Distributed Artificial Intelligence (DAI) Solution/Framework in D2D and a DAIS
Transmission Mode Selection (TMS) plan. In this paper, DAIS is further
examined, improved in terms of threshold evaluation, evaluated, and compared
with other (AI/ML) approaches. The results obtained demonstrate the exceptional
performance of DAIS compared to all other related approaches in terms of SE,
PC, execution time and cluster formation efficiency. The results also show that
the investigated AI/ML approaches are beneficial for Transmission Mode
Selection (TMS) in 5G D2D communication, even with smaller numbers of devices
as lower limits (i.e., >=5 devices for D2D Relay and >=50 for D2D Multi-Hop
Relay).
|
For any nonconstant f,g in C(x) such that the numerator H(x,y) of f(x)-g(y)
is irreducible, we compute the genus of the normalization of the curve
H(x,y)=0. We also prove an analogous formula in arbitrary characteristic when f
and g have no common wildly ramified branch points, and generalize to (possibly
reducible) fiber products of nonconstant morphisms of curves f:A-->D and
g:B-->D.
|
This paper introduces a feedback-based temperature controller design for
intelligent regulation of food internal temperature inside of standard
convection ovens. Typical convection ovens employ an open-loop control system
that requires a person to estimate the amount of time needed to cook foods in
order to achieve desired internal temperatures. This approach, however, can
produce undesired outcomes, with final food temperatures that are too high or
too low, due to the inherent difficulty of accurately predicting required
cooking times without continuously measuring internal states and accounting for
noise in the system. By introducing a feedback controller with
a full-order Luenberger observer to create a closed-loop system, an oven can be
instrumented to measure and regulate the internal temperatures of food in order
to automatically control oven heat output and confidently achieve desired
results.
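A minimal sketch of the observer idea on an assumed two-state thermal model (the matrices, gains and setpoint are illustrative, not the paper's identified model):

```python
# Minimal sketch with an assumed two-state linear thermal model (oven air
# temperature and food core temperature); the model, gains and setpoint are
# illustrative, not from the paper. A full-order Luenberger observer estimates
# the unmeasured state, and state feedback acts on the estimate.
import numpy as np

A = np.array([[0.98, 0.00],                 # oven air cools slowly
              [0.02, 0.97]])                # food heats from the air
B = np.array([[0.5], [0.0]])                # heater acts on the air node
C = np.array([[0.0, 1.0]])                  # only the food probe is measured

K = np.array([[0.30, 0.40]])                # state-feedback gain (assumed)
L = np.array([[0.20], [0.30]])              # observer gain (assumed)
x_ref = np.array([[0.0], [70.0]])           # regulate the food node towards 70

x = np.zeros((2, 1))                        # true state
x_hat = np.zeros((2, 1))                    # observer state
for k in range(600):
    u = K @ (x_ref - x_hat)                 # feedback on the *estimated* state
    y = C @ x                               # measurement
    x = A @ x + B @ u                       # plant update
    # Luenberger observer: model copy corrected by the output error
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)

# The estimate converges to the true food temperature; proportional feedback
# alone leaves a steady-state offset from x_ref (no integral action here).
print(x[1, 0], x_hat[1, 0])
```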
|
Stabilized cat codes can provide a biased noise channel with a set of
bias-preserving (BP) gates, which can significantly reduce the resource
overhead for fault-tolerant quantum computing. All existing schemes of BP
gates, however, require adiabatic quantum evolution, with performance limited
by excitation loss and non-adiabatic errors during the adiabatic gates. In this
work, we apply a derivative-based leakage suppression technique to overcome
non-adiabatic errors, so that we can implement fast BP gates on Kerr-cat qubits
with improved gate fidelity while maintaining high noise bias. When applied to
concatenated quantum error correction, the fast BP gates can not only improve
the logical error rate but also reduce resource overhead, which enables more
efficient implementation of fault-tolerant quantum computing.
|
We study some asymptotic properties of cylinder processes in the plane
defined as union sets of dilated straight lines (appearing as mutually
overlapping infinitely long strips) derived from a stationary independently
marked point process on the real line, where the marks describe thickness and
orientation of individual cylinders. Such cylinder processes form an important
class of (in general non-stationary) planar random sets. We observe the
cylinder process in an unboundedly growing domain $\rho K$ when $\rho \to
\infty\,$, where the set $K$ is compact and star-shaped w.r.t. the origin ${\bf
o}$ being an inner point of $K$. Provided the unmarked point process satisfies
a Brillinger-type mixing condition and the thickness of the typical cylinder
has a finite second moment we prove a (weak) law of large numbers as well as a
formula of the asymptotic variance for the area of the cylinder process in
$\rho K$. Due to the long-range dependencies of the cylinder process, this
variance increases proportionally to $\rho^3$.
|
Depth information matters in RGB-D semantic segmentation task for providing
additional geometric information to color images. Most existing methods exploit
a multi-stage fusion strategy to propagate depth feature to the RGB branch.
However, at the very deep stage, the propagation in a simple element-wise
addition manner can not fully utilize the depth information. We propose
Global-Local propagation network (GLPNet) to solve this problem. Specifically,
a local context fusion module(L-CFM) is introduced to dynamically align both
modalities before element-wise fusion, and a global context fusion
module(G-CFM) is introduced to propagate the depth information to the RGB
branch by jointly modeling the multi-modal global context features. Extensive
experiments demonstrate the effectiveness and complementarity of the proposed
fusion modules. Embedding two fusion modules into a two-stream encoder-decoder
structure, our GLPNet achieves new state-of-the-art performance on two
challenging indoor scene segmentation datasets, i.e., NYU-Depth v2 and SUN-RGBD
dataset.
|
We prove the Second Vanishing Theorem for local cohomology modules of an
unramified regular local ring in its full generality and provide a new proof of
the Second Vanishing Theorem in prime characteristic $p$. As an application of
our vanishing theorem for unramified regular local rings, we extend our
topological characterization of the highest Lyubeznik number of an
equi-characteristic local ring to the setting of mixed characteristic. An upper
bound of local cohomological dimension in mixed characteristic is also obtained
by partially extending Lyubeznik's vanishing theorem in prime characteristic
$p$ to mixed characteristic.
|
Recent X-ray observations by Jiang et al. have identified an active galactic
nucleus (AGN) in the bulgeless spiral galaxy NGC 3319, located just
$14.3\pm1.1\,$Mpc away, and suggest the presence of an intermediate-mass black
hole (IMBH; $10^2\leq M_\bullet/\mathrm{M_{\odot}}\leq10^5$) if the Eddington
ratio is between 3 and $3\times10^{-3}$. In an effort to refine the black
hole mass for this (currently) rare class of object, we have explored multiple
black hole mass scaling relations, such as those involving the (not previously
used) velocity dispersion, logarithmic spiral-arm pitch angle, total galaxy
stellar mass, nuclear star cluster mass, rotational velocity, and colour of NGC
3319, to obtain ten mass estimates, of differing accuracy. We have calculated a
mass of $3.14_{-2.20}^{+7.02}\times10^4\,\mathrm{M_\odot}$, with a confidence
of 84% that it is $\leq$$10^5\,\mathrm{M_\odot}$, based on the combined
probability density function from seven of these individual estimates. Our
conservative approach excluded two black hole mass estimates (via the nuclear
star cluster mass, and the fundamental plane of black hole activity
$\unicode{x2014}$ which only applies to black holes with low accretion rates)
that were upper limits of $\sim$$10^5\,{\rm M}_{\odot}$, and it did not use the
$M_\bullet\unicode{x2013}L_{\rm 2-10\,keV}$ relation's prediction of
$\sim$$10^5\,{\rm M}_{\odot}$. This target provides an exceptional opportunity
to study an IMBH in AGN mode and advance our demographic knowledge of black
holes. Furthermore, we introduce our novel method of meta-analysis as a
beneficial technique for identifying new IMBH candidates by quantifying the
probability that a galaxy possesses an IMBH.
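The abstract does not specify how the seven estimates are combined; the sketch below simply averages normalized Gaussian PDFs in log-mass space with placeholder values, purely to illustrate how a combined probability such as $P(M_\bullet\leq10^5\,\mathrm{M_\odot})$ can be computed.

```python
# Illustrative sketch only: combining several individual black hole mass
# estimates into one PDF. The combination rule and all means/scatters below are
# placeholder assumptions, not the paper's values or procedure.
import numpy as np

log_m = np.linspace(2, 7, 2001)                      # log10(M / M_sun) grid

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical individual estimates: (mean log-mass, scatter in dex)
estimates = [(4.2, 0.6), (4.6, 0.8), (4.4, 0.5), (5.0, 0.9),
             (4.1, 0.7), (4.7, 0.6), (4.5, 0.8)]

combined = np.mean([gaussian_pdf(log_m, mu, s) for mu, s in estimates], axis=0)
combined /= np.trapz(combined, log_m)                # renormalize

p_imbh = np.trapz(combined[log_m <= 5], log_m[log_m <= 5])
print(f"P(M <= 1e5 M_sun) = {p_imbh:.2f}")
```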
|
We propose a novel method to extract global and local features of functional
time series. The global features concerning the dominant modes of variation
over the entire function domain, and local features of function variations over
particular short intervals within the function domain, are both important in
functional data analysis. Functional principal component analysis (FPCA),
though a key feature extraction tool, only focuses on capturing the dominant
global features, neglecting highly localized features. We introduce an FPCA-BTW
method that initially extracts global features of functional data via FPCA, and
then extracts local features by block thresholding of wavelet (BTW)
coefficients. Using Monte Carlo simulations, along with an empirical
application on near-infrared spectroscopy data of wood panels, we illustrate
that the proposed method outperforms competing methods including FPCA and
sparse FPCA in the estimation of functional processes. Moreover, extracted local
features inheriting serial dependence of the original functional time series
contribute to more accurate forecasts. Finally, we develop asymptotic
properties of FPCA-BTW estimators, discovering the interaction between
convergence rates of global and local features.
|
A volatility surface is an important tool for pricing and hedging
derivatives. The surface shows the volatility that is implied by the market
price of an option on an asset as a function of the option's strike price and
maturity. Often, market data is incomplete and it is necessary to estimate
missing points on partially observed surfaces. In this paper, we show how
variational autoencoders can be used for this task. The first step is to derive
latent variables that can be used to construct synthetic volatility surfaces
that are indistinguishable from those observed historically. The second step is
to determine the synthetic surface generated by our latent variables that fits
available data as closely as possible. As a dividend of our first step, the
synthetic surfaces produced can also be used in stress testing, in market
simulators for developing quantitative investment strategies, and for the
valuation of exotic options. We illustrate our procedure and demonstrate its
power using foreign exchange market data.
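A minimal sketch of the two steps on synthetic data (the architecture sizes and optimization details below are assumptions, not the paper's configuration):

```python
# Illustrative sketch: step 1 trains a small VAE on complete surfaces flattened
# to vectors; step 2 fits the latent code to a partially observed surface by
# minimizing error on the observed points only. All details are assumptions.
import torch
import torch.nn as nn

D, Z = 40, 4                                # grid points per surface, latent dim

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, 2 * Z))
        self.dec = nn.Sequential(nn.Linear(Z, 64), nn.ReLU(), nn.Linear(64, D))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return self.dec(z), mu, logvar

torch.manual_seed(0)
# Toy "surfaces": smooth curves standing in for flattened implied-vol surfaces.
grid = torch.linspace(-1, 1, D)
surfaces = 0.2 + 0.05 * torch.randn(512, 1) * grid + 0.03 * torch.rand(512, 1) * grid**2

# --- Step 1: train the VAE on complete surfaces ------------------------------
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    recon, mu, logvar = model(surfaces)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    loss = (recon - surfaces).pow(2).sum(dim=-1).mean() + kl
    opt.zero_grad(); loss.backward(); opt.step()

# --- Step 2: fit the latent code to a partially observed surface -------------
for p in model.parameters():
    p.requires_grad_(False)                  # only the latent code is optimized
target, observed = surfaces[0], torch.rand(D) < 0.3     # ~30% of points observed
z = torch.zeros(Z, requires_grad=True)
opt_z = torch.optim.Adam([z], lr=0.05)
for _ in range(300):
    fit = model.dec(z)
    loss = (fit[observed] - target[observed]).pow(2).mean()
    opt_z.zero_grad(); loss.backward(); opt_z.step()

print("error on unobserved points:",
      (model.dec(z)[~observed] - target[~observed]).abs().mean().item())
```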
|
Most methods for publishing data with privacy guarantees introduce randomness
into datasets which reduces the utility of the published data. In this paper,
we study the privacy-utility tradeoff by taking maximal leakage as the privacy
measure and the expected Hamming distortion as the utility measure. We study
three different but related problems. First, we assume that the data-generating
distribution (i.e., the prior) is known, and we find the optimal privacy
mechanism that achieves the smallest distortion subject to a constraint on
maximal leakage. Then, we assume that the prior belongs to some set of
distributions, and we formulate a min-max problem for finding the smallest
distortion achievable for the worst-case prior in the set, subject to a maximal
leakage constraint. Lastly, we define a partial order on privacy mechanisms
based on the largest distortion they generate. Our results show that when the
prior distribution is known, the optimal privacy mechanism fully discloses
symbols with the largest prior probabilities, and suppresses symbols with the
smallest prior probabilities. Furthermore, we show that sets of priors that
contain more uniform distributions lead to larger distortion, while privacy
mechanisms that distribute the privacy budget more uniformly over the symbols
create smaller worst-case distortion.
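For reference (following the information-theoretic literature; the abstract does not restate it), maximal leakage from $X$ to $Y$ is commonly defined as $\mathcal{L}(X \to Y) = \log \sum_{y} \max_{x:\,P_X(x)>0} P_{Y|X}(y \mid x)$, while the expected Hamming distortion is $\mathbb{E}[d(X,\hat{X})]$ with $d(x,\hat{x}) = \mathbf{1}\{x \neq \hat{x}\}$.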
|
We propose a novel high-fidelity expressive speech synthesis model, UniTTS,
that learns and controls overlapping style attributes avoiding interference.
UniTTS represents multiple style attributes in a single unified embedding space
by the residuals between the phoneme embeddings before and after applying the
attributes. The proposed method is especially effective in controlling multiple
attributes that are difficult to separate cleanly, such as speaker ID and
emotion, because it minimizes redundancy when adding variance in speaker ID and
emotion, and additionally, predicts duration, pitch, and energy based on the
speaker ID and emotion. In experiments, the visualization results exhibit that
the proposed methods learned multiple attributes harmoniously in a manner that
can be easily separated again. As well, UniTTS synthesized high-fidelity speech
signals controlling multiple style attributes. The synthesized speech samples
are presented at https://jackson-kang.github.io/paper_works/UniTTS/demos.
|
To construct the rotation curve of the Galaxy, classical Cepheids with proper
motions, parallaxes and line-of-sight velocities from the Gaia DR2 Catalog are
used in large part. The working sample formed from literature data contains
about 800 Cepheids with estimates of their age. We determined that the linear
rotation velocity of the Galaxy at a solar distance is $V_0=240\pm3$~km
s$^{-1}$. In this case, the distance from the Sun to the axis of rotation of
the Galaxy is found to be $R_0=8.27\pm0.10$~kpc. A spectral analysis of radial
and residual tangential velocities of Cepheids younger than 120 Myr showed
close estimates of the parameters of the spiral density wave obtained from data
both at present time and in the past. So, the value of the wavelength
$\lambda_{R,\theta}$ is in the range of [2.4--3.0] kpc, the pitch angle
$i_{R,\theta}$ is in the range of [$-13^\circ$,$-10^\circ$] for a four-arm
pattern model, the amplitudes of the radial and tangential perturbations are
$f_R\sim12$~km s$^{-1}$ and $f_\theta\sim9$~km s$^{-1}$, respectively.
Velocities of Cepheids older than 120 Myr are currently giving a wavelength
$\lambda_{R,\theta}\sim5$~kpc. This value differs significantly from one that
we obtained from the samples of young Cepheids. An analysis of positions and
velocities of old Cepheids, calculated by integrating their orbits backward in
time, made it possible to determine significantly more reliable values of the
parameters of the spiral density wave: wavelength $\lambda_{R,\theta}=2.7$~kpc,
amplitudes of radial and tangential perturbations are $f_R=7.9$~km s$^{-1}$ and
$f_\theta=5$~km s$^{-1}$, respectively.
|
Intraoperative Gamma Probe (IPG) remains the current gold standard modality
for sentinel lymph node identification and tumor removal in cancer patients.
However, even alongside optical dyes they do not meet the <5% false negative
rate (FNR) requirement, a key metric suggested by the American Society of
Clinical Oncology (ASCO). We aim to reduce the FNR by using time-of-flight
(TOF) PET detector technology in a limited-angle geometry system with only two
detector buckets in coincidence, where one small-area detector is placed above
the patient and the other, with a larger detection area, is placed just under
the patient bed. For proof of concept, we used two Hamamatsu TOF PET
detector modules (C13500-4075YC-12) featuring a 12x12 array of 4.2x4.2x20 mm3
LFS crystal pixels coupled one-to-one to silicon photomultiplier (SiPM) pixels.
The detector coincidence timing resolution (CTR) measured 271 ps FWHM for the
whole detector. We 3D printed a lesion phantom containing spheres 2-10 mm in
diameter, representing lymph nodes, and placed it inside a 10-liter warm
background water phantom. Experimental results show that with sub-minute data
acquisition, 6 mm diameter spheres can be identified in the image when a lesion
phantom with 10:1 activity ratio to background is used. Simulation results are
in good agreement with the experimental data, resolving 6 mm diameter lesions
with a 60 second acquisition time in a 25 cm deep background water
phantom with 10:1 activity ratio. The image quality improves as the CTR
improves in the simulation, and with decreasing background water phantom depth
or lesion to background activity ratio, in the experiment. With the results
presented here, we conclude that a limited-angle TOF PET detector is a major
step forward for intraoperative applications, in that it offers lesion
detectability beyond what conventional gamma- and NIR-based probes can achieve.
|
To facilitate effective human-robot interaction (HRI), trust-aware HRI has
been proposed, wherein the robotic agent explicitly considers the human's trust
during its planning and decision making. The success of trust-aware HRI depends
on the specification of a trust dynamics model and a trust-behavior model. In
this study, we proposed one novel trust-behavior model, namely the reverse
psychology model, and compared it against the commonly used disuse model. We
examined how the two models affect the robot's optimal policy and the
human-robot team performance. Results indicate that the robot will deliberately
"manipulate" the human's trust under the reverse psychology model. To correct
this "manipulative" behavior, we proposed a trust-seeking reward function that
facilitates trust establishment without significantly sacrificing the team
performance.
|
We explore variations of the dust extinction law of the Milky Way by
selecting stars from the Swift/UVOT Serendipitous Source Catalogue,
cross-matched with Gaia DR2 and 2MASS to produce a sample of 10,452 stars out
to ~4kpc with photometry covering a wide spectral window. The near ultraviolet
passbands optimally encompass the 2175A bump, so that we can simultaneously fit
the net extinction, quoted in the V band (A$_V$), the steepness of the
wavelength dependence ($\delta$) and the bump strength (E$_b$). The methodology
compares the observed magnitudes with theoretical stellar atmospheres from the
models of Coelho. Significant correlations are found between these parameters,
related to variations in dust composition, that are complementary to similar
scaling relations found in the more complex dust attenuation law of galaxies -
that also depend on the distribution of dust among the stellar populations
within the galaxy. We recover the strong anticorrelation between A$_V$ and
Galactic latitude, as well as a weaker bump strength at higher extinction.
$\delta$ is also found to correlate with latitude, with steeper laws towards
the Galactic plane. Our results suggest that variations in the attenuation law
of galaxies cannot be fully explained by dust geometry.
|
We present evidence that many common convolutional neural networks (CNNs)
trained for face verification learn functions that are nearly equivalent under
rotation. More specifically, we demonstrate that one face verification model's
embeddings (i.e. last-layer activations) can be compared directly to another
model's embeddings after only a rotation or linear transformation, with little
performance penalty. This finding is demonstrated using IJB-C 1:1 verification
across the combinations of ten modern off-the-shelf CNN-based face verification
models which vary in training dataset, CNN architecture, method of angular loss
calculation, or some combination of the three. These networks achieve a mean true
accept rate of 0.96 at a false accept rate of 0.01. When instead evaluating
embeddings generated from two CNNs, where one CNN's embeddings are mapped with
a linear transformation, the mean true accept rate drops to 0.95 using the same
verification paradigm. Restricting these linear maps to only perform rotation
produces a mean true accept rate of 0.91. These mappings' existence suggests
that a common representation is learned by models despite variation in training
or structure. We discuss the broad implications a result like this has,
including an example regarding face template security.
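A standard way to compute such a rotation between two embedding spaces is the orthogonal Procrustes solution via an SVD; a minimal sketch on synthetic embeddings (the paper's evaluation protocol is more involved):

```python
# Sketch of mapping one model's embeddings onto another's with a pure rotation
# via the orthogonal Procrustes solution (SVD). Embeddings here are synthetic;
# in the paper's setting the rows would be paired embeddings of the same images
# from two different face-verification CNNs.
import numpy as np

def procrustes_rotation(A, B):
    """Orthogonal R minimizing ||A @ R - B||_F (rows of A and B are paired)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(1000, 128))                 # embeddings from model A
true_R = np.linalg.qr(rng.normal(size=(128, 128)))[0]
emb_b = emb_a @ true_R + 0.01 * rng.normal(size=(1000, 128))   # model B ~ rotated A

R = procrustes_rotation(emb_a, emb_b)
aligned = emb_a @ R
print(np.linalg.norm(aligned - emb_b) / np.linalg.norm(emb_b))  # small residual
```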
|
This paper presents an advanced image interpolation technique based on the ant
colony algorithm (AACA) for high-resolution image scaling.
proposed algorithm and the previously proposed optimization of bilinear
interpolation based on ant colony algorithm (OBACA) is that AACA uses global
weighting, whereas OBACA uses a local weighting scheme. The strength of the
proposed global weighting of the AACA algorithm lies in employing solely the
pheromone matrix information present on any group of four adjacent pixels to
decide whether a case deserves the maximum global weight value. Experimental
results are further provided to show the superior performance of the proposed
AACA algorithm compared with the algorithms mentioned in this paper.
|
Portrait matting is an important research problem with a wide range of
applications, such as video conferencing apps, image/video editing, and
post-production. The goal is to predict an alpha matte that identifies the
effect of each pixel on the foreground subject. Traditional approaches and most
of the existing works utilized an additional input, e.g., trimap, background
image, to predict alpha matte. However, providing additional input is not
always practical. Besides, models are too sensitive to these additional inputs.
In this paper, we introduce an additional input-free approach to perform
portrait matting using Generative Adversarial Nets (GANs). We divide the main
task into two subtasks. For this, we propose a segmentation network for person
segmentation and an alpha generation network for alpha matte
prediction. While the segmentation network takes an input image and produces a
coarse segmentation map, the alpha generation network utilizes the same input
image as well as a coarse segmentation map that is produced by the segmentation
network to predict the alpha matte. Besides, we present a segmentation encoding
block to downsample the coarse segmentation map and provide feature
representation to the residual block. Furthermore, we propose a border loss to
separately penalize only the borders of the subject, which are the most
challenging regions, and we also adapt a perceptual loss for portrait matting.
To train the proposed system, we combine two different popular training
datasets to increase the amount of data as well as its diversity, addressing
domain shift problems at inference time. We tested our model on three different
benchmark datasets, namely Adobe Image Matting dataset, Portrait Matting
dataset, and Distinctions dataset. The proposed method outperformed the MODNet
method that also takes a single input.
|
The academic, socioemotional, and health impacts of school policies
throughout the COVID-19 pandemic have been a source of many important questions
that require accurate information about the extent of onsite schooling that has
been occurring throughout the pandemic. This paper investigates school
operational status data sources during the COVID-19 pandemic, comparing
self-report data collected nationally on the household level through a
Facebook-based survey with data collected at district and county levels
throughout the country. The percentage of households reporting in-person
instruction within each county is compared to the district and county data at
the state and county levels. The results show high levels of consistency
between the sources at the state level and for large counties. The consistency
levels across sources support the usage of the Facebook-based COVID-19 Symptom
Survey as a source to answer questions about the educational experiences,
factors, and impacts related to K-12 education across the nation during the
pandemic.
|
We review the theoretical aspects relevant in the description of high energy
heavy ion collisions, with an emphasis on what has been learned about the
underlying QCD phenomena from these collisions.
|
Bounded rationality is an important consideration stemming from the fact that
agents often have limits on their processing abilities, making the assumption
of perfect rationality inapplicable to many real tasks. We propose an
information-theoretic approach to the inference of agent decisions under
Smithian competition. The model explicitly captures the boundedness of agents
(limited in their information-processing capacity) as the cost of information
acquisition for expanding their prior beliefs. The expansion is measured as the
Kullback-Leibler divergence between posterior decisions and prior beliefs.
When information acquisition is free, the homo economicus agent is recovered,
while in cases when information acquisition becomes costly, agents instead
revert to their prior beliefs. The maximum entropy principle is used to infer
least-biased decisions based upon the notion of Smithian competition formalised
within the Quantal Response Statistical Equilibrium framework. The
incorporation of prior beliefs into such a framework allowed us to
systematically explore the effects of prior beliefs on decision-making in the
presence of market feedback, as well as importantly adding a temporal
interpretation to the framework. We verified the proposed model using
Australian housing market data, showing how the incorporation of prior
knowledge alters the resulting agent decisions. Specifically, it allowed for
the separation of past beliefs and utility maximisation behaviour of the agent
as well as the analysis into the evolution of agent beliefs.
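For orientation, a standard information-constrained formulation consistent with the description above (generic notation, not necessarily the paper's) is
$$p^*(a) = \arg\max_{p}\Big\{\mathbb{E}_{p}[U(a)] - \tfrac{1}{\beta}\,D_{\mathrm{KL}}(p\,\|\,p_0)\Big\} \;\propto\; p_0(a)\,e^{\beta U(a)},$$
so a vanishing information cost ($1/\beta \to 0$) recovers the homo economicus maximizer, while a large cost makes the decision revert to the prior $p_0$.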
|
Multi-agent Markov Decision Processes (MMDPs) arise in a variety of
applications including target tracking, control of multi-robot swarms, and
multiplayer games. A key challenge in MMDPs occurs when the state and action
spaces grow exponentially in the number of agents, making computation of an
optimal policy computationally intractable for medium- to large-scale problems.
One property that has been exploited to mitigate this complexity is transition
independence, in which each agent's transition probabilities are independent of
the states and actions of other agents. Transition independence enables
factorization of the MMDP and computation of local agent policies but does not
hold for arbitrary MMDPs. In this paper, we propose an approximate transition
dependence property, called $\delta$-transition dependence and develop a metric
for quantifying how far an MMDP deviates from transition independence. Our
definition of $\delta$-transition dependence recovers transition independence
as a special case when $\delta$ is zero. We develop a polynomial time algorithm
in the number of agents that achieves a provable bound on the global optimum
when the reward functions are monotone increasing and submodular in the agent
actions. We evaluate our approach on two case studies, namely, a multi-robot
control example and a multi-agent patrolling example.
|
We report on the follow-up $XMM-Newton$ observation of the persistent X-ray
pulsar CXOU J225355.1+624336, discovered with the CATS@BAR project on archival
$Chandra$ data. The source was detected at $f_{\rm X}$(0.5-10 keV) = 3.4$\times
10^{-12}$ erg cm$^{-2}$ s$^{-1}$, a flux level which is fully consistent with
the previous observations performed with $ROSAT$, $Swift$, and $Chandra$. The
measured pulse period $P$ = 46.753(3) s, compared with the previous
measurements, implies a constant spin down at an average rate $\dot P =
5.3\times 10^{-10}$ s s$^{-1}$. The pulse profile is energy dependent, showing
three peaks at low energy and a less structured profile above about 3.5 keV.
The pulsed fraction slightly increases with energy. We described the
time-averaged EPIC spectrum with four different emission models: a partially
covered power law, a cut-off power law, and a power law with an additional
thermal component (either a black body or a collisionally ionized gas). In all
cases we obtained equally good fits, so it was not possible to prefer or reject
any emission model on the statistical basis. However, we disfavour the presence
of the thermal components, since their modeled X-ray flux, resulting from a
region larger than the neutron star surface, would largely dominate the X-ray
emission from the pulsar. The phase-resolved spectral analysis showed that a
simple flux variation cannot explain the source variability and proved that it
is characterized by a spectral variability along the pulse phase. The results
of the $XMM-Newton$ observation confirmed that CXOU J225355.1+624336 is a BeXB
with a low luminosity ($L_{\rm X} \sim 10^{34-35}$ erg s$^{-1}$), limited
variability, and a constant spin down. Therefore, they reinforce the source
classification as a persistent BeXB.
|
Let us consider a Gaussian probability on a Banach space. We prove the
existence of an intermediate Banach space between the space where the Gaussian
measure lives and its RKHS. Such a space has full probability and a compact
embedding. This extends what happens with Wiener measure, where the
intermediate space can be chosen as a space of H\"older paths. From this result
it is very simple to deduce a result of exponential tightness for Gaussian
probabilities.
|
Laser cooling of matter through anti-Stokes photoluminescence, where the
emitted frequency of light exceeds that of the impinging laser by virtue of
absorption of thermal vibrational energy, has been successfully realized in
condensed media, and in particular with rare earth doped systems achieving
sub-100K solid state optical refrigeration. Studies suggest that laser cooling
in semiconductors has the potential of achieving temperatures down to ~10K and
that its direct integration can usher unique high-performance nanostructured
semiconductor devices. While laser cooling of nanostructured II-VI
semiconductors has been reported recently, laser cooling of indirect bandgap
semiconductors such as group IV silicon and germanium remains a major
challenge. Here we report on the anomalous observation of dominant anti-Stokes
photoluminescence in germanium nanocrystals. We attribute this result to the
confluence of ultra-high purity nanocrystal germanium, generation of high
density of electron-hole plasma, the inherent degeneracy of longitudinal and
transverse optical phonons in non-polar indirect bandgap semiconductors, and
commensurate spatial confinement effects. At high laser intensities, laser
cooling with lattice temperature as low as ~50K is inferred.
|
Major scandals in corporate history have urged the need for regulatory
compliance, where organizations need to ensure that their controls (processes)
comply with relevant laws, regulations, and policies. However, keeping track of
the constantly changing legislation is difficult, thus organizations are
increasingly adopting Regulatory Technology (RegTech) to facilitate the
process. To this end, we introduce regulatory information retrieval (REG-IR),
an application of document-to-document information retrieval (DOC2DOC IR),
where the query is an entire document making the task more challenging than
traditional IR where the queries are short. Furthermore, we compile and release
two datasets based on the relationships between EU directives and UK
legislation. We experiment on these datasets using a typical two-step pipeline
approach comprising a pre-fetcher and a neural re-ranker. Experimenting with
various pre-fetchers from BM25 to k nearest neighbors over representations from
several BERT models, we show that fine-tuning a BERT model on an in-domain
classification task produces the best representations for IR. We also show that
neural re-rankers under-perform due to contradicting supervision, i.e., similar
query-document pairs with opposite labels. Thus, they are biased towards the
pre-fetcher's score. Interestingly, applying a date filter further improves the
performance, showcasing the importance of the time dimension.
|
First person action recognition is an increasingly researched topic because
of the growing popularity of wearable cameras. This is bringing to light
cross-domain issues that are yet to be addressed in this context. Indeed, the
information extracted from learned representations suffers from an intrinsic
environmental bias. This strongly affects the ability to generalize to unseen
scenarios, limiting the application of current methods in real settings where
trimmed labeled data are not available during training. In this work, we
propose to leverage the intrinsic complementary nature of audio-visual
signals to learn a representation that works well on data seen during training,
while being able to generalize across different domains. To this end, we
introduce an audio-visual loss that aligns the contributions from the two
modalities by acting on the magnitude of their feature norm representations.
This new loss, plugged into a minimal multi-modal action recognition
architecture, leads to strong results in cross-domain first person action
recognition, as demonstrated by extensive experiments on the popular
EPIC-Kitchens dataset.
|
Accurate prediction of pedestrian crossing behaviors by autonomous vehicles
can significantly improve traffic safety. Existing approaches often model
pedestrian behaviors using trajectories or poses but do not offer a deeper
semantic interpretation of a person's actions or how actions influence a
pedestrian's intention to cross in the future. In this work, we follow the
neuroscience and psychological literature to define pedestrian crossing
behavior as a combination of an unobserved inner will (a probabilistic
representation of binary intent of crossing vs. not crossing) and a set of
multi-class actions (e.g., walking, standing, etc.). Intent generates actions,
and the future actions in turn reflect the intent. We present a novel
multi-task network that predicts future pedestrian actions and uses predicted
future action as a prior to detect the present intent and action of the
pedestrian. We also designed an attention relation network to incorporate
external environmental contexts, thus further improving intent and action
detection performance. We evaluated our approach on two naturalistic driving
datasets, PIE and JAAD, and extensive experiments show significantly improved
and more explainable results for both intent detection and action prediction
over state-of-the-art approaches. Our code is available at:
https://github.com/umautobots/pedestrian_intent_action_detection.
|
Let $\Delta$ denote a non-degenerate $k$-simplex in $\mathbb{R}^k$. The set
$\text{Sim}(\Delta)$ of simplices in $\mathbb{R}^k$ similar to $\Delta$ is
diffeomorphic to $O(k)\times [0,\infty)\times \mathbb{R}^k$, where the factor
in $O(k)$ is a matrix called the {\em pose}. Among $(k-1)$-spheres smoothly
embedded in $\mathbb{R}^k$ and isotopic to the identity, there is a dense
family of spheres, for which the subset of $\text{Sim}(\Delta)$ of simplices
inscribed in each embedded sphere contains a similar simplex of every pose
$U\in O(k)$. Further, the intersection of $\text{Sim}(\Delta)$ with the
configuration space of $k+1$ distinct points on an embedded sphere is a
manifold whose top homology class maps to the top class in $O(k)$ via the pose
map. This gives a high dimensional generalization of classical results on
inscribing families of triangles in plane curves. We use techniques established
in our previous paper on the square-peg problem where we viewed inscribed
simplices in spheres as transverse intersections of submanifolds of
compactified configuration spaces.
|
We propose an approach that links density functional theory (DFT) and
molecular dynamics (MD) simulation to study fluid behavior in nanopores in
contact with bulk (macropores). It consists of two principal steps. First, the
theoretical calculation of fluid composition and density distribution in
nanopore under specified thermodynamic conditions using DFT. Second, MD
simulation of the confined system with obtained characteristics. Thus, we
investigate an open system in a grand canonical ensemble. This method allows us
to investigate both structural and dynamic properties of confined fluid at
given bulk conditions and does not require computationally expensive simulation
of the bulk reservoir. In this work, we obtain equilibrium density profiles of pure
methane, ethane and carbon dioxide and their binary mixtures in slitlike
nanopores with carbon walls. Good agreement of structures obtained by theory
and simulation confirms the applicability of the proposed method.
|
Brown dwarfs with well-determined ages, luminosities, and masses provide rare
but valuable tests of low-temperature atmospheric and evolutionary models. We
present the discovery and dynamical mass measurement of a substellar companion
to HD 47127, an old ($\approx$7-10 Gyr) G5 main sequence star with a mass
similar to the Sun. Radial velocities of the host star with the Harlan J. Smith
Telescope uncovered a low-amplitude acceleration of 1.93 $\pm$ 0.08 m s$^{-1}$
yr$^{-1}$ based on 20 years of monitoring. We subsequently recovered a faint
($\Delta H$=13.14 $\pm$ 0.15 mag) co-moving companion at 1.95$''$ (52 AU) with
follow-up Keck/NIRC2 adaptive optics imaging. The radial acceleration of HD
47127 together with its tangential acceleration from Hipparcos and Gaia EDR3
astrometry provide a direct measurement of the three-dimensional acceleration
vector of the host star, enabling a dynamical mass constraint for HD 47127 B
(67.5-177 $M_\mathrm{Jup}$ at 95% confidence) despite the small fractional
orbital coverage of the observations. The absolute $H$-band magnitude of HD
47127 B is fainter than the benchmark T dwarfs HD 19467 B and Gl 229 B but
brighter than Gl 758 B and HD 4113 C, suggesting a late-T spectral type.
Altogether the mass limits for HD 47127 B from its dynamical mass and the
substellar boundary imply a range of 67-78 $M_\mathrm{Jup}$ assuming it is
single, although a preference for high masses of $\approx$100 $M_\mathrm{Jup}$
from dynamical constraints hints at the possibility that HD 47127 B could
itself be a binary pair of brown dwarfs or that another massive companion
resides closer in. Regardless, HD 47127 B will be an excellent target for more
refined orbital and atmospheric characterization in the future.
|
In this chapter, we derive and analyse models for consensus dynamics on
hypergraphs. As we discuss, unless there are nonlinear node interaction
functions, it is always possible to rewrite the system in terms of a new
network of effective pairwise node interactions, regardless of the initially
underlying multi-way interaction structure. We thus focus on dynamics based on
a certain class of non-linear interaction functions, which can model different
sociological phenomena such as peer pressure and stubbornness. Unlike for
linear consensus dynamics on networks, we show how our nonlinear model dynamics
can cause shifts away from the average system state. We examine how these
shifts are influenced by the distribution of the initial states, the underlying
hypergraph structure and different forms of non-linear scaling of the node
interaction function.
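A minimal simulation sketch of three-body consensus with a nonlinear scaling of this kind (the chapter's exact model, scaling function and parameters may differ):

```python
# Illustrative sketch (the chapter's exact model may differ): three-body
# consensus updates where the influence of a hyperedge on node i is scaled
# nonlinearly by how much the other two members agree with each other, a simple
# way to encode "peer pressure". With lam = 0 this reduces to plain averaging.
import numpy as np

rng = np.random.default_rng(0)
n, lam, dt, steps = 30, 2.0, 0.01, 2000
hyperedges = [tuple(sorted(rng.choice(n, 3, replace=False))) for _ in range(120)]
x = rng.random(n)                       # initial states in [0, 1]
print("initial mean:", x.mean())

for _ in range(steps):
    dx = np.zeros(n)
    for (i, j, k) in hyperedges:
        for a, b, c in ((i, j, k), (j, i, k), (k, i, j)):
            # the more b and c agree, the stronger their pull on a
            s = np.exp(-lam * abs(x[b] - x[c]))
            dx[a] += s * ((x[b] + x[c]) / 2 - x[a])
    x += dt * dx

print("final mean:  ", x.mean())        # can shift away from the initial mean
```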
|
Transformers are state-of-the-art models for a variety of sequence modeling
tasks. At their core is an attention function which models pairwise
interactions between the inputs at every timestep. While attention is powerful,
it does not scale efficiently to long sequences due to its quadratic time and
space complexity in the sequence length. We propose RFA, a linear time and
space attention that uses random feature methods to approximate the softmax
function, and explore its application in transformers. RFA can be used as a
drop-in replacement for conventional softmax attention and offers a
straightforward way of learning with recency bias through an optional gating
mechanism. Experiments on language modeling and machine translation demonstrate
that RFA achieves similar or better performance compared to strong transformer
baselines. In the machine translation experiment, RFA decodes twice as fast as
a vanilla transformer. Compared to existing efficient transformer variants, RFA
is competitive in terms of both accuracy and efficiency on three long text
classification datasets. Our analysis shows that RFA's efficiency gains are
especially notable on long sequences, suggesting that RFA will be particularly
useful in tasks that require working with large inputs, fast decoding speed, or
low memory footprints.
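A minimal sketch of the random-feature idea behind such linear-time attention is given below, using trigonometric random Fourier features on normalised queries and keys; the feature dimension, normalisation and naming are illustrative assumptions rather than the paper's exact formulation, and the stabilisation and gating used in practice are omitted.

```python
import numpy as np

def random_feature_map(x, W):
    """Random Fourier features phi(x) = [sin(xW), cos(xW)] / sqrt(D).
    For unit-norm inputs, phi(q) . phi(k) approximates a Gaussian kernel,
    which is proportional to exp(q . k) up to constants that cancel below."""
    proj = x @ W
    D = W.shape[1]
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1) / np.sqrt(D)

def rfa_attention(Q, K, V, n_features=64, seed=0):
    """Linear-time attention: cost is O(n) in the sequence length instead of
    the O(n^2) of exact softmax attention."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(Q.shape[-1], n_features))
    Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)
    phi_q = random_feature_map(Qn, W)
    phi_k = random_feature_map(Kn, W)
    kv = phi_k.T @ V                 # (2D, d_v): summed once over the sequence
    z = phi_k.sum(axis=0)            # (2D,): normaliser statistics
    den = phi_q @ z                  # can be ill-behaved; real implementations stabilise this
    return (phi_q @ kv) / (den[:, None] + 1e-6)

q, k, v = (np.random.randn(128, 32) for _ in range(3))
print(rfa_attention(q, k, v).shape)  # (128, 32)
```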
|
We analyze the connections between the non-Markovianity degree of the most
general phase-damping qubit maps and their legitimate mixtures. Using the
results for image non-increasing dynamical maps, we formulate the necessary and
sufficient conditions for the Pauli maps to satisfy specific divisibility
criteria. Next, we examine how the non-Markovianity properties for (in general
noninvertible) Pauli dynamical maps influence the properties of their convex
combinations. Our results are illustrated with instructive examples. For
P-divisible maps, we propose a legitimate time-local generator all of whose decoherence rates are temporarily infinite.
P-divisible maps, we propose a legitimate time-local generator all of whose decoherence rates are temporarily infinite.
|
In this paper we examine the possibility of constructing a traversable wormhole
on the Randall-Sundrum braneworld with ordinary matter, employing the Kuchowicz
potential as one of the metric potentials. In this scenario, the wormhole shape
function is obtained and studied, the validity of the Null Energy Condition
(NEC) is checked, and the junction conditions at the surface of the wormhole
are used to fix a few of the model parameters. The investigation, besides
giving an estimate for the bulk equation-of-state parameter, draws important
constraints on the brane tension, which is a novel attempt in this respect;
interestingly, the constraints imposed by a physically plausible traversable
wormhole are in close agreement with those drawn from more general space-times
or space-time-independent situations in fundamental physics. We also argue that
the possible existence of a wormhole may well indicate that we live on a
three-brane universe.
|
We consider a numerical scheme for the approximation of a system that couples
the evolution of a two-dimensional hypersurface to a reaction-diffusion
equation on the surface. The surfaces are assumed to be graphs and evolve
according to forced mean curvature flow. The method uses continuous, piecewise
linear finite elements in space and a backward Euler scheme in time. Assuming
the existence of a smooth solution we prove optimal error bounds both in
$L^\infty(L^2)$ and in $L^2(H^1)$. We present several numerical experiments
that confirm our theoretical findings and apply the method in order to simulate
diffusion induced grain boundary motion.
|
This paper presents a new prescribed performance control scheme for the
attitude tracking of the three degree-of-freedom (3-DOF) helicopter system with
lumped disturbances under mechanical constraints. First, a novel prescribed
performance function is defined to guarantee that the tracking error
performance has a small overshoot in the transient process and converges to an
arbitrary small region within a predetermined time in the steady-state process
without knowing the initial tracking error in advance. Then, based on the novel
prescribed performance function, an error transformation combined with the
smooth finite-time control method we proposed before is employed to drive the
elevation and pitch angles to track given desired trajectories with guaranteed
tracking performance. The theoretical analysis of finite-time Lyapunov
stability indicates that the closed-loop system is fast finite-time uniformly
ultimately bounded. Finally, comparative experimental results illustrate the
effectiveness and superiority of the proposed control scheme.
|
One of the challenges in many-body physics is determining the effects of
phonons on strongly correlated electrons. The difficulty arises from strong
correlations at differing energy scales -- for band metals, Migdal-Eliashberg
theory accurately determines electron-phonon coupling effects due to the
absence of vertex corrections -- but strongly correlated electrons require a
more complex description and the standard Migdal-Eliashberg approach does not
necessarily apply. In this work, we solve for the atomic limit Green's function
of the Holstein-Hubbard model with both time-dependent electron-electron and
electron-phonon couplings. We then examine the photoemission spectra (PES) of
this model in and out of equilibrium. Next we use similar methods to exactly
solve an extended version of the Hatsugai-Komoto model, and examine its
behavior in and out of equilibrium. These calculations lead us to propose using
the first moment of the photoemission spectra to signal non-equilibrium changes
in electron-electron and electron-phonon couplings.
|
The paper treats pseudodifferential operators $P=Op(p(\xi ))$ with
homogeneous complex symbol $p(\xi )$ of order $2a>0$, generalizing the
fractional Laplacian $(-\Delta )^a$ but lacking its symmetries, and taken to
act on the halfspace $R^n_+$. The operators are seen to satisfy a principal
$\mu $-transmission condition relative to $R^n_+$, but generally not the full
$\mu $-transmission condition satisfied by $(-\Delta )^a$ and related operators
(with $\mu =a$). However, $P$ acts well on the so-called $\mu $-transmission
spaces over $R^n_+$ (defined in earlier works), and when $P$ moreover is
strongly elliptic, these spaces are the solution spaces for the homogeneous
Dirichlet problem for $P$, leading to regularity results with a factor $x_n^\mu
$ (in a limited range of Sobolev spaces). The information is then shown to be
sufficient to establish an integration by parts formula over $R^n_+$ for $P$
acting on such functions. The formulation in Sobolev spaces, and the results on
strongly elliptic operators going beyond operators with real kernels, are new.
Furthermore, large solutions with nonzero Dirichlet traces are described, and
a halfways Green's formula is established, for this new class of operators.
Since the principal $\mu $-transmission condition has weaker requirements
than the full $\mu $-transmission condition assumed in earlier papers, new
arguments were needed, relying on work of Vishik and Eskin instead of the
Boutet de Monvel theory. The results cover the case of nonsymmetric operators
with real kernel that were only partially treated in a preceding paper.
|
Representing complex 3D objects as simple geometric primitives, known as
shape abstraction, is important for geometric modeling, structural analysis,
and shape synthesis. In this paper, we propose an unsupervised shape
abstraction method to map a point cloud into a compact cuboid representation.
We jointly predict cuboid allocation as part segmentation and cuboid shapes and
enforce the consistency between the segmentation and shape abstraction for
self-learning. For the cuboid abstraction task, we transform the input point
cloud into a set of parametric cuboids using a variational auto-encoder
network. The segmentation network allocates each point into a cuboid
considering the point-cuboid affinity. Without manual annotations of parts in
point clouds, we design four novel losses to jointly supervise the two branches
in terms of geometric similarity and cuboid compactness. We evaluate our method
on multiple shape collections and demonstrate its superiority over existing
shape abstraction methods. Moreover, based on our network architecture and
learned representations, our approach supports various applications including
structured shape generation, shape interpolation, and structural shape
clustering.
|
Grammatical Evolution (GE) is one of the most popular Genetic Programming
(GP) variants, and it has been used with success in several problem domains.
Since the original proposal, many enhancements have been proposed to GE in
order to address some of its main issues and improve its performance.
In this paper we propose Probabilistic Grammatical Evolution (PGE), which
introduces a new genotypic representation and new mapping mechanism for GE.
Specifically, we resort to a Probabilistic Context-Free Grammar (PCFG) whose
probabilities are adapted during the evolutionary process, taking into
account the productions chosen to construct the fittest individual. The
genotype is a list of real values, where each value represents the likelihood
of selecting a derivation rule. We evaluate the performance of PGE in two
regression problems and compare it with GE and Structured Grammatical Evolution
(SGE).
The results show that PGE performs better than GE, with statistically
significant differences, and achieves performance similar to that of SGE.
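To make the genotype-to-program mapping concrete, here is a hedged sketch of how a list of real values in [0, 1] can select productions from a PCFG; the toy grammar, the interval-based selection rule and the expansion limit are illustrative assumptions, not the exact PGE mechanism or its probability-update scheme.

```python
import random

# Toy PCFG: non-terminal -> list of (production, probability).
pcfg = {
    "<expr>": [(["<expr>", "+", "<expr>"], 0.3),
               (["<expr>", "*", "<expr>"], 0.3),
               (["x"], 0.2),
               (["1.0"], 0.2)],
}

def map_genotype(genotype, pcfg, start="<expr>", max_expansions=30):
    """Map a list of reals in [0, 1] onto a derivation: each value picks the
    production whose cumulative probability interval contains it."""
    symbols, out, g = [start], [], list(genotype)
    while symbols and max_expansions > 0:
        sym = symbols.pop(0)
        if sym not in pcfg:
            out.append(sym)            # terminal symbol: emit it
            continue
        r = g.pop(0) if g else random.random()   # fall back to sampling if codons run out
        cumulative = 0.0
        for production, p in pcfg[sym]:
            cumulative += p
            if r <= cumulative:
                symbols = list(production) + symbols
                break
        max_expansions -= 1
    return " ".join(out)

print(map_genotype([0.25, 0.7, 0.95, 0.9, 0.5], pcfg))   # e.g. "x + 1.0"
```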
|
We analyze internal device physics, performance limitations, and optimization
options for a unique laser design with multiple active regions separated by
tunnel junctions, featuring surprisingly wide quantum wells. Contrary to common
assumptions, these quantum wells are revealed to allow for perfect screening of
the strong built-in polarization field, while optical gain is provided by
higher quantum levels. However, internal absorption, low p-cladding
conductivity, and self-heating are shown to strongly limit the laser
performance.
|
Adversarial robustness has become a topic of growing interest in machine
learning since it was observed that neural networks tend to be brittle. We
propose an information-geometric formulation of adversarial defense and
introduce FIRE, a new Fisher-Rao regularization for the categorical
cross-entropy loss, which is based on the geodesic distance between natural and
perturbed input features. Based on the information-geometric properties of the
class of softmax distributions, we derive an explicit characterization of the
Fisher-Rao Distance (FRD) for the binary and multiclass cases, and draw some
interesting properties as well as connections with standard regularization
metrics. Furthermore, for a simple linear and Gaussian model, we show that all
Pareto-optimal points in the accuracy-robustness region can be reached by FIRE
while other state-of-the-art methods fail. Empirically, we evaluate the
performance of various classifiers trained with the proposed loss on standard
datasets, showing up to 2\% of improvements in terms of robustness while
reducing the training time by 20\% over the best-performing methods.
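For categorical (softmax) distributions the Fisher-Rao geodesic distance has the closed form $d_{FR}(p,q) = 2\arccos\bigl(\sum_i \sqrt{p_i q_i}\bigr)$. The hedged PyTorch sketch below uses this form as a penalty between the model's outputs on clean and perturbed inputs; the perturbation, the weighting and the toy model are assumptions for illustration and not the paper's exact training recipe.

```python
import torch
import torch.nn.functional as F

def fisher_rao_distance(logits_nat, logits_adv, eps=1e-8):
    """Fisher-Rao distance between two categorical distributions:
    d = 2 * arccos( sum_i sqrt(p_i * q_i) )."""
    p = F.softmax(logits_nat, dim=-1)
    q = F.softmax(logits_adv, dim=-1)
    bc = (p * q).clamp_min(eps).sqrt().sum(dim=-1).clamp(-1 + eps, 1 - eps)
    return 2.0 * torch.acos(bc)

def fire_style_loss(model, x, y, x_perturbed, lam=1.0):
    """Cross-entropy on clean inputs plus a Fisher-Rao robustness penalty."""
    logits_nat = model(x)
    logits_adv = model(x_perturbed)
    ce = F.cross_entropy(logits_nat, y)
    frd = fisher_rao_distance(logits_nat, logits_adv).mean()
    return ce + lam * frd

# Toy usage with a linear classifier and random "perturbed" inputs.
model = torch.nn.Linear(10, 3)
x = torch.randn(8, 10); y = torch.randint(0, 3, (8,))
loss = fire_style_loss(model, x, y, x + 0.1 * torch.randn_like(x))
loss.backward()
print(float(loss))
```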
|
Deep learning (DL) relies on massive amounts of labeled data, and improving
its labeled sample-efficiency remains one of the most important problems since
its advent. Semi-supervised learning (SSL) leverages unlabeled data that are
more accessible than their labeled counterparts. Active learning (AL) selects
unlabeled instances to be annotated by a human-in-the-loop in hopes of better
performance with less labeled data. Given the accessible pool of unlabeled data
in pool-based AL, it seems natural to use SSL when training and AL to update
the labeled set; however, algorithms designed for their combination remain
limited. In this work, we first prove that the convergence of gradient descent
on sufficiently wide ReLU networks can be expressed in terms of the
eigenspectrum of their Gram matrix. Equipped with a few theoretical insights,
we propose convergence rate control (CRC), an AL algorithm that selects
unlabeled data to improve the problem conditioning upon inclusion in the
labeled set, by formulating the acquisition step in terms of improving training
dynamics.
Extensive experiments show that SSL algorithms coupled with CRC can achieve
high performance using very few labeled data.
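The sketch below illustrates the flavour of conditioning-based acquisition as a generic reading of the idea above, not the exact CRC algorithm: from a Gram matrix over the current labeled set, greedily pick the unlabeled points whose inclusion most improves the smallest eigenvalue. The Gaussian kernel (standing in for an NTK-like Gram matrix), the greedy loop and the batch size are assumptions.

```python
import numpy as np

def gram(X, gamma=1.0):
    """Gaussian-kernel Gram matrix, a stand-in for an NTK-like Gram matrix."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def conditioning_acquisition(X_labeled, X_pool, n_select=5):
    """Greedily select pool points that maximise the smallest eigenvalue of the
    Gram matrix of the augmented labeled set, a proxy for better-conditioned
    training dynamics."""
    X_all = np.vstack([X_labeled, X_pool])
    labeled = list(range(len(X_labeled)))
    pool = list(range(len(X_labeled), len(X_all)))
    chosen = []
    for _ in range(n_select):
        best, best_val = None, -np.inf
        for j in pool:
            smallest_eig = np.linalg.eigvalsh(gram(X_all[labeled + [j]]))[0]
            if smallest_eig > best_val:
                best, best_val = j, smallest_eig
        labeled.append(best)
        pool.remove(best)
        chosen.append(best - len(X_labeled))   # index into the pool
    return chosen

rng = np.random.default_rng(0)
print(conditioning_acquisition(rng.normal(size=(10, 4)), rng.normal(size=(50, 4))))
```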
|
An $\ell$-facial edge-coloring of a plane graph is a coloring of its edges
such that any two edges at distance at most $\ell$ on a boundary walk of any
face receive distinct colors. It is the edge-coloring variant of the
$\ell$-facial vertex coloring, which arose as a generalization of the
well-known cyclic coloring. It is conjectured that at most $3\ell + 1$ colors
suffice for an $\ell$-facial edge-coloring of any plane graph. The conjecture
has only been confirmed for $\ell \le 2$, and in this paper, we prove its
validity for $\ell = 3$.
|
Despite the increasing interest, the research field that studies the concepts
of work and heat at the quantum level has suffered from two main drawbacks:
first, the difficulty of properly defining and measuring the work, heat and
internal energy variation in a quantum system and, second, the lack of
experiments. Here, we report a full characterization of the dissipated heat,
work and internal energy variation in a two-level quantum system interacting
with an engineered environment. We use the IBMQ quantum computer to implement
the driven system's dynamics in a dissipative environment. The experimental
data allow us to construct quasi-probability distribution functions from which
we recover the correct averages of work, heat and internal energy variation in
the dissipative processes. Interestingly, by increasing the environment
coupling strength, we observe a reduction of the pure quantum features of the
energy exchange processes that we interpret as the emergence of the classical
limit. This makes the present approach a privileged tool to study, understand
and exploit quantum effects in energy exchanges.
|
Many machine learning tasks involve subpopulation shift where the testing
data distribution is a subpopulation of the training distribution. For such
settings, a line of recent work has proposed the use of a variant of empirical
risk minimization (ERM) known as distributionally robust optimization (DRO). In
this work, we apply DRO to real, large-scale tasks with subpopulation shift,
and observe that DRO performs relatively poorly, and moreover has severe
instability. We identify one direct cause of this phenomenon: sensitivity of
DRO to outliers in the datasets. To resolve this issue, we propose the
framework of DORO, for Distributional and Outlier Robust Optimization. At the
core of this approach is a refined risk function which prevents DRO from
overfitting to potential outliers. We instantiate DORO for the Cressie-Read
family of R\'enyi divergence, and delve into two specific instances of this
family: CVaR and $\chi^2$-DRO. We theoretically prove the effectiveness of the
proposed method, and empirically show that DORO improves the performance and
stability of DRO with experiments on large modern datasets, thereby positively
addressing the open question raised by Hashimoto et al., 2018.
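As a hedged reading of the refined risk idea in its CVaR flavour (ordering conventions and the exact outlier fraction handling are assumptions made here for illustration): first discard an eps-fraction of the highest losses as potential outliers, then average the worst alpha-fraction of what remains.

```python
import torch

def cvar_doro_loss(per_sample_losses, alpha=0.2, eps=0.05):
    """Outlier-robust DRO risk, illustrative CVaR-DORO flavour:
    1) drop the eps-fraction of the largest losses (treated as potential outliers),
    2) take CVaR at level alpha over the remainder, i.e. the mean of the worst
       alpha-fraction of the kept losses."""
    n = per_sample_losses.numel()
    sorted_losses, _ = torch.sort(per_sample_losses, descending=True)
    n_drop = int(eps * n)
    kept = sorted_losses[n_drop:]                 # remove suspected outliers
    n_cvar = max(1, int(alpha * kept.numel()))
    return kept[:n_cvar].mean()                   # worst alpha-fraction of the rest

losses = torch.rand(100, requires_grad=True)
risk = cvar_doro_loss(losses)
risk.backward()
print(float(risk))
```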
|
We study two-sided reputational bargaining with opportunities to issue an
ultimatum -- threats to force dispute resolution. Each player is either a
justified type, who never concedes and issues an ultimatum whenever an
opportunity arrives, or an unjustified type, who can concede, wait, or bluff
with an ultimatum. In equilibrium, the presence of ultimatum opportunities can
harm or benefit a player by decelerating or accelerating reputation building.
When only one player can issue an ultimatum, equilibrium play is unique. The
hazard rate of dispute resolution is discontinuous and piecewise monotonic in
time. As the probabilities of being justified vanish, agreement is immediate
and efficient, and if the set of justifiable demands is rich, payoffs are those
of Abreu and Gul (2000) with the discount rate replaced by the ultimatum
opportunity arrival rate if the former is smaller. When both players' ultimatum
opportunities arrive sufficiently fast, there may exist multiple equilibria in
which their reputations do not build up and negotiation lasts forever.
|
Overparametrized interpolating models have drawn increasing attention from
machine learning. Some recent studies suggest that regularized interpolating
models can generalize well. This phenomenon seemingly contradicts the
conventional wisdom that interpolation tends to overfit the data and performs
poorly on test data. Further, it appears to defy the bias-variance trade-off.
As one of the shortcomings of the existing theory, the classical notion of
model degrees of freedom fails to explain the intrinsic difference among the
interpolating models since it focuses on estimation of in-sample prediction
error. This motivates an alternative measure of model complexity which can
differentiate those interpolating models and take different test points into
account. In particular, we propose a measure with a proper adjustment based on
the squared covariance between the predictions and observations. Our analysis
with least squares method reveals some interesting properties of the measure,
which can reconcile the "double descent" phenomenon with the classical theory.
This opens doors to an extended definition of model degrees of freedom in
modern predictive settings.
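For context, the classical notion being generalized here is Efron's optimism-based degrees of freedom, which in the fixed-design Gaussian model $y_i = f(x_i) + \varepsilon_i$, $\varepsilon_i \sim \mathcal{N}(0,\sigma^2)$, reads
$$ \mathrm{df}(\hat f) \;=\; \frac{1}{\sigma^2} \sum_{i=1}^{n} \operatorname{Cov}\bigl(\hat f(x_i),\, y_i\bigr). $$
The measure proposed in the abstract replaces this in-sample covariance with a suitably adjusted squared covariance that also accounts for different test points; the exact adjustment is the paper's contribution and is not reproduced here.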
|
Solving for detailed chemical kinetics remains one of the major bottlenecks
for computational fluid dynamics simulations of reacting flows using a
finite-rate-chemistry approach. This has motivated the use of fully connected
artificial neural networks to predict stiff chemical source terms as functions
of the thermochemical state of the combustion system. However, due to the
nonlinearities and multi-scale nature of combustion, the predicted solution
often diverges from the true solution when these deep learning models are
coupled with a computational fluid dynamics solver. This is because these
approaches minimize the error during training without guaranteeing successful
integration with ordinary differential equation solvers. In the present work, a
novel neural ordinary differential equations approach to modeling chemical
kinetics, termed ChemNODE, is developed. In this deep learning framework,
the chemical source terms predicted by the neural networks are integrated
during training, and by computing the required derivatives, the neural network
weights are adjusted accordingly to minimize the difference between the
predicted and ground-truth solutions. A proof-of-concept study is performed
with ChemNODE for the homogeneous autoignition of a hydrogen-air mixture over a
range of compositions and thermodynamic conditions. It is shown that ChemNODE accurately
captures the correct physical behavior and reproduces the results obtained
using the full chemical kinetic mechanism at a fraction of the computational
cost.
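A minimal sketch of the neural-ODE training loop described above is given below, assuming a toy scalar "species" with first-order decay and a fixed-step RK4 integrator in place of the stiff solvers and real chemistry one would use in practice; the network size, data and learning rate are illustrative assumptions.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def rk4_step(f, y, dt):
    """One classical Runge-Kutta step for dy/dt = f(y)."""
    k1 = f(y); k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(y0, n_steps, dt):
    """Integrate the neural source term through time; gradients flow through."""
    y, traj = y0, [y0]
    for _ in range(n_steps):
        y = rk4_step(net, y, dt)
        traj.append(y)
    return torch.stack(traj)

# Ground-truth trajectory from a known toy "kinetics": dy/dt = -2 y (first-order decay).
dt, n_steps = 0.05, 40
t = torch.arange(n_steps + 1, dtype=torch.float32) * dt
y_true = torch.exp(-2.0 * t).reshape(-1, 1, 1)
y0 = torch.ones(1, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for epoch in range(300):
    opt.zero_grad()
    y_pred = integrate(y0, n_steps, dt)
    loss = torch.mean((y_pred - y_true) ** 2)   # match the *integrated* trajectory
    loss.backward()
    opt.step()
print("final trajectory MSE:", float(loss))
```

The key point mirrored from the abstract is that the loss is computed on the integrated trajectory, so the network weights are adjusted through the ODE solve rather than on pointwise source-term errors.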
|
Graphics Processing Units (GPUs) have been widely used to accelerate
artificial intelligence, physics simulation, medical imaging, and information
visualization applications. To improve GPU performance, GPU hardware designers
need to identify performance issues by inspecting a huge amount of
simulator-generated traces. Visualizing the execution traces can reduce the
cognitive burden of users and facilitate making sense of behaviors of GPU
hardware components. In this paper, we first formalize the process of GPU
performance analysis and characterize the design requirements of visualizing
execution traces based on a survey study and interviews with GPU hardware
designers. We contribute data and task abstraction for GPU performance
analysis. Based on our task analysis, we propose Daisen, a framework that
supports data collection from GPU simulators and provides visualization of the
simulator-generated GPU execution traces. Daisen features a data abstraction
and trace format that can record simulator-generated GPU execution traces.
Daisen also includes a web-based visualization tool that helps GPU hardware
designers examine GPU execution traces, identify performance bottlenecks, and
verify performance improvement. Our qualitative evaluation with GPU hardware
designers demonstrates that the design of Daisen reflects the typical workflow
of GPU hardware designers. Using Daisen, participants were able to effectively
identify potential performance bottlenecks and opportunities for performance
improvement. The open-sourced implementation of Daisen can be found at
gitlab.com/akita/vis. Supplemental materials including a demo video, survey
questions, evaluation study guide, and post-study evaluation survey are
available at osf.io/j5ghq.
|
A set of boundary conditions called the Transpiration-Resistance Model (TRM)
is investigated for its effect on near-wall turbulence. The TRM has been previously
proposed by \citet{Lacis2020} as a means of representing the net effect of
surface micro-textures on their overlying bulk flows. It encompasses
conventional Navier-slip boundary conditions relating the streamwise and
spanwise velocities to their respective shears through the slip lengths
$\ell_x$ and $\ell_z$. In addition, it features a transpiration condition
accounting for the changes induced in the wall-normal velocity by expressing it
in terms of variations of the wall-parallel velocity shears through the
transpiration lengths $m_x$ and $m_z$. Greater drag increase occurs when more
transpiration takes place at the boundary plane, with turbulent transpiration
being predominantly coupled to the spanwise shear component for canonical
near-wall turbulence. The TRM can reproduce the effect of a homogeneous and
structured roughness up to $k^+ \approx 18$. In this
transitionally rough flow regime, the transpiration lengths of the TRM must be
empirically determined. The \emph{transpiration factor} is defined as the
product between the slip and transpiration lengths, i.e. $(m\ell)_{x,z}$. This
factor contains the compound effect of the wall-parallel velocity occurring at
the boundary plane and increased permeability, both of which lead to the
transport of momentum in the wall-normal direction. A linear relation between
the transpiration factor and the roughness function is observed for regularly
textured surfaces in the transitionally rough regime of turbulence. The
relations obtained between the transpiration factor and the roughness function
show that such effective flow quantities can be suitable measures for
characterizing rough surfaces in this flow regime.
|
This paper addresses the trajectory tracking problem of an autonomous
tractor-trailer system by using a fast distributed nonlinear model predictive
control algorithm in combination with nonlinear moving horizon estimation for
the state and parameter estimation in which constraints on the inputs and the
states can be incorporated. The proposed control algorithm is capable of
driving the tractor-trailer system to any desired trajectory ensuring high
control accuracy and robustness against environmental disturbances.
|
We introduce FaDIV-Syn, a fast depth-independent method for novel view
synthesis. Related methods are often limited by their depth estimation stage,
where incorrect depth predictions can lead to large projection errors. To avoid
this issue, we efficiently warp input images into the target frame for a range
of assumed depth planes. The resulting plane sweep volume (PSV) is directly fed
into our network, which first estimates soft PSV masks in a self-supervised
manner, and then directly produces the novel output view. We therefore
side-step explicit depth estimation. This improves efficiency and performance
on transparent, reflective, thin, and feature-less scene parts. FaDIV-Syn can
perform both interpolation and extrapolation tasks and outperforms
state-of-the-art extrapolation methods on the large-scale RealEstate10k
dataset. In contrast to comparable methods, it achieves real-time performance
due to its lightweight architecture. We thoroughly evaluate ablations, such as
removing the Soft-Masking network, training from fewer examples as well as
generalization to higher resolutions and stronger depth discretization.
|
The unique properties of blockchain enable central requirements of
distributed secure logging: Immutability, integrity, and availability.
Especially when providing transparency about data usages, a blockchain-based
secure log can be beneficial, as no trusted third party is required. Yet, with
data governed by privacy legislation such as the GDPR or CCPA, the core
advantage of immutability becomes a liability. After a rightful request, an
individual's personal data need to be rectified or deleted, which is impossible
in an immutable blockchain. To solve this issue, we exploit a legal property of
pseudonymized data: They are only regarded personal data if they can be
associated with an individual's identity. We make use of this fact by
presenting P3, a pseudonym provisioning system for secure usage logs including
a protocol for recording new usages. For each new block, a one-time transaction
pseudonym is generated. The pseudonym generation algorithm guarantees
unlinkability and enables proof of ownership. These properties enable
GDPR-compliant use of blockchain, as data subjects can exercise their legal
rights with regards to their personal data. The new-usage protocol ensures
non-repudiation, and therefore accountability and liability. Most importantly,
our approach does not require a trusted third party and is independent of the
utilized blockchain software.
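As a hedged illustration of one-time transaction pseudonyms (a generic construction, not necessarily the exact P3 algorithm): derive a fresh pseudonym per block from a user-held secret and a per-block nonce, so that pseudonyms cannot be linked without the secret, while the data subject can later re-derive them to prove ownership. Function names and the ownership-proof shortcut below are assumptions for illustration.

```python
import hmac, hashlib, os

def new_pseudonym(user_secret: bytes, block_index: int):
    """Derive a one-time pseudonym for one block.

    Returns (pseudonym, nonce). Without user_secret, pseudonyms from different
    blocks cannot be linked; with it, the user can re-derive them."""
    nonce = os.urandom(16)
    msg = block_index.to_bytes(8, "big") + nonce
    pseudonym = hmac.new(user_secret, msg, hashlib.sha256).hexdigest()
    return pseudonym, nonce

def prove_ownership(user_secret: bytes, block_index: int, nonce: bytes, pseudonym: str):
    """Ownership proof by re-derivation; a real system would use an interactive
    or zero-knowledge protocol rather than exposing the secret-derived value."""
    msg = block_index.to_bytes(8, "big") + nonce
    expected = hmac.new(user_secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, pseudonym)

secret = os.urandom(32)
pid, nonce = new_pseudonym(secret, block_index=42)
print(pid[:16], prove_ownership(secret, 42, nonce, pid))
```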
|
The spectrum of laser-plasma-generated X-rays is very important: it
characterizes electron dynamics in the plasma and underpins applications.
However, the accuracy and efficiency of existing methods for diagnosing the
spectrum of laser-plasma-based X-ray pulses are limited, especially in the
range of several hundred keV. In this study, a new method based on electron
tracks detection to measure the spectrum of laser-plasma produced X-ray pulses
is proposed and demonstrated. Laser-plasma generated X-rays are scattered in a
multi-pixel silicon tracker. Energies and scattering directions of Compton
electrons can be extracted from the response of the detector, and then the
spectrum of X-rays can be reconstructed. Simulations indicate that the energy
resolution of this method is approximately 20% for X-rays from 200 to 550 keV
for a silicon-on-insulator pixel detector with 12 $\rm \mu$m pixel pitch and
500 $\rm \mu$m depletion region thickness. The results of a proof-of-principle
experiment based on a Timepix3 detector are also shown.
|
We present spectroscopy of individual stars in 26 Magellanic Cloud (MC) star
clusters with the aim of estimating dynamical masses and $V$-band mass-to-light
($M/L_V$) ratios over a wide range in age and metallicity. We obtained 3137
high-resolution stellar spectra with M2FS on the \textit{Magellan}/Clay
Telescope. Combined with 239 published spectroscopic results of comparable
quality, we produced a final sample of 2787 stars with good quality spectra for
kinematic analysis in the target clusters. Line-of-sight velocities measured
from these spectra and stellar positions within each cluster were used in a
customized expectation-maximization (EM) technique to estimate cluster
membership probabilities. Using appropriate cluster structural parameters and
corresponding single-mass dynamical models, this technique ultimately provides
self-consistent total mass and $M/L_V$ estimates for each cluster. Mean
metallicities for the clusters were also obtained and tied to a scale based on
calcium IR triplet metallicities. We present trends of the cluster $M/L_V$
values with cluster age, mass and metallicity, and find that our results are,
on average, about 40 per cent lower than the predictions of a set of simple
stellar population (SSP) models. Modified SSP models that account for internal
and external dynamical effects greatly improve agreement with our results, as
can models that adopt a strongly bottom-light IMF. To the extent that dynamical
evolution must occur, a modified IMF is not required to match data and models.
In contrast, a bottom-heavy IMF is ruled out for our cluster sample as this
would lead to higher predicted $M/L_V$ values, significantly increasing the
discrepancy with our observations.
|
Current vision systems are trained on huge datasets, and these datasets come
with costs: curation is expensive, they inherit human biases, and there are
concerns over privacy and usage rights. To counter these costs, interest has
surged in learning from cheaper data sources, such as unlabeled images. In this
paper we go a step further and ask if we can do away with real image datasets
entirely, instead learning from noise processes. We investigate a suite of
image generation models that produce images from simple random processes. These
are then used as training data for a visual representation learner with a
contrastive loss. We study two types of noise processes, statistical image
models and deep generative models under different random initializations. Our
findings show that it is important for the noise to capture certain structural
properties of real data but that good performance can be achieved even with
processes that are far from realistic. We also find that diversity is a key
property to learn good representations. Datasets, models, and code are
available at https://mbaradad.github.io/learning_with_noise.
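A compressed, hedged sketch of the training setup described above: images sampled from a simple statistical noise process are paired with augmented views and fed to an encoder trained with an InfoNCE-style contrastive loss. The noise process, encoder and augmentations below are toy stand-ins, not the released models.

```python
import torch
import torch.nn.functional as F

def sample_noise_images(batch, size=32):
    """Toy statistical image model: low-pass-filtered Gaussian noise."""
    x = torch.randn(batch, 3, size, size)
    return F.avg_pool2d(x, 5, stride=1, padding=2)   # crude spatial correlation

def augment(x):
    """Cheap augmentation: additive jitter plus a random horizontal flip."""
    x = x + 0.1 * torch.randn_like(x)
    return torch.flip(x, dims=[-1]) if torch.rand(1) < 0.5 else x

encoder = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(32, 64),
)

def info_nce(z1, z2, tau=0.2):
    """InfoNCE: matching views are positives, the rest of the batch are negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(100):
    x = sample_noise_images(64)
    loss = info_nce(encoder(augment(x)), encoder(augment(x)))
    opt.zero_grad(); loss.backward(); opt.step()
print("contrastive loss on noise images:", float(loss))
```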
|
The detection of polylines is usually either bound to branchless polylines or
formulated in a recurrent way, prohibiting their use in real-time systems.
We propose an approach that builds upon the idea of single-shot object
detection. Reformulating the problem of polyline detection as a bottom-up
composition of small line segments makes it possible to detect bounded, dashed
and continuous polylines with a single head. This has several major advantages
over previous methods. At 187 fps, the method is more than suited for real-time
applications and imposes virtually no restriction on the shapes of the detected
polylines. By predicting multiple line segments for each cell, even branching
or crossing polylines can be detected.
We evaluate our approach on three different applications for road marking,
lane border and center line detection. Hereby, we demonstrate the ability to
generalize to different domains as well as both implicit and explicit polyline
detection tasks.
|
We develop the covariant phase space formalism allowing for non-vanishing
flux, anomalies and field dependence in the vector field generators. We
construct a charge bracket that generalizes the one introduced by Barnich and
Troessaert and includes contributions from the Lagrangian and its anomaly. This
bracket is uniquely determined by the choice of Lagrangian representative of
the theory. We then extend the notion of corner symmetry algebra to include the
surface translation symmetries and prove that the charge bracket provides a
canonical representation of the extended corner symmetry algebra. This
representation property is shown to be equivalent to the projection of the
gravitational equations of motion on the corner, providing us with an encoding
of the bulk dynamics in a locally holographic manner.
|
We introduce Virasoro operators for any Landau-Ginzburg pair (W, G) where W
is a non-degenerate quasi-homogeneous polynomial and G is a certain group of
diagonal symmetries. We propose a conjecture that the total ancestor potential
of the FJRW theory of the pair (W, G) is annihilated by these Virasoro
operators. We prove the conjecture in various cases, including: (1) invertible
polynomials with the maximal group, (2) two-variable homogeneous Fermat
polynomials with the minimal group, (3) certain Calabi-Yau polynomials with
groups. We also discuss the connections among Virasoro constraints, mirror
symmetry of Landau-Ginzburg models, and Landau-Ginzburg/Calabi-Yau
correspondence.
|
Although the unscented Kalman filter (UKF) is applicable to nonlinear
systems, it turns out that, for linear systems, UKF does not specialize to the
classical Kalman filter. This situation suggests that it may be advantageous to
modify UKF in such a way that, for linear systems, the Kalman filter is
recovered. The ultimate goal is thus to develop modifications of UKF that
specialize to the Kalman filter for linear systems and have improved accuracy
for nonlinear systems. With this motivation, this paper presents two
modifications of UKF that specialize to the Kalman filter for linear systems.
The first modification (EUKF-A) requires the Jacobian of the dynamics map,
whereas the second modification (EUKF-C) requires the Jacobian of the
measurement map. For various nonlinear examples, the accuracy of EUKF-A and
EUKF-C is compared to the accuracy of UKF.
|
In this article we show why flying, rotating beer mats, CDs, or other flat
disks will eventually flip in the air and end up flying with backspin, making
them unusable as frisbees. The crucial effect responsible for the flipping is
found to be the lift acting not at the center of mass but slightly offset
towards the forward edge. This induces a torque leading to a
precession towards backspin orientation. An effective theory is developed
providing an approximate solution for the disk's trajectory with a minimal set
of parameters. Our theoretical results are confronted with experimental results
obtained using a beer mat shooting apparatus and a high speed camera. Very good
agreement is found.
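For readers who want the mechanism as a formula (standard gyroscopic precession written in our own notation, not taken from the paper): if the lift force $F_L$ attacks a distance $d$ ahead of the center of mass, it exerts a torque $\tau = d\,F_L$ perpendicular to the spin angular momentum $L = I_3\,\omega$ of the fast-spinning disk, so the spin axis precesses at roughly
$$ \Omega_{\mathrm{prec}} \;\approx\; \frac{\tau}{I_3\,\omega} \;=\; \frac{d\,F_L}{I_3\,\omega}, $$
turning the disk over until its orientation corresponds to backspin.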
|
Line intensity mapping (LIM) proposes to efficiently observe distant faint
galaxies and map the matter density field at high redshift. Building upon the
formalism in the companion paper, we first highlight the degeneracies between
cosmology and astrophysics in LIM. We discuss what can be constrained from
measurements of the mean intensity and redshift-space power spectra. With a
sufficient spectral resolution, the large-scale redshift-space distortions of
the 2-halo term can be measured, helping to break the degeneracy between bias
and mean intensity. With a higher spectral resolution, measuring the
small-scale redshift-space distortions disentangles the 1-halo and shot noise
terms. Cross-correlations with external galaxy catalogs or lensing surveys
further break degeneracies. We derive requirements for experiments similar to
SPHEREx, HETDEX, CDIM, COMAP and CONCERTO. We then revisit the question of the
optimality of the LIM observables, compared to galaxy detection, for
astrophysics and cosmology. We use a matched filter to compute the luminosity
detection threshold for individual sources. We show that LIM contains
information about galaxies too faint to detect, in the high-noise or
high-confusion regimes. We quantify the sparsity and clustering bias of the
detected sources and compare them to LIM, showing in which cases LIM is a
better tracer of the matter density. We extend previous work by answering these
questions as a function of Fourier scale, including for the first time the
effect of cosmic variance, pixel-to-pixel correlations, luminosity-dependent
clustering bias and redshift-space distortions.
|
This paper aims at using the functional renormalization group formalism to
study the equilibrium states of a stochastic process described by a
quench-disordered multilinear Langevin equation. Such an equation
characterizes the evolution of a time-dependent $N$-vector
$q(t)=\{q_1(t),\cdots,q_N(t)\}$ and is traditionally encountered in the
dynamical description of glassy systems at and out of equilibrium through the
so-called Glauber model. From the connection between Langevin dynamics and
quantum mechanics in imaginary time, we are able to coarse-grain the path
integral of the problem in the Fourier modes, and to construct a
renormalization group flow for the effective Euclidean action. In the
large-$N$ limit we are able to solve the flow equations for both matrix and
tensor disorder. The numerical solutions of the resulting exact flow equations
are then investigated using the standard local potential approximation, taking
into account the quench disorder. In the case where the interaction is taken to
be matricial, the flow equations are also solved for finite $N$. The case of
finite $N$ with the non-equilibrium process taken into account will be
considered in a companion investigation.
|
Semi-discrete and fully discrete mixed finite element methods are considered
for Maxwell-model-based problems of wave propagation in linear viscoelastic
solid. This mixed finite element framework allows the use of a large class of
existing mixed conforming finite elements for elasticity in the spatial
discretization. In the fully discrete scheme, a Crank-Nicolson scheme is
adopted for the approximation of the temporal derivatives of stress and
velocity variables. Error estimates of the semi-discrete and fully discrete
schemes, as well as an unconditional stability result for the fully discrete
scheme, are derived. Numerical experiments are provided to verify the
theoretical results.
|
A novel approach for electrochemical tuning of alcohol oxidase (AOx) and
alcohol dehydrogenase (ADH) biocatalysis towards butanol-1 oxidation by
incorporating enzymes in various designs of amperometric biosensors is
presented. The biosensors were developed by using commercial graphene
oxide-based screen-printed electrodes and varying enzyme producing strains,
encapsulation approaches (layer-by-layer (LbL) or one-step electrodeposition
(EcD)), layers composition and structure, operating conditions (applied
potential values) and introducing mediators (Meldola Blue and Prussian Blue) or
Pd-nanoparticles (Pd-NPs). Simultaneous analysis/screening of multiple crucial
system parameters during the enzyme engineering process made it possible to
identify, within a period of one month, that four out of twelve proposed
designs demonstrated good signal reproducibility and a linear response (up to 14.6 mM
of butanol) under very low applied potentials (from -0.02 to -0.32 V). Their
mechanical stability was thoroughly investigated by multi-analytical techniques
prior to butanol determination in cell-free samples from an anaerobic butanol
fermentation. The EcD-based biosensor that incorporates ADH, NAD+, Pd-NPs and
Nafion showed no loss of enzyme activity after preparation and demonstrated
capabilities towards low potential (-0.12 V) detection of butanol-1 in
fermentation medium (4 mM) containing multiple electroactive species with
almost 15 times enhanced sensitivity (0.2282 $\mu$A/mM $\pm$ 0.05) when
compared to the LbL design. Furthermore, the ADH-Nafion bonding for the S.
cerevisiae strain was confirmed to be 3 times higher than for E. coli.
|
Electronic states of a correlated material can be effectively modified by
structural variations delivered from a single-crystal substrate. In this
letter, we show that the CrN films grown on MgO (001) substrates have a (001)
orientation, whereas the CrN films on $\alpha$-Al2O3 (0001) substrates are
oriented along (111) direction parallel to the surface normal. Transport
properties of CrN films are remarkably different depending on crystallographic
orientations. The critical thickness for the metal-insulator transition (MIT)
in CrN 111 films is significantly larger than that of CrN 001 films. In
contrast to CrN 001 films without apparent defects, scanning transmission
electron microscopy results reveal that CrN 111 films exhibit strain-induced
structural defects, e.g., periodic horizontal twinning domains, resulting in
increased electron scattering that favors an insulating state.
Understanding the key parameters that determine the electronic properties of
ultrathin conductive layers is highly desirable for future technological
applications.
|
Statistical Hypothesis Testing (SHT) is a class of inference methods whereby
one makes use of empirical data to test a hypothesis and often emit a judgment
about whether to reject it or not. In this paper we focus on the logical aspect
of this strategy, which is largely independent of the adopted school of
thought, at least within the various frequentist approaches. We identify SHT as
taking the form of an unsound argument from Modus Tollens in classical logic,
and, in order to rescue SHT from this difficulty, we propose that it can
instead be grounded in t-norm based fuzzy logics. We reformulate the
frequentists' SHT logic by making use of a fuzzy extension of Modus Tollens to
develop a model of truth valuation for its premises. Importantly, we show that
it is possible to preserve the soundness of Modus Tollens by exploring the
various conventions involved with constructing fuzzy negations and fuzzy
implications (namely, the S and R conventions). We find that under the S
convention, it is possible to conduct the Modus Tollens inference argument
using Zadeh's compositional extension and any possible t-norm. Under the R
convention we find that this is not necessarily the case, but that by mixing
R-implication with S-negation we can salvage the product t-norm, for example.
In conclusion, we have shown that fuzzy logic is a legitimate framework to
discuss and address the difficulties plaguing frequentist interpretations of
SHT.
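To make the S and R conventions mentioned above tangible, the sketch below defines the connectives for the product t-norm and evaluates both implication conventions on toy truth values for a Modus-Tollens-shaped argument; the numeric values and the way the premises are conjoined are illustrative assumptions, not the paper's full valuation model.

```python
def t_norm(a, b):
    """Product t-norm: fuzzy conjunction."""
    return a * b

def s_neg(a):
    """Standard negation used in the S convention."""
    return 1.0 - a

def s_implication(a, b):
    """S-implication S(N(a), b), here with the probabilistic-sum t-conorm."""
    na = s_neg(a)
    return na + b - na * b

def r_implication(a, b):
    """R-implication (residuum) of the product t-norm, i.e. the Goguen implication."""
    return 1.0 if a <= b else b / a

# Toy truth values: p for the antecedent side, q for "the data look typical under H0".
p, q = 0.9, 0.1
print("P -> Q (S convention):", s_implication(p, q))
print("P -> Q (R convention):", r_implication(p, q))
print("premises conjoined (S):", t_norm(s_implication(p, q), s_neg(q)))
print("premises conjoined (R):", t_norm(r_implication(p, q), s_neg(q)))
```

The two conventions assign different degrees to the same implication, which is exactly the kind of choice the abstract explores when asking which combinations keep fuzzy Modus Tollens sound.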
|
We present the first version of a system for interactive generation of
theatre play scripts. The system is based on a vanilla GPT-2 model with several
adjustments, targeting specific issues we encountered in practice. We also list
other issues we encountered but plan to only solve in a future version of the
system. The presented system was used to generate a theatre play script planned
for premiere in February 2021.
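For readers unfamiliar with the setup, a vanilla GPT-2 generation loop of the kind such a system builds on can be sketched with the Hugging Face transformers library as below; the prompt, sampling parameters and the use of the base "gpt2" checkpoint are assumptions for illustration, not the authors' fine-tuned model or configuration.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")   # the described system adapts a GPT-2 variant

prompt = "SCENE 1.\nROBOT: Good evening, ladies and gentlemen.\nHUMAN:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling keeps the dialogue varied; in an interactive system a human operator
# would accept, edit, or regenerate each continuation.
output = model.generate(
    input_ids,
    max_length=120,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```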
|
We consider the revenue maximization problem in social advertising, where a
social network platform owner needs to select seed users for a group of
advertisers, each with a payment budget, such that the total expected revenue
that the owner gains from the advertisers by propagating their ads in the
network is maximized. Previous studies on this problem show that it is
intractable and present approximation algorithms. We revisit this problem from
a fresh perspective and develop novel efficient approximation algorithms, both
under the setting where an exact influence oracle is assumed and under one
where this assumption is relaxed. Our approximation ratios significantly
improve upon the previous ones. Furthermore, we empirically show, using
extensive experiments on four datasets, that our algorithms considerably
outperform the existing methods on both the solution quality and computation
efficiency.
|
A spectral-energy distribution (SED) model for Type Ia supernovae (SNe Ia) is
a critical tool for measuring precise and accurate distances across a large
redshift range and constraining cosmological parameters. We present an improved
model framework, SALT3, which has several advantages over current models
including the leading SALT2 model (SALT2.4). While SALT3 has a similar
philosophy, it differs from SALT2 by having improved estimation of
uncertainties, better separation of color and light-curve stretch, and a
publicly available training code. We present the application of our training
method on a cross-calibrated compilation of 1083 SNe with 1207 spectra. Our
compilation is $2.5\times$ larger than the SALT2 training sample and has
greatly reduced calibration uncertainties. The resulting trained SALT3.K21
model has an extended wavelength range $2000$-$11000$ angstroms (1800 angstroms
redder) and reduced uncertainties compared to SALT2, enabling accurate use of
low-$z$ $I$ and $iz$ photometric bands. Including these previously discarded
bands, SALT3.K21 reduces the Hubble scatter of the low-$z$ Foundation and CfA3
samples by 15% and 10%, respectively. To check for potential systematic
uncertainties we compare distances of low ($0.01<z<0.2$) and high ($0.4<z<0.6$)
redshift SNe in the training compilation, finding an insignificant $2\pm14$
mmag shift between SALT2.4 and SALT3.K21. While the SALT3.K21 model was trained
on optical data, our method can be used to build a model for rest-frame NIR
samples from the Roman Space Telescope. Our open-source training code, public
training data, model, and documentation are available at
https://saltshaker.readthedocs.io/en/latest/, and the model is integrated into
the sncosmo and SNANA software packages.
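Since the abstract notes the model is integrated into sncosmo and SNANA, a hedged usage sketch with sncosmo is shown below; whether the built-in source name is "salt3" (and which retraining it ships) depends on the installed sncosmo version, so treat the source string as an assumption, and the photometry here is synthetic.

```python
import sncosmo
from astropy.table import Table

# Build a SALT3 model; the source name "salt3" is assumed to be registered
# in the installed sncosmo version.
model = sncosmo.Model(source="salt3")
model.set(z=0.05, t0=55000.0, x0=1e-4, x1=0.1, c=0.02)

# Synthetic photometry in a red band within the extended wavelength coverage.
times = [54990.0, 54998.0, 55005.0, 55015.0, 55030.0]
data = Table({
    "time": times,
    "band": ["sdssi"] * len(times),
    "flux": [model.bandflux("sdssi", t, zp=25.0, zpsys="ab") for t in times],
    "fluxerr": [1.0] * len(times),
    "zp": [25.0] * len(times),
    "zpsys": ["ab"] * len(times),
})

# Fit the light-curve parameters back from the synthetic data (redshift held fixed).
result, fitted = sncosmo.fit_lc(data, model, ["t0", "x0", "x1", "c"])
print(fitted.parameters)
```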
|
In the dense anisotropic interior of a supernova, neutrino-neutrino forward
scattering can lead to fast collective neutrino oscillations, which have
striking consequences for flavor-dependent neutrino emission and can be crucial
for the evolution of the supernova and its neutrino signal. The flavor
evolution of such a dense neutrino system is governed by a large number of
coupled nonlinear partial differential equations, which are almost always very
difficult to solve. The triggering, initial linear growth and the conditions
for fast oscillations to occur are understood through linear stability
analysis, but this fails to answer an important question: what is the impact of
fast flavor conversion on observable neutrino fluxes or the supernova explosion
mechanism? This is a significantly harder problem that requires understanding
the nature of the final-state solution in the nonlinear regime. Moving in this
direction, we present one of the first numerical as well as analytical studies
of the coupled flavor evolution of a non-stationary and inhomogeneous dense
neutrino system in the nonlinear regime,
considering one spatial dimension and a spectrum of velocity modes. This work
gives a clear picture of the final state flavor dynamics of such systems
specifying its dependence on space-time coordinates, phase space variables as
well as the lepton asymmetry and thus can have significant implications for the
supernova astrophysics as well as its associated neutrino phenomenology even
for the most realistic scenario.
|
Coherent quantum phase slips are expected to lead to a blockade of dc
conduction in sufficiently narrow superconducting nanowires below a certain
critical voltage. We present measurements of NbN nanowires in which a critical
voltage is not only observed but can also be tuned using a side-gate electrode.
The critical voltage varies periodically as
the applied gate voltage is varied. While the observations are qualitatively as
expected for quantum interference between coherent quantum phase slip elements,
the period of the tuning is orders of magnitude larger than expected on the
basis of simple capacitance considerations. Furthermore, two significant abrupt
changes in the period of the variations during measurements of one nanowire are
observed, an observation which constrains detailed explanations for the
behaviour. The plausibility of an explanation assuming that the behaviour
arises from granular Josephson junctions in the nanowire is also considered.
|
We construct the complementary short-range correlation relativistic
local-density-approximation functional to be used in relativistic
range-separated density-functional theory based on a Dirac-Coulomb Hamiltonian
in the no-pair approximation. For this, we perform relativistic
random-phase-approximation calculations of the correlation energy of the
relativistic homogeneous electron gas with a modified electron-electron
interaction, we study the high-density behavior, and fit the results to a
parametrized expression. The obtained functional should eventually be useful
for electronic-structure calculations of strongly correlated systems containing
heavy elements.
|
We show that if the complement of a Donaldson hypersurface in a closed,
integral symplectic manifold has the homology of a subcritical Stein manifold,
then the hypersurface is of degree one. In particular, this demonstrates a
conjecture by Biran and Cieliebak on subcritical polarisations of symplectic
manifolds. Our proof is based on a simple homological argument using ideas of
Kulkarni-Wood.
|
A density functional theory based computational study has been conducted in
order to investigate the effect of substitution of Cr and Co components by Si
on the structure, mechanical, electronic, and magnetic properties of the high
entropy alloy CrCoNiFe. It is found that the presence of a moderate
concentration of Si substitutes (up to 12.5 %) does not significantly reduce
the structural and mechanical stability of CrCoNiFe while it may modify its
electronic and magnetic properties. Based on that, Si is proposed as a cheap
and functional material for partial substitution of Cr or Co in CrCoNiFe.
|
Argument mining systems often consider contextual information, i.e.
information outside of an argumentative discourse unit, when trained to
accomplish tasks such as argument component identification, classification, and
relation extraction. However, prior work has not carefully analyzed the utility
of different contextual properties in context-aware models. In this work, we
show how two different types of contextual information, local discourse context
and speaker context, can be incorporated into a computational model for
classifying argument components in multi-party classroom discussions. We find
that both context types can improve performance, although the improvements are
dependent on context size and position.
|
The effect of a spatially uniform magnetic field on the shear rheology of a
dilute emulsion of monodispersed ferrofluid droplets, immersed in a
non-magnetizable immiscible fluid, is investigated using direct numerical
simulations. The direction of the applied magnetic field is normal to the shear
flow direction. The droplets' extra stress tensor arising from the presence of
interfacial forces of magnetic nature is modeled on the basis of the seminal
work of G. K. Batchelor, J. Fluid Mech., 41.3 (1970) under the assumptions of a
linearly magnetizable ferrofluid phase and negligible inertia. The results show
that even relatively small magnetic fields can have significant consequences on
the rheological properties of the emulsion due to the magnetic forces that
contribute to deform and orient the droplets towards the direction of the
applied magnetic vector. In particular, we have observed an increase of the
effective (bulk) viscosity and a reversal of the sign of the two normal stress
differences with respect to the case without magnetic field for those
conditions where the magnetic force prevails over the shearing force.
Comparisons between the results of our model with a direct integration of the
viscous stress have provided an indication of its reliability to predict the
effective viscosity of the suspension. Moreover, this latter quantity has been
found to behave as a monotonic increasing function of the applied magnetic
field for constant shearing flows ("magneto-thickening" behaviour), which
allowed us to infer a simple constitutive equation describing the emulsion
viscosity.
|
A good understanding of the confinement of energetic ions in non-axisymmetric
magnetic fields is key for the design of reactors based on the stellarator
concept. In this work, we develop a model that, based on the radially-local
bounce-averaged drift-kinetic equation, classifies orbits and succeeds in
predicting configuration-dependent aspects of the prompt losses of energetic
ions in stellarators. Such a model could in turn be employed in the
optimization stage of the design of new devices.
|
State-of-the-art visual analytics techniques in application domains are often
designed by VA professionals based on qualitative requirements collected from
end users. These VA techniques may not leverage users' domain knowledge about how
to achieve their analytical goals. In this position paper, we propose a
user-driven design process of VA applications centered around a new concept
called analytical representation (AR). AR features a formal abstraction of user
requirements and their desired analytical trails for a certain VA application,
and is independent of the actual visualization design. A conceptual graph schema is
introduced to define the AR abstraction, which can be created manually or
constructed by semi-automated tools. Designing VA applications with AR provides
a shared opportunity for both optimal analysis blueprint from the perspective
of end users and optimal visualization/algorithm from the perspective of VA
designers. We demonstrate the usage of the design process in two case studies.
|