In this paper, we prove that the Fechner and Stevens laws are equivalent
(coincide up to isomorphism). Therefore, the problem does not exist.
|
Deep learning has achieved promising segmentation performance on 3D left
atrium MR images. However, annotations for segmentation tasks are expensive
and difficult to obtain. In this paper, we introduce a novel
hierarchical consistency regularized mean teacher framework for 3D left atrium
segmentation. In each iteration, the student model is optimized concurrently by
multi-scale deep supervision and hierarchical consistency regularization.
Extensive experiments have shown that our method achieves competitive
performance as compared with full annotation, outperforming other
state-of-the-art semi-supervised segmentation methods.
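As a minimal sketch of the two generic ingredients named above (a mean-teacher EMA update and a multi-scale consistency term), the snippet below is illustrative only: the function names, the softmax-MSE consistency and the EMA rate are our assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def update_teacher(student, teacher, alpha=0.99):
    # Standard mean-teacher update: teacher weights track an exponential
    # moving average of the student weights.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.data.mul_(alpha).add_(s_p.data, alpha=1 - alpha)

def hierarchical_consistency(student_outs, teacher_outs):
    # Consistency regularization applied at every decoder scale: the student's
    # softmax prediction is pulled toward the (detached) teacher prediction.
    loss = 0.0
    for s, t in zip(student_outs, teacher_outs):
        loss = loss + F.mse_loss(torch.softmax(s, dim=1),
                                 torch.softmax(t, dim=1).detach())
    return loss / len(student_outs)
```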
|
We present a collection recommender system that can automatically create and
recommend collections of items at a user level. Unlike regular recommender
systems, which output top-N relevant items, a collection recommender system
outputs collections of items such that the items in the collections are
relevant to a user, and the items within a collection follow a specific theme.
Our system builds on top of the user-item representations learnt by item
recommender systems. We employ dimensionality reduction and clustering
techniques along with intuitive heuristics to create collections with their
ratings and titles.
We test these ideas in a real-world setting of music recommendation, within a
popular music streaming service. We find that there is a 2.3x increase in
recommendation-driven consumption when recommending collections over items.
Further, it results in more effective utilization of the available real estate
and leads to recommending a larger and more diverse set of items. To our
knowledge, these are the first experiments of their kind at such a large scale.
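The sketch below illustrates the generic pipeline described above (clustering of user-specific item embeddings into themed collections); the function name, the PCA/KMeans choice, the mean-relevance rating heuristic and all parameter values are illustrative assumptions rather than the production system's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def build_collections(item_embeddings, item_scores, n_collections=5, top_k=200):
    # Keep the user's top-k most relevant items, reduce the embedding
    # dimensionality, then cluster so each cluster becomes one themed collection.
    top = np.argsort(-item_scores)[:top_k]
    reduced = PCA(n_components=16).fit_transform(item_embeddings[top])
    labels = KMeans(n_clusters=n_collections, n_init=10).fit_predict(reduced)
    collections = {c: top[labels == c] for c in range(n_collections)}
    # Simple heuristic: rate a collection by the mean relevance of its items.
    ratings = {c: float(item_scores[idx].mean()) for c, idx in collections.items()}
    return collections, ratings
```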
|
The number and importance of AI-based systems in all domains are growing. With
the pervasive use of and dependence on AI-based systems, the quality of these
systems becomes essential for their practical usage. However, quality assurance
for AI-based systems is an emerging area that has not been well explored and
requires collaboration between the SE and AI research communities. This paper
discusses terminology and challenges of quality assurance for AI-based systems
to set a baseline for that purpose. To this end, we define basic concepts and
characterize AI-based systems along the three dimensions of artifact type,
process, and quality characteristics. Furthermore, we elaborate on the key
challenges of (1) understandability and interpretability of AI models, (2) lack
of specifications and defined requirements, (3) need for validation data and
test input generation, (4) defining expected outcomes as test oracles, (5)
accuracy and correctness measures, (6) non-functional properties of AI-based
systems, (7) self-adaptive and self-learning characteristics, and (8) dynamic
and frequently changing environments.
|
An action functional is developed for nonlinear dislocation dynamics. This
serves as a first step towards the application of effective field theory in
physics to evaluate its potential in obtaining a macroscopic description of
dislocation dynamics describing the plasticity of crystalline solids.
Connections arise between the continuum mechanics and material science of
defects in solids, effective field theory techniques in physics, and fracton
tensor gauge theories.
|
Cyber Physical Systems (CPS) are characterized by their ability to integrate
the physical and information or cyber worlds. Their deployment in critical
infrastructure has demonstrated the potential to transform the world. However,
harnessing this potential is limited by their critical nature and the
far-reaching effects of cyber attacks on humans, infrastructure and the environment.
Cyber security concerns in CPS arise largely from the process of sending
information from sensors to actuators over the wireless communication medium,
thereby widening the attack surface. Traditionally, CPS security has been
investigated from the perspective of preventing intruders from gaining access
to the system using cryptography and other access control techniques. Most
research has therefore focused on the detection of attacks in CPS.
However, with the number of adversaries increasing, it is becoming more difficult
to fully protect CPS from adversarial attacks, hence the need to focus on making
CPS resilient. Resilient CPS are designed to withstand disruptions and remain
functional despite the operation of adversaries. One of the dominant
methodologies explored for building resilient CPS relies on machine
learning (ML) algorithms. However, motivated by recent research in adversarial
ML, we posit that ML algorithms for securing CPS must themselves be resilient.
This paper is therefore aimed at comprehensively surveying the interactions
between resilient CPS using ML and resilient ML when applied in CPS. The paper
concludes with a number of research trends and promising future research
directions. Furthermore, this paper gives readers a thorough understanding of
recent advances in ML-based security and in securing ML for CPS, including
countermeasures, as well as research trends in this active research area.
|
I point out fatal mathematical errors in the paper "Quantum correlations are
weaved by the spinors of the Euclidean primitives" by Joy Christian, published
(2019) in the journal Royal Society Open Science.
|
This article presents an algorithm for reducing measurement uncertainty of
one physical quantity when given oversampled measurements of two physical
quantities with correlated noise. The algorithm assumes that the aleatoric
measurement uncertainty in both physical quantities follows a Gaussian
distribution and relies on sampling faster than it is possible for the
measurand (the true value of the physical quantity that we are trying to
measure) to change (due to the system thermal time constant) to calculate the
parameters of the noise distribution. In contrast to the Kalman and particle
filters, which respectively require state update equations and a map of one
physical quantity, our algorithm requires only the oversampled sensor
measurements. When applied to temperature-compensated humidity sensors, it
provides reduced uncertainty in humidity estimates from correlated temperature
and humidity measurements. In an experimental evaluation, the algorithm
achieves an average uncertainty reduction of 10.3 %. The algorithm incurs an
execution time overhead of 5.3 % when compared to the minimum algorithm
required to measure and calculate the uncertainty. Detailed instruction-level
emulation of a C-language implementation compiled to the RISC-V architecture
shows that the uncertainty reduction program required 0.05 % more instructions
per iteration than the minimum operations required to calculate the
uncertainty.
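The following is a minimal sketch of the underlying idea, assuming jointly Gaussian noise whose parameters are estimated from an oversampled window in which the measurand is effectively constant; the function name and the conditional-Gaussian form are our illustrative assumptions, not the article's exact algorithm.

```python
import numpy as np

def reduced_humidity_uncertainty(temp_samples, hum_samples):
    # Oversampled window in which the true values are assumed constant, so the
    # sample scatter reflects the (correlated, Gaussian) measurement noise.
    t = np.asarray(temp_samples, dtype=float)
    h = np.asarray(hum_samples, dtype=float)
    rho = np.corrcoef(t, h)[0, 1]                 # estimated noise correlation
    t_mean, t_std = t.mean(), t.std(ddof=1)
    h_mean, h_std = h.mean(), h.std(ddof=1)
    # Conditional-Gaussian update: conditioning on the latest temperature
    # deviation shrinks the humidity noise variance by a factor (1 - rho**2).
    h_est = h_mean + rho * (h_std / t_std) * (t[-1] - t_mean)
    h_unc = h_std * np.sqrt(1.0 - rho**2)
    return h_est, h_unc
```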
|
MOA-2006-BLG-074 was selected as one of the most promising planetary
candidates in a retrospective analysis of the MOA collaboration: its asymmetric
high-magnification peak can be perfectly explained by a source passing across a
central caustic deformed by a small planet. However, after a detailed analysis
of the residuals, we have realized that a single lens and a source orbiting
with a faint companion provides a more satisfactory explanation for all the
observed deviations from a Paczynski curve and is the only physically acceptable
interpretation. Indeed, the orbital motion of the source is constrained enough
to allow a very good characterization of the binary source from the
microlensing light curve. The case of MOA-2006-BLG-074 suggests that the
so-called xallarap effect must be taken seriously in any attempts to obtain
accurate planetary demographics from microlensing surveys.
|
Iteratively reweighted least square (IRLS) is a popular approach to solve
sparsity-enforcing regression problems in machine learning. State of the art
approaches are more efficient but typically rely on specific coordinate pruning
schemes. In this work, we show how a surprisingly simple reparametrization of
IRLS, coupled with a bilevel resolution (instead of an alternating scheme), is
able to achieve top performance on a wide range of sparsity-inducing penalties
(such as Lasso, group Lasso and trace norm regularizations), regularization
strengths (including hard constraints), and design matrices (ranging from
correlated designs to differential operators). Similarly to IRLS, our method
only involves solving linear systems but, in sharp contrast, corresponds to the
minimization of a smooth function. Despite being non-convex, we show that there
are no spurious minima and that saddle points are "ridable", so that there
always exists a
descent direction. We thus advocate for the use of a BFGS quasi-Newton solver,
which makes our approach simple, robust and efficient. We perform a numerical
benchmark of the convergence speed of our algorithm against state of the art
solvers for Lasso, group Lasso, trace norm and linearly constrained problems.
These results highlight the versatility of our approach, removing the need to
use different solvers depending on the specificity of the ML problem under
study.
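For orientation, the sketch below shows the plain IRLS baseline for the Lasso that the reparametrization builds on; it is not the bilevel method of the paper, and the ridge warm start, fixed iteration count and smoothing constant are illustrative choices.

```python
import numpy as np

def irls_lasso(X, y, lam=0.1, n_iter=100, eps=1e-8):
    # Plain IRLS for min_w 0.5*||y - Xw||^2 + lam*||w||_1: each iteration
    # solves a ridge-like linear system with weights 1 / (|w_j| + eps).
    n, d = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    w = np.linalg.solve(XtX + lam * np.eye(d), Xty)   # ridge warm start
    for _ in range(n_iter):
        W = np.diag(1.0 / (np.abs(w) + eps))
        w = np.linalg.solve(XtX + lam * W, Xty)
    return w
```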
|
Self-interacting dark matter (SIDM) models offer one way to reconcile
inconsistencies between observations and predictions from collisionless cold
dark matter (CDM) models on dwarf-galaxy scales. In order to incorporate the
effects of both baryonic and SIDM interactions, we study a suite of
cosmological-baryonic simulations of Milky-Way (MW)-mass galaxies from the
Feedback in Realistic Environments (FIRE-2) project where we vary the SIDM
self-interaction cross-section $\sigma/m$. We compare the shape of the main
dark matter (DM) halo at redshift $z=0$ predicted by SIDM simulations (at
$\sigma/m=0.1$, $1$, and $10$ cm$^2$ g$^{-1}$) with CDM simulations using the
same initial conditions. In the presence of baryonic feedback effects, we find
that SIDM models do not produce the large differences in the inner structure of
MW-mass galaxies predicted by SIDM-only models. However, we do find that the
radius where the shape of the total mass distribution begins to differ from
that of the stellar mass distribution is dependent on $\sigma/m$. This
transition could potentially be used to set limits on the SIDM cross-section in
the MW.
|
We reanalyze the experimental NMC data on the nonsinglet structure function
$F_2^p-F_2^n$ and E866 data on the nucleon sea asymmetry $\bar{d}/\bar{u}$
using the truncated moments approach elaborated in our previous papers. With
the help of a special truncated sum, one can overcome the problem of the
unavoidable experimental restrictions on the Bjorken $x$ and effectively study
the fundamental sum rules for the parton distributions and structure functions.
Using only the data from the measured region of $x$, we obtain the Gottfried
sum $\int_0^1 F_2^{ns}/x\, dx$ and the integrated nucleon sea asymmetry
$\int_0^1 (\bar{d}-\bar{u})\, dx$. We compare our results with the reported
experimental values and with the predictions obtained for different global
parametrizations for the parton distributions. We also discuss the discrepancy
between the NMC and E866 results on $\int_0^1 (\bar{d}-\bar{u})\, dx$. We
demonstrate that this discrepancy can be resolved by taking into account the
higher-twist effects.
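For reference, the standard parton-model relation behind these quantities, together with the truncated sum restricted to the measured region (the limits $x_{\min}$ and $x_{\max}$ are placeholders for the experimental cuts, not the values used in the analysis), reads
$$ S_G \equiv \int_0^1 \frac{F_2^p(x)-F_2^n(x)}{x}\,dx = \frac{1}{3} - \frac{2}{3}\int_0^1 \left[\bar{d}(x)-\bar{u}(x)\right]dx, \qquad S_G(x_{\min},x_{\max}) \equiv \int_{x_{\min}}^{x_{\max}} \frac{F_2^p(x)-F_2^n(x)}{x}\,dx . $$
The truncated moments approach ties the measurable $S_G(x_{\min},x_{\max})$ to the full sum without extrapolating the data into the unmeasured regions of $x$.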
|
The emission properties of tin plasmas, produced by the irradiation of
preformed liquid tin targets by several-ns-long 2-$\mu$m-wavelength laser
pulses, are studied in the extreme ultraviolet (EUV) regime. In a two-pulse
scheme, a pre-pulse laser is first used to deform tin microdroplets into thin,
extended disks before the main (2 $\mu$m) pulse creates the EUV-emitting plasma.
Irradiating 30- to 300-$\mu$m-diameter targets with 2-$\mu$m laser pulses, we
find that the efficiency in creating EUV light around 13.5 nm follows the
fraction of laser light that overlaps with the target. Next, the effects of a
change in 2-$\mu$m drive laser intensity (0.6-1.8$\times 10^{11}$ W/cm$^2$) and
pulse duration (3.7-7.4 ns) are studied. It is found that the angular dependence
of the emission of light within a 2\% bandwidth around 13.5 nm and within the
backward 2$\pi$ hemisphere around the incoming laser beam is almost independent
of intensity and duration of the 2-$\mu$m drive laser. With increasing target
diameter, the emission in this 2\% bandwidth becomes increasingly anisotropic,
with a greater fraction of light being emitted into the hemisphere of the
incoming laser beam. For direct comparison, a similar set of experiments is
performed with a 1-$\mu$m-wavelength drive laser. Emission spectra, recorded in
a 5.5-25.5 nm wavelength range, show significant self-absorption of light around
13.5 nm in the 1-$\mu$m case, while in the 2-$\mu$m case only an opacity-related
broadening of the spectral feature at 13.5 nm is observed. This work
demonstrates the enhanced capabilities and performance of 2-$\mu$m-driven
plasmas produced from disk targets when compared to 1-$\mu$m-driven plasmas,
providing strong motivation for the use of 2-$\mu$m lasers as drive lasers in
future high-power sources of EUV light.
|
Advancements in digital technologies have enabled researchers to develop
a variety of computational music applications. Such applications are required
to capture, process, and generate data related to music. Therefore, it is
important to digitally represent music in a music-theoretic and concise manner.
Existing approaches for representing music utilize music theory ineffectively.
In this paper, we address the disconnect between music theory and computational
music by developing an open-source representation tool based on music theory.
Through a wide range of use cases, we analyze classical music pieces to show
the usefulness of the developed music embedding.
|
Network function virtualization (NFV) and content caching are two promising
technologies that hold great potential for network operators and designers.
This paper optimizes the deployment of NFV and content caching in 5G networks
and focuses on the associated power consumption savings. In addition, it
introduces an approach to combine content caching with NFV in one integrated
architecture for energy aware 5G networks. A mixed integer linear programming
(MILP) model has been developed to minimize the total power consumption by
jointly optimizing the cache size, virtual machine (VM) workload, and the
locations of both cache nodes and VMs. The results were investigated under the
impact of core network virtual machines (CNVMs) inter-traffic. The results show
that the optical line terminal (OLT) access network nodes are the optimum
location for content caching and for hosting VMs during busy times of the day
whilst IP over WDM core network nodes are the optimum locations for caching and
VM placement during off-peak time. Furthermore, the results reveal that a
virtualization-only approach is better than a caching-only approach for video
streaming services where the virtualization-only approach compared to
caching-only approach, achieves a maximum power saving of 7% (average 5%) when
no CNVMs inter-traffic is considered and 6% (average 4%) with CNVMs
inter-traffic at 10% of the total backhaul traffic. On the other hand, the
integrated approach has a maximum power saving of 15% (average 9%) with and
without CNVMs inter-traffic compared to the virtualization-only approach, and
it achieves a maximum power saving of 21% (average 13%) without CNVMs
inter-traffic and 20% (average 12%) when CNVMs inter-traffic is considered
compared with the caching-only approach. In order to validate the MILP models
and achieve real-time operation in our approaches, a heuristic was developed.
|
A symbolic method for solving linear recurrences of combinatorial and
statistical interest is introduced. This method essentially relies on a
representation of polynomial sequences as moments of a symbol that behaves like
a random variable but with no reference to any probability space. We
give several examples of applications and state an explicit form for the class
of linear recurrences involving Sheffer sequences satisfying a special initial
condition. The results presented here can be easily implemented in symbolic
software.
|
By Hacon-McKernan-Xu, there is a positive lower bound in each dimension for
the volume of all klt varieties with ample canonical class. We show that these
bounds must go to zero extremely fast as the dimension increases, by
constructing a klt $n$-fold with ample canonical class whose volume is less
than $1/2^{2^n}$. These examples should be close to optimal.
We also construct a klt Fano variety of each dimension $n$ such that
$H^0(X,-mK_X)=0$ for all $1\leq m < b$ with $b$ roughly $2^{2^n}$. Here again
there is some bound in each dimension, by Birkar's theorem on boundedness of
complements, and we are showing that the bound must increase extremely fast
with the dimension.
|
The effective low-energy late-time description of many body systems near
thermal equilibrium provided by classical hydrodynamics in terms of dissipative
transport phenomena receives important corrections once the effects of
stochastic fluctuations are taken into account. One such physical effect is the
occurrence of long-time power law tails in correlation functions of conserved
currents. In the hydrodynamic regime $\vec{k} \rightarrow 0$ this amounts to
non-analytic dependence of the correlation functions on the frequency $\omega$.
In this article, we consider a relativistic fluid with a conserved global
$U(1)$ charge in the presence of a strong background magnetic field, and
compute the long-time tails in correlation functions of the stress tensor. The
presence of the magnetic field renders the system anisotropic. In the absence
of the magnetic field, there are three out-of-equilibrium transport parameters
that arise at the first order in the hydrodynamic derivative expansion, all of
which are dissipative. In the presence of a background magnetic field, there
are ten independent out-of-equilibrium transport parameters at the first order,
three of which are non-dissipative and the rest are dissipative. We provide the
most general linearized equations about a given state of thermal equilibrium
involving the various transport parameters in the presence of a magnetic field,
and use them to compute the long-time tails for the fluid.
|
Contending with hate speech in social media is one of the most challenging
social problems of our time. There are various types of anti-social behavior in
social media. Foremost among them is aggressive behavior, which causes many
social issues by affecting the social lives and mental health of social media
users. In this paper, we propose an end-to-end ensemble-based architecture to
automatically identify and classify aggressive tweets. Tweets are classified
into three categories - Covertly Aggressive, Overtly Aggressive, and
Non-Aggressive. The proposed architecture is an ensemble of smaller subnetworks
that are able to characterize the feature embeddings effectively. We
demonstrate qualitatively that each of the smaller subnetworks is able to learn
unique features. Our best model is an ensemble of Capsule Networks that achieves
a 65.2% F1 score on the Facebook test set, a performance gain of 0.95% over the
TRAC-2018 winners. The code and the model weights are
publicly available at
https://github.com/parthpatwa/Hater-O-Genius-Aggression-Classification-using-Capsule-Networks.
|
We study the transport properties for a family of geometrically frustrated
models on the triangular lattice with an interaction scale far exceeding the
single-particle bandwidth. Starting from the interaction-only limit, which can
be solved exactly, we analyze the transport and thermodynamic behavior as a
function of filling and temperature at the leading non-trivial order in the
single-particle hopping. Over a broad range of intermediate temperatures, we
find evidence of a dc resistivity scaling linearly with temperature and with
typical values far exceeding the quantum of resistance, $h/e^2$. At a sequence
of commensurate fillings, the bad-metallic regime eventually crosses over into
interaction induced insulating phases in the limit of low temperatures. We
discuss the relevance of our results to experiments in cold-atom and moir\'e
heterostructure based platforms.
|
The aim of this note is to completely determine the second homology group of
the special queer Lie superalgebra $\mathfrak{sq}_n(R)$ coordinatized by a
unital associative superalgebra $R$, which will be achieved via an isomorphism
between the special linear Lie superalgebra $\mathfrak{sl}_{n}(R\otimes Q_1)$
and the special queer Lie superalgebra $\mathfrak{sq}_n(R)$.
|
In a multiple linear regression model, the algebraic formula of the
decomposition theorem explains the relationship between the univariate
regression coefficients and the partial regression coefficients using geometry.
It is shown that each univariate regression coefficient decomposes into the
corresponding partial regression coefficients according to the parallelogram
rule. Multicollinearity is then analyzed with the help of the decomposition
theorem. It is also shown that insignificance of the partial regression
coefficients of important explanatory variables is a sample phenomenon, whereas
deviations of the coefficient signs from expectation may be caused either by
the population structure between the explained and explanatory variables or by
sample selection. At present, methods for diagnosing multicollinearity consider
only the correlation among explanatory variables, so these methods are largely
unreliable, and handling multicollinearity is blind as long as its causes are
not distinguished. Increasing the sample size can help identify the causes of
multicollinearity, and the difference method can play an auxiliary role.
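As an illustration of the kind of identity the decomposition theorem generalizes, consider the standard two-regressor case (the notation here is ours): with $b_{y1}$ the univariate coefficient from regressing $y$ on $x_1$ alone, $b_{y1\cdot2}$ and $b_{y2\cdot1}$ the partial coefficients from the multiple regression of $y$ on $x_1$ and $x_2$, and $b_{21}$ the coefficient from regressing $x_2$ on $x_1$,
$$ b_{y1} = b_{y1\cdot2} + b_{y2\cdot1}\, b_{21} . $$
The univariate coefficient thus splits into a direct (partial) contribution and an indirect contribution routed through the correlated regressor, which is the vector addition behind the parallelogram picture.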
|
Representing the earliest stage of planet formation, massive, optically thick,
and gas-rich protoplanetary disks provide key insights into the physics of star and
planet formation. When viewed edge-on, high resolution images offer a unique
opportunity to study both the radial and vertical structures of these disks and
relate this to vertical settling, radial drift, grain growth, and changes in
the midplane temperatures. In this work, we present multi-epoch HST and Keck
scattered light images, and an ALMA 1.3 mm continuum map for the remarkably
flat edge-on protoplanetary disk SSTC2DJ163131.2-242627, a young solar-type
star in $\rho$ Ophiuchus. We model the 0.8 $\mu$m and 1.3 mm images in separate
MCMC runs to investigate the geometry and dust properties of the disk using the
MCFOST radiative transfer code. In scattered light, we are sensitive to the
smaller dust grains in the surface layers of the disk, while the sub-millimeter
dust continuum observations probe larger grains closer to the disk midplane. An
MCMC run combining both datasets using a covariance-based log-likelihood
estimation was marginally successful, implying insufficient complexity in our
disk model. The disk is well characterized by a flared disk model with an
exponentially tapered outer edge viewed nearly edge-on, though some degree of
dust settling is required to reproduce the vertically thin profile and lack of
apparent flaring. A colder than expected disk midplane, evidence for dust
settling, and residual radial substructures all point to a more complex radial
density profile to be probed with future, higher resolution observations.
|
Silicon ferroelectric field-effect transistors (FeFETs) with a low-k
interfacial layer (IL) between the ferroelectric gate stack and the silicon channel
suffer from high write voltage, limited write endurance and large
read-after-write latency due to early IL breakdown and charge trapping and
detrapping at the interface. We demonstrate low-voltage, high-speed memory
operation with high write endurance using an IL-free back-end-of-line (BEOL)
compatible FeFET. We fabricate IL-free FeFETs with 28 nm channel length and
126 nm width under a thermal budget <400 °C by integrating a 5 nm thick
Hf0.5Zr0.5O2 gate stack with an amorphous Indium Tungsten Oxide (IWO)
semiconductor channel. We report a 1.2 V memory window and a read current window
of 10^5 for program and erase, a write latency of 20 ns with +/-2 V write pulses,
a read-after-write latency <200 ns, write endurance exceeding 5x10^10 cycles,
and 2-bit/cell programming
capability. Array-level analysis establishes IL-free BEOL FeFET as a promising
candidate for logic-compatible high-performance on-chip buffer memory and
multi-bit weight cell for compute-in-memory accelerators.
|
We prove the asymptotic functional Poisson laws in the total variation norm
and obtain estimates of the corresponding convergence rates for a large class
of hyperbolic dynamical systems. These results generalize the ones obtained
before in this area. Applications to intermittent solenoids, Axiom A systems,
H\'enon attractors and billiards are also considered.
|
Software engineering educators are continually challenged by rapidly evolving
concepts, technologies, and industry demands. Due to the omnipresence of
software in a digitalized society, higher education institutions (HEIs) have to
educate students such that they learn how to learn, and such that they are
equipped with profound basic knowledge and with the latest knowledge about modern
software and system development. Since industry demands change constantly, HEIs
are challenged in meeting such current and future demands in a timely manner.
This paper analyzes the current state of practice in software engineering
education. Specifically, we want to compare contemporary education with
industrial practice to understand if frameworks, methods and practices for
software and system development taught at HEIs reflect industrial practice. For
this, we conducted an online survey and collected information about 67 software
engineering courses. Our findings show that development approaches taught at
HEIs quite closely reflect industrial practice. We also found that the choice
of what process to teach is sometimes driven by the wish to make a course
successful. Especially when this happens for project courses, it could be
beneficial to put more emphasis on building learning sequences with other
courses.
|
This study explores the potential of modern implicit solvers for stochastic
partial differential equations in the simulation of real-time complex Langevin
dynamics. Not only do these methods offer asymptotic stability, rendering the
issue of runaway solutions moot, but they also allow us to simulate at
comparatively large Langevin time steps, leading to lower computational cost. We
compare different ways of regularizing the underlying path integral and
estimate the errors introduced due to the finite Langevin time. Based on that
insight, we implement benchmark (non-)thermal simulations of the quantum
anharmonic oscillator on the canonical Schwinger-Keldysh contour of short
real-time extent.
|
In this article, we consider a class of finite rank perturbations of Toeplitz
operators that have simple eigenvalues on the unit circle. Under a suitable
assumption on the behavior of the essential spectrum, we show that such
operators are power bounded. The problem originates in the approximation of
hyperbolic partial differential equations with boundary conditions by means of
finite difference schemes. Our result gives a positive answer to a conjecture
by Trefethen, Kreiss and Wu that only a weak form of the so-called Uniform
Kreiss-Lopatinskii Condition is sufficient to imply power boundedness.
|
We address the problem of exposure correction of dark, blurry and noisy
images captured in low-light conditions in the wild. Classical image-denoising
filters work well in the frequency space but are constrained by several factors
such as the correct choice of thresholds, frequency estimates etc. On the other
hand, traditional deep networks are trained end-to-end in the RGB space by
formulating this task as an image-translation problem. However, this is done
without any explicit constraints on the inherent noise of the dark images and
thus produces noisy and blurry outputs. To this end, we propose a DCT/FFT-based
multi-scale loss function which, when combined with traditional losses, trains
a network to translate the important features for visually pleasing output. Our
loss function is end-to-end differentiable, scale-agnostic, and generic; i.e.,
it can be applied to both RAW and JPEG images in most existing frameworks
without additional overhead. Using this loss function, we report significant
improvements over the state-of-the-art using quantitative metrics and
subjective tests.
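A hedged sketch of what such a frequency-domain multi-scale loss could look like is given below; the FFT-magnitude comparison, the pooling-based scales and the plain averaging are our illustrative assumptions, and the transform, scales and weighting used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def multiscale_fft_loss(pred, target, scales=(1, 2, 4)):
    # Compare FFT magnitudes of prediction and target at several spatial
    # scales (obtained by average pooling) and average the L1 gaps.
    loss = 0.0
    for s in scales:
        p = F.avg_pool2d(pred, s) if s > 1 else pred
        t = F.avg_pool2d(target, s) if s > 1 else target
        loss = loss + F.l1_loss(torch.abs(torch.fft.fft2(p)),
                                torch.abs(torch.fft.fft2(t)))
    return loss / len(scales)
```

Because it only compares spectra, such a term can be added to RAW- or JPEG-space reconstruction losses without changing the rest of the training setup.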
|
The \textit{node reliability} of a graph $G$ is the probability that at least
one node is operational and that the operational nodes can all communicate in
the subgraph that they induce, given that the edges are perfectly reliable but
each node operates independently with probability $p\in[0,1]$. We show that,
unlike for many other notions of graph reliability, the number of maximal intervals
of decrease in $[0,1]$ is unbounded, and that there can be arbitrarily many
inflection points in the interval as well.
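For a concrete instance of the definition (our example, not taken from the paper): for the path $P_3$ on vertices $a\!-\!b\!-\!c$, the operational subsets inducing a nonempty connected subgraph are the three singletons, the two adjacent pairs $\{a,b\}$ and $\{b,c\}$, and the full vertex set, so
$$ \mathrm{Rel}(P_3;p) = 3p(1-p)^2 + 2p^2(1-p) + p^3 . $$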
|
For the Minkowski question mark function $?(x)$ we consider the derivative of the
function $f_n(x) = \underbrace{?(?(\ldots?}_{\text{$n$ times}}(x)))$. Apart from
obvious cases (rational numbers for example) it is non-trivial to find explicit
examples of numbers $x$ for which $f'_n(x)=0$. In this paper we present a set
of irrational numbers, such that for every element $x_0$ of this set and for
any $n\in\mathbb{Z}_+$ one has $f'_n(x_0)=0$.
|
Since their inception, learning techniques under the Reservoir Computing
paradigm have shown a great modeling capability for recurrent systems without
the computing overheads required for other approaches. Among them, different
flavors of echo state networks have attracted much attention over time, mainly
due to the simplicity and computational efficiency of their learning algorithm.
However, these advantages do not compensate for the fact that echo state
networks remain black-box models whose decisions cannot be easily explained
to a general audience. This work addresses this issue by conducting an
explainability study of Echo State Networks when applied to learning tasks with
time series, image and video data. Specifically, the study proposes three
different techniques capable of eliciting understandable information about the
knowledge grasped by these recurrent models, namely, potential memory, temporal
patterns and pixel absence effect. Potential memory addresses questions related
to the effect of the reservoir size in the capability of the model to store
temporal information, whereas temporal patterns unveil the recurrent
relationships captured by the model over time. Finally, the pixel absence effect
attempts at evaluating the effect of the absence of a given pixel when the echo
state network model is used for image and video classification. We showcase the
benefits of our proposed suite of techniques over three different domains of
applicability: time series modeling, image and, for the first time in the
related literature, video classification. Our results reveal that the proposed
techniques not only allow for an informed understanding of the way these models
work, but also serve as diagnostic tools capable of detecting issues inherited
from the data (e.g., the presence of hidden bias).
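To make the object of study concrete, here is a minimal echo state network state-update sketch; the reservoir size, weight ranges and spectral-radius scaling are illustrative, and the readout (fit by linear regression on the collected states) is omitted. The "potential memory" analysis essentially asks how the stored temporal information changes as n_reservoir varies.

```python
import numpy as np

def esn_states(inputs, n_reservoir=200, spectral_radius=0.9, seed=0):
    # inputs: array of shape (T, n_in). Only the fixed random reservoir is
    # simulated here; a linear readout would be trained on the returned states.
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_in))
    W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state scaling
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)
```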
|
We point out qualitatively different possibilities on the role of
CP-conserving processes in generating cosmological particle-antiparticle
asymmetries, with illustrative examples from models in leptogenesis and
asymmetric dark matter production. In particular, we consider scenarios in
which the CP-violating and CP-conserving processes are either both decays or
both scatterings, thereby being naturally of comparable rates. This is in
contrast to the previously considered CP-conserving processes in models of
leptogenesis in different see-saw mechanisms, in which the CP-conserving
scatterings typically have lower rates compared to the CP-violating decays, due
to a Boltzmann suppression. We further point out that the CP-conserving
processes can play a dual role if the asymmetry is generated in the mother
sector itself, in contrast to the conventional scenarios in which it is
generated in the daughter sector. This is because the CP-conserving processes
initially suppress the asymmetry generation by controlling the
out-of-equilibrium number densities of the bath particles, but subsequently
modify the ratio of particle anti-particle yields at the present epoch by
eliminating the symmetric component of the bath particles through
pair-annihilations, leading to a competing effect stemming from the same
process at different epochs. We find that the asymmetric yields for relevant
particle-antiparticle systems can vary by orders of magnitude depending upon
the relative size of the CP-conserving and violating reaction rates.
|
Magnetic field-line reconnection is a universal plasma process responsible
for the conversion of magnetic field energy into plasma heating and charged
particle acceleration. Solar flares and Earth's magnetospheric substorms are the
two most investigated dynamical systems where magnetic reconnection is believed
to be responsible for global magnetic field reconfiguration and energization of
plasma populations. Such a reconfiguration includes the formation of long-lived
current systems connecting the primary energy release region and the cold dense
conductive plasma of the photosphere/ionosphere. In both flares and substorms the
evolution of this current system correlates with the formation and dynamics of
energetic particle fluxes. Our study is focused on this similarity between
flares and substorms. Using a wide range of datasets available for flare and
substorm investigations, we qualitatively compare the dynamics of currents and
energetic particle fluxes for one flare and one substorm. We show that there
is a clear correlation between energetic particle bursts (associated with
energy release due to magnetic reconnection) and magnetic field
reconfiguration/formation of the current system. We then discuss how datasets of
in-situ measurements in the magnetospheric substorm can help in the interpretation
of datasets gathered for the solar flare.
|
The design of provably correct controllers for continuous-state stochastic
systems crucially depends on approximate finite-state abstractions and their
accuracy quantification. For this quantification, one generally uses
approximate stochastic simulation relations, whose constant precision limits
the achievable guarantees on the control design. This limitation especially
affects higher dimensional stochastic systems and complex formal
specifications. This work allows for variable precision by defining a
simulation relation that contains multiple precision layers. For bi-layered
simulation relations, we develop a robust dynamic programming approach yielding
a lower bound on the satisfaction probability of temporal logic specifications.
We illustrate the benefit of bi-layered simulation relations for linear
stochastic systems in an example.
|
We are concerned with interior and global gradient estimates for solutions to
a class of singular quasilinear elliptic equations with measure data, whose
prototype is given by the $p$-Laplace equation $-\Delta_p u=\mu$ with $p\in
(1,2)$. The cases when $p\in \big(2-\frac 1 n,2\big)$ and $p\in
\big(\frac{3n-2}{2n-1},2-\frac{1}{n}\big]$ were studied in [9] and [22],
respectively. In this paper, we improve the results in [22] and address the
open case when $p\in \big(1,\frac{3n-2}{2n-1}\big]$. Interior and global
modulus of continuity estimates of the gradients of solutions are also
established.
|
A system of interacting classical oscillators is discussed, similar to the
quantum mechanical system of a discrete energy level interacting with a
quasi-continuum of energy states considered by Fano. The limit of a continuous
spectrum is analyzed, together with the possible connection of the problem under
study with the generation of coherent phonons.
|
The sequence of deformation bursts during plastic deformation exhibits
scale-free features. In addition to the burst or avalanche sizes and the rate
of avalanches the process is characterized by correlations in the series which
become manifest in the resulting shape of the stress-strain curve. We analyze
such features of plastic deformation with 2D and 3D simulations of discrete
dislocation dynamics models, and we show that only with severe plastic
deformation do the ensuing memory effects become negligible. The roles of past
deformation history and of dislocation pinning by disorder are studied. In
general, the correlations have the effect of reducing the scatter of the
individual stress-strain curves around the mean one.
|
We introduce the concept of impedance matching to axion dark matter by posing
the question of why axion detection is difficult, even though there is enough
power in each square meter of incident dark-matter flux to energize a LED light
bulb. By quantifying backreaction on the axion field, we show that a small
axion-photon coupling does not by itself prevent an order-unity fraction of the
dark matter from being absorbed through optimal impedance match. We further
show, in contrast, that the electromagnetic charges and the self-impedance of
their coupling to photons provide the principal constraint on power absorption
integrated across a search band. Using the equations of axion electrodynamics,
we demonstrate stringent limitations on absorbed power in linear,
time-invariant, passive receivers. Our results yield fundamental constraints,
arising from the photon-electron interaction, on improving integrated power
absorption beyond the cavity haloscope technique. The analysis also has
significant practical implications, showing apparent tension with the
sensitivity projections for a number of planned axion searches. We additionally
provide a basis for more accurate signal power calculations and calibration
models, especially for receivers using multi-wavelength open configurations
such as dish antennas and dielectric haloscopes.
|
Given a simple connected compact Lie group $K$ and a maximal torus $T$ of
$K$, the Weyl group $W=N_K(T)/T$ naturally acts on $T$. First, we use the
combinatorics of the (extended) affine Weyl group to provide an explicit
$W$-equivariant triangulation of $T$. We describe the associated cellular
homology chain complex and give a formula for the cup product on its dual
cochain complex, making it a $\mathbb{Z}[W]$-dg-algebra. Next, remarking that
the combinatorics of this dg-algebra is still valid for Coxeter groups, we
associate a closed compact manifold $\mathbf{T}(W)$ to any finite irreducible
Coxeter group $W$, which coincides with a torus if $W$ is a Weyl group and is
hyperbolic in other cases. Of course, we focus our study on
non-crystallographic groups, which are $I_2(m)$ with $m=5$ or $m\ge 7$, $H_3$
and $H_4$. The manifold $\mathbf{T}(W)$ comes with a $W$-action and an
equivariant triangulation, whose related $\mathbb{Z}[W]$-dg-algebra is the one
mentioned above. We finish by computing the homology of $\mathbf{T}(W)$, as a
representation of $W$.
|
We present an approach for compressing volumetric scalar fields using
implicit neural representations. Our approach represents a scalar field as a
learned function, wherein a neural network maps a point in the domain to an
output scalar value. By setting the number of weights of the neural network to
be smaller than the input size, we achieve compressed representations of scalar
fields, thus framing compression as a type of function approximation. Combined
with carefully quantizing network weights, we show that this approach yields
highly compact representations that outperform state-of-the-art volume
compression approaches. The conceptual simplicity of our approach enables a
number of benefits, such as support for time-varying scalar fields, optimizing
to preserve spatial gradients, and random-access field evaluation. We study the
impact of network design choices on compression performance, highlighting how
simple network architectures are effective for a broad range of volumes.
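A minimal sketch of such a coordinate network is shown below; the layer sizes and ReLU activation are illustrative assumptions, and the paper's actual architecture and weight-quantization scheme are not reproduced here. Compression comes from keeping the number of weights well below the number of voxels and fitting the network to sampled (coordinate, value) pairs with a mean-squared error.

```python
import torch
import torch.nn as nn

class ScalarFieldNet(nn.Module):
    # Coordinate network mapping a point (x, y, z) to a scalar field value.
    def __init__(self, hidden=64, layers=4):
        super().__init__()
        blocks, d = [], 3
        for _ in range(layers):
            blocks += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        blocks.append(nn.Linear(d, 1))
        self.net = nn.Sequential(*blocks)

    def forward(self, coords):          # coords: (N, 3), e.g. in [-1, 1]^3
        return self.net(coords).squeeze(-1)
```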
|
The proliferation of resourceful mobile devices that store rich,
multidimensional and privacy-sensitive user data motivates the design of
federated learning (FL), a machine-learning (ML) paradigm that enables mobile
devices to produce an ML model without sharing their data. However, the
majority of the existing FL frameworks rely on centralized entities. In this
work, we introduce IPLS, a fully decentralized federated learning framework
that is partially based on the interplanetary file system (IPFS). By using IPLS
and connecting to the corresponding private IPFS network, any party can
initiate the training process of an ML model or join an ongoing training
process that has already been started by another party. IPLS scales with the
number of participants, is robust against intermittent connectivity and dynamic
participant departures/arrivals, requires minimal resources, and guarantees
that the accuracy of the trained model quickly converges to that of a
centralized FL framework with an accuracy drop of less than one per thousand.
|
In this paper, we introduce a new concept: the Lions tree. These objects
arise in Taylor expansions involving the Lions derivative and prove invaluable
in classifying the dynamics of mean-field stochastic differential equations.
We discuss Lions trees, derive an algebra spanned by Lions trees and explore
how couplings between Lions trees lead to a coupled Hopf algebra. Using this
framework, we construct a new way to characterise rough signals driving
mean-field equations: the probabilistic rough path. A comprehensive
generalisation of the ideas first introduced in \cite{2019arXiv180205882.2B},
this framework promises powerful insights into how interactions with a collective
determine the dynamics of an individual within this collective.
|
We consider a Bayesian framework based on "probability of decision" for
dose-finding trial designs. The proposed PoD-BIN design evaluates the posterior
predictive probabilities of up-and-down decisions. In PoD-BIN, multiple grades
of toxicity, categorized as mild toxicity (MT) and dose-limiting toxicity
(DLT), are modeled simultaneously, and the primary outcome of interest is
time-to-toxicity for both MT and DLT. This allows the possibility of enrolling
new patients when previously enrolled patients are still being followed for
toxicity, thus potentially shortening trial length. The Bayesian decision rules
in PoD-BIN utilize the probability of decisions to balance the need to speed up
the trial and the risk of exposing patients to overly toxic doses. We
demonstrate via numerical examples the resulting balance of speed and safety of
PoD-BIN and compare it to existing designs.
|
Recently, Blockchain technology adoption has expanded to many application
areas due to the evolution of smart contracts. However, developing smart
contracts is non-trivial and challenging due to the lack of tools and expertise
in this field. A promising solution to overcome this issue is to use
Model-Driven Engineering (MDE); however, using models still involves a learning
curve and might not be suitable for non-technical users. To tackle this
challenge, chatbot or conversational interfaces can be used to assist
non-technical users in specifying a smart contract in a gradual and interactive
manner.
In this paper, we propose iContractBot, a chatbot for modeling and developing
smart contracts. Moreover, we investigate how to integrate iContractBot with
iContractML, a domain-specific modeling language for developing smart
contracts, and instantiate intention models from the chatbot. The iContractBot
framework provides a domain-specific language (DSL) based on the user intention
and performs model-to-text transformation to generate the smart contract code.
A smart contract use case is presented to demonstrate how iContractBot can be
utilized for creating models and generating the deployment artifacts for smart
contracts based on a simple conversation.
|
The physical mechanism on meridians (acupuncture lines) is studied and a
theoretical model is proposed. The meridians are explained as an alternative
system responsible for the integration and regulation of life, in addition
to neuro-humoral regulation. We propose that meridian conduction is a kind of
low-frequency mechanical wave (soliton) propagating along the slits of muscles.
The anatomical-physiological and experimental evidence is reviewed. It is
demonstrated that the stabilization of the soliton is guaranteed by the
coupling between muscle vibration and cell activation. Therefore, the
propagation of the mechanical wave dominates the excitation of cell groups along
the meridian. The meridian wave equations and their solutions are deduced, and
how these results can be used in studying human health is briefly discussed.
|
We investigate the dynamics brought on by an impulse perturbation in two
infinite-range quantum Ising models coupled to each other and to a dissipative
bath. We show that, if dissipation is faster at higher excitation energies,
the pulse perturbation cools down the low-energy sector of the system, at the
expense of the high-energy one, eventually stabilising a transient
symmetry-broken state at temperatures higher than the equilibrium critical one.
Such a non-thermal quasi-steady state may survive for quite a long time after the
pulse, if the latter is properly tailored.
|
Consistent alpha generation, i.e., maintaining an edge over the market,
underpins the ability of asset traders to reliably generate profits. Technical
indicators and trading strategies are commonly used tools to determine when to
buy/hold/sell assets, yet these are limited by the fact that they operate on
known values. Over the past decades, multiple studies have investigated the
potential of artificial intelligence in stock trading in conventional markets,
with some success. In this paper, we present RCURRENCY, an RNN-based trading
engine to predict data in the highly volatile digital asset market which is
able to successfully manage an asset portfolio in a live environment. By
combining asset value prediction and conventional trading tools, RCURRENCY
determines whether to buy, hold or sell digital currencies at a given point in
time. Experimental results show that, given the data of an interval $t$, a
prediction with an error of less than 0.5\% of the data at the subsequent
interval $t+1$ can be obtained. Evaluation of the system through backtesting
shows that RCURRENCY can be used not only to maintain a stable portfolio of
digital assets in a simulated live environment using real historical trading
data, but even to increase the portfolio value over time.
|
The formation of Uranus' regular moons has been suggested to be linked to the
origin of its enormous spin axial tilt (~98^o). A giant impact between
proto-Uranus and a 2-3 M_Earth impactor could lead to a large tilt and to the
formation of an impact generated disc, where prograde and circular satellites
are accreted. The most intriguing feature of the current regular Uranian
satellite system is that it possesses a positive trend in the mass-distance
distribution and likely also in the bulk density, implying that viscous
spreading of the disc after the giant impact plays a crucial role in shaping
the architecture of the final system. In this paper, we investigate the
formation of Uranus' satellites by combining results of SPH simulations for the
giant impact, a 1D semi-analytic disc model for viscous spreading of the
post-impact disc, and N-body simulations for the assembly of satellites from a
disc of moonlets. Assuming the condensed rock (i.e., silicate) remains small
and available to stick onto the relatively rapidly growing condensed water-ice,
we find that the best case in reproducing the observed mass and bulk
composition of Uranus' satellite system is a pure-rocky impactor with 3 M_Earth
colliding with the young Uranus with an impact parameter b = 0.75. Such an
oblique collision could also naturally explain Uranus' large tilt and possibly,
its low internal heat flux. The giant impact scenario can naturally explain the
key features of Uranus and its regular moons. We therefore suggest that the
Uranian satellite system formed as a result of an impact rather than from a
circumplanetary disc.
|
We investigate a set of techniques for RNN Transducers (RNN-Ts) that were
instrumental in lowering the word error rate on three different tasks
(Switchboard 300 hours, conversational Spanish 780 hours and conversational
Italian 900 hours). The techniques pertain to architectural changes, speaker
adaptation, language model fusion, model combination and general training
recipe. First, we introduce a novel multiplicative integration of the encoder
and prediction network vectors in the joint network (as opposed to additive).
Second, we discuss the applicability of i-vector speaker adaptation to RNN-Ts
in conjunction with data perturbation. Third, we explore the effectiveness of
the recently proposed density ratio language model fusion for these tasks. Last
but not least, we describe the other components of our training recipe and
their effect on recognition performance. We report a 5.9% and 12.5% word error
rate on the Switchboard and CallHome test sets of the NIST Hub5 2000 evaluation
and a 12.7% WER on the Mozilla CommonVoice Italian test set.
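The snippet below sketches the multiplicative integration idea in the joint network, with the standard additive joiner noted in a comment; the projection dimensions and the tanh nonlinearity are illustrative assumptions rather than the exact recipe used in the experiments.

```python
import torch
import torch.nn as nn

class MultiplicativeJoint(nn.Module):
    # RNN-T joint network combining encoder and prediction-network vectors.
    def __init__(self, enc_dim, pred_dim, joint_dim, vocab_size):
        super().__init__()
        self.enc_proj = nn.Linear(enc_dim, joint_dim)
        self.pred_proj = nn.Linear(pred_dim, joint_dim)
        self.out = nn.Linear(joint_dim, vocab_size)

    def forward(self, enc, pred):
        # enc: (B, T, enc_dim), pred: (B, U, pred_dim)
        e = self.enc_proj(enc).unsqueeze(2)    # (B, T, 1, J)
        p = self.pred_proj(pred).unsqueeze(1)  # (B, 1, U, J)
        # Element-wise product instead of the usual additive combination e + p.
        return self.out(torch.tanh(e * p))     # (B, T, U, vocab)
```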
|
The autoregressive (AR) models, such as attention-based encoder-decoder
models and RNN-Transducer, have achieved great success in speech recognition.
They predict the output sequence conditioned on the previous tokens and
acoustic encoded states, which is inefficient on GPUs. The non-autoregressive
(NAR) models can get rid of the temporal dependency between the output tokens
and predict the entire output tokens in at least one step. However, the NAR
model still faces two major problems. On the one hand, there is still a great
gap in performance between the NAR models and the advanced AR models. On the
other hand, it is difficult for most of the NAR models to train and converge. To
address these two problems, we propose a new model named the two-step
non-autoregressive transformer (TSNAT), which improves the performance and
accelerates the convergence of the NAR model by learning prior knowledge from
a parameters-sharing AR model. Furthermore, we introduce the two-stage method
into the inference process, which improves the model performance greatly. All
the experiments are conducted on the public Chinese Mandarin dataset AISHELL-1.
The results show that the TSNAT can achieve a competitive performance with the
AR model and outperform many complicated NAR models.
|
Mathematical models are formal and simplified representations of the
knowledge related to a phenomenon. In classical epidemic models, a neglected
aspect is the heterogeneity of disease transmission and progression linked to
the viral load of each infectious individual. Here, we attempt to investigate
the interplay between the evolution of individuals' viral load and the epidemic
dynamics from a theoretical point of view. In the framework of multi-agent
systems, we propose a particle stochastic model describing the infection
transmission through interactions among agents and the individual physiological
course of the disease. Agents have a double microscopic state: a discrete
label, that denotes the epidemiological compartment to which they belong and
switches in consequence of a Markovian process, and a microscopic trait,
representing a normalized measure of their viral load, that changes in
consequence of binary interactions or interactions with a background.
Specifically, we consider Susceptible--Infected--Removed--like dynamics where
infectious individuals may be isolated from the general population and the
isolation rate may depend on the viral load sensitivity and frequency of tests.
We derive kinetic evolution equations for the distribution functions of the
viral load of the individuals in each compartment, whence, via suitable
upscaling procedures, we obtain a macroscopic model for the densities and viral
load momentum. We then perform a qualitative analysis of the ensuing
macroscopic model, and we present numerical tests in the case of both constant
and viral load-dependent isolation control. Also, the matching between the
aggregate trends obtained from the macroscopic descriptions and the original
particle dynamics simulated by a Monte Carlo approach is investigated.
|
This is a summary of research done in the author's second and third years of
undergraduate mathematics at The University of Toronto. As the previous details
were largely scattered and disorganized, the author decided to rewrite the
cumulative research. The goal of this paper is to construct a family of
analytic functions $\alpha \uparrow^n z : (1,e^{1/e}) \times \mathbb{C}_{\Re(z)
> 0} \to \mathbb{C}_{\Re(z) > 0}$ using methods from fractional calculus. This
family satisfies the hyper-operator chain, $\alpha \uparrow^{n-1} \alpha
\uparrow^n z = \alpha \uparrow^n (z+1)$; with the initial condition $\alpha
\uparrow^0 z = \alpha \cdot z$.
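For integer arguments the stated chain and initial condition reduce to the familiar hyper-operator recursion, which the short sketch below implements; the analytic continuation to complex $z$ via fractional calculus, which is the actual subject of the paper, is not attempted here.

```python
def hyper(alpha, n, z):
    # Integer-argument illustration of alpha ^(n-1) (alpha ^n z) = alpha ^n (z+1)
    # with alpha ^0 z = alpha * z and the conventional base case alpha ^n 1 = alpha.
    if n == 0:
        return alpha * z
    if z == 1:
        return alpha
    return hyper(alpha, n - 1, hyper(alpha, n, z - 1))

assert hyper(2, 1, 3) == 8    # exponentiation: 2**3
assert hyper(2, 2, 3) == 16   # tetration: 2**(2**2)
```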
|
Fiber-reinforced ceramic-matrix composites are advanced materials resistant
to high temperatures, with application to aerospace engineering. Their analysis
depends on the detection of embedded fibers, with semi-supervised techniques
usually employed to separate fibers within the fiber beds. Here we present an
open computational pipeline to detect fibers in ex-situ X-ray computed
tomography fiber beds. To separate the fibers in these samples, we tested four
different architectures of fully convolutional neural networks. When comparing
our neural network approach to a semi-supervised one, we obtained Dice and
Matthews coefficients greater than $92.28 \pm 9.65\%$, reaching up to $98.42
\pm 0.03 \%$, showing that the network results are close to the
human-supervised ones in these fiber beds, in some cases separating fibers that
human-curated algorithms could not find. The software we generated in this
project is open source, released under a permissive license, and can be freely
adapted and re-used in other domains. All data and instructions on how to
download and use it are also available.
|
We have designed a two-stage, 10-step process to give organisations a method
to analyse small local energy systems (SLES) projects based on their Cyber
Physical System components in order to develop future-proof energy systems.
SLES are often developed for a specific range of use cases and functions, and
these match the specific requirements and needs of the community, location or
site under consideration. During the design and commissioning, new and specific
cyber physical architectures are developed. These are the control and data
systems that are needed to bridge the gap between the physical assets, the
captured data and the control signals. Often, the cyber physical architecture
and infrastructure is focused on functionality and the delivery of the specific
applications.
But we find that technologies and approaches have arisen from other fields
that, if used within SLES, could support the flexibility, scalability and
reusability vital to their success. As these can improve the operational data
systems, they can also be used to enhance predictive functions. If used and
deployed effectively, these new approaches can offer longer-term improvements
in the use and effectiveness of SLES, while allowing the concepts and designs
to be capitalised upon through wider roll-out and the offering of commercial
services or products.
|
Mukai varieties are Fano varieties of Picard number one and coindex three. In
genus seven to ten they are linear sections of some special homogeneous
varieties. We describe the generic automorphism groups of these varieties. When
they are expected to be trivial for dimensional reasons, we show they are
indeed trivial, up to three interesting and unexpected exceptions in genera 7,
8, 9, and codimension 4, 3, 2 respectively. We conclude in particular that a
generic prime Fano threefold of genus g has no automorphisms for 7 $\le$ g
$\le$ 10. In the Appendix by Y. Prokhorov, the latter statement is extended to
g = 12.
|
Over a decade ago De Loera, Haws and K\"oppe conjectured that Ehrhart
polynomials of matroid polytopes have only positive coefficients and that the
coefficients of the corresponding $h^*$-polynomials form a unimodal sequence.
The first of these intensively studied conjectures has recently been disproved
by the first author, who gave counterexamples in all ranks greater than or equal to three. In this article we complete the picture by showing that Ehrhart polynomials of matroids of lower rank indeed have only positive coefficients.
Moreover, we show that they are coefficient-wise bounded by the Ehrhart
polynomials of minimal and uniform matroids. We furthermore address the second
conjecture by proving that $h^*$-polynomials of matroid polytopes of sparse
paving matroids of rank two are real-rooted and therefore have log-concave and
unimodal coefficients.
|
We present GrammarTagger, an open-source grammar profiler which, given an
input text, identifies grammatical features useful for language education. The
model architecture enables it to learn from a small number of texts annotated
with spans and their labels, which 1) enables easier and more intuitive
annotation, 2) supports overlapping spans, and 3) is less prone to error
propagation, compared to complex hand-crafted rules defined on
constituency/dependency parses. We show that we can bootstrap a grammar
profiler model with $F_1 \approx 0.6$ from only a couple hundred sentences both
in English and Chinese, which can be further boosted via learning a
multilingual model. With GrammarTagger, we also build Octanove Learn, a search
engine of language learning materials indexed by their reading difficulty and
grammatical features. The code and pretrained models are publicly available at
\url{https://github.com/octanove/grammartagger}.
|
Recently, research on mental health conditions using public online data,
including Reddit, has surged in NLP and health research but has not reported user characteristics, which are important for judging the generalisability of
findings. This paper shows how existing NLP methods can yield information on
clinical, demographic, and identity characteristics of almost 20K Reddit users
who self-report a bipolar disorder diagnosis. This population consists of slightly more feminine- than masculine-gendered, mainly young or middle-aged, US-based adults who often report additional mental health diagnoses; these characteristics are compared with general Reddit statistics and epidemiological studies.
Additionally, this paper carefully evaluates all methods and discusses ethical
issues.
|
In the present article, we study the Hawking effect and the bounds on
greybody factor in a spacetime with radial deformation. This deformation is
expected to carry the imprint of a non-Einsteinian theory of gravity, but
shares some of the important characteristics of general relativity (GR). In
particular, this radial deformation will restore the asymptotic behavior, and
also allows for the separation of the scalar field equation in terms of the
angular and radial coordinates -- making it suitable to study the Hawking
effect and greybody factors. However, the radial deformation introduces a change in the location of the horizon, and therefore the temperature of the Hawking effect naturally changes. In fact, we observe that the deformation
parameter has an enhancing effect on both temperature and bounds on the
greybody factor, which introduces a useful distinction with the Kerr spacetime.
We discuss these effects in detail and broadly study the thermal behavior of a radially deformed spacetime.
|
This paper demonstrates how spectrum up to 1 THz will support mobile
communications beyond 5G in the coming decades. Results of rooftop surrogate
satellite/tower base station measurements at 140 GHz show the natural isolation
between terrestrial networks and surrogate satellite systems, as well as
between terrestrial mobile users and co-channel fixed backhaul links. These
first-of-their-kind measurements and accompanying analysis show that by keeping
the energy radiated by terrestrial emitters on the horizon (e.g., elevation
angles $\leq$15\textdegree), there will not likely be interference in the same
or adjacent bands between passive satellite sensors and terrestrial terminals,
or between mobile links and terrestrial backhaul links at frequencies above 100
GHz.
|
In this paper we discuss applications of the theory developed in [21] and
[22] in studying certain Galois groups and splitting fields of rational
functions in $\mathbb Q\left(X_0(N)\right)$ using Hilbert's irreducibility
theorem and modular forms. We also consider computational aspects of the problem using MAGMA and SAGE.
|
We provide the first construction of stationary measures for the open KPZ
equation on the spatial interval $[0,1]$ with general inhomogeneous Neumann
boundary conditions at $0$ and $1$ depending on real parameters $u$ and $v$,
respectively. When $u+v\geq 0$ we uniquely characterize the constructed
stationary measures through their multipoint Laplace transform which we prove
is given in terms of a stochastic process that we call the continuous dual Hahn
process.
|
Generative models are now capable of producing highly realistic images that
look nearly indistinguishable from the data on which they are trained. This
raises the question: if we have good enough generative models, do we still need
datasets? We investigate this question in the setting of learning
general-purpose visual representations from a black-box generative model rather
than directly from data. Given an off-the-shelf image generator without any
access to its training data, we train representations from the samples output
by this generator. We compare several representation learning methods that can
be applied to this setting, using the latent space of the generator to generate
multiple "views" of the same semantic content. We show that for contrastive
methods, this multiview data can naturally be used to identify positive pairs
(nearby in latent space) and negative pairs (far apart in latent space). We
find that the resulting representations rival those learned directly from real
data, but that good performance requires care in the sampling strategy applied
and the training method. Generative models can be viewed as a compressed and
organized copy of a dataset, and we envision a future where more and more
"model zoos" proliferate while datasets become increasingly unwieldy, missing,
or private. This paper suggests several techniques for dealing with visual
representation learning in such a future. Code is released on our project page:
https://ali-design.github.io/GenRep/
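A minimal sketch of the idea described above: positive pairs are formed by decoding nearby points in the generator's latent space and training an encoder with a contrastive (InfoNCE-style) objective. The `generator`, `encoder`, batch size, noise scale, and temperature below are hypothetical placeholders, not the released implementation.

```python
import torch
import torch.nn.functional as F

def latent_views(generator, z, sigma=0.5):
    """Two 'views' of the same semantic content: images decoded from a latent z
    and from a nearby latent z + noise (the positive pair)."""
    z_near = z + sigma * torch.randn_like(z)
    return generator(z), generator(z_near)

def info_nce(h1, h2, temperature=0.1):
    """InfoNCE loss: matching views are positives; all other pairs in the
    batch act as negatives (far apart in latent space)."""
    h1, h2 = F.normalize(h1, dim=1), F.normalize(h2, dim=1)
    logits = h1 @ h2.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(h1.size(0), device=h1.device)
    return F.cross_entropy(logits, labels)

# One hypothetical training step using samples from a black-box generator:
# z = torch.randn(batch_size, latent_dim)
# x1, x2 = latent_views(generator, z)
# loss = info_nce(encoder(x1), encoder(x2))
# loss.backward(); optimizer.step()
```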
|
Generative Adversarial Networks (GANs) currently achieve the state-of-the-art
sound synthesis quality for pitched musical instruments using a 2-channel
spectrogram representation consisting of log magnitude and instantaneous
frequency (the "IFSpectrogram"). Many other synthesis systems use representations derived from the magnitude spectra and then depend on a backend component to invert the output magnitude spectrograms, a step that generally results in audible artefacts associated with the inversion process. However, for
signals that have closely-spaced frequency components such as non-pitched and
other noisy sounds, training the GAN on the 2-channel IFSpectrogram
representation offers no advantage over the magnitude spectra based
representations. In this paper, we propose that training GANs on single-channel
magnitude spectra, and using the Phase Gradient Heap Integration (PGHI)
inversion algorithm is a better comprehensive approach for audio synthesis
modeling of diverse signals that include pitched, non-pitched, and dynamically
complex sounds. We show that this method produces higher-quality output for
wideband and noisy sounds, such as pops and chirps, compared to using the
IFSpectrogram. Furthermore, the sound quality for pitched sounds is comparable
to using the IFSpectrogram, even while using a simpler representation with half
the memory requirements.
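As a rough illustration of the single-channel magnitude representation discussed above, the sketch below computes a magnitude spectrogram and inverts it back to audio. PGHI itself is not bundled with librosa, so Griffin-Lim is used here purely as a stand-in phase-recovery step to keep the example runnable; the example clip, FFT size, and hop length are arbitrary choices, not the paper's settings.

```python
import numpy as np
import librosa

# Load a waveform and compute the single-channel magnitude spectrogram that
# the proposed approach models (the GAN itself is omitted here).
y, sr = librosa.load(librosa.ex('trumpet'), sr=16000)
mag = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))

# Invert the magnitude spectrogram back to audio. Griffin-Lim is only a
# stand-in for the PGHI inversion used in the paper.
y_hat = librosa.griffinlim(mag, n_iter=60, hop_length=256)
```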
|
While the anomalous Hall effect can manifest even without an external
magnetic field, time reversal symmetry is nonetheless still broken by the
internal magnetization of the sample. Recently, it has been shown that certain
materials without an inversion center allow for a nonlinear type of anomalous
Hall effect whilst retaining time reversal symmetry. The effect may arise from
either Berry curvature or through various asymmetric scattering mechanisms.
Here, we report the observation of an extremely large $c$-axis nonlinear
anomalous Hall effect in the non-centrosymmetric T$_d$ phase of MoTe$_2$ and
WTe$_2$ without intrinsic magnetic order. We find that the effect is dominated
by skew-scattering at higher temperatures combined with another scattering
process active at low temperatures. Application of higher bias yields an
extremely large Hall ratio of $E_\perp /E_\parallel$=2.47 and corresponding
anomalous Hall conductivity of order 8x10$^7$S/m.
|
Rationalizing which parts of a molecule drive the predictions of a molecular
graph convolutional neural network (GCNN) can be difficult. To help, we propose
two simple regularization techniques to apply during the training of GCNNs:
Batch Representation Orthonormalization (BRO) and Gini regularization. BRO,
inspired by molecular orbital theory, encourages graph convolution operations
to generate orthonormal node embeddings. Gini regularization is applied to the
weights of the output layer and constrains the number of dimensions the model
can use to make predictions. We show that Gini and BRO regularization can
improve the accuracy of state-of-the-art GCNN attribution methods on artificial
benchmark datasets. In a real-world setting, we demonstrate that medicinal
chemists significantly prefer explanations extracted from regularized models.
While we only study these regularizers in the context of GCNNs, both can be applied to other types of neural networks.
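A minimal sketch of the two penalties as described above: BRO pushes a batch of node embeddings toward orthonormality, and Gini regularization pushes the output-layer weights to concentrate on few dimensions. The exact formulations and weighting in the paper may differ; the tensor shapes and coefficients below are assumptions for illustration.

```python
import torch

def bro_penalty(node_embeddings: torch.Tensor) -> torch.Tensor:
    """Batch Representation Orthonormalization: penalize deviation of the
    Gram matrix of node embeddings (N x d) from the identity."""
    gram = node_embeddings.t() @ node_embeddings
    eye = torch.eye(gram.size(0), device=gram.device)
    return ((gram - eye) ** 2).sum()

def gini_penalty(output_weights: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Gini regularization: encourage the output layer to concentrate its
    weight mass on few input dimensions (high Gini = concentrated usage,
    so we penalize 1 - Gini)."""
    w = output_weights.abs().sum(dim=0)          # weight mass per input dimension
    w_sorted, _ = torch.sort(w)
    n = w_sorted.numel()
    idx = torch.arange(1, n + 1, device=w.device, dtype=w.dtype)
    gini = (2 * idx - n - 1).mul(w_sorted).sum() / (n * w_sorted.sum() + eps)
    return 1.0 - gini

# loss = task_loss + lambda_bro * bro_penalty(h) + lambda_gini * gini_penalty(W_out)
```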
|
Most online multi-object trackers perform object detection stand-alone in a
neural net without any input from tracking. In this paper, we present a new
online joint detection and tracking model, TraDeS (TRAck to DEtect and
Segment), exploiting tracking clues to assist detection end-to-end. TraDeS infers object tracking offsets from a cost volume, which are used to propagate previous object features to improve current object detection and
segmentation. Effectiveness and superiority of TraDeS are shown on 4 datasets,
including MOT (2D tracking), nuScenes (3D tracking), MOTS and Youtube-VIS
(instance segmentation tracking). Project page:
https://jialianwu.com/projects/TraDeS.html.
|
For partial, nondeterministic, finite state machines, a new conformance
relation called strong reduction is presented. It complements other existing
conformance relations in the sense that the new relation is well-suited for
model-based testing of systems whose inputs are enabled or disabled, depending
on the actual system state. Examples of such systems are graphical user
interfaces and systems with interfaces that can be enabled or disabled in a
mechanical way. We present a new test generation algorithm producing complete
test suites for strong reduction. The suites are executed according to the
grey-box testing paradigm: it is assumed that the state-dependent sets of
enabled inputs can be identified during test execution, while the
implementation states remain hidden, as in black-box testing. It is shown that
this grey-box information is exploited by the generation algorithm in such a
way that the resulting best-case test suite size is only linear in the state
space size of the reference model. Moreover, examples show that this may lead
to significant reductions of test suite size in comparison to true black-box
testing for strong reduction.
|
Residual coherence is a graphical tool for selecting potential second-order
interaction terms as functions of a single time series and its lags. This paper
extends the notion of residual coherence to account for interaction terms of
multiple time series. Moreover, an alternative criterion, integrated spectrum,
is proposed to facilitate this graphical selection.
A financial market application shows that new insights can be gained
regarding implied market volatility.
|
V838 Mon erupted in 2002, quickly becoming the prototype of a new type of stellar eruption known today as (luminous) red novae. The red nova outbursts
are thought to be caused by stellar mergers. The merger in V838 Mon took place
in a triple or higher system involving two B-type stars. We mapped the merger
site with ALMA at a resolution of 25 mas in continuum dust emission and in
rotational lines of simple molecules, including CO, SiO, SO, SO$_2$, AlOH, and
H$_2$S. We use radiative transfer calculations to reproduce the remnant's
architecture at the epoch of the ALMA observations. For the first time, we
identify the position of the B-type companion relative to the outbursting
component of V838 Mon. The stellar remnant is surrounded by a clumpy wind with
characteristics similar to winds of red supergiants. The merger product is also
associated with an elongated structure, $17.6 \times 7.6$ mas, seen in continuum emission, which we interpret as a disk seen at a moderate inclination. Maps of continuum and molecular emission also show a complex region of interaction between the B-type star (its gravity, radiation, and
wind) and the flow of matter ejected in 2002. The remnant's molecular mass is
about 0.1 M$_{\odot}$ and the dust mass is 8.3$\cdot$10$^{-3}$ M$_{\odot}$. The
mass of the atomic component remains unconstrained. The most interesting region for understanding the merger of V838 Mon remains unresolved but appears elongated; studying it in more detail will require even higher angular resolution. The ALMA maps show an extreme form of interaction between the merger ejecta and a distant (250 au) companion. This interaction is similar to that known from the Antares AB system but at a much higher mass-loss rate. The B-type star not only deflects the merger ejecta but also changes its chemical composition through the action of circumstellar shocks.
|
Unmanned aerial vehicles (UAVs) are expected to be an integral part of
wireless networks. In this paper, we aim to find collision-free paths for
multiple cellular-connected UAVs, while satisfying requirements of connectivity
with ground base stations (GBSs) in the presence of a dynamic jammer. We first
formulate the problem as a sequential decision making problem in discrete
domain, with connectivity, collision avoidance, and kinematic constraints. We,
then, propose an offline temporal difference (TD) learning algorithm with
online signal-to-interference-plus-noise ratio (SINR) mapping to solve the
problem. More specifically, a value network is constructed and trained offline
by TD method to encode the interactions among the UAVs and between the UAVs and
the environment; and an online SINR mapping deep neural network (DNN) is
designed and trained by supervised learning, to encode the influence and
changes due to the jammer. Numerical results show that, without any information
on the jammer, the proposed algorithm can achieve performance levels close to
that of the ideal scenario with the perfect SINR-map. Real-time navigation for
multi-UAVs can be efficiently performed with high success rates, and collisions
are avoided.
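A minimal sketch of the offline temporal-difference learning step that underlies the value network described above. The network size, state encoding, reward, and discount factor are placeholders; the actual UAV/GBS state representation and training loop are not reproduced here.

```python
import torch
import torch.nn as nn

# Hypothetical value network V(s); the 16-dimensional state encoding is a placeholder.
value_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(value_net.parameters(), lr=1e-3)
gamma = 0.99

def td0_update(state, reward, next_state, done):
    """One temporal-difference (TD(0)) update on a logged transition batch."""
    v = value_net(state)
    with torch.no_grad():
        target = reward + gamma * (1.0 - done) * value_net(next_state)
    loss = ((target - v) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```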
|
A multi-objective optimization problem is $C^r$ weakly simplicial if there
exists a $C^r$ surjection from a simplex onto the Pareto set/front such that
the image of each subsimplex is the Pareto set/front of a subproblem, where
$0\leq r\leq \infty$. This property is helpful for computing a parametric-surface
approximation of the entire Pareto set and Pareto front. It is known that all
unconstrained strongly convex $C^r$ problems are $C^{r-1}$ weakly simplicial
for $1\leq r \leq \infty$. In this paper, we show that all unconstrained
strongly convex problems are $C^0$ weakly simplicial. The usefulness of this
theorem is demonstrated in a sparse modeling application: we reformulate the
elastic net as a non-differentiable multi-objective strongly convex problem and
approximate its Pareto set (the set of all trained models with different
hyper-parameters) and Pareto front (the set of performance metrics of the
trained models) by using a B\'ezier simplex fitting method, which accelerates
hyper-parameter search.
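A small sketch of the hyper-parameter sweep implied above: each trained elastic-net model gives one point whose objective vector (squared error, $\ell_1$ norm, squared $\ell_2$ norm) lies near the Pareto front of the three-objective reformulation. The synthetic data, the grid, and the use of scikit-learn are illustrative assumptions; the Bézier simplex fitting step itself is not reproduced.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=100)

# Sweep the elastic-net hyper-parameters and record each model's objective vector.
points = []
for alpha in np.logspace(-3, 1, 20):
    for l1_ratio in np.linspace(0.05, 0.95, 10):
        w = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=10000).fit(X, y).coef_
        f1 = float(np.sum((X @ w - y) ** 2))   # data-fit objective
        f2 = float(np.sum(np.abs(w)))          # l1 objective
        f3 = float(np.sum(w ** 2))             # squared l2 objective
        points.append((f1, f2, f3))
points = np.array(points)  # input for a Bezier-simplex (or other) surface fit
```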
|
Three $q$-versions of Lommel polynomials are studied. Included are explicit
representations, recurrences, continued fractions, and connections to
associated Askey--Wilson polynomials. Combinatorial results are emphasized,
including a general theorem when $R_I$ moments coincide with orthogonal
polynomial moments. The combinatorial results use weighted Motzkin paths,
Schr\"oder paths, and parallelogram polyominoes.
|
Unsupervised time series clustering is a challenging problem with diverse
industrial applications such as anomaly detection, bio-wearables, etc. These
applications typically involve small, low-power devices on the edge that
collect and process real-time sensory signals. State-of-the-art time-series
clustering methods perform some form of loss minimization that is extremely
computationally intensive from the perspective of edge devices. In this work,
we propose a neuromorphic approach to unsupervised time series clustering based
on Temporal Neural Networks that is capable of ultra low-power, continuous
online learning. We demonstrate its clustering performance on a subset of UCR
Time Series Archive datasets. Our results show that the proposed approach
either outperforms or performs similarly to most of the existing algorithms
while being far more amenable for efficient hardware implementation. Our
hardware assessment shows that in 7 nm CMOS the proposed architecture, on average, occupies only about 0.005 mm^2 of die area, consumes 22 uW of power, and can process each signal with about 5 ns latency.
|
Various methods for solving the inverse reinforcement learning (IRL) problem
have been developed independently in machine learning and economics. In
particular, the method of Maximum Causal Entropy IRL is based on the
perspective of entropy maximization, while related advances in the field of
economics instead assume the existence of unobserved action shocks to explain
expert behavior (Nested Fixed Point Algorithm, Conditional Choice Probability
method, Nested Pseudo-Likelihood Algorithm). In this work, we make previously
unknown connections between these related methods from both fields. We achieve
this by showing that they all belong to a class of optimization problems,
characterized by a common form of the objective, the associated policy and the
objective gradient. We demonstrate key computational and algorithmic
differences which arise between the methods due to an approximation of the
optimal soft value function, and describe how this leads to more efficient
algorithms. Using insights which emerge from our study of this class of
optimization problems, we identify various problem scenarios and investigate
each method's suitability for these problems.
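As a point of reference for the "optimal soft value function" mentioned above, the sketch below runs soft (log-sum-exp) Bellman backups on a toy tabular MDP, which is the backbone shared by Maximum Causal Entropy IRL and its relatives. The MDP sizes, rewards, and transitions are synthetic placeholders.

```python
import numpy as np

def soft_value_iteration(reward, transition, gamma=0.95, n_iter=200):
    """Soft Bellman backups: V(s) = logsumexp_a [ r(s,a) + gamma * E[V(s')] ].
    reward: (S, A), transition: (S, A, S)."""
    S, A = reward.shape
    V = np.zeros(S)
    for _ in range(n_iter):
        Q = reward + gamma * transition @ V              # (S, A)
        Q_max = Q.max(axis=1, keepdims=True)
        V = np.log(np.exp(Q - Q_max).sum(axis=1)) + Q_max[:, 0]   # stable logsumexp
    policy = np.exp(Q - V[:, None])                      # softmax (MaxEnt) policy
    return V, policy

# Toy usage on a random 5-state, 3-action MDP.
rng = np.random.default_rng(0)
P = rng.random((5, 3, 5)); P /= P.sum(axis=2, keepdims=True)
V, pi = soft_value_iteration(rng.random((5, 3)), P)
```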
|
The thermodynamic uncertainty relation originally proven for systems driven
into a non-equilibrium steady state (NESS) allows one to infer the total
entropy production rate by observing any current in the system. This kind of
inference scheme is especially useful when the system contains hidden degrees
of freedom or hidden discrete states, which are not accessible to the
experimentalist. A recent generalization of the thermodynamic uncertainty
relation to arbitrary time-dependent driving allows one to infer entropy
production not only by measuring current-observables but also by observing
state variables. A crucial question then is to understand which observable
yields the best estimate for the total entropy production. In this paper we
address this question by analyzing the quality of the thermodynamic uncertainty
relation for various types of observables for the generic limiting cases of
fast driving and slow driving. We show that in both cases observables can be
found that yield an estimate of order one for the total entropy production. We
further show that the uncertainty relation can even be saturated in the limit
of fast driving.
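For reference, the standard steady-state form of the thermodynamic uncertainty relation that this work builds on can be written as follows, with $J$ any integrated current and $\Sigma$ the total entropy production; this is the textbook NESS statement, not the time-dependent generalization studied above:
$$\frac{\mathrm{Var}(J)}{\langle J \rangle^{2}} \;\geq\; \frac{2 k_{B}}{\Sigma}, \qquad \Sigma_{\mathrm{est}} := \frac{2 k_{B}\langle J \rangle^{2}}{\mathrm{Var}(J)} \;\leq\; \Sigma,$$
so the quality of a given observable can be judged by how close the ratio $\Sigma_{\mathrm{est}}/\Sigma$ comes to one.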
|
The KOALA experiment measures the differential cross section of
(anti)proton-proton elastic scattering over a wide range of four-momentum
transfer squared 0.0008 < |t| < 0.1 (GeV/c)$^2$ . The forward scattering
parameters and the absolute luminosity can be deduced by analyzing the
characteristic shape of the differential cross-section spectrum. The experiment
is based on fixed target kinematics and uses an internal hydrogen cluster jet
target. The wide range of |t| is achieved by measuring the total kinetic energy
of the recoil protons near 90{\deg} with a recoil detector, which consists of
silicon and germanium single-sided strip sensors. The energy resolution of the
recoil detector is better than 30 keV (FWHM). A forward detector consisting of
two layers of plastic scintillators measures the elastically scattered beam
particles in the forward direction close to the beam axis. It helps suppress
the large background at small recoil angles and improves the identification of
elastic scattering events in the low |t| range. The KOALA setup has been
installed and commissioned at COSY in order to validate the detector by
measuring the proton-proton elastic scattering. The results from this
commissioning are presented here.
|
Nowadays, High Energy Physics experiments can accumulate unprecedented statistics of heavy-flavour decays, which allows new methods, based on the study of very rare phenomena, that previously seemed hopeless. In this paper we propose a new method to measure the $K^0$-$\overline{K}^0$ composition produced in decays of heavy hadrons. This composition carries important information, in particular about the weak and strong phases between the amplitudes of the produced $K^0$ and $\overline{K}^0$. We consider the possibility of measuring these parameters with a time-dependent $K^0 \to \pi^+ \pi^-$ analysis. Due to $CP$-violation in kaon mixing, the time-dependent decay rates of $K^0$ and $\overline{K}^0$ differ, and the initial amplitudes are revealed in the $CP$-violating decay pattern. In particular, we consider the charmed hadron decays $D^+ \to K^0 \pi^+$, $D_s^+ \to K^0 K^+$, $\Lambda_c \to p K^0$ and, with some assumptions, $D^0 \to K^0 \pi^0$. This can be used to test the sum rule for charmed mesons and to obtain input for the full constraint of the two-body amplitudes of $D$-mesons.
|
The levitation of a volatile droplet on a highly superheated surface is known
as the Leidenfrost effect. The wetting state during the transition from full wetting of a surface by a droplet at room temperature to Leidenfrost bouncing, i.e., zero wetting at high superheating, is not fully understood. Here, visualizations of the droplet's thermal and wetting footprint in the Leidenfrost
transition state are presented using two optical techniques: mid-infrared
thermography and wetting sensitive total internal reflection imaging under
carefully selected experimental conditions, impact Weber number < 10 and
droplet diameter < capillary length, using an indium-tin-oxide coated sapphire
heater. The experimental regime was designed to create relatively stable
droplet dynamics, where the effects of oscillatory and capillary instabilities
were minimized. The thermography for ethanol droplet in Leidenfrost transition
state (superheat range of 82K-97K) revealed thermal footprint with a central
hot zone surrounded by a cooler periphery, indicative of a partial wetting
state during Leidenfrost transition. High-speed total internal reflection
imaging also confirmed the partial wetting footprint such that there are
wetting areas around a central non-wetting zone. The results presented here, using ethanol as a test fluid, shed light on the geometry and dynamics of a volatile droplet's footprint in the Leidenfrost transition state.
|
We study the null set $N(\mathcal{P})$ of the Fourier-Laplace transform of a
polytope $\mathcal{P} \subset \mathbb{R}^d$, and we find that $N(\mathcal{P})$
does not contain (almost all) circles in $\mathbb{R}^d$. As a consequence, the
null set does not contain the algebraic varieties $\{z \in \mathbb{C}^d \mid
z_1^2 + \dots + z_d^2 = \alpha^2\}$ for each fixed $\alpha \in \mathbb{C}$, and
hence we get an explicit proof that the Pompeiu property is true for all
polytopes. Our proof uses the Brion-Barvinok theorem, which gives a concrete
formulation for the Fourier-Laplace transform of a polytope, and it also uses
properties of Bessel functions. The original proof that polytopes (as well as
other bodies) possess the Pompeiu property was given by Brown, Schreiber, and
Taylor (1973) for dimension 2. Williams (1976) later observed that the same
proof also works for $d>2$ and, using eigenvalues of the Laplacian, gave
another proof valid for $d \geq 2$ that polytopes have the Pompeiu property.
|
Data is now generated at such a high rate that it has become essential to analyze it and obtain results quickly. Most relational databases primarily support SQL querying, with only limited support for complex data analysis. As a result, data scientists have little option but to use a separate system for complex data analysis, which has created a huge demand for data science frameworks. To use such a framework, however, all the data must be loaded into it, requiring significant and potentially expensive data movement across systems.
We believe there is a pressing need for a single system that can perform both data analysis tasks and SQL querying, sparing data scientists the expensive data transfer between systems. In this work, we present DaskDB, a scalable data science system built on Python's Dask framework that supports both data analytics and in situ SQL query processing over heterogeneous data sources. DaskDB supports invoking any Python API as a User-Defined Function (UDF) over SQL queries, so it can easily be integrated with most existing Python data science applications without modifying existing code. Since joining two relations is a vital but expensive operation, a novel distributed learned index is also introduced to improve join performance. Our experimental evaluation demonstrates that DaskDB significantly outperforms existing systems.
|
We give sufficient conditions on the exponent $p: \mathbb R^d\rightarrow
[1,\infty)$ for the boundedness of the non-centered Gaussian maximal function
on variable Lebesgue spaces $L^{p(\cdot)}(\mathbb R^d, \gamma_d)$, as well as
of the new higher order Riesz transforms associated with the Ornstein-Uhlenbeck
semigroup, which are the natural extensions of the supplementary first order
Gaussian Riesz transforms defined by A. Nowak and K. Stempak in
\cite{nowakstempak}.
|
We derive the explicit form of the martingale representation for
square-integrable processes that are martingales with respect to the natural
filtration of the super-Brownian motion. This is done by using a weak extension
of the Dupire derivative for functionals of superprocesses.
|
The polymer model framework is a classical tool from statistical mechanics
that has recently been used to obtain approximation algorithms for spin systems
on classes of bounded-degree graphs; examples include the ferromagnetic Potts
model on expanders and on the grid. One of the key ingredients in the analysis
of polymer models is controlling the growth rate of the number of polymers,
which has been typically achieved so far by invoking the bounded-degree
assumption. Nevertheless, this assumption is often restrictive and obstructs
the applicability of the method to more general graphs. For example, sparse
random graphs typically have bounded average degree and good expansion
properties, but they include vertices with unbounded degree, and therefore are
excluded from the current polymer-model framework.
We develop a less restrictive framework for polymer models that relaxes the
standard bounded-degree assumption, by reworking the relevant polymer models
from the edge perspective. The edge perspective allows us to bound the growth
rate of the number of polymers in terms of the total degree of polymers, which
in turn can be related more easily to the expansion properties of the
underlying graph. To apply our methods, we consider random graphs with
unbounded degrees from a fixed degree sequence (with minimum degree at least 3)
and obtain approximation algorithms for the ferromagnetic Potts model, which is
a standard benchmark for polymer models. Our techniques also extend to more
general spin systems.
|
In this note, we extend the renormalization horseshoe we have recently
constructed with N. Goncharuk for analytic diffeomorphisms of the circle to
their small two-dimensional perturbations. As one consequence, Herman rings
with rotation numbers of bounded type survive on a codimension one set of
parameters under small two-dimensional perturbations.
|
Type Ia supernovae (SNe Ia) span a range of luminosities and timescales, from
rapidly evolving subluminous to slowly evolving overluminous subtypes. Previous
theoretical work has, for the most part, been unable to match the entire
breadth of observed SNe Ia with one progenitor scenario. Here, for the first
time, we apply non-local thermodynamic equilibrium radiative transfer
calculations to a range of accurate explosion models of sub-Chandrasekhar-mass
white dwarf detonations. The resulting photometry and spectra are in excellent
agreement with the range of observed non-peculiar SNe Ia through 15 d after the
time of B-band maximum, yielding one of the first examples of a quantitative
match to the entire Phillips (1993) relation. The intermediate-mass element
velocities inferred from theoretical spectra at maximum light for the more
massive white dwarf explosions are higher than those of bright observed SNe Ia,
but these and other discrepancies likely stem from the one-dimensional nature
of our explosion models and will be improved upon by future non-local
thermodynamic equilibrium radiation transport calculations of multi-dimensional
sub-Chandrasekhar-mass white dwarf detonations.
|
This article explores the territorial differences in the onset and spread of
COVID-19 and the excess mortality associated with the pandemic, across the
European NUTS3 regions and US counties. Both in Europe and in the US, the
pandemic arrived earlier and recorded higher Rt values in urban regions than in
intermediate and rural ones. A similar gap is also found in the data on excess
mortality. In the weeks of the first phase of the pandemic, urban regions in EU countries experienced excess mortality up to 68 percentage points higher than rural ones. We show that, during the initial days of the pandemic, territorial
differences in Rt by the degree of urbanisation can be largely explained by the
level of internal, inbound and outbound mobility. The differences in the spread
of COVID-19 by rural-urban typology and the role of mobility are less clear
during the second wave. This could be linked to the fact that the infection is
widespread across territories, to changes in mobility patterns during the
summer period as well as to the different containment measures which reverse
the causality between mobility and Rt.
|
Neural information retrieval systems typically use a cascading pipeline, in
which a first-stage model retrieves a candidate set of documents and one or
more subsequent stages re-rank this set using contextualized language models
such as BERT. In this paper, we propose DeepImpact, a new document
term-weighting scheme suitable for efficient retrieval using a standard
inverted index. Compared to existing methods, DeepImpact improves impact-score
modeling and tackles the vocabulary-mismatch problem. In particular, DeepImpact
leverages DocT5Query to enrich the document collection and, using a
contextualized language model, directly estimates the semantic importance of
tokens in a document, producing a single-value representation for each token in
each document. Our experiments show that DeepImpact significantly outperforms
prior first-stage retrieval approaches by up to 17% on effectiveness metrics
w.r.t. DocT5Query, and, when deployed in a re-ranking scenario, can reach the same effectiveness as state-of-the-art approaches with up to 5.1x speedup in efficiency.
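A minimal sketch of the retrieval mechanism implied above: learned per-token impact scores are stored in a standard inverted index, and a document's first-stage score is simply the sum of the stored impacts of the query terms it contains. The impact values, document IDs, and helper names are placeholders; the scoring model itself (DocT5Query expansion plus a contextualized language model) is not reproduced.

```python
from collections import defaultdict

# Inverted index mapping term -> list of (doc_id, precomputed impact score).
index = defaultdict(list)

def add_document(doc_id, term_impacts):
    """Store each token's single-value impact score in the inverted index."""
    for term, impact in term_impacts.items():
        index[term].append((doc_id, impact))

def search(query_terms, k=10):
    """First-stage retrieval: score = sum of stored impacts of matched query terms."""
    scores = defaultdict(float)
    for term in query_terms:
        for doc_id, impact in index.get(term, []):
            scores[doc_id] += impact
    return sorted(scores.items(), key=lambda x: -x[1])[:k]

add_document("d1", {"neural": 2.1, "retrieval": 3.4})
add_document("d2", {"inverted": 1.7, "index": 2.8, "retrieval": 1.2})
print(search(["neural", "retrieval"]))
```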
|
Performance metrics are a core component of the evaluation of any machine learning model and are used to compare models and estimate their usefulness. Recent
work started to question the validity of many performance metrics for this
purpose in the context of software defect prediction. Within this study, we
explore the relationship between performance metrics and the cost saving
potential of defect prediction models. We study whether performance metrics are
suitable proxies to evaluate the cost saving capabilities and derive a theory
for the relationship between performance metrics and cost saving potential.
|
Many real-life applications involve simultaneously forecasting multiple time
series that are hierarchically related via aggregation or disaggregation
operations. For instance, commercial organizations often want to forecast
inventories simultaneously at store, city, and state levels for resource
planning purposes. In such applications, it is important that the forecasts, in addition to being reasonably accurate, are also consistent with one another.
Although forecasting such hierarchical time series has been pursued by
economists and data scientists, the current state-of-the-art models use strong
assumptions, e.g., all forecasts being unbiased estimates, noise distribution
being Gaussian. Besides, state-of-the-art models have not harnessed the power
of modern nonlinear models, especially ones based on deep learning. In this
paper, we propose using a flexible nonlinear model that optimizes quantile
regression loss coupled with suitable regularization terms to maintain the
consistency of forecasts across hierarchies. The theoretical framework
introduced herein can be applied to any forecasting model with an underlying
differentiable loss function. A proof of optimality of our proposed method is
also provided. Simulation studies over a range of datasets highlight the
efficacy of our approach.
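A minimal sketch of the kind of objective described above: a pinball (quantile) loss for each series plus a soft consistency penalty tying a parent forecast to the sum of its children's forecasts. The exact regularization terms and weighting used in the paper may differ; the shapes and the weight `lam` below are assumptions.

```python
import torch

def pinball_loss(pred, target, q=0.5):
    """Quantile (pinball) loss for quantile level q."""
    diff = target - pred
    return torch.mean(torch.maximum(q * diff, (q - 1) * diff))

def hierarchy_penalty(parent_pred, child_preds):
    """Soft consistency: the parent forecast should match the sum of its
    children's forecasts (child_preds has shape (n_children, horizon))."""
    return torch.mean((parent_pred - child_preds.sum(dim=0)) ** 2)

# Total loss for one (parent, children) group, with lam a tuning weight:
# loss = pinball_loss(parent_pred, parent_y, q) \
#      + sum(pinball_loss(c_p, c_y, q) for c_p, c_y in zip(child_preds, child_ys)) \
#      + lam * hierarchy_penalty(parent_pred, child_preds)
```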
|
The consequences of the attractive, short-range nucleon-nucleon (NN)
interaction on the wave functions of the Elliott SU(3) and the proxy-SU(3)
symmetry are discussed. The NN interaction favors the most symmetric spatial
SU(3) irreducible representation, which corresponds to the maximal spatial
overlap among the fermions. The percentage of the symmetric components out of
the total in an SU(3) wave function is introduced, through which it is found that no SU(3) irrep is more symmetric than the highest weight irrep for a
certain number of valence particles in a three dimensional, isotropic, harmonic
oscillator shell. The consideration of the highest weight irreps in nuclei and
in alkali metal clusters, leads to the prediction of a prolate to oblate shape
transition beyond the mid-shell region.
|
While deep learning-based 3D face generation has made progress recently,
the problem of dynamic 3D (4D) facial expression synthesis is less
investigated. In this paper, we propose a novel solution to the following
question: given one input 3D neutral face, can we generate dynamic 3D (4D)
facial expressions from it? To tackle this problem, we first propose a mesh
encoder-decoder architecture (Expr-ED) that exploits a set of 3D landmarks to
generate an expressive 3D face from its neutral counterpart. Then, we extend it
to 4D by modeling the temporal dynamics of facial expressions using a
manifold-valued GAN capable of generating a sequence of 3D landmarks from an
expression label (Motion3DGAN). The generated landmarks are fed into the mesh
encoder-decoder, ultimately producing a sequence of 3D expressive faces. By
decoupling the two steps, we separately address the non-linearity induced by
the mesh deformation and motion dynamics. The experimental results on the CoMA
dataset show that our mesh encoder-decoder guided by landmarks brings a
significant improvement with respect to other landmark-based 3D fitting
approaches, and that we can generate high quality dynamic facial expressions.
This framework further enables the 3D expression intensity to be continuously
adapted from low to high intensity. Finally, we show our framework can be
applied to other tasks, such as 2D-3D facial expression transfer.
|
We propose a method to exploit high finesse optical resonators for light
assisted coherent manipulation of atomic ensembles, overcoming the limit
imposed by the finite response time of the cavity. The key element of our
scheme is to rapidly switch the interaction between the atoms and the cavity
field with an auxiliary control process as, for example, the light shift
induced by an optical beam. The scheme is applicable to many different atomic
species, both in trapped and free fall configurations, and can be adopted to
control the internal and/or external atomic degrees of freedom. Our method will
open new possibilities in cavity-aided atom interferometry and in the
preparation of highly non-classical atomic states.
|
We investigate the quantum transport through a Kondo impurity assuming both a
large number of orbital channels $\mathcal K$$\gg $$1$ for the itinerant
electrons and a semi-classical spin ${\cal S}$ $\gg $ $1$ for the impurity. The
non-Fermi liquid regime of the Kondo problem is achieved in the overscreened
sector $\mathcal K>2\mathcal{S}$. We show that there exist two distinct
semiclassical regimes for the quantum transport through the impurity: i) $\mathcal
K$ $\gg$ $\mathcal S$ $\gg$ $1$, differential conductance vanishes and ii)
$\mathcal S$$/$$\mathcal K{=}\mathcal C$ with $ 0$$<$$\mathcal C$$<$$1/2$,
differential conductance reaches some non-vanishing fraction of its unitary
value. Using conformal field theory approach we analyze behavior of the quantum
transport observables and residual entropy in both semiclassical regimes. We
show that the semiclassical limit ii) preserves the key features of resonance
scattering and the most essential fingerprints of the non-Fermi liquid
behavior. We discuss possible realization of two semiclassical regimes in
semiconductor quantum transport experiments.
|
The Multi-Voltage Threshold (MVT) method, which samples a signal at certain reference voltages, is well developed and has been adopted in pre-clinical and clinical digital positron emission tomography (PET) systems. To improve its energy measurement performance, we propose a Peak Picking MVT (PP-MVT) digitizer in this paper. Firstly, a sampled peak point (the highest point in the pulse signal), which carries the amplitude feature voltage and the amplitude arrival time, is added to traditional MVT with a simple peak-sampling circuit. Secondly, an amplitude deviation statistical analysis, which compares the energy deviation of various reconstruction models, is used to select adaptive reconstruction models for signal pulses with different amplitudes. After processing 30,000 randomly chosen pulses sampled by an oscilloscope with a 22Na point source, our method achieves an energy resolution of 17.50% within a 450-650 keV energy window, which is 2.44% better than the result of traditional MVT with the same thresholds; we obtain a count of 15225 in the same energy window, while the MVT result is 14678. When PP-MVT uses fewer thresholds than traditional MVT, the advantages of better energy resolution and larger count number are still maintained, which shows the robustness and flexibility of the PP-MVT digitizer. This improvement indicates that adding feature peak information can improve signal sampling and reconstruction, as evidenced by the better energy determination in radiation measurement.
|
We present a Python-based renderer built on NVIDIA's OptiX ray tracing engine
and the OptiX AI denoiser, designed to generate high-quality synthetic images
for research in computer vision and deep learning. Our tool enables the
description and manipulation of complex dynamic 3D scenes containing object
meshes, materials, textures, lighting, volumetric data (e.g., smoke), and
backgrounds. Metadata, such as 2D/3D bounding boxes, segmentation masks, depth
maps, normal maps, material properties, and optical flow vectors, can also be
generated. In this work, we discuss design goals, architecture, and
performance. We demonstrate the use of data generated by path tracing for
training an object detector and pose estimator, showing improved performance in
sim-to-real transfer in situations that are difficult for traditional
raster-based renderers. We offer this tool as an easy-to-use, performant,
high-quality renderer for advancing research in synthetic data generation and
deep learning.
|
In this paper, we consider visualization of displacement fields via optical
flow methods in elastographic experiments consisting of a static compression of
a sample. We propose an elastographic optical flow method (EOFM) which takes
into account experimental constraints, such as appropriate boundary conditions,
the use of speckle information, as well as the inclusion of structural
information derived from knowledge of the background material. We present
numerical results based on both simulated and experimental data from an
elastography experiment in order to demonstrate the relevance of our proposed
approach.
|
Population growth in the last decades has resulted in the production of about
2.01 billion tons of municipal waste per year. The current waste management
systems are not capable of providing adequate solutions for the disposal and
use of these wastes. Recycling and reuse have proven to be a solution to the
problem, but large-scale waste segregation is a tedious task and on a small
scale it depends on public awareness. This research used convolutional neural
networks and computer vision to develop a tool for the automation of solid
waste sorting. The Fotini10k dataset was constructed, which has more than
10,000 images divided into the categories of 'plastic bottles', 'aluminum cans'
and 'paper and cardboard'. ResNet50, MobileNetV1 and MobileNetV2 were retrained
with ImageNet weights on the Fotini10k dataset. As a result, top-1 accuracy of
99% was obtained in the test dataset with all three networks. To explore the
possible use of these networks in mobile applications, the three nets were quantized to float16 weights. By doing so, inference times were halved on a Raspberry Pi and reduced to a third on computer processing units, and the size of the networks was halved. When quantizing to float16, the top-1 accuracy of 99% was maintained with all three networks; when quantizing MobileNetV2 to int8, a top-1 accuracy of 97% was obtained.
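A minimal sketch of the kind of post-training float16 quantization described above, using the TensorFlow Lite converter. The model file and output names are placeholders; the trained networks from this work are not distributed here.

```python
import tensorflow as tf

# Post-training float16 quantization of a trained Keras model
# (e.g. a retrained MobileNetV2); paths are placeholders.
model = tf.keras.models.load_model("mobilenetv2_fotini10k.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

tflite_fp16 = converter.convert()
with open("mobilenetv2_fotini10k_fp16.tflite", "wb") as f:
    f.write(tflite_fp16)  # roughly half the size of the float32 model
```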
|
We consider speeding up stochastic gradient descent (SGD) by parallelizing it
across multiple workers. We assume the same data set is shared among $N$
workers, who can take SGD steps and coordinate with a central server. While it
is possible to obtain a linear reduction in the variance by averaging all the
stochastic gradients at every step, this requires a lot of communication
between the workers and the server, which can dramatically reduce the gains
from parallelism. The Local SGD method, proposed and analyzed in the earlier
literature, suggests machines should make many local steps between such
communications. While the initial analysis of Local SGD showed it needs $\Omega
( \sqrt{T} )$ communications for $T$ local gradient steps in order for the
error to scale proportionately to $1/(NT)$, this has been successively improved
in a string of papers, with the state of the art requiring $\Omega \left( N
\left( \mbox{ poly} (\log T) \right) \right)$ communications. In this paper, we
suggest a Local SGD scheme that communicates less overall by communicating less
frequently as the number of iterations grows. Our analysis shows that this can
achieve an error that scales as $1/(NT)$ with a number of communications that
is completely independent of $T$. In particular, we show that $\Omega(N)$
communications are sufficient. Empirical evidence suggests this bound is close
to tight as we further show that $\sqrt{N}$ or $N^{3/4}$ communications fail to
achieve linear speed-up in simulations. Moreover, we show that under mild
assumptions, the main of which is twice differentiability on any neighborhood
of the optimal solution, one-shot averaging which only uses a single round of
communication can also achieve the optimal convergence rate asymptotically.
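A toy sketch of the idea of communicating less frequently as training proceeds: workers take local stochastic gradient steps and average their iterates only at sync points whose spacing grows over time. The geometric doubling schedule, the objective, and the noise model below are illustrative assumptions, not the exact scheme analyzed in the paper.

```python
import numpy as np

def local_sgd(grad, x0, n_workers=4, T=10_000, lr=0.01, base_interval=8):
    """Local SGD sketch: local steps between increasingly rare averaging rounds."""
    workers = [x0.copy() for _ in range(n_workers)]
    rng = np.random.default_rng(0)
    interval, next_sync, n_comms = base_interval, base_interval, 0
    for t in range(1, T + 1):
        for w in workers:
            w -= lr * grad(w, rng)                # local stochastic gradient step
        if t >= next_sync:                        # communication (averaging) round
            avg = np.mean(workers, axis=0)
            workers = [avg.copy() for _ in range(n_workers)]
            n_comms += 1
            interval *= 2                         # communicate less frequently over time
            next_sync = t + interval
    return np.mean(workers, axis=0), n_comms

# Toy quadratic objective with gradient noise.
noisy_grad = lambda x, rng: x + 0.1 * rng.normal(size=x.shape)
x_final, comms = local_sgd(noisy_grad, np.ones(5))
```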
|