This letter presents an energy- and memory-efficient pattern-matching engine
for a network intrusion detection system (NIDS) in the Internet of Things.
Tightly coupled architecture and circuit co-designs are proposed to fully
exploit the statistical behaviors of NIDS pattern matching. The proposed engine
performs pattern matching in three phases, where the phase-1 prefix matching
employs reconfigurable pipelined automata processing to minimize memory
footprint without loss of throughput and efficiency. The processing elements
utilize 8-T content-addressable memory (CAM) cells for dual-port search by
leveraging proposed fixed-1s encoding. A 65-nm prototype demonstrates
best-in-class 1.54-fJ energy per search per pattern byte and 0.9-byte memory
usage per pattern byte.
|
As part of a programme to develop parton showers with controlled logarithmic
accuracy, we consider the question of collinear spin correlations within the
PanScales family of parton showers. We adapt the well-known Collins-Knowles
spin-correlation algorithm to PanScales antenna and dipole showers, using an
approach with similarities to that taken by Richardson and Webster. To study
the impact of spin correlations, we develop Lund-declustering based observables
that are sensitive to spin-correlation effects both within and between jets and
extend the MicroJets collinear single-logarithmic resummation code to include
spin correlations. Together with a 3-point energy correlation observable
proposed recently by Chen, Moult and Zhu, this provides a powerful set of
constraints for validating the logarithmic accuracy of our shower results. The
new observables and their resummation further open the pathway to
phenomenological studies of these important quantum mechanical effects.
|
In this paper, we demonstrate how Hyperledger Fabric, one of the most popular
permissioned blockchains, can benefit from network-attached acceleration. The
scalability and peak performance of Fabric is primarily limited by the
bottlenecks present in its block validation/commit phase. We propose Blockchain
Machine, a hardware accelerator coupled with a hardware-friendly communication
protocol, to act as the validator peer. It can be adapted to applications and
their smart contracts, and targets a server with a network-attached FPGA
acceleration card. The Blockchain Machine retrieves blocks and their
transactions in hardware directly from the network interface, which are then
validated through a configurable and efficient block-level and
transaction-level pipeline. The validation results are then transferred to the
host CPU where non-bottleneck operations are executed. From our implementation
integrated with Fabric v1.4 LTS, we observed up to 12x speedup in block
validation when compared to a software-only validator peer, with commit
throughput of up to 68,900 tps. Our work provides an acceleration platform that
will foster further research on hardware acceleration of permissioned
blockchains.
|
We propose multirate training of neural networks: partitioning neural network
parameters into "fast" and "slow" parts which are trained simultaneously using
different learning rates. By choosing appropriate partitionings we can obtain
large computational speed-ups for transfer learning tasks. We show that for
various transfer learning applications in vision and NLP we can fine-tune deep
neural networks in almost half the time, without reducing the generalization
performance of the resulting model. We also discuss other splitting choices for
the neural network parameters which are beneficial in enhancing generalization
performance in settings where neural networks are trained from scratch.
Finally, we propose an additional multirate technique which can learn different
features present in the data by training the full network on different time
scales simultaneously. The benefits of using this approach are illustrated for
ResNet architectures on image data. Our paper unlocks the potential of using
multirate techniques for neural network training and provides many starting
points for future work in this area.
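As a minimal sketch of the core mechanism, the fast/slow split can be realized with standard optimizer parameter groups; the partitioning below (pretrained backbone as "slow", new task head as "fast") and all hyperparameters are illustrative assumptions rather than the paper's exact setup.

```python
import torch
import torchvision

# Pretrained backbone with a fresh task head (10 classes assumed).
model = torchvision.models.resnet18(weights="DEFAULT")
model.fc = torch.nn.Linear(model.fc.in_features, 10)

# Illustrative partitioning: the new head is "fast", everything else is "slow".
fast = [p for n, p in model.named_parameters() if n.startswith("fc")]
slow = [p for n, p in model.named_parameters() if not n.startswith("fc")]

# Both parts are trained simultaneously, at different learning rates.
optimizer = torch.optim.SGD(
    [{"params": slow, "lr": 1e-4},   # slow part: small steps
     {"params": fast, "lr": 1e-2}],  # fast part: large steps
    momentum=0.9,
)
criterion = torch.nn.CrossEntropyLoss()

def train_step(x, y):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()  # one step updates both parts at their own rates
    return loss.item()
```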
|
Investigations of magnetically ordered phases on the femtosecond timescale
have provided significant insights into the influence of charge and lattice
degrees of freedom on the magnetic sub-system. However, short-range magnetic
correlations occurring in the absence of long-range order, for example in
spin-frustrated systems, are inaccessible to many ultrafast techniques. Here,
we show how time-resolved resonant inelastic X-ray scattering (trRIXS) is
capable of probing such short-ranged magnetic dynamics in a charge-transfer
insulator through the detection of a Zhang-Rice singlet exciton. Utilizing
trRIXS measurements at the O K-edge, and in combination with model
calculations, we probe the short-range spin-correlations in the frustrated spin
chain material CuGeO3 following photo-excitation, revealing a strong coupling
between the local lattice and spin sub-systems.
|
Many modern workloads, such as neural networks, databases, and graph
processing, are fundamentally memory-bound. For such workloads, the data
movement between main memory and CPU cores imposes a significant overhead in
terms of both latency and energy. A major reason is that this communication
happens through a narrow bus with high latency and limited bandwidth, and the
low data reuse in memory-bound workloads is insufficient to amortize the cost
of main memory access. Fundamentally addressing this data movement bottleneck
requires a paradigm where the memory system assumes an active role in computing
by integrating processing capabilities. This paradigm is known as
processing-in-memory (PIM).
Recent research explores different forms of PIM architectures, motivated by
the emergence of new 3D-stacked memory technologies that integrate memory with
a logic layer where processing elements can be easily placed. Past works
evaluate these architectures in simulation or, at best, with simplified
hardware prototypes. In contrast, the UPMEM company has designed and
manufactured the first publicly-available real-world PIM architecture.
This paper provides the first comprehensive analysis of the first
publicly-available real-world PIM architecture. We make two key contributions.
First, we conduct an experimental characterization of the UPMEM-based PIM
system using microbenchmarks to assess various architecture limits such as
compute throughput and memory bandwidth, yielding new insights. Second, we
present PrIM, a benchmark suite of 16 workloads from different application
domains (e.g., linear algebra, databases, graph processing, neural networks,
bioinformatics).
|
Let $(\Omega, \mathcal{A}, \mu)$ be a probability space. The classical
Borel-Cantelli Lemma states that for any sequence of $\mu$-measurable sets
$E_i$ ($i=1,2,3,\dots$), if the sum of their measures converges then the
corresponding $\limsup$ set $E_\infty$ is of measure zero. In general the
converse statement is false. However, it is well known that the divergence
counterpart is true under various additional 'independence' hypotheses. In this
paper we revisit these hypotheses and establish both sufficient and necessary
conditions for $E_\infty$ to have either positive or full measure.
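For orientation, the classical statements in question read, in standard form,
$$ \sum_{i=1}^{\infty} \mu(E_i) < \infty \;\Longrightarrow\; \mu(E_\infty) = 0, \qquad E_\infty := \limsup_{i\to\infty} E_i = \bigcap_{n=1}^{\infty}\bigcup_{i=n}^{\infty} E_i, $$
while the divergence counterpart, $\sum_{i} \mu(E_i) = \infty \Rightarrow \mu(E_\infty) = 1$, holds under additional hypotheses such as pairwise independence of the $E_i$.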
|
Automatic speech recognition (ASR) in Sanskrit is interesting, owing to the
various linguistic peculiarities present in the language. The Sanskrit language
is lexically productive, undergoes euphonic assimilation of phones at the word
boundaries and exhibits variations in spelling conventions and in
pronunciations. In this work, we propose the first large-scale study of
automatic speech recognition (ASR) in Sanskrit, with an emphasis on the impact
of unit selection. We release a 78-hour ASR
dataset for Sanskrit, which faithfully captures several of the linguistic
characteristics expressed by the language. We investigate the role of different
acoustic model and language model units in ASR systems for Sanskrit. We also
propose a new modelling unit, inspired by the syllable level unit selection,
that captures character sequences from one vowel in the word to the next vowel.
We also highlight the importance of choosing graphemic representations for
Sanskrit and show the impact of this choice on word error rates (WER). Finally,
we extend these insights from Sanskrit ASR for building ASR systems in two
other Indic languages, Gujarati and Telugu. For both these languages, our
experimental results show that the use of phonetic-based graphemic
representations in ASR results in performance improvements as compared to ASR
systems that use native scripts.
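As a toy illustration of the vowel-to-vowel unit on a Latin transliteration (the paper's units are defined over Indic graphemes, so the vowel inventory and boundary handling below are simplifying assumptions):

```python
VOWELS = set("aeiou")  # illustrative vowel set for a Latin transliteration

def vowel_to_vowel_units(word):
    """Spans running from each vowel to the next vowel (inclusive), with
    leading/trailing consonants attached to the first/last span."""
    pos = [i for i, ch in enumerate(word) if ch in VOWELS]
    if len(pos) < 2:
        return [word]
    units = [word[pos[i]:pos[i + 1] + 1] for i in range(len(pos) - 1)]
    units[0] = word[:pos[0]] + units[0]          # leading consonants
    units[-1] = units[-1] + word[pos[-1] + 1:]   # trailing consonants
    return units

print(vowel_to_vowel_units("bharata"))  # ['bhara', 'ata']
```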
|
We classify connected graphs $G$ whose binomial edge ideal is Gorenstein. The
proof uses methods in prime characteristic.
|
We provide a comprehensive study of the energy transfer phenomenon --
populating a given energy level -- in 3- and 4-level quantum systems coupled to
two thermal baths. In particular, we examine the effects of an external
periodic driving and the coherence induced by the baths on the efficiency of
the energy transfer. We consider the Floquet-Lindblad and the Floquet-Redfield
scenarios, both of which are in the Born-Markov, weak-coupling regime but differ
in the treatment of the secular approximation, and for the latter, we develop
an appropriate Floquet-type master equation by employing a partial secular
approximation. Throughout the whole analysis we keep Lamb-shift corrections in
the master equations. We observe that, especially in the Floquet-Redfield
scenario, the driving field can enhance the energy transfer efficiency compared
to the nondriven scenario. In addition, unlike degenerate systems where
Lamb-shift corrections do not contribute significantly to the energy transfer,
in the Redfield and the Floquet-Redfield scenarios these corrections have
nonnegligible effects.
|
For years, the extragalactic community has divided galaxies into two distinct
populations. One of them, featuring blue colours, is actively forming stars,
while the other is made up of "red-and-dead" objects with negligible star
formation. Yet, are these galaxies really dead? Here we would like to highlight
that, as previously reported by several independent groups, state-of-the-art
cosmological numerical simulations predict the existence of a large number of
quenched galaxies that have not formed any star over the last few Gyr. In
contrast, observational measurements of large galaxy samples in the nearby
Universe suggest that even the most passive systems still form stars at some
residual level close to $sSFR\sim10^{-12}~\text{yr}^{-1}$. Unfortunately,
extremely low star formation poses a challenge for both approaches. We conclude
that, at present, the fraction of truly dead galaxies is still an important
open question that must be addressed in order to understand galaxy formation
and evolution.
|
Magneto-optical traps (MOTs) are widely used for laser cooling of atoms. We
have developed a high-flux compact cold-atom source based on a pyramid MOT with
a unique adjustable aperture that is highly suitable for portable quantum
technology devices, including space-based experiments. The adjustability
enabled an investigation into the previously unexplored impact of aperture size
on the atomic flux, and optimisation of the aperture size allowed us to
demonstrate a higher flux than any reported cold-atom sources that use a
pyramid, LVIS, 3D-MOT or grating MOT. We achieved 2.0(1)x10^10 atoms/s of 87-Rb
with a mean velocity of 32(1) m/s, FWHM of 27.6(9) m/s and divergence of
58(3) mrad. Halving the total optical power to 195 mW caused only a 26% reduction
of the flux, and a 33% decrease in mean velocity. Methods to further decrease
the velocity as required have been identified. The low power consumption and
small size make this design suitable for a wide range of cold-atom
technologies.
|
The third version of the Hypertext Transfer Protocol (HTTP) is currently in
its final standardization phase by the IETF. Besides better security and
increased flexibility, it promises benefits in terms of performance. HTTP/3
adopts a more efficient header compression scheme and replaces TCP with QUIC, a
transport protocol carried over UDP, originally proposed by Google and
currently under standardization too. Although HTTP/3 early implementations
already exist and some websites announce its support, it has been subject to
few studies. In this work, we provide a first measurement study on HTTP/3. We
document how, during 2020, it was adopted by some of the leading Internet
companies such as Google, Facebook and Cloudflare. We run a large-scale
measurement campaign toward thousands of websites adopting HTTP/3, aiming at
understanding to what extent it achieves better performance than HTTP/2. We
find that adopting websites often host most web page objects on third-party
servers, which support only HTTP/2 or even HTTP/1.1. Our experiments show that
HTTP/3 provides sizable benefits only in scenarios with high latency or very
poor bandwidth. Despite the adoption of QUIC, we do not find benefits in case
of high packet loss, but we observe large diversity across website providers'
infrastructures.
|
We present a novel method of performing spelling correction on short input
strings, such as search queries or individual words. At its core lies a
procedure for generating artificial typos which closely follow the error
patterns manifested by humans. This procedure is used to train the production
spelling correction model based on a transformer architecture. This model is
currently served in the HubSpot product search. We show that our approach to
typo generation is superior to the widespread practice of adding noise, which
ignores human patterns. We also demonstrate how our approach may be extended to
resource-scarce settings and train spelling correction models for Arabic,
Greek, Russian, and Setswana languages, without using any labeled data.
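To contrast the two noising strategies at the level of mechanics (neither function reproduces the paper's generator; the adjacency map is a small illustrative excerpt):

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
QWERTY_NEIGHBORS = {  # tiny excerpt of a QWERTY adjacency map (assumption)
    "a": "qwsz", "s": "awedxz", "e": "wsdr", "o": "iklp", "n": "bhjm",
}

def uniform_typo(word):
    """Naive noise: replace one character with a uniformly random letter."""
    i = random.randrange(len(word))
    return word[:i] + random.choice(ALPHABET) + word[i + 1:]

def keyboard_typo(word):
    """Closer to human errors: replace a character with an adjacent key."""
    i = random.randrange(len(word))
    neighbors = QWERTY_NEIGHBORS.get(word[i])
    if not neighbors:              # fall back when adjacency is unknown
        return uniform_typo(word)
    return word[:i] + random.choice(neighbors) + word[i + 1:]

print(keyboard_typo("season"))  # e.g. 'seaspn'
```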
|
Fair clustering is the process of grouping similar entities together, while
satisfying a mathematically well-defined fairness metric as a constraint. Due
to the practical challenges in precise model specification, the prescribed
fairness constraints are often incomplete and act as proxies to the intended
fairness requirement, leading to biased outcomes when the system is deployed.
We examine how to identify the intended fairness constraint for a problem based
on limited demonstrations from an expert. Each demonstration is a clustering
over a subset of the data.
We present an algorithm to identify the fairness metric from demonstrations
and generate clusters using existing off-the-shelf clustering techniques, and
analyze its theoretical properties. To extend our approach to novel fairness
metrics for which clustering algorithms do not currently exist, we present a
greedy method for clustering. Additionally, we investigate how to generate
interpretable solutions using our approach. Empirical evaluation on three
real-world datasets demonstrates the effectiveness of our approach in quickly
identifying the underlying fairness and interpretability constraints, which are
then used to generate fair and interpretable clusters.
|
The median webpage has increased in size by more than 80% in the last 4
years. This extra complexity allows for a rich browsing experience, but it
hurts the majority of mobile users, who still pay for their traffic. This has
motivated several data-saving solutions, which aim at reducing the complexity
of webpages by transforming their content. Despite each method being unique,
they either reduce user privacy by further centralizing web traffic through
data-saving middleboxes or introduce web compatibility (Webcompat) issues by
removing content that breaks pages in unpredictable ways. In this paper, we
argue that data-saving is still possible without impacting either users' privacy
or Webcompat. Our main observation is that Web images make up a large portion
of Web traffic and have negligible impact on Webcompat. To this end we make two
main contributions. First, we quantify the potential savings that image
manipulation, such as dimension resizing, quality compression, and transcoding,
enables at large scale: 300 landing and 880 internal pages. Next, we design and
build Browselite, an entirely client-side tool that achieves such data savings
through opportunistically instrumenting existing server-side tooling to perform
image compression, while simultaneously reducing the total amount of image data
fetched. The effect of Browselite on the user experience is quantified using
standard page load metrics and a real user study of over 200 users across 50
optimized web pages. Browselite allows for similar savings to middlebox
approaches, while offering additional security, privacy, and Webcompat
guarantees.
|
Neural volumetric representations such as Neural Radiance Fields (NeRF) have
emerged as a compelling technique for learning to represent 3D scenes from
images with the goal of rendering photorealistic images of the scene from
unobserved viewpoints. However, NeRF's computational requirements are
prohibitive for real-time applications: rendering views from a trained NeRF
requires querying a multilayer perceptron (MLP) hundreds of times per ray. We
present a method to train a NeRF, then precompute and store (i.e. "bake") it as
a novel representation called a Sparse Neural Radiance Grid (SNeRG) that
enables real-time rendering on commodity hardware. To achieve this, we
introduce 1) a reformulation of NeRF's architecture, and 2) a sparse voxel grid
representation with learned feature vectors. The resulting scene representation
retains NeRF's ability to render fine geometric details and view-dependent
appearance, is compact (averaging less than 90 MB per scene), and can be
rendered in real-time (higher than 30 frames per second on a laptop GPU).
Actual screen captures are shown in our video.
|
The recent advancements in multicore machines highlight the need to simplify
concurrent programming in order to leverage their computational power. One way
to achieve this is by designing efficient concurrent data structures (e.g.
stacks, queues, hash-tables, etc.) and synchronization techniques (e.g. locks,
combining techniques, etc.) that perform well in machines with large amounts of
cores. In contrast to ordinary, sequential data-structures, the concurrent
data-structures allow multiple threads to simultaneously access and/or modify
them.
Synch is an open-source framework that not only provides some common
high-performance concurrent data-structures, but also gives researchers
the tools for designing and benchmarking high-performance concurrent
data-structures. The Synch framework contains a substantial set of concurrent
data-structures such as queues, stacks, combining-objects, hash-tables, locks,
etc. and it provides a user-friendly runtime for developing and benchmarking
concurrent data-structures. Among other features, the runtime offers
functionality for creating threads easily (both POSIX and user-level threads),
tools for measuring performance, etc. Moreover, the provided concurrent
data-structures and the runtime are highly optimized for contemporary NUMA
multiprocessors such as AMD Epyc and Intel Xeon.
|
The exponential functional link network (EFLN) has been recently investigated
and applied to nonlinear filtering. This brief proposes an adaptive EFLN
filtering algorithm based on a novel inverse square root (ISR) cost function,
called the EFLN-ISR algorithm, whose learning capability is robust under
impulsive interference. The steady-state performance of EFLN-ISR is rigorously
derived and then confirmed by numerical simulations. Moreover, the validity of
the proposed EFLN-ISR algorithm is justified by actual experimental
results from an application to hysteretic nonlinear system identification.
|
A cerebral hematoma can grow rapidly within 6-24 hours, and mispredicting this
growth can be fatal if the patient is not operated on by a brain surgeon. There
are two types of cerebral hematomas: those that grow rapidly and those that do
not. We are developing an artificial intelligence technique to determine
whether a CT image contains a cerebral hematoma that will grow rapidly. This
problem poses several difficulties: positive cases are few, and the targeted
hematoma is a deformable object. Further difficulties include class imbalance,
covariate shift, small data, and spurious correlations, which make plain CNN
classifiers such as VGG insufficient. This paper proposes the joint learning of
semantic segmentation and classification and evaluates its performance.
|
We have followed up two ultra-diffuse galaxies (UDGs), detected adjacent to
stellar streams, with Hubble Space Telescope (HST) imaging and HI mapping with
the Jansky Very Large Array (VLA) in order to investigate the possibility that
they might have a tidal origin. With the HST F814W and F555W images we measure
the globular cluster (GC) counts for NGC 2708-Dw1 and NGC 5631-Dw1 as
$2^{+1}_{-1}$ and $5^{+1}_{-2}$, respectively. NGC 2708-Dw1 is undetected in HI
down to a 3$\sigma$ limit of $\log (M_\mathrm{HI}/\mathrm{M_\odot}) = 7.3$, and
there is no apparent HI associated with the nearby stellar stream. There is a
2$\sigma$ HI feature coincident with NGC 5631-Dw1. However, this emission is
blended with a large gaseous tail emanating from NGC 5631 and is not
necessarily associated with the UDG. The presence of any GCs and the lack of
clear HI connections between the UDGs and their parent galaxies strongly
disfavor a tidal dwarf galaxy origin, but cannot entirely rule it out. The GC
counts are consistent with those of normal dwarf galaxies, and the most
probable formation mechanism is one where these UDGs were born as normal dwarfs
and were later tidally stripped and heated. We also identify an over-luminous
($M_\mathrm{V} = -11.1$) GC candidate in NGC 2708-Dw1, which may be a nuclear
star cluster transitioning to an ultra-compact dwarf as the surrounding dwarf
galaxy gets stripped of stars.
|
Hierarchical classification is significant for complex tasks by providing
multi-granular predictions and encouraging better mistakes. As the label
structure decides its performance, many existing approaches attempt to
construct an excellent label structure for promoting the classification
results. In this paper, we consider that different label structures provide a
variety of prior knowledge for category recognition, thus fusing them is
helpful to achieve better hierarchical classification results. Furthermore, we
propose a multi-task multi-structure fusion model to integrate different label
structures. It contains two kinds of branches: one is the traditional
classification branch to classify the common subclasses, the other is
responsible for identifying the heterogeneous superclasses defined by different
label structures. Besides the effect of multiple label structures, we also
explore the architecture of the deep model for better hierarchical
classification and adjust the hierarchical evaluation metrics for multiple
label structures. Experimental results on CIFAR100 and Car196 show that our
method obtains significantly better results than using a flat classifier or a
hierarchical classifier with any single label structure.
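A minimal sketch of the two-branch design, with a shared backbone, a subclass head, and one head per label structure (the backbone, head sizes, and uniform loss weighting below are illustrative assumptions, not the paper's model):

```python
import torch
import torch.nn as nn

class MultiStructureClassifier(nn.Module):
    """Shared backbone; one branch for the common subclasses and one branch
    per label structure for its heterogeneous superclasses."""

    def __init__(self, n_subclasses=100, superclass_sizes=(20, 8)):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for a deep feature extractor
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.subclass_head = nn.Linear(64, n_subclasses)
        self.superclass_heads = nn.ModuleList(
            nn.Linear(64, k) for k in superclass_sizes
        )

    def forward(self, x):
        z = self.backbone(x)
        return self.subclass_head(z), [h(z) for h in self.superclass_heads]

def joint_loss(sub_logits, super_logits, sub_y, super_ys):
    """Multi-task objective summed over all branches (uniform weights assumed)."""
    ce = nn.functional.cross_entropy
    return ce(sub_logits, sub_y) + sum(
        ce(l, y) for l, y in zip(super_logits, super_ys)
    )
```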
|
Classical machine learning approaches are sensitive to non-stationarity.
Transfer learning can address non-stationarity by sharing knowledge from one
system to another, however, in areas like machine prognostics and defense, data
is fundamentally limited. Therefore, transfer learning algorithms have little,
if any, examples from which to learn. Herein, we suggest that these constraints
on algorithmic learning can be addressed by systems engineering. We formally
define transfer distance in general terms and demonstrate its use in
empirically quantifying the transferability of models. We consider the use of
transfer distance in the design of machine rebuild procedures to allow for
transferable prognostic models. We also consider the use of transfer distance
in predicting operational performance in computer vision. Practitioners can use
the presented methodology to design and operate systems with consideration for
the learning theoretic challenges faced by component learning systems.
|
This paper considers the inverse problem of recovering both the unknown,
spatially-dependent conductivity $a(x)$ and the nonlinear reaction term $f(u)$
in a reaction-diffusion equation from overposed data. These measurements can
consist of: the value of two different solution measurements taken at a later
time $T$; time-trace profiles from two solutions; or both final time and
time-trace measurements from a single forward solve. We prove both
uniqueness results and the convergence of iteration schemes designed to recover
these coefficients. The last section of the paper shows numerical
reconstructions based on these algorithms.
|
Einstein equivalence principle (EEP), as one of the foundations of general
relativity, is a fundamental test of gravity theories. In this paper, we
propose a new method to test the EEP of electromagnetic interactions through
observations of black hole photon rings, which naturally extends the scale of
Newtonian and post-Newtonian gravity, where the EEP violation through a variable
fine structure constant has been well constrained to that of stronger gravity.
We start from a general form of Lagrangian that violates EEP, where a specific
EEP violation model could be regarded as one of the cases of this Lagrangian.
Within the geometrical optical approximation, we find that the dispersion
relation of photons is modified: for photons moving in circular orbit, the
dispersion relation simplifies, and behaves such that photons with different
linear polarizations perceive different gravitational potentials. This makes
the size of the black hole photon ring depend on polarization. Further assuming
that the EEP violation is small, we derive an approximate analytic expression
for spherical black holes showing that the change in size of the photon ring is
proportional to the violation parameters. We also discuss several cases of this
analytic expression for specific models. Finally, we explore the effects of
black hole rotation and derive a modified proportionality relation between the
change in size of the photon ring and the violation parameters. The numerical and
analytic results show that the influence of black hole rotation on the
constraints of EEP violation is relatively weak for small magnitude of EEP
violation and small rotation speed of black holes.
|
One of the important and widely used classes of models for non-Gaussian time
series is the generalized autoregressive moving average (GARMA) model, which
specifies an ARMA structure for the conditional mean process of the underlying
time series. However, in many applications one often encounters conditional
heteroskedasticity. In this paper we propose a new class of models, referred to
as GARMA-GARCH models, that jointly specify both the conditional mean and
conditional variance processes of a general non-Gaussian time series. Under the
general modeling framework, we propose three specific models, as examples, for
proportional time series, nonnegative time series, and skewed and heavy-tailed
financial time series. Maximum likelihood estimator (MLE) and quasi Gaussian
MLE (GMLE) are used to estimate the parameters. Simulation studies and three
applications are used to demonstrate the properties of the models and the
estimation procedures.
|
In this paper we consider the second eigenfunction of the Laplacian with
Dirichlet boundary conditions in convex domains. If the domain has \emph{large
eccentricity} then the eigenfunction has \emph{exactly} two nondegenerate
critical points (of course they are one maximum and one minimum). The proof
uses some estimates proved by Jerison ([Jer95a]) and Grieser-Jerison ([GJ96])
jointly with a topological degree argument. Analogous results for higher order
eigenfunctions are proved in rectangular-like domains considered in [GJ09].
|
Transient field-resolved spectroscopy enables studies of ultrafast dynamics
in molecules, nanostructures, or solids with sub-cycle resolution, but previous
work has so far concentrated on extracting the dielectric response at
frequencies below 50\,THz. Here, we implemented transient field-resolved
reflectometry at 50-100\,THz (3-6\,$\mu$m) with MHz repetition rate employing
800\,nm few-cycle excitation pulses that provide sub-10\,fs temporal
resolution. The capabilities of the technique are demonstrated in studies of
ultrafast photorefractive changes in the semiconductors Ge and GaAs, where the
high frequency range permitted exploration of the resonance-free Drude response.
The extended frequency range in transient field-resolved spectroscopy can
further enable studies of so-far inaccessible transitions, including
intramolecular vibrations in a large range of systems.
|
The decoupling of heavy fields as required by the Appelquist-Carazzone
theorem plays a fundamental role in the construction of any effective field
theory. However, it is not a trivial task to implement a renormalization
prescription that produces the expected decoupling of massive fields, and it is
even more difficult in curved spacetime. Focused on this idea, we consider the
renormalization of the one-loop effective action for the Yukawa interaction
with a background scalar field in curved space. We compute the beta functions
within a generalized DeWitt-Schwinger subtraction procedure and discuss the
decoupling in the running of the coupling constants. For the case of a
quantized scalar field, all the beta functions exhibit decoupling, including
also the gravitational ones. For a quantized Dirac field, decoupling appears
for almost all the beta functions. We obtain the anomalous result that the mass
of the background scalar field does not decouple.
|
Baryon-to-meson and baryon-to-photon transition distribution amplitudes
(TDAs) arise in the collinear factorized description of a class of hard
exclusive reactions characterized by the exchange of a non-zero baryon number
in the cross channel. These TDAs extend the concepts of generalized parton
distributions (GPDs) and baryon distribution amplitudes (DAs). In this review
we discuss the general properties and physical interpretation of
baryon-to-meson and baryon-to-photon TDAs. We argue that these non-perturbative
objects are a convenient complementary tool to explore the structure of baryons
at the partonic level. We present an overview of hard exclusive reactions
admitting a description in terms of TDAs. We discuss the first signals from
hard exclusive backward meson electroproduction at JLab with the 6 GeV electron
beam and explore further experimental opportunities to access TDAs at JLab@12
GeV, PANDA and J-PARC.
|
We study homogeneous nucleation in the two-dimensional $q$-state Potts model
for $q=3,5,10,20$ and ferromagnetic couplings $J_{ij} \propto \Theta (R -
|i-j|)$, by means of Monte Carlo simulations employing heat bath dynamics.
Metastability is induced in the low temperature phase through an instantaneous
quench of the magnetic field coupled to one of the $q$ spin states. The quench
depth is adjusted, depending on the value of temperature $T$, interaction range
$R$, and number of states $q$, in such a way that a constant nucleation time is
always obtained. In this setup we analyze the crossover between the classical
compact droplet regime occurring in presence of short range interactions $R
\sim 1$, and the long-range regime $R\gg 1$ where the properties of nucleation
are influenced by the presence of a mean-field spinodal singularity. We
evaluate the metastable susceptibility of the order parameter as well as
various critical droplet properties, which along with the evolution of the
quench depth as a function of $q,T$ and $R$, are then compared with the field
theoretical predictions valid in the large $R$ limit in order to find the onset
of spinodal-assisted nucleation. We find that, with a mild dependence on the
values of $q$ and $T$ considered, spinodal scaling holds for interaction ranges
$R\gtrsim 8-10$, and that signatures of the presence of a pseudo-spinodal are
already visible for remarkably small interaction ranges $R\sim 4-5$. The
influence of spinodal singularities on the occurrence of multi-step nucleation
is also discussed.
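For orientation, a heat-bath update for the short-range limit ($R \sim 1$, nearest neighbours only) with a field coupled to one spin state can be sketched as below; the paper's long-range couplings and quench protocol are not reproduced here, and the values of $T$ and $h$ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def heat_bath_sweep(spins, q, T, h=0.0):
    """One heat-bath sweep of the 2D nearest-neighbour q-state Potts model,
    with a magnetic field h coupled to spin state 0 (J = 1)."""
    L = spins.shape[0]
    beta = 1.0 / T
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nbrs = [spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                spins[i, (j + 1) % L], spins[i, (j - 1) % L]]
        # Energy of each candidate state s: -(equal neighbours) - h*[s == 0].
        E = np.array([-sum(int(n == s) for n in nbrs) - h * (s == 0)
                      for s in range(q)], dtype=float)
        w = np.exp(-beta * E)
        spins[i, j] = rng.choice(q, p=w / w.sum())

spins = np.zeros((32, 32), dtype=int)            # start ordered in state 0
for _ in range(200):
    heat_bath_sweep(spins, q=3, T=0.5, h=-0.05)  # quenched field disfavours state 0
```

Starting ordered in state 0 and quenching the field to a negative value makes that phase metastable, so the decay proceeds via nucleation of droplets of the other states.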
|
Consider any network of $n$ identical Kuramoto oscillators in which each
oscillator is coupled bidirectionally with unit strength to at least $\mu
(n-1)$ other oscillators. There is a critical value of the connectivity,
$\mu_c$, such that whenever $\mu>\mu_c$, the system is guaranteed to converge
to the all-in-phase synchronous state for almost all initial conditions, but
when $\mu<\mu_c$, there are networks with other stable states. The precise
value of the critical connectivity remains unknown, but it has been conjectured
to be $\mu_c=0.75$. In 2020, Lu and Steinerberger proved that $\mu_c\leq
0.7889$, and Yoneda, Tatsukawa, and Teramae proved in 2021 that $\mu_c >
0.6838$. In this paper, we prove that $\mu_c\leq 0.75$ and explain why this is
the best upper bound that one can obtain by a purely linear stability analysis.
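The statement is easy to probe numerically; below is a sketch for identical oscillators on a dense circulant graph with $\mu = 0.8 > 0.75$ (the graph choice, step size, and horizon are our assumptions):

```python
import numpy as np

def simulate(A, theta, dt=0.005, steps=20000):
    """Euler integration of dtheta_i/dt = sum_j A_ij sin(theta_j - theta_i)."""
    for _ in range(steps):
        theta = theta + dt * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return theta

n, mu = 40, 0.8                     # connectivity above the conjectured 0.75
k = int(np.ceil(mu * (n - 1) / 2))  # circulant graph: k neighbours on each side
A = np.zeros((n, n))
for d in range(1, k + 1):           # couple nodes at circular distance d
    A += np.eye(n, k=d) + np.eye(n, k=d - n) + np.eye(n, k=-d) + np.eye(n, k=n - d)

rng = np.random.default_rng(1)
theta = simulate(A, rng.uniform(0, 2 * np.pi, n))
r = abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.4f}")  # r near 1 indicates the all-in-phase state
```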
|
Integrated circuits (ICs) that can operate at high temperature have a wide
variety of applications in the fields of automotive, aerospace, space
exploration, and deep-well drilling. Conventional silicon-based complementary
metal-oxide-semiconductor (CMOS) circuits cannot work at higher than 200
$^\circ$C, leading to the use of wide bandgap semiconductors, especially silicon
carbide (SiC). However, high-density defects at an oxide-SiC interface make it
impossible to predict electrical characteristics of SiC CMOS logic gates in a
wide temperature range, and a high supply voltage (typically $\geq 15$ V) is
required to compensate their large logic threshold voltage shift. Here, we show
that SiC complementary logic gates composed of p- and n-channel junction
field-effect transistors (JFETs) operate at 300 $^\circ$C with a supply voltage
as low as 1.4 V. The logic threshold voltage shift of the complementary JFET
(CJFET) inverter is 0.2 V from room temperature to 300 $^\circ$C. Furthermore,
temperature dependencies of the static and dynamic characteristics of the CJFET
inverter are well explained by a simple analytical model of SiC JFETs. This
allows us to perform electronic circuit simulation, leading to superior
designability of complex circuits or memories based on SiC CJFET technology,
which operate within a wide temperature range.
|
The Istanbul options were first introduced by Michel Jacques in 1997. These
derivatives are considered an extension of the Asian options. In this paper,
we propose an analytical approximation formula for a geometric Istanbul call
option (GIC) under the Black-Scholes model. Our approximate pricing formula is
obtained in closed-form using a second-order Taylor expansion. We compare our
theoretical results with those of Monte-Carlo simulations using the control
variates method. Finally, we study the effects of changes in the price of the
underlying asset on the value of GIC.
|
We study three-terminal thermoelectric transport in a two-dimensional Quantum
Point Contact (QPC) connected to left and right electronic reservoirs, as well
as a third one represented by a scanning probe tip. The latter acts as a
voltage probe exchanging heat with the system but no charges on average. The
thermoelectric coefficients are calculated numerically within the
Landauer-B\"uttiker formalism in the low-temperature and linear response
regimes. We find tip-induced oscillations of the local and non-local
thermopowers and study their dependence on the QPC opening. If the latter is
tuned on a conductance plateau, the system behaves as a perfect thermoelectric
diode: for some tip positions the charge current through the QPC, driven by a
local Seebeck effect, can flow in one direction only.
|
Identifying harmful instances, whose absence in a training dataset improves
model performance, is important for building better machine learning models.
Although previous studies have succeeded in estimating harmful instances under
supervised settings, they cannot be trivially extended to generative
adversarial networks (GANs). This is because previous approaches require that
(1) the absence of a training instance directly affects the loss value and that
(2) the change in the loss directly measures the harmfulness of the instance
for the performance of a model. In GAN training, however, neither of the
requirements is satisfied. This is because, (1) the generator's loss is not
directly affected by the training instances as they are not part of the
generator's training steps, and (2) the values of GAN's losses normally do not
capture the generative performance of a model. To this end, (1) we propose an
influence estimation method that uses the Jacobian of the gradient of the
generator's loss with respect to the discriminator's parameters (and vice
versa) to trace how the absence of an instance in the discriminator's training
affects the generator's parameters, and (2) we propose a novel evaluation
scheme, in which we assess the harmfulness of each training instance on the basis
of how a GAN evaluation metric (e.g., inception score) is expected to change due to
the removal of the instance. We experimentally verified that our influence
estimation method correctly inferred the changes in GAN evaluation metrics.
Further, we demonstrated that the removal of the identified harmful instances
effectively improved the model's generative performance with respect to various
GAN evaluation metrics.
|
We present a post-training weight pruning method for deep neural networks
that achieves accuracy levels tolerable for the production setting and that is
sufficiently fast to be run on commodity hardware such as desktop CPUs or edge
devices. We propose a data-free extension of the approach for computer vision
models based on automatically-generated synthetic fractal images. We obtain
state-of-the-art results for data-free neural network pruning, with ~1.5% top@1
accuracy drop for a ResNet50 on ImageNet at 50% sparsity rate. When using real
data, we are able to get a ResNet50 model on ImageNet with 65% sparsity rate in
8-bit precision in a post-training setting with a ~1% top@1 accuracy drop. We
release the code as a part of the OpenVINO(TM) Post-Training Optimization tool.
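As a baseline illustration of the post-training setting (generic one-shot global magnitude pruning, not the paper's method or its fractal-based data-free calibration):

```python
import torch
import torchvision

model = torchvision.models.resnet50(weights="DEFAULT")

sparsity = 0.5
weights = [m.weight for m in model.modules()
           if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
scores = torch.cat([w.detach().abs().flatten() for w in weights])
threshold = scores.kthvalue(int(sparsity * scores.numel())).values

with torch.no_grad():
    for w in weights:
        w.mul_((w.abs() > threshold).float())   # zero out the smallest weights

zeros = sum((w == 0).sum().item() for w in weights)
total = sum(w.numel() for w in weights)
print(f"sparsity achieved: {zeros / total:.2%}")
```

Reaching competitive accuracy at such sparsity levels typically requires additional calibration steps on top of this baseline.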
|
We show that, in a weakly regular $p$-adic Lie group $G$, the subgroup $G_u$
spanned by the one-parameter subgroups of $G$ admits a Levi decomposition. As a
consequence, there exists a regular open subgroup of $G$ which contains $G_u$.
|
A motion-blurred image is the temporal average of multiple sharp frames over
the exposure time. Recovering these sharp video frames from a single blurred
image is nontrivial, due to not only its strong ill-posedness, but also various
types of complex motion in reality such as rotation and motion in depth. In
this work, we report a generalized video extraction method using affine
motion modeling, enabling it to tackle multiple types of complex motion and their
mixing. In its workflow, the moving objects are first segmented in the alpha
channel. This allows separate recovery of different objects with different
motion. Then, we reduce the variable space by modeling each video clip as a
series of affine transformations of a reference frame, and introduce the
$\ell_0$-norm total variation regularization to attenuate the ringing artifact. The
differentiable affine operators are employed to realize gradient-descent
optimization of the affine model, which follows a novel coarse-to-fine strategy
to further reduce artifacts. As a result, both the affine parameters and sharp
reference image are retrieved. They are finally input into stepwise affine
transformation to recover the sharp video frames. The stepwise retrieval
naturally bypasses the frame-order ambiguity. Experiments on both
public datasets and real captured data validate the state-of-the-art
performance of the reported technique.
|
We study multicomponent coagulation via the Smoluchowski coagulation equation
under non-equilibrium stationary conditions induced by a source of small
clusters. The coagulation kernel can be very general, merely satisfying certain
power law asymptotic bounds in terms of the total number of monomers in a
cluster. The bounds are characterized by two parameters and we extend previous
results for one-component systems to classify the parameter values for which
the above stationary solutions do or do not exist. Moreover, we also obtain
criteria for the existence or non-existence of solutions which yield a constant
flux of mass towards large clusters.
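In the one-component case, the stationary solutions in question solve, schematically,
$$ 0 = \frac{1}{2}\int_0^{x} K(x-y,y)\, f(x-y) f(y)\, \mathrm{d}y \;-\; f(x)\int_0^{\infty} K(x,y) f(y)\, \mathrm{d}y \;+\; \eta(x), $$
where $\eta$ is the source concentrated on small cluster sizes and the two parameters bound the kernel by power laws (e.g. $K(x,y) \asymp x^{\gamma+\lambda}y^{-\lambda} + y^{\gamma+\lambda}x^{-\lambda}$ in a common parameterization; notation ours, for illustration).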
|
Ultraviolet (UV) exposure significantly contributes to non-melanoma skin
cancer. In the context of health, UV exposure is the product of time and the UV
Index (UVI), a weighted sum of the irradiance I(lambda) over all wavelengths
from lambda = 250 to 400 nm. In our analysis of the United States Environmental
Protection Agency's UV-Net database of over four hundred thousand spectral
irradiance measurements taken over several years, we found that the UVI is well
estimated by UVI = 77 I(310nm). To better understand this result, we applied an
optical atmospheric model of the terrestrial irradiance spectra and found that
it applies across a wide range of conditions.
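A sketch of both the full definition and the proxy (using the standard CIE erythemal weighting and the conventional 40 m^2/W normalization; illustrative, not the paper's code):

```python
import numpy as np

def erythemal_weight(lam_nm):
    """CIE (McKinlay-Diffey) erythemal action spectrum."""
    if lam_nm <= 298.0:
        return 1.0
    if lam_nm <= 328.0:
        return 10.0 ** (0.094 * (298.0 - lam_nm))
    if lam_nm <= 400.0:
        return 10.0 ** (0.015 * (140.0 - lam_nm))
    return 0.0

def uv_index(wavelengths_nm, irradiance):
    """Full definition: UVI = 40 m^2/W times the erythemally weighted
    integral of spectral irradiance (in W/m^2/nm) over wavelength."""
    w = np.array([erythemal_weight(l) for l in wavelengths_nm])
    return 40.0 * np.trapz(irradiance * w, wavelengths_nm)

def uv_index_proxy(i_310nm):
    """The paper's single-wavelength estimate, with I(310 nm) in W/m^2/nm."""
    return 77.0 * i_310nm
```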
|
The use of the full potential of stellar seismology is made difficult by the
improper modeling of the upper-most layers of solar-like stars and their
influence on the modeled frequencies. Our knowledge on these \emph{surface
effects} has improved thanks to the use of 3D hydrodynamical simulations but
the calculation of eigenfrequencies relies on empirical models for the
description of the Lagrangian perturbation of turbulent pressure: the
reduced-$\Gamma_1$ model (RGM) and the gas-$\Gamma_1$ model (GGM). Starting
from the fully compressible turbulence equations, we derive both the GGM and
RGM models using a closure to model the flux of turbulent kinetic energy. It is
found that both models originate from two terms: the source of turbulent
pressure due to compression produced by the oscillations and the divergence of
the flux of turbulent pressure. It is also demonstrated that they are both
compatible with the adiabatic approximation but also imply a number of
questionable assumptions, mainly regarding mode physics. Among other
hypotheses, one has to neglect the Lagrangian perturbation of the dissipation
of turbulent kinetic energy into heat and the Lagrangian perturbation of
buoyancy work.
|
We give a link criterion for normal embeddings of definable sets in o-minimal
structures. Namely, we prove that given a definable germ $(X, 0)\subset
(\mathbb{R}^n,0)$ with $(X\setminus\{0\},0)$ connected and a continuous
definable function $\rho: (X,0) \to \mathbb{R}_{\geq 0}$ such that $\rho(x)
\sim \|x\|$, then $(X,0)$ is Lipschitz normally embedded (LNE) if and only if
$(X,0)$ is link Lipschitz normally embedded (LLNE) with respect to $\rho$
(i.e., for $r>0$ small enough, $X\cap \rho^{-1}(r)$ is Lipschitz normally
embedded and its LNE constant is bounded by a constant $C$ independent of $r$).
This is a generalization of Mendes--Sampaio's result for the subanalytic case.
As an application, we give a counterexample to a question on the relation
between Lipschitz normal embedding and MD Homology asked by Bobadilla et al. in
their paper about Moderately Discontinuous Homology.
|
We say that a random integer variable $X$ is monotone if the modulus of the
characteristic function of $X$ is decreasing on $[0,\pi]$. This is the case for
many commonly encountered variables, e.g., Bernoulli, Poisson and geometric
random variables. In this note, we provide estimates for the probability that
the sum of independent monotone integer variables attains precisely a specific
value. We do not assume that the variables are identically distributed. Our
estimates are sharp when the specific value is close to the mean, but they are
not useful further out in the tail. By combining with the trick of
\emph{exponential tilting}, we obtain sharp estimates for the point
probabilities in the tail under a slightly stronger assumption on the random
integer variables which we call strong monotonicity.
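As a quick check of the definition (a standard computation, included for illustration): for $X \sim \mathrm{Bernoulli}(p)$,
$$ \varphi_X(t) = 1 - p + p e^{it}, \qquad |\varphi_X(t)|^2 = (1-p)^2 + p^2 + 2p(1-p)\cos t, $$
whose derivative in $t$ is $-2p(1-p)\sin t \leq 0$ on $[0,\pi]$, so $|\varphi_X|$ is decreasing there and $X$ is monotone.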
|
The lack of stability guarantee restricts the practical use of learning-based
methods in core control problems in robotics. We develop new methods for
learning neural control policies and neural Lyapunov critic functions in the
model-free reinforcement learning (RL) setting. We use sample-based approaches
and the Almost Lyapunov function conditions to estimate the region of
attraction and invariance properties through the learned Lyapunov critic
functions. The methods enhance stability of neural controllers for various
nonlinear systems including automobile and quadrotor control.
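A sample-based check of this kind can be sketched as follows; the critic architecture, decrease threshold, and stand-in transitions are illustrative assumptions, not the paper's algorithm:

```python
import torch

def lyapunov_violation_rate(V, s, s_next, eps=1e-3):
    """Fraction of sampled transitions violating positivity of V or a
    sufficient-decrease condition along the closed-loop trajectory."""
    v_now = V(s).squeeze(-1)
    v_next = V(s_next).squeeze(-1)
    ok = (v_now > 0) & (v_next - v_now < -eps * v_now)
    return 1.0 - ok.float().mean().item()

# Usage with a toy critic and contracting stand-in dynamics.
V = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 1))
s = torch.randn(256, 2)
s_next = 0.95 * s                  # states moving toward the equilibrium
print(lyapunov_violation_rate(V, s, s_next))
```

A low violation rate over sampled states supports estimating a region of attraction from sublevel sets of the learned critic.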
|
Let $T$ be a linear operator on an $\mathbb{F}_q$-vector space $V$ of
dimension $n$. For any divisor $m$ of $n$, an $m$-dimensional subspace $W$ of
$V$ is $T$-splitting if
$$ V =W\oplus TW\oplus \cdots \oplus T^{d-1}W, $$ where $d=n/m$. Let
$\sigma(m,d;T)$ denote the number of $m$-dimensional $T$-splitting subspaces.
Determining $\sigma(m,d;T)$ for an arbitrary operator $T$ is an open problem.
This problem is closely related to another open problem on Krylov spaces. We
discuss this connection and give explicit formulae for $\sigma(m,d;T)$ in the
case where the invariant factors of $T$ satisfy certain degree conditions. A
connection with another enumeration problem on polynomial matrices is also
discussed.
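As a small worked instance (our illustration, not taken from the paper): for $n = 2$ and $m = 1$, a line $W = \langle w \rangle$ is $T$-splitting precisely when
$$ V = W \oplus TW \iff \{w, Tw\} \text{ is a basis of } V \iff w \text{ is a cyclic vector for } T. $$
So $\sigma(1,2;T)$ counts the lines spanned by cyclic vectors; if, say, $T$ has two distinct eigenvalues in $\mathbb{F}_q$, the non-cyclic nonzero vectors are the $2(q-1)$ eigenvectors, giving
$$ \sigma(1,2;T) = \frac{(q^2-1) - 2(q-1)}{q-1} = q-1. $$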
|
We study the probability distribution of the number of particle and
antiparticle pairs produced via the Schwinger effect when a uniform but
time-dependent electric field is applied to noninteracting scalars or spinors
initially at a thermodynamic equilibrium. We derive the formula for the
characteristic function by employing techniques in mesoscopic physics,
reflecting a close analogy between the Schwinger effect and mesoscopic
tunneling transports. In particular, we find that the pair production in a
medium is enhanced (suppressed) for scalars (spinors) due to the Bose
stimulation (Pauli blocking). Furthermore, in addition to the production of
accelerated pairs by the electric field, the annihilation of decelerated pairs
is found to take place in a medium. Our formula allows us to extract the
probability distributions in various situations, such as those obeying the
generalized trinomial statistics for spin-momentum resolved counting and the
bidirectional Poisson statistics for spin-momentum unresolved counting.
|
The Fornax dwarf spheroidal galaxy has an anomalous number of globular
clusters, five, for its stellar mass. There is a longstanding debate about a
potential sixth globular cluster (Fornax~6) that has recently been
`rediscovered' in DECam imaging. We present new Magellan/M2FS spectroscopy of
the Fornax~6 cluster and Fornax dSph. Combined with literature data, we identify
$\sim15-17$ members of the Fornax~6 cluster, confirming that this overdensity is
indeed a star cluster associated with the Fornax dSph. The cluster is significantly
more metal-rich (mean metallicity of $\overline{\rm [Fe/H]}=-0.71\pm0.05$) than
the other five Fornax globular clusters ($-2.5<[Fe/H]<-1.4$) and more
metal-rich than the bulk of Fornax. We measure a velocity dispersion of
$5.6_{-1.6}^{+2.0}\,{\rm km \, s^{-1}}$ corresponding to anomalously high
mass-to-light ratio of 15$<$M/L$<$258 at 90\% confidence when calculated assuming
equilibrium. Two stars inflate this dispersion and may be either Fornax field
stars or as-yet unresolved binary stars. Alternatively, the Fornax~6 cluster may
be undergoing tidal disruption. Based on its metal-rich nature, the Fornax 6
cluster is likely younger than the other Fornax clusters, with an estimated age
of $\sim2$ Gyr when compared to stellar isochrones. The chemodynamics and star
formation history of Fornax show imprints of major events such as infall into
the Milky Way, multiple pericenter passages, star formation bursts, and/or
potential mergers or interactions. Any of these events may have triggered the
formation of the Fornax~6 cluster.
|
Two-dimensional (2D) van der Waals (vdW) magnets provide an ideal platform
for exploring, on the fundamental side, new microscopic mechanisms and for
developing, on the technological side, ultra-compact spintronic applications.
So far, bilinear spin Hamiltonians have been commonly adopted to investigate
the magnetic properties of 2D magnets, neglecting higher order magnetic
interactions. However, we here provide quantitative evidence of giant
biquadratic exchange interactions in monolayer NiX2 (X=Cl, Br and I), by
combining first-principles calculations and the newly developed machine
learning method for constructing Hamiltonians. Interestingly, we show that the
ferromagnetic ground state within NiCl2 single layers cannot be explained by
means of bilinear Heisenberg Hamiltonian; rather, the nearest-neighbor
biquadratic interaction is found to be crucial. Furthermore, using a
three-orbital Hubbard model, we propose that the giant biquadratic exchange
interaction originates from large hopping between unoccupied and occupied
orbitals on neighboring magnetic ions. On a general framework, our work
suggests biquadratic exchange interactions to be important in 2D magnets with
edge-shared octahedra.
|
The Abstraction and Reasoning Corpus (ARC) is a challenging program induction
dataset that was recently proposed by Chollet (2019). Here, we report the first
set of results collected from a behavioral study of humans solving a subset of
tasks from ARC (40 out of 1000). Although this subset of tasks contains
considerable variation, our results showed that humans were able to infer the
underlying program and generate the correct test output for a novel test input
example, with an average of 80% of tasks solved per participant, and with 65%
of tasks being solved by more than 80% of participants. Additionally, we find
interesting patterns of behavioral consistency and variability within the
action sequences during the generation process, the natural language
descriptions to describe the transformations for each task, and the errors
people made. Our findings suggest that people can quickly and reliably
determine the relevant features and properties of a task to compose a correct
solution. Future modeling work could incorporate these findings, potentially by
connecting the natural language descriptions we collected here to the
underlying semantics of ARC.
|
This paper proposes a comparison of four popular interface capturing methods:
the volume of fluid (VOF), the standard level set (SLS), the accurate
conservative level set (ACLS) and the coupled level set and volume of fluid
(CLSVOF). All methods are embedded into a unified low-Mach framework based on a
Cartesian-grid finite-volume discretization. This framework includes a sharp
transport of the interface, a well-balanced surface tension discretization and a
consistent mass and momentum transport which allows capillary-driven
simulations with high density ratio. The comparison relies on shared metrics
for geometrical accuracy and mass and momentum conservation, which expose the
weaknesses and strengths of each method. Finally, the versatility and
capabilities of the proposed solver are demonstrated on the simulation of a 3D
head-on collision of two water droplets. Overall, all methods manage to
retrieve reasonable results for all test cases presented. VOF, CLSVOF and ACLS
tend to artificially create small spurious structures, while SLS suffers from
conservation issues in the mesh resolution limit. This study leads us to the
conclusion that CLSVOF is the most promising method for two-phase flow
simulations in our specific framework because of its inherent conservation
properties and topology accuracy.
|
In a few years, space telescopes will investigate our Galaxy to detect
evidence of life, mainly by observing rocky planets. In the last decade, the
observation of exoplanet atmospheres and the theoretical works on biosignature
gases have experienced a considerable acceleration. The most attractive
feature of the realm of exoplanets is that 40\% of M dwarfs host super-Earths
with a minimum mass between 1 and 30 Earth masses, orbital periods shorter than
50 days, and radii between those of the Earth and Neptune (1--3.8 R$_\oplus$).
Moreover, the recent finding of cyanobacteria able to use far-red (FR) light
for oxygenic photosynthesis due to the synthesis of chlorophylls $d$ and $f$,
extending in vivo light absorption up to 750 nm, suggests the possibility of
exotic photosynthesis in planets around M dwarfs. Using innovative laboratory
instrumentation, we exposed different cyanobacteria to an M dwarf star
simulated irradiation, comparing their responses to those under solar and FR
simulated lights. As expected, in FR light, only the cyanobacteria able to
synthesize chlorophyll $d$ and $f$ could grow. Surprisingly, all strains, both
able or unable to use FR light, grew and photosynthesized under the M dwarf
generated spectrum in a similar way to the solar light and much more
efficiently than under the FR one. Our findings highlight the importance of
simulating both the visible and FR light components of an M dwarf spectrum to
correctly evaluate the photosynthetic performances of oxygenic organisms
exposed to such an exotic light condition.
|
Virtual Reality (VR) has become more and more popular with dropping prices
for systems and a growing number of users. However, the issue of accessibility
in VR has been hardly addressed so far and no uniform approach or standard
exists at this time. In this position paper, we propose a customisable toolkit
implemented at the system-level and discuss the potential benefits of this
approach and challenges that will need to be overcome for a successful
implementation.
|
The task of image-based virtual try-on aims to transfer a target clothing
item onto the corresponding region of a person, which is commonly tackled by
fitting the item to the desired body part and fusing the warped item with the
person. While an increasing number of studies have been conducted, the
resolution of synthesized images is still limited to low (e.g., 256x192), which
acts as the critical limitation against satisfying online consumers. We argue
that the limitation stems from several challenges: as the resolution increases,
the artifacts in the misaligned areas between the warped clothes and the
desired clothing regions become noticeable in the final results; the
architectures used in existing methods have low performance in generating
high-quality body parts and maintaining the texture sharpness of the clothes.
To address the challenges, we propose a novel virtual try-on method called
VITON-HD that successfully synthesizes 1024x768 virtual try-on images.
Specifically, we first prepare the segmentation map to guide our virtual try-on
synthesis, and then roughly fit the target clothing item to a given person's
body. Next, we propose ALIgnment-Aware Segment (ALIAS) normalization and ALIAS
generator to handle the misaligned areas and preserve the details of 1024x768
inputs. Through rigorous comparison with existing methods, we demonstrate that
VITON-HD highly surpasses the baselines in terms of synthesized image quality
both qualitatively and quantitatively. Code is available at
https://github.com/shadow2496/VITON-HD.
|
Symmetries naturally occur in real-world networks and can significantly
influence the observed dynamics. For instance, many synchronization patterns
result from the underlying network symmetries, and high symmetries are known to
increase the stability of synchronization. Yet, here we find that general
macroscopic features of network solutions such as regularity can be induced by
breaking their symmetry of interactions. We demonstrate this effect in an
ecological multilayer network where the topological asymmetries occur
naturally. These asymmetries rescue the system from chaotic oscillations by
establishing stable periodic orbits and equilibria. We call this phenomenon
asymmetry-induced order and uncover its mechanism by analyzing both
analytically and numerically the suppression of dynamics on the system's
synchronization manifold. Moreover, the bifurcation scenario describing the
route from chaos to order is also disclosed. We demonstrate that this result
also holds for generic node dynamics by analyzing coupled paradigmatic
R\"ossler and Lorenz systems.
|
This paper describes the submission to the IWSLT 2021 offline speech
translation task by the UPC Machine Translation group. The task consists of
building a system capable of translating English audio recordings extracted
from TED talks into German text. Submitted systems can be either cascade or
end-to-end and use a custom or given segmentation. Our submission is an
end-to-end speech translation system, which combines pre-trained models
(Wav2Vec 2.0 and mBART) with coupling modules between the encoder and decoder,
and uses an efficient fine-tuning technique, which trains only 20% of its total
parameters. We show that adding an Adapter to the system and pre-training it
can increase the convergence speed and improve the final result, with which we
achieve a BLEU score of 27.3 on the MuST-C test set. Our final model is an
ensemble that obtains a BLEU score of 28.22 on the same set. Our submission also uses a
custom segmentation algorithm that employs pre-trained Wav2Vec 2.0 for
identifying periods of untranscribable text and can bring improvements of 2.5
to 3 BLEU points on the IWSLT 2019 test set, as compared to the result with the
given segmentation.
|
Three-dimensional space is said to be spherically symmetric if it admits
SO(3) as a group of isometries. Under this symmetry condition, the Einstein
field equations for vacuum yield the Schwarzschild metric as the unique
solution, which essentially is the statement of the well-known Birkhoff
theorem. Geometrically speaking, this theorem claims that the pseudo-Riemannian
space-times provide more isometries than expected from the original metric
holonomy/ansatz. In this paper we use the method of Lie symmetry analysis to
analyze the Einstein vacuum field equations so as to obtain the symmetry
generators of the corresponding differential equation. Additionally, applying
the Noether point symmetry method, we obtain the conserved quantities
corresponding to the generators of the Schwarzschild Lagrangian, paving the
way to reformulating the Birkhoff theorem from a different approach.
|
Scanning quantum dot microscopy is a recently developed high-resolution
microscopy technique that is based on atomic force microscopy and is capable of
imaging the electrostatic potential of nanostructures like molecules or single
atoms. Recently, it was shown to deliver not only qualitatively but also
quantitatively cutting-edge images, even at the atomic level. In this paper
we present how control is a key enabling element to this. The developed control
approach consists of a two-degree-of-freedom control framework that comprises a
feedforward and a feedback part. For the latter we design two tailored feedback
controllers. The feedforward part generates a reference for the current scanned
line based on the previously scanned one. We discuss in detail various aspects
of the presented control approach and its implications for scanning quantum dot
microscopy. We evaluate the influence of the feedforward part and compare the
two proposed feedback controllers. The proposed control algorithms speed up
scanning quantum dot microscopy by more than an order of magnitude and enable
the scanning of large sample areas.
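A minimal sketch of such a two-degree-of-freedom scheme, assuming a first-order plant model and a PI feedback controller (both illustrative; the paper designs two tailored controllers):

import numpy as np

def scan_line(prev_line, setpoint, kp=0.5, ki=5.0, dt=1e-3, tau=5e-3):
    """Track `setpoint` along one scan line; plant modeled as first-order lag."""
    y, integ = 0.0, 0.0
    out = np.zeros_like(setpoint)
    for k in range(len(setpoint)):
        e = setpoint[k] - y
        integ += e * dt
        u = prev_line[k] + kp * e + ki * integ   # feedforward + PI feedback
        y += dt / tau * (u - y)                  # first-order plant update
        out[k] = y
    return out

line0 = np.zeros(1000)                           # no feedforward for line 0
ref = np.sin(np.linspace(0, 2 * np.pi, 1000))
line1 = scan_line(line0, ref)    # the next line reuses line1 as feedforward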
|
Low resolution fine-grained classification has widespread applicability for
applications where data is captured at a distance such as surveillance and
mobile photography. While fine-grained classification with high resolution
images has received significant attention, limited attention has been given to
low resolution images. These images suffer from the inherent challenge of
limited information content and the absence of fine details useful for
sub-category classification. This results in low inter-class variations across
samples of visually similar classes. In order to address these challenges, this
research proposes a novel attribute-assisted loss, which utilizes ancillary
information to learn discriminative features for classification. The proposed
loss function enables a model to learn class-specific discriminative features,
while incorporating attribute-level separability. Evaluation is performed on
multiple datasets with different models, for four resolutions varying from
32x32 to 224x224. Different experiments demonstrate the efficacy of the
proposed attribute-assisted loss for low resolution fine-grained classification.
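A hedged sketch of what an attribute-assisted loss can look like, assuming an auxiliary multi-label attribute head; the paper's exact formulation may differ:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeAssistedLoss(nn.Module):
    def __init__(self, feat_dim, n_classes, n_attrs, lam=0.5):
        super().__init__()
        self.cls_head = nn.Linear(feat_dim, n_classes)
        self.attr_head = nn.Linear(feat_dim, n_attrs)
        self.lam = lam

    def forward(self, feats, labels, attrs):
        # Standard class supervision...
        loss_cls = F.cross_entropy(self.cls_head(feats), labels)
        # ...plus attribute-level separability (attributes as multi-label
        # targets in [0, 1]) to enrich the learned features.
        loss_attr = F.binary_cross_entropy_with_logits(
            self.attr_head(feats), attrs)
        return loss_cls + self.lam * loss_attr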
|
What is the power of constant-depth circuits with $MOD_m$ gates, that can
count modulo $m$? Can they efficiently compute MAJORITY and other symmetric
functions? When $m$ is a constant prime power, the answer is well understood:
Razborov and Smolensky proved in the 1980s that MAJORITY and $MOD_m$ require
super-polynomial-size $MOD_q$ circuits, where $q$ is any prime power not
dividing $m$. However, relatively little is known about the power of $MOD_m$
circuits for non-prime-power $m$. For example, it is still open whether every
problem in $EXP$ can be computed by depth-$3$ circuits of polynomial size and
only $MOD_6$ gates.
We shed some light on the difficulty of proving lower bounds for $MOD_m$
circuits, by giving new upper bounds. We construct $MOD_m$ circuits computing
symmetric functions with non-prime power $m$, with size-depth tradeoffs that
beat the longstanding lower bounds for $AC^0[m]$ circuits for prime power $m$.
Our size-depth tradeoff circuits have essentially optimal dependence on $m$ and
$d$ in the exponent, under a natural circuit complexity hypothesis.
For example, we show for every $\varepsilon > 0$ that every symmetric
function can be computed with depth-3 $MOD_m$ circuits of
$\exp(O(n^{\varepsilon}))$ size, for a constant $m$ depending only on
$\varepsilon > 0$. That is, depth-$3$ $CC^0$ circuits can compute any symmetric
function in \emph{subexponential} size. This demonstrates a significant
difference in the power of depth-$3$ $CC^0$ circuits, compared to other models:
for certain symmetric functions, depth-$3$ $AC^0$ circuits require
$2^{\Omega(\sqrt{n})}$ size [H{\aa}stad 1986], and depth-$3$ $AC^0[p^k]$
circuits (for fixed prime power $p^k$) require $2^{\Omega(n^{1/6})}$ size
[Smolensky 1987]. Even for depth-two $MOD_p \circ MOD_m$ circuits,
$2^{\Omega(n)}$ lower bounds were known [Barrington Straubing Th\'erien 1990].
|
We discuss stiffening of matter in quark-hadron continuity. We introduce a
model that relates quark wave functions in a baryon and the occupation
probability of states for baryons and quarks in dense matter. In a dilute
regime, the confined quarks contribute to the energy density through the masses
of baryons, but do not directly contribute to the pressure; hence, the
equations of state are very soft. This dilute regime continues until the low
momentum states for quarks get saturated; this may happen even before baryons
fully overlap, possibly at a density slightly above the nuclear saturation
density. After saturation, the pressure grows rapidly while changes in
energy density are modest, producing a peak in the speed of sound. If we use
baryonic descriptions for quark distributions near the Fermi surface, we reach
a description similar to the quarkyonic matter model of McLerran and Reddy.
With a simple adjustment of quark interactions to get the nucleon mass, our
model becomes consistent with the constraints from 1.4-solar mass neutron
stars, but the high density part is too soft to account for two-solar mass
neutron stars. We delineate the relation between the saturation effects and
short range interactions of quarks, suggesting interactions that leave low
density equations of state unchanged but stiffen the high density part.
|
The coherent superposition of non-orthogonal fermionic Gaussian states has
been shown to be an efficient approximation to the ground states of quantum
impurity problems [Bravyi and Gosset, Comm. Math. Phys. 356, 451 (2017)]. We
present a practical approach for performing a variational calculation based on
such states. Our method is based on approximate imaginary-time equations of
motion that decouple the dynamics of each Gaussian state forming the ansatz. It
is independent of the lattice connectivity of the model and the implementation
is highly parallelizable. To benchmark our variational method, we calculate the
spin-spin correlation function and R\'enyi entanglement entropy of an Anderson
impurity, allowing us to identify the screening cloud and compare to density
matrix renormalization group calculations. Secondly, we study the screening
cloud of the two-channel Kondo model, a problem difficult to tackle using
existing numerical tools.
|
The production of polarized proton beams with multi-GeV energies in
ultra-intense laser interaction with targets is studied with three-dimensional
Particle-In-Cell simulations. A near-critical density plasma target with
pre-polarized proton and tritium ions is considered for the proton
acceleration. The pre-polarized protons are initially accelerated by laser
radiation pressure before injection and further acceleration in a bubble-like
wakefield. The temporal dynamics of proton polarization is tracked via the
T-BMT equation, and it is found that the proton polarization state can be
altered both by the laser field and the magnetic component of the wakefield.
The dependence of the proton acceleration and polarization on the ratio of the
ion species is determined, and it is found that the protons can be efficiently
accelerated as long as their relative fraction is less than 20%, in which case
the bubble size is large enough for the protons to obtain sufficient energy to
overcome the bubble injection threshold.
|
This work presents the derivation of a model for the heating process of the
air of a glass dome, where an indoor swimming pool is located in the bottom of
the dome. The problem can be reduced from a three dimensional to a two
dimensional one. The main goal is the formulation of a proper optimization
problem for computing the optimal heating of the air after a given time. For
that, the model of the heating process as a partial differential equation is
formulated as well as the optimization problem subject to the time-dependent
partial differential equation. This yields the optimal heating of the air under
the glass dome such that the desired temperature distribution is attained after
a given time. The discrete formulation of the optimization problem and a proper
numerical method for it, the projected gradient method, are discussed. Finally,
numerical experiments are presented which show the practical performance of
the discussed optimal control problem and of its numerical solution method.
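For reference, the projected gradient iteration mentioned above, sketched on a toy box-constrained quadratic problem rather than the discretized PDE-constrained problem of the paper:

import numpy as np

def projected_gradient(grad, project, u0, step=0.1, iters=200):
    u = project(u0)
    for _ in range(iters):
        u = project(u - step * grad(u))   # gradient step, then projection
    return u

# Example: minimize 0.5*||u - d||^2 subject to lo <= u <= hi.
d, lo, hi = np.array([2.0, -3.0, 0.5]), -1.0, 1.0
u_opt = projected_gradient(lambda u: u - d,
                           lambda u: np.clip(u, lo, hi),
                           np.zeros(3))
print(u_opt)   # -> [ 1.  -1.   0.5]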
|
Consisting of a massive black hole and a stellar-mass compact object,
Extreme Mass Ratio Inspiral (EMRI) events hold a unique opportunity for the study
of massive black holes, such as by measuring and checking the relations among
the mass, spin and quadrupole moment of a massive black hole, putting the
no-hair theorem to test. TianQin is a planned space-based gravitational wave
observatory and EMRI is one of its main types of sources. It is important to
estimate the capacity of TianQin for testing the no-hair theorem with EMRIs. In
this work, we use the analytic kludge waveform with quadrupole moment
corrections and study how the quadrupole moment can be constrained with
TianQin. We find that TianQin can measure the dimensionless quadrupole moment
parameter with accuracy to the level of $10^{-5}$ under suitable scenarios. The
choice of the waveform cutoff is found to have significant effect on the
result: if the Schwarzschild cutoff is used, the accuracy depends strongly on
the mass of the massive black hole, while the spin has negligible impact; if
the Kerr cutoff is used, however, the dependence on the spin is more
significant. We have also analyzed the cases when TianQin is observing
simultaneously with other detectors such as LISA.
|
In this paper, we investigate the Dirac equation with the Killingbeck
potential under an external magnetic field in non-commutative space. The
corresponding expressions for the energy levels and wave functions in the
spin-symmetry and pseudo-spin-symmetry limits are derived by using the Bethe
ansatz method. The parameter $B$ associated with the external magnetic field
and the non-commutative parameter $\theta$ modify the energy levels of the
considered systems.
|
A 360° perception of scene geometry is essential for automated driving,
notably for parking and urban driving scenarios. Typically, it is achieved
using surround-view fisheye cameras, focusing on the near-field area around the
vehicle. The majority of current depth estimation approaches focus on employing
just a single camera, which cannot be straightforwardly generalized to multiple
cameras. The depth estimation model must be tested on a variety of cameras
fitted to millions of cars with varying camera geometries. Even within a
single car, intrinsics vary due to manufacturing tolerances. Deep learning
models are sensitive to these changes, and it is practically infeasible to
train and test on each camera variant. As a result, we present novel
camera-geometry adaptive multi-scale convolutions which utilize the camera
parameters as a conditional input, enabling the model to generalize to
previously unseen fisheye cameras. Additionally, we improve the distance
estimation by pairwise and patchwise vector-based self-attention encoder
networks. We evaluate our approach on the Fisheye WoodScape surround-view
dataset, significantly improving over previous approaches. We also show a
generalization of our approach across different camera viewing angles and
perform extensive experiments to support our contributions. To enable
comparison with other approaches, we evaluate the front camera data on the
KITTI dataset (pinhole camera images) and achieve state-of-the-art performance
among self-supervised monocular methods. An overview video with qualitative
results is provided at https://youtu.be/bmX0UcU9wtA. Baseline code and dataset
will be made public.
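One simple way to realize camera-parameter conditioning, sketched below: intrinsics are broadcast to constant feature planes and concatenated with the input of a convolution. The paper's camera-geometry adaptive multi-scale convolutions are more elaborate; this only conveys the conditioning idea.

import torch
import torch.nn as nn

class CameraConditionedConv(nn.Module):
    def __init__(self, in_ch, out_ch, n_cam_params):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + n_cam_params, out_ch, 3, padding=1)

    def forward(self, x, cam_params):
        b, _, h, w = x.shape
        # (B, P) -> (B, P, H, W): each camera parameter becomes a constant plane.
        cam_maps = cam_params[:, :, None, None].expand(b, -1, h, w)
        return self.conv(torch.cat([x, cam_maps], dim=1))

# feats = CameraConditionedConv(64, 64, n_cam_params=6)(x, intrinsics)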
|
We report on the data set, data handling, and detailed analysis techniques of
the first neutrino-mass measurement by the Karlsruhe Tritium Neutrino (KATRIN)
experiment, which probes the absolute neutrino-mass scale via the $\beta$-decay
kinematics of molecular tritium. The source is highly pure, cryogenic T$_2$
gas. The $\beta$ electrons are guided along magnetic field lines toward a
high-resolution, integrating spectrometer for energy analysis. A silicon
detector counts $\beta$ electrons above the energy threshold of the
spectrometer, so that a scan of the thresholds produces a precise measurement
of the high-energy spectral tail. After detailed theoretical studies,
simulations, and commissioning measurements, extending from the molecular
final-state distribution to inelastic scattering in the source to subtleties of
the electromagnetic fields, our independent, blind analyses allow us to set an
upper limit of 1.1 eV on the neutrino-mass scale at a 90\% confidence level.
This first result, based on a few weeks of running at a reduced source
intensity and dominated by statistical uncertainty, improves on prior limits by
nearly a factor of two. This result establishes an analysis framework for
future KATRIN measurements, and provides important input to both particle
theory and cosmology.
|
Formed by using laser inter-satellite links (LISLs) among satellites in
upcoming low Earth orbit and very low Earth orbit satellite constellations,
optical wireless satellite networks (OWSNs), also known as free-space optical
satellite networks, can provide a better alternative to existing optical fiber
terrestrial networks (OFTNs) for long-distance inter-continental data
communications. The LISLs operate at the speed of light in vacuum in space,
which gives OWSNs a crucial advantage over OFTNs in terms of latency. In this
paper, we employ the satellite constellation for Phase I of Starlink and LISLs
between satellites to simulate an OWSN. Then, we compare the network latency of
this OWSN and the OFTN under three different scenarios for long-distance
inter-continental data communications. The results show that the OWSN performs
better than the OFTN in all scenarios. It is observed that the longer the
length of the inter-continental connection between the source and the
destination, the better the latency improvement offered by the OWSN compared to
the OFTN.
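A back-of-the-envelope check of the propagation-delay advantage, assuming a typical fiber refractive index of about 1.47 and ignoring routing, switching, and hop geometry, which the paper's simulations model:

C = 299_792.458          # speed of light in vacuum, km/s
N_FIBER = 1.47           # typical refractive index of optical fiber

def one_way_latency_ms(distance_km, in_fiber):
    v = C / N_FIBER if in_fiber else C
    return distance_km / v * 1e3

d = 12_000  # an illustrative long inter-continental path, km
print(f"OWSN (vacuum): {one_way_latency_ms(d, False):.1f} ms")   # ~40.0 ms
print(f"OFTN (fiber) : {one_way_latency_ms(d, True):.1f} ms")    # ~58.8 ms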
|
We propose a useful integral representation of the quenched free energy which
is applicable to any random systems. Our formula involves the generating
function of multi-boundary correlators, which can be interpreted on the bulk
gravity side as spacetime D-branes introduced by Marolf and Maxfield in
[arXiv:2002.08950]. As an example, we apply our formalism to the Airy limit of
the random matrix model and compute its quenched free energy under certain
approximations of the generating function of correlators. It turns out that the
resulting quenched free energy is a monotonically decreasing function of the
temperature, as expected.
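For orientation, one standard integral representation consistent with this description (our gloss; normalizations in the paper may differ) is $F = -T\,\langle \log Z\rangle$ with the Frullani-type identity $\log Z = \int_0^\infty \frac{dt}{t}\,\big(e^{-t} - e^{-tZ}\big)$, so that $\langle \log Z\rangle = \int_0^\infty \frac{dt}{t}\,\big(e^{-t} - \langle e^{-tZ}\rangle\big)$, where $\langle e^{-tZ}\rangle$ is the generating function of multi-boundary correlators.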
|
Let $\phi:X\rightarrow \mathbb{P}^n$ be a morphism of varieties. Given a
hyperplane $H$ in $\mathbb{P}^n$, there is a Gysin map from the compactly
supported cohomology of $\phi^{-1}(H)$ to that of $X$. We give conditions on
the degree of the cohomology under which this map is an isomorphism for all but
a low-dimensional set of hyperplanes, generalizing results due to Skorobogatov,
Benoist, and Poonen-Slavov. Our argument is based on Beilinson's theory of
singular supports for \'etale sheaves.
|
We present FedScale, a diverse set of challenging and realistic benchmark
datasets to facilitate scalable, comprehensive, and reproducible federated
learning (FL) research. FedScale datasets are large-scale, encompassing a
diverse range of important FL tasks, such as image classification, object
detection, word prediction, and speech recognition. For each dataset, we
provide a unified evaluation protocol using realistic data splits and
evaluation metrics. To meet the pressing need for reproducing realistic FL at
scale, we have also built an efficient evaluation platform to simplify and
standardize the process of FL experimental setup and model evaluation. Our
evaluation platform provides flexible APIs to implement new FL algorithms and
includes new execution backends with minimal developer effort. Finally, we
perform in-depth benchmark experiments on these datasets. Our experiments
suggest fruitful opportunities in heterogeneity-aware co-optimizations of the
system and statistical efficiency under realistic FL characteristics. FedScale
is open-source with permissive licenses and actively maintained, and we welcome
feedback and contributions from the community.
|
State-of-the-art motor vehicles are able to brake for pedestrians in an
emergency. We investigate what it would take to issue an early warning to the
driver so he/she has time to react. We have identified that predicting the
intention of a pedestrian reliably by position is a particularly hard
challenge. This paper describes an early pedestrian warning demonstration
system.
|
This paper concerns the verification of continuous-time polynomial spline
trajectories against linear temporal logic specifications (LTL without 'next').
Each atomic proposition is assumed to represent a state space region described
by a multivariate polynomial inequality. The proposed approach samples a
trajectory strategically, to capture every one of its region transitions. This
yields a discrete word called a trace, which is amenable to established formal
methods for path checking. The original continuous-time trajectory is shown to
satisfy the specification if and only if its trace does. General topological
conditions on the sample points are derived that ensure a trace is recorded for
arbitrary continuous paths, given arbitrary region descriptions. Using
techniques from computer algebra, a trace generation algorithm is developed to
satisfy these conditions when the path and region boundaries are defined by
polynomials. The proposed PolyTrace algorithm has polynomial complexity in the
number of atomic propositions, and is guaranteed to produce a trace of any
polynomial path. Its performance is demonstrated via numerical examples and a
case study from robotics.
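To see why strategic sampling suffices, consider a one-dimensional toy case (not the PolyTrace algorithm itself): for a polynomial path and a polynomial region boundary, the transition times are real roots of a composed polynomial, and sampling strictly between consecutive roots records every transition:

import numpy as np

# Path x(t) = (t - 1)^2 on t in [0, 3]; region {g(x) = 1 - x >= 0}.
path = np.polynomial.Polynomial([1.0, -2.0, 1.0])   # (t - 1)^2
g_of_path = 1.0 - path                  # g(x(t)) as a polynomial in t

crossings = sorted(float(r.real) for r in g_of_path.roots()
                   if abs(r.imag) < 1e-12 and 0 < r.real < 3)
pts = [0.0] + crossings + [3.0]
samples = [(a + b) / 2 for a, b in zip(pts[:-1], pts[1:])]  # between crossings
trace = [g_of_path(t) >= 0 for t in samples]
print(crossings, trace)   # crossing at t = 2.0; trace: inside, then outside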
|
As a part of science of science (SciSci) research, the evolution of
scientific disciplines has been attracting a great deal of attention recently.
This kind of discipline level analysis not only give insights of one particular
field but also shed light on general principles of scientific enterprise. In
this paper we focus on graphene research, a fast growing field covers both
theoretical and applied study. Using co-clustering method, we split graphene
literature into two groups and confirm that one group is about theoretical
research (T) and another corresponds to applied research (A). We analyze the
proportion of T/A and found applied research becomes more and more popular
after 2007. Geographical analysis demonstrated that countries have different
preference in terms of T/A and they reacted differently to research trend. The
interaction between two groups has been analyzed and shows that T extremely
relies on T and A heavily relies on A, however the situation is very stable for
T but changed markedly for A. No geographic difference is found for the
interaction dynamics. Our results give a comprehensive picture of graphene
research evolution and also provide a general framework which is able to
analyze other disciplines.
|
Free-space optical communication is emerging as a low-power, low-cost, and
high data rate alternative to radio-frequency communication in short-to
medium-range applications. However, it requires a close-to-line-of-sight link
between the transmitter and the receiver. This paper proposes a robust $\cHi$
control law for free-space optical (FSO) beam pointing error systems under
controlled weak turbulence conditions. The objective is to maintain the
transmitter-receiver line, which means the center of the optical beam as close
as possible to the center of the receiving aperture within a prescribed
disturbance attenuation level. First, we derive an augmented nonlinear
discrete-time model for pointing error loss due to misalignment caused by weak
atmospheric turbulence. We then investigate the $\cHi$-norm optimization
problem that guarantees the closed-loop pointing error is stable and ensures
the prescribed weak disturbance attenuation. Furthermore, we evaluate the
closed-loop outage probability error and bit error rate (BER) that quantify the
free-space optical communication performance in fading channels. Finally, the
paper concludes with a numerical simulation of the proposed approach to the FSO
link's error performance.
|
Fake news has now grown into a big problem for societies and also a major
challenge for people fighting disinformation. This phenomenon plagues
democratic elections, reputations of individual persons or organizations, and
has negatively impacted citizens (e.g., during the COVID-19 pandemic in the US
or Brazil). Hence, developing effective tools to fight this phenomenon by
employing advanced Machine Learning (ML) methods poses a significant challenge.
The following paper surveys the present body of knowledge on the application
of such intelligent tools in the fight against disinformation. It starts by
showing the historical perspective and the current role of fake news in the
information war. Proposed solutions based solely on the work of experts are
analysed and the most important directions of the application of intelligent
systems in the detection of misinformation sources are pointed out.
Additionally, the paper presents some useful resources (mainly datasets useful
when assessing ML solutions for fake news detection) and provides a short
overview of the most important R&D projects related to this subject. The main
purpose of this work is to analyse the current state of knowledge in detecting
fake news; on the one hand to show possible solutions, and on the other hand to
identify the main challenges and methodological gaps to motivate future
research.
|
We study adaptive methods for differentially private convex optimization,
proposing and analyzing differentially private variants of a Stochastic
Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the
AdaGrad algorithm. We provide upper bounds on the regret of both algorithms and
show that the bounds are (worst-case) optimal. As a consequence of our
development, we show that our private versions of AdaGrad outperform adaptive
SGD, which in turn outperforms traditional SGD in scenarios with non-isotropic
gradients where (non-private) AdaGrad provably outperforms SGD. The major
challenge is that the isotropic noise typically added for privacy dominates the
signal in gradient geometry for high-dimensional problems; approaches to this
that effectively optimize over lower-dimensional subspaces simply ignore the
actual problems that varying gradient geometries introduce. In contrast, we
study non-isotropic clipping and noise addition, developing a principled
theoretical approach; the consequent procedures also enjoy significantly
stronger empirical performance than prior approaches.
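A stylized sketch combining the ingredients discussed above (per-example clipping, Gaussian noise, AdaGrad-style per-coordinate steps); the calibration of the noise to a formal $(\varepsilon,\delta)$ guarantee and the paper's non-isotropic clipping and noise shapes are omitted:

import numpy as np

rng = np.random.default_rng(0)

def private_adagrad(grad_fn, w, data, clip=1.0, sigma=0.5, lr=0.5, eps=1e-8):
    accum = np.zeros_like(w)
    for x in data:
        g = grad_fn(w, x)
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip per example
        g = g + sigma * clip * rng.normal(size=g.shape)       # privatize
        accum += g * g                                        # AdaGrad statistic
        w = w - lr * g / (np.sqrt(accum) + eps)               # adaptive step
    return w

# Toy example: private least squares on pairs (a, b), gradient (w.a - b) a.
data = [(np.array([1.0, 2.0]), 3.0), (np.array([2.0, 1.0]), 3.0)] * 50
w = private_adagrad(lambda w, d: (w @ d[0] - d[1]) * d[0], np.zeros(2), data)
print(w)   # noisy estimate near [1, 1]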
|
In this paper we show that wormholes in (2+1) dimensions (3-D) cannot be
sourced solely by both Casimir energy and tension, differently from what
happens in a 4-D scenario, in which case it has been shown recently, by the
direct computation of the exact shape and redshift functions of a wormhole
solution, that this is possible. We show that in a 3-D spacetime the same is
not true since the arising of at least an event horizon is inevitable. We do
the analysis for massive and massless fermions, as well as for scalar fields,
considering quasi-periodic boundary conditions and find that a possibility to
circumvent such a restriction is to introduce, besides the 3-D Casimir energy
density and tension, a cosmological constant, embedding the surface in a 4-D
manifold and applying a perpendicular weak magnetic field. This causes an
additional tension on it, which contributes to the formation of the wormhole.
Finally, we discuss the possibility of producing the condensed-matter analogue
of this wormhole in a graphene sheet and analyze the electronic transport
through it.
|
We propose the Automatic-differentiated Physics-Informed Echo State Network
(API-ESN). The network is constrained by the physical equations through the
reservoir's exact time-derivative, which is computed by automatic
differentiation. As compared to the original Physics-Informed Echo State
Network, the accuracy of the time-derivative is increased by up to seven orders
of magnitude. This increased accuracy is key in chaotic dynamical systems,
where errors grow exponentially in time. The network is showcased in the
reconstruction of unmeasured (hidden) states of a chaotic system. The API-ESN
eliminates a source of error, which is present in existing physics-informed
echo state networks, in the computation of the time-derivative. This opens up
new possibilities for an accurate reconstruction of chaotic dynamical states.
|
The aerodynamic performance of the high-lift configuration greatly influences
the safety and economy of commercial aircraft. Accurately predicting the
aerodynamic performance of the high-lift configuration, especially the stall
behavior, is important for aircraft design. However, the complex flow phenomena
of high-lift configurations pose substantial difficulties to current turbulence
models. In this paper, a three-equation $k$-$v^2$-$\omega$ turbulence model for
the Reynolds-averaged Navier-Stokes equations is used to compute the stall
behavior of high-lift configurations. A separated shear layer fixed function is
implemented in the turbulence model to better capture the nonequilibrium
characteristics of turbulence. Different high-lift configurations, including
the two-dimensional multielement NLR7301 and Omar airfoils and a complex
full-configuration model (JAXA Standard Model), are numerically tested. The
results indicate that the effect of the nonequilibrium characteristics of
turbulence is significant in the free shear layer, which is key to accurately
predicting the stall behavior of high-lift devices. The modified SPF
$k$-$v^2$-$\omega$ model is more accurate in predicting stall behavior than the
Spalart-Allmaras, shear stress transport, and original $k$-$v^2$-$\omega$ models
for the full high-lift configuration. The relative errors in the predicted
maximum lift coefficients are within 3% of the experimental data.
|
Aberration-corrected scanning transmission electron microscopy (AC-STEM) can provide
valuable information on the atomic structure of nanoclusters, an essential
input for gaining an understanding of their physical and chemical properties. A
systematic method is presented here for the extraction of atom coordinates from
an AC-STEM image in a way that is general enough to be applicable to irregular
structures. The two-dimensional information from the image is complemented with
an approximate description of the atomic interactions so as to construct a
three-dimensional structure and, at a final stage, the structure is refined
using electron density functional theory (DFT) calculations. The method is
applied to an AC-STEM image of Au55. Analysis of the local structure shows that
the cluster is a combination of a part with icosahedral structure elements and
a part with local atomic arrangement characteristic of crystal packing,
including a segment of a flat surface facet. The energy landscape of the
cluster is explored in calculations of minimum energy paths between the optimal
fit structure and other candidates generated in the analysis. This reveals low
energy barriers for conformational changes, showing that such transitions can
occur on laboratory timescales even at room temperature and lead to large
changes in the AC-STEM image. The paths furthermore reveal additional cluster
configurations, some with lower DFT energy and providing nearly as good fit to
the experimental image.
|
We present a method for exploring analogue Hawking radiation using a laser
pulse propagating through an underdense plasma. The propagating fields in the
Hawking effect are local perturbations of the plasma density and laser
amplitude. We derive the dependence of the resulting Hawking temperature on the
dimensionless amplitude of the laser and the behaviour of the spot area of the
laser at the analogue event horizon. We demonstrate one possible way of
obtaining the analogue Hawking temperature in terms of the plasma wavelength,
and our analysis shows that for a high intensity near-IR laser the analogue
Hawking temperature is less than approximately 25 K for a reasonable choice of
parameters.
|
Ionic polymer-metal composites consist of a thin film of electro-active
polymers (e.g., Nafion) sandwiched between two metallic electrodes.
They can be used as sensors or actuators. The polymer is saturated with water,
which causes a complete dissociation and the release of small cations. The
strip undergoes large bending motions when it is submitted to an orthogonal
electric field and vice versa. We used a continuous medium approach and a
coarse grain model; the system is depicted as a deformable porous medium in
which flows an ionic solution. We write microscale balance laws and
thermodynamic relations for each phase, then for the complete material using an
average technique. Entropy production and then constitutive equations are
deduced: a Kelvin-Voigt stress-strain relation, generalized Fourier's and
Darcy's laws, and a Nernst-Planck equation. We applied this model to a cantilever E.A.P.
strip undergoing a continuous potential difference (static case); a shear force
may be applied to the free end to prevent its displacement. Applied forces and
deflection are calculated using a beam model in large displacements. The
results obtained are in good agreement with the experimental data published in
the literature.
|
Learning representation for source code is a foundation of many program
analysis tasks. In recent years, neural networks have already shown success in
this area, but most existing models did not make full use of the unique
structural information of programs. Although abstract syntax tree-based neural
models can handle the tree structure in the source code, they cannot capture
the richness of different types of substructure in programs. In this paper, we
propose a modular tree network (MTN) which dynamically composes different
neural network units into tree structures based on the input abstract syntax
tree. Different from previous tree-structural neural network models, MTN can
capture the semantic differences between types of AST substructures. We evaluate
our model on two tasks: program classification and code clone detection.
Our model achieves the best performance compared with state-of-the-art
approaches in both tasks, showing the advantage of leveraging more elaborate
structure information of the source code.
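A toy sketch of the dispatch-by-node-type idea (not the MTN architecture itself; the node types, unit form, and child aggregation below are illustrative placeholders):

import numpy as np

rng = np.random.default_rng(0)
DIM = 8
MODULES = {t: (rng.normal(size=(DIM, DIM)) / np.sqrt(DIM), rng.normal(size=DIM))
           for t in ("If", "Loop", "Call", "Leaf")}   # one unit per node type

def encode(node):
    """node = (type, [children]); returns a DIM-dimensional embedding."""
    ntype, children = node
    kids = (np.mean([encode(c) for c in children], axis=0)
            if children else np.zeros(DIM))
    W, b = MODULES[ntype]          # dynamic dispatch on the AST node type
    return np.tanh(W @ kids + b)

tree = ("If", [("Call", [("Leaf", [])]), ("Loop", [("Leaf", [])])])
print(encode(tree).shape)   # (8,)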
|
Motivated by the recent surge of criminal activities with
cross-cryptocurrency trades, we introduce a new topological perspective to
structural anomaly detection in dynamic multilayer networks. We postulate that
anomalies in the underlying blockchain transaction graph that are composed of
multiple layers are likely to also be manifested in anomalous patterns of the
network shape properties. As such, we invoke the machinery of clique persistent
homology on graphs to systematically and efficiently track evolution of the
network shape and, as a result, to detect changes in the underlying network
topology and geometry. We develop a new persistence summary for multilayer
networks, called stacked persistence diagram, and prove its stability under
input data perturbations. We validate our new topological anomaly detection
framework in application to dynamic multilayer networks from the Ethereum
Blockchain and the Ripple Credit Network, and demonstrate that our stacked PD
approach substantially outperforms state-of-the-art techniques.
|
Consider a deterministically growing surface of any dimension, where the
growth at a point is an arbitrary nonlinear function of the heights at that
point and its neighboring points. Assuming that this nonlinear function is
monotone, invariant under the symmetries of the lattice, equivariant under
constant shifts, and twice continuously differentiable, it is shown that any
such growing surface approaches a solution of the deterministic KPZ equation in
a suitable space-time scaling limit.
|
Extracting the interaction rules of biological agents from movement sequences
poses challenges in various domains. Granger causality is a practical framework
for analyzing the interactions from observed time-series data; however, this
framework ignores the structures and assumptions of the generative process in
animal behaviors, which may lead to interpretational problems and sometimes
erroneous assessments of causality. In this paper, we propose a new framework
for learning Granger causality from multi-animal trajectories via augmented
theory-based behavioral models with interpretable data-driven models. We adopt
an approach for augmenting incomplete multi-agent behavioral models described
by time-varying dynamical systems with neural networks. For efficient and
interpretable learning, our model leverages theory-based architectures
separating navigation and motion processes, and the theory-guided
regularization for reliable behavioral modeling. This can provide interpretable
signs of Granger-causal effects over time, i.e., when specific others cause the
approach or separation. In experiments using synthetic datasets, our method
achieved better performance than various baselines. We then analyzed
multi-animal datasets of mice, flies, birds, and bats, which verified our
method and obtained novel biological insights.
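For contrast with the theory-based approach, a classical linear Granger-style check on two trajectories, assuming a lag-1 autoregressive model (purely illustrative):

import numpy as np

def granger_gain(xi, xj, lag=1):
    """Does xj's past improve prediction of xi beyond xi's own past?"""
    y = xi[lag:]
    own = np.column_stack([xi[lag - k - 1: len(xi) - k - 1] for k in range(lag)])
    both = np.column_stack(
        [own] + [xj[lag - k - 1: len(xj) - k - 1] for k in range(lag)])
    r_own = y - own @ np.linalg.lstsq(own, y, rcond=None)[0]
    r_both = y - both @ np.linalg.lstsq(both, y, rcond=None)[0]
    return np.log(np.var(r_own) / np.var(r_both))   # > 0 suggests causality

rng = np.random.default_rng(0)
xj = rng.normal(size=500)
xi = np.roll(xj, 1) * 0.8 + rng.normal(scale=0.1, size=500)  # j drives i
print(granger_gain(xi, xj))   # clearly positive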
|
While attention-based encoder-decoder (AED) models have been successfully
extended to the online variants for streaming automatic speech recognition
(ASR), such as monotonic chunkwise attention (MoChA), the models still have a
large label emission latency because of the unconstrained end-to-end training
objective. Previous works tackled this problem by leveraging alignment
information to control the timing to emit tokens during training. In this work,
we propose a simple alignment-free regularization method, StableEmit, to
encourage MoChA to emit tokens earlier. StableEmit discounts the selection
probabilities in hard monotonic attention for token boundary detection by a
constant factor and regularizes them to recover the total attention mass during
training. As a result, the scale of the selection probabilities is increased,
and the values can reach a threshold for token emission earlier, leading to a
reduction of emission latency and deletion errors. Moreover, StableEmit can be
combined with methods that constrain alignments to further improve the
accuracy and latency. Experimental evaluations with LSTM and Conformer encoders
demonstrate that StableEmit significantly reduces the recognition errors and
the emission latency simultaneously. We also show that the use of alignment
information is complementary in both metrics.
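Schematically, on our reading of the abstract (the paper's exact formulation may differ), the selection probabilities $p_{i,j}$ of hard monotonic attention are discounted as $\tilde p_{i,j} = \lambda\, p_{i,j}$ with a constant $\lambda \in (0,1)$, while a regularizer of the form $\lambda_{\mathrm{reg}} \sum_i \big(1 - \sum_j \alpha_{i,j}\big)^2$ pushes the total attention mass back toward one; the two effects together raise the learned scale of the selection probabilities so the emission threshold is reached earlier.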
|
This paper describes a test and case study of the self-evaluation of online
courses during the pandemic. Due to Covid-19, the whole world went into
lockdown for different periods, and much had to change across all kinds of
business, including countries' education sectors. To sustain educational
development, teaching methods had to switch from traditional face-to-face
teaching to online courses. The government made decisions in a short time, and
educational institutions had no time to prepare materials for online teaching.
All courses of the Mongolian University of Pharmaceutical Sciences switched to
online lessons, which raised challenges for professors and tutors. Our
university did not have a specific learning management system for online
teaching and e-learning, so professors used different platforms such as Zoom
and Microsoft Teams. Moreover, different social networking platforms played an
active role in communication between students and professors. The situation
was very difficult for professors and students. To measure the quality of
online courses and to identify the strong and weak points of online teaching,
we need an evaluation of e-learning. The focus of
this paper is to share the evaluation process of e-learning based on a
structure-oriented evaluation model.
|
This report contains the description of two novel job shop scheduling
benchmarks that resemble instances of real scheduling problems as they appear in
industry. In particular, the aim was to provide large-scale benchmarks (up to 1
million operations) to test the state-of-the-art scheduling solutions on
problems that are closer to what occurs in a real industrial context. The first
benchmark is an extension of the well known Taillard benchmark (1992), while
the second is a collection of scheduling instances with a known-optimum
solution.
|
Sentiment analysis can provide a suitable lead for the tools used in software
engineering along with the API recommendation systems and relevant libraries to
be used. In this context, the existing tools like SentiCR, SentiStrength-SE,
etc. exhibited low f1-scores that completely defeats the purpose of deployment
of such strategies, thereby there is enough scope for performance improvement.
Recent advancements show that transformer based pre-trained models (e.g., BERT,
RoBERTa, ALBERT, etc.) have displayed better results in the text classification
task. Following this context, the present research explores different
BERT-based models to analyze the sentences in GitHub comments, Jira comments,
and Stack Overflow posts. The paper presents three different strategies to
analyse BERT-based models for sentiment analysis, where in the first strategy
the BERT based pre-trained models are fine-tuned; in the second strategy an
ensemble model is developed from BERT variants, and in the third strategy a
compressed model (DistilBERT) is used. The experimental results show that the
BERT based ensemble approach and the compressed BERT model attain improvements
by 6-12% over prevailing tools for the F1 measure on all three datasets.
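A generic sketch of the first (fine-tuning) strategy with the Hugging Face Transformers API; the model name, label scheme, and toy data are illustrative, not the paper's setup:

from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

name = "distilbert-base-uncased"   # swap in BERT/RoBERTa/ALBERT variants
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)

texts = ["This API is a joy to use.", "This build is broken again."]
labels = [2, 0]                    # e.g., 0=negative, 1=neutral, 2=positive
enc = tok(texts, truncation=True, padding=True)
ds = [{"input_ids": i, "attention_mask": m, "labels": l}
      for i, m, l in zip(enc["input_ids"], enc["attention_mask"], labels)]

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="out", num_train_epochs=1),
                  train_dataset=ds)
trainer.train()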
|
Neural networks notoriously suffer from the problem of catastrophic
forgetting, the phenomenon of forgetting the past knowledge when acquiring new
knowledge. Overcoming catastrophic forgetting is of significant importance to
emulate the process of "incremental learning", where the model is capable of
learning from sequential experience in an efficient and robust way.
State-of-the-art techniques for incremental learning make use of knowledge
distillation towards preventing catastrophic forgetting. Therein, one updates
the network while ensuring that the network's responses to previously seen
concepts remain stable throughout updates. This in practice is done by
minimizing the dissimilarity between current and previous responses of the
network one way or another. Our work contributes a novel method to the arsenal
of distillation techniques. In contrast to the previous state of the art, we
propose to first construct low-dimensional manifolds for previous and current
responses and minimize the dissimilarity between the responses along the
geodesic connecting the manifolds. This induces a more formidable knowledge
distillation with smooth properties which preserves the past knowledge more
efficiently as observed by our comprehensive empirical study.
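For reference, the baseline response-matching distillation the paragraph alludes to, sketched as the usual cross-entropy-plus-KL objective (the paper's contribution replaces this naive dissimilarity with a geodesic distance between response manifolds):

import torch
import torch.nn.functional as F

def incremental_loss(new_logits, old_logits, labels, T=2.0, alpha=0.5):
    ce = F.cross_entropy(new_logits, labels)            # learn new concepts
    kd = F.kl_div(F.log_softmax(new_logits / T, dim=1), # stay close to the
                  F.softmax(old_logits / T, dim=1),     # frozen old network
                  reduction="batchmean") * T * T
    return alpha * ce + (1 - alpha) * kd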
|
One of the problems of conventional visual quality evaluation criteria such
as PSNR and MSE is the lack of appropriate standards based on the human visual
system (HVS). They are calculated based on the difference of the corresponding
pixels in the original and manipulated image. Hence, they practically do not
provide a correct understanding of the image quality. Watermarking is an image
processing application in which the image's visual quality is an essential
criterion for its evaluation. Watermarking requires a criterion based on the
HVS that provides more accurate values than conventional measures such as PSNR.
This paper proposes a weighted fuzzy-based criterion that tries to find
essential parts of an image based on the HVS. Then these parts will have larger
weights in computing the final value of PSNR. We compare our results against
the standard PSNR, and our experiments show considerable improvements.
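A minimal sketch of a weighted PSNR, with the HVS-derived weight map left as an input (the paper's fuzzy weighting scheme is not reproduced here):

import numpy as np

def weighted_psnr(orig, distorted, weights, peak=255.0):
    """PSNR where each pixel's squared error is scaled by its HVS weight."""
    w = weights / weights.sum()
    wmse = np.sum(w * (orig.astype(float) - distorted.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / wmse)

img = np.full((8, 8), 128.0)
noisy = img + np.random.default_rng(0).normal(0, 2, img.shape)
flat = np.ones_like(img)        # uniform weights reduce to ordinary PSNR
print(weighted_psnr(img, noisy, flat))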
|
Hermite best approximation vectors of a real number $\theta$ were introduced
by Lagarias. A nonzero vector $(p, q) \in \mathbb{Z} \times \mathbb{N}$ is a
Hermite best approximation vector of $\theta$ if there exists $\Delta > 0$
such that $(p - q\theta)^2 + q^2/\Delta \le (a - b\theta)^2 + b^2/\Delta$ for
all nonzero $(a, b) \in \mathbb{Z}^2$. Hermite observed that if $q > 0$ then
the fraction $p/q$ must be a convergent of the continued fraction expansion of
$\theta$, and Lagarias pointed out that some convergents are not associated
with a Hermite best approximation vector. In this note we show that the almost
sure proportion of Hermite best approximation vectors among convergents is
$\ln 3/\ln 4$. The main tool of the proof is the natural extension of the
Gauss map $x \in\, ]0, 1[\, \mapsto \{1/x\}$.
|
To make off-screen interaction without specialized hardware practical, we
investigate using deep learning methods to process the common built-in IMU
sensor (accelerometers and gyroscopes) on mobile phones into a useful set of
one-handed interaction events. We present the design, training, implementation
and applications of TapNet, a multi-task network that detects tapping on the
smartphone. With phone form factor as auxiliary information, TapNet can jointly
learn from data across devices and simultaneously recognize multiple tap
properties, including tap direction and tap location. We developed two datasets
consisting of over 135K training samples, 38K testing samples, and 32
participants in total. Experimental evaluation demonstrated the effectiveness
of the TapNet design and its significant improvement over the state of the art.
Along with the datasets
(https://sites.google.com/site/michaelxlhuang/datasets/tapnet-dataset) and
extensive experiments, TapNet establishes a new technical foundation for
off-screen mobile input.
|
In this report we aim to introduce a global measure of
non-classicality of the state space of $N$-level quantum systems and estimating
it in the limit of large $N$. For this purpose we employ the Wigner function
negativity as a non-classicality criterion. Thus, the specific volume of the
support of negative values of the Wigner function is treated as a measure of
non-classicality of an individual state. Assuming that the states of an
$N$-level quantum system are distributed by Hilbert-Schmidt measure
(Hilbert-Schmidt ensemble), we define the global measure as the average
non-classicality of the individual states over the Hilbert-Schmidt ensemble. We
present the numerical estimate of this quantity as a result of random
generation of states, and prove a proposition claiming its exact value in the
limit of $N\to \infty$.
|
While language identification is a fundamental speech and language processing
task, for many languages and language families it remains a challenging task.
For many low-resource and endangered languages this is in part due to resource
availability: where larger datasets exist, they may be single-speaker or have
different domains than desired application scenarios, demanding a need for
domain and speaker-invariant language identification systems. This year's
shared task on robust spoken language identification sought to investigate just
this scenario: systems were to be trained on largely single-speaker speech from
one domain, but evaluated on data in other domains recorded from speakers under
different recording circumstances, mimicking realistic low-resource scenarios.
We see that domain and speaker mismatch proves very challenging for current
methods, which can perform above 95% accuracy in-domain; domain adaptation can
address this to some degree, but these conditions merit further investigation
to make spoken language identification accessible in many scenarios.
|
We establish an uncountable amenable ergodic Roth theorem, in which the
acting group is not assumed to be countable and the space need not be
separable. This extends a previous result of Bergelson, McCutcheon and Zhang.
Using this uncountable Roth theorem, we establish the following two additional
results.
(i) We establish a combinatorial application about triangular patterns in
certain subsets of the Cartesian square of arbitrary amenable groups, extending
a result of Bergelson, McCutcheon and Zhang for countable amenable groups.
(ii) We establish a uniform bound on the lower Banach density of the set of
double recurrence times along all $\Gamma$-systems, where $\Gamma$ is any group
in a class of uniformly amenable groups. As a special case, we obtain this
uniformity over all $\mathbb{Z}$-systems, and our result seems to be novel
already in this particular case.
Our uncountable Roth theorem is crucial in the proof of both of these
results.
|
We consider a class of anisotropic spin-$\frac{1}{2}$ models with competing
ferro- and antiferromagnetic interactions on two-dimensional Tasaki and kagome
lattices consisting of corner sharing triangles. For certain values of the
interactions the ground state is macroscopically degenerate in zero magnetic
field. In this case the ground state manifold consists of isolated magnons as
well as bound magnon complexes. The ground state degeneracy is estimated
using a special form of exact wave function which admits an arrow-configuration
representation on the two-dimensional lattice. The comparison of this estimate with
the result for some special exactly solved models shows that the used approach
determines the number of the ground states with exponential accuracy. It is
shown that the main contribution to the ground state degeneracy and the
residual entropy is given by the bound magnon complexes.
|