Virtual try-on methods aim to generate images of fashion models wearing
arbitrary combinations of garments. This is a challenging task because the
generated image must appear realistic and accurately display the interaction
between garments. Prior works produce images that are filled with artifacts and
fail to capture important visual details necessary for commercial applications.
We propose Outfit Visualization Net (OVNet) to capture these important details
(e.g. buttons, shading, textures, realistic hemlines, and interactions between
garments) and produce high quality multiple-garment virtual try-on images.
OVNet consists of 1) a semantic layout generator and 2) an image generation
pipeline using multiple coordinated warps. We train the warper to output
multiple warps using a cascade loss, which refines each successive warp to
focus on poorly generated regions of a previous warp and yields consistent
improvements in detail. In addition, we introduce a method for matching outfits with the most suitable model, which yields significant improvements for both our method and prior try-on methods. Through quantitative and qualitative
analysis, we demonstrate our method generates substantially higher-quality
studio images compared to prior works for multi-garment outfits. An interactive
interface powered by this method has been deployed on fashion e-commerce
websites and received overwhelmingly positive feedback.
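A minimal PyTorch sketch of one plausible reading of the cascade loss described above (the per-pixel error weighting is an assumption for illustration, not the authors' implementation): each successive warp is penalized more heavily on pixels the previous warp reconstructed poorly.

```python
import torch

def cascade_loss(warps, target, eps=1e-8):
    """warps: list of (B, C, H, W) warped garment images, coarsest first;
    target: (B, C, H, W) ground-truth image. Later warps are weighted toward
    regions where the previous warp had a large per-pixel error."""
    total = 0.0
    weights = torch.ones_like(target[:, :1])                   # uniform weights for the first warp
    for warp in warps:
        err = (warp - target).abs().mean(dim=1, keepdim=True)  # per-pixel L1 error
        total = total + (weights * err).mean()
        weights = (err / (err.mean() + eps)).detach()          # refocus the next warp on poor regions
    return total
```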
|
Within the framework of Einstein-Gauss-Bonnet theory in five-dimensional
spacetime ($5D$ EGB), we derive the hydrostatic equilibrium equations and solve
them numerically to obtain neutron star models for both isotropic and anisotropic matter distributions. The mass-radius relations are obtained for the SLy equation of state, which describes both the solid crust and the liquid core of neutron stars, and for a wide range of the Gauss-Bonnet coupling parameter $\alpha$.
More specifically, we find that the contribution of the Gauss-Bonnet term leads
to substantial deviations from Einstein gravity. We also show that beyond a certain value of $\alpha$ the theory admits higher maximum masses than general relativity; however, the causality condition is violated in the high-mass region. Finally, our results are compared with recent observational data on the mass-radius diagram.
|
Deep models have improved the state of the art for both supervised and unsupervised learning. For example, deep embedded clustering (DEC) has greatly
improved the unsupervised clustering performance, by using stacked autoencoders
for representation learning. However, one weakness of deep modeling is that the
local neighborhood structure in the original space is not necessarily preserved
in the latent space. To preserve local geometry, various methods have been
proposed in the supervised and semi-supervised learning literature (e.g.,
spectral clustering and label propagation) using graph Laplacian
regularization. In this paper, we combine the strength of deep representation
learning with measure propagation (MP), a KL-divergence based graph
regularization method originally used in the semi-supervised scenario. The main assumption of MP is that if two data points are close in the original space, they are likely to belong to the same class, as measured by the KL-divergence between their class membership distributions. By adopting the same assumption in the unsupervised
learning scenario, we propose our Deep Embedded Clustering Aided by Measure
Propagation (DECAMP) model. We evaluate DECAMP on short text clustering tasks.
On three public datasets, DECAMP performs competitively with other
state-of-the-art baselines, including baselines using additional data to
generate word embeddings used in the clustering process. As an example, on the
Stackoverflow dataset, DECAMP achieved a clustering accuracy of 79%, which is
about 5% higher than all existing baselines. These empirical results suggest
that DECAMP is a very effective method for unsupervised learning.
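A brief PyTorch sketch of a measure-propagation-style graph regularizer of the kind described above (the symmetrization and edge weighting are assumptions; the exact DECAMP objective is not reproduced): neighbouring points in the original space are pushed toward similar soft cluster memberships via a KL penalty.

```python
import torch

def measure_propagation_penalty(q, edges, weights, eps=1e-8):
    """q: (N, K) soft cluster-membership distributions from the encoder;
    edges: (E, 2) long tensor of neighbour pairs in the original space;
    weights: (E,) non-negative edge weights (e.g. Gaussian kernel values)."""
    qi, qj = q[edges[:, 0]], q[edges[:, 1]]
    kl_ij = (qi * ((qi + eps).log() - (qj + eps).log())).sum(dim=1)  # KL(q_i || q_j)
    kl_ji = (qj * ((qj + eps).log() - (qi + eps).log())).sum(dim=1)  # KL(q_j || q_i)
    return (weights * 0.5 * (kl_ij + kl_ji)).mean()
```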
|
Many physical, biological and neural systems behave as coupled oscillators,
with characteristic phase coupling across different frequencies. Methods such
as $n:m$ phase locking value and bi-phase locking value have previously been
proposed to quantify phase coupling between two resonant frequencies (e.g. $f$,
$2f/3$) and across three frequencies (e.g. $f_1$, $f_2$, $f_1+f_2$),
respectively. However, the existing phase coupling metrics have limitations that restrict their applications. They cannot be used to detect or quantify
phase coupling across multiple frequencies (e.g. $f_1$, $f_2$, $f_3$, $f_4$,
$f_1+f_2+f_3-f_4$), or coupling that involves non-integer multiples of the
frequencies (e.g. $f_1$, $f_2$, $2f_1/3+f_2/3$). To address the gap, this paper
proposes a generalized approach, named multi-phase locking value (M-PLV), for
the quantification of various types of instantaneous multi-frequency phase
coupling. Different from most instantaneous phase coupling metrics that measure
the simultaneous phase coupling, the proposed M-PLV method also allows the
detection of delayed phase coupling and the associated time lag between coupled
oscillators. The M-PLV has been tested on synthetic coupled signals generated from white Gaussian signals and on a system composed of multiple coupled R\"ossler oscillators. Results indicate that the M-PLV can provide a
reliable estimation of the time window and frequency combination where the
phase coupling is significant, as well as a precise determination of time lag
in the case of delayed coupling. This method has the potential to become a
powerful new tool for exploring phase coupling in complex nonlinear dynamic
systems.
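A minimal NumPy/SciPy sketch of a multi-frequency phase-locking statistic of the form discussed above (the estimator shown, the magnitude of the mean phasor of the weighted phase combination, is an assumed simplification; the sliding-window and time-lag machinery of the M-PLV is omitted).

```python
import numpy as np
from scipy.signal import hilbert

def multi_phase_locking_value(signals, coeffs):
    """signals: list of narrow-band real time series of equal length;
    coeffs: multipliers a_k defining the combination sum_k a_k * phi_k(t),
    e.g. (1, 1, 1, -1, -1) for coupling between f1, f2, f3, f4 and f1+f2+f3-f4."""
    phases = np.stack([np.unwrap(np.angle(hilbert(s))) for s in signals])  # (K, T)
    combined = np.tensordot(np.asarray(coeffs, dtype=float), phases, axes=1)
    return np.abs(np.mean(np.exp(1j * combined)))   # value in [0, 1]
```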
|
Transcriptomic analyses are not directly quantitative: they only provide relative measurements of expression levels, up to an unknown individual scaling factor. This difficulty is compounded in differential expression analysis. Several methods have been proposed to circumvent this lack of knowledge by estimating the unknown individual scaling factors; however, even the most widely used ones suffer from being built on hardly justifiable biological hypotheses or from having a weak statistical background. Only two methods withstand this analysis: one based on the largest connected graph component, which is hardly usable for the large number of expressions found in NGS data, and a second based on $\log$-linear fits, which unfortunately requires a first step relying on one of the methods described before.
We introduce a new procedure for differential analysis in the context of
transcriptomic data. It is the result of pooling together several differential
analyses each based on randomly picked genes used as reference genes. It
provides a differential analysis free from the estimation of the individual
scaling factors or any other knowledge. Theoretical properties are investigated both in terms of FWER and power. Moreover, in the context of Poisson or negative binomial models of the transcriptomic expressions, we derive a test with non-asymptotic control of its bounds. We complete our study with some empirical
simulations and apply our procedure to a real data set of hepatic miRNA
expressions from a mouse model of non-alcoholic steatohepatitis (NASH), the
CDAHFD model. This study on real data provides new hits with good biological
explanations.
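A short illustrative sketch, under simplifying assumptions, of the pooling idea described above: each pass normalizes by a randomly drawn set of reference genes, performs a per-gene test, and the resulting p-values are aggregated. The per-gene t-test and the twice-the-median aggregation are placeholders, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def pooled_reference_de(counts_a, counts_b, n_draws=200, n_ref=5, seed=0):
    """counts_a, counts_b: (genes, samples) expression matrices for two conditions.
    Returns one aggregated p-value per gene."""
    rng = np.random.default_rng(seed)
    n_genes = counts_a.shape[0]
    pvals = np.ones((n_draws, n_genes))
    for d in range(n_draws):
        ref = rng.choice(n_genes, size=n_ref, replace=False)       # random reference genes
        scale_a = np.exp(np.log(counts_a[ref] + 1.0).mean(axis=0))  # per-sample scaling factor
        scale_b = np.exp(np.log(counts_b[ref] + 1.0).mean(axis=0))
        _, pvals[d] = stats.ttest_ind(np.log1p(counts_a / scale_a),
                                      np.log1p(counts_b / scale_b), axis=1)
    return np.minimum(1.0, 2.0 * np.median(pvals, axis=0))         # twice-the-median aggregation
```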
|
The world today is experiencing an abundance of music like no other time, and
attempts to group music into clusters have become increasingly prevalent.
Common standards for grouping music have been songs, artists, and genres, with artists or songs that explore similar genres seen as related. These
clustering attempts serve critical purposes for various stakeholders involved
in the music industry. For end users of music services, they may want to group
their music so that they can easily navigate inside their music library; for
music streaming platforms like Spotify, companies may want to establish a solid
dataset of related songs in order to successfully provide personalized music
recommendations and coherent playlists to their users. Due to increased
competition in the streaming market, platforms are trying their best to find
novel ways of learning similarities between audio to gain competitive
advantage. Our team, comprised of music lovers with different tastes, was
interested in the same issue, and created Music-Circles, an interactive
visualization of music from the Billboard charts. Music-Circles links audio feature data offered by Spotify to popular songs to create a unique vector for each song, and calculates similarities between these vectors to cluster them. Through interacting with Music-Circles, users can gain an understanding of audio features, view characteristic trends in popular music, and find out which music
cluster they belong to.
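A small scikit-learn sketch of the kind of pipeline described above (the feature names and the choice of k-means with cosine similarity are illustrative assumptions): standardize per-song Spotify audio features, compute pairwise similarities, and cluster.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

def cluster_songs(features, n_clusters=6, seed=0):
    """features: (n_songs, n_features) array of audio features per song,
    e.g. danceability, energy, valence, tempo, acousticness.
    Returns cluster labels and the song-to-song cosine-similarity matrix."""
    x = StandardScaler().fit_transform(features)
    similarity = cosine_similarity(x)        # used for song-to-song relatedness
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(x)
    return labels, similarity
```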
|
We propose a manifold matching approach to generative models which includes a
distribution generator (or data generator) and a metric generator. In our
framework, we view the real data set as some manifold embedded in a
high-dimensional Euclidean space. The distribution generator aims at generating
samples that follow some distribution condensed around the real data manifold.
This is achieved by matching two sets of points using their geometric shape descriptors, such as the centroid and $p$-diameter, with a learned distance metric;
the metric generator utilizes both real data and generated samples to learn a
distance metric which is close to some intrinsic geodesic distance on the real
data manifold. The produced distance metric is further used for manifold
matching. The two networks are learned simultaneously during the training
process. We apply the approach to both unsupervised and supervised learning tasks: in the unconditional image generation task, the proposed method obtains competitive results compared with existing generative models; in the super-resolution task, we incorporate the framework into perception-based models
and improve visual qualities by producing samples with more natural textures.
Experiments and analysis demonstrate the feasibility and effectiveness of the
proposed framework.
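A compact PyTorch sketch of a manifold-matching objective in the spirit described above (the descriptor set, the $p$-diameter estimator, and the way the learned metric enters are assumptions): the generated batch is matched to the real batch through its centroid and $p$-diameter measured in the metric generator's embedding space.

```python
import torch

def shape_descriptor_loss(real, fake, metric_net, p=2):
    """real, fake: (N, d) batches of points; metric_net: network whose embedding
    induces the learned distance. Matches centroid and p-diameter of the two sets."""
    zr, zf = metric_net(real), metric_net(fake)
    centroid_gap = (zr.mean(dim=0) - zf.mean(dim=0)).norm()

    def p_diameter(z):
        d = torch.cdist(z, z)                 # pairwise learned-embedding distances
        return d.pow(p).mean().pow(1.0 / p)

    diameter_gap = (p_diameter(zr) - p_diameter(zf)).abs()
    return centroid_gap + diameter_gap
```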
|
We introduce a two-dimensional Hele-Shaw type free boundary model for
motility of eukaryotic cells on substrates. The key ingredients of this model
are the Darcy law for overdamped motion of the cytoskeleton gel (active gel)
coupled with an advection-diffusion equation for the myosin density, leading to an elliptic-parabolic Keller-Segel system. This system is supplemented with
Hele-Shaw type boundary conditions: Young-Laplace equation for pressure and
continuity of velocities. We first show that radially symmetric stationary
solutions become unstable and bifurcate to traveling wave solutions at a
critical value of the total myosin mass. Next we perform linear stability
analysis of these traveling wave solutions and identify the type of bifurcation
(sub- or supercritical). Our study sheds light on the mathematics underlying
instability/stability transitions in this model. Specifically, we show that
these transitions occur via generalized eigenvectors of the linearized
operator.
|
The algebraic monoid structure of an incidence algebra is investigated. We
show that the multiplicative structure alone determines the algebra
automorphisms of the incidence algebra. We present a formula that expresses the
complexity of the incidence monoid with respect to the two sided action of its
maximal torus in terms of the zeta polynomial of the poset. In addition, we
characterize the finite (connected) posets whose incidence monoids have
complexity $\leq 1$. Finally, we determine the covering relations of the
adherence order on the incidence monoid of a star poset.
|
In this paper, we consider a frequency-based portfolio optimization problem
with $m \geq 2$ assets when the expected logarithmic growth (ELG) rate of
wealth is used as the performance metric. With the aid of the notion called
dominant asset, it is known that the optimal ELG level is achieved by investing all available funds in that asset. However, such an "all-in" strategy is
arguably too risky to implement in practice. Motivated by this issue, we study
the case where the portfolio weights are chosen in a rather ad-hoc manner and a
buy-and-hold strategy is subsequently used. Then we show that, if the underlying portfolio contains a dominant asset, buying and holding that specific asset is asymptotically log-optimal with a sublinear rate of convergence. This
result also extends to the scenario where a trader either does not have a
probabilistic model for the returns or does not trust a model obtained from
historical data. To be more specific, we show that if a market contains a dominant asset, buying and holding a market portfolio with nonzero weights on each asset is asymptotically log-optimal. Additionally, this paper
includes a conjecture regarding the property called high-frequency maximality.
That is, in the absence of transaction costs, high-frequency rebalancing is
unbeatable in the ELG sense. Support for the conjecture, involving a lemma for
a weak version of the conjecture, is provided. This conjecture, if true,
enables us to improve the log-optimality result obtained previously. Finally, we also provide a result that indicates when one should rebalance the portfolio, if needed. Examples, some involving simulations with historical data, are provided along the way to illustrate the theory.
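A small NumPy sketch illustrating the quantities discussed above (the notation is assumed): the realized logarithmic growth of a buy-and-hold portfolio versus the all-in strategy on a single asset, given a matrix of per-period gross returns.

```python
import numpy as np

def log_growth_buy_and_hold(gross_returns, weights):
    """gross_returns: (T, m) per-period gross returns X_i(k) for m assets;
    weights: initial fractions of wealth, summing to one. Returns the
    realized per-period logarithmic growth of the buy-and-hold account."""
    account = np.cumprod(gross_returns, axis=0) @ np.asarray(weights)  # V(k)/V(0)
    return np.log(account[-1]) / gross_returns.shape[0]

def log_growth_all_in(gross_returns, asset):
    """Realized per-period log growth of investing all funds in a single asset."""
    return np.mean(np.log(gross_returns[:, asset]))
```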
|
In this paper, we develop a new classification method for manifold-valued
data in the framework of probabilistic learning vector quantization. In many
classification scenarios, the data can be naturally represented by symmetric
positive definite matrices, which are inherently points that live on a curved
Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds,
traditional Euclidean machine learning algorithms yield poor results on such
data. In this paper, we generalize the probabilistic learning vector
quantization algorithm for data points living on the manifold of symmetric
positive definite matrices equipped with the natural Riemannian metric (the affine-invariant metric). By exploiting the induced Riemannian distance, we
derive the probabilistic learning Riemannian space quantization algorithm,
obtaining the learning rule through Riemannian gradient descent. Empirical
investigations on synthetic data, image data, and motor imagery EEG data
demonstrate the superior performance of the proposed method.
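For reference, a short NumPy/SciPy sketch of the affine-invariant Riemannian distance on the SPD manifold used above, $d(A,B) = \lVert \log(A^{-1/2} B A^{-1/2}) \rVert_F$ (a standard formula; the prototype-update rule of the proposed algorithm is not shown here).

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def affine_invariant_distance(A, B):
    """Riemannian distance between symmetric positive definite matrices A and B
    under the affine-invariant metric."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(logm(M), 'fro')
```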
|
The extreme loads experienced by wind turbines during extreme wind events are critical for the evaluation of structural reliability. Hence, load alleviation control methods need to be designed and deployed to reduce the
adverse effects of extreme wind events. This work demonstrates that the extreme
loads are highly correlated with wind conditions such as turbulence-induced wind shear. Based on this insight, this work proposes a turbulence-based load alleviation control strategy that adapts the controller to changes in wind conditions. The estimation of the rotor-averaged wind shear based on the rotor
loads is illustrated, and is herein used to statistically characterize the
extreme wind events for control purposes. To demonstrate the benefits, simulations are carried out using a high-fidelity aero-elastic tool and the DTU 10 MW reference turbine in normal and extreme turbulence wind conditions. The
results indicate that the proposed method can effectively decrease the
exceedance probability of the extreme loads. Meanwhile, the method can minimize
the loss of annual energy production in normal operating conditions.
|
The asymptotics of the ground state $u(r)$ of the Schr\"odinger--Newton
equation in $\mathbb{R}^3$ was determined by V. Moroz and J. van Schaftingen to
be $u(r) \sim A e^{-r}/ r^{1 - \|u\|_2^2/8\pi}$ for some $A>0$, in units that
fix the exponential rate to unity. They left open the value of $\|u\|_2^2$, the
squared $L^2$ norm of $u$. Here it is rigorously shown that $2^{1/3}3\pi^2\leq
\|u\|_2^2\leq 2^{3}\pi^{3/2}$. It is reported that numerically
$\|u\|_2^2\approx 14.03\pi$, revealing that the monomial prefactor of $e^{-r}$
increases with $r$ in a concave manner. Asymptotic results are proposed for the
Schr\"odinger--Newton equation with external $\sim - K/r$ potential, and for
the related Hartree equation of a bosonic atom or ion.
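A quick numerical check of the stated bounds, confirming that the reported value $\|u\|_2^2 \approx 14.03\pi$ indeed lies between $2^{1/3}3\pi^2$ and $2^{3}\pi^{3/2}$, close to the upper bound.

```python
import numpy as np

lower = 2 ** (1 / 3) * 3 * np.pi ** 2    # rigorous lower bound, about 37.30
upper = 2 ** 3 * np.pi ** 1.5            # rigorous upper bound, about 44.55
reported = 14.03 * np.pi                 # reported numerical value, about 44.07
print(lower, reported, upper)
print(lower <= reported <= upper)        # True
```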
|
In this work, we extended a stochastic model for football leagues based on
the team's potential [R. da Silva et al. Comput. Phys. Commun. \textbf{184}
661--670 (2013)] to make predictions instead of only characterizing the statistics of the points of real leagues. Our adaptation accounts for the advantage of playing at home when evaluating the potentials of the home and away teams. The algorithm predicts the tournament's outcome by using the teams' market value and/or ongoing performance as
initial conditions in the context of Monte Carlo simulations. We present and
compare our results to the worldwide known SPI predictions performed by the
"FiveThirtyEight" project. The results show that the algorithm can deliver good
predictions even with a few ingredients and in more complicated seasons like
the 2020 editions where the matches were played without fans in the stadiums.
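A toy Monte Carlo sketch of a potential-based season simulation in the spirit described above (the match-probability model, the home-advantage boost, and the fixed draw fraction are assumptions, not the published model).

```python
import numpy as np

def simulate_season(potential, home_advantage=0.3, n_runs=10000, seed=1):
    """potential: array of team potentials (e.g. scaled market values).
    Returns the mean final points per team over the simulated seasons."""
    rng = np.random.default_rng(seed)
    n = len(potential)
    points = np.zeros((n_runs, n))
    for h in range(n):
        for a in range(n):
            if h == a:
                continue
            ph, pa = potential[h] + home_advantage, potential[a]
            p_home = ph / (ph + pa)                    # home-win probability (assumed form)
            u = rng.random(n_runs)
            home_win = u < 0.9 * p_home                # keep roughly 10% of matches as draws
            away_win = u > 1.0 - 0.9 * (1.0 - p_home)
            draw = ~home_win & ~away_win
            points[:, h] += 3 * home_win + draw
            points[:, a] += 3 * away_win + draw
    return points.mean(axis=0)
```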
|
In this paper we investigate how the bootstrap can be applied to time series
regressions when the volatility of the innovations is random and
non-stationary. The volatility of many economic and financial time series
displays persistent changes and possible non-stationarity. However, the theory
of the bootstrap for such models has focused on deterministic changes of the
unconditional variance and little is known about the performance and the
validity of the bootstrap when the volatility is driven by a non-stationary
stochastic process. This includes near-integrated volatility processes as well
as near-integrated GARCH processes. This paper develops conditions for
bootstrap validity in time series regressions with non-stationary, stochastic
volatility. We show that in such cases the distribution of bootstrap statistics
(conditional on the data) is random in the limit. Consequently, the
conventional approaches to proving bootstrap validity, involving weak
convergence in probability of the bootstrap statistic, fail to deliver the
required results. Instead, we use the concept of `weak convergence in
distribution' to develop and establish novel conditions for validity of the
wild bootstrap, conditional on the volatility process. We apply our results to
several testing problems in the presence of non-stationary stochastic
volatility, including testing in a location model, testing for structural
change and testing for an autoregressive unit root. Sufficient conditions for
bootstrap validity include the absence of statistical leverage effects, i.e.,
correlation between the error process and its future conditional variance. The
results are illustrated using Monte Carlo simulations, which indicate that the
wild bootstrap leads to size control even in small samples.
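A minimal sketch of a wild-bootstrap test in the location model mentioned above (the Rademacher multiplier scheme and studentization are standard choices; the paper's exact statistics and validity conditions are not reproduced): multiplying the residuals by i.i.d. signs preserves the observed volatility path in each bootstrap sample.

```python
import numpy as np

def wild_bootstrap_location_test(y, n_boot=999, seed=0):
    """Two-sided wild-bootstrap p-value for H0: E[y_t] = 0."""
    rng = np.random.default_rng(seed)
    n = len(y)
    t_obs = np.sqrt(n) * y.mean() / y.std(ddof=1)
    resid = y - y.mean()                                # centred residuals
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        yb = resid * rng.choice([-1.0, 1.0], size=n)    # Rademacher multipliers
        t_boot[b] = np.sqrt(n) * yb.mean() / yb.std(ddof=1)
    return np.mean(np.abs(t_boot) >= np.abs(t_obs))
```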
|
In the automotive domain, operating secondary tasks such as accessing the infotainment system and adjusting air conditioning vents and side mirrors distracts drivers from driving. Though existing modalities like gesture and speech recognition systems facilitate undertaking secondary tasks by reducing the duration of eyes off the road, they often require remembering a set of
gestures or screen sequences. In this paper, we have proposed two different
modalities for drivers to virtually touch the dashboard display using a laser
tracker with a mechanical switch and an eye gaze switch. We compared
the performance of our proposed modalities against the conventional touch modality in an automotive environment by comparing pointing and selection times for a representative secondary task, and also analysed the effect on driving performance in terms of lane deviation, average speed, variation in perceived workload, and system usability. We did not find a significant difference in driving and pointing performance between the laser tracking system and the existing touchscreen system. Our results also showed that the driving and pointing performance of the
virtual touch system with eye gaze switch was significantly better than the
same with mechanical switch. We evaluated the efficacy of the proposed virtual
touch system with eye gaze switch inside a real car and investigated acceptance
of the system by professional drivers using qualitative research. The
quantitative and qualitative studies indicated the importance of using a multimodal system inside the car and highlighted several criteria for the acceptance of new
automotive user interface.
|
This paper aims to provide an overview of the ethical concerns in artificial
intelligence (AI) and the framework that is needed to mitigate those risks, and
to suggest a practical path to ensure the development and use of AI at the
United Nations (UN) aligns with our ethical values. The overview discusses how
AI is an increasingly powerful tool with potential for good, albeit one with a
high risk of negative side-effects that go against fundamental human rights and
UN values. It explains the need for ethical principles for AI aligned with
principles for data governance, as data and AI are tightly interwoven. It
explores different ethical frameworks that exist and tools such as assessment
lists. It recommends that the UN develop a framework consisting of ethical
principles, architectural standards, assessment methods, tools and
methodologies, and a policy to govern the implementation and adherence to this
framework, accompanied by an education program for staff.
|
In the formalism of generalized holographic dark energy (HDE), the
holographic cut-off is generalized to depend upon $L_\mathrm{IR} =
L_\mathrm{IR} \left( L_\mathrm{p}, \dot L_\mathrm{p}, \ddot L_\mathrm{p},
\cdots, L_\mathrm{f}, \dot L_\mathrm{f}, \cdots, a\right)$ with $L_\mathrm{p}$
and $L_\mathrm{f}$ being the particle horizon and the future horizon, respectively, and $a$ the scale factor of the universe. Based on such
formalism, in the present paper, we show that a wide class of dark energy (DE)
models can be regarded as different candidates of the generalized HDE family,
with respective cut-offs. This can be thought of as a symmetry between the generalized HDE and different DE models. In this regard, we consider several entropic dark energy models - such as the Tsallis entropic DE, the R\'{e}nyi entropic DE, and the Sharma-Mittal entropic DE - and show that they are indeed equivalent to the generalized HDE. Such equivalence between the entropic DE
and the generalized HDE is extended to the scenario where the respective
exponents of the entropy functions are allowed to vary with the expansion of
the universe. Besides the entropic DE models, the correspondence with the
generalized HDE is also established for the Quintessence and for the Ricci DE
models. In all the above cases, the effective equation of state (EoS) parameter
corresponds to the holographic energy density are determined, by which the
equivalence of various DE models with the respective generalized HDE models are
further confirmed. The equivalent holographic cut-offs are determined by two
ways: (1) in terms of the particle horizon and its derivatives, (2) in terms of
the future horizon horizon and its derivatives.
|
Automated program verification is a difficult problem. It is undecidable even
for transition systems over Linear Integer Arithmetic (LIA). Extending the
transition system with theory of Arrays, further complicates the problem by
requiring inference and reasoning with universally quantified formulas. In this
paper, we present a new algorithm, Quic3, that extends IC3 to infer universally
quantified invariants over the combined theory of LIA and Arrays. Unlike other
approaches that use either IC3 or an SMT solver as a black box, Quic3 carefully
manages quantified generalization (to construct quantified invariants) and
quantifier instantiation (to detect convergence in the presence of
quantifiers). While Quic3 is not guaranteed to converge, it is guaranteed to
make progress by exploring longer and longer executions. We have implemented
Quic3 within the Constrained Horn Clause solver engine of Z3 and experimented
with it by applying Quic3 to verify a variety of public benchmarks of array-manipulating C programs.
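To give a concrete feel for the setting, here is a small z3py example that encodes a toy counter program as Constrained Horn Clauses and asks Z3's Horn engine for an inductive invariant; Quic3 itself extends this engine with quantified generalization for arrays, and its specific options are not shown.

```python
from z3 import Ints, Function, IntSort, BoolSort, ForAll, Implies, And, BoolVal, SolverFor

x, xp = Ints('x xp')
Inv = Function('Inv', IntSort(), BoolSort())   # unknown inductive invariant

s = SolverFor('HORN')
s.add(ForAll([x], Implies(x == 0, Inv(x))))                            # initial states
s.add(ForAll([x, xp], Implies(And(Inv(x), xp == x + 1), Inv(xp))))     # transition x' = x + 1
s.add(ForAll([x], Implies(And(Inv(x), x < 0), BoolVal(False))))        # safety: x never negative
print(s.check())   # sat: the clauses have a model, i.e. an invariant such as x >= 0 exists
```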
|
Fission properties of the actinide nuclei are deduced from theoretical
analysis. We investigate potential energy surfaces and fission barriers and
predict the fission fragment mass-yields of actinide isotopes. The results are
compared with experimental data where available. The calculations were
performed in the macroscopic-microscopic approximation with the
Lublin-Strasbourg Drop (LSD) for the macroscopic part and the microscopic
energy corrections were evaluated in the Yukawa-folded potential. The Fourier
nuclear shape parametrization is used to describe the nuclear shape, including
the non-axial degree of freedom. The fission fragment mass-yields of considered
nuclei are evaluated within a 3D collective model using the Born-Oppenheimer
approximation.
|
Light beams carrying orbital-angular-momentum (OAM) play an important role in
optical manipulation and communication owing to their unbounded state space.
However, it is still challenging to efficiently discriminate OAM modes with
large topological charges, and thus only a small subset of OAM states is typically used. Here we demonstrate that neural networks can be trained to sort OAM modes with large topological charges and unknown superpositions. Using intensity images of OAM modes generated in simulations and experiments as the input data, we show that our neural network generalizes well, recognizing OAM modes with topological charges beyond the training range
with high accuracy. Moreover, the trained neural network can correctly classify
and predict arbitrary superpositions of two OAM modes with random topological
charges. Our machine learning approach only requires a small portion of
experimental samples and significantly reduces the cost in experiments, which
paves the way to study the OAM physics and increase the state space of OAM
beams in practical applications.
|
Product quantization (PQ) is a widely used technique for ad-hoc retrieval.
Recent studies propose supervised PQ, where the embedding and quantization
models can be jointly trained with supervised learning. However, there is a
lack of an appropriate formulation of the joint training objective; thus, the improvements over previous non-supervised baselines are limited in practice. In
this work, we propose the Matching-oriented Product Quantization (MoPQ), where
a novel objective Multinoulli Contrastive Loss (MCL) is formulated. With the
minimization of MCL, we are able to maximize the matching probability of query
and ground-truth key, which contributes to the optimal retrieval accuracy.
Given that the exact computation of MCL is intractable due to the need for a vast number of contrastive samples, we further propose the Differentiable Cross-device
Sampling (DCS), which significantly augments the contrastive samples for
precise approximation of MCL. We conduct extensive experimental studies on four
real-world datasets, whose results verify the effectiveness of MoPQ. The code
is available at https://github.com/microsoft/MoPQ.
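A minimal PyTorch sketch of a matching-oriented contrastive objective of the kind described above (a softmax cross-entropy over query-key matching scores; this is an assumed simplification that omits the cross-device sample gathering of DCS).

```python
import torch
import torch.nn.functional as F

def multinoulli_contrastive_loss(query_emb, key_emb, positive_idx, temperature=0.1):
    """query_emb: (B, d) query embeddings; key_emb: (N, d) candidate key embeddings
    (including the ground-truth keys); positive_idx: (B,) index of the true key
    for each query. Minimizing this maximizes the matching probability of the
    ground-truth key."""
    logits = query_emb @ key_emb.t() / temperature   # (B, N) matching scores
    return F.cross_entropy(logits, positive_idx)
```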
|
We show that a conjecture of Putman-Wieland, which posits the nonexistence of
finite orbits for higher Prym representations of the mapping class group, is
equivalent to the existence of surface-by-surface and surface-by-free groups
which do not virtually algebraically fiber.
|
Children in a large number of international and cross-cultural families in
and outside of the US learn and speak more than one language. However, parents
often struggle to acquaint their young children with the local language if the child spends the majority of time at home, or with the parents' spoken language if the child goes to daycare or school. By reviewing relevant literature about the role
of screen media content in young children's language learning, and interviewing
a subset of parents raising multilingual children, we explore the potential of
designing conversational user interfaces which can double as an assistive
language aid. We present a preliminary list of objectives to guide the design of conversational user interface dialogue for young children's
bilingual language acquisition.
|
Inelastic scattering experiments are key methods for mapping the full
dispersion of fundamental excitations of solids in the ground as well as
non-equilibrium states. A quantitative analysis of inelastic scattering in
terms of phonon excitations requires identifying the role of multi-phonon
processes. Here, we develop an efficient first-principles methodology for
calculating the all-phonon quantum mechanical structure factor of solids. We
demonstrate our method by obtaining excellent agreement between measurements
and calculations of the diffuse scattering patterns of black phosphorus,
showing that multi-phonon processes play a substantial role. The present
approach constitutes a step towards the interpretation of static and
time-resolved electron, X-ray, and neutron inelastic scattering data.
|
We introduce a family of Generalized Continuous Maxwell Demons (GCMDs)
operating on idealized single-bit equilibrium devices that combine the
single-measurement Szilard and the Continuous Maxwell Demon protocols. We
derive the cycle-distributions for extracted work, information-content and
time, and compute the power and information-to-work efficiency fluctuations for
the different models. We show that the efficiency at maximum power is maximal
for an opportunistic protocol of continuous-type in the dynamical regime
dominated by rare events. We also extend the analysis to finite-time work
extracting protocols by mapping them to a three-state GCMD. We show that
dynamical finite-time correlations in this model increase the
information-to-work conversion efficiency, underlining the role of temporal
correlations in optimizing information-to-energy conversion.
|
The UN-Habitat estimates that over one billion people live in slums around
the world. However, state-of-the-art techniques to detect the location of slum
areas employ high-resolution satellite imagery, which is costly to obtain and
process. As a result, researchers have started to look at utilising free and
open-access medium resolution satellite imagery. Yet, there is no clear
consensus on which data preparation and machine learning approaches are the
most appropriate to use with such imagery data. In this paper, we evaluate two
techniques (multi-spectral data and grey-level co-occurrence matrix feature
extraction) on an open-access dataset consisting of labelled Sentinel-2 images
with a spatial resolution of 10 meters. Both techniques were paired with a
canonical correlation forests classifier. The results show that the grey-level
co-occurrence matrix performed better than multi-spectral data for all four
cities. It had an average accuracy for the slum class of 97% and a mean
intersection over union of 94%, while multi-spectral data had 75% and 64% for
the respective metrics. These results indicate that open-access satellite
imagery with a resolution of at least 10 meters may be suitable for keeping
track of development goals such as the detection of slums in cities.
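A brief scikit-image sketch of grey-level co-occurrence matrix (GLCM) feature extraction of the kind evaluated above (the distances, angles, and property list are illustrative choices; recent scikit-image versions spell the functions `graycomatrix`/`graycoprops`, older releases use `greyco...`).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, distances=(1,), angles=(0.0, np.pi / 2)):
    """patch: 2-D uint8 grey-level image tile (e.g. a rescaled Sentinel-2 band).
    Returns a flat vector of texture features fed to the classifier."""
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ('contrast', 'homogeneity', 'energy', 'correlation')
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```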
|
Legal English is a sublanguage that is important to everyone yet not understood by everyone. Pretrained models have become best practice among current deep learning approaches to many problems. It would be a waste or
even a danger if these models were applied in practice without knowledge of the
sublanguage of the law. In this paper, we raise this issue and propose a simple solution by introducing BERTLaw, a legal sublanguage pretrained model. The paper's experiments demonstrate the superior effectiveness of the method compared to the baseline pretrained model.
|
In this article, we first introduce the inflated unit Lindley distribution, considering the zero and/or one inflation scenarios, and study its basic distributional and structural properties. Both distributions are shown to be members of the exponential family with full rank. Different parameter estimation
methods are discussed and supporting simulation studies to check their efficacy
are also presented. The proportions of students passing the high school leaving examination across the schools of the state of Manipur in India for the year 2020 are then modeled using the proposed distributions and compared with the inflated beta distribution to justify their benefits.
|
Readability assessment is the task of evaluating the reading difficulty of a
given piece of text. Although research on computational approaches to
readability assessment is now two decades old, there is not much work on
synthesizing this research. This article is a brief survey of contemporary
research on developing computational models for readability assessment. We
identify the common approaches, discuss their shortcomings, and identify some
challenges for the future. Where possible, we also connect computational
research with insights from related work in other disciplines such as education
and psychology.
|
Event-based sensors have the potential to optimize energy consumption at
every stage in the signal processing pipeline, including data acquisition,
transmission, processing and storage. However, almost all state-of-the-art
systems are still built upon the classical Nyquist-based periodic signal
acquisition. In this work, we design and validate the Polygonal Approximation
Sampler (PAS), a novel circuit to implement a general-purpose event-based
sampler using a polygonal approximation algorithm as the underlying sampling
trigger. The circuit can be dynamically reconfigured to produce a coarse or a
detailed reconstruction of the analog input, by adjusting the error threshold
of the approximation. The proposed circuit is designed at the Register Transfer
Level and processes each input sample received from the ADC in a single clock
cycle. The PAS has been tested with three different types of archetypal signals
captured by wearable devices (electrocardiogram, accelerometer and respiration
data) and compared with a standard periodic ADC. These tests show that
single-channel signals with slow variations and constant segments (like the single-lead ECG and respiration signals used here) benefit greatly from the proposed sampling technique, reducing the amount of data used by up to 99% without significant performance degradation. At the same time, multi-channel signals (like the six-dimensional accelerometer signal) can still benefit from the designed circuit, achieving a reduction of up to 80% with minor performance
degradation. These results open the door to new types of wearable sensors with
reduced size and higher battery lifetime.
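A behavioural Python sketch of a polygonal-approximation sampling rule of the kind described above (an assumed software model for intuition only; the actual PAS is an RTL design processing one ADC sample per clock cycle): a new event is emitted only when a straight segment from the last emitted point to the current sample can no longer represent the skipped samples within the error threshold.

```python
def polygonal_sampler(samples, error_threshold):
    """samples: sequence of ADC values. Returns the emitted events as
    (index, value) pairs approximating the signal as a polygonal line."""
    events = [(0, samples[0])]
    for n in range(1, len(samples)):
        i0, v0 = events[-1]
        for k in range(i0 + 1, n):
            # linear interpolation between the last event and the current sample
            predicted = v0 + (samples[n] - v0) * (k - i0) / (n - i0)
            if abs(predicted - samples[k]) > error_threshold:
                events.append((n - 1, samples[n - 1]))   # emit previous sample as an event
                break
    events.append((len(samples) - 1, samples[-1]))       # always keep the final sample
    return events
```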
|
Space information networks (SIN) are facing an ever-increasing demand for high-speed and high-capacity seamless data transmission due to the integration
of ground, air, and space communications. However, this imposes a new paradigm
on the architecture design of the integrated SIN. Recently, reconfigurable intelligent surfaces (RISs) and mobile edge computing (MEC) have emerged as two of the most promising techniques, conceived to improve communication and computation capability by reconfiguring the wireless propagation environment and by offloading. Hence, converging RISs and MEC in SIN promises to reap the dual benefits of computation and communication. In this article, we
propose an RIS-assisted collaborative MEC architecture for SIN and discuss its
implementation. Then we present its potential benefits, major challenges, and
feasible applications. Subsequently, we study different cases to evaluate the
system data rate and latency. Finally, we conclude with a list of open issues
in this research area.
|
The presence of stable, compact circumbinary discs of gas and dust around
post-asymptotic giant branch (post-AGB) binary systems has been well
established. We focus on one such system: IRAS 08544-4431. We present an
interferometric multi-wavelength analysis of the circumstellar environment of
IRAS 08544-4431. The aim is to constrain different contributions to the total
flux in the H, K, L, and N-bands in the radial direction. The data from
VLTI/PIONIER, VLTI/GRAVITY, and VLTI/MATISSE range from the near-infrared,
where the post-AGB star dominates, to the mid-infrared, where the disc
dominates. We fitted two geometric models to the visibility data to reproduce
the circumbinary disc: a ring with a Gaussian width and a flat disc model with
a temperature gradient. The flux contributions from the disc, the primary star
(modelled as a point-source), and an over-resolved component are recovered
along with the radial size of the emission, the temperature of the disc as a
function of radius, and the spectral dependencies of the different components.
The trends of all visibility data were well reproduced with the geometric
models. The near-infrared data were best fitted with a Gaussian ring model
while the mid-infrared data favoured a temperature gradient model. This implies
that a vertical structure is present at the disc inner rim, which we attribute
to a rounded puffed-up inner rim. The N-to-K size ratio is 2.8, consistent with a continuous flat source, analogous to young stellar objects. By combining
optical interferometric instruments operating at different wavelengths we can
resolve the complex structure of circumstellar discs and study the
wavelength-dependent opacity profile. A detailed radial, vertical, and
azimuthal structural analysis awaits a radiative transfer treatment in 3D to
capture all non-radial complexity.
|
We report on the development and extensive characterization of co-sputtered
tantala-zirconia thin films, with the goal to decrease coating Brownian noise
in present and future gravitational-wave detectors. We tested a variety of
sputtering processes of different energies and deposition rates, and we
considered the effect of different values of cation ratio $\eta =$ Zr/(Zr+Ta)
and of post-deposition heat treatment temperature $T_a$ on the optical and
mechanical properties of the films. Co-sputtered zirconia proved to be an
efficient way to frustrate crystallization in tantala thin films, allowing for
a substantial increase of the maximum annealing temperature and hence for a
decrease of coating mechanical loss. The lowest average coating loss was
observed for an ion-beam sputtered sample with $\eta = 0.485 \pm 0.004$
annealed at 800 $^{\circ}$C, yielding $\overline{\varphi} = 1.8 \times
10^{-4}$. All coating samples showed cracks after annealing. Although in
principle our measurements are sensitive to such defects, we found no evidence
that our results were affected. The issue could be solved, at least for
ion-beam sputtered coatings, by decreasing heating and cooling rates down to 7
$^{\circ}$C/h. While we observed as little optical absorption as in the
coatings of current gravitational-wave interferometers (0.5 parts per million),
further development will be needed to decrease light scattering and avoid the
formation of defects upon annealing.
|
Beginning with the Everett-DeWitt many-worlds interpretation of quantum
mechanics, there have been a series of proposals for how the state vector of a
quantum system might split at any instant into orthogonal branches, each of
which exhibits approximately classical behavior. Here we propose a
decomposition of a state vector into branches by finding the minimum of a
measure of the mean squared quantum complexity of the branches in the branch
decomposition. In a non-relativistic formulation of this proposal, branching
occurs repeatedly over time, with each branch splitting successively into
further sub-branches among which the branch followed by the real world is
chosen randomly according to the Born rule. In a Lorentz covariant version, the
real world is a single random draw from the set of branches at asymptotically
late time, restored to finite time by sequentially retracing the set of
branching events implied by the late time choice. The complexity measure
depends on a parameter $b$ with units of volume which sets the boundary between
quantum and classical behavior. The value of $b$ is, in principle, accessible
to experiment.
|
We prove existence, uniqueness and non-negativity of solutions of certain
integral equations describing the density of states $u(z)$ in the spectral
theory of soliton gases for the one dimensional integrable focusing Nonlinear
Schr\"{o}dinger Equation (fNLS) and for the Korteweg de Vries (KdV) equation.
Our proofs are based on ideas and methods of potential theory. In particular,
we show that the minimizing (positive) measure for a certain energy functional is absolutely continuous and its density $u(z)\geq 0$ solves the required integral
equation. In a similar fashion we show that $v(z)$, the temporal analog of
$u(z)$, is the difference of densities of two absolutely continuous measures.
Together, integral equations for $u,v$ represent nonlinear dispersion relation
for the fNLS soliton gas. We also discuss smoothness and other properties of
the obtained solutions. Finally, we obtain exact solutions of the above
integral equations in the case of a KdV condensate and a bound state fNLS
condensate. Our results are a first step towards a mathematical foundation for the spectral theory of soliton and breather gases, which appeared in the work of El and Tovbis, Phys. Rev. E, 2020. It is expected that the presented ideas and methods will be useful for studying similar classes of integral equations describing, for example, breather gases for the fNLS, as well as soliton gases of various integrable systems.
|
The second law of thermodynamics is asymmetric with respect to time as it
says that the entropy of the universe must have been lower in the past and will
be higher in the future. How this time-asymmetric law arises from the
time-symmetric equations of motion has been the subject of extensive discussion
in the scientific literature. The currently accepted resolution of the problem
is to assume that the universe began in a low entropy state for an unknown
reason. But the probability of this happening by chance is exceedingly small,
if all microstates are assigned equal a-priori probabilities. In this paper, I
explore another possible explanation, which is that our observations of the
time-asymmetric increase of entropy could simply be the result of the way we
assign a-priori probabilities differently to past and future events.
|
Recently, there have been several papers that discuss the extension of the
Pinball loss Support Vector Machine (Pin-SVM) model, originally proposed by Huang et al. [1], [2]. The Pin-SVM classifier deals with the pinball loss function, which is defined in terms of a parameter $\tau$ that can take values in $[-1,1]$. The existing Pin-SVM model requires solving the same optimization problem for all values of $\tau$ in $[-1,1]$. In this paper,
we improve the existing Pin-SVM model for the binary classification task. We first note that there is a major difficulty in the Pin-SVM model (Huang et al. [1]) for $-1 \leq \tau < 0$. Specifically, we show that the Pin-SVM model requires the solution of a different optimization problem for $-1 \leq \tau < 0$. We further propose a unified model, termed Unified Pin-SVM, which results in a quadratic programming problem (QPP) valid for all $-1\leq \tau \leq 1$ and is hence more convenient to use.
The proposed Unified Pin-SVM model obtains a significant improvement in accuracy over the existing Pin-SVM model, which has also been empirically justified by extensive numerical experiments with real-world datasets.
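For concreteness, a minimal NumPy sketch of the pinball loss used in Pin-SVM, $L_\tau(u) = u$ for $u \ge 0$ and $-\tau u$ otherwise, applied to the margin variable $u = 1 - y\,(w^\top x + b)$; the reformulated QPP of the Unified Pin-SVM itself is not reproduced here.

```python
import numpy as np

def pinball_loss(u, tau):
    """Pinball loss L_tau(u): u for u >= 0, -tau * u otherwise, with tau in [-1, 1]."""
    return np.where(u >= 0, u, -tau * u)

def pin_svm_objective(w, b, X, y, C, tau):
    """Primal Pin-SVM-style objective (illustrative): margin u_i = 1 - y_i (w.x_i + b)."""
    u = 1.0 - y * (X @ w + b)
    return 0.5 * w @ w + C * pinball_loss(u, tau).sum()
```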
|
The presence of non-zero helicity in intergalactic magnetic fields is a
smoking gun for their primordial origin since they have to be generated by
processes that break CP invariance. As an experimental signature for the
presence of helical magnetic fields, an estimator $Q$ based on the triple
scalar product of the wave-vectors of photons generated in electromagnetic
cascades from, e.g., TeV blazars, has been suggested previously. We propose to
apply deep learning to helicity classification employing Convolutional Neural
Networks and show that this method outperforms the $Q$ estimator.
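A short NumPy sketch of the $Q$ estimator's core ingredient, the triple scalar product of the photon wave-vectors (how photon triplets are selected and binned for the full estimator is not shown).

```python
import numpy as np

def triple_product_q(k1, k2, k3):
    """Q = k1 . (k2 x k3) for batches of 3-D wave-vectors of shape (..., 3);
    its sign statistics over many cascade photons probe the helicity of the field."""
    return np.einsum('...i,...i->...', k1, np.cross(k2, k3))
```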
|
Being able to segment unseen classes not observed during training is an
important technical challenge in deep learning, because of its potential to
reduce the expensive annotation required for semantic segmentation. Prior
zero-label semantic segmentation works approach this task by learning
visual-semantic embeddings or generative models. However, they are prone to
overfitting on the seen classes because there is no training signal for them.
In this paper, we study the challenging generalized zero-label semantic
segmentation task where the model has to segment both seen and unseen classes
at test time. We assume that pixels of unseen classes could be present in the
training images but without being annotated. Our idea is to capture the latent
information on unseen classes by supervising the model with self-produced
pseudo-labels for unlabeled pixels. We propose a consistency regularizer to
filter out noisy pseudo-labels by taking the intersections of the pseudo-labels
generated from different augmentations of the same image. Our framework generates pseudo-labels and then retrains the model with human-annotated and
pseudo-labelled data. This procedure is repeated for several iterations. As a
result, our approach achieves the new state-of-the-art on PascalVOC12 and
COCO-stuff datasets in the challenging generalized zero-label semantic
segmentation setting, surpassing other existing methods addressing this task
with more complex strategies.
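A short PyTorch sketch of the intersection-based consistency filter described above (the two-augmentation case, with an assumed ignore index; aligning the augmentations back to a common pixel grid is omitted): a pseudo-label is kept for an unlabeled pixel only if the predictions from both augmentations agree.

```python
import torch

def intersect_pseudo_labels(logits_aug1, logits_aug2, ignore_index=255):
    """logits_augX: (B, num_classes, H, W) predictions for two augmentations of
    the same images, mapped back to the same pixel grid. Returns (B, H, W)
    pseudo-labels, with disagreeing pixels set to ignore_index."""
    p1 = logits_aug1.argmax(dim=1)
    p2 = logits_aug2.argmax(dim=1)
    return torch.where(p1 == p2, p1, torch.full_like(p1, ignore_index))
```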
|
We consider a diffusion given by a small noise perturbation of a dynamical
system driven by a potential function with a finite number of local minima. The
classical results of Freidlin and Wentzell show that the time this diffusion
spends in the domain of attraction of one of these local minima is
approximately exponentially distributed and hence the diffusion should behave
approximately like a Markov chain on the local minima. By the work of Bovier
and collaborators, the local minima can be associated with the small
eigenvalues of the diffusion generator. In Part I of this work, by applying a
Markov mapping theorem, we used the eigenfunctions of the generator to couple
this diffusion to a Markov chain whose generator has eigenvalues equal to the
eigenvalues of the diffusion generator that are associated with the local
minima and established explicit formulas for conditional probabilities
associated with this coupling. The fundamental question now becomes to relate
the coupled Markov chain to the approximate Markov chain suggested by the
results of Freidlin and Wentzell. In this paper, we take up this question and
provide a complete analysis of this relationship in the special case of a
double-well potential in one dimension.
|
To reconcile the two experimental findings on La_{2-x}Sr_{x}CuO_4, namely,
Fermi surface (FS) observed by angle-resolved photoemission spectroscopy and
sharp incommensurate magnetic peaks by neutron scattering, we propose a picture
that a quasi-one-dimensional FS (q-1dFS) is realized in each CuO_{2} plane
whose q-1d direction alternates along the c-axis.
|
We consider the problem of finding the subset of order statistics that
contains the most information about a sample of random variables drawn
independently from some known parametric distribution. We leverage
information-theoretic quantities, such as entropy and mutual information, to
quantify the level of informativeness and rigorously characterize the amount of
information contained in any subset of the complete collection of order
statistics. As an example, we show how these informativeness metrics can be
evaluated for a sample of discrete Bernoulli and continuous Uniform random
variables. Finally, we show how our most informative order statistics framework can be applied in image processing. Specifically, we
investigate how the proposed measures can be used to choose the coefficients of
the L-estimator filter to denoise an image corrupted by random noise. We show
that both for discrete (e.g., salt-and-pepper noise) and continuous (e.g., mixed
Gaussian noise) noise distributions, the proposed method is competitive with
off-the-shelf filters, such as the median and the total variation filters, as
well as with wavelet-based denoising methods.
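An illustrative NumPy sketch of an L-estimator filter of the kind referred to above (window size and border handling are arbitrary choices): each output pixel is a weighted sum of the sorted values in its neighbourhood, so a one-hot middle weight recovers the median filter.

```python
import numpy as np

def l_estimator_filter(image, coeffs, size=3):
    """image: 2-D array; coeffs: length size*size weights applied to the sorted
    window values (should sum to one). Returns the filtered image."""
    pad = size // 2
    padded = np.pad(image, pad, mode='reflect')
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = np.sort(padded[i:i + size, j:j + size], axis=None)
            out[i, j] = np.dot(coeffs, window)
    return out

# Example: the median filter as a special case of the L-estimator
# coeffs = np.zeros(9); coeffs[4] = 1.0
```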
|
In [6] the authors gave a generalization of the concept of Igusa-Todorov
algebra and proved that those algebras, named Lat-Igusa-Todorov (or LIT for
short), satisfy the finitistic dimension conjecture. In this paper we explore
the scope of that generalization and give conditions for a triangular matrix
algebra to be LIT in terms of the algebras and the module used in its
definition.
|
Classical machine learning (ML) provides a potentially powerful approach to
solving challenging quantum many-body problems in physics and chemistry.
However, the advantages of ML over more traditional methods have not been
firmly established. In this work, we prove that classical ML algorithms can
efficiently predict ground state properties of gapped Hamiltonians in finite
spatial dimensions, after learning from data obtained by measuring other
Hamiltonians in the same quantum phase of matter. In contrast, under widely
accepted complexity theory assumptions, classical algorithms that do not learn
from data cannot achieve the same guarantee. We also prove that classical ML
algorithms can efficiently classify a wide range of quantum phases of matter.
Our arguments are based on the concept of a classical shadow, a succinct
classical description of a many-body quantum state that can be constructed in
feasible quantum experiments and be used to predict many properties of the
state. Extensive numerical experiments corroborate our theoretical results in a
variety of scenarios, including Rydberg atom systems, 2D random Heisenberg
models, symmetry-protected topological phases, and topologically ordered
phases.
|
We investigate a mechanism for a super-massive black hole at the center of a
galaxy to wander in the nuclear region. We suppose a situation in which the central black hole tends to be moved by the gravitational attractions of nearby molecular clouds in a nuclear bulge but is braked via dynamical friction by the ambient stars there. We estimate the approximate kinetic energy of the black hole in an equilibrium between the energy gain rate through the gravitational attractions and the energy loss rate through dynamical friction, in a nuclear bulge composed of a nuclear stellar disk and a nuclear stellar cluster, as observed in our Galaxy. The wandering distance of the black hole in the gravitational potential of the nuclear bulge is evaluated to reach as much as several tens of parsecs when the black hole mass is relatively small.
The distance, however, shrinks as the black hole mass increases and the
equilibrium solution between the energy gain and loss disappears when the black
hole mass exceeds an upper limit. As a result, we can expect the following
scenario for the evolution of the black hole mass: When the black hole mass is
smaller than the upper limit, mass accretion of the interstellar matter in the circum-nuclear region, which drives the AGN activity, increases the black hole mass. However, when the mass reaches the upper limit, the black hole loses the balancing force against the dynamical friction and starts spiraling downward toward the gravitational center. From simple parameter scaling, the upper mass
limit of the black hole is found to be proportional to the bulge mass and this
could explain the observed correlation of the black hole mass with the bulge
mass.
|
We study the estimation of the linear discriminant with projection pursuit, a
method that is blind in the sense that it does not use the class labels in the
estimation. Our viewpoint is asymptotic and, as our main contribution, we
derive central limit theorems for estimators based on three different
projection indices, skewness, kurtosis and their convex combination. The
results show that in each case the limiting covariance matrix is proportional
to that of linear discriminant analysis (LDA), an unblind estimator of the
discriminant. An extensive comparative study between the asymptotic variances
reveals that projection pursuit is able to achieve efficiency equal to LDA when
the groups are arbitrarily well-separated and their sizes are reasonably
balanced. We conclude with a real data example and a simulation study
investigating the validity of the obtained asymptotic formulas for finite
samples.
|
Different from traditional knowledge graphs (KGs) where facts are represented
as entity-relation-entity triplets, hyper-relational KGs (HKGs) allow triplets
to be associated with additional relation-entity pairs (a.k.a qualifiers) to
convey more complex information. How to effectively and efficiently model the
triplet-qualifier relationship for prediction tasks such as HKG completion is
an open challenge for research. This paper proposes to improve the
best-performing method in HKG completion, namely STARE, by introducing two
novel revisions: (1) Replacing the computation-heavy graph neural network
module with light-weight entity/relation embedding processing techniques for
efficiency improvement without sacrificing effectiveness; (2) Adding a
qualifier-oriented auxiliary training task for boosting the prediction power of
our approach on HKG completion. The proposed approach consistently outperforms
STARE in our experiments on three benchmark datasets, with significantly
improved computational efficiency.
|
Although a small number of works conduct patent research by building knowledge graphs, they do not construct a patent knowledge graph from patent documents and combine the latest natural language processing methods to mine the rich semantic relationships hidden in existing patents and to predict possible new patents. In this paper, we propose a new patent vacancy prediction
approach named PatentMiner to mine rich semantic knowledge and predict new
potential patents based on knowledge graph (KG) and graph attention mechanism.
Firstly, a patent knowledge graph over time (e.g. by year) is constructed by carrying out named entity recognition and relation extraction on patent documents. Secondly, the Common Neighbor Method (CNM), Graph Attention Networks
(GAT) and Context-enhanced Graph Attention Networks (CGAT) are proposed to
perform link prediction in the constructed knowledge graph to dig out the
potential triples. Finally, patents are defined on the knowledge graph by means of co-occurrence relationships; that is, each patent is represented as a fully
connected subgraph containing all its entities and co-occurrence relationships
of the patent in the knowledge graph. Furthermore, we propose a new patent prediction task which predicts a fully connected subgraph with newly added prediction links as a new patent. The experimental results demonstrate that our proposed patent prediction approach can correctly predict new patents and that the Context-enhanced Graph Attention Networks are much better than the baseline. Meanwhile, our proposed patent vacancy prediction task still has significant room for improvement.
|
The fast-growing Emerging Market (EM) economies and their improved
transparency and liquidity have attracted international investors. However, external price shocks can result in a higher level of volatility as well as domestic policy instability. Therefore, an efficient risk measure and hedging
strategies are needed to help investors protect their investments against this
risk. In this paper, a daily systemic risk measure, called FRM (Financial Risk
Meter) is proposed. The FRM-EM is applied to capture systemic risk behavior
embedded in the returns of the 25 largest EM financial institutions (FIs), covering the BRIMST countries (Brazil, Russia, India, Mexico, South Africa, and Turkey), and thereby reflects the financial linkages between these economies. Concerning the macro factors, in addition to those of Adrian and Brunnermeier (2016), we include the EM sovereign yield spreads over the respective US Treasuries and the currencies of the above-mentioned countries. The results indicate that the FRM of EM FIs reached its maximum during the US financial crisis, followed by the COVID-19 crisis, and that the macro factors explain the BRIMST FIs with varying degrees of sensitivity. We
then study the relationship between those factors and the tail event network
behavior to build our policy recommendations, helping investors choose suitable markets for investment and tail-event optimized portfolios. For
that purpose, an overlapping region between portfolio optimization strategies
and FRM network centrality is developed. We propose a robust and
well-diversified tail-event and cluster risk-sensitive portfolio allocation
model and compare it to more classical approaches
|
Deep neural networks offer numerous potential applications across geoscience,
for example, one could argue that they are the state-of-the-art method for
predicting faults in seismic datasets. In quantitative reservoir
characterization workflows, it is common to incorporate the uncertainty of
predictions; thus, such subsurface models should provide calibrated probabilities
and the associated uncertainties in their predictions. It has been shown that
popular Deep Learning-based models are often miscalibrated, and due to their
deterministic nature, provide no means to interpret the uncertainty of their
predictions. We compare three different approaches to obtaining probabilistic
models based on convolutional neural networks in a Bayesian formalism, namely
Deep Ensembles, Concrete Dropout, and Stochastic Weight Averaging-Gaussian
(SWAG). These methods are consistently applied to fault detection case studies
where Deep Ensembles use independently trained models to provide fault
probabilities, Concrete Dropout represents an extension to the popular Dropout
technique to approximate Bayesian neural networks, and finally, we apply SWAG,
a recent method that is based on the Bayesian inference equivalence of
mini-batch Stochastic Gradient Descent. We provide quantitative results in
terms of model calibration and uncertainty representation, as well as
qualitative results on synthetic and real seismic datasets. Our results show
that the approximate Bayesian methods, Concrete Dropout and SWAG, both provide
well-calibrated predictions and uncertainty attributes at a lower computational
cost when compared to the baseline Deep Ensemble approach. The resulting
uncertainties also offer a possibility to further improve the model performance
as well as enhancing the interpretability of the models.
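As an illustration of the Deep Ensemble baseline mentioned above, the following sketch averages the outputs of independently trained fault-detection CNNs to obtain a fault-probability map and a predictive-entropy uncertainty map. The network, input size, and ensemble size are placeholder assumptions, not the study's setup.

    # Illustrative sketch: ensemble-averaged fault probabilities and a
    # predictive-entropy uncertainty attribute from a deep ensemble.
    import torch

    def make_cnn():
        # Stand-in for an independently trained fault-segmentation network.
        return torch.nn.Sequential(torch.nn.Conv2d(1, 8, 3, padding=1),
                                   torch.nn.ReLU(),
                                   torch.nn.Conv2d(8, 1, 3, padding=1))

    ensemble = [make_cnn() for _ in range(5)]        # 5 independently trained members
    seismic = torch.randn(1, 1, 128, 128)            # dummy seismic patch

    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(m(seismic)) for m in ensemble])
    mean_prob = probs.mean(dim=0)                    # ensemble fault probability
    eps = 1e-8
    entropy = -(mean_prob * (mean_prob + eps).log()
                + (1 - mean_prob) * (1 - mean_prob + eps).log())  # uncertainty map
    print(mean_prob.shape, entropy.max().item())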
|
Forecasting competitions are the equivalent of laboratory experimentation
widely used in physical and life sciences. They provide useful, objective
information to improve the theory and practice of forecasting, advancing the
field, expanding its usage and enhancing its value to decision and
policymakers. We describe ten design attributes to be considered when
organizing forecasting competitions, taking into account trade-offs between
optimal choices and practical concerns like costs, as well as the time and
effort required to participate in them. Consequently, we map all major past
competitions in respect to their design attributes, identifying similarities
and differences between them, as well as design gaps, and making suggestions
about the principles to be included in future competitions, putting a
particular emphasis on learning as much as possible from their implementation
in order to help improve forecasting accuracy and uncertainty. We note that
the task of forecasting often presents a multitude of challenges that can be
difficult to capture in a single forecasting contest. To assess the caliber
of a forecaster, we, therefore, propose that organizers of future competitions
consider a multi-contest approach. We suggest the idea of a forecasting
"athlon", where different challenges of varying characteristics take place.
|
We consider the problem of online reinforcement learning for the Stochastic
Shortest Path (SSP) problem modeled as an unknown MDP with an absorbing state.
We propose PSRL-SSP, a simple posterior sampling-based reinforcement learning
algorithm for the SSP problem. The algorithm operates in epochs. At the
beginning of each epoch, a sample is drawn from the posterior distribution on
the unknown model dynamics, and the optimal policy with respect to the drawn
sample is followed during that epoch. An epoch completes if either the number
of visits to the goal state in the current epoch exceeds that of the previous
epoch, or the number of visits to any of the state-action pairs is doubled. We
establish a Bayesian regret bound of $O(B_\star S\sqrt{AK})$, where $B_\star$
is an upper bound on the expected cost of the optimal policy, $S$ is the size
of the state space, $A$ is the size of the action space, and $K$ is the number
of episodes. The algorithm only requires the knowledge of the prior
distribution, and has no hyper-parameters to tune. It is the first such
posterior sampling algorithm, and it numerically outperforms previously proposed
optimism-based algorithms.
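The epoch-switching rule described above can be sketched as follows. This is a schematic of the control flow only; sample_posterior, solve_ssp, and the env object are hypothetical placeholders and the snippet is not the authors' implementation.

    # Schematic sketch of the PSRL-SSP epoch structure described above.
    from collections import defaultdict

    def psrl_ssp(env, prior, num_episodes):
        visits = defaultdict(int)          # state-action visit counts
        goal_visits_prev_epoch = 0
        posterior = prior
        while env.episodes_done < num_episodes:
            model = sample_posterior(posterior)          # draw MDP from the posterior
            policy = solve_ssp(model)                    # optimal policy for the sample
            goal_visits, epoch_start_counts = 0, dict(visits)
            while True:
                s = env.state
                a = policy(s)
                visits[(s, a)] += 1
                s_next, cost, reached_goal = env.step(a)
                posterior = posterior.update(s, a, s_next, cost)
                if reached_goal:
                    goal_visits += 1
                # Epoch ends when goal visits exceed the previous epoch's count,
                # or some state-action visit count has doubled since the epoch began.
                doubled = visits[(s, a)] >= 2 * max(epoch_start_counts.get((s, a), 0), 1)
                if goal_visits > goal_visits_prev_epoch or doubled:
                    goal_visits_prev_epoch = goal_visits
                    break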
|
In our research, we focus on the response to the non-consensual distribution
of intimate or sexually explicit digital images of adults, also referred to as
revenge porn, from the point of view of the victims. In this paper, we present
a preliminary expert analysis of the process for reporting revenge porn abuses
in selected content sharing platforms. Among these, we included social
networks, image hosting websites, video hosting platforms, forums, and
pornographic sites. We looked at the way to report abuse, concerning both the
non-consensual online distribution of private sexual image or video (revenge
pornography), as well as the use of deepfake techniques, where the face of a
person can be replaced on original visual content with the aim of portraying
the victim in the context of sexual behaviours. This preliminary analysis is
directed to understand the current practices and potential issues in the
procedures designed by the providers for reporting these abuses.
|
In January 2019, the UK Government published its Maritime 2050 on Navigating
the Future strategy. In the strategy, the government highlighted the importance
of digitalization (with well-designed regulatory support) to achieve its goal
of ensuring that the UK plays a global leadership role in the maritime sector.
Ports, the gateways for 95% of UK trade movements, were identified as key sites
for investment in technological innovation. The government identified the
potential of the Internet of Things (IoT), in conjunction with other
information-sharing technologies, such as shared data platforms, and Artificial
Intelligence applications (AI), to synchronize processes within the port
ecosystem leading to improved efficiency, safety, and environmental benefits,
including improved air quality and lower greenhouse gas emissions.
|
Consider a nonuniformly hyperbolic map $ T $ modelled by a Young tower with
tails of the form $ O(n^{-\beta}) $, $ \beta>2 $. We prove optimal moment
bounds for Birkhoff sums $ \sum_{i=0}^{n-1}v\circ T^i $ and iterated sums $
\sum_{0\le i<j<n}v\circ T^i\, w\circ T^j $, where $ v,w:M\to \Bbb{R}$ are
(dynamically) H\"older observables. Previously iterated moment bounds were only
known for $ \beta>5$. Our method of proof is as follows: (i) prove that $ T $
satisfies an abstract functional correlation bound, (ii) use a weak dependence
argument to show that the functional correlation bound implies moment
estimates.
Such iterated moment bounds arise when using rough path theory to prove
deterministic homogenisation results. Indeed, by a recent result of Chevyrev,
Friz, Korepanov, Melbourne & Zhang, we have convergence to an It\^o diffusion for
fast-slow systems of the form \[
x^{(n)}_{k+1}=x_k^{(n)}+n^{-1}a(x_k^{(n)},y_k)+n^{-1/2}b(x_k^{(n)},y_k) , \quad
y_{k+1}=T y_k \] in the optimal range $ \beta>2. $
|
Recently, vision transformers and MLP-based models have been developed in
order to address some of the prevalent weaknesses in convolutional neural
networks. Due to the novelty of transformers being used in this domain along
with the self-attention mechanism, it remains unclear to what degree these
architectures are robust to corruptions. Despite some works proposing that data
augmentation remains essential for a model to be robust against corruptions, we
propose to explore the impact that the architecture has on corruption
robustness. We find that vision transformer architectures are inherently more
robust to corruptions than the ResNet-50 and MLP-Mixers. We also find that
vision transformers with 5 times fewer parameters than a ResNet-50 exhibit a
stronger shape bias. Our code is available for reproducing our results.
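A minimal sketch of the kind of corruption-robustness comparison described above is given below: accuracy under additive Gaussian noise of growing severity for a vision transformer and a ResNet-50. The model names come from the timm library and the loader is a random-data stand-in, so the numbers are meaningless placeholders; pretrained weights are downloaded on first use.

    # Sketch of a corruption-robustness check (our illustration, not the paper's code).
    import torch, timm

    # Dummy stand-in for an ImageNet-style validation loader.
    val_loader = [(torch.randn(4, 3, 224, 224), torch.randint(0, 1000, (4,)))]

    def accuracy_under_noise(model, loader, sigma):
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in loader:
                noisy = images + sigma * torch.randn_like(images)
                preds = model(noisy).argmax(dim=1)
                correct += (preds == labels).sum().item()
                total += labels.numel()
        return correct / total

    models = {name: timm.create_model(name, pretrained=True)
              for name in ("vit_small_patch16_224", "resnet50")}
    for sigma in (0.0, 0.1, 0.2):
        for name, m in models.items():
            print(name, sigma, accuracy_under_noise(m, val_loader, sigma))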
|
The recent trend towards increasingly large machine learning models requires both
training and inference tasks to be distributed. Considering the huge cost of
training these models, it is imperative to unlock optimizations in computation
and communication to obtain the best performance. However, the current logical
separation between computation and communication kernels in deep learning
frameworks misses optimization opportunities across this barrier. Breaking
this abstraction with a holistic consideration can unlock many optimizations
that improve the performance of distributed workloads. Manually applying
these optimizations requires modifying the underlying computation and
communication libraries for each scenario, which is time-consuming and
error-prone.
Therefore, we present CoCoNeT, with a DSL to express a program with both
computation and communication. CoCoNeT contains several machine learning aware
transformations to optimize a program and a compiler to generate high
performance kernels. Providing both computation and communication as first
class constructs allows users to work on a high-level abstraction and apply
powerful optimizations, such as fusion or overlapping of communication and
computation. CoCoNeT enables us to optimize data-, model-, and pipeline-parallel
workloads in large language models with only a few lines of code. Experiments
show CoCoNeT significantly outperforms state-of-the-art distributed machine
learning implementations.
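As a generic illustration of the "overlapping of communication and computation" optimization mentioned above (not CoCoNeT's DSL), the following sketch launches an asynchronous all-reduce and performs an independent matrix multiply while it is in flight, using torch.distributed. It assumes the script is launched with torchrun so a default process group can be initialized.

    # Generic illustration of overlapping communication with computation.
    import torch
    import torch.distributed as dist

    def overlapped_step(grad_chunk, activations, weights):
        # Kick off the gradient all-reduce without blocking...
        handle = dist.all_reduce(grad_chunk, op=dist.ReduceOp.SUM, async_op=True)
        # ...and do useful, independent computation while it is in flight.
        out = activations @ weights
        handle.wait()            # synchronize before the reduced gradient is used
        return out, grad_chunk

    if __name__ == "__main__":
        dist.init_process_group(backend="gloo")   # "nccl" on GPUs
        g = torch.ones(1024)
        a, w = torch.randn(64, 256), torch.randn(256, 256)
        out, g_sum = overlapped_step(g, a, w)
        print(out.shape, g_sum[0].item())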
|
Rapid progress in additive manufacturing methods has created a new class of
ultralight and strong architected metamaterials that resemble periodic truss
structures. The mechanical performance of these metamaterials with a very large
number of unit cells is ultimately limited by their tolerance to damage and
defects, but an understanding of this sensitivity has remained elusive. Using a
stretching-dominated micro-architecture and metamaterial specimens comprising
millions of unit-cells we show that not only is the stress intensity factor, as
used in conventional elastic fracture mechanics, insufficient to characterize
fracture but also that conventional fracture testing protocols are inadequate.
Via a combination of numerical calculations and asymptotic analyses, we extend
the ideas of fracture mechanics and develop a general test and design protocol
for the failure of metamaterials.
|
A fundamental advantage of Petri net models is the possibility to
automatically compute useful system invariants from the syntax of the net.
Classical techniques used for this are place invariants, P-components, siphons
or traps. Recently, Bozga et al. have presented a novel technique for the
\emph{parameterized} verification of safety properties of systems with a ring
or array architecture. They show that the statement "for every instance
of the parameterized Petri net, all markings satisfying the linear invariants
associated to all the P-components, siphons and traps of the instance are safe"
can be encoded in WS1S and checked using tools like MONA. However, while
the technique certifies that this infinite set of linear invariants extracted
from P-components, siphons or traps are strong enough to prove safety, it does
not return an explanation of this fact understandable by humans. We present a
CEGAR loop that constructs a \emph{finite} set of \emph{parameterized}
P-components, siphons or traps, whose infinitely many instances are strong
enough to prove safety. For this we design parameterization procedures for
different architectures.
|
I discuss possible consequences of A. D. Sakharov's hypothesis of
cosmological transitions with changes in the signature of the metric, based on
the path integral approach. This hypothesis raises a number of mathematical and
philosophical questions. Mathematical questions concern the definition of the
path integral to include integration over spacetime regions with different
signatures of the metric. One possible way to describe the changes in the
signature is to admit time and space coordinates to be purely imaginary. It may
look like a generalization of what we have in the case of pseudo-Riemannian
manifolds with a non-trivial topology. The signature in these regions can be
fixed by special gauge conditions on components of the metric tensor. The
problem is what boundary conditions should be imposed on the boundaries of
these regions and how they should be taken into account in the definition of
the path integral. The philosophical question is what distinguishes the time
coordinate from the other coordinates, apart from the sign of the corresponding
principal value of the metric tensor. In particular, I try to speculate how the existence
of the regions with different signature can affect the evolution of the
Universe.
|
Decomposing taxes by source (labor, capital, sales), we analyze the impact of
automation on (1) tax revenues and (2) the structure of taxation, and (3) identify
channels of impact, in 19 EU countries during 1995-2016. Robots and Information
and Communication Technologies (ICT) are different technologies designed to
automate manual (robots) or cognitive tasks (ICT).
Until 2007, robot diffusion led to decreasing factor and tax income, and a
shift from taxes on capital to goods. ICTs changed the structure of taxation
from capital to labor. We find decreasing employment, but increasing wages and
labor income. After 2008, robots have no effect but we find an ICT-induced
increase in capital income, a rise of services, but no effect on taxation.
Automation goes through different phases with different economic impacts which
affect the amount and structure of taxes. Whether automation erodes taxation
depends (a) on the technology type, (b) the stage of diffusion and (c) local
conditions.
|
In this paper, we propose the novel problem of Subteam Replacement: given a
team of people embedded in a social network to complete a certain task, and a
subset of members - subteam - in this team which have become unavailable, find
another set of people who can perform the subteam's role in the larger team.
The ability to simultaneously replace multiple team members is highly
appreciated in settings such as corporate management where team structure is
highly volatile and large-scale changes are commonplace. We conjecture that a
good candidate subteam should have high skill and structural similarity with
the replaced subteam while sharing a similar connection with the larger team as
a whole. Based on this conjecture, we propose a novel graph kernel which
evaluates the goodness of candidate subteams in this holistic way and is freely
adjustable to the needs of the situation. To tackle the significant
computational difficulties, we combine our kernel with a fast approximate
algorithm which (a) employs effective pruning strategies, (b) exploits the
similarity between candidate team structures to reduce kernel computations, and
(c) features a solid theoretical bound obtained from mathematical properties of
the problem. We extensively test our solution on both synthetic and real
datasets to demonstrate its consistency and efficiency. Our proposed graph
kernel results in more suitable replacements being proposed compared to graph
kernels used in previous work, and our algorithm consistently outperforms
alternative choices by finding near-optimal solutions while scaling linearly
with the size of the replaced subteam.
|
We describe our straightforward approach for Tasks 5 and 6 of the 2021 Social
Media Mining for Health Applications (SMM4H) shared tasks. Our system is based
on fine-tuning DistilBERT on each task, as well as first fine-tuning the
model on the other task. We explore how much fine-tuning is necessary for
accurately classifying tweets as containing self-reported COVID-19 symptoms
(Task 5) or whether a tweet related to COVID-19 is self-reporting, non-personal
reporting, or a literature/news mention of the virus (Task 6).
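A minimal sketch of such a fine-tuning setup is shown below, using the Hugging Face transformers Trainer. It is our own illustration rather than the authors' pipeline; the CSV files, column names, and hyperparameters are hypothetical, and the checkpoint path may point to a model already fine-tuned on the other task.

    # Minimal sketch: fine-tuning DistilBERT on a tweet-classification task.
    from datasets import load_dataset
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              TrainingArguments, Trainer)

    ckpt = "distilbert-base-uncased"          # or a checkpoint fine-tuned on the other task
    tok = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)

    # Hypothetical CSV files with "tweet" and "label" columns.
    ds = load_dataset("csv", data_files={"train": "task5_train.csv",
                                         "validation": "task5_dev.csv"})
    ds = ds.map(lambda ex: tok(ex["tweet"], truncation=True, padding="max_length",
                               max_length=128), batched=True)
    ds = ds.rename_column("label", "labels")

    args = TrainingArguments(output_dir="smm4h_task5", num_train_epochs=3,
                             per_device_train_batch_size=16, learning_rate=2e-5)
    Trainer(model=model, args=args, train_dataset=ds["train"],
            eval_dataset=ds["validation"]).train()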
|
The multi-point Taylor polynomial, i.e. the unique polynomial $P_{k,m}(x)$ of
minimum degree ($mk+m-1$) which interpolates a function's derivatives at
multiple points, is presented in its explicit form. A proof that this
expression satisfies the multi-point Taylor polynomial's defining property is
given. Namely, it is proven that for a $k$-differentiable function $f$ and a
set of $m$ distinct points $\{a_1,...,a_m\}$, this polynomial satisfies
$P^{(n)}_{k,m}(a_i) = f^{(n)}(a_i)\quad \forall \, i = 1,...,m\quad
\&\quad \forall \, n = 0,...,k$. A discussion of previous expressions
presented in the literature, which mostly consisted of recursion formulas
rather than explicit formulas, is also given.
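The defining property can be checked numerically with a Hermite-type interpolant; the sketch below uses scipy's KroghInterpolator, which interprets repeated abscissae as successive derivative conditions. This illustrates the property only and is not the paper's explicit formula; the choice f = exp, k = 2, and points {0, 1} is an arbitrary example.

    # Numerical illustration of the defining property P^{(n)}(a_i) = f^{(n)}(a_i).
    import numpy as np
    from scipy.interpolate import KroghInterpolator

    k, points = 2, [0.0, 1.0]                   # m = 2, degree mk+m-1 = 5
    # Repeated abscissae tell KroghInterpolator to match successive derivatives;
    # for f = exp, every derivative at a_i equals exp(a_i).
    xi = np.repeat(points, k + 1)
    yi = np.exp(xi)
    P = KroghInterpolator(xi, yi)

    for a in points:
        for n in range(k + 1):
            # P.derivatives returns [P(a), P'(a), P''(a), ...]
            print(n, a, P.derivatives(a, der=k + 1)[n], np.exp(a))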
|
We propose a new sensing method based on the measurement of the second-order
autocorrelation of the output of micro- and nanolasers with intensity feedback.
The sensing function is implemented through the feedback-induced threshold
shift, whose photon statistics is controlled by the feedback level in a
characteristic way for different laser sizes. The specific response offers
performances which can be adapted to different kinds of sensors. We propose the
implementation of two schemes capable of providing a quantitative sensing
signal and covering a broad range of feedback levels: one utilizes the
evolution of g$^{(2)}$(0), while the other uses the ratio between the central
and side peaks in g$^{(2)}(\tau)$. Laser-threshold-based sensing could, thanks to its
potential sensitivity, gain relevance in biomolecular diagnostics and security
monitoring.
|
Technical debt has become a well-known metaphor among software professionals,
visualizing how shortcuts taken during development can accumulate and become a
burden for software projects. In the traditional notion of technical debt,
software developers borrow from the maintainability and extensibility of a
software system; thus, they are the ones paying the interest. User experience
(UX) debt, on the other hand, focuses on shortcuts taken to speed up
development at the expense of subpar usability, thus mainly borrowing from
users' efficiency. With this article, we want to build awareness for this
often-overlooked form of technical debt by outlining classes of UX debts that
we observed in practice and by pointing to the lack of research and tool
support targeting UX debt in general.
|
High spectral resolution observations toward the low mass-loss rate C-rich,
J-type AGB star Y CVn have been carried out at 7.5, 13.1 and 14.0 um with
SOFIA/EXES and IRTF/TEXES. Around 130 HCN and H13CN lines of bands v2, 2v2,
2v2-v2, 3v2-2v2, 3v2-v2, and 4v2-2v2 have been identified involving lower
levels with energies up to ~3900 K. These lines have been complemented with the
pure rotational lines J=1-0 and 3-2 of the vibrational states up to 2v2
acquired with the IRAM 30 m telescope, and with the continuum taken with ISO.
We have analyzed the data with a ro-vibrational diagram and a code which models
the absorption and emission of the circumstellar envelope of an AGB star. The
continuum is produced by the star with a small contribution from dust grains
comprising warm to hot SiC and cold amorphous carbon. The HCN abundance
distribution seems to be anisotropic. The ejected gas is accelerated up to the
terminal velocity (~8 km/s) from the photosphere to ~3R* but there is evidence
of higher velocities (>9-10 km/s) beyond this region. In the vicinity of Y CVn,
the line widths are as high as ~10 km/s, which implies a maximum turbulent
velocity of 6 km/s or the existence of other physical mechanisms probably
related to matter ejection that involve higher gas expansion velocities than
expected. HCN is rotationally and vibrationally out of LTE throughout the whole
envelope. A difference of about 1500 K in the rotational temperature at the
photosphere is needed to explain the observations at 7.5 and 13-14 um. Our
analysis finds a total HCN column density that ranges from ~2.1E+18 to 3.5E+18
cm^{-2}, an abundance with respect to H2 of 3.5E-05 to 1.3E-04, and a 12C/13C
isotopic ratio of ~2.5 throughout the whole envelope.
|
We consider a randomised version of Kleene's realisability interpretation of
intuitionistic arithmetic in which computability is replaced with randomised
computability with positive probability. In particular, we show that (i) the
set of randomly realisable statements is closed under intuitionistic
first-order logic, but (ii) different from the set of realisable statements,
that (iii) "realisability with probability 1" is the same as realisability and
(iv) that the axioms of bounded Heyting's arithmetic are randomly realisable,
but some instances of the full induction scheme fail to be randomly realisable.
|
Photoluminescence (PL) is a light-matter quantum interaction associated with
the chemical potential of light formulated by the Generalized Planck's law.
Without knowing the inherent temperature dependence of chemical potential, the
Generalized Planck's law is insufficient to characterize PL(T). Recent
experiments showed that PL at low temperatures conserves the emitted photon
rate, accompanied by a blue-shift and transition to thermal emission at a
higher temperature. Here, we theoretically study temperature-dependent PL by
including phononic interactions in a detailed balance analysis. Our solution
validates recent experiments and predicts important relations, including i) An
inherent relation between emissivity and the quantum efficiency of a system,
ii) A universal point defined by the pump and the temperature where the
emission rate is fixed to any material, iii) A new phonon-induced quenching
mechanism, and iv) Thermalization of the photon spectrum. These findings are
relevant to and important for all photonic fields where the temperature is
dominant.
|
An important step in the task of neural network design, such as
hyper-parameter optimization (HPO) or neural architecture search (NAS), is the
evaluation of a candidate model's performance. Given fixed computational
resources, one can either invest more time training each model to obtain more
accurate estimates of final performance, or spend more time exploring a greater
variety of models in the configuration space. In this work, we aim to optimize
this exploration-exploitation trade-off in the context of HPO and NAS for image
classification by accurately approximating a model's maximal performance early
in the training process. In contrast to recent accelerated NAS methods
customized for certain search spaces, e.g., requiring the search space to be
differentiable, our method is flexible and imposes almost no constraints on the
search space. Our method uses the evolution history of features of a network
during the early stages of training to build a proxy classifier that matches
the peak performance of the network under consideration. We show that our
method can be combined with multiple search algorithms to find better solutions
to a wide range of tasks in HPO and NAS. Using a sampling-based search
algorithm and parallel computing, our method can find an architecture which is
better than DARTS and with an 80% reduction in wall-clock search time.
|
We present an analysis of the kinematics of 14 satellites of the Milky Way
(MW). We use proper motions (PMs) from the $Gaia$ Early Data Release 3 (EDR3)
and line-of-sight velocities ($v_{\mathrm{los}}$) available in the literature
to derive the systemic 3D motion of these systems. For six of them, namely the
Carina, Draco, Fornax, Sculptor, Sextans, and Ursa Minor dwarf spheroidal
galaxies (dSph), we study the internal kinematics projecting the stellar PMs
into radial, $V_R$ (expansion/contraction), and tangential, $V_T$ (rotation),
velocity components with respect to the centre of mass. We find significant
rotation in the Carina ($|V_T| = 9.6 \pm 4.5 \ {\rm{km \ s^{-1}}}\>$), Fornax
($|V_T| = 2.8 \pm 1.3 \ {\rm{km \ s^{-1}}}\>$), and Sculptor ($|V_T| = 3.0 \pm
1.0 \ {\rm{km \ s^{-1}}}\>$) dSphs. Besides the Sagittarius dSph, these are the
first measurements of internal rotation in the plane of the sky in the MW's
classical dSphs. All galaxies except Carina show $|V_T| / \sigma_v < 1$. We
find that slower rotators tend to show, on average, larger sky-projected
ellipticity (as expected for a sample with random viewing angles) and are
located at smaller Galactocentric distances (as expected for tidal stirring
scenarios in which rotation is transformed into random motions as satellites
sink into the parent halo). However, these trends are small and not
statistically significant, indicating that rotation has not played a dominant
role in shaping the 3D structure of these galaxies. Either tidal stirring had a
weak impact on the evolution of these systems or it perturbed them with similar
efficiency regardless of their current Galactocentric distance.
|
In this paper, a numerical study is conducted to investigate boiling of a
cryogen on a solid surface as well as on a liquid surface. Both single-mode and
multi-mode boiling are reported for boiling on a solid surface. In the case of
boiling on a liquid surface, liquid nitrogen is selected as the cryogen
(boiling fluid) and water is chosen as the base fluid (heating fluid).
Different flow instabilities and their underlying consequences during boiling
of a cryogen are also discussed. For the boiling on a solid surface, in the
single mode, bubble growth, its departure, and area weighted average heat flux
are reported, where they increase linearly with increase in the wall superheat.
Asymmetry in the bubble growth and departure of the second batch of vapor
bubbles has been observed due to local fluctuations and turbulence created just
after the pinch-off of the first batch of vapor bubbles in the case of
multi-mode boiling on the solid surface. Boiling of LN2 on a liquid surface is
reported for a base fluid (water) temperature of 300 K. The vapor film
thickness decreases with time, and the minimum film thickness just before
rupture is 7.62 micrometers; the dominance of thermocapillary forces over vapor
thrust causes the vapor film to break at 0.0325 s. The difference in
evaporation rate and vapor generation before and after the vapor film collapse
is significant.
|
Generating videos with content and motion variations is a challenging task in
computer vision. While the recent development of GAN allows video generation
from latent representations, it is not easy to produce videos with particular
content or motion patterns of interest. In this paper, we propose Dual Motion
Transfer GAN (Dual-MTGAN), which takes image and video data as inputs while
learning disentangled content and motion representations. Our Dual-MTGAN is
able to perform deterministic motion transfer and stochastic motion generation.
Based on a given image, the former preserves the input content and transfers
motion patterns observed from another video sequence, and the latter directly
produces videos with plausible yet diverse motion patterns based on the input
image. The proposed model is trained in an end-to-end manner, without the need
to utilize pre-defined motion features like pose or facial landmarks. Our
quantitative and qualitative results confirm the effectiveness and
robustness of our model in addressing such conditioned image-to-video tasks.
|
The infinitely many symmetries of the DS (Davey-Stewartson) system are closely
connected to the integrable deformations of surfaces in $\mathbb{R}^{4}$. In
this paper, we give a direct algorithm to construct the expression of the
DS hierarchy by means of two scalar pseudo-differential operators involving
$\partial$ and $\hat{\partial}$.
|
Young giant planets and brown dwarf companions emit near-infrared radiation
that can be linearly polarized up to several percent. This polarization can
reveal the presence of a circumsubstellar accretion disk, rotation-induced
oblateness of the atmosphere, or an inhomogeneous distribution of atmospheric
dust clouds. We measured the near-infrared linear polarization of 20 known
directly imaged exoplanets and brown dwarf companions with the high-contrast
imager SPHERE-IRDIS at the VLT. We reduced the data using the IRDAP pipeline to
correct for the instrumental polarization and crosstalk with an absolute
polarimetric accuracy <0.1% in the degree of polarization. We report the first
detection of polarization originating from substellar companions, with a
polarization of several tenths of a percent for DH Tau B and GSC 6214-210 B in
H-band. By comparing the measured polarization with that of nearby stars, we
find that the polarization is unlikely to be caused by interstellar dust.
Because the companions have previously measured hydrogen emission lines and red
colors, the polarization most likely originates from circumsubstellar disks.
Through radiative transfer modeling, we constrain the position angles of the
disks and find that the disks must have high inclinations. The presence of
these disks as well as the misalignment of the disk of DH Tau B with the disk
around its primary star suggest in situ formation of the companions. For the 18
other companions, we do not detect significant polarization and place
subpercent upper limits on their degree of polarization. These non-detections
may indicate the absence of circumsubstellar disks, a slow rotation rate of
young companions, the upper atmospheres containing primarily submicron-sized
dust grains, and/or limited cloud inhomogeneity. Finally, we present images of
the circumstellar disks of DH Tau, GQ Lup, PDS 70, Beta Pic, and HD 106906.
|
In superconducting quantum circuits (SQCs), chiral routing of quantum
information is often realized with ferrite circulators, which are usually
bulky, lossy, and require strong magnetic fields. To overcome these problems, we
propose a novel method to realize chiral quantum networks by exploiting giant
atom effects in SQC platforms. By assuming that each coupling point is modulated
in time, the interaction becomes momentum-dependent, and the giant atoms will
chirally emit photons due to interference effects. The chiral factor can
approach 1, and both the emission direction and rate can be freely tuned by the
modulating signals. We demonstrate that the high-fidelity state transfer
between remote giant atoms can be realized. Our proposal can be integrated on
the superconducting chip easily, and has the potential to work as a tunable
toolbox for quantum information processing in future chiral quantum networks.
|
We extend HPQCD's earlier $n_f=4$ lattice-QCD analysis of the ratio of
$\overline{\mathrm{MSB}}$ masses of the $b$ and $c$ quark to include results
from finer lattices (down to 0.03fm) and a new calculation of QED contributions
to the mass ratio. We find that
$\overline{m}_b(\mu)/\overline{m}_c(\mu)=4.586(12)$ at renormalization scale
$\mu=3$\,GeV. This result is nonperturbative. Combining it with HPQCD's recent
lattice QCD$+$QED determination of $\overline{m}_c(3\mathrm{GeV})$ gives a new
value for the $b$-quark mass: $\overline{m}_b(3\mathrm{GeV}) = 4.513(26)$GeV.
The $b$-mass corresponds to $\overline{m}_b(\overline{m}_b, n_f=5) =
4.202(21)$GeV. These results are the first based on simulations that include
QED.
|
In the fractional nonrelativistic potential model, the dissociation of heavy
quarkonium in a hot magnetized medium is investigated. The analytical solution
of the fractional radial Schr\"odinger equation for the hot-magnetized
interaction potential is presented by using the conformable fractional
Nikiforov-Uvarov method. Analytical expressions for the energy eigenvalues and
the radial wave function are obtained for arbitrary quantum numbers. Next, we
study the charmonium and bottomonium binding energies for different magnetic
field values in the thermal medium. The effect of the fractional parameter on
the dissociation temperature is also analyzed for charmonium and bottomonium
in the presence of hot magnetized media. We conclude that the dissociation of
heavy quarkonium in the fractional nonrelativistic potential model is more
practical than the classical nonrelativistic potential model.
|
The study of advanced quantum devices for energy storage has attracted the
attention of the scientific community in the past few years. Although
considerable theoretical progress has been achieved recently, experimental
proposals of platforms operating as quantum batteries under ambient conditions are still
lacking. In this context, this work presents a feasible realization of a
quantum battery in a carboxylate-based metal complex, which can store a finite
amount of extractable work under the form of quantum discord at room
temperature, and recharge by thermalization with a reservoir. Moreover, the
stored work can be evaluated through non-destructive measurements of the
compound's magnetic susceptibility. These results pave the way for the
development of enhanced energy storage platforms through material engineering.
|
We define the doubling zeta integral for smooth families of representations
of classical groups. Following this we prove a rationality result for these
zeta integrals and show that they satisfy a functional equation. Moreover, we
show that there exists an appropriate normalizing factor which allows us to
construct $\gamma$-factors for smooth families out of the functional equation.
We prove that, under certain hypotheses, specializing this $\gamma$-factor at a
point of the family yields the $\gamma$-factor defined by Piateski-Shapiro and
Rallis.
|
Future-generation wireless networks are designed with extremely low delay
requirements, which makes even small additional delays important. On the other
hand, software-defined networking (SDN) has been introduced as a key enabler of
future wireless and cellular networks in order to make them more flexible. In
SDN, a central controller manages all network equipment by setting the
match-action pairs in the flow tables of the devices. However, these flow tables
have limited capacity and thus are not capable of storing the rules of all the
users. In this paper, we consider an SDN-enabled base station (SD-BS) in a cell
equipped with a limited capacity flow table. We analyze the expected delay
incurred in processing of the incoming packets to the SD-BS and present a
mathematical expression for it in terms of density of the users and cell area.
|
We train deep generative models on datasets of reflexive polytopes. This
enables us to compare how well the models have picked up on various global
properties of generated samples. Our datasets are complete in the sense that
every single example, up to changes of coordinate, is included in the dataset.
Using this property we also perform tests checking to what extent the models
are merely memorizing the data. We also train models on the same dataset
represented in two different ways, enabling us to measure which form is easiest
to learn from. We use these experiments to show that deep generative models can
learn to generate geometric objects with non-trivial global properties, and
that the models learn some underlying properties of the objects rather than
simply memorizing the data.
|
Nano-membrane tri-gate beta-gallium oxide ($\beta$-Ga2O3) field-effect
transistors (FETs) on a SiO2/Si substrate fabricated via exfoliation have been
demonstrated for the first time. By employing electron beam lithography, the
minimum-sized features can be defined with a 50 nm fin structure. For a
high-quality interface between $\beta$-Ga2O3 and the gate dielectric,
atomic-layer-deposited 15-nm-thick aluminum oxide (Al2O3) was utilized with
trimethylaluminum (TMA) self-cleaning surface treatment. The fabricated
devices demonstrate an extremely low subthreshold slope (SS) of 61 mV/dec, a
high drain current (IDS) ON/OFF ratio of $1.5\times10^9$, and negligible
transfer characteristic hysteresis. We also experimentally demonstrated the
robustness of these devices with current-voltage (I-V) characteristics measured
at temperatures up to 400 $^\circ$C.
|
This technical report outlines the fundamental workings of the game logic
behind Ludii, a general game system, that can be used to play a wide variety of
games. Ludii is a program developed for the ERC-funded Digital Ludeme Project,
in which mathematical and computational approaches are used to study how games
were played, and spread, throughout history. This report explains how general
game states and equipment are represented in Ludii, and how the rule ludemes
dictating play are implemented behind the scenes, giving some insight into the
core game logic behind the Ludii general game player. This guide is intended to
help game designers using the Ludii game description language to understand it
more completely and make fuller use of its features when describing their
games.
|
In the classical partial vertex cover problem, we are given a graph $G$ and
two positive integers $R$ and $L$. The goal is to check whether there is a
subset $V'$ of $V$ of size at most $R$, such that $V'$ covers at least $L$
edges of $G$. The problem is NP-hard as it includes the Vertex Cover problem.
Previous research has addressed the extension of this problem where one has
weight-functions defined on sets of vertices and edges of $G$. In this paper,
we consider the following version of the problem where on the input we are
given an edge-weighted bipartite graph $G$, and three positive integers $R$,
$S$ and $T$. The goal is to check whether $G$ has a subset $V'$ of vertices of
$G$ of size at most $R$, such that the edges of $G$ covered by $V'$ have weight
at least $S$ and they include a matching of weight at least $T$. In the paper,
we address this problem from the perspective of fixed-parameter tractability.
One of our hardness results is obtained via a reduction from the bi-objective
knapsack problem, which we show to be W[1]-hard with respect to one of the
parameters. We believe that this problem might be useful in obtaining similar
results in other situations.
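For small instances, the decision problem above can be checked by brute force, as in the sketch below: enumerate vertex sets of size at most R, sum the weights of the covered edges, and use networkx to test whether the covered edges contain a matching of weight at least T. The toy graph and weights are illustrative, not from the paper.

    # Brute-force sketch of the decision problem (small instances only).
    from itertools import combinations
    import networkx as nx

    def feasible(G, R, S, T):
        for r in range(R + 1):
            for Vp in combinations(G.nodes, r):
                Vp = set(Vp)
                covered = [(u, v, d["weight"]) for u, v, d in G.edges(data=True)
                           if u in Vp or v in Vp]
                if sum(w for _, _, w in covered) < S:
                    continue
                H = nx.Graph()
                H.add_weighted_edges_from(covered)
                M = nx.max_weight_matching(H)
                if sum(H[u][v]["weight"] for u, v in M) >= T:
                    return True
        return False

    # Toy bipartite instance.
    G = nx.Graph()
    G.add_weighted_edges_from([("a", "x", 3), ("a", "y", 2), ("b", "y", 4)])
    print(feasible(G, R=1, S=5, T=4))   # V' = {"y"}: covered weight 6, matching weight 4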
|
Cytoskeletal networks are the main actuators of cellular mechanics, and a
foundational example for active matter physics. In cytoskeletal networks,
motion is generated on small scales by filaments that push and pull on each
other via molecular-scale motors. These local actuations give rise to large
scale stresses and motion. To understand how microscopic processes can give
rise to self-organized behavior on larger scales, it is important to consider
which mechanisms mediate long-ranged mechanical interactions in these systems. Two
scenarios have been considered in the recent literature. The first are systems
which are relatively sparse, in which most of the large-scale momentum transfer
is mediated by the solvent in which cytoskeletal filaments are suspended. The
second are systems in which filaments are coupled via crosslink molecules
throughout. Here, we review the differences and commonalities between the
physics of these two regimes. We also survey the literature for the numbers
that allow us to place a material within either of these two classes.
|
We are interested in the effect of Dirichlet boundary conditions on the nodal
length of Laplace eigenfunctions. We study random Gaussian Laplace
eigenfunctions on the two dimensional square and find a two terms asymptotic
expansion for the expectation of the nodal length in any square of side larger
than the Planck scale, along a density-one sequence of energy levels. The proof
relies on a new study of lattice points in small arcs, and shows that the said
expectation is independent of the position of the square, giving the same
asymptotic expansion both near and far from the boundaries.
|
As the number of detected gravitational-wave events constantly increases, new
aspects of the internal structure of compact stars have come to
light. A scenario in which a first-order transition takes place inside these
stars is of particular interest as it can lead, under conditions, to a third
gravitationally stable branch (besides white dwarfs and neutron stars). This is
known as the twin star scenario. The new branch yields stars with the same mass
as normal compact stars but quite different radii. In the current work, we
focus on hybrid stars that have undergone a hadron-to-quark phase transition
near their core and on how this new stable configuration arises. Emphasis is
given especially to the aspects of the phase transition and its
parameterization in two different ways, namely with the Maxwell construction
and with the Gibbs construction. Qualitative findings on the mass-radius
relations of these stars will
also be presented.
|
Current densities are induced in the electronic structure of molecules when
they are exposed to external magnetic fields. Aromatic molecular rings sustain
net diatropic ring currents, whereas the net ring current in antiaromatic
molecular rings is paratropic and flows in the opposite, non-classical
direction. We present computational methods and protocols to calculate, analyse
and visualise magnetically induced current densities in molecules. Calculated
ring-current strengths are used for quantifying the degree of aromaticity. The
methods have been demonstrated by investigating ring-current strengths and the
degree of aromaticity of aromatic, antiaromatic and non-aromatic six-membered
hydrocarbon rings. Current-density pathways and ring-current strengths of
aromatic and antiaromatic porphyrinoids and other polycyclic molecules have
been studied. The aromaticity and current density of M\"obius-twisted molecules
has been investigated to find the dependence on the twist and the spatial
deformation of the molecular ring. Current densities of fullerene, gaudiene and
toroidal carbon nanotubes have also been studied.
|
Photometric observations of accreting, low-mass, pre-main-sequence stars
(i.e., Classical T Tauri stars; CTTS) have revealed different categories of
variability. Several of these classifications have been linked to changes in
$\dot{M}$. To test how accretion variability conditions lead to different
light-curve morphologies, we used 1D hydrodynamic simulations of accretion
along a magnetic field line coupled with radiative transfer models and a simple
treatment of rotation to generate synthetic light curves. We adopted previously
developed metrics in order to classify observations to facilitate comparisons
between observations and our models. We found that stellar mass, magnetic field
geometry, corotation radius, inclination, and turbulence all play roles in
producing the observed light curves and that no single parameter is entirely
dominant in controlling the observed variability. While the periodic behavior
of the light curve is most strongly affected by the inclination, it is also a
function of the magnetic field geometry and inner disk turbulence. Objects with
either pure dipole fields, strong aligned octupole components, or high
turbulence in the inner disk all tend to display accretion bursts. Objects with
anti-aligned octupole components or aligned, weaker octupole components tend to
show light curves with slightly fewer bursts. We did not find clear monotonic
trends between the stellar mass and empirical classification. This work
establishes the groundwork for more detailed characterization of well-studied
targets as more light curves of CTTS become available through missions such as
the Transiting Exoplanet Survey Satellite (TESS).
|
Much recent literature has formulated structure-from-motion (SfM) as a
self-supervised learning problem where the goal is to jointly learn neural
network models of depth and egomotion through view synthesis. Herein, we
address the open problem of how to optimally couple the depth and egomotion
network components. Toward this end, we introduce several notions of coupling,
categorize existing approaches, and present a novel tightly-coupled approach
that leverages the interdependence of depth and egomotion at training and at
inference time. Our approach uses iterative view synthesis to recursively
update the egomotion network input, permitting contextual information to be
passed between the components without explicit weight sharing. Through
substantial experiments, we demonstrate that our approach promotes consistency
between the depth and egomotion predictions at test time, improves
generalization on new data, and leads to state-of-the-art accuracy on indoor
and outdoor depth and egomotion evaluation benchmarks.
|
Instances-reweighted adversarial training (IRAT) can significantly boost the
robustness of trained models, where data being less/more vulnerable to the
given attack are assigned smaller/larger weights during training. However, when
tested on attacks different from the given attack simulated in training, the
robustness may drop significantly (e.g., even worse than no reweighting). In
this paper, we study this problem and propose our solution--locally reweighted
adversarial training (LRAT). The rationale behind IRAT is that we do not need
to pay much attention to an instance that is already safe under the attack. We
argue that the safeness should be attack-dependent, so that for the same
instance, its weight can change given different attacks based on the same
model. Thus, if the attack simulated in training is mis-specified, the weights
of IRAT are misleading. To this end, LRAT pairs each instance with its
adversarial variants and performs local reweighting inside each pair, while
performing no global reweighting--the rationale is to fit the instance itself
if it is immune to the attack, but not to skip the pair, in order to passively
defend against different attacks in the future. Experiments show that LRAT works better
than both IRAT (i.e., global reweighting) and the standard AT (i.e., no
reweighting) when trained with an attack and tested on different attacks.
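A rough sketch of the pairing-and-local-reweighting idea, under our reading of the description above, is given below. The FGSM attack, the softmax weighting within each pair, and all hyperparameters are our own assumptions rather than the paper's recipe.

    # Rough sketch of locally reweighted adversarial training (our reading).
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps):
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        return (x + eps * grad.sign()).detach()

    def lrat_loss(model, x, y, eps_list=(4/255, 8/255), tau=1.0):
        variants = torch.stack([fgsm(model, x, y, eps) for eps in eps_list])  # the pair
        losses = torch.stack([F.cross_entropy(model(v), y, reduction="none")
                              for v in variants])                 # (num_variants, batch)
        # Local reweighting: weights are computed per example, within its own pair,
        # with no global reweighting across the batch.
        weights = torch.softmax(losses.detach() / tau, dim=0)
        return (weights * losses).sum(dim=0).mean()

    # Toy usage with a tiny linear model on random "images".
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
    x, y = torch.randn(16, 3, 8, 8), torch.randint(0, 10, (16,))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss = lrat_loss(model, x, y)
    loss.backward()
    opt.step()
    print(float(loss))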
|
A Cayley (di)graph $Cay(G,S)$ of a group $G$ with respect to a subset $S$ of
$G$ is called normal if the right regular representation of $G$ is a normal
subgroup in the full automorphism group of $Cay(G,S)$, and is called a
CI-(di)graph if for every $T\subseteq G$, $Cay(G,S)\cong Cay(G,T)$ implies that
there is $\sigma\in Aut(G)$ such that $S^\sigma=T$. We call a group $G$ an
NDCI-group if all normal Cayley digraphs of $G$ are CI-digraphs, and an
NCI-group if all normal Cayley graphs of $G$ are CI-graphs. In
this paper, we prove that a cyclic group of order $n$ is a NDCI-group if and
only if $8\nmid n$, and is a NCI-group if and only if either $n=8$ or $8\nmid
n$.
|
Recently the first example of a family of pro-$p$ groups, for $p$ a prime,
with full normal Hausdorff spectrum was constructed. In this paper we further
investigate this family by computing their finitely generated Hausdorff
spectrum with respect to each of the five standard filtration series: the
$p$-power series, the iterated $p$-power series, the lower $p$-series, the
Frattini series and the dimension subgroup series. Here the finitely generated
Hausdorff spectra of these groups consist of infinitely many rational numbers,
and their computation requires a rather technical approach. This result also
gives further evidence to the non-existence of a finitely generated pro-$p$
group with uncountable finitely generated Hausdorff spectrum.
|
While semi-supervised learning has gained much attention in computer vision
on image data, limited research exists on its applicability in the time
series domain. In this work, we investigate the transferability of
state-of-the-art deep semi-supervised models from image to time series
classification. We discuss the necessary model adaptations, in particular an
appropriate model backbone architecture and the use of tailored data
augmentation strategies. Based on these adaptations, we explore the potential
of deep semi-supervised learning in the context of time series classification
by evaluating our methods on large public time series classification problems
with varying amounts of labelled samples. We perform extensive comparisons
under a decidedly realistic and appropriate evaluation scheme with a unified
reimplementation of all algorithms considered, which is yet lacking in the
field. We find that these transferred semi-supervised models show significant
performance gains over strong supervised, semi-supervised and self-supervised
alternatives, especially for scenarios with very few labelled samples.
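As an example of the tailored time-series augmentations alluded to above, the sketch below implements two standard transformations, jittering and magnitude scaling; the noise levels are illustrative choices, not the paper's settings.

    # Small sketch of two common time-series augmentations.
    import numpy as np

    def jitter(x, sigma=0.03):
        # Add small Gaussian noise to every time step.
        return x + np.random.normal(0.0, sigma, size=x.shape)

    def scale(x, sigma=0.1):
        # Multiply each series (per sample and channel) by a random factor close to 1.
        factors = np.random.normal(1.0, sigma, size=(x.shape[0], 1, x.shape[2]))
        return x * factors

    batch = np.random.randn(8, 128, 3)     # (batch, time, channels) dummy batch
    augmented = scale(jitter(batch))
    print(augmented.shape)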
|
In this article, we give a proof of multiplicativity for $\gamma$-factors, an
equality of parabolically induced and inducing factors, in the context of the
Braverman-Kazhdan/Ngo program, under the assumption of commutativity of the
corresponding Fourier transforms and a certain generalized Harish-Chandra
transform. We also discuss the resolution of singularities and their
rationality for reductive monoids, which are among the basic objects in the
program.
|
Stationary waves in the condensate of electron-hole pairs in the $n-p$
bilayer system are studied. The system demonstrates the transition from a
uniform (superfluid) to a nonuniform (supersolid) state. The precursor of this
transition is the appearance of the roton-type minimum in the collective mode
spectrum. Stationary waves occur in the flow of the condensate past an
obstacle. It is shown that the roton-type minimum manifests itself in a rather
complicated stationary wave pattern with several families of crests which cross
one another. It is found that the stationary wave pattern is essentially
modified under variation in the density of the condensate and under variation
in the flow velocity. It is shown that the pattern is formed mainly by
shortwave modes in the case of a point obstacle. The contribution of
longwave modes is clearly visible in the case of a weak extended obstacle,
where the stationary wave pattern resembles the ship wave pattern.
|
Let $q$ be an odd prime power. Denote by $r(q)$ the value $q$ modulo 4. In
this paper we establish a correspondence between two types of maximal cliques
of order $\frac{q+r(q)}{2}$ in the Paley graph of order $q^2$.
|
Pool boiling is one of the efficient ways to remove heat from high-power
electronics, heat exchangers, and nuclear reactors. Nowadays, the pool boiling
method is frequently employed due to its ability to remove high heat flux
compared with natural/forced convection, while maintaining a low wall
superheat. However, pool boiling heat transfer capacity is limited by an
important parameter called the critical heat flux (CHF). At the CHF point, the
heat transfer coefficient (HTC) decreases drastically due to the change of the
heat transfer regime from nucleate boiling to film boiling. This is why the
enhancement of CHF while maintaining a low wall superheat is of great interest
to engineers and researchers.
|