_id | text |
---|---|
c_114584 | For most diseases, building large databases of labeled genetic data is an
expensive and time-demanding task. To address this, we introduce genetic
Generative Adversarial Networks (gGAN), a semi-supervised approach based on an
innovative GAN architecture to create large synthetic genetic data sets
starting with a small amount of labeled data and a large amount of unlabeled
data. Our goal is to determine the propensity of a new individual to develop
the severe form of the illness from their genetic profile alone. The proposed
model achieved satisfactory results using real genetic data from different
datasets and populations, in which the test populations may not have the same
genetic profiles. The proposed model is self-aware and capable of determining
whether a new genetic profile has enough compatibility with the data on which
the network was trained and is thus suitable for prediction. The code and
datasets used can be found at https://github.com/caio-davi/gGAN. |
c_256265 | Automatic event detection from time series signals has wide applications,
such as abnormal event detection in video surveillance and event detection in
geophysical data. Traditional detection methods detect events primarily by the
use of similarity and correlation in data. Those methods can be inefficient and
yield low accuracy. In recent years, because of the significantly increased
computational power, machine learning techniques have revolutionized many
science and engineering domains. In this study, we apply a deep-learning-based
method to the detection of events from time series seismic signals. However, a
direct adaptation of similar ideas from 2D object detection to our problem
faces two challenges. The first is that the duration of earthquake events
varies significantly; the other is that the generated proposals are temporally
correlated. To address these challenges, we propose a novel cascaded
region-based convolutional neural network to capture earthquake events in
different sizes, while incorporating contextual information to enrich features
for each individual proposal. To achieve a better generalization performance,
we use densely connected blocks as the backbone of our network. Because some
positive events are not correctly annotated, we further formulate the detection
problem as a learning-from-noise problem. To verify the performance of our
detection methods, we apply them to seismic data generated from a bi-axial
"earthquake machine" located in a Rock Mechanics Laboratory, and we acquire
labels with the help of experts. Through our
numerical tests, we show that our novel detection techniques yield high
accuracy. Therefore, our novel deep-learning-based detection methods can
potentially be powerful tools for locating events from time series data in
various applications. |
c_232617 | The availability of large amounts of time series data, paired with the
performance of deep-learning algorithms on a broad class of problems, has
recently led to significant interest in the use of sequence-to-sequence models
for time series forecasting. We provide the first theoretical analysis of this
time series forecasting framework. We include a comparison of
sequence-to-sequence modeling to classical time series models, and as such our
theory can serve as a quantitative guide for practitioners choosing between
different modeling methodologies. |
c_287405 | Autonomous robots need to be able to adapt to unforeseen situations and to
acquire new skills through trial and error. Reinforcement learning in principle
offers a suitable methodological framework for this kind of autonomous
learning. However, current computational reinforcement learning agents mostly
learn each individual skill entirely from scratch. How can we enable artificial
agents, such as robots, to acquire some form of generic knowledge, which they
could leverage for the learning of new skills? This paper argues that, like the
brain, the cognitive system of artificial agents has to develop a world model
to support adaptive behavior and learning. Inspiration is taken from two recent
developments in the cognitive science literature: predictive processing
theories of cognition, and the sensorimotor contingencies theory of perception.
Based on these, a hypothesis is formulated about what the content of
information might be that is encoded in an internal world model, and how an
agent could autonomously acquire it. A computational model is described to
formalize this hypothesis, and is evaluated in a series of simulation
experiments. |
c_92923 | Fuzzy time series forecasting methods are very popular among researchers for
predicting future values as they are not based on the strict assumptions of
traditional time series forecasting methods. Non-stochastic methods of fuzzy
time series forecasting are preferred by the researchers as they provide more
significant forecasting results. There are, generally, four factors that
determine the performance of a forecasting method: (1) the number of intervals
(NOIs) and the length of intervals used to partition the universe of discourse
(UOD); (2) the fuzzification rules or feature representation of the crisp time
series; (3) the method of establishing fuzzy logic rules (FLRs) between input
and target values; and (4) the defuzzification rule used to obtain the crisp
forecasted value. Considering the first two factors to improve forecasting
accuracy, we propose a novel non-stochastic fuzzy time series forecasting
method in which the interval index number and membership value are used as
input features to predict the future value. We suggest a simple rounding-off
range and suitable step size method to find the optimal number of intervals
(NOIs) and use a fuzzy c-means clustering process to divide the UOD into
intervals of unequal length. We implement a support vector machine (SVM) to
establish the FLRs. To test the proposed method, we conduct a simulation study
on five widely used real-world time series and compare the performance with
some recently developed models. We also examine the performance of the proposed
model when using a multi-layer perceptron (MLP) instead of the SVM. Two
performance measures, RMSE and SMAPE, are used for the analysis, and the
proposed model shows better forecasting accuracy. |
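A rough sketch of the input representation described above may help: the universe of discourse is partitioned into intervals around data-driven centres, each crisp value is mapped to an (interval index, membership) pair, and an SVM regressor learns the rule from these features to the next value. For brevity the fuzzy c-means step is replaced here by k-means centres with a crude distance-based membership, so this is only an illustration of the pipeline, not the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)
y = 10 + 3 * np.sin(np.arange(200) / 5) + rng.normal(scale=0.2, size=200)

# Partition the universe of discourse into unequal intervals around centres.
n_intervals = 7
centres = np.sort(KMeans(n_clusters=n_intervals, n_init=10, random_state=0)
                  .fit(y.reshape(-1, 1)).cluster_centers_.ravel())

def fuzzify(value):
    # Map a crisp value to its interval index and a membership degree.
    d = np.abs(centres - value) + 1e-9
    membership = (1.0 / d) / np.sum(1.0 / d)
    idx = int(np.argmax(membership))
    return idx, membership[idx]

# FLR-style supervised pairs: features of y[t] predict the crisp value y[t+1].
features = np.array([fuzzify(v) for v in y[:-1]])
targets = y[1:]
model = SVR(C=10.0).fit(features, targets)
print(model.predict(np.array([fuzzify(y[-1])])))  # one-step-ahead forecast
```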
c_88851 | The field of predictive process monitoring focuses on case-level models to
predict a single specific outcome such as a particular objective, (remaining)
time, or next activity/remaining sequence. Recently, a longer-horizon,
model-wide approach has been proposed in the form of process model forecasting,
which predicts the future state of a whole process model through the
forecasting of all activity-to-activity relations at once using time series
forecasting.
This paper introduces the concept of \emph{predictive process model
monitoring}, which sits between predictive process monitoring and
process model forecasting. Concretely, by modelling a process model as a set of
constraints being present between activities over time, we can capture more
detailed information between activities compared to process model forecasting,
while being compatible with typical predictive process monitoring objectives
which are often expressed in the same language as these constraints. To achieve
this, Processes-As-Movies (PAM) is introduced, i.e., a novel technique capable
of jointly mining and predicting declarative process constraints between
activities in various windows of a process' execution. PAM predicts what
declarative rules hold for a trace (objective-based), which also supports the
prediction of all constraints together as a process model (model-based).
Various recurrent neural network topologies inspired by video analysis tailored
to temporal high-dimensional input are used to model the process model
evolution with windows as time steps, including encoder-decoder long short-term
memory networks, and convolutional long short-term memory networks. Results
obtained over real-life event logs show that these topologies are effective in
terms of predictive accuracy and precision. |
c_139581 | Deep learning has recently performed remarkably well on many time series
analysis tasks. The superior performance of deep neural networks relies heavily
on a large amount of training data to avoid overfitting. However, the labeled data
of many real-world time series applications may be limited such as
classification in medical time series and anomaly detection in AIOps. As an
effective way to enhance the size and quality of the training data, data
augmentation is crucial to the successful application of deep learning models
on time series data. In this paper, we systematically review different data
augmentation methods for time series. We propose a taxonomy for the reviewed
methods, and then provide a structured review for these methods by highlighting
their strengths and limitations. We also empirically compare different data
augmentation methods for different tasks including time series classification,
anomaly detection, and forecasting. Finally, we discuss and highlight five
future directions to provide useful research guidance. |
c_134496 | We participated in the M4 competition for time series forecasting and
describe here our methods for forecasting daily time series. We used an
ensemble of five statistical forecasting methods and a method that we refer to
as the correlator. Our retrospective analysis using the ground truth values
published by the M4 organisers after the competition demonstrates that the
correlator was responsible for most of our gains over the naive constant
forecasting method. We identify data leakage as one reason for its success,
partly due to test data selected from different time intervals, and partly due
to quality issues in the original time series. We suggest that future
forecasting competitions should provide actual dates for the time series so
that some of those leakages could be avoided by the participants. |
c_115566 | Evaluating the reliability of intelligent physical systems against rare
safety-critical events poses a huge testing burden for real-world applications.
Simulation provides a useful platform to evaluate the extremal risks of these
systems before their deployments. Importance Sampling (IS), while proven to be
powerful for rare-event simulation, faces challenges in handling these
learning-based systems due to their black-box nature, which fundamentally
undermines its efficiency guarantee and can lead to under-estimation that goes
undetected. We propose a framework called Deep Probabilistic
Accelerated Evaluation (Deep-PrAE) to design statistically guaranteed IS, by
converting black-box samplers that are versatile but could lack guarantees,
into one with what we call a relaxed efficiency certificate that allows
accurate estimation of bounds on the safety-critical event probability. We
present the theory of Deep-PrAE that combines the dominating point concept with
rare-event set learning via deep neural network classifiers, and demonstrate
its effectiveness in numerical examples including the safety-testing of an
intelligent driving algorithm. |
c_136598 | Time Series Forecasting is at the core of many practical applications such as
sales forecasting for business, rainfall forecasting for agriculture and many
others. Though this problem has been extensively studied for years, it is still
considered a challenging problem due to the complex and evolving nature of time
series data. Typical methods proposed for time series forecasting modeled
linear or non-linear dependencies between data observations. However, it is a
generally accepted notion that no one method is universally effective for all
kinds of time series data. Attempts have been made to use dynamic and weighted
combination of heterogeneous and independent forecasting models and it has been
found to be a promising direction to tackle this problem. This method is based
on the assumption that different forecasters have different specialization and
varying performance for different distribution of data and weights are
dynamically assigned to multiple forecasters accordingly. However, in many
practical time series datasets, the distribution of the data slowly evolves with
time. We propose to employ a re-weighting based method to adjust the assigned
weights to various forecasters in order to account for such distribution-drift.
Exhaustive testing was performed against both real-world and synthesized
time-series. Experimental results show the competitiveness of the method in
comparison to state-of-the-art approaches for combining forecasters and
handling drift. |
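As a hedged illustration of the re-weighting idea (not the paper's exact scheme), the sketch below keeps a discounted running error per forecaster and recomputes the combination weights at every step, so that recently accurate models dominate as the distribution drifts. The discount factor and exponential weighting are illustrative choices.

```python
import numpy as np

def combine_forecasts(forecasters, y, discount=0.9, temperature=1.0):
    """forecasters: callables mapping a history array to a one-step forecast."""
    errors = np.zeros(len(forecasters))
    predictions = []
    for t in range(1, len(y)):
        step = np.array([f(y[:t]) for f in forecasters])
        weights = np.exp(-errors / temperature)
        weights /= weights.sum()
        predictions.append(weights @ step)
        # Discount old errors so the weights can track distribution drift.
        errors = discount * errors + np.abs(step - y[t])
    return np.array(predictions)

# Two toy forecasters: a random-walk model and a short moving-average model.
naive = lambda h: h[-1]
mean5 = lambda h: np.mean(h[-5:])
y = np.concatenate([np.ones(50), np.linspace(1, 5, 50)])  # regime change
pred = combine_forecasts([naive, mean5], y)
print(np.mean(np.abs(pred - y[1:])).round(3))
```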
c_268195 | Performance and high availability have become increasingly important drivers,
amongst others, for user retention in the context of web services such as
social networks and web search. Exogenic and/or endogenic factors often
give rise to anomalies, making it very challenging to maintain high
availability, while also delivering high performance. Given that
service-oriented architectures (SOA) typically have a large number of services,
with each service having a large set of metrics, automatic detection of
anomalies is non-trivial.
Although there exists a large body of prior research in anomaly detection,
existing techniques are not applicable in the context of social network data,
owing to the inherent seasonal and trend components in the time series data.
To this end, we developed two novel statistical techniques for automatically
detecting anomalies in cloud infrastructure data. Specifically, the techniques
employ statistical learning to detect anomalies in both application, and system
metrics. Seasonal decomposition is employed to filter the trend and seasonal
components of the time series, followed by the use of robust statistical
metrics -- median and median absolute deviation (MAD) -- to accurately detect
anomalies, even in the presence of seasonal spikes.
We demonstrate the efficacy of the proposed techniques from three different
perspectives, viz., capacity planning, user behavior, and supervised learning.
In particular, we used production data for evaluation, and we report Precision,
Recall, and F-measure in each case. |
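The core detection step described above can be sketched as follows, assuming an additive decomposition and a pandas Series with a known seasonal period; the threshold and the synthetic data are illustrative and this is not the authors' production code.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

def mad_anomalies(y: pd.Series, period: int, threshold: float = 3.0) -> pd.Series:
    # Remove trend and seasonality; anomalies are sought in the residual.
    decomposition = seasonal_decompose(y, period=period, extrapolate_trend="freq")
    residual = decomposition.resid
    # Robust location/scale: median and median absolute deviation (MAD).
    med = np.median(residual)
    mad = np.median(np.abs(residual - med))
    # 1.4826 rescales the MAD to be comparable to a standard deviation.
    score = np.abs(residual - med) / (1.4826 * mad + 1e-12)
    return score > threshold  # boolean mask of anomalous time points

# Hourly series with daily seasonality and one injected spike.
rng = np.random.default_rng(0)
idx = pd.date_range("2024-01-01", periods=24 * 14, freq="h")
y = pd.Series(10 + np.sin(2 * np.pi * np.arange(len(idx)) / 24)
              + rng.normal(scale=0.3, size=len(idx)), index=idx)
y.iloc[100] += 8  # spike on top of the seasonal pattern
print(mad_anomalies(y, period=24).sum())  # count of flagged points
```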
c_99781 | Time series forecasting involves collecting and analyzing past observations
to develop a model to extrapolate such observations into the future.
Forecasting of future events is important in many fields to support decision
making as it contributes to reducing the future uncertainty. We propose
explainable boosted linear regression (EBLR) algorithm for time series
forecasting, which is an iterative method that starts with a base model, and
explains the model's errors through regression trees. At each iteration, the
path leading to the highest error is added as a new variable to the base model. In
this regard, our approach can be considered as an improvement over general time
series models since it enables incorporating nonlinear features by residuals
explanation. More importantly, using the single rule that contributes most to
the error allows for interpretable results. The proposed approach extends to
probabilistic forecasting through generating prediction intervals based on the
empirical error distribution. We conduct a detailed numerical study with EBLR
and compare against various other approaches. We observe that EBLR
substantially improves the base model performance through the extracted
features, and provides performance comparable to other well-established
approaches. The interpretability of the model predictions and the high
predictive accuracy of EBLR make it a promising method for time series
forecasting. |
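A hedged sketch of one boosting step in this spirit: fit a linear base model, explain its residuals with a shallow regression tree, and turn the leaf with the largest error into a new binary feature for the base model. The data and hyper-parameters are synthetic illustrations, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X[:, 0] + 2.0 * (X[:, 1] > 0.5) * (X[:, 2] > 0.0) + rng.normal(scale=0.1, size=500)

base = LinearRegression().fit(X, y)
residuals = y - base.predict(X)

# Explain the residuals with a shallow tree; each leaf corresponds to a rule (path).
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20).fit(X, residuals)
leaf_ids = tree.apply(X)

# Pick the leaf whose samples carry the largest mean absolute residual.
worst_leaf = max(set(leaf_ids), key=lambda l: np.abs(residuals[leaf_ids == l]).mean())

# The rule "x falls into worst_leaf" becomes a new 0/1 feature of the base model.
new_feature = (leaf_ids == worst_leaf).astype(float)
X_aug = np.column_stack([X, new_feature])
boosted = LinearRegression().fit(X_aug, y)
print(np.mean((y - base.predict(X)) ** 2), np.mean((y - boosted.predict(X_aug)) ** 2))
```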
c_239585 | Classical anomaly detection is principally concerned with point-based
anomalies, those anomalies that occur at a single point in time. Yet, many
real-world anomalies are range-based, meaning they occur over a period of time.
Motivated by this observation, we present a new mathematical model to evaluate
the accuracy of time series classification algorithms. Our model expands the
well-known Precision and Recall metrics to measure ranges, while simultaneously
enabling customization support for domain-specific preferences. |
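A minimal, simplified sketch of range-based recall in this spirit: each real anomaly range is rewarded partly for being detected at all (existence) and partly for how much of it is covered (overlap). This is an illustrative reduction of the idea, not the full customizable model.

```python
def range_recall(real, pred, alpha=0.5):
    """real, pred: lists of (start, end) index ranges, end inclusive."""
    def overlap(a, b):
        return max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)

    total = 0.0
    for r in real:
        covered = sum(overlap(r, p) for p in pred)
        length = r[1] - r[0] + 1
        existence = 1.0 if covered > 0 else 0.0
        total += alpha * existence + (1 - alpha) * min(covered, length) / length
    return total / len(real) if real else 0.0

# One real range fully covered, one half covered, one missed.
print(range_recall(real=[(0, 9), (20, 29), (40, 49)],
                   pred=[(0, 9), (25, 29)]))
```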
c_144366 | Choosing the technique that is best at forecasting your data is a
problem that arises in any forecasting application. Decades of research have
resulted in an enormous number of forecasting methods that stem from
statistics, econometrics and machine learning (ML), which makes the choice in
any forecasting exercise very difficult and elaborate. This paper aims to
facilitate this process for high-level tactical sales forecasts by comparing a
large array of techniques for 35 time series that consist of both
industry data from the Coca-Cola Company and publicly available datasets.
However, instead of solely focusing on the accuracy of the resulting forecasts,
this paper introduces a novel and completely automated profit-driven approach
that takes into account the expected profit that a technique can create during
both the model building and evaluation process. The expected profit function
that is used for this purpose is easy to understand and adaptable to any
situation by combining forecasting accuracy with business expertise.
Furthermore, we examine the added value of ML techniques, the inclusion of
external factors and the use of seasonal models in order to ascertain which
type of model works best in tactical sales forecasting. Our findings show that
simple seasonal time series models consistently outperform other methodologies
and that the profit-driven approach can lead to selecting a different
forecasting model. |
c_42017 | Recent advances in AIoT technologies have led to an increasing popularity of
utilizing machine learning algorithms to detect operational failures for
cyber-physical systems (CPS). In its basic form, an anomaly detection module
monitors the sensor measurements and actuator states from the physical plant,
and detects anomalies in these measurements to identify abnormal operation
status. Nevertheless, building effective anomaly detection models for CPS is
rather challenging, as the model has to accurately detect anomalies in the
presence of highly complicated system dynamics and an unknown amount of sensor
noise. In
this work, we propose a novel time series anomaly detection method called
Neural System Identification and Bayesian Filtering (NSIBF) in which a
specially crafted neural network architecture is posed for system
identification, i.e., capturing the dynamics of CPS in a dynamical state-space
model; then a Bayesian filtering algorithm is naturally applied on top of the
"identified" state-space model for robust anomaly detection by tracking the
uncertainty of the hidden state of the system recursively over time. We provide
qualitative as well as quantitative experiments with the proposed method on a
synthetic and three real-world CPS datasets, showing that NSIBF compares
favorably to the state-of-the-art methods with considerable improvements on
anomaly detection in CPS. |
c_21517 | Several techniques for multivariate time series anomaly detection have been
proposed recently, but a systematic comparison on a common set of datasets and
metrics is lacking. This paper presents a systematic and comprehensive
evaluation of unsupervised and semi-supervised deep-learning based methods for
anomaly detection and diagnosis on multivariate time series data from
cyber-physical systems. Unlike previous works, we vary the model and the
post-processing of model errors, i.e., the scoring functions, independently of
each other, through a grid of 10 models and 4 scoring functions, comparing
these variants to state-of-the-art methods. In time-series anomaly detection,
detecting anomalous events is more important than detecting individual
anomalous time-points. Through experiments, we find that the existing
evaluation metrics either do not take events into account, or cannot
distinguish between a good detector and trivial detectors, such as a random or
an all-positive detector. We propose a new metric to overcome these drawbacks,
namely, the composite F-score ($Fc_1$), for evaluating time-series anomaly
detection.
Our study highlights that dynamic scoring functions work much better than
static ones for multivariate time series anomaly detection, and the choice of
scoring functions often matters more than the choice of the underlying model.
We also find that a simple, channel-wise model - the Univariate Fully-Connected
Auto-Encoder, with the dynamic Gaussian scoring function emerges as a winning
candidate for both anomaly detection and diagnosis, beating state-of-the-art
algorithms. |
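A hedged sketch of the composite-score idea: combine point-wise precision with event-wise recall, so that detecting each anomalous event matters more than flagging every anomalous point. Variable names and the toy example are illustrative; consult the paper for the exact definition of $Fc_1$.

```python
import numpy as np

def composite_f1(y_true, y_pred):
    """y_true, y_pred: 0/1 arrays over time points."""
    tp_points = np.sum((y_pred == 1) & (y_true == 1))
    precision = tp_points / max(np.sum(y_pred == 1), 1)

    # Split the ground truth into contiguous anomalous events.
    events, current = [], None
    for t, v in enumerate(y_true):
        if v == 1 and current is None:
            current = [t, t]
        elif v == 1:
            current[1] = t
        elif current is not None:
            events.append(tuple(current))
            current = None
    if current is not None:
        events.append(tuple(current))

    # An event counts as recalled if any of its points is predicted anomalous.
    recalled = sum(1 for (s, e) in events if y_pred[s:e + 1].any())
    recall_events = recalled / max(len(events), 1)
    if precision + recall_events == 0:
        return 0.0
    return 2 * precision * recall_events / (precision + recall_events)

y_true = np.array([0, 1, 1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([0, 0, 1, 0, 0, 0, 0, 0, 0, 0])
print(composite_f1(y_true, y_pred))  # one of two events found, perfect precision
```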
c_289239 | Mechanical devices such as engines, vehicles, aircraft, etc., are typically
instrumented with numerous sensors to capture the behavior and health of the
machine. However, there are often external factors or variables which are not
captured by sensors leading to time-series which are inherently unpredictable.
For instance, manual controls and/or unmonitored environmental conditions or
load may lead to inherently unpredictable time-series. Detecting anomalies in
such scenarios becomes challenging using standard approaches based on
mathematical models that rely on stationarity, or prediction models that
utilize prediction errors to detect anomalies. We propose a Long Short Term
Memory Networks based Encoder-Decoder scheme for Anomaly Detection (EncDec-AD)
that learns to reconstruct 'normal' time-series behavior, and thereafter uses
reconstruction error to detect anomalies. We experiment with three publicly
available quasi-predictable time-series datasets: power demand, space shuttle,
and ECG, and two real-world engine datasets with both predictive and
unpredictable behavior. We show that EncDec-AD is robust and can detect
anomalies from predictable, unpredictable, periodic, aperiodic, and
quasi-periodic time-series. Further, we show that EncDec-AD is able to detect
anomalies from short time-series (length as small as 30) as well as long
time-series (length as large as 500). |
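A minimal PyTorch sketch of the reconstruction idea above: an LSTM encoder-decoder is trained on 'normal' windows only, and the reconstruction error of a new window serves as its anomaly score. The architecture, hyper-parameters, and synthetic data are illustrative, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                      # x: (batch, time, features)
        _, (h, _) = self.encoder(x)            # final hidden state summarizes the window
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(z)
        return self.out(dec)                   # reconstruction of the input window

def anomaly_score(model, window):
    with torch.no_grad():
        recon = model(window)
    return torch.mean((recon - window) ** 2, dim=(1, 2))  # per-window error

# Train on normal sine-wave windows, then score a corrupted window.
torch.manual_seed(0)
t = torch.linspace(0, 8 * 3.14159, 400)
normal = torch.sin(t).reshape(-1, 40, 1)       # 10 windows of length 40
model = LSTMAutoencoder(n_features=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()

anomalous = normal[:1].clone()
anomalous[0, 15:20, 0] += 3.0                  # inject a spike
print(anomaly_score(model, normal[:1]), anomaly_score(model, anomalous))
```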
c_75707 | Many software engineering tasks, such as testing and anomaly detection, can
benefit from the ability to infer a behavioral model of the software. Most
existing inference approaches assume access to code to collect execution
sequences. In this paper, we investigate a black-box scenario, where the system
under analysis cannot be instrumented in this granular fashion. This scenario
is particularly prevalent with control systems' log analysis in the form of
continuous signals. In this situation, an execution trace amounts to a
multivariate time-series of input and output signals, where different states of
the system correspond to different 'phases' in the time-series. The main
challenge is to detect when these phase changes take place. Unfortunately, most
existing solutions are either univariate, make assumptions on the data
distribution, or have limited learning power. Therefore, we propose a hybrid
deep neural network that accepts as input a multivariate time series and
applies a set of convolutional and recurrent layers to learn the non-linear
correlations between signals and the patterns over time. We show how this
approach can be used to accurately detect state changes, and how the inferred
models can be successfully applied to transfer-learning scenarios, to
accurately process traces from different products with similar execution
characteristics. Our experimental results on two UAV autopilot case studies
indicate that our approach is highly accurate (over 90% F1 score for state
classification) and significantly improves baselines (by up to 102% for change
point detection). Using transfer learning, we also show that up to 90% of the
maximum achievable F1 scores in the open-source case study can be achieved by
reusing the trained models from the industrial case and only fine-tuning them
using as few as 5 labeled samples, which reduces the manual labeling effort by
98%. |
c_21789 | Predicting disaster events from seismic data is of paramount importance and
can save thousands of lives, especially in earthquake-prone areas and
habitations around volcanic craters. The drastic rise in the number of seismic
monitoring stations in recent years has allowed the collection of a huge
quantity of data, outpacing the capacity of seismologists. Due to the complex
nature of the seismological data, it is often difficult for seismologists to
detect subtle patterns with major implications. Machine learning algorithms
have been demonstrated to be effective in classification and prediction tasks
for seismic data. It has been widely known that some animals can sense
disasters like earthquakes from seismic signals well before the disaster
strikes. Mel spectrogram has been widely used for speech recognition as it
scales the actual frequencies according to human hearing. In this paper, we
propose a variant of the Mel spectrogram to scale the raw frequencies of
seismic data to the hearing of such animals that can sense disasters from
seismic signals. We use a computer vision algorithm along with clustering
that allows for the classification of unlabelled seismic data. |
c_196135 | The explosion of time series data in recent years has brought a flourish of
new time series analysis methods, for forecasting, clustering, classification
and other tasks. The evaluation of these new methods requires either collecting
or simulating a diverse set of time series benchmarking data to enable reliable
comparisons against alternative approaches. We propose GeneRAting TIme Series
with diverse and controllable characteristics, named GRATIS, with the use of
mixture autoregressive (MAR) models. We simulate sets of time series using MAR
models and investigate the diversity and coverage of the generated time series
in a time series feature space. By tuning the parameters of the MAR models,
GRATIS is also able to efficiently generate new time series with controllable
features. In general, as a costless surrogate to the traditional data
collection approach, GRATIS can be used as an evaluation tool for tasks such as
time series forecasting and classification. We illustrate the usefulness of our
time series generation process through a time series forecasting application. |
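A hedged sketch of simulating from a mixture autoregressive (MAR) process in the spirit of the generator described above: at every step one AR component is sampled according to the mixture weights and produces the next value. The weights, coefficients, and lengths below are arbitrary illustrations, not GRATIS itself.

```python
import numpy as np

def simulate_mar(weights, ar_coefs, sigmas, n=300, burn_in=50, seed=0):
    rng = np.random.default_rng(seed)
    p = max(len(c) for c in ar_coefs)          # maximum AR order among components
    x = list(rng.normal(size=p))               # initial values
    for _ in range(n + burn_in):
        k = rng.choice(len(weights), p=weights)
        mean = sum(c * x[-i - 1] for i, c in enumerate(ar_coefs[k]))
        x.append(mean + sigmas[k] * rng.normal())
    return np.array(x[p + burn_in:])

# Two components: a smooth, persistent AR(2) and a noisy, mean-reverting AR(1).
series = simulate_mar(weights=[0.7, 0.3],
                      ar_coefs=[[1.2, -0.4], [0.3]],
                      sigmas=[0.5, 2.0])
print(series.shape, series.mean().round(2))
```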
c_68891 | Autoencoders have been proposed as a powerful tool for model-independent
anomaly detection in high-energy physics. The operating principle is that
events which do not belong to the space of training data will be reconstructed
poorly, thus flagging them as anomalies. We point out that in a variety of
examples of interest, the connection between large reconstruction error and
anomalies is not so clear. In particular, for data sets with nontrivial
topology, there will always be points that erroneously seem anomalous due to
global issues. Conversely, neural networks typically have an inductive bias or
prior to locally interpolate such that undersampled or rare events may be
reconstructed with small error, despite actually being the desired anomalies.
Taken together, these facts are in tension with the simple picture of the
autoencoder as an anomaly detector. Using a series of illustrative
low-dimensional examples, we show explicitly how the intrinsic and extrinsic
topology of the dataset affects the behavior of an autoencoder and how this
topology is manifested in the latent space representation during training. We
ground this analysis in the discussion of a mock "bump hunt" in which the
autoencoder fails to identify an anomalous "signal" for reasons tied to the
intrinsic topology of $n$-particle phase space. |
c_48931 | I describe the rationale for, and design of, an agent-based simulation model
of a contemporary online sports-betting exchange: such exchanges, closely
related to the exchange mechanisms at the heart of major financial markets,
have revolutionized the gambling industry in the past 20 years, but gathering
sufficiently large quantities of rich and temporally high-resolution data from
real exchanges - i.e., the sort of data that is needed in large quantities for
Deep Learning - is often very expensive, and sometimes simply impossible; this
creates a need for a plausibly realistic synthetic data generator, which is
what this simulation now provides. The simulator, named the "Bristol Betting
Exchange" (BBE), is intended as a common platform, a data-source and
experimental test-bed, for researchers studying the application of AI and
machine learning (ML) techniques to issues arising in betting exchanges; and,
as far as I have been able to determine, BBE is the first of its kind: a free
open-source agent-based simulation model consisting not only of a
sports-betting exchange, but also a minimal simulation model of racetrack
sporting events (e.g., horse-races or car-races) about which bets may be made,
and a population of simulated bettors who each form their own private
evaluation of odds and place bets on the exchange before and - crucially -
during the race itself (i.e., so-called "in-play" betting) and whose betting
opinions change second-by-second as each race event unfolds. BBE is offered as
a proof-of-concept system that enables the generation of large high-resolution
data-sets for automated discovery or improvement of profitable strategies for
betting on sporting events via the application of AI/ML and advanced data
analytics techniques. This paper offers an extensive survey of relevant
literature and explains the motivation and design of BBE, and presents brief
illustrative results. |
c_304 | A time series represents a set of observations collected over time.
Typically, these observations are captured with a uniform sampling frequency
(e.g. daily). When data points are observed in uneven time intervals the time
series is referred to as irregular or intermittent. In such scenarios, the most
common solution is to reconstruct the time series to make it regular, thus
removing its intermittency. We hypothesise that, in irregular time series, the
time at which each observation is collected may be helpful to summarise the
dynamics of the data and improve forecasting performance. We study this idea by
developing a novel automatic feature engineering framework, which focuses on
extracting information from this point of view, i.e., when each instance is
collected. We study how valuable this information is by integrating it in a
time series forecasting workflow and investigate how it compares to or
complements state-of-the-art methods for regular time series forecasting. In
the end, we contribute by providing a novel framework that tackles feature
engineering for time series from an angle previously vastly ignored. We show
that our approach has the potential to further extract more information about
time series that significantly improves forecasting performance. |
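A small sketch of this angle, under illustrative assumptions: derive features from when each observation of an irregular series was collected (inter-arrival gaps and the gap to the target) and feed them to an off-the-shelf regressor alongside the usual lag features. The data, feature set, and model are hypothetical stand-ins, not the framework itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Irregularly sampled series: timestamps with uneven gaps.
times = np.cumsum(rng.exponential(scale=2.0, size=300))
values = np.sin(times / 10) + rng.normal(scale=0.1, size=300)

def make_features(times, values, n_lags=3):
    rows, targets = [], []
    for i in range(n_lags, len(values) - 1):
        lags = values[i - n_lags + 1:i + 1]
        gaps = np.diff(times[i - n_lags:i + 1])   # how long between past observations
        horizon = [times[i + 1] - times[i]]       # how far ahead the target lies
        rows.append(np.concatenate([lags, gaps, horizon]))
        targets.append(values[i + 1])
    return np.array(rows), np.array(targets)

X, y = make_features(times, values)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:-50], y[:-50])
print(np.mean(np.abs(model.predict(X[-50:]) - y[-50:])).round(3))
```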
c_59313 | Is critical input information encoded in specific sparse pathways within the
neural network? In this work, we discuss the problem of identifying these
critical pathways and subsequently leverage them for interpreting the network's
response to an input. The pruning objective -- selecting the smallest group of
neurons for which the response remains equivalent to the original network --
has been previously proposed for identifying critical pathways. We demonstrate
that sparse pathways derived from pruning do not necessarily encode critical
input information. To ensure sparse pathways include critical fragments of the
encoded input information, we propose pathway selection via neurons'
contribution to the response. We proceed to explain how critical pathways can
reveal critical input features. We prove that pathways selected via neuron
contribution are locally linear (in an L2-ball), a property that we use for
proposing a feature attribution method: "pathway gradient". We validate our
interpretation method using mainstream evaluation experiments. The validation
of pathway gradient interpretation method further confirms that selected
pathways using neuron contributions correspond to critical input features. The
code is publicly available. |
c_8064 | With the sweeping digitalization of societal, medical, industrial, and
scientific processes, sensing technologies are being deployed that produce
increasing volumes of time series data, thus fueling a plethora of new or
improved applications. In this setting, outlier detection is frequently
important, and while solutions based on neural networks exist, they leave room
for improvement in terms of both accuracy and efficiency. With the objective of
achieving such improvements, we propose a diversity-driven, convolutional
ensemble. To improve accuracy, the ensemble employs multiple basic outlier
detection models built on convolutional sequence-to-sequence autoencoders that
can capture temporal dependencies in time series. Further, a novel
diversity-driven training method maintains diversity among the basic models,
with the aim of improving the ensemble's accuracy. To improve efficiency, the
approach enables a high degree of parallelism during training. In addition, it
is able to transfer some model parameters from one basic model to another,
which reduces training time. We report on extensive experiments using
real-world multivariate time series that offer insight into the design choices
underlying the new approach and offer evidence that it is capable of improved
accuracy and efficiency. This is an extended version of "Unsupervised Time
Series Outlier Detection with Diversity-Driven Convolutional Ensembles", to
appear in PVLDB 2022. |
c_44324 | Extreme events are occurrences whose magnitude and potential cause extensive
damage to people, infrastructure, and the environment. Motivated by the extreme
nature of the current global health landscape, which is plagued by the
coronavirus pandemic, we seek to better understand and model extreme events.
Modeling extreme events is common in practice and plays an important role in
time-series prediction applications. Our goal is to (i) compare and investigate
the effect of some common extreme events modeling methods to explore which
method can be practical in reality and (ii) accelerate the deep learning
training process, which commonly uses deep recurrent neural network (RNN), by
implementing the asynchronous local Stochastic Gradient Descent (SGD) framework
among multiple compute nodes. In order to verify our distributed extreme events
modeling, we evaluate our proposed framework on a stock data set, S&P 500, with
a standard recurrent neural network. Our intuition is to explore the (best)
extreme events modeling method which could work well under the distributed deep
learning setting. Moreover, by using asynchronous distributed learning, we aim
to significantly reduce the communication cost among the compute nodes and
central server, which is the main bottleneck of almost all distributed learning
frameworks.
We implement our proposed work and evaluate its performance on representative
data sets, such as S&P 500 stock over a $5$-year period. The experimental
results validate the correctness of the design principle and show a significant
training duration reduction of up to $8$x compared to the baseline single
compute node. Our results also show that our proposed work can achieve the same
level
of test accuracy, compared to the baseline setting. |
c_198154 | Deep Learning (DL) models can be used to tackle time series analysis tasks
with great success. However, the performance of DL models can degenerate
rapidly if the data are not appropriately normalized. This issue is even more
apparent when DL is used for financial time series forecasting tasks, where the
non-stationary and multimodal nature of the data pose significant challenges
and severely affect the performance of DL models. In this work, a simple, yet
effective, neural layer, that is capable of adaptively normalizing the input
time series, while taking into account the distribution of the data, is
proposed. The proposed layer is trained in an end-to-end fashion using
back-propagation and leads to significant performance improvements compared to
other evaluated normalization schemes. The proposed method differs from
traditional normalization methods since it learns how to perform normalization
for a given task instead of using a fixed normalization scheme. At the same
time, it can be directly applied to any new time series without requiring
re-training. The effectiveness of the proposed method is demonstrated using a
large-scale limit order book dataset, as well as a load forecasting dataset. |
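A simplified PyTorch sketch of the idea: instead of a fixed z-score, the layer computes per-instance summary statistics and passes them through small learnable linear maps to decide how to shift and scale each series; it is trained end-to-end with whatever model consumes its output. This is a reduced illustration, not the exact layer proposed in the paper.

```python
import torch
import torch.nn as nn

class AdaptiveNorm(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        # Learnable transforms of the per-instance mean and scale,
        # initialised as identity so the layer starts as a plain z-score.
        self.shift = nn.Linear(n_features, n_features, bias=False)
        self.scale = nn.Linear(n_features, n_features, bias=False)
        nn.init.eye_(self.shift.weight)
        nn.init.eye_(self.scale.weight)

    def forward(self, x):                        # x: (batch, time, features)
        mean = x.mean(dim=1)                     # per-instance, per-feature mean
        x = x - self.shift(mean).unsqueeze(1)    # learned, data-dependent shift
        std = torch.sqrt(x.pow(2).mean(dim=1) + 1e-8)
        x = x / self.scale(std).clamp(min=1e-4).unsqueeze(1)  # learned scaling
        return x

layer = AdaptiveNorm(n_features=4)
out = layer(torch.randn(8, 50, 4) * 100 + 500)   # wildly scaled input
print(out.mean().item(), out.std().item())
```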
c_201804 | Time series forecasting is a crucial component of many important
applications, ranging from forecasting the stock markets to energy load
prediction. The high-dimensionality, velocity and variety of the data collected
in these applications pose significant and unique challenges that must be
carefully addressed for each of them. In this work, a novel Temporal Logistic
Neural Bag-of-Features approach, that can be used to tackle these challenges,
is proposed. The proposed method can be effectively combined with deep neural
networks, leading to powerful deep learning models for time series analysis.
However, combining existing BoF formulations with deep feature extractors poses
significant challenges: the distribution of the input features is not
stationary, tuning the hyper-parameters of the model can be especially
difficult and the normalizations involved in the BoF model can cause
significant instabilities during the training process. The proposed method is
capable of overcoming these limitations by employing a novel adaptive scaling
mechanism and replacing the classical Gaussian-based density estimation
involved in the regular BoF model with a logistic kernel. The effectiveness of
the proposed approach is demonstrated using extensive experiments on a
large-scale financial time series dataset that consists of more than 4 million
limit orders. |
c_185506 | A central server needs to perform statistical inference based on samples that
are distributed over multiple users who can each send a message of limited
length to the center. We study problems of distribution learning and identity
testing in this distributed inference setting and examine the role of shared
randomness as a resource. We propose a general-purpose simulate-and-infer
strategy that uses only private-coin communication protocols and is
sample-optimal for distribution learning. This general strategy turns out to be
sample-optimal even for distribution testing among private-coin protocols.
Interestingly, we propose a public-coin protocol that outperforms
simulate-and-infer for distribution testing and is, in fact, sample-optimal.
Underlying our public-coin protocol is a random hash that when applied to the
samples minimally contracts the chi-squared distance of their distribution to
the uniform distribution. |
c_39639 | The ability to perform causal and counterfactual reasoning is a central
property of human intelligence. Decision-making systems that can perform
these types of reasoning have the potential to be more generalizable and
interpretable. Simulations have helped advance the state-of-the-art in this
domain, by providing the ability to systematically vary parameters (e.g.,
confounders) and generate examples of the outcomes in the case of
counterfactual scenarios. However, simulating complex temporal causal events in
multi-agent scenarios, such as those that exist in driving and vehicle
navigation, is challenging. To help address this, we present a high-fidelity
simulation environment that is designed for developing algorithms for causal
discovery and counterfactual reasoning in the safety-critical context. A core
component of our work is to introduce \textit{agency}, such that it is simple
to define and create complex scenarios using high-level definitions. The
vehicles then operate with agency to complete these objectives, meaning
low-level behaviors need only be controlled if necessary. We perform
experiments with three state-of-the-art methods to create baselines and
highlight the affordances of this environment. Finally, we highlight challenges
and opportunities for future work. |
c_229163 | Generative adversarial networks (GANs) have recently been highly successful in
generative applications involving images and are starting to be applied to time
series data. Here we describe EEG-GAN as a framework to generate
electroencephalographic (EEG) brain signals. We introduce a modification to the
improved training of Wasserstein GANs to stabilize training and investigate a
range of architectural choices critical for time series generation (most
notably up- and down-sampling). For evaluation we consider and compare
different metrics such as Inception score, Frechet inception distance and
sliced Wasserstein distance, together showing that our EEG-GAN framework
generated naturalistic EEG examples. It thus opens up a range of new generative
application scenarios in the neuroscientific and neurological context, such as
data augmentation in brain-computer interfacing tasks, EEG super-sampling, or
restoration of corrupted data segments. The possibility to generate signals of
a certain class and/or with specific properties may also open a new avenue for
research into the underlying structure of brain signals. |
c_13298 | Time series forecasting is widely used in business intelligence, e.g., to
forecast stock market prices and sales, and to help analyze data trends. Most
time series of interest are macroscopic time series that are aggregated from
microscopic data. However, instead of directly modeling the macroscopic time
series, little literature has studied forecasting macroscopic time series by
leveraging data at the microscopic level. In this paper, we assume that the
microscopic time series follow some unknown mixture probabilistic
distributions. We theoretically show that as we identify the ground truth
latent mixture components, the estimation of time series from each component
could be improved because of lower variance, thus benefitting the estimation of
macroscopic time series as well. Inspired by the power of Seq2seq and its
variants on the modeling of time series data, we propose Mixture of Seq2seq
(MixSeq), an end2end mixture model to cluster microscopic time series, where
all the components come from a family of Seq2seq models parameterized by
different parameters. Extensive experiments on both synthetic and real-world
data show the superiority of our approach. |
c_260518 | Complex computer simulators are increasingly used across fields of science as
generative models tying parameters of an underlying theory to experimental
observations. Inference in this setup is often difficult, as simulators rarely
admit a tractable density or likelihood function. We introduce Adversarial
Variational Optimization (AVO), a likelihood-free inference algorithm for
fitting a non-differentiable generative model incorporating ideas from
generative adversarial networks, variational optimization and empirical Bayes.
We adapt the training procedure of generative adversarial networks by replacing
the differentiable generative network with a domain-specific simulator. We
solve the resulting non-differentiable minimax problem by minimizing
variational upper bounds of the two adversarial objectives. Effectively, the
procedure results in learning a proposal distribution over simulator
parameters, such that the JS divergence between the marginal distribution of
the synthetic data and the empirical distribution of observed data is
minimized. We evaluate and compare the method with simulators producing both
discrete and continuous data. |
c_104199 | Time-series anomaly detection is a popular topic in both academia and
industrial fields. Many companies need to monitor thousands of temporal signals
for their applications and services and require instant feedback and alerts for
potential incidents in time. The task is challenging because of the complex
characteristics of time-series, which are messy, stochastic, and often without
proper labels. This prohibits training supervised models because of the lack of
labels, and a single model hardly fits different time series. In this paper, we
propose a solution to address these issues. We present an automated model
selection framework to automatically find the most suitable detection model
with proper parameters for the incoming data. The model selection layer is
extensible as it can be updated without too much effort when a new detector is
available to the service. Finally, we incorporate a customized tuning algorithm
to flexibly filter anomalies to meet customers' criteria. Experiments on
real-world datasets show the effectiveness of our solution. |
c_162502 | Research into time series classification has tended to focus on the case of
series of uniform length. However, it is common for real-world time series data
to have unequal lengths. Differing time series lengths may arise from a number
of fundamentally different mechanisms. In this work, we identify and evaluate
two classes of such mechanisms -- variations in sampling rate relative to the
relevant signal and variations between the start and end points of one time
series relative to one another. We investigate how time series generated by
each of these classes of mechanism are best addressed for time series
classification. We perform extensive experiments and provide practical
recommendations on how variations in length should be handled in time series
classification. |
c_117490 | Time series research has gathered a lot of interest in the last decade,
especially for Time Series Classification (TSC) and Time Series Forecasting
(TSF). Research in TSC has greatly benefited from the University of California
Riverside and University of East Anglia (UCR/UEA) Time Series Archives. On the
other hand, the advancement in Time Series Forecasting relies on time series
forecasting competitions such as the Makridakis competitions, NN3 and NN5
Neural Network competitions, and a few Kaggle competitions. Each year,
thousands of papers proposing new algorithms for TSC and TSF have utilized
these benchmarking archives. These algorithms are designed for these specific
problems, but may not be useful for tasks such as predicting the heart rate of
a person using photoplethysmogram (PPG) and accelerometer data. We refer to
this problem as Time Series Extrinsic Regression (TSER), where we are
interested in a more general methodology of predicting a single continuous
value, from univariate or multivariate time series. This prediction can be from
the same time series or not directly related to the predictor time series and
does not necessarily need to be a future value or depend heavily on recent
values. To the best of our knowledge, research into TSER has received much less
attention in the time series research community and there are no models
developed for general time series extrinsic regression problems. Most models
are developed for a specific problem. Therefore, we aim to motivate and support
the research into TSER by introducing the first TSER benchmarking archive. This
archive contains 19 datasets from different domains, with varying numbers of
dimensions, unequal-length dimensions, and missing values. In this paper, we
introduce the datasets in this archive and present an initial benchmark of
existing models. |
c_306538 | The paper focuses on a forecasting method for groups of time series that uses
cluster analysis algorithms. The $K$-means algorithm is suggested as the basic
clustering method. The coordinates of the cluster centers are put in
correspondence with summarizing time series, the centroids of the clusters.
These centroid time series are described with forecasting models based on
strict binary trees and a modified clonal selection algorithm. With the help of
such forecasting models, the possibility of forming analytic dependences is
shown. It is suggested to use a common forecasting model, constructed for the
centroid time series of a cluster, to forecast the private (individual) time
series within that cluster. The promising application of the suggested method
for grouped time series forecasting is demonstrated. |
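A hedged sketch of the overall workflow (cluster the series, build one model per cluster centroid, reuse it for the cluster members). The centroid model here is a plain autoregression rather than the binary-tree and clonal-selection model used in the paper; it only illustrates the scheme on synthetic data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# 20 toy series of length 60, drawn from two distinct seasonal patterns.
base = np.stack([np.sin(np.arange(60) / 3), np.cos(np.arange(60) / 6)])
series = base[rng.integers(0, 2, 20)] + rng.normal(scale=0.1, size=(20, 60))

# 1) Group the series; each cluster centroid summarizes its members.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(series)

def fit_ar(y, p=5):
    X = np.stack([y[i:i + p] for i in range(len(y) - p)])
    return LinearRegression().fit(X, y[p:])

# 2) Fit one forecasting model per centroid.
centroid_models = [fit_ar(c) for c in km.cluster_centers_]

# 3) Forecast an individual series with its cluster's common (centroid) model.
def forecast_next(y, label, p=5):
    return centroid_models[label].predict(y[-p:].reshape(1, -1))[0]

print(forecast_next(series[0], km.labels_[0]))
```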
c_39483 | Data augmentation is a key element of deep learning pipelines, as it informs
the network during training about transformations of the input data that keep
the label unchanged. Manually finding adequate augmentation methods and
parameters for a given pipeline is however rapidly cumbersome. In particular,
while intuition can guide this decision for images, the design and choice of
augmentation policies remains unclear for more complex types of data, such as
neuroscience signals. Besides, class-dependent augmentation strategies have
been surprisingly unexplored in the literature, although it is quite intuitive:
changing the color of a car image does not change the object class to be
predicted, but doing the same to the picture of an orange does. This paper
investigates gradient-based automatic data augmentation algorithms amenable to
class-wise policies with exponentially larger search spaces. Motivated by
supervised learning applications using EEG signals for which good augmentation
policies are mostly unknown, we propose a new differentiable relaxation of the
problem. In the class-agnostic setting, results show that our new relaxation
leads to optimal performance with faster training than competing gradient-based
methods, while also outperforming gradient-free methods in the class-wise
setting. This work proposes also novel differentiable augmentation operations
relevant for sleep stage classification. |
c_250623 | We consider the problem of mining signal temporal logical requirements from a
dataset of regular (good) and anomalous (bad) trajectories of a dynamical
system. We assume the training set to be labeled by human experts and that we
have access only to a limited amount of data, typically noisy. We provide a
systematic approach to synthesize both the syntactical structure and the
parameters of the temporal logic formula using a two-step procedure: first, we
leverage a novel evolutionary algorithm for learning the structure of the
formula; second, we perform the parameter synthesis operating on the
statistical emulation of the average robustness for a candidate formula w.r.t.
its parameters. We compare our results with our previous work [BufoBSBLB14]
and with a recently proposed decision-tree-based [bombara_decision_2016]
method. We present experimental results on two case studies: an anomalous
trajectory detection problem of a naval surveillance system and the
characterization of an Ineffective Respiratory effort, showing the usefulness
of our work. |
c_87902 | Organizations rely heavily on time series metrics to measure and model key
aspects of operational and business performance. The ability to reliably detect
issues with these metrics is imperative to identifying early indicators of
major problems before they become pervasive. It can be very challenging to
proactively monitor a large number of diverse and constantly changing time
series for anomalies, so there are often gaps in monitoring coverage, disabled
or ignored monitors due to false positive alarms, and teams resorting to manual
inspection of charts to catch problems. Traditionally, variations in the data
generation processes and patterns have required strong modeling expertise to
create models that accurately flag anomalies. In this paper, we describe an
anomaly detection system that overcomes this common challenge by keeping track
of its own performance and making changes as necessary to each model without
requiring manual intervention. We demonstrate that this novel approach
outperforms available alternatives on benchmark datasets in many scenarios. |
c_61331 | In the fields of statistics and unsupervised machine learning a fundamental
and well-studied problem is anomaly detection. Anomalies are difficult to
define, yet many algorithms have been proposed. Underlying the approaches is
the nebulous understanding that anomalies are rare, unusual or inconsistent
with the majority of data. The present work provides a philosophical treatise
to clearly define anomalies and develops an algorithm for their efficient
detection with minimal user intervention. Inspired by the Gestalt School of
Psychology and the Helmholtz principle of human perception, anomalies are
assumed to be observations that are unexpected to occur with respect to certain
groupings made by the majority of the data. Under appropriate random variable
modelling, anomalies are directly found in a set of data by a uniform and
independent random assumption of the distribution of constituent elements of
the observations, with anomalies corresponding to those observations where the
expectation of the number of occurrences of the elements in a given view is
$<1$. Starting from fundamental principles of human perception an unsupervised
anomaly detection algorithm is developed that is simple, real-time and
parameter-free. Experiments suggest it as a competing choice for univariate
data with promising results on the detection of global anomalies in
multivariate data. |
c_185325 | Models for predicting the time of a future event are crucial for risk
assessment, across a diverse range of applications. Existing time-to-event
(survival) models have focused primarily on preserving pairwise ordering of
estimated event times, or relative risk. Model calibration is relatively
underexplored, despite its critical importance in time-to-event applications. We
present a survival function estimator for probabilistic predictions in
time-to-event models, based on a neural network model for draws from the
distribution of event times, without explicit assumptions on the form of the
distribution. This is done in a manner similar to adversarial learning, but
without a discriminator or adversarial objective. The proposed
estimator can be used in practice as a means of estimating and comparing
conditional survival distributions, while accounting for the predictive
uncertainty of probabilistic models. Extensive experiments show that the
proposed model outperforms existing approaches, trained both with and without
adversarial learning, in terms of both calibration and concentration of
time-to-event distributions. |
c_117631 | Training models with discrete latent variables is challenging due to the
difficulty of estimating the gradients accurately. Much of the recent progress
has been achieved by taking advantage of continuous relaxations of the system,
which are not always available or even possible. The Augment-REINFORCE-Merge
(ARM) estimator provides an alternative that, instead of relaxation, uses
continuous augmentation. Applying antithetic sampling over the augmenting
variables yields a relatively low-variance and unbiased estimator applicable to
any model with binary latent variables. However, while antithetic sampling
reduces variance, the augmentation process increases variance. We show that ARM
can be improved by analytically integrating out the randomness introduced by
the augmentation process, guaranteeing substantial variance reduction. Our
estimator, DisARM, is simple to implement and has the same computational cost
as ARM. We evaluate DisARM on several generative modeling benchmarks and show
that it consistently outperforms ARM and a strong independent sample baseline
in terms of both variance and log-likelihood. Furthermore, we propose a local
version of DisARM designed for optimizing the multi-sample variational bound,
and show that it outperforms VIMCO, the current state-of-the-art method. |
c_26440 | Civil engineers use numerical simulations of a building's responses to
seismic forces to understand the nature of building failures, the limitations
of building codes, and how to determine the latter to prevent the former. Such
simulations generate large ensembles of multivariate, multiattribute time
series. Comprehensive understanding of this data requires techniques that
support the multivariate nature of the time series and can compare behaviors
that are both periodic and non-periodic across multiple time scales and
multiple time series themselves. In this paper, we present a novel technique to
extract such patterns from time series generated from simulations of seismic
responses. The core of our approach is the use of topic modeling, where topics
correspond to interpretable and discriminative features of the earthquakes. We
transform the raw time series data into a time series of topics, and use this
visual summary to compare temporal patterns in earthquakes, query earthquakes
via the topics across arbitrary time scales, and enable details on demand by
linking the topic visualization with the original earthquake data. We show,
through a surrogate task and an expert study, that this technique allows
analysts to more easily identify recurring patterns in such time series. By
integrating this technique in a prototype system, we show how it enables novel
forms of visual interaction. |