Update README.md

README.md

[Large time-series model](https://cloud.tsinghua.edu.cn/f/b766629dbc584a4e8563/) introduced in this [paper](https://arxiv.org/abs/2402.02368) and enhanced with our [further work](https://arxiv.org/abs/2410.04803).

The base version is pre-trained on **307B** time points, which supports zero-shot forecasting and further adaptation.

* Zero-shot forecasting benchmarks: [TSLib Dataset](https://cdn-uploads.huggingface.co/production/uploads/64fbe24a2d20ced4e91de38a/n2IW7fTRpuZFMYoPr1h4O.png), [GIFT-Eval]().
* Codebase for fine-tuning: [Large-Time-Series-Model](https://github.com/thuml/Large-Time-Series-Model).

# Quickstart

```python
import torch
from transformers import AutoModelForCausalLM

# load the pre-trained model (trust_remote_code pulls in the Timer model definition)
model = AutoModelForCausalLM.from_pretrained('thuml/timer-base', trust_remote_code=True, token=True)

# prepare input: standardize each series to zero mean and unit variance
seqs = torch.randn(2, 2880)  # batch_size x input_len
mean, std = seqs.mean(dim=-1, keepdim=True), seqs.std(dim=-1, keepdim=True)
normed_seqs = (seqs - mean) / std

# forecast: generate prediction_length new points and keep only the forecast horizon
prediction_length = 96
output = model.generate(normed_seqs, max_new_tokens=prediction_length)[:, -prediction_length:]
print(output.shape)  # torch.Size([2, 96])
```
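
Since the inputs were standardized, the values returned by `generate` are on the normalized scale. A minimal sketch of mapping them back to the original scale, reusing the per-series `mean` and `std` computed above (`prediction` is an illustrative name, not part of the original snippet):

```python
# undo the standardization applied before generation
prediction = output * std + mean  # shape (2, 96), original scale
print(prediction.shape)
```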

## Specification

* Architecture: Causal Transformer (Decoder-only)
* Pre-training Scale: 307B time points
* Context Length: up to 2880
* Parameter Count: 67M
* Patch Length: 96
* Number of Layers: 8
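
For intuition on how these numbers fit together: assuming the model tokenizes a series into non-overlapping patches (the card lists the patch length but does not spell out the tokenization), the maximum context corresponds to 2880 / 96 = 30 patch tokens:

```python
context_length = 2880  # maximum input length in time points
patch_length = 96      # points per patch, per the specification above
print(context_length // patch_length)  # 30 patches cover the full context
```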

## Acknowledgments

Timer is built largely from public time series datasets available on the Internet, which come from different research teams and providers. We sincerely thank all the individuals and organizations who have contributed the data. Without their generous sharing, this model would not exist.

## Citation