Update README.md
README.md CHANGED
@@ -14,7 +14,7 @@ We evaluate the model on the following benchmarks: [TSLib Dataset](), [GIFT-Eval
 
 # Quickstart
 ```
-pip install transformers==4.40.1 # use this version
+pip install transformers==4.40.1 # please use this version for stable compatibility
 ```
 
 ```
@@ -27,17 +27,19 @@ model = AutoModelForCausalLM.from_pretrained('thuml/timer-base', trust_remote_co
 # prepare input
 batch_size, lookback_length = 1, 2880
 seqs = torch.randn(batch_size, lookback_length)
-mean, std = seqs.mean(dim=-1, keepdim=True), seqs.std(dim=-1, keepdim=True)
+mean, std = seqs.mean(dim=-1, keepdim=True), seqs.std(dim=-1, keepdim=True) # normalize the input to mitigate scale differences
 normed_seqs = (seqs - mean) / std
 
 # forecast
 prediction_length = 96
 normed_output = model.generate(normed_seqs, max_new_tokens=prediction_length)[:, -prediction_length:]
-output = std * normed_output + mean
+output = std * normed_output + mean # rescale the output to the original scale
 
 print(output.shape)
 ```
 
+A notebook example is also provided [here](https://huggingface.co/thuml/timer-1.1-84m/blob/main/prediction_example_etth1.ipynb). Try it out!
+
 ## Specification
 
 * Architecture: Causal Transformer (Decoder-only)
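For context, here is the Quickstart as it reads after this change, stitched into one runnable script. This is a sketch, not part of the diff: the `import` lines and the `trust_remote_code=True` ending of the `from_pretrained` call (truncated in the second hunk header) are assumptions based on the standard Hugging Face API; everything else comes from the hunks above.

```
import torch
from transformers import AutoModelForCausalLM

# load the pretrained model; trust_remote_code=True is an assumed completion
# of the from_pretrained call truncated in the hunk header above
model = AutoModelForCausalLM.from_pretrained('thuml/timer-base', trust_remote_code=True)

# prepare input (random data stands in for a real lookback window)
batch_size, lookback_length = 1, 2880
seqs = torch.randn(batch_size, lookback_length)
mean, std = seqs.mean(dim=-1, keepdim=True), seqs.std(dim=-1, keepdim=True)  # normalize the input to mitigate scale differences
normed_seqs = (seqs - mean) / std

# forecast in normalized space, keeping only the newly generated steps
prediction_length = 96
normed_output = model.generate(normed_seqs, max_new_tokens=prediction_length)[:, -prediction_length:]
output = std * normed_output + mean  # rescale the output to the original scale

print(output.shape)  # expected: torch.Size([1, 96])
```

The two commented lines in the diff are plain per-series standardization: the model generates in standardized space, and the saved mean and std map the forecast back to the original scale, so the model never has to handle absolute scales directly.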