Yong99 committed
Commit 5592854 · verified
1 Parent(s): aca2b5c

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -20,13 +20,13 @@ tags:
 
 Large time-series model introduced in this [paper](https://arxiv.org/abs/2402.02368) and enhanced with our [further work](https://arxiv.org/abs/2410.04803).
 
-This version is pre-trained on **307B** time points with **84M** parameters, a lightweight generative Transformer with the state-of-the-art performance on zero-shot forecasting:
+This version is pre-trained on **260B** time points with **84M** parameters, a lightweight generative Transformer with the state-of-the-art performance on zero-shot forecasting:
 
 We evaluate the model on the following benchmarks: [TSLib Dataset](), [GIFT-Eval]().
 
 # Quickstart
 ```
-pip install transformers==4.40.1 # please use this version for the stable compatibility
+pip install transformers==4.40.1 # use this version for the stable compatibility
 ```
 
 ```
@@ -57,7 +57,7 @@ A notebook example is also provided [here](https://huggingface.co/thuml/timer-1.
 ## Specification
 
 * Architecture: Causal Transformer (Decoder-only)
-* Pre-training Scale: 307B time points
+* Pre-training Scale: 260B time points
 * Context Length: up to 2880
 * Parameter Count: 84M
 * Patch Length: 96
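
The Quickstart hunk above pins transformers==4.40.1, but the README's actual usage snippet falls outside the diff context. As a rough illustration only, the sketch below assumes the checkpoint is served through the standard transformers auto-class with remote code enabled; the repository id, the use of AutoModelForCausalLM, and the generate-based forecasting call are assumptions for this sketch, not content taken from this commit.

```python
# Hypothetical usage sketch (not part of this commit's diff).
# Assumes the checkpoint follows the standard transformers remote-code pattern;
# the repo id and the generate() call below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "thuml/timer-base-84m",   # assumed repo id, adjust to the actual checkpoint
    trust_remote_code=True,   # the time-series architecture ships as custom code
)

batch_size, lookback = 1, 2880                 # up to the listed 2880-point context
seqs = torch.randn(batch_size, lookback)       # replace with real, normalized series
forecast = model.generate(seqs, max_new_tokens=96)  # forecast the next 96 points
print(forecast.shape)
```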
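
For readers comparing the figures in the Specification list: assuming non-overlapping patches, the listed patch length of 96 means the maximal context of 2880 time points corresponds to 2880 / 96 = 30 patch tokens, and each autoregressively generated token covers 96 future points.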