---
license: apache-2.0
library_name: timer
pipeline_tag: time-series-forecasting
---

# Large Time-Series Model (Timer)

Large time-series model introduced in this [paper](https://arxiv.org/abs/2402.02368) and enhanced with our [further work](https://arxiv.org/abs/2410.04803).

This version is pre-trained on **307B** time points and has **84M** parameters: a lightweight generative Transformer with state-of-the-art zero-shot forecasting performance.

We evaluate the model on the following benchmarks: [TSLib Dataset](), [GIFT-Eval]().

# Quickstart

```bash
pip install transformers==4.40.1  # please use this version for stable compatibility
```

```python
import torch
from transformers import AutoModelForCausalLM

# load the pretrained model
model = AutoModelForCausalLM.from_pretrained('thuml/timer-base', trust_remote_code=True, token=True)

# prepare input
batch_size, lookback_length = 1, 2880
seqs = torch.randn(batch_size, lookback_length)

# normalize the input to mitigate scale differences across series
mean, std = seqs.mean(dim=-1, keepdim=True), seqs.std(dim=-1, keepdim=True)
normed_seqs = (seqs - mean) / std

# forecast
prediction_length = 96
normed_output = model.generate(normed_seqs, max_new_tokens=prediction_length)[:, -prediction_length:]

# rescale the output to the original scale
output = std * normed_output + mean
print(output.shape)
```

A notebook example is also provided [here](https://huggingface.co/thuml/timer-1.1-84m/blob/main/prediction_example_etth1.ipynb). Try it out! A sketch of forecasting several series in one call is given at the end of this card.

## Specification

* Architecture: Causal Transformer (decoder-only)
* Pre-training Scale: 307B time points
* Context Length: up to 2880
* Parameter Count: 84M
* Patch Length: 96
* Number of Layers: 8

## Acknowledgments

This work was supported by the National Natural Science Foundation of China (62022050 and U2342217), the BNRist Innovation Fund (BNR2024RC01010), and the National Engineering Research Center for Big Data Software.

The model is largely built from publicly available time-series datasets on the Internet, contributed by many research teams and providers. We sincerely thank all individuals and organizations who have shared their data; without their generous sharing, this model would not exist.

## Citation

```
@inproceedings{liutimer,
  title={Timer: Generative Pre-trained Transformers Are Large Time Series Models},
  author={Liu, Yong and Zhang, Haoran and Li, Chenyu and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
  booktitle={Forty-first International Conference on Machine Learning},
  year={2024}
}

@article{liu2024timer,
  title={Timer-XL: Long-Context Transformers for Unified Time Series Forecasting},
  author={Liu, Yong and Qin, Guo and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
  journal={arXiv preprint arXiv:2410.04803},
  year={2024}
}
```
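## Example: Forecasting Multiple Series at Once

The Quickstart above forecasts a single series. Because each series occupies one row of the input batch and is normalized independently, several independent series can in principle be forecast in one call by stacking them along the batch dimension. The following is a minimal sketch under that assumption; it reuses only the `from_pretrained` and `generate` calls shown above, and the three random series are placeholders for real data.

```python
import torch
from transformers import AutoModelForCausalLM

# Sketch only: assumes the same checkpoint and generate() semantics as in the Quickstart,
# and that independent series can simply be stacked along the batch dimension.
model = AutoModelForCausalLM.from_pretrained('thuml/timer-base', trust_remote_code=True, token=True)

lookback_length, prediction_length = 2880, 96

# three independent series (placeholders for real data), stacked as a batch
series = [torch.randn(lookback_length) for _ in range(3)]
seqs = torch.stack(series)                      # shape: (batch, lookback_length)

# per-series normalization, as in the Quickstart
mean = seqs.mean(dim=-1, keepdim=True)
std = seqs.std(dim=-1, keepdim=True)
normed_seqs = (seqs - mean) / std

# forecast all series in one call and keep only the generated horizon
normed_output = model.generate(normed_seqs, max_new_tokens=prediction_length)[:, -prediction_length:]
output = std * normed_output + mean             # rescale each series back to its original scale

print(output.shape)                             # (3, 96)
```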