---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-14B-Instruct
datasets:
- ChatTSRepo/ChatTS-Training-Dataset
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---

# [VLDB' 25] ChatTS-14B-GPTQ-4Bit Model

<div style="display:flex;justify-content: center">
<a href="https://github.com/NetManAIOps/ChatTS"><img alt="github" src="https://img.shields.io/badge/Code-GitHub-blue"></a>
<a href="https://arxiv.org/abs/2412.03104"><img alt="preprint" src="https://img.shields.io/static/v1?label=arXiv&message=2412.03104&color=B31B1B&logo=arXiv"></a>
</div>

This is the GPTQ 4-bit quantized version of [ChatTS-14B](https://huggingface.co/bytedance-research/ChatTS-14B).

**[VLDB' 25] ChatTS: Aligning Time Series with LLMs via Synthetic Data for Enhanced Understanding and Reasoning**

`ChatTS` focuses on **understanding and reasoning** about time series, much like what vision/video/audio MLLMs do.
This repo provides the code, datasets, and model for `ChatTS`: [ChatTS: Aligning Time Series with LLMs via Synthetic Data for Enhanced Understanding and Reasoning](https://arxiv.org/pdf/2412.03104).

## Key Features
ChatTS is a multimodal LLM built natively for time series as a core modality:
- ✅ **Native support for multivariate time series**
- ✅ **Flexible input**: supports multivariate time series with **different lengths** and **flexible dimensionality**
- ✅ **Conversational understanding and reasoning**: enables interactive dialogue over time series to explore insights
- ✅ **Preserves raw numerical values**: can answer **statistical questions**, such as _"How large is the spike at timestamp t?"_
- ✅ **Easy integration with existing LLM pipelines**, including support for **vLLM**

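To illustrate the flexible-input feature, the sketch below prepares a multivariate input whose series have different lengths. It is a minimal, model-free example: the metric names are hypothetical, and it assumes the convention used in the Usage section of this card, where each `<ts><ts/>` tag in the prompt corresponds to one series passed to the processor.

```python
import numpy as np

# Two hypothetical metrics with different lengths (for illustration only).
cpu_usage = np.sin(np.arange(256) / 8) * 20.0 + 50.0  # length 256
memory_usage = np.linspace(30.0, 60.0, 512)           # length 512

# One <ts><ts/> tag per series, matching the order of the series list.
series_list = [cpu_usage, memory_usage]
prompt = (
    f"I have 2 metrics. CPU usage of length {len(cpu_usage)}: <ts><ts/>. "
    f"Memory usage of length {len(memory_usage)}: <ts><ts/>. "
    "Please compare the trends of these two time series."
)
print(prompt.count("<ts><ts/>"))  # 2, one tag per series
```

The prepared `series_list` and `prompt` would then be passed to the processor as in the Usage section below.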
### Example Application
Here is an example of a ChatTS application, which allows users to interact with an LLM to understand and reason about time series data:

[Link to the paper](https://arxiv.org/pdf/2412.03104)

[Link to the GitHub repository](https://github.com/NetManAIOps/ChatTS)

## Usage
- This model is fine-tuned from [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct). For more usage details, please refer to the `README.md` in the ChatTS repository.
- An example of using ChatTS with Hugging Face `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoProcessor
import numpy as np

hf_model = "bytedance-research/ChatTS-14B"
# Load the model, tokenizer and processor.
# For pre-Ampere GPUs (like V100), pass `_attn_implementation='eager'`.
model = AutoModelForCausalLM.from_pretrained(hf_model, trust_remote_code=True, device_map="auto", torch_dtype="float16")
tokenizer = AutoTokenizer.from_pretrained(hf_model, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(hf_model, trust_remote_code=True, tokenizer=tokenizer)

# Create a time series and a prompt
timeseries = np.sin(np.arange(256) / 10) * 5.0
timeseries[100:] -= 10.0
prompt = "I have a time series of length 256: <ts><ts/>. Please analyze the local changes in this time series."

# Apply the chat template
prompt = f"""<|im_start|>system
You are a helpful assistant.<|im_end|><|im_start|>user
{prompt}<|im_end|><|im_start|>assistant
"""

# Convert to tensors
inputs = processor(text=[prompt], timeseries=[timeseries], padding=True, return_tensors="pt")

# Generate and decode only the newly generated tokens
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0][len(inputs["input_ids"][0]):], skip_special_tokens=True))
```
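The synthetic series in the example above is a sine wave with a level shift of -10 starting at index 100. As a quick, model-free sanity check (a minimal numpy sketch, not part of the ChatTS API), the local change the prompt asks about can be located from the first differences:

```python
import numpy as np

# Reconstruct the synthetic series from the usage example:
# a sine wave with a sudden drop of 10 starting at index 100.
timeseries = np.sin(np.arange(256) / 10) * 5.0
timeseries[100:] -= 10.0

# The abrupt local change appears as the largest negative first difference.
diffs = np.diff(timeseries)
change_point = int(np.argmin(diffs)) + 1  # first index after the step
print(change_point)  # 100
```

A correct ChatTS answer to the example prompt should identify the same location, so this gives a cheap reference point when experimenting.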

## Reference
- [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- [transformers](https://github.com/huggingface/transformers)
- [ChatTS Paper](https://arxiv.org/pdf/2412.03104)

## License
This model is licensed under the [Apache License 2.0](LICENSE).

## Cite
```bibtex
@article{xie2024chatts,
  title={ChatTS: Aligning Time Series with LLMs via Synthetic Data for Enhanced Understanding and Reasoning},
  author={Xie, Zhe and Li, Zeyan and He, Xiao and Xu, Longlong and Wen, Xidao and Zhang, Tieying and Chen, Jianjun and Shi, Rui and Pei, Dan},
  journal={arXiv preprint arXiv:2412.03104},
  year={2024}
}
```