PyTorch
English
gpt_neox
Haseeb javed committed on
Commit
98e91f0
·
1 Parent(s): f6672b1
README.md ADDED
@@ -0,0 +1,219 @@
---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
widget:
- text: "<human>: Write an email to my friends inviting them to come to my home on Friday for a dinner party, bring their own food to share.\n<bot>:"
  example_title: "Email Writing"
- text: "<human>: Create a list of things to do in San Francisco\n<bot>:"
  example_title: "Brainstorming"
inference:
  parameters:
    temperature: 0.7
    top_p: 0.7
    top_k: 50
    max_new_tokens: 128
---

# RedPajama-INCITE-Chat-3B-v1

RedPajama-INCITE-Chat-3B-v1 was developed by Together and leaders from the open-source AI community, including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, the Stanford Center for Research on Foundation Models (CRFM), the Stanford Hazy Research group, and LAION.

It is fine-tuned on OASST1 and Dolly2 to enhance its chat ability.

- Base Model: [RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1)
- Instruction-tuned Version: [RedPajama-INCITE-Instruct-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1)
- Chat Version: [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1)

## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B-parameter pretrained language model, fine-tuned for chat.

# Quick Start

Please note that the model requires `transformers` version >= 4.25.1.

To prompt the chat model, use the following format:
```
<human>: [Instruction]
<bot>:
```
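
The examples below are single-turn. The card does not specify a multi-turn format, but since the markers are plain text, a reasonable assumption is to concatenate alternating turns and end with an open `<bot>:` marker, as in this sketch:

```python
# Sketch of a multi-turn prompt. Assumption: the chat format simply
# alternates <human>/<bot> turns; this is not specified by the card.
turns = [
    ("human", "Who is Alan Turing?"),
    ("bot", "Alan Turing was a British mathematician and computer scientist."),
    ("human", "What is he most famous for?"),
]
prompt = "\n".join(f"<{role}>: {text}" for role, text in turns) + "\n<bot>:"
print(prompt)
```
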
## GPU Inference

This requires a GPU with at least 8GB of memory.

```python
import torch
import transformers
from packaging import version
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# Check the transformers version. Comparing version strings directly is
# unreliable ('4.3' sorts after '4.25' lexicographically), so parse them first.
assert version.parse(transformers.__version__) >= version.parse(MIN_TRANSFORMERS_VERSION), \
    f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init: load the tokenizer and fp16 weights, then move the model to the GPU
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')

# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)

# decode only the newly generated tokens (everything after the prompt)
tokens = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(tokens)
print(output_str)
"""
Alan Turing was a British mathematician, logician, cryptologist, and computer scientist. He is widely regarded as the father of computer science and artificial intelligence.
"""
```
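
Because the turn markers are ordinary text, sampling can overrun the bot's reply and begin a new `<human>:` turn. Continuing from the example above, one way to cut generation off at that marker is a custom stopping criterion. The `StoppingCriteria` interface is standard `transformers`; the substring check below is an illustrative sketch (it re-decodes the continuation at every step, which is acceptable at this scale):

```python
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnSubstring(StoppingCriteria):
    """Stop generation once `stop_string` appears in the decoded continuation."""
    def __init__(self, tokenizer, stop_string, prompt_length):
        self.tokenizer = tokenizer
        self.stop_string = stop_string
        self.prompt_length = prompt_length

    def __call__(self, input_ids, scores, **kwargs):
        # Decode only what was generated after the prompt and look for the marker.
        text = self.tokenizer.decode(input_ids[0, self.prompt_length:])
        return self.stop_string in text

stopping = StoppingCriteriaList([StopOnSubstring(tokenizer, "<human>:", input_length)])
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7,
    top_k=50, return_dict_in_generate=True, stopping_criteria=stopping,
)
```

Any trailing `<human>:` left in the decoded text can then be stripped before display.
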
## GPU Inference in Int8

This requires a GPU with at least 6GB of memory.

To run inference with int8, please ensure you have installed `accelerate` and `bitsandbytes`. You can install them with the following commands:

```bash
pip install accelerate
pip install bitsandbytes
```

Then you can run inference with int8 as follows:

```python
import torch
import transformers
from packaging import version
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# Check the transformers version. Comparing version strings directly is
# unreliable ('4.3' sorts after '4.25' lexicographically), so parse them first.
assert version.parse(transformers.__version__) >= version.parse(MIN_TRANSFORMERS_VERSION), \
    f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init: load the weights in 8-bit; device placement is handled by accelerate
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)

# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)

# decode only the newly generated tokens (everything after the prompt)
tokens = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(tokens)
print(output_str)
"""
Alan Turing was a British mathematician and computer scientist who made important contributions to computer science and mathematical logic. He is widely regarded as the father of computer science and artificial intelligence for his work on the Turing machine and Turing test.
"""
```
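
Continuing from the int8 example, you can confirm the quantization actually reduced memory with the model's `get_memory_footprint()` method (standard in `transformers`); the figures in the comment are rough expectations, not guarantees:

```python
# ~2.8 GB expected for int8 weights, versus ~5.6 GB for the fp16 checkpoint.
print(f"Model memory footprint: {model.get_memory_footprint() / 1e9:.1f} GB")
```
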
## CPU Inference

```python
import torch
import transformers
from packaging import version
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# Check the transformers version. Comparing version strings directly is
# unreliable ('4.3' sorts after '4.25' lexicographically), so parse them first.
assert version.parse(transformers.__version__) >= version.parse(MIN_TRANSFORMERS_VERSION), \
    f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init: use bfloat16 on CPU (see the note below on fp16)
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1", torch_dtype=torch.bfloat16)

# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)

# decode only the newly generated tokens (everything after the prompt)
tokens = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(tokens)
print(output_str)
"""
Alan Turing was a British mathematician and computer scientist who made important contributions to the fields of mathematics, cryptography, and computer science. He is widely regarded as the father of computer science and artificial intelligence.
"""
```

Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference.

# Uses

Excluded uses are described below.

### Misuse, Malicious Use, and Out-of-Scope Use

It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.

#### Out-of-Scope Use

`RedPajama-INCITE-Chat-3B-v1` is a language model and may not perform well for use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.

#### Misuse and Malicious Use

`RedPajama-INCITE-Chat-3B-v1` is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.

Using the model to generate content that is harmful to individuals is a misuse of this model. This includes, but is not limited to:

- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Producing defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming

## Limitations

`RedPajama-INCITE-Chat-3B-v1`, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.

## Training

**Training Data**

Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).

**Training Procedure**

- **Hardware:** 8 × A100 GPUs
- **Optimizer:** Adam
- **Gradient Accumulation:** 1
- **Number of Tokens:** 131M
- **Learning rate:** 1e-5

## Community

Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4).

## Model Download

The model weights (`pytorch_model.bin`) are stored on Google Drive due to file size limitations. Download the model file from the link below:

[Download Model Weights](https://drive.google.com/uc?id=YOUR_FILE_ID)

After downloading, place the file in the same directory as `app.py` and `config.json` before running the scripts.
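
For scripted downloads, the `gdown` package handles Google Drive's confirmation pages. A minimal sketch, assuming `gdown` is installed (`pip install gdown`) and `YOUR_FILE_ID` is replaced with the real file id from the link above:

```python
import gdown

# Placeholder id; substitute the real file id before running.
url = "https://drive.google.com/uc?id=YOUR_FILE_ID"
gdown.download(url, output="pytorch_model.bin", quiet=False)
```
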
app.py ADDED
@@ -0,0 +1,71 @@
```python
from flask import Flask, request, jsonify
from flask_cors import CORS
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import logging
import os

# Logging setup
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')

# Model location: the current directory. Replace with a Hugging Face model
# repo id (e.g. "togethercomputer/RedPajama-INCITE-Chat-3B-v1") to load
# from the Hub instead of local files.
MODEL_REPO = "./"

# Load tokenizer and model
try:
    logging.info("Loading model and tokenizer...")
    tokenizer = AutoTokenizer.from_pretrained(MODEL_REPO)
    # Use bfloat16 on GPU, float32 on CPU (fp16 LayerNorm is unsupported on CPU).
    dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
    model = AutoModelForCausalLM.from_pretrained(MODEL_REPO, torch_dtype=dtype).to(
        "cuda" if torch.cuda.is_available() else "cpu"
    )
    logging.info("Model loaded successfully.")
except Exception:
    logging.error("Failed to load the model or tokenizer.", exc_info=True)
    raise  # bare raise preserves the original traceback

# Flask app initialization
app = Flask(__name__)
CORS(app)  # Enable CORS

def generate_response(prompt):
    """Generate a response from the model given a prompt."""
    try:
        logging.debug(f"Generating response for prompt: {prompt}")
        inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
        input_length = inputs.input_ids.shape[1]
        outputs = model.generate(
            **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
        )
        # Decode only the newly generated tokens (everything after the prompt).
        tokens = outputs.sequences[0, input_length:]
        output_str = tokenizer.decode(tokens, skip_special_tokens=True)
        logging.debug(f"Generated response: {output_str}")
        return output_str
    except Exception:
        logging.error("Error during response generation", exc_info=True)
        return "Sorry, I encountered an error while generating the response."

@app.route('/chat', methods=['POST'])
def chat():
    """Endpoint to handle chat requests."""
    try:
        logging.debug("Received a POST request to /chat")
        # silent=True returns None instead of raising on malformed JSON.
        data = request.get_json(silent=True)
        logging.debug(f"Request data: {data}")

        if not data or "message" not in data:
            return jsonify({"error": "Invalid request. 'message' field is required."}), 400

        user_input = data.get("message", "")
        prompt = f"<human>: {user_input}\n<bot>:"
        response = generate_response(prompt)
        return jsonify({"response": response}), 200
    except Exception as e:
        logging.error("Error in /chat endpoint", exc_info=True)
        return jsonify({"error": "Internal server error", "message": str(e)}), 500

if __name__ == "__main__":
    # Get the port from the environment variable or default to 5000.
    port = int(os.getenv("PORT", 5000))
    logging.info(f"Starting Flask app on port {port}")
    # debug=True enables the reloader and debugger; disable it in production.
    app.run(debug=True, host="0.0.0.0", port=port)
```
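
Once the server is running, the `/chat` endpoint can be exercised from any HTTP client. A quick check using `requests`, assuming the defaults above (localhost, port 5000):

```python
import requests

# Hypothetical local test; adjust host/port if you changed the defaults.
resp = requests.post(
    "http://localhost:5000/chat",
    json={"message": "Who is Alan Turing?"},
    timeout=120,
)
print(resp.status_code, resp.json())
```
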
config.json ADDED
@@ -0,0 +1,25 @@
```json
{
  "_name_or_path": "/root/fm/models/rp_3b_800b_real_fp16",
  "architectures": [
    "GPTNeoXForCausalLM"
  ],
  "bos_token_id": 0,
  "eos_token_id": 0,
  "hidden_act": "gelu",
  "hidden_size": 2560,
  "initializer_range": 0.02,
  "intermediate_size": 10240,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 2048,
  "model_type": "gpt_neox",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "rotary_emb_base": 10000,
  "rotary_pct": 1.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.28.1",
  "use_cache": true,
  "use_parallel_residual": false,
  "vocab_size": 50432
}
```
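
The "2.8B parameters" figure in the model card can be sanity-checked from these hyperparameters. A back-of-the-envelope sketch for the GPT-NeoX layout, counting only the large weight matrices (so biases and LayerNorm parameters are slightly undercounted):

```python
# Values taken from config.json above.
hidden, inter, layers, vocab = 2560, 10240, 32, 50432

embed = vocab * hidden        # input embeddings
lm_head = vocab * hidden      # output head (tie_word_embeddings is false)
attn = hidden * (3 * hidden) + hidden * hidden  # QKV projection + output projection
mlp = 2 * hidden * inter      # up and down projections
per_layer = attn + mlp

total = embed + lm_head + layers * per_layer
print(f"{total / 1e9:.2f}B parameters")  # ~2.77B, consistent with the card's 2.8B
```
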
generation_config.json ADDED
@@ -0,0 +1,6 @@
```json
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "eos_token_id": 0,
  "transformers_version": "4.28.1"
}
```
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:8207c48c9830f6a0298f67bed916c78c9c27147006ed1f66ee122dcb1fdfd9c4
size 5686106713
```
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
```json
{
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "unk_token": "<|endoftext|>"
}
```
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,9 @@
```json
{
  "add_prefix_space": false,
  "bos_token": "<|endoftext|>",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|endoftext|>",
  "model_max_length": 2048,
  "tokenizer_class": "GPTNeoXTokenizer",
  "unk_token": "<|endoftext|>"
}
```
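
Note that `<human>:` and `<bot>:` are not registered special tokens; only `<|endoftext|>` is. A quick check, as a sketch (assuming the Hub repo id rather than local files):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1")

# The chat markers split into ordinary BPE tokens...
print(tokenizer.tokenize("<human>: Hi\n<bot>:"))
# ...while <|endoftext|> is the single special token (id 0, per config.json).
print(tokenizer.eos_token, tokenizer.eos_token_id)
```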