krishanusinha20 committed
Commit d992d55 · 1 Parent(s): 3cf4517

Upload 5 files


Add files for meta RAG application

Files changed (5)
  1. Dockerfile +11 -0
  2. README.md +184 -5
  3. app.py +109 -0
  4. chainlit.md +3 -0
  5. requirements.txt +9 -0
Dockerfile ADDED
```dockerfile
FROM python:3.8.15
RUN useradd -m -u 1000 user
USER user
ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH
WORKDIR $HOME/app
# Copy the app (including requirements.txt) in one step. Note: Docker's COPY
# does not expand "~", so paths must use $HOME explicitly.
COPY --chown=user . $HOME/app
RUN pip install -r requirements.txt
CMD ["chainlit", "run", "app.py", "--port", "7860"]
```
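To run the resulting image locally, the container needs the same secrets that `app.py` reads from the environment. A sketch of the build-and-run commands (the key values are placeholders you must supply yourself):

```shell
docker build -t llm-app .
docker run -p 7860:7860 \
  -e OPENAI_API_KEY="sk-..." \
  -e PINECONE_API_KEY="..." \
  -e PINECONE_ENV="..." \
  llm-app
```

On Hugging Face Spaces, these same variables come from the Space's `Variables and secrets` settings instead of `-e` flags.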
README.md CHANGED
````diff
@@ -1,10 +1,189 @@
 ---
-title: Meta Rag
-emoji: 🏃
-colorFrom: indigo
-colorTo: green
+title: BeyondChatGPT Demo
+emoji: 📉
+colorFrom: pink
+colorTo: yellow
 sdk: docker
 pinned: false
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+<p align="center" draggable="false"><img src="https://github.com/AI-Maker-Space/LLM-Dev-101/assets/37101144/d1343317-fa2f-41e1-8af1-1dbb18399719"
+width="200px"
+height="auto"/>
+</p>
+
+## <h1 align="center" id="heading">:wave: Welcome to Beyond ChatGPT!</h1>
+
+For a step-by-step YouTube video walkthrough, watch this: [Deploying a Chainlit app on Hugging Face](https://www.youtube.com/live/pRbbZcL0NMI?si=NAYhMZ_suAY84f06&t=2119)
+
+![Beyond ChatGPT: Build Your First LLM Application](https://github.com/AI-Maker-Space/Beyond-ChatGPT/assets/48775140/cb7a74b8-28af-4d12-a008-8f5a51d47b4c)
+
+## 🤖 Your First LLM App
+
+> If you need an introduction to `git`, or information on how to set up API keys for the tools we'll be using in this repository, check out our [Interactive Dev Environment for LLM Development](https://github.com/AI-Maker-Space/Interactive-Dev-Environment-for-LLM-Development/tree/main), which has everything you need to get started!
+
+In this repository, we'll walk you through the steps to create a Large Language Model (LLM) application using Chainlit, then containerize it using Docker, and finally deploy it on Hugging Face Spaces.
+
+Are you ready? Let's get started!
+
+<details>
+  <summary>🖥️ Accessing "gpt-3.5-turbo" (ChatGPT) like a developer</summary>
+
+1. Head to [this notebook](https://colab.research.google.com/drive/1mOzbgf4a2SP5qQj33ZxTz2a01-5eXqk2?usp=sharing) and follow along with the instructions!
+
+2. Complete the notebook and try out your own system/assistant messages!
+
+That's it! Head to the next step and start building your application!
+
+</details>
+
+<details>
+  <summary>🏗️ Building Your First LLM App</summary>
+
+1. Clone [this](https://github.com/AI-Maker-Space/Beyond-ChatGPT/tree/main) repo.
+
+```bash
+git clone https://github.com/AI-Maker-Space/Beyond-ChatGPT.git
+```
+
+2. Navigate inside this repo.
+```bash
+cd Beyond-ChatGPT
+```
+
+3. Install the packages required for this Python environment from `requirements.txt`.
+```bash
+pip install -r requirements.txt
+```
+
+4. Open your `.env` file. Replace the `###` in your `.env` file with your OpenAI key and save the file.
+```bash
+OPENAI_API_KEY=sk-###
+```
+
+5. Let's try deploying it locally. Make sure you're in the Python environment where you installed Chainlit and OpenAI. Run the app using Chainlit. This may take a minute to run.
+```bash
+chainlit run app.py -w
+```
+
+<p align="center" draggable="false">
+  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/54bcccf9-12e2-4cef-ab53-585c1e2b0fb5">
+</p>
+
+Great work! Let's see if we can interact with our chatbot.
+
+<p align="center" draggable="false">
+  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/854e4435-1dee-438a-9146-7174b39f7c61">
+</p>
+
+Awesome! Time to throw it into a Docker container and prepare it for shipping!
+</details>
+
+<details>
+  <summary>🐳 Containerizing our App</summary>
+
+1. Let's build the Docker image. We'll tag our image as `llm-app` using the `-t` parameter. The `.` at the end means we want all of the files in our current directory to be added to our image.
+
+```bash
+docker build -t llm-app .
+```
+
+2. Run and test the Docker image locally using the `run` command. The `-p` parameter maps the **host port** (left of the `:`) to the **container port** (right of the `:`).
+
+```bash
+docker run -p 7860:7860 llm-app
+```
+
+3. Visit http://localhost:7860 in your browser to see if the app runs correctly.
+
+<p align="center" draggable="false">
+  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/2c764f25-09a0-431b-8d28-32246e0ca1b7">
+</p>
+
+Great! Time to ship!
+</details>
+
+<details>
+  <summary>🚀 Deploying Your First LLM App</summary>
+
+1. Let's create a new Hugging Face Space. Navigate to [Hugging Face](https://huggingface.co) and click on your profile picture in the top right. Then click on `New Space`.
+
+<p align="center" draggable="false">
+  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/f0656408-28b8-4876-9887-8f0c4b882bae">
+</p>
+
+2. Set up your Space as shown below:
+
+- Owner: Your username
+- Space Name: `llm-app`
+- License: `Openrail`
+- Select the Space SDK: `Docker`
+- Docker Template: `Blank`
+- Space Hardware: `CPU basic - 2 vCPU - 16 GB - Free`
+- Repo type: `Public`
+
+<p align="center" draggable="false">
+  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/8f16afd1-6b46-4d9f-b642-8fefe355c5c9">
+</p>
+
+3. You should see something like this. We're now ready to send our files to our Hugging Face Space. After cloning, move your files into this repo and push them along with the provided Dockerfile; you DO NOT need to create a new one. Make sure NOT to push your `.env` file; it should be ignored automatically.
+
+<p align="center" draggable="false">
+  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/cbf366e2-7613-4223-932a-72c67a73f9c6">
+</p>
+
+4. After pushing all files, navigate to the settings in the top right to add your OpenAI API key.
+
+<p align="center" draggable="false">
+  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/a1123a6f-abdd-4f76-bea4-39acf9928762">
+</p>
+
+5. Scroll down to `Variables and secrets` and click on `New secret` in the top right.
+
+<p align="center" draggable="false">
+  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/a8a4a25d-752b-4036-b572-93381370c2db">
+</p>
+
+6. Set the name to `OPENAI_API_KEY` and add your OpenAI key under `Value`. Click save.
+
+<p align="center" draggable="false">
+  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/0a897538-1779-48ff-bcb4-486af30f7a14">
+</p>
+
+7. To ensure your key is being used, we recommend you `Restart this Space`.
+
+<p align="center" draggable="false">
+  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/fb1d83af-6ebe-4676-8bf5-b6d88f07c583">
+</p>
+
+8. Congratulations! You just deployed your first LLM! 🚀🚀🚀 Get on LinkedIn and post your results and experience! Make sure to tag us at #AIMakerspace!
+
+Here's a template to get your post started!
+
+```
+🚀🎉 Exciting News! 🎉🚀
+
+🏗️ Today, I'm thrilled to announce that I've successfully built and shipped my first-ever LLM using the powerful combination of Chainlit, Docker, and the OpenAI API! 🖥️
+
+Check it out 👇
+[LINK TO APP]
+
+A big shoutout to @**AI Makerspace** for making all of this possible. Couldn't have done it without the incredible community there. 🤗🙏
+
+Looking forward to building with the community! 🙌✨ Here's to many more creations ahead! 🥂🎉
+
+Who else is diving into the world of AI? Let's connect! 🌐💡
+
+#FirstLLM #Chainlit #Docker #OpenAI #AIMakerspace
+```
+
+</details>
+
+<p></p>
+
+### That's it for now! And so it begins.... :)
````
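Step 4 above stores the OpenAI key in a local `.env` file, and the deployment steps store it as a Space secret; either way the app reads it from the environment at startup. A minimal, dependency-free sketch of failing fast when the key is missing (`require_api_key` is a hypothetical helper, not part of this repo):

```python
import os


def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the named key, or raise a clear error before the app starts."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to .env or your Space secrets")
    return value
```

In the app itself, `python-dotenv` (already pinned in `requirements.txt`) can populate `os.environ` from the `.env` file via `load_dotenv()` before this check runs.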
app.py ADDED
```python
import os
from operator import itemgetter

import openai  # OpenAI client for API usage
import pinecone
import chainlit as cl  # Chainlit for the chat UI

from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings import CacheBackedEmbeddings
from langchain.storage import LocalFileStore
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.schema.runnable import RunnablePassthrough

# Connect to Pinecone and OpenAI using environment variables.
pinecone.init(
    api_key=os.environ.get("PINECONE_API_KEY"),
    environment=os.environ.get("PINECONE_ENV"),
)

openai.api_key = os.environ.get("OPENAI_API_KEY")

index_name = "arxiv-paper-index"

template = """Answer the following question based only on the provided context. If you are not able to answer the question from the context, please don't answer the question.

### CONTEXT
{context}

### QUESTION
{question}
"""

prompt = ChatPromptTemplate.from_template(template)

welcome_message = "Welcome to the Chainlit Pinecone demo! Ask anything about documents you vectorized and stored in your Pinecone DB."


@cl.on_chat_start
async def start():
    await cl.Message(content=welcome_message).send()

    # Cache embeddings on local disk so repeated queries aren't re-embedded.
    store = LocalFileStore("./cache/")
    core_embeddings_model = OpenAIEmbeddings()
    embedder = CacheBackedEmbeddings.from_bytes_store(
        core_embeddings_model, store, namespace=core_embeddings_model.model
    )

    # We've created our index; wrap it in a LangChain VectorStore so we can use
    # it with the rest of the LangChain ecosystem.
    text_field = "text"
    index = pinecone.Index(index_name)
    vectorstore = Pinecone(index, embedder.embed_query, text_field)

    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    retriever = vectorstore.as_retriever()

    retrieval_augmented_qa_chain = (
        {
            "context": itemgetter("question") | retriever,
            "question": itemgetter("question"),
        }
        | RunnablePassthrough.assign(context=itemgetter("context"))
        | {
            "response": prompt | llm,
            "context": itemgetter("context"),
        }
    )
    cl.user_session.set("chain", retrieval_augmented_qa_chain)


@cl.on_message
async def main(message: cl.Message):
    chain = cl.user_session.get("chain")
    res = chain.invoke({"question": message.content})
    # res["response"] is an AIMessage; send its text content back to the user.
    answer = res["response"].content
    await cl.Message(content=answer).send()
```
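The chain in `app.py` fills a fixed template with the retrieved context and the user's question before calling the model. A dependency-free sketch of that substitution step, with `str.format` standing in for `ChatPromptTemplate` (names and example strings are illustrative):

```python
# Mirror of the template defined in app.py.
TEMPLATE = """Answer the following question based only on the provided context.

### CONTEXT
{context}

### QUESTION
{question}
"""


def build_prompt(context: str, question: str) -> str:
    """Substitute retrieved context and the user question into the template."""
    return TEMPLATE.format(context=context, question=question)


filled = build_prompt(
    context="Retrieved arXiv passages go here.",
    question="What does the paper propose?",
)
```

In the real chain, `itemgetter("question") | retriever` produces the context documents, and the filled prompt is piped into `ChatOpenAI` rather than returned as a string.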
chainlit.md ADDED
```markdown
# Beyond ChatGPT

This Chainlit app was created following instructions from [this repository](https://github.com/AI-Maker-Space/Beyond-ChatGPT)!
```
requirements.txt ADDED
```text
chainlit==0.7.700
cohere==4.37
openai==1.3.5
tiktoken==0.5.1
python-dotenv==1.0.0
pinecone-client==2.2.4
pydantic==1.10.8
arxiv
langchain
```