Reshmarb committed on
Commit b410607 · 1 Parent(s): 27eeeac

File saved

Files changed (3)
  1. app.py +469 -39
  2. pred.py +108 -0
  3. requirements.txt +29 -8
app.py CHANGED
@@ -1,3 +1,268 @@
  from groq import Groq
  import gradio as gr
  from gtts import gTTS
@@ -6,6 +271,18 @@ import base64
  from io import BytesIO
  import os
  import logging

  # Set up logger
  logger = logging.getLogger(__name__)
@@ -18,57 +295,175 @@ file_handler.setFormatter(formatter)
  logger.addHandler(console_handler)
  logger.addHandler(file_handler)

- # Initialize Groq Client
  client = Groq(api_key=os.getenv("GROQ_API_KEY_2"))

- # client = Groq(
- # api_key="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",
- # )

  # Function to encode the image
  def encode_image(uploaded_image):
  try:
  logger.debug("Encoding image...")
  buffered = BytesIO()
- uploaded_image.save(buffered, format="PNG") # Ensure the correct format
  logger.debug("Image encoding complete.")
  return base64.b64encode(buffered.getvalue()).decode("utf-8")
  except Exception as e:
  logger.error(f"Error encoding image: {e}")
  raise
- def initialize_messages():
- return [{"role": "system",
- "content": '''You are Dr. HealthBuddy, a highly experienced and professional virtual doctor chatbot with over 40 years of expertise across all medical fields. You provide health-related information, symptom guidance, lifestyle tips, and actionable solutions using a dataset to reference common symptoms and conditions. Your goal is to offer concise, empathetic, and knowledgeable responses tailored to each patient’s needs.
-
- You only respond to health-related inquiries and strive to provide the best possible guidance. Your responses should include clear explanations, actionable steps, and when necessary, advise patients to seek in-person care from a healthcare provider for a proper diagnosis or treatment. Maintain a friendly, professional, and empathetic tone in all your interactions.
-
- Prompt Template:
- - Input: Patient’s health concerns, including symptoms, questions, or specific issues they mention.
- Response: Start with a polite acknowledgment of the patient’s concern. Provide a clear, concise explanation and suggest practical, actionable steps based on the dataset. If needed, advise on when to consult a healthcare provider.
-
- Examples:

- - User: "I have skin rash and itching. What could it be?"
- Response: "According to the data, skin rash and itching are common symptoms of conditions like fungal infections. You can try keeping the affected area dry and clean, and using over-the-counter antifungal creams. If the rash persists or worsens, please consult a dermatologist."

- - User: "What might cause nodal skin eruptions?"
- Response: "Nodal skin eruptions could be linked to conditions such as fungal infections. It's best to monitor the symptoms and avoid scratching. For a proper diagnosis, consider visiting a healthcare provider."

- - User: "I am a 22-year-old female diagnosed with hypothyroidism. I've gained 10 kg recently. What should I do?"
- Response: "Hi. You have done well managing your hypothyroidism. For effective weight loss, focus on a balanced diet rich in vegetables, lean proteins, and whole grains. Pair this with regular exercise like brisk walking or yoga. Also, consult your endocrinologist to ensure your thyroid levels are well-controlled. Let me know if you have more questions."

- - User: "I’ve been feeling discomfort between my shoulder blades after sitting for long periods. What could this be?"
- Response: "Hello. The discomfort between your shoulder blades could be related to posture or strain. Try adjusting your sitting position and consider ergonomic changes to your workspace. Over-the-counter pain relievers or hot compresses may help. If the pain persists, consult an orthopedic specialist for further evaluation."

- Always ensure the tone remains compassionate, and offer educational insights while stressing that you are not a substitute for professional medical advice. Encourage users to consult a healthcare provider for any serious or persistent health concerns.'''
- }]
- messages=initialize_messages()

  def customLLMBot(user_input, uploaded_image, chat_history):
  try:
  global messages
  logger.info("Processing input...")

  # Append user input to the chat history
  chat_history.append(("user", user_input))

@@ -76,12 +471,10 @@ def customLLMBot(user_input, uploaded_image, chat_history):
  # Encode the image to base64
  base64_image = encode_image(uploaded_image)

- # Log the image size and type
  logger.debug(f"Image received, size: {len(base64_image)} bytes")

  # Create a message for the image prompt
- # Create a message specifically for image prompts
- messages_image=[
  {
  "role": "user",
  "content": [
@@ -116,7 +509,7 @@ def customLLMBot(user_input, uploaded_image, chat_history):

  # Append the bot's response to the chat history
  chat_history.append(("bot", LLM_reply))
- messages.append({"role":"assistant","content":LLM_reply})

  # Generate audio for response
  audio_file = f"response_{uuid.uuid4().hex}.mp3"
@@ -128,10 +521,8 @@ def customLLMBot(user_input, uploaded_image, chat_history):
  return chat_history, audio_file

  except Exception as e:
- # Handle errors gracefully
  logger.error(f"Error in customLLMBot function: {e}")
- return [(("user", user_input or "Image uploaded"), ("bot", f"An error occurred: {e}"))], None
-

  # Gradio Interface
  def chatbot_ui():
@@ -157,24 +548,48 @@ def chatbot_ui():
  clear_btn = gr.Button("Clear")
  audio_output = gr.Audio(label="Audio Response")

  # Define actions
  def handle_submit(user_query, image, history):
  logger.info("User submitted a query.")
  response, audio = customLLMBot(user_query, image, history)
- return response, audio, None,"",history # Clear the image after submission

  # Submit on pressing Enter key
  user_input.submit(
  handle_submit,
  inputs=[user_input, uploaded_image, chat_history],
- outputs=[chatbot, audio_output, uploaded_image,user_input, chat_history],
  )

  # Submit on button click
  submit_btn.click(
  handle_submit,
  inputs=[user_input, uploaded_image, chat_history],
- outputs=[chatbot, audio_output, uploaded_image,user_input, chat_history],
  )

  # Action for clearing all fields
@@ -184,9 +599,24 @@ def chatbot_ui():
  outputs=[chatbot, user_input, uploaded_image, chat_history],
  )

  return demo

  # Launch the interface
- chatbot_ui().launch(server_name="0.0.0.0", server_port=7860)

- #chatbot_ui().launch(server_name="localhost", server_port=7860)
+ # from groq import Groq
+ # import gradio as gr
+ # from gtts import gTTS
+ # import uuid
+ # import base64
+ # from io import BytesIO
+ # import os
+ # import logging
+ # import spacy
+ # from transformers import pipeline
+
+ # # Set up logger
+ # logger = logging.getLogger(__name__)
+ # logger.setLevel(logging.DEBUG)
+ # console_handler = logging.StreamHandler()
+ # file_handler = logging.FileHandler('chatbot_log.log')
+ # formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+ # console_handler.setFormatter(formatter)
+ # file_handler.setFormatter(formatter)
+ # logger.addHandler(console_handler)
+ # logger.addHandler(file_handler)
+
+ # # Initialize Groq Client
+ # #client = Groq(api_key=os.getenv("GROQ_API_KEY_2"))
+
+ # client = Groq(
+ # api_key="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",
+ # )
+
+ # # Initialize spaCy NLP model for named entity recognition (NER)
+ # nlp = spacy.load("en_core_web_sm")
+
+ # # Initialize sentiment analysis model using Hugging Face
+ # sentiment_analyzer = pipeline("sentiment-analysis")
+
+ # # Function to preprocess user input for better NLP understanding
+ # def preprocess_input(user_input):
+ # # Clean up text (remove unnecessary characters, standardize)
+ # user_input = user_input.strip().lower()
+ # return user_input
+
+ # # Function for sentiment analysis (optional)
+ # def analyze_sentiment(user_input):
+ # result = sentiment_analyzer(user_input)
+ # return result[0]['label'] # Positive, Negative, or Neutral
+
+ # # Function to extract medical entities from input using NER
+
+ # symptoms = [
+ # "fever", "cough", "headache", "nausea", "pain", "fatigue", "dizziness",
+ # "shortness of breath", "sore throat", "runny nose", "congestion", "diarrhea",
+ # "vomiting", "chills", "sweating", "loss of appetite", "insomnia",
+ # "itching", "rash", "swelling", "bleeding", "burning sensation",
+ # "weakness", "tingling", "numbness", "muscle cramps", "joint pain",
+ # "blurred vision", "double vision", "dry eyes", "sensitivity to light",
+ # "difficulty breathing", "palpitations", "chest pain", "back pain",
+ # "stomach ache", "abdominal pain", "weight loss", "weight gain",
+ # "frequent urination", "difficulty urinating", "anxiety", "depression",
+ # "irritability", "confusion", "memory loss", "bruising"
+ # ]
+ # diseases = [
+ # "diabetes", "cancer", "asthma", "flu", "pneumonia", "hypertension",
+ # "arthritis", "bronchitis", "migraine", "stroke", "heart attack",
+ # "coronary artery disease", "tuberculosis", "malaria", "dengue",
+ # "hepatitis", "anemia", "thyroid disease", "eczema", "psoriasis",
+ # "osteoporosis", "parkinson's", "alzheimer's", "depression",
+ # "anxiety disorder", "schizophrenia", "epilepsy", "bipolar disorder",
+ # "chronic kidney disease", "liver cirrhosis", "HIV", "AIDS",
+ # "covid-19", "cholera", "smallpox", "measles", "mumps",
+ # "rubella", "whooping cough", "obesity", "GERD", "IBS",
+ # "celiac disease", "ulcerative colitis", "Crohn's disease",
+ # "sleep apnea", "hypothyroidism", "hyperthyroidism"
+ # ]
+
+
+ # # Function to extract medical entities
+ # def extract_medical_entities(user_input):
+ # user_input = preprocess_input(user_input)
+ # medical_entities = []
+ # for word in user_input.split():
+ # if word in symptoms or word in diseases:
+ # medical_entities.append(word)
+ # return medical_entities
+ # # def extract_medical_entities(user_input):
+ # # doc = nlp(user_input)
+ # # medical_entities = [ent.text for ent in doc.ents if ent.label_ == "SYMPTOM" or ent.label_ == "DISEASE"]
+ # # print(medical_entities)
+ # # print("This is doc",doc)
+ # # return medical_entities
+
+ # # Function to encode the image
+ # def encode_image(uploaded_image):
+ # try:
+ # logger.debug("Encoding image...")
+ # buffered = BytesIO()
+ # uploaded_image.save(buffered, format="PNG")
+ # logger.debug("Image encoding complete.")
+ # return base64.b64encode(buffered.getvalue()).decode("utf-8")
+ # except Exception as e:
+ # logger.error(f"Error encoding image: {e}")
+ # raise
+
+ # # Initialize messages
+ # def initialize_messages():
+ # return [{"role": "system",
+ # "content": '''You are Dr. HealthBuddy, a professional, empathetic,
+ # and knowledgeable virtual doctor chatbot. Your purpose is to provide health information,
+ # symptom guidance, and lifestyle tips using the uploaded dataset as a reference for common
+ # symptoms and associated conditions.
+
+ # Utilize the dataset to provide information about symptoms and possible conditions for educational purposes.
+ # If a symptom matches data in the dataset, offer users relevant insights, and suggest general management strategies.
+ # Clearly communicate that you are not a substitute for professional medical advice.
+ # Encourage users to consult a licensed healthcare provider for any severe or persistent health issues.
+ # Maintain a friendly and understanding tone in all responses.
+ # Examples:
+
+ # User: "I have skin rash and itching. What could it be?"
+ # Response: "According to the data, skin rash and itching are common symptoms of conditions like fungal infections.
+ # You can try keeping the affected area dry and clean, and using over-the-counter antifungal creams.
+ # If the rash persists or worsens, please consult a dermatologist."
+
+ # User: "What might cause nodal skin eruptions?"
+ # Response: "Nodal skin eruptions could be linked to conditions such as fungal infections.
+ # It's best to monitor the symptoms and avoid scratching.
+ # For a proper diagnosis, consider visiting a healthcare provider.'''}]
+
+
+ # messages = initialize_messages()
+
+ # def customLLMBot(user_input, uploaded_image, chat_history):
+ # try:
+ # global messages
+ # logger.info("Processing input...")
+
+ # # Preprocess the user input
+ # user_input = preprocess_input(user_input)
+
+ # # Analyze sentiment (Optional)
+ # sentiment = analyze_sentiment(user_input)
+ # logger.info(f"Sentiment detected: {sentiment}")
+
+ # # Extract medical entities (Optional)
+ # medical_entities = extract_medical_entities(user_input)
+ # logger.info(f"Extracted medical entities: {medical_entities}")
+
+ # # Append user input to the chat history
+ # chat_history.append(("user", user_input))
+
+ # if uploaded_image is not None:
+ # # Encode the image to base64
+ # base64_image = encode_image(uploaded_image)
+
+ # logger.debug(f"Image received, size: {len(base64_image)} bytes")
+
+ # # Create a message for the image prompt
+ # messages_image = [
+ # {
+ # "role": "user",
+ # "content": [
+ # {"type": "text", "text": "What's in this image?"},
+ # {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{base64_image}"}}
+ # ]
+ # }
+ # ]
+
+ # logger.info("Sending image to Groq API for processing...")
+ # response = client.chat.completions.create(
+ # model="llama-3.2-11b-vision-preview",
+ # messages=messages_image,
+ # )
+ # logger.info("Image processed successfully.")
+ # else:
+ # # Process text input
+ # logger.info("Processing text input...")
+ # messages.append({
+ # "role": "user",
+ # "content": user_input
+ # })
+ # response = client.chat.completions.create(
+ # model="llama-3.2-11b-vision-preview",
+ # messages=messages,
+ # )
+ # logger.info("Text processed successfully.")
+
+ # # Extract the reply
+ # LLM_reply = response.choices[0].message.content
+ # logger.debug(f"LLM reply: {LLM_reply}")
+
+ # # Append the bot's response to the chat history
+ # chat_history.append(("bot", LLM_reply))
+ # messages.append({"role": "assistant", "content": LLM_reply})
+
+ # # Generate audio for response
+ # audio_file = f"response_{uuid.uuid4().hex}.mp3"
+ # tts = gTTS(LLM_reply, lang='en')
+ # tts.save(audio_file)
+ # logger.info(f"Audio response saved as {audio_file}")
+
+ # # Return chat history and audio file
+ # return chat_history, audio_file
+
+ # except Exception as e:
+ # logger.error(f"Error in customLLMBot function: {e}")
+ # return [("user", user_input or "Image uploaded"), ("bot", f"An error occurred: {e}")], None
+
+ # # Gradio Interface
+ # def chatbot_ui():
+ # with gr.Blocks() as demo:
+ # gr.Markdown("# Healthcare Chatbot Doctor")
+
+ # # State for user chat history
+ # chat_history = gr.State([])
+
+ # # Layout for chatbot and input box alignment
+ # with gr.Row():
+ # with gr.Column(scale=3): # Main column for chatbot
+ # chatbot = gr.Chatbot(label="Responses", elem_id="chatbot")
+ # user_input = gr.Textbox(
+ # label="Ask a health-related question",
+ # placeholder="Describe your symptoms...",
+ # elem_id="user-input",
+ # lines=1,
+ # )
+ # with gr.Column(scale=1): # Side column for image and buttons
+ # uploaded_image = gr.Image(label="Upload an Image", type="pil")
+ # submit_btn = gr.Button("Submit")
+ # clear_btn = gr.Button("Clear")
+ # audio_output = gr.Audio(label="Audio Response")
+
+ # # Define actions
+ # def handle_submit(user_query, image, history):
+ # logger.info("User submitted a query.")
+ # response, audio = customLLMBot(user_query, image, history)
+ # return response, audio, None, "", history # Clear the image after submission
+
+ # # Submit on pressing Enter key
+ # user_input.submit(
+ # handle_submit,
+ # inputs=[user_input, uploaded_image, chat_history],
+ # outputs=[chatbot, audio_output, uploaded_image, user_input, chat_history],
+ # )
+
+ # # Submit on button click
+ # submit_btn.click(
+ # handle_submit,
+ # inputs=[user_input, uploaded_image, chat_history],
+ # outputs=[chatbot, audio_output, uploaded_image, user_input, chat_history],
+ # )
+
+ # # Action for clearing all fields
+ # clear_btn.click(
+ # lambda: ([], "", None, []),
+ # inputs=[],
+ # outputs=[chatbot, user_input, uploaded_image, chat_history],
+ # )
+
+ # return demo
+
+ # # Launch the interface
+ # #chatbot_ui().launch(server_name="0.0.0.0", server_port=7860)
+
+ # chatbot_ui().launch(server_name="localhost", server_port=7860)
+
+
  from groq import Groq
  import gradio as gr
  from gtts import gTTS
  from io import BytesIO
  import os
  import logging
+ import spacy
+ from transformers import pipeline
+ import torch
+ from PIL import Image
+ from torchvision import transforms
+ import pathlib
+ import cv2 # Import OpenCV
+ import numpy as np
+
+ # Pathlib adjustment for Windows compatibility
+ temp = pathlib.PosixPath
+ pathlib.PosixPath = pathlib.WindowsPath

  # Set up logger
  logger = logging.getLogger(__name__)
  logger.addHandler(console_handler)
  logger.addHandler(file_handler)

+ #Initialize Groq Client
  client = Groq(api_key=os.getenv("GROQ_API_KEY_2"))

+ # # Initialize Groq Client
+ # client = Groq(api_key="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")
+
+ # Initialize spaCy NLP model for named entity recognition (NER)
+ nlp = spacy.load("en_core_web_sm")
+
+ # Initialize sentiment analysis model using Hugging Face
+ sentiment_analyzer = pipeline("sentiment-analysis")
+
+ # Load pre-trained YOLOv5 model
+ def load_yolov5_model():
+ model = torch.hub.load(
+ r'C:\Users\RESHMA R B\OneDrive\Documents\Desktop\project_without_malayalam\chatbot2\yolov5',
+ 'custom',
+ path=r"C:\Users\RESHMA R B\OneDrive\Documents\Desktop\project_without_malayalam\chatbot2\models\best.pt",
+ source="local"
+ )
+ model.eval()
+ return model
+
+ model = load_yolov5_model()
+
+ # Function to preprocess user input for better NLP understanding
+ def preprocess_input(user_input):
+ user_input = user_input.strip().lower()
+ return user_input
+
+ # Function for sentiment analysis (optional)
+ def analyze_sentiment(user_input):
+ result = sentiment_analyzer(user_input)
+ return result[0]['label']
+
+ # Function to extract medical entities from input using NER
+ symptoms = [
+ "fever", "cough", "headache", "nausea", "pain", "fatigue", "dizziness",
+ "shortness of breath", "sore throat", "runny nose", "congestion", "diarrhea",
+ "vomiting", "chills", "sweating", "loss of appetite", "insomnia",
+ "itching", "rash", "swelling", "bleeding", "burning sensation",
+ "weakness", "tingling", "numbness", "muscle cramps", "joint pain",
+ "blurred vision", "double vision", "dry eyes", "sensitivity to light",
+ "difficulty breathing", "palpitations", "chest pain", "back pain",
+ "stomach ache", "abdominal pain", "weight loss", "weight gain",
+ "frequent urination", "difficulty urinating", "anxiety", "depression",
+ "irritability", "confusion", "memory loss", "bruising"
+ ]
+ diseases = [
+ "diabetes", "cancer", "asthma", "flu", "pneumonia", "hypertension",
+ "arthritis", "bronchitis", "migraine", "stroke", "heart attack",
+ "coronary artery disease", "tuberculosis", "malaria", "dengue",
+ "hepatitis", "anemia", "thyroid disease", "eczema", "psoriasis",
+ "osteoporosis", "parkinson's", "alzheimer's", "depression",
+ "anxiety disorder", "schizophrenia", "epilepsy", "bipolar disorder",
+ "chronic kidney disease", "liver cirrhosis", "HIV", "AIDS",
+ "covid-19", "cholera", "smallpox", "measles", "mumps",
+ "rubella", "whooping cough", "obesity", "GERD", "IBS",
+ "celiac disease", "ulcerative colitis", "Crohn's disease",
+ "sleep apnea", "hypothyroidism", "hyperthyroidism"
+ ]
+
+ def extract_medical_entities(user_input):
+ user_input = preprocess_input(user_input)
+ medical_entities = []
+ for word in user_input.split():
+ if word in symptoms or word in diseases:
+ medical_entities.append(word)
+ return medical_entities

  # Function to encode the image
  def encode_image(uploaded_image):
  try:
  logger.debug("Encoding image...")
  buffered = BytesIO()
+ uploaded_image.save(buffered, format="PNG")
  logger.debug("Image encoding complete.")
  return base64.b64encode(buffered.getvalue()).decode("utf-8")
  except Exception as e:
  logger.error(f"Error encoding image: {e}")
  raise

+ # Initialize messages
+ def initialize_messages():
+ return [{"role": "system", "content": '''You are Dr. HealthBuddy, a professional, empathetic, and knowledgeable virtual doctor chatbot.'''}]

+ messages = initialize_messages()

+ # Function for image prediction using YOLOv5
+ def predict_image(image):
+ try:
+ # Debug: Check if the image is None
+ if image is None:
+ return "Error: No image uploaded.", "No description available."
+
+ # Convert PIL image to NumPy array (OpenCV format)
+ image_np = np.array(image) # Convert PIL image to NumPy array
+
+ # Convert RGB to BGR (OpenCV uses BGR by default)
+ image_np = cv2.cvtColor(image_np, cv2.COLOR_RGB2BGR)
+
+ # Resize the image to match the model's expected input size
+ image_resized = cv2.resize(image_np, (224, 224))
+
+ # Transform the image for the model
+ transform = transforms.Compose([
+ transforms.ToTensor(), # Convert image to tensor
+ transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), # Normalize
+ ])
+ im = transform(image_resized).unsqueeze(0) # Add batch dimension (BCHW)
+
+ # Get predictions
+ with torch.no_grad():
+ output = model(im) # Raw model output (logits)
+
+ # Apply softmax to get confidence scores
+ softmax = torch.nn.Softmax(dim=1)
+ probs = softmax(output)
+
+ # Get the predicted class and its confidence score
+ predicted_class_id = torch.argmax(probs, dim=1).item()
+ confidence_score = probs[0, predicted_class_id].item()
+
+ # Get predicted class name if available
+ if hasattr(model, 'names'):
+ class_name = model.names[predicted_class_id]
+ prediction_result = f"Predicted Class: {class_name}\nConfidence: {confidence_score:.4f}"
+ description = get_description(class_name) # Function to get description
+ else:
+ prediction_result = f"Predicted Class ID: {predicted_class_id}\nConfidence: {confidence_score:.4f}"
+ description = "No description available."

+ # Display the image with OpenCV (optional)
+ cv2.imshow("Processed Image", image_resized)
+ cv2.waitKey(1) # Wait for 1 ms to display the image

+ return prediction_result, description

+ except Exception as e:
+ logger.error(f"Error in image prediction: {e}")
+ return f"An error occurred during image prediction: {e}", "No description available."
+
+ # Function to get description based on predicted class
+ def get_description(class_name):
+ descriptions = {
+ "bcc": "Basal cell carcinoma (BCC) is a type of skin cancer that begins in the basal cells. It often appears as a slightly transparent bump on the skin, though it can take other forms. BCC grows slowly and is unlikely to spread to other parts of the body, but early treatment is important to prevent damage to surrounding tissues.",
+ "atopic": "Atopic dermatitis is a chronic skin condition characterized by itchy, inflamed skin. It is common in individuals with a family history of allergies or asthma.",
+ "acne": "Acne is a skin condition that occurs when hair follicles become clogged with oil and dead skin cells. It often causes pimples, blackheads, and whiteheads, and is most common among teenagers.",
+ # Add more descriptions as needed
+ }
+ return descriptions.get(class_name.lower(), "No description available.")
+
+ # Custom LLM Bot Function
  def customLLMBot(user_input, uploaded_image, chat_history):
  try:
  global messages
  logger.info("Processing input...")

+ # Preprocess the user input
+ user_input = preprocess_input(user_input)
+
+ # Analyze sentiment (Optional)
+ sentiment = analyze_sentiment(user_input)
+ logger.info(f"Sentiment detected: {sentiment}")
+
+ # Extract medical entities (Optional)
+ medical_entities = extract_medical_entities(user_input)
+ logger.info(f"Extracted medical entities: {medical_entities}")
+
  # Append user input to the chat history
  chat_history.append(("user", user_input))

  # Encode the image to base64
  base64_image = encode_image(uploaded_image)

  logger.debug(f"Image received, size: {len(base64_image)} bytes")

  # Create a message for the image prompt
+ messages_image = [
  {
  "role": "user",
  "content": [

  # Append the bot's response to the chat history
  chat_history.append(("bot", LLM_reply))
+ messages.append({"role": "assistant", "content": LLM_reply})

  # Generate audio for response
  audio_file = f"response_{uuid.uuid4().hex}.mp3"
  return chat_history, audio_file

  except Exception as e:
  logger.error(f"Error in customLLMBot function: {e}")
+ return [("user", user_input or "Image uploaded"), ("bot", f"An error occurred: {e}")], None

  # Gradio Interface
  def chatbot_ui():
  clear_btn = gr.Button("Clear")
  audio_output = gr.Audio(label="Audio Response")

+ # New section for image prediction (left and right layout)
+ with gr.Row():
+ # Left side: Upload image
+ with gr.Column():
+ gr.Markdown("### Upload Image for Prediction")
+ prediction_image = gr.Image(label="Upload Image", type="pil")
+ predict_btn = gr.Button("Predict")
+
+ # Right side: Prediction result and description
+ with gr.Column():
+ gr.Markdown("### Prediction Result")
+ prediction_output = gr.Textbox(label="Result", interactive=False)
+
+ # Description column
+ gr.Markdown("### Description")
+ description_output = gr.Textbox(label="Description", interactive=False)
+
+ # Clear button for prediction result (below description box)
+ clear_prediction_btn = gr.Button("Clear Prediction")
+
  # Define actions
  def handle_submit(user_query, image, history):
  logger.info("User submitted a query.")
  response, audio = customLLMBot(user_query, image, history)
+ return response, audio, None, "", history
+
+ # Clear prediction result and image
+ def clear_prediction(prediction_image, prediction_output, description_output):
+ return None, "", ""

  # Submit on pressing Enter key
  user_input.submit(
  handle_submit,
  inputs=[user_input, uploaded_image, chat_history],
+ outputs=[chatbot, audio_output, uploaded_image, user_input, chat_history],
  )

  # Submit on button click
  submit_btn.click(
  handle_submit,
  inputs=[user_input, uploaded_image, chat_history],
+ outputs=[chatbot, audio_output, uploaded_image, user_input, chat_history],
  )

  # Action for clearing all fields
  outputs=[chatbot, user_input, uploaded_image, chat_history],
  )

+ # Action for image prediction
+ predict_btn.click(
+ predict_image,
+ inputs=[prediction_image],
+ outputs=[prediction_output, description_output], # Update both outputs
+ )
+
+ # Action for clearing prediction result and image
+ clear_prediction_btn.click(
+ clear_prediction,
+ inputs=[prediction_image, prediction_output, description_output],
+ outputs=[prediction_image, prediction_output, description_output],
+ )
+
  return demo

  # Launch the interface
+ # chatbot_ui().launch(server_name="localhost", server_port=7860)

+ # Launch the interface
+ chatbot_ui().launch(server_name="0.0.0.0", server_port=7860)
pred.py ADDED
@@ -0,0 +1,108 @@
+
+ import torch
+ from PIL import Image
+ from torchvision import transforms
+ import pathlib
+ temp = pathlib.PosixPath
+ pathlib.PosixPath = pathlib.WindowsPath
+
+
+ model = torch.hub.load(r'C:\Users\RESHMA R B\OneDrive\Documents\Desktop\project_without_malayalam\chatbot2\yolov5', 'custom', path=r"C:\Users\RESHMA R B\OneDrive\Documents\Desktop\project_without_malayalam\chatbot2\models\best.pt", source="local")
+
+
+ img_path = r"C:\Users\RESHMA R B\OneDrive\Documents\Desktop\project_without_malayalam\chatbot2\ACNE.jpg"
+ image = Image.open(img_path)
+
+
+ transform = transforms.Compose([
+ transforms.Resize((224, 224)), # Resize to model's expected input size
+ transforms.ToTensor(), # Convert image to tensor
+ transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), # Normalize
+ ])
+ im = transform(image).unsqueeze(0) # Add batch dimension (BCHW)
+
+
+ output = model(im)
+
+
+ # Get predictions
+ with torch.no_grad():
+ output = model(im) # Raw model output (logits)
+
+ # Apply softmax to get confidence scores
+ softmax = torch.nn.Softmax(dim=1)
+ probs = softmax(output)
+
+ # Get the predicted class and its confidence score
+ predicted_class_id = torch.argmax(probs, dim=1).item()
+ confidence_score = probs[0, predicted_class_id].item()
+
+ # Print predicted class and confidence score
+ print(f"Predicted Class ID: {predicted_class_id}")
+ print(f"Confidence Score: {confidence_score:.4f}")
+
+ # Print predicted class name if available
+ if hasattr(model, 'names'):
+ class_name = model.names[predicted_class_id]
+ print(f"Predicted Class Name: {class_name}")
+
+ # import torch
+ # import cv2 # Import OpenCV
+ # from torchvision import transforms
+ # import pathlib
+
+ # # Pathlib adjustment for Windows compatibility
+ # temp = pathlib.PosixPath
+ # pathlib.PosixPath = pathlib.WindowsPath
+
+ # # Load pre-trained YOLOv5 model
+ # model = torch.hub.load(
+ # r'C:\Users\RESHMA R B\OneDrive\Documents\Desktop\project_without_malayalam\chatbot2\yolov5',
+ # 'custom',
+ # path=r"C:\Users\RESHMA R B\OneDrive\Documents\Desktop\project_without_malayalam\chatbot2\models\best.pt",
+ # source="local"
+ # )
+
+ # # Set model to evaluation mode
+ # model.eval()
+
+ # # Define image transformations (for PyTorch)
+ # transform = transforms.Compose([
+ # transforms.ToTensor(), # Convert image to tensor
+ # transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), # Normalize
+ # ])
+
+ # # Load and preprocess the image using OpenCV
+ # img_path = r"C:\Users\RESHMA R B\OneDrive\Documents\Desktop\project_without_malayalam\chatbot2\ACNE.jpg"
+ # image = cv2.imread(img_path) # Load image in BGR format
+ # image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
+ # image_resized = cv2.resize(image, (224, 224)) # Resize to match model's expected input size
+
+ # # Transform the image for the model
+ # im = transform(image_resized).unsqueeze(0) # Add batch dimension (BCHW)
+
+ # # Get predictions
+ # with torch.no_grad():
+ # output = model(im) # Raw model output (logits)
+
+ # # Apply softmax to get confidence scores
+ # softmax = torch.nn.Softmax(dim=1)
+ # probs = softmax(output)
+
+ # # Get the predicted class and its confidence score
+ # predicted_class_id = torch.argmax(probs, dim=1).item()
+ # confidence_score = probs[0, predicted_class_id].item()
+
+ # # Print predicted class and confidence score
+ # print(f"Predicted Class ID: {predicted_class_id}")
+ # print(f"Confidence Score: {confidence_score:.4f}")
+
+ # # Print predicted class name if available
+ # if hasattr(model, 'names'):
+ # class_name = model.names[predicted_class_id]
+ # print(f"Predicted Class Name: {class_name}")
+
+
+ # cv2.imshow("Input Image", image)
+ # cv2.waitKey(0)
+ # cv2.destroyAllWindows()
requirements.txt CHANGED
@@ -1,8 +1,29 @@
- gtts
- gradio
- groq
- loguru
- # torch
- # transformers
- # torchvision
- # pillow
+ # Core Libraries
+ numpy==1.24.3
+ pandas==2.0.2
+ scipy==1.10.1
+
+ # Machine Learning & Deep Learning
+ torch==2.0.1
+ torchvision==0.15.2
+ transformers==4.30.2
+ scikit-learn==1.2.2
+ ultralytics==8.0.124
+
+ # Image Processing
+ pillow==9.5.0
+ opencv-python==4.7.0.72
+
+ # NLP
+ spacy==3.5.3
+
+ # Visualization
+ matplotlib==3.7.1
+
+ # Gradio & Audio
+ gradio==3.32.0
+ gtts==2.3.2
+
+ # API Integration
+ groq==0.1.0
+ requests==2.28.2