DrishtiSharma committed · Commit ee29c87 · verified · Parent(s): 8e669f1

Update prompts.py

Files changed (1):
  1. prompts.py +33 -30
prompts.py CHANGED
@@ -1,55 +1,57 @@
  # RAG prompt
- rag_prompt = """ You are ahelpful assistant very profiient in formulating clear and meaningful answers from the context provided.Based on the CONTEXT Provided ,Please formulate
- a clear concise and meaningful answer for the QUERY asked.Please refrain from making up your own answer in case the COTEXT
- provided is not sufficient to answer the QUERY.In such a situation please respond as 'I do not know'.
+ rag_prompt = """You are a helpful assistant proficient in formulating clear and meaningful answers from the provided context.
+ Based on the CONTEXT, please formulate a clear, concise, and meaningful answer for the QUERY.
+ Do not make up an answer if the CONTEXT is insufficient. Instead, respond with: "I do not know."
+
  QUERY:
  {query}
- CONTEXT
+
+ CONTEXT:
  {context}
+
  ANSWER:
  """

+
  # Context Relevancy Checker Prompt
- relevancy_prompt = """You are an expert judge tasked with evaluating whether the EACH OF THE CONTEXT provided in the CONTEXT LIST is self sufficient to answer the QUERY asked.
- Analyze the provided QUERY AND CONTEXT to determine if each Ccontent in the CONTEXT LIST contains Relevant information to answer the QUERY.
+ relevancy_prompt = """You are an expert judge tasked with evaluating whether EACH CONTEXT in the CONTEXT LIST is sufficient to answer the QUERY.
+ Analyze the provided QUERY and CONTEXT carefully.

  Guidelines:
- 1. The content must not introduce new information beyond what's provided in the QUERY.
+ 1. The content must not introduce new information beyond what is provided in the QUERY.
  2. Pay close attention to the subject of statements. Ensure that attributes, actions, or dates are correctly associated with the right entities (e.g., a person vs. a TV show they star in).
- 6. Be vigilant for subtle misattributions or conflations of information, even if the date or other details are correct.
- 7. Check that the content in the CONTEXT LIST doesn't oversimplify or generalize information in a way that changes the meaning of the QUERY.
+ 3. Be vigilant for subtle misattributions or conflations of information, even if the date or other details are correct.
+ 4. Check that the content in the CONTEXT LIST does not oversimplify or generalize information in a way that changes the meaning of the QUERY.

- Analyze the text thoroughly and assign a relevancy score 0 or 1 where:
- - 0: The content has all the necessary information to answer the QUERY
- - 1: The content does not has the necessary information to answer the QUERY
+ Assign a relevancy score:
+ - 0: The content has all the necessary information to answer the QUERY.
+ - 1: The content does not have the necessary information to answer the QUERY.

- ```
  EXAMPLE:

  INPUT (for context only, not to be used for faithfulness evaluation):
- What is the capital of France?
-
+ QUERY: What is the capital of France?
  CONTEXT:
- ['France is a country in Western Europe. Its capital is Paris, which is known for landmarks like the Eiffel Tower.',
- 'Mr. Naveen patnaik has been the chief minister of Odisha for consequetive 5 terms']
+ 1. "France is a country in Western Europe. Its capital is Paris."
+ 2. "Mr. Naveen Patnaik has been the Chief Minister of Odisha for five terms."

  OUTPUT:
- The Context has sufficient information to answer the query.
-
- RESPONSE:
- {{"score":0}}
- ```
-
- CONTENT LIST:
- {context}
+ [
+ {{"content": 1, "score": 0, "reasoning": "Paris is correctly mentioned as the capital."}},
+ {{"content": 2, "score": 1, "reasoning": "Unrelated to the query."}}
+ ]

  QUERY:
  {retriever_query}
- Provide your verdict in JSON format with a single key 'score' and no preamble or explanation:
- [{{"content:1,"score": <your score either 0 or 1>,"Reasoning":<why you have chose the score as 0 or 1>}},
- {{"content:2,"score": <your score either 0 or 1>,"Reasoning":<why you have chose the score as 0 or 1>}},
- ...]
+
+ CONTEXT LIST:
+ {context}
+
+ Provide your verdict as a JSON list with the keys 'content', 'score', and 'reasoning', and no preamble or explanation:
+ [
+ {{"content": <content_number>, "score": <0 or 1>, "reasoning": "<explanation>"}},
+ {{"content": <content_number>, "score": <0 or 1>, "reasoning": "<explanation>"}}
+ ]
  """

  # Relevant Context Picker Prompt
@@ -82,4 +84,5 @@ Provide your verdict in JSON format with a two key 'relevant_content' and 'cont
  [{{"context_number":<content1>,"relevant_content":<content corresponing to that element 1 in the Content List>}},
  {{"context_number":<content4>,"relevant_content":<content corresponing to that element 4 in the Content List>}},
  ...
- ]"""
+ ]"""
+
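
For reference, a minimal sketch of how these templates might be consumed downstream. It assumes a hypothetical `llm_call` client and that `prompts.py` is importable as a module; none of this wiring is part of the commit itself. It also shows why the literal JSON braces in the templates must stay escaped as `{{ }}`: the strings are filled with Python's `str.format`.

```python
import json

from prompts import rag_prompt, relevancy_prompt


def llm_call(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM client this pipeline uses."""
    raise NotImplementedError


def check_relevancy(query: str, contexts: list[str]) -> list[dict]:
    # Number the contexts so the judge can refer to them by position.
    context_block = "\n".join(f"{i}. {c}" for i, c in enumerate(contexts, start=1))
    # relevancy_prompt is a plain format string: {retriever_query} and
    # {context} are its only live placeholders, which is why every literal
    # JSON brace in the template is escaped as {{ }}.
    prompt = relevancy_prompt.format(retriever_query=query, context=context_block)
    # The prompt demands a bare JSON list, so the raw reply should parse directly.
    return json.loads(llm_call(prompt))


def answer(query: str, contexts: list[str]) -> str:
    verdicts = check_relevancy(query, contexts)
    # Note the inverted convention from the template: score 0 = sufficient,
    # score 1 = insufficient, so we keep the contexts scored 0.
    kept = [contexts[v["content"] - 1] for v in verdicts if v["score"] == 0]
    if not kept:
        return "I do not know."
    return llm_call(rag_prompt.format(query=query, context="\n".join(kept)))
```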