Create prompts.py
prompts.py  ADDED  (+57 -0)
@@ -0,0 +1,57 @@
# RAG prompt
rag_prompt = """You are a helpful assistant, very proficient in formulating clear and meaningful answers from the context provided. Based on the CONTEXT provided, please formulate
a clear, concise and meaningful answer for the QUERY asked. Please refrain from making up your own answer in case the CONTEXT
provided is not sufficient to answer the QUERY. In such a situation, please respond with 'I do not know'.
QUERY:
{query}
CONTEXT:
{context}
ANSWER:
"""

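# Usage sketch (an assumption, not part of the original commit): a small helper
# showing how the `rag_prompt` template above could be filled with a user query
# and a list of retrieved chunks via plain str.format. The helper name and the
# newline-joining of chunks are illustrative choices only.
def build_rag_prompt(query: str, retrieved_chunks: list[str]) -> str:
    """Fill the RAG template with the query and the retrieved context."""
    return rag_prompt.format(query=query, context="\n".join(retrieved_chunks))
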
# Context Relevancy Checker Prompt
relevancy_prompt = """You are an expert judge tasked with evaluating whether EACH OF THE CONTEXTS provided in the CONTEXT LIST is self-sufficient to answer the QUERY asked.
Analyze the provided QUERY AND CONTEXT to determine if each content in the CONTEXT LIST contains relevant information to answer the QUERY.

Guidelines:
1. The content must not introduce new information beyond what's provided in the QUERY.
2. Pay close attention to the subject of statements. Ensure that attributes, actions, or dates are correctly associated with the right entities (e.g., a person vs. a TV show they star in).
3. Be vigilant for subtle misattributions or conflations of information, even if the date or other details are correct.
4. Check that the content in the CONTEXT LIST doesn't oversimplify or generalize information in a way that changes the meaning of the QUERY.

Analyze the text thoroughly and assign a relevancy score of 0 or 1, where:
- 0: The content has all the necessary information to answer the QUERY
- 1: The content does not have the necessary information to answer the QUERY

```
EXAMPLE:

INPUT (for context only, not to be used for faithfulness evaluation):
What is the capital of France?

CONTEXT:
['France is a country in Western Europe. Its capital is Paris, which is known for landmarks like the Eiffel Tower.',
'Mr. Naveen Patnaik has been the Chief Minister of Odisha for 5 consecutive terms']

OUTPUT:
The Context has sufficient information to answer the query.

RESPONSE:
{{"score":0}}
```

CONTEXT LIST:
{context}

QUERY:
{retriever_query}
Provide your verdict in JSON format, with no preamble or explanation, as a list with one object per content (keys 'content', 'score' and 'Reasoning'):
[{{"content":1,"score": <your score, either 0 or 1>,"Reasoning": <why you chose the score of 0 or 1>}},
{{"content":2,"score": <your score, either 0 or 1>,"Reasoning": <why you chose the score of 0 or 1>}},
...]

"""

#