app_file: app.py
pinned: false
---

# Advanced Dual Prompting

This is a playground for testing out Stanford's 'Meta-Prompting' logic ([paper link](https://arxiv.org/abs/2401.12954)), in which every user request is first passed to a 'meta' bot; the meta bot then generates the system prompt of a field-related 'Expert' bot for answering the user's request.
That is, for each round, the LLM assigns the expert best suited to the user's specific request.
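
A rough sketch of that flow (illustrative only, not the Space's actual code: `chat` is a placeholder for a single LLM API call, and `META_SYSTEM_PROMPT` is an assumed wording):

```python
def chat(system_prompt: str, user_message: str) -> str:
    """Placeholder for one LLM call (e.g. against the ChatGLM API)."""
    raise NotImplementedError

# Assumed meta prompt; the real Space's wording will differ.
META_SYSTEM_PROMPT = (
    "Given the user's request, write the system prompt for the single "
    "domain expert best suited to answer it. Reply with that prompt only."
)

def answer(user_request: str) -> str:
    # Step 1: the 'meta' bot drafts a system prompt for a fitting expert.
    expert_system_prompt = chat(META_SYSTEM_PROMPT, user_request)
    # Step 2: the 'expert' bot, primed with that prompt, answers the user.
    return chat(expert_system_prompt, user_request)
```
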
Stanford claimed that this simple scheme results in 60%+ better accuracy compared to a standard 'syst_prompt + chat_history' logic.
Hence, one can't help but be curious and check it out; here is a simple implementation for everybody to play around with.

Some things to keep in mind:
1. Currently it requires an API key from ChatGLM (get one here if you don't have one: [link](https://open.bigmodel.cn/usercenter/apikeys)).
2. To balance contextual understanding against token saving, the meta bot's logic is modified, for now, so that it sees only the last round of chat plus the current user request when 'generating' an expert (see the sketch just below).
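
A minimal sketch of what point 2 could look like (the message format and function name are assumptions, not the app's real code):

```python
def build_meta_context(history: list[dict], user_request: str) -> str:
    """Assemble the meta bot's input from a trimmed chat history.

    `history` is assumed to be a list of {"role": ..., "content": ...}
    turns; only the last user/assistant round is kept to save tokens.
    """
    last_round = history[-2:]  # at most one prior exchange
    lines = [f"{turn['role']}: {turn['content']}" for turn in last_round]
    lines.append(f"user: {user_request}")
    return "\n".join(lines)

# Example: the meta bot sees one prior round plus the new request.
history = [
    {"role": "user", "content": "What is overfitting?"},
    {"role": "assistant", "content": "Overfitting is when a model..."},
]
print(build_meta_context(history, "How do I prevent it?"))
```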