---
language:
- en
---

This repository contains the index files required for the `SearchWikipediaTool` of the [grammar-based-agents](https://github.com/krasserm/grammar-based-agents) project.

It is based on the `krasserm/wikipedia-2023-11-en-embed-mxbai-int8-binary` dataset and contains the following files:

- `faiss-ubinary.index`: [Faiss](https://github.com/facebookresearch/faiss) index file containing the `binary` embeddings
- `usearch-int8-index`: [usearch](https://github.com/unum-cloud/usearch) index file containing the `int8` embeddings
- `document-url-mappings.sqlite`: [SQLite](https://www.sqlite.org/) database file containing mappings from document URLs to text chunk indices

The following code snippet demonstrates how to use the index files with the `SearchWikipediaTool`:

```python
from sentence_transformers import CrossEncoder, SentenceTransformer

from gba.client import Llama3Instruct, LlamaCppClient
from gba.tools.search import ContentExtractor, SearchWikipediaTool
from gba.utils import Scratchpad

# Llama 3 model served by a llama.cpp server at the given completion endpoint
llm_model = Llama3Instruct(llm=LlamaCppClient(url="http://localhost:8084/completion", temperature=-1))

# embedding model for encoding search queries, cross-encoder for reranking results
embedding_model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1", device="cuda")
rerank_model = CrossEncoder("mixedbread-ai/mxbai-rerank-large-v1", device="cuda")

search_wikipedia = SearchWikipediaTool(
    llm=llm_model,
    embedding_model=embedding_model,
    rerank_model=rerank_model,
    top_k_nodes=10,
    top_k_related_documents=1,
    top_k_related_nodes=3,
    extractor=ContentExtractor(model=llm_model),
)

response = search_wikipedia.run(
    task="Search Wikipedia for the launch date of the first iPhone.",
    request="",
    scratchpad=Scratchpad(),
)
```