|
A demo for multilingual retrieval.
|
|
|
### Download
|
|
|
Run the following commands in a bash script:
|
|
|
```shell |
|
git clone https://hf-mirror.com/datasets/cfli/ret_demo |
|
# The data fetched by git clone is incomplete; delete it all and re-download.
|
for lang in "ar" "bn" "de" "en" "es" "fa" "fi" "fr" "hi" "id" "ja" "ko" "ru" "sw" "te" "th" "yo" "zh" |
|
do |
|
# Remove the stale files
|
rm -f ./ret_demo/data/${lang}/corpus.jsonl |
|
rm -f ./ret_demo/data/${lang}/dev_qrels.jsonl |
|
rm -f ./ret_demo/data/${lang}/dev_queries.jsonl |
|
rm -f ./ret_demo/emb/${lang}/corpus.npy |
|
done |
|
|
|
for lang in "ar" "bn" "de" "en" "es" "fa" "fi" "fr" "hi" "id" "ja" "ko" "ru" "sw" "te" "th" "yo" "zh" |
|
do |
|
# Download each file and move it into place
|
wget https://hf-mirror.com/datasets/cfli/ret_demo/resolve/main/emb/${lang}/corpus.npy |
|
mv corpus.npy ./ret_demo/emb/${lang}/ |
|
|
|
wget https://hf-mirror.com/datasets/cfli/ret_demo/resolve/main/data/${lang}/corpus.jsonl |
|
mv corpus.jsonl ./ret_demo/data/${lang}/ |
|
|
|
wget https://hf-mirror.com/datasets/cfli/ret_demo/resolve/main/data/${lang}/dev_qrels.jsonl |
|
mv dev_qrels.jsonl ./ret_demo/data/${lang}/ |
|
|
|
wget https://hf-mirror.com/datasets/cfli/ret_demo/resolve/main/data/${lang}/dev_queries.jsonl |
|
mv dev_queries.jsonl ./ret_demo/data/${lang}/ |
|
done |
|
|
|
``` |
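For context, each `emb/${lang}/corpus.npy` holds the dense embeddings of the passages in the corresponding `data/${lang}/corpus.jsonl`, and retrieval amounts to nearest-neighbor search over those vectors. A minimal numpy sketch with synthetic vectors (the demo itself uses faiss over the downloaded embeddings):

```python
import numpy as np

# Synthetic stand-ins for emb/${lang}/corpus.npy (one row per passage)
# and for an encoded query vector.
corpus_emb = np.array([[1.0, 0.0],
                       [0.0, 1.0],
                       [0.7, 0.7]], dtype=np.float32)
query_emb = np.array([0.9, 0.1], dtype=np.float32)

# Inner-product search, as a faiss.IndexFlatIP would do over the real embeddings.
scores = corpus_emb @ query_emb
top_k = np.argsort(-scores)[:2]  # indices of the 2 best-matching passages
```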
|
|
|
### Dependencies
|
|
|
```shell |
|
pip install gradio |
|
pip install -U FlagEmbedding |
|
pip install https://github.com/kyamagu/faiss-wheels/releases/download/v1.7.3/faiss_gpu-1.7.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl |
|
``` |
|
|
|
### Usage
|
|
|
1. If the model has not been downloaded before, set `export HF_ENDPOINT=https://hf-mirror.com` before running `app.py`.
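For example:

```shell
export HF_ENDPOINT=https://hf-mirror.com
python app.py
```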
|
|
|
2. To download the model to a specific local path first, use code like the following:
|
|
|
```python |
|
import os |
|
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com' |
|
|
|
save_path = './save_model' |
|
|
|
from transformers import AutoTokenizer, AutoModel |
|
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-multilingual-gemma2') |
|
model = AutoModel.from_pretrained('BAAI/bge-multilingual-gemma2') |
|
|
|
tokenizer.save_pretrained(save_path) |
|
model.save_pretrained(save_path) |
|
``` |
|
|
|
3. Modify the model path on line 23 of `utils.py`: set `self_model_path` to either the `save_path` above or `self_model_path = model_path`.
|
|
|
4. To load the model on CPU, modify lines 25-45 of `utils.py` as follows:
|
|
|
```python |
|
from FlagEmbedding import FlagAutoModel, FlagLLMModel

def load_model_util(previous_model, model_path):
    self_model_path = ''  # set to the save_path above, or self_model_path = model_path
    if model_path == 'BAAI/bge-multilingual-gemma2':
        if previous_model is not None and previous_model.model_name_or_path == self_model_path:
            return previous_model
        model = FlagLLMModel(self_model_path,
                             query_instruction_for_retrieval="Given a question, retrieve Wikipedia passages that answer the question.",
                             query_instruction_format="<instruct>{}\n<query>{}",
                             use_fp16=False,
                             devices=['cpu'])
    else:
        if previous_model is not None and previous_model.model_name_or_path == model_path:
            return previous_model
        model = FlagAutoModel.from_finetuned(model_path,
                                             use_fp16=False,
                                             devices=['cpu'])
    if previous_model is not None:
        del previous_model
    return model
|
``` |
|
|
|
5. Modify the values of `data_dir` and `index_dir` on lines 11 and 12 of `app.py` to point to your local data and index paths.
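For instance, following the layout created by the download script (the exact `index_dir` location is an assumption; use wherever you keep your indexes):

```python
# app.py, lines 11-12 — point at the local copies:
data_dir = './ret_demo/data'
index_dir = './ret_demo/emb'
```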
|
|
|
6. To save the `faiss index` each run, modify line 81 of `app.py` to call `faiss.write_index(faiss_index, index_path)` (loading an index takes about as long as building one, so saving it is not strictly necessary).
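A sketch of the change, assuming the `faiss_index` and `index_path` variables as they appear in `app.py`:

```python
# app.py, around line 81 — persist the index after building it:
faiss.write_index(faiss_index, index_path)

# Later runs can then load it instead of rebuilding:
# faiss_index = faiss.read_index(index_path)
```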
|
|
|
7. Run `app.py`.