---
language:
- zh
- en
---
ChatTruth-7B
ChatTruth-7B is built on top of Qwen-VL and further trained on carefully curated data. Compared with Qwen-VL, its Chinese dialogue capability is substantially improved. It also introduces a novel Restore Module that greatly reduces the computational cost of high-resolution inputs.
Requirements
- transformers 4.32.0
- python 3.8 and above
- pytorch 1.13 and above
- CUDA 11.4 and above
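
Before running the quickstart, you can verify the environment against these requirements. The snippet below is a minimal sketch using only standard attributes of torch and transformers; the expected values in the comments mirror the list above.

import torch
import transformers

print('transformers:', transformers.__version__)    # expected: 4.32.0
print('pytorch:', torch.__version__)                 # expected: 1.13 or above
print('CUDA available:', torch.cuda.is_available())  # requires a CUDA 11.4+ setup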
Quickstart
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(1234)
model_path = 'ChatTruth-7B'  # path to your downloaded ChatTruth-7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
# use cuda device
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda", trust_remote_code=True).eval()
query = tokenizer.from_list_format([
    {'image': 'demo.jpeg'},       # local image path
    {'text': '图片中的文字是什么'},  # "What is the text in the image?"
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
# Expected output: 昆明太厉害了 ("Kunming is amazing")
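
The chat call returns both the answer and the conversation history, which can be passed back in for a follow-up turn. The lines below are a minimal multi-turn sketch, assuming the same Qwen-VL-style chat interface used above; the follow-up question is purely illustrative.

# Follow-up turn: reuse the returned history for multi-turn dialogue.
response, history = model.chat(tokenizer, query='这段文字出现在图片的什么位置?',  # "Where in the image does the text appear?"
                               history=history)
print(response)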