---
language:
- ja
license: apache-2.0
tags:
- text-generation-inference
- transformers
- trl
- mistral
datasets:
- sakusakumura/databricks-dolly-15k-ja-scored
- nu-dialogue/jmultiwoz
- kunishou/amenokaku-code-instruct
- HachiML/alpaca_jp_python
base_model: mistralai/Mistral-7B-Instruct-v0.3
---

# Uploaded model

- **Developed by:** taoki
- **License:** apache-2.0
- **Finetuned from model:** mistralai/Mistral-7B-Instruct-v0.3

# Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python"
)
model = AutoModelForCausalLM.from_pretrained(
    "taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python"
)

# Move the model to GPU when one is available
if torch.cuda.is_available():
    model = model.to("cuda")

prompt = """[INST] OpenCVを用いて定点カメラから画像を保存するコードを示してください。 [/INST]"""

# Tokenize the prompt and generate with sampling
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    top_p=0.9,
    temperature=0.2,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0]))
```
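
The `[INST]` / `[/INST]` markers can also be produced by the tokenizer's chat template instead of being written by hand. The snippet below is a minimal sketch that assumes the finetuned tokenizer still carries the default Mistral-7B-Instruct-v0.3 chat template and reuses the `tokenizer` and `model` loaded above; if the template was changed during finetuning, the hand-written prompt shown above remains the reference.

```python
# Sketch: build the [INST] ... [/INST] prompt via the chat template
# (assumes the default Mistral-7B-Instruct-v0.3 template is still attached).
messages = [
    {"role": "user", "content": "OpenCVを用いて定点カメラから画像を保存するコードを示してください。"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    top_p=0.9,
    temperature=0.2,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The generation parameters mirror the example above; adjust `temperature` and `top_p` to trade determinism against diversity.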

# Output

````
<s>[INST] OpenCVを用いて定点カメラから画像を保存するコードを示してください。 [/INST]```python
import cv2

# カメラの設定
cap = cv2.VideoCapture(0)

# フレーム数
frame_count = 10

# 画像の保存
for i in range(frame_count):
    # フレームの取得
    ret, frame = cap.read()

    # 画像の保存
    cv2.imwrite('image_{}.jpg'.format(i), frame)

# カメラの終了
cap.release()
```</s>
````