# Uploaded model
- Developed by: taoki
- License: apache-2.0
- Finetuned from model: mistralai/Mistral-7B-Instruct-v0.3
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(
    "taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python"
)
model = AutoModelForCausalLM.from_pretrained(
    "taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python"
)

# Move the model to GPU if one is available
if torch.cuda.is_available():
    model = model.to("cuda")

# Prompt in Mistral's instruction format.
# Japanese: "Show me code that saves images from a fixed camera using OpenCV."
prompt = """[INST] OpenCVを用いて定点カメラから画像を保存するコードを示してください。 [/INST]"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    top_p=0.9,
    temperature=0.2,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0]))
```
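If GPU memory is tight, the weights can also be loaded in half precision. This is a minimal sketch, assuming a GPU with bfloat16 support; the prompt and generation call above stay the same:

```python
# Optional: load in bfloat16 to roughly halve GPU memory use
# (assumes a GPU with bfloat16 support).
model = AutoModelForCausalLM.from_pretrained(
    "taoki/Mistral-7B-Instruct-v0.3_lora_jmultiwoz-dolly-amenokaku-alpaca_jp_python",
    torch_dtype=torch.bfloat16,
).to("cuda")
```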
## Output
<s>[INST] OpenCVを用いて定点カメラから画像を保存するコードを示してください。 [/INST]```python
import cv2
# カメラの設定
cap = cv2.VideoCapture(0)
# フレーム数
frame_count = 10
# 画像の保存
for i in range(frame_count):
    # フレームの取得
    ret, frame = cap.read()
    # 画像の保存
    cv2.imwrite('image_{}.jpg'.format(i), frame)
# カメラの終了
cap.release()
```</s>
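The `[INST] ... [/INST]` wrapper can also be built with the tokenizer's chat template instead of by hand. A minimal sketch, assuming the uploaded tokenizer inherits the default Mistral-Instruct template:

```python
# Build the same "[INST] ... [/INST]" prompt via the chat template
# (assumes the uploaded tokenizer keeps Mistral-Instruct's default template).
messages = [
    {"role": "user", "content": "OpenCVを用いて定点カメラから画像を保存するコードを示してください。"}
]
input_ids = tokenizer.apply_chat_template(
    messages, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    top_p=0.9,
    temperature=0.2,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```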
## Model tree
- Base model: mistralai/Mistral-7B-v0.3
- Finetuned from: mistralai/Mistral-7B-Instruct-v0.3