Commit a1b692e (verified) · MilesQLi · 1 parent: 8aec3a8

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -38,7 +38,7 @@ The model follows the Vicuna 1.1 chat format:
 A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
 
 USER: {user input}
-ASSISTANT: {machine response}<|endoftext|>
+ASSISTANT:{machine response}<|endoftext|>
 ```
 
 ## Usage
@@ -53,7 +53,7 @@ tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(model_name)
 
 # Define the prompt in Vicuna 1.1 format
-prompt = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\nUSER: What are the Pyramids of Giza known for?\nASSISTANT: "
+prompt = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\nUSER: What are the Pyramids of Giza known for?\nASSISTANT:"
 inputs = tokenizer(prompt, return_tensors="pt")
 outputs = model.generate(**inputs, max_length=100)
 
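Since the hunks above show only partial context, here is a minimal self-contained sketch of the updated usage example, assuming the standard `transformers` API. The imports, the `model_name` placeholder, and the final decoding step are assumptions filled in for illustration; they fall outside the visible hunks, so substitute the actual repository id from the model card.

```python
# Minimal sketch of the README's usage snippet after this commit,
# assuming the standard transformers API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path/to/model"  # hypothetical placeholder; not shown in the diff

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Define the prompt in Vicuna 1.1 format. Note the prompt ends with
# "ASSISTANT:" and no trailing space, matching the change in this commit.
prompt = (
    "A chat between a curious user and an artificial intelligence "
    "assistant. The assistant gives helpful, detailed, and polite answers "
    "to the user's questions.\n\n"
    "USER: What are the Pyramids of Giza known for?\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)

# Decode only the tokens generated after the prompt (assumed step,
# not part of the diff).
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```

The only behavioral change in this commit is the removed trailing space after `ASSISTANT:` in both the format description and the example prompt, so generation begins immediately after the colon.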