Update hf_benchmark_example.py
hf_benchmark_example.py (+2 -2)

@@ -3,9 +3,9 @@ cmd example
 You need a file called "sample.txt" (default path) with text from which prompt tokens are taken, or supply --text_file "path/to/text.txt" to point to another text file.
 You can use our attached "sample.txt" file, containing one of Deci's blogs, as a prompt.
 # Run this and record tokens per second (652 tokens per second on A10 for DeciLM-6b)
-python
+python hf_benchmark_example.py --model Deci/DeciLM-6b-instruct
 # Run this and record tokens per second (136 tokens per second on A10 for meta-llama/Llama-2-7b-hf), CUDA OOM above batch size 8
-python
+python hf_benchmark_example.py --model meta-llama/Llama-2-7b-hf --batch_size 8
 """

 import json
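For context, the flags the docstring references (`--model`, `--batch_size`, `--text_file`) could be wired up with `argparse` roughly as below. This is a hypothetical sketch of the CLI described in the diff, not the actual parser in hf_benchmark_example.py, which may differ:

```python
import argparse


def build_parser():
    # Hypothetical reconstruction of the CLI described in the docstring;
    # defaults follow the docstring ("sample.txt" is the default prompt file).
    parser = argparse.ArgumentParser(
        description="Benchmark a Hugging Face causal LM (tokens per second)."
    )
    parser.add_argument(
        "--model", required=True,
        help="HF model id, e.g. Deci/DeciLM-6b-instruct",
    )
    parser.add_argument(
        "--batch_size", type=int, default=1,
        help="Prompts per forward pass (Llama-2-7b OOMs above 8 on an A10)",
    )
    parser.add_argument(
        "--text_file", default="sample.txt",
        help="Path to a text file whose tokens are used as prompts",
    )
    return parser


if __name__ == "__main__":
    # Parse the second example command from the diff.
    args = build_parser().parse_args(
        ["--model", "meta-llama/Llama-2-7b-hf", "--batch_size", "8"]
    )
    print(args.model, args.batch_size, args.text_file)
    # → meta-llama/Llama-2-7b-hf 8 sample.txt
```

With a parser like this, both commands in the diff parse as expected, and omitting `--batch_size` falls back to a batch of one prompt.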