Fix code formatting in README (#3)
by mahiatlinux · opened
README.md CHANGED

@@ -45,7 +45,7 @@ Right now, we're working on more new Build Tools to come very soon, built on Lla
 ## Prompting Guide
 Enigma uses the [Llama 3.1 Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) prompt format. The example script below can be used as a starting point for general chat:
 
-
+```python
 import transformers
 import torch
 
@@ -69,7 +69,7 @@ outputs = pipeline(
 )
 
 print(outputs[0]["generated_text"][-1])
-
+```
 
 ## The Model
 Enigma is built on top of Llama 3.1 8b Instruct, using code-instruct data to supplement code-instruct performance using Llama 3.1 Instruct prompt style.
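The README's example passes chat messages to a transformers pipeline, which serializes them into the Llama 3.1 Instruct prompt format mentioned above. As a minimal sketch of what that serialization looks like (the helper name is hypothetical; the special tokens follow Meta's published Llama 3.1 chat template, which the tokenizer normally applies for you):

```python
def format_llama31_chat(messages):
    """Serialize chat messages into the Llama 3.1 Instruct prompt format.

    `messages` is a list of {"role": ..., "content": ...} dicts, the same
    shape the transformers pipeline accepts. This helper is illustrative,
    not part of the Enigma repo; in practice the model's tokenizer applies
    this template automatically.
    """
    prompt = "<|begin_of_text|>"
    for m in messages:
        # Each turn: role header, blank line, content, end-of-turn token.
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open an assistant header to cue the model to generate its reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

prompt = format_llama31_chat([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a hello-world in Python."},
])
print(prompt)
```

Seeing the raw template makes it clear why the fenced ```python block in the README matters: the example script only works as written if the surrounding prose is not mixed into the code.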