---
license: other
pipeline_tag: text-generation
---
Pofi is a fine-tuned version of ["decapoda-research/llama-7b-hf"](https://huggingface.co/decapoda-research/llama-7b-hf), designed to act as an assistant capable of performing various tasks, such as:

- Setting alarms
- Connecting to the web
- Opening applications
- Creating files
- Manipulating the system
The training data was gathered manually through various ChatGPT prompts, along with examples I created myself: over 7,000 "User"-to-"AI" command pairs in total. Fine-tuning was carried out in Google Colab using the ["🦙🎛️ LLaMA-LoRA Tuner"](https://github.com/zetavg/LLaMA-LoRA-Tuner) notebook and took approximately 5 hours over 10 epochs.
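The exact prompt template is not published here; as a rough sketch, assuming the Alpaca-style instruction/response format commonly used by LoRA fine-tuning notebooks, one "User"-to-"AI" command pair might be rendered like this (the helper name and the example text are hypothetical):

```python
# Hypothetical sketch: the exact training schema for Pofi is not published.
# This assumes the Alpaca-style template commonly used by LoRA tuning notebooks.
def format_example(instruction: str, response: str) -> str:
    """Render one "User" -> "AI" command pair as a single prompt string."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
        f"{response}"
    )

pair = format_example("Set an alarm for 7:00 AM", "Okay, alarm set for 7:00 AM.")
print(pair)
```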
Once the LoRA adapter was obtained, the `export_hf_checkpoint.py` script from the ["tloen/alpaca-lora"](https://github.com/tloen/alpaca-lora) repository was used to merge it with the base model. The merged model was then converted with the `convert.py` script from the ["ggerganov/llama.cpp"](https://github.com/ggerganov/llama.cpp) repository and quantized to `ggml-q4`, enabling it to run on a computer without a graphics card (like mine).
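The merge-and-quantize pipeline above can be sketched as a command sequence. This is a minimal sketch, not the exact commands used: the paths are placeholders, and the scripts' interfaces in both repositories may have changed since.

```shell
# Hypothetical sketch of the merge + quantization pipeline; paths are
# placeholders and the scripts' interfaces may differ in current versions.

# 1. Merge the LoRA adapter into the base model (tloen/alpaca-lora).
BASE_MODEL="decapoda-research/llama-7b-hf" python export_hf_checkpoint.py

# 2. Convert the merged HF checkpoint to ggml format (ggerganov/llama.cpp).
python convert.py ./merged-model

# 3. Quantize the converted model to 4-bit (q4).
./quantize ./merged-model/ggml-model-f16.bin ./ggml-model-q4_0.bin q4_0
```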
To use this model, you can employ ["oobabooga/text-generation-webui"](https://github.com/oobabooga/text-generation-webui), a user-friendly interface, or the interface I am developing for this project, ["OscarMes/Pofi-Assistant"](https://github.com/OscarMes/Pofi-Assistant).
This project was created for the purpose of studying and learning about language models. All rights are reserved according to the license included with "decapoda-research/llama-7b-hf"; please refer to the [LICENSE](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/LICENSE) file in that repository.