Bluckr committed on
Commit ee30efd · 1 Parent(s): 153e259

Update README.md



Files changed (1)
1. README.md +12 -9
README.md CHANGED
@@ -4,18 +4,21 @@ language:
 - es
 pipeline_tag: text-generation
 ---
-
+<p align="center">
+![shimeji.gif](https://cdn-uploads.huggingface.co/production/uploads/64beeb8f4b4ff0d5097ddcfc/HF124f84-X7L_rPynRa4n.gif)
+</p>
+<br>
 
 Pofi is a fine-tuned version of ["decapoda-research/llama-7b-hf"](https://huggingface.co/decapoda-research/llama-7b-hf), designed to act as an assistant capable of performing various tasks, such as:
 
-Setting alarms
-Connecting to the web
-Sending files
-Sending messages
-Saving strings of characters
-Opening applications
-Creating files
-Manipulating the system
+|Setting alarms|
+|Connecting to the web|
+|Sending files|
+|Sending messages|
+|Saving strings of characters|
+|Opening applications|
+|Creating files|
+|Manipulating the system|
 
 The training data was obtained manually through different prompts in ChatGPT, including examples created by me. The model was fine-tuned on over 7,000 examples of "User" to "AI" commands. The training process was carried out in Google Colab, specifically in the ["🦙🎛️ LLaMA-LoRA Tuner"](https://github.com/zetavg/LLaMA-LoRA-Tuner) notebook, and lasted approximately 5 hours over 10 epochs.
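
For readers of the updated card, here is a minimal inference sketch of what "a fine-tuned version of decapoda-research/llama-7b-hf" means in practice: the base checkpoint with a PEFT LoRA adapter attached. The adapter repo id `Bluckr/Pofi` and the `User:`/`AI:` prompt format are illustrative assumptions; this commit does not record either.

```python
# Minimal inference sketch: base LLaMA checkpoint + LoRA adapter via peft.
# "Bluckr/Pofi" and the prompt layout are assumptions, not from this commit.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_id = "decapoda-research/llama-7b-hf"  # base checkpoint named in the README
adapter_id = "Bluckr/Pofi"                 # hypothetical adapter repo id

tokenizer = LlamaTokenizer.from_pretrained(base_id)
model = LlamaForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach LoRA weights
model.eval()

prompt = "User: set an alarm for 7 am\nAI:"  # assumed "User"/"AI" turn format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```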
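
Likewise, since the card names the LLaMA-LoRA Tuner notebook but not its hyperparameters, the following is only a sketch of the kind of LoRA configuration such a run applies; the rank, alpha, dropout, and target modules below are common defaults for LLaMA fine-tunes, not the values actually used for Pofi.

```python
# Sketch of a typical LoRA setup for a LLaMA-7B fine-tune; hyperparameter
# values are common defaults, not the settings recorded for this model.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import LlamaForCausalLM

base = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # low-rank update dimension
    lora_alpha=16,                        # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices train
```

Training only the adapter matrices is what makes a run like this feasible in a free Colab session, which is consistent with the roughly 5-hour, 10-epoch run the card describes.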