m00bs committed
Commit b508849 · verified · 1 Parent(s): f12a61e

Update README.md

Files changed (1):
  1. README.md +36 -5
README.md CHANGED
@@ -21,18 +21,49 @@ This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct-bnb-4bit](htt

## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

+ The notebook is structured to guide the user through the fine-tuning process with the following components:
+
+ 1. **Setup and Configuration**:
+    - Imports necessary libraries and sets up the environment.
+    - Configures GPU settings and initializes the Jupyter Widgets.
+
+ 2. **Data Preparation**:
+    - Loads and preprocesses the dataset.
+    - Splits the data into training and validation sets.
+
+ 3. **Model Initialization**:
+    - Loads the pre-trained model.
+    - Configures the model for fine-tuning.
+
+ 4. **Training Loop**:
+    - Implements the training loop with real-time progress updates.
+    - Displays training metrics and updates the progress bar widget (see the sketch after this list).
+
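+ As an illustration of the pattern above, here is a minimal sketch using a `transformers` `TrainerCallback` and an `ipywidgets` progress bar; the callback name and structure are hypothetical, not taken from the notebook itself:
+
+ ```python
+ import ipywidgets as widgets
+ from IPython.display import display
+ from transformers import TrainerCallback
+
+ class ProgressWidgetCallback(TrainerCallback):  # hypothetical helper
+     def on_train_begin(self, args, state, control, **kwargs):
+         # Create and show the progress bar when training starts.
+         self.bar = widgets.IntProgress(min=0, max=state.max_steps, description="Steps")
+         display(self.bar)
+
+     def on_step_end(self, args, state, control, **kwargs):
+         self.bar.value = state.global_step  # real-time progress update
+
+     def on_log(self, args, state, control, logs=None, **kwargs):
+         if logs:
+             print(logs)  # surface training metrics as they are logged
+ ```
+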
+ ## How to use
+
+ 1. **Install and Import Required Libraries**
+
+ ```python
+ # Assumes the dependencies are already installed,
+ # e.g. via `pip install unsloth transformers peft`.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from unsloth import FastLanguageModel
+ from unsloth.chat_templates import get_chat_template
+ from peft import PeftModel, PeftConfig
+ ```
+
+ 2. **Load the Model and Tokenizer**
+
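+ A minimal sketch, assuming the fine-tuned weights were saved with Unsloth and are loadable from the Hub; the repo id below is a placeholder, not the actual model id:
+
+ ```python
+ # (Uses the imports from step 1.)
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name="your-username/your-finetuned-model",  # placeholder id
+     max_seq_length=2048,   # assumed; match the value used in training
+     dtype=None,            # auto-detect float16/bfloat16
+     load_in_4bit=True,     # matches the 4-bit base model
+ )
+ FastLanguageModel.for_inference(model)  # switch to fast inference mode
+ tokenizer = get_chat_template(tokenizer, chat_template="llama-3")
+ ```
+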
+ 3. **Prepare Inputs**
+
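+ For example (the prompt content is illustrative):
+
+ ```python
+ # Build a chat-formatted prompt and tokenize it for generation.
+ messages = [
+     {"role": "user", "content": "Explain fine-tuning in one sentence."},
+ ]
+ inputs = tokenizer.apply_chat_template(
+     messages,
+     tokenize=True,
+     add_generation_prompt=True,  # append the assistant turn header
+     return_tensors="pt",
+ ).to("cuda")
+ ```
+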
+ 4. **Run Inference**
+
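+ Continuing the sketch above (the generation settings are assumptions, not tuned values):
+
+ ```python
+ # Generate a completion and decode only the newly generated tokens.
+ outputs = model.generate(input_ids=inputs, max_new_tokens=128, use_cache=True)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+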
 
## Training procedure

+
+
### Training hyperparameters

The following hyperparameters were used during training: