Update README.md
README.md (changed):
# Gemma-2B Fine-Tuned Python Model

## Overview

Gemma-2B Fine-Tuned Python Model is a deep learning model based on the Gemma-2B architecture, fine-tuned specifically for Python programming tasks. This model is designed to understand Python code and assist developers by providing suggestions, completing code snippets, or offering corrections to improve code quality and efficiency.

## Model Details

- **Model Name**: Gemma-2B Fine-Tuned Python Model
- **Model Type**: Deep Learning Model
- **Base Model**: Gemma-2B
- **Language**: Python
- **Task**: Python Code Understanding and Assistance

## How to Use

1. **Install the Gemma-compatible `transformers` package**:
```bash
pip install -q -U transformers==4.38.0
```
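
To confirm that the pinned version was installed (a quick sanity check, not part of the original instructions):
```bash
python -c "import transformers; print(transformers.__version__)"  # expect 4.38.0
```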

## Inference

1. **Use the model in a notebook**:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

# ... (model/tokenizer loading and prompt encoding are unchanged and elided in this diff) ...
model_inputs = encodeds.to('cuda')

# Increase max_new_tokens if needed
generated_ids = merged_model.generate(**model_inputs, max_new_tokens=1000, do_sample=False, pad_token_id=tokenizer.eos_token_id)

# Keep only the prompt and the first model turn; Gemma ends each turn with <end_of_turn>
ans = ''
for i in tokenizer.decode(generated_ids[0], skip_special_tokens=True).split('<end_of_turn>')[:2]:
    ans += i

# Extract only the model's answer (the text after the "model" turn header)
model_answer = ans.split("model")[1].strip()
print(model_answer)  # in a notebook cell, print rather than return
```
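
The snippet above elides the model-loading and prompt-encoding steps between the import and `model_inputs`. A minimal end-to-end sketch of those steps follows; the checkpoint id `your-username/gemma-2b-python` is a placeholder (the real repo id is not shown in this diff), and the rest is standard `transformers` usage rather than code confirmed by this README:
```python
# Hypothetical sketch: "your-username/gemma-2b-python" is a placeholder
# checkpoint id, not this model's actual repository name.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "your-username/gemma-2b-python"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
merged_model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16
).to('cuda')

# Wrap the request in Gemma's chat template, then tokenize it
messages = [{"role": "user", "content": "Write a Python function that checks if a number is prime."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
encodeds = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
model_inputs = encodeds.to('cuda')

generated_ids = merged_model.generate(**model_inputs, max_new_tokens=1000,
                                      do_sample=False, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
`add_special_tokens=False` avoids inserting a second `<bos>` token, since Gemma's chat template already includes one.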