---
base_model: google/gemma-7b-it
tags:
- adapter
- lora
- gemma
- peft
- causal-lm
---

# CoT Adapter for Gemma-7B-IT

This is a LoRA adapter trained on the **cot** dataset and compatible with the `google/gemma-7b-it` base model.

## Usage

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the LoRA adapter weights on top of it
base = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
model = PeftModel.from_pretrained(base, "RealSilvia/cot-adapter")
```
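
Once the adapter is loaded, generation goes through the usual `transformers` API. Below is a minimal sketch; the tokenizer setup, chat template, prompt, and sampling settings are illustrative assumptions rather than part of this card.

```python
from transformers import AutoTokenizer

# Gemma instruction-tuned models expect the chat template for prompting
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")

# Example prompt (illustrative only)
messages = [{"role": "user", "content": "Explain why the sky is blue, step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate with the adapted model; decoding settings here are illustrative
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For inference-only use, the adapter can also be folded into the base weights with PEFT's standard `model.merge_and_unload()`, which removes the adapter wrapper overhead.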