basharatwali committed
Commit 422577a · verified · 1 Parent(s): d2bd6fa

End of training

README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 base_model: meta-llama/CodeLlama-7b-Instruct-hf
-library_name: peft
-model_name: CodeLlamaInstruct_finetuned_2
+library_name: transformers
+model_name: CodeLlama-Instruct-Python-7b
 tags:
 - generated_from_trainer
 - trl
@@ -9,7 +9,7 @@ tags:
 licence: license
 ---
 
-# Model Card for CodeLlamaInstruct_finetuned_2
+# Model Card for CodeLlama-Instruct-Python-7b
 
 This model is a fine-tuned version of [meta-llama/CodeLlama-7b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Instruct-hf).
 It has been trained using [TRL](https://github.com/huggingface/trl).
@@ -20,7 +20,7 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 from transformers import pipeline
 
 question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="basharatwali/CodeLlamaInstruct_finetuned_2", device="cuda")
+generator = pipeline("text-generation", model="basharatwali/CodeLlama-Instruct-Python-7b", device="cuda")
 output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
 print(output["generated_text"])
 ```
@@ -34,7 +34,6 @@ This model was trained with SFT.
 
 ### Framework versions
 
-- PEFT 0.14.1.dev0
 - TRL: 0.13.0
 - Transformers: 4.48.0.dev0
 - Pytorch: 2.5.1+cu121
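Note: the card's quick-start leans on the pipeline's built-in chat handling. For reference, here is a minimal sketch of the same call done explicitly with `generate`, assuming the repo loads as a regular transformers checkpoint; only the repo id comes from this diff, the rest is standard transformers usage and not part of the commit. If the repo only ships the LoRA adapter (see the next file), loading may instead require PEFT, sketched further below.

```python
# Minimal sketch of the card's pipeline call done explicitly.
# Assumes the repo loads as a full transformers checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "basharatwali/CodeLlama-Instruct-Python-7b"  # from the diff
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, mirroring return_full_text=False.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```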
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:02c502d292250d935954caa325a41f9e4169004674d67e6901eb3e4b983cf072
+oid sha256:2744cccc9ff911989664ef5b6902ee0294700bd71a3d2b6b015b852f3fb1a645
 size 67143296
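Note: although the card's metadata now says `library_name: transformers`, this commit still rewrites `adapter_model.safetensors`, and its unchanged 67,143,296-byte size is consistent with a LoRA-style adapter rather than full 7B weights. A minimal sketch of loading such an adapter with PEFT, assuming the repo follows the standard PEFT adapter layout (the repo and base-model ids come from the diff; everything else is an assumption):

```python
# Minimal sketch, assuming adapter_model.safetensors is a standard PEFT/LoRA
# adapter for the base model named in the card.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/CodeLlama-7b-Instruct-hf"          # from the card
adapter_id = "basharatwali/CodeLlama-Instruct-Python-7b"  # from the diff

base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # applies the adapter weights
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
```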
runs/Jan06_06-55-43_2db6e875cc75/events.out.tfevents.1736146549.2db6e875cc75.518.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:78ed7f9f3757d251a2605aae6574dedbc1da380414f7ffbc6a231d6431293f70
-size 11342
+oid sha256:b5ac90bfde2007adb5d2b7b277314c151fa8249624281a4e72a0f51c6b159939
+size 11696
tokenizer.json CHANGED
@@ -2,7 +2,7 @@
   "version": "1.0",
   "truncation": {
     "direction": "Right",
-    "max_length": 1024,
+    "max_length": 320,
     "strategy": "LongestFirst",
     "stride": 0
   },
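Note: the only tokenizer.json change is the stored truncation limit, cut from 1024 to 320 tokens. Consumers that load the file directly with the `tokenizers` library inherit this setting, so long inputs will now be cut at 320 tokens. A short sketch for verifying the stored setting (the repo id comes from the diff; the rest is standard `tokenizers`/`huggingface_hub` usage):

```python
# Sketch: inspect the truncation block this commit rewrites in tokenizer.json.
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

path = hf_hub_download("basharatwali/CodeLlama-Instruct-Python-7b", "tokenizer.json")
tok = Tokenizer.from_file(path)
print(tok.truncation)  # expect max_length=320 after this commit

enc = tok.encode("word " * 1000)
print(len(enc.ids))    # capped at 320 by the stored truncation settings
```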