---
license: apache-2.0
base_model: []
library_name: transformers
tags:
- mergekit
- merge
pipeline_tag: text-generation
---
# Credit for the model card's description goes to ddh0, mergekit, and migtissera
# Inspired by ddh0/Starling-LM-10.7B-beta and ddh0/Mistral-10.7B-Instruct-v0.2
# Tess-10.7B-v0.2

# Deprecated
"This model is deprecated due to the use of wrong sliding window parameter while training. Will update with the new model link in a couple of days." - migtissera

This is Tess-10.7B-v0.2, a depth-upscaled version of [migtissera/Tess-7B-v2.0](https://huggingface.co/migtissera/Tess-7B-v2.0).

This model is intended to be used as a basis for further fine-tuning, or as a drop-in upgrade from the original 7-billion-parameter model.

Paper detailing how Depth Up-Scaling works: [SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling](https://arxiv.org/abs/2312.15166)
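
For intuition, the upscale duplicates a block of middle layers: the 32-layer Tess-7B-v2.0 is sliced into layers 0–23 and 8–31, which are stacked into a 48-layer model (layers 8–23 appear twice). A quick sketch of that arithmetic, matching the mergekit config shown below:

```python
# Illustration of the depth up-scaling layer arithmetic used in this merge.
# Tess-7B-v2.0 (Mistral-based) has 32 transformer layers.
first_slice = list(range(0, 24))   # layers 0..23
second_slice = list(range(8, 32))  # layers 8..31 (layers 8..23 are repeated)

upscaled = first_slice + second_slice
print(len(upscaled))  # 48 layers, up from 32 -> roughly 10.7B parameters, as in SOLAR
```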

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).


# Prompt Format (same as [migtissera/Tess-7B-v2.0](https://huggingface.co/migtissera/Tess-7B-v2.0)):

```
SYSTEM: <ANY SYSTEM CONTEXT>
USER: 
ASSISTANT:
```
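
For example, a single-turn prompt looks like:

```
SYSTEM: You are a helpful assistant.
USER: What is depth up-scaling?
ASSISTANT:
```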


## Merge Details
### Merge Method

This model was merged using the passthrough merge method, which stacks the specified layer slices verbatim (no weight interpolation) to build a deeper model.

### Models Merged

The following models were included in the merge:
* /Users/jsarnecki/opt/migtissera/Tess-7B-v2.0

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: /Users/jsarnecki/opt/migtissera/Tess-7B-v2.0
- sources:
  - layer_range: [8, 32]
    model: /Users/jsarnecki/opt/migtissera/Tess-7B-v2.0 

```
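
To reproduce the merge, save this config as `config.yml` and run mergekit on it. Below is a minimal sketch using mergekit's Python entry points (`MergeConfiguration`, `run_merge`), as documented in the mergekit README; treat it as illustrative rather than the exact invocation used here (the `mergekit-yaml config.yml ./output-model-directory` CLI is equivalent):

```python
# Minimal sketch: reproducing the merge via mergekit's Python API.
# Assumes mergekit is installed (pip install mergekit) and that the
# MergeConfiguration / run_merge entry points match the mergekit README.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./output-model-directory",          # where the merged model is written
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if one is available
        copy_tokenizer=True,             # also copy the source tokenizer
    ),
)
```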
# GGUFs (Thanks to [bartowski](https://huggingface.co/bartowski))

https://huggingface.co/bartowski/Tess-10.7B-v2.0-GGUF
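
For local inference with one of the GGUF quants, here is a minimal sketch using the `llama-cpp-python` bindings (the filename below is a hypothetical placeholder; substitute whichever quant you download from the repo above):

```python
# Hedged sketch: local inference on a GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./Tess-10.7B-v2.0-Q4_K_M.gguf",  # hypothetical filename; use your download
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if built with GPU support
)

prompt = "SYSTEM: You are a helpful assistant.\nUSER: What is depth up-scaling?\nASSISTANT:"
result = llm(prompt, max_tokens=256, stop=["USER:"], temperature=0.5)
print(result["choices"][0]["text"].strip())
```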

# exl2s (Thanks to [bartowski](https://huggingface.co/bartowski))

https://huggingface.co/bartowski/Tess-10.7B-v2.0-exl2

![Tesoro](https://huggingface.co/migtissera/Tess-7B-v2.0/resolve/main/Tesoro.png)

---

*What follows is the original model card for [migtissera/Tess-7B-v2.0](https://huggingface.co/migtissera/Tess-7B-v2.0).*

# Tess-7B-v2.0
Tess, short for Tesoro ("treasure" in Italian), is a general-purpose Large Language Model series. Tess-7B-v2.0 was trained on the Mistral-7B-v0.2 base.

# Prompt Format:

```
SYSTEM: <ANY SYSTEM CONTEXT>
USER: 
ASSISTANT:
```

### Below is a code example showing how to use this model:

```python
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Tess-7B-v2.0"
output_file_path = "./conversations.jsonl"  # each turn is appended here as JSON

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    # Tokenize the prompt and move it to the GPU
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    # Sampling settings; "generate_len" is the number of new tokens to generate
    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.5,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    # Trim the generation at the next "USER:" turn, in case the model kept going
    answer = string.split("USER:")[0].strip()
    return answer


conversation = f"SYSTEM: Answer the question thoughtfully and intelligently. Always answer without hesitation."


while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")

```

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. 

Despite diligent efforts in refining the pretraining data, there remains a possibility that the model generates inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary. This is an uncensored model.


<br>