---
license: mit
language:
- en
base_model:
- google-bert/bert-base-multilingual-uncased
---

# BERT-Based Classification Model for Optimal Temperature Selection

This model uses a BERT-based classifier to analyze input prompts and predict the most suitable generation temperature, improving the quality and relevance of generated text. It accompanies our paper on temperature selection.

## Overview

The model classifies input text into six distinct abilities, providing a probability distribution for each:
- **Causal Reasoning**
- **Creativity**
- **In-Context Learning**
- **Instruction Following**
- **Machine Translation**
- **Summarization**

## Features

- **Pre-trained Model**: Uses the multilingual BERT model: `Volavion/bert-base-multilingual-uncased-Temperature-CLS`.
- **Tokenization**: Processes text inputs into numerical formats compatible with the model.
- **Classification Output**: Returns a probability for each of the six ability classes, indicating which ability a prompt most likely exercises.

## Installation

1. Clone the model repository (optional; the model can also be loaded directly from the Hub):
   ```bash
   git clone https://huggingface.co/Volavion/bert-base-multilingual-uncased-Temperature-CLS
   cd bert-base-multilingual-uncased-Temperature-CLS
   ```

2. Install the required Python libraries:
   ```bash
   pip install transformers torch numpy
   ```

## Usage

1. Load the tokenizer and model:
   ```python
   from transformers import AutoTokenizer, AutoModelForSequenceClassification
   
   model_name = "Volavion/bert-base-multilingual-uncased-Temperature-CLS"
   tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=True)
   model = AutoModelForSequenceClassification.from_pretrained(model_name)
   ```

2. Tokenize your input text:
   ```python
   input_text = "Your input prompt here."
   encoded_dict = tokenizer.encode_plus(
       input_text,
       add_special_tokens=True,
       max_length=512,
       padding="max_length",   # pad_to_max_length is deprecated
       truncation=True,        # truncate prompts longer than max_length
       return_attention_mask=True,
       return_tensors="pt"
   )
   ```

3. Perform inference:
   ```python
   import torch
   import numpy as np

   # Select a GPU if one is available; otherwise fall back to CPU.
   device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
   model.to(device)

   input_ids = encoded_dict["input_ids"].to(device)
   attention_mask = encoded_dict["attention_mask"].to(device)

   model.eval()
   with torch.no_grad():
       outputs = model(input_ids, attention_mask=attention_mask)

   # Numerically stable softmax over the logits.
   logits = outputs.logits.cpu().numpy()
   probabilities = np.exp(logits - np.max(logits, axis=1, keepdims=True))
   probabilities /= np.sum(probabilities, axis=1, keepdims=True)
   ```

4. Map probabilities to abilities:
   ```python
   ability_mapping = {0: "Causal Reasoning", 1: "Creativity", 2: "In-Context Learning",
                      3: "Instruction Following", 4: "Machine Translation", 5: "Summarization"}
   for idx, prob in enumerate(probabilities[0]):
       print(f"{ability_mapping[idx]}: {prob*100:.2f}%")
   ```
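If you only need the single dominant ability rather than the full distribution, an `argmax` over the probabilities suffices. A minimal sketch, using the probability values from the example output below as stand-in data:

```python
import numpy as np

ability_mapping = {0: "Causal Reasoning", 1: "Creativity", 2: "In-Context Learning",
                   3: "Instruction Following", 4: "Machine Translation", 5: "Summarization"}

# Stand-in probability row (shape [1, 6]) matching the example output.
probabilities = np.array([[0.1530, 0.2045, 0.1822, 0.1278, 0.2109, 0.1216]])

top_idx = int(np.argmax(probabilities[0]))
print(f"Dominant ability: {ability_mapping[top_idx]} ({probabilities[0][top_idx] * 100:.2f}%)")
# → Dominant ability: Machine Translation (21.09%)
```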

## Example Output

```plaintext
Ability Classification Probabilities:
Causal Reasoning: 15.30%
Creativity: 20.45%
In-Context Learning: 18.22%
Instruction Following: 12.78%
Machine Translation: 21.09%
Summarization: 12.16%
```

## Device Compatibility

The model supports GPU acceleration for faster inference. It will automatically detect and utilize a GPU if available; otherwise, it defaults to CPU.
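The detection described above is the standard PyTorch pattern; a minimal sketch (no model download required):

```python
import torch

# Use a CUDA GPU if one is available, otherwise run on CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running inference on: {device}")

# Move the model and all input tensors to the chosen device before inference, e.g.:
# model.to(device); input_ids = input_ids.to(device)
```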

## Contributing

Contributions are welcome! Feel free to fork the repository, create a branch, and submit a pull request.

## License

This project is licensed under the [MIT License](LICENSE).