---
datasets:
- prasannadhungana8848/TOS_sentence_embedded_all_minilm_l6_v2
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
---
# TOS_LLAMA
## Model details
- Model type: fine-tuned Llama-3.2-3B-Instruct
- Training data: fine-tuned on the `prasannadhungana8848/TOS_sentence_embedded_all_minilm_l6_v2` dataset.
- Intended use: classifying Terms of Service clauses into multiple privacy labels.
## Usage
Here's a quick example of how to use the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("prasannadhungana8848/TOS_LLAMA_3.2_3B_INSTRUCT")
tokenizer = AutoTokenizer.from_pretrained("prasannadhungana8848/TOS_LLAMA_3.2_3B_INSTRUCT")
```
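
Once loaded, the model can be prompted to label a clause. The prompt wording and example clause below are only an illustrative sketch, not the exact template used during fine-tuning; adjust them to match your training format:

```python
# Hypothetical prompt format -- adapt to the template used during fine-tuning.
clause = "We may share your personal information with third-party advertisers."
messages = [
    {"role": "user",
     "content": f"Classify the following Terms of Service clause into privacy labels:\n{clause}"},
]

# Build the chat-formatted input and generate a label prediction
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```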