---
base_model: unsloth/Qwen2.5-7B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Model Card for ORANSight Qwen-7B

This model belongs to the first release of the ORANSight family of models.

- **Developed by:** NextG Lab @ NC State
- **License:** apache-2.0
- **Long-context support:** handles up to 128K input tokens and can generate up to 8K tokens (see the generation sketch below)
- **Fine-tuning framework:** Unsloth
  
### Generate with Transformers  
Below is a quick example of how to use the model with Hugging Face Transformers:  

```python
from transformers import pipeline

# Load the model (device_map="auto" places it on a GPU when one is available)
chatbot = pipeline(
    "text-generation",
    model="NextGLab/ORANSight_Qwen_7B_Instruct",
    torch_dtype="auto",
    device_map="auto",
)

# Example query in chat format
messages = [
    {"role": "system", "content": "You are an O-RAN expert assistant."},
    {"role": "user", "content": "Explain the E2 interface."},
]

# Generate a response; max_new_tokens caps the length of the reply
result = chatbot(messages, max_new_tokens=512)

# The pipeline returns the full conversation; the last message is the reply
print(result[0]["generated_text"][-1]["content"])
```
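
For finer control over generation, for example to use the full 8K-token generation budget noted above, the model can also be loaded directly and prompted via the tokenizer's chat template. The sketch below uses only standard Transformers APIs; the sampling settings (`do_sample`, `temperature`) are illustrative assumptions, not values recommended by the model authors.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NextGLab/ORANSight_Qwen_7B_Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an O-RAN expert assistant."},
    {"role": "user", "content": "Explain the E2 interface."},
]

# Build the prompt with the model's chat template and tokenize it
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate up to 8K new tokens, the model's stated generation limit;
# sampling settings here are illustrative, tune them for your use case
outputs = model.generate(
    inputs,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.7,
)

# Decode only the newly generated tokens, skipping the prompt
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)
```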

### Coming Soon  
A detailed paper documenting the experiments and results achieved with this model will be available soon. In the meantime, if you try this model, please cite the paper below to acknowledge the foundational work that enabled this fine-tuning.  

```bibtex
@article{gajjar2024oran,
  title={ORAN-Bench-13K: An Open Source Benchmark for Assessing LLMs in Open Radio Access Networks},
  author={Gajjar, Pranshav and Shah, Vijay K},
  journal={arXiv preprint arXiv:2407.06245},
  year={2024}
}
```  
---