---
language:
- en
license: apache-2.0
model-index:
- name: AllyArc/llama_allyar
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: chat_imitate
      type: AllyArc/chat_imitate
      split: test
    metrics:
    - type: bleu
      value: 0.5
      name: BLEU
    - type: confusion_matrix
      value: 0.5
      name: Confusion Matrix
    - type: glue
      value: 0.5
      name: GLUE
    - type: mse
      value: 0.5
      name: MSE
    - type: squad
      value: 0.5
      name: SQUAD
    - type: wiki_split
      value: 0.8
      name: Wiki Split
---
# Model Card for AllyArc

This model card describes AllyArc, an educational chatbot designed to support autistic students with personalized learning experiences. AllyArc uses a fine-tuned Large Language Model to interact with users and provide educational content.

## Model Details

### Model Description

AllyArc is an innovative chatbot tailored for the educational support of autistic students. It leverages a fine-tuned LLM to provide interactive learning experiences, emotional support, and a platform for students to engage in conversational learning.

- **Developed by:** Zainab, a computer science student and MLH Top 50 honoree.
- **Model type:** Conversational Large Language Model
- **Language(s) (NLP):** Primarily English, with potential multilingual support.
- **Finetuned from model:** Mistral 7B.

- [Dataset Generation Script](https://colab.research.google.com/drive/1zqmu6vQn1Rb0OJPAoRBBYcI7qw0H_LCH#scrollTo=xOj-BuKARJrH)
- [Llama Fine-tuning Script](https://colab.research.google.com/drive/1dz49DEzCKiE2103A3Kdb8BLIed_aCFi_?usp=sharing)
<!--
### Model Sources [optional]

- **Repository:** [GitHub or other repository link]
- **Paper [Coming Soon]:** [Link to any research paper published]
- **Demo [Coming  Soon]:** [Link to a live demo if available]
-->

## Uses

### Direct Use

AllyArc can be directly interacted with by students and educators through a conversational interface, providing instant responses to queries and aiding in learning.

### Downstream Use

The model can be integrated into educational platforms or applications as a support tool for autistic students, offering personalized assistance.

### Out-of-Scope Use

AllyArc is not designed for high-stakes decisions, medical advice, or any context outside of educational support.

## Bias, Risks, and Limitations

While designed to be inclusive, there is a risk of unintended bias in responses due to the training data. The model may not fully understand or appropriately respond to all nuances of human emotion and communication.

### Recommendations

Educators should monitor interactions and provide regular feedback to improve AllyArc's accuracy and sensitivity. Users should be aware of the model's limitations and not rely on it for critical decisions.

## How to Get Started with the Model on Google Colab

To explore and interact with AllyArc using Google Colab:
1. Open the [AllyArc Interactive Colab Notebook](https://colab.research.google.com/drive/1MiGTw7nKMFbE8FllpVAW66DTmQSzOTFd?usp=sharing).
2. Go to `File > Save a copy in Drive` to create a personal copy of the notebook.
3. Obtain a Hugging Face API token by creating an account or logging in at [Hugging Face](https://huggingface.co/).
4. In your copied notebook, replace `YOUR_HUGGING_FACE_TOKEN_HERE` with your actual Hugging Face token.
5. Follow the instructions in the notebook to install necessary libraries and dependencies.
6. Run the cells step by step to initialize and interact with the AllyArc model.

Please ensure you have the appropriate permissions and quotas on Google Colab to run the model without interruption.


## How to Get Started with the AllyArc Model Locally

To run the AllyArc model on your local machine, follow these steps:

1. Ensure you have Python installed on your system.
2. Install the necessary Python packages by running:
```bash
pip install transformers tokenizers sentencepiece
```
3. Obtain a Hugging Face API token by creating an account or logging in at [Hugging Face](https://huggingface.co/settings/tokens).
4. Set an environment variable for your Hugging Face token. You can do this by running the following command in your terminal (replace `<your_hugging_face_token>` with your actual token):
```bash
export HUGGING_FACE_API_KEY=<your_hugging_face_token>
```
5. Create a new Python script or open a Python interactive shell and input the following code:

```python
import os
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Read the Hugging Face token from the environment variable set in step 4
HUGGING_FACE_API_KEY = os.environ.get("HUGGING_FACE_API_KEY")

model_id = "ZainabF/allyarc_finetune_model_sample"

filenames = [
    "pytorch_model.bin", "added_tokens.json", "config.json", "generation_config.json",
    "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "pytorch_model.bin.index.json"
]

for filename in filenames:
    downloaded_model_path = hf_hub_download(
        repo_id=model_id,
        filename=filename,
        token=HUGGING_FACE_API_KEY
    )
    print(f"Downloaded {filename} to {downloaded_model_path}")

# Initialize the tokenizer and model. The base model (Mistral 7B) is a
# causal LM, so AutoModelForCausalLM is the appropriate class here.
tokenizer = AutoTokenizer.from_pretrained(model_id, legacy=False)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Set up the pipeline for text generation
text_gen_pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer, max_length=1000)

# Generate a response
response = text_gen_pipeline("I'm upset that I got a low mark in math, please help me")
print(response)
```
6. Execute the script to download the model and interact with it.

Please ensure that your environment variables are correctly set, and that the necessary packages are installed before running the script. The script will download the model files and then initialize the model for text generation, allowing you to input prompts and receive responses.
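The script above passes the raw sentence straight to the pipeline. Instruction-tuned Mistral derivatives commonly expect prompts wrapped in an `[INST] … [/INST]` template; whether this particular fine-tune was trained with that format is an assumption worth verifying empirically, but a small helper makes it easy to try:

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Mistral-style instruction template.

    The [INST] format is the convention for Mistral-7B-Instruct models;
    whether this fine-tune expects it is an assumption, not confirmed
    by the model card.
    """
    return f"<s>[INST] {user_message.strip()} [/INST]"

prompt = build_prompt("I'm upset that I got a low mark in math, please help me")
print(prompt)
```

The formatted `prompt` can then be passed to `text_gen_pipeline` in place of the raw sentence; comparing outputs with and without the template is a quick way to check which format the fine-tune responds to best.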


<!--

## Training Details

### Training Data

The model is trained on a curated dataset from educational websites and textbooks, with a focus on materials suitable for autistic learners.

### Training Procedure 

#### Preprocessing [optional]

Data is cleaned and formatted to remove irrelevant information, ensuring the model receives high-quality input.

#### Training Hyperparameters

- **Training regime:** fp16 mixed precision for efficiency.

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The model is evaluated against a set of questions and scenarios typical of an educational environment for autistic students.

#### Factors

The evaluation considers the model's ability to handle various subjects and the clarity of its explanations.

#### Metrics

Metrics include accuracy, response time, and user satisfaction.

### Results

[More Information Needed]

#### Summary

[More Information Needed]

## Environmental Impact

- **Hardware Type:** Cloud-based GPUs.
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

The model uses a transformer-based architecture optimized for conversational understanding.

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

```bibtex
@misc{allyarc2024,
  title={AllyArc: A Conversational Chatbot for Autistic Learners},
  author={Zainab},
  year={2024},
  note={Model card for AllyArc}
}
```

**APA:**

Zainab. (2024). AllyArc: A Conversational Chatbot for Autistic Learners. [Model Card].

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

Zainab

## Model Card Contact

[Contact Information]



-->