---
language:
- en
license: apache-2.0
model-index:
- name: AllyArc/llama_allyar
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: chat_imitate
type: AllyArc/chat_imitate
split: test
metrics:
- type: bleu
value: 0.5
name: BLEU
- type: confusion_matrix
value: 0.5
name: Confusion Matrix
- type: glue
value: 0.5
name: GLUE
- type: mse
value: 0.5
name: MSE
- type: squad
value: 0.5
name: SQUAD
- type: wiki_split
value: 0.8
name: Wiki Split
---
# Model Card for AllyArc
This model card describes AllyArc, an educational chatbot designed to support autistic students with personalized learning experiences. AllyArc uses a fine-tuned Large Language Model to interact with users and provide educational content.
## Model Details
### Model Description
AllyArc is an innovative chatbot tailored for the educational support of autistic students. It leverages a fine-tuned LLM to provide interactive learning experiences, emotional support, and a platform for students to engage in conversational learning.
- **Developed by:** Zainab, a computer science student and MLH Top 50 honoree.
- **Model type:** Conversational Large Language Model
- **Language(s) (NLP):** Primarily English, with potential multilingual support.
- **Finetuned from model:** Mistral 7B.
- [Dataset generation Script](https://colab.research.google.com/drive/1zqmu6vQn1Rb0OJPAoRBBYcI7qw0H_LCH#scrollTo=xOj-BuKARJrH)
- [Llama Fine-tuning Script](https://colab.research.google.com/drive/1dz49DEzCKiE2103A3Kdb8BLIed_aCFi_?usp=sharing)
<!--
### Model Sources [optional]
- **Repository:** [GitHub or other repository link]
- **Paper [Coming Soon]:** [Link to any research paper published]
- **Demo [Coming Soon]:** [Link to a live demo if available]
-->
## Uses
### Direct Use
AllyArc can be directly interacted with by students and educators through a conversational interface, providing instant responses to queries and aiding in learning.
### Downstream Use
The model can be integrated into educational platforms or applications as a support tool for autistic students, offering personalized assistance.
### Out-of-Scope Use
AllyArc is not designed for high-stakes decisions, medical advice, or any context outside of educational support.
## Bias, Risks, and Limitations
While designed to be inclusive, there is a risk of unintended bias in responses due to the training data. The model may not fully understand or appropriately respond to all nuances of human emotion and communication.
### Recommendations
Educators should monitor interactions and provide regular feedback to improve AllyArc's accuracy and sensitivity. Users should be aware of the model's limitations and not rely on it for critical decisions.
## How to Get Started with the Model on Google Colab
To explore and interact with AllyArc using Google Colab:
1. Open the [AllyArc Interactive Colab Notebook](https://colab.research.google.com/drive/1MiGTw7nKMFbE8FllpVAW66DTmQSzOTFd?usp=sharing).
2. Go to `File > Save a copy in Drive` to create a personal copy of the notebook.
3. Obtain a Hugging Face API token by creating an account or logging in at [Hugging Face](https://huggingface.co/).
4. In your copied notebook, replace `YOUR_HUGGING_FACE_TOKEN_HERE` with your actual Hugging Face token.
5. Follow the instructions in the notebook to install necessary libraries and dependencies.
6. Run the cells step by step to initialize and interact with the AllyArc model.
Please ensure you have the appropriate permissions and quotas on Google Colab to run the model without interruption.
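If your copy of the notebook reads the token from an environment variable rather than an inline string, a Colab cell like the following makes it available. This is a minimal sketch; the variable name `HUGGING_FACE_API_KEY` is an assumption, so use whatever name your notebook actually reads in step 4.

```python
import os

# The variable name HUGGING_FACE_API_KEY is an assumption; use whatever
# name your copy of the notebook reads in step 4.
os.environ["HUGGING_FACE_API_KEY"] = "YOUR_HUGGING_FACE_TOKEN_HERE"

# Confirm the token is visible without printing its value
print("HUGGING_FACE_API_KEY" in os.environ)  # True
```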
## How to Get Started with the AllyArc Model Locally
To run the AllyArc model on your local machine, follow these steps:
1. Ensure you have Python installed on your system.
2. Install the necessary Python packages by running:
```bash
pip install torch transformers tokenizers sentencepiece
```
3. Obtain a Hugging Face API token by creating an account or logging in at [Hugging Face](https://huggingface.co/settings/tokens).
4. Set an environment variable for your Hugging Face token. You can do this by running the following command in your terminal (replace `<your_hugging_face_token>` with your actual token):
```bash
export HUGGING_FACE_API_KEY=<your_hugging_face_token>
```
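An `export` only affects the current shell session, so it is easy to start Python without the variable set. Before moving on to step 5, you can check for the token with a small helper; this is a sketch, and the error message it raises is illustrative:

```python
import os

def require_token(name="HUGGING_FACE_API_KEY"):
    """Return the token from the environment, or exit with a clear message."""
    token = os.environ.get(name)
    if not token:
        raise SystemExit(
            f"{name} is not set. Run the export command from step 4 "
            "in the same shell session before starting Python."
        )
    return token
```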
5. Create a new Python script or open a Python interactive shell and input the following code:
```python
import os

from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Read the Hugging Face token from the environment (set in step 4 above)
HUGGING_FACE_API_KEY = os.environ.get("HUGGING_FACE_API_KEY")

model_id = "ZainabF/allyarc_finetune_model_sample"
filenames = [
    "pytorch_model.bin", "added_tokens.json", "config.json", "generation_config.json",
    "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "pytorch_model.bin.index.json"
]

# Download each model file from the Hub
for filename in filenames:
    downloaded_model_path = hf_hub_download(
        repo_id=model_id,
        filename=filename,
        token=HUGGING_FACE_API_KEY
    )
    print(f"Downloaded {filename} to {downloaded_model_path}")

# Initialize the tokenizer and model (Mistral 7B is a causal language model)
tokenizer = AutoTokenizer.from_pretrained(model_id, legacy=False, token=HUGGING_FACE_API_KEY)
model = AutoModelForCausalLM.from_pretrained(model_id, token=HUGGING_FACE_API_KEY)

# Set up the pipeline for text generation
text_gen_pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer, max_length=1000)

# Generate a response
response = text_gen_pipeline("I'm upset that I got a low mark in math, please help me")
print(response)
```
6. Execute the script to download the model and interact with it.
Please ensure that your environment variables are correctly set, and that the necessary packages are installed before running the script. The script will download the model files and then initialize the model for text generation, allowing you to input prompts and receive responses.
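Note that the `text-generation` pipeline returns a list of dictionaries rather than a plain string, so `print(response)` shows the raw structure. A small helper can pull out just the reply; this is a sketch, and the sample output below is illustrative, not actual model output:

```python
def extract_reply(pipeline_output):
    """Return the generated text from a text-generation pipeline result.

    The pipeline returns a list with one dict per generated sequence,
    each holding its text under the "generated_text" key.
    """
    return pipeline_output[0]["generated_text"]

# Example with the structure the pipeline produces (illustrative text):
sample_output = [{"generated_text": "It's okay to feel upset about a low mark."}]
print(extract_reply(sample_output))
```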
<!--
## Training Details
### Training Data
The model is trained on a curated dataset from educational websites and textbooks, with a focus on materials suitable for autistic learners.
### Training Procedure
#### Preprocessing [optional]
Data is cleaned and formatted to remove irrelevant information, ensuring the model receives high-quality input.
#### Training Hyperparameters
- **Training regime:** fp16 mixed precision for efficiency.
#### Speeds, Sizes, Times [optional]
[More Information Needed]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The model is evaluated against a set of questions and scenarios typical of an educational environment for autistic students.
#### Factors
The evaluation considers the model's ability to handle various subjects and the clarity of its explanations.
#### Metrics
Metrics include accuracy, response time, and user satisfaction.
### Results
[More Information Needed]
#### Summary
[More Information Needed]
## Environmental Impact
- **Hardware Type:** Cloud-based GPUs.
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
The model uses a transformer-based architecture optimized for conversational understanding.
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
**BibTeX:**
```bibtex
@misc{allyarc2024,
  title={AllyArc: A Conversational Chatbot for Autistic Learners},
  author={Zainab},
  year={2024},
  note={Model card for AllyArc}
}
```
**APA:**
Zainab. (2024). AllyArc: A Conversational Chatbot for Autistic Learners. [Model Card].
## Glossary [optional]
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
Zainab
## Model Card Contact
[Contact Information]
-->