---
library_name: transformers
tags: [conversational, chain-of-thought, education]
---
# CaedenAI - O1
CaedenAI is a conversational AI model fine-tuned to provide detailed reasoning in its responses using the Chain-of-Thought (CoT) methodology. It is designed for educational use, enabling users to understand the reasoning process behind answers.
## Model Details
### Model Description
- **Developed by:** Caeden Rajoo
- **Model type:** Conversational AI with CoT reasoning
- **License:** Apache 2.0
- **Finetuned from model:** Qwen/Qwen2.5-1.5B
- **Primary Use Case:** Education and knowledge expansion
This model is fine-tuned for generating step-by-step reasoning for queries, making it an excellent tool for educational environments and learning applications.
## Uses
### Direct Use
This model can be directly applied in:
- Educational environments, helping students learn through step-by-step explanations.
- Applications where detailed reasoning is needed to understand an answer.
- Conversational AI systems that prioritize reasoning over simple answers (see the sketch below).
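As an example of the last use case, a reasoning-first Q&A helper can be wrapped around the `transformers` pipeline API. This is a minimal sketch, not an official example: the `Question:`/`Reasoning:` prompt template follows the usage snippet later in this card, and the `explain` helper is a hypothetical name.

```python
from transformers import pipeline

# Load the model through the high-level text-generation pipeline.
generator = pipeline("text-generation", model="caedencode/Caeden-o1")

def explain(question: str) -> str:
    # Prompt template taken from the "How to Get Started" snippet below.
    prompt = f"Question: {question}\nReasoning:\n"
    result = generator(prompt, max_new_tokens=200, do_sample=False)
    # The pipeline returns the prompt plus the generated reasoning.
    return result[0]["generated_text"]

print(explain("Why does ice float on water?"))
```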
### Out-of-Scope Use
This model may not be suitable for:
- Scenarios requiring highly specialized domain knowledge not covered in the training data.
- Safety-critical systems that require real-time responses (e.g., healthcare or emergency applications).
## Bias, Risks, and Limitations
The model inherits limitations from its training data and base model. Users should consider potential biases or incomplete information in responses.
### Recommendations
- The model's output should be reviewed for accuracy in critical use cases.
- Users should ensure that ethical considerations are met when using the model in sensitive environments.
## How to Get Started with the Model
Here’s how you can load and use CaedenAI:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("caedencode/Caeden-o1")
tokenizer = AutoTokenizer.from_pretrained("caedencode/Caeden-o1")

# Move the model to GPU if one is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

def generate_answer(question):
    # The model expects a Question/Reasoning prompt and continues it
    # with step-by-step reasoning.
    prompt = f"Question: {question}\nReasoning:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    # Beam search keeps the reasoning focused; max_length caps the
    # total (prompt + generated) token count.
    outputs = model.generate(**inputs, max_length=200, num_beams=5, early_stopping=True)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

question = "What is the largest planet in our solar system?"
answer = generate_answer(question)
print(answer)
```
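The decoded output contains the prompt followed by the generated reasoning. If an application needs to display only the reasoning, it can split on the `Reasoning:` marker from the template above; a minimal sketch (the `split_reasoning` helper is illustrative, not part of the model's API):

```python
def split_reasoning(decoded: str) -> str:
    # Keep only the text the model generated after the "Reasoning:"
    # marker that generate_answer() put in the prompt.
    _, _, reasoning = decoded.partition("Reasoning:")
    return reasoning.strip() or decoded

# Reusing `answer` from the snippet above.
print(split_reasoning(answer))
```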