---
base_model: allenai/OLMo-1B-hf
library_name: peft
---
# OLMo Code Python3 Text-Only Model

This is a LoRA adapter for the allenai/OLMo-1B base model, fine-tuned for Python 3 code generation tasks.

## Model Details

- **Base Model:** allenai/OLMo-1B-hf
- **Model Type:** LoRA Adapter
- **Task:** Causal Language Modeling for Python 3 code
- **Language:** Python 3
- **License:** MIT
- **Fine-tuned by:** dipikakhullar

## Model Description

This model is a LoRA adapter that has been fine-tuned on Python 3 code data. It extends the capabilities of the base OLMo-1B model specifically for Python code generation tasks.

### LoRA Configuration

- **PEFT Type:** LORA
- **LoRA Alpha:** 16
- **LoRA Dropout:** 0.05
- **LoRA Rank (r):** 8
- **Target Modules:** q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- **Task Type:** CAUSAL_LM

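For reference, these settings can be expressed as a `peft.LoraConfig`. The snippet below is a minimal sketch reconstructed from the values listed above, not the original training script:

```python
from peft import LoraConfig

# Sketch of the adapter configuration described above (reconstructed, not the original training code).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```
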
## Uses

### Direct Use

This model is intended for Python 3 code generation tasks. It can be used to:

- Generate Python code completions
- Assist with code writing
- Provide code suggestions

### Downstream Use

The model can be further fine-tuned for specific Python programming tasks or integrated into code generation applications.

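As a rough sketch of continued fine-tuning (assuming you supply your own tokenized Python 3 dataset as `train_dataset`; the output directory and hyperparameters below are placeholders, not the original training setup), the adapter can be reloaded in trainable mode and trained with the standard `transformers.Trainer`:

```python
from peft import PeftModel
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")

# Reload the adapter with its LoRA weights marked as trainable so training can continue.
model = PeftModel.from_pretrained(
    base_model,
    "dipikakhullar/olmo-code-python3-text-only",
    is_trainable=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="olmo-python3-continued", num_train_epochs=1),
    train_dataset=train_dataset,  # placeholder: your tokenized Python 3 dataset
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
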
### Out-of-Scope Use

This model is specifically designed for Python 3 code generation and may not perform well for:

- Other programming languages
- Natural language tasks
- Non-code related tasks

## How to Get Started with the Model

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "dipikakhullar/olmo-code-python3-text-only")

# Example usage: sample a completion for a Python prompt
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

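If you prefer adapter-free inference, the LoRA weights can optionally be merged back into the base model using PEFT's standard merge operation. Continuing from the snippet above (the output directory name is just an example):

```python
# Optionally merge the LoRA weights into the base model for adapter-free inference.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("olmo-code-python3-merged")  # example output path
tokenizer.save_pretrained("olmo-code-python3-merged")
```
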
## Training Details

### Training Data

The model was fine-tuned on cleaned Python 3 code data specifically prepared for language model training.

### Training Procedure

- **Base Model:** allenai/OLMo-1B-hf
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
- **Checkpoint:** checkpoint-6000

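If the `checkpoint-6000` files are published as a subfolder of this repository (an assumption about the repo layout that you should verify on the Hub), that specific checkpoint could be loaded by pointing PEFT at the subfolder:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf")

# Assumes the adapter for this checkpoint lives in a "checkpoint-6000" subfolder of the repo.
model = PeftModel.from_pretrained(
    base_model,
    "dipikakhullar/olmo-code-python3-text-only",
    subfolder="checkpoint-6000",
)
```
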
## Model Card Contact

- **Author:** dipikakhullar
- **Repository:** https://huggingface.co/dipikakhullar/olmo-code-python3-text-only

## Framework versions

- PEFT 0.7.1
- Transformers