|
--- |
|
library_name: peft |
|
base_model: codellama/CodeLlama-13b-hf |
|
tags: |
|
- code-generation |
|
- text-generation |
|
- llama |
|
- turkish |
|
- n8n |
|
- workflow |
|
- automation |
|
- fine-tuned |
|
- lora |
|
language: |
|
- en |
|
- tr |
|
pipeline_tag: text-generation |
|
widget: |
|
- text: "Create an n8n workflow that triggers when a webhook receives data:" |
|
example_title: "n8n Webhook Workflow" |
|
- text: '{"name": "HTTP Request", "type": "n8n-nodes-base.httpRequest", "parameters": {' |
|
example_title: "n8n HTTP Node" |
|
- text: "n8n automation: monitor CSV file and send Slack notification:" |
|
example_title: "n8n File Monitor + Slack" |
|
- text: "Build n8n workflow for API data processing:" |
|
example_title: "n8n API Processing" |
|
inference: true |
|
license: llama2 |
|
model_type: llama |
|
--- |
|
|
|
# 🚀 Code Llama 13B - n8n Workflow Generator |
|
|
|
<div align="center"> |
|
|
|
 |
|
 |
|
 |
|
 |
|
|
|
</div> |
|
|
|
This model is a code-generation model fine-tuned from **CodeLlama-13b-hf** and specialized for **n8n workflow automation**.
|
|
|
## 🎯 Specialized Areas
|
|
|
- ✅ **n8n Workflow Creation** - Webhook, HTTP, API workflows |
|
- ✅ **Node Configurations** - JSON node parameters |
|
- ✅ **Automation Logic** - File monitoring, data processing |
|
- ✅ **Integration Patterns** - Slack, email, database integrations |
|
- ✅ **Best Practices** - n8n terminology and syntax
|
|
|
## 🚀 Quick Start
|
|
|
### Widget Usage

Try the example prompts in the widget above!
|
|
|
### Usage in Code
|
|
|
```python |
|
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-13b-hf",
    torch_dtype=torch.float16,
    device_map="auto"
)

# Attach the n8n fine-tuned adapter
model = PeftModel.from_pretrained(base_model, "AlpYzc/code-llama-13b-turkish-quick-fix")

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("AlpYzc/code-llama-13b-turkish-quick-fix")

# Generate an n8n workflow
prompt = "Create an n8n workflow that triggers when a webhook receives data:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=150,
    do_sample=True,  # temperature only takes effect when sampling is enabled
    temperature=0.7
)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
|
``` |
|
|
|
## 📊 Performance Comparison |
|
|
|
| Model | n8n Terms | Workflow Focus | JSON Structure | |
|
|-------|-----------|----------------|----------------| |
|
| **Original CodeLlama** | ⭐⭐ | ⭐⭐ | ⭐⭐ | |
|
| **n8n Fine-tuned** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | |
|
|
|
## 🎨 Example Outputs |
|
|
|
### Input: "Create n8n webhook workflow:" |
|
|
|
**Original CodeLlama:** |
|
``` |
|
Create a n8n webhook workflow: |
|
1. Add a webhook node |
|
2. Create a webhook url |
|
3. Update the n8n workflow with the webhook url |
|
``` |
|
|
|
**n8n Fine-tuned:** |
|
``` |
|
Create a webhook in n8n: |
|
1. Create a new workflow. |
|
2. Add a webhook node. |
|
3. Copy the URL from the Webhook node to the clipboard. |
|
4. Paste the URL into the N8N_WEBHOOK_URL field in the .env file. |
|
``` |
|
|
|
## 🛠️ Training Details |
|
|
|
- **Base Model**: `codellama/CodeLlama-13b-hf` |
|
- **Method**: LoRA (Low-Rank Adaptation) |
|
- **Training Data**: n8n workflow examples |
|
- **Training Duration**: ~3.3 hours |
|
- **Final Loss**: 0.1577 |
|
- **Parameters**: 250M adapter weights |
|
|
|
## 🎯 Use Cases |
|
|
|
### 1. **n8n Workflow Generation** |
|
```python |
|
prompt = "Create n8n workflow for monitoring file changes:" |
|
# Generates complete n8n workflow with proper nodes |
|
``` |
|
|
|
### 2. **Node Configuration** |
|
```python |
|
prompt = '{"name": "HTTP Request", "type": "n8n-nodes-base.httpRequest",' |
|
# Generates valid n8n node JSON configuration |
|
``` |
|
|
|
### 3. **Automation Patterns** |
|
```python |
|
prompt = "n8n automation: CSV processing and Slack notification:" |
|
# Generates multi-step automation workflows |
|
``` |
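The three prompt patterns above can be wrapped in a small helper. This is an illustrative sketch (`build_n8n_prompt` and `generate_workflow` are hypothetical helpers, not part of any library); `generate_workflow` assumes the model and tokenizer from the Quick Start section are already loaded:

```python
def build_n8n_prompt(kind: str, detail: str) -> str:
    """Format a prompt in one of the three styles shown above.

    kind: 'workflow', 'node', or 'automation'. Illustrative helper, not an API.
    """
    templates = {
        "workflow": "Create n8n workflow for {detail}:",
        "node": '{{"name": "{detail}", "type": "n8n-nodes-base.httpRequest",',
        "automation": "n8n automation: {detail}:",
    }
    return templates[kind].format(detail=detail)


def generate_workflow(model, tokenizer, kind: str, detail: str) -> str:
    """Run the fine-tuned model on a formatted prompt (requires a loaded model)."""
    prompt = build_n8n_prompt(kind, detail)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=150,
                             do_sample=True, temperature=0.7)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

For example, `build_n8n_prompt("workflow", "monitoring file changes")` reproduces the first use-case prompt above.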
|
|
|
## ⚙️ Model Requirements |
|
|
|
- **GPU Memory**: ~26GB (for full model) |
|
- **RAM**: 32GB+ recommended |
|
- **CUDA**: 11.8+ |
|
- **Python**: 3.8+ |
|
- **Dependencies**: `transformers`, `peft`, `torch` |
|
|
|
## 🔗 Related Links |
|
|
|
- **Base Model**: [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) |
|
- **n8n Documentation**: [n8n.io](https://n8n.io) |
|
- **LoRA Paper**: [LoRA: Low-Rank Adaptation](https://arxiv.org/abs/2106.09685) |
|
|
|
## 📜 Citation |
|
|
|
```bibtex |
|
@misc{code-llama-n8n-2025, |
|
title={Code Llama 13B n8n Workflow Generator}, |
|
author={AlpYzc}, |
|
year={2025}, |
|
url={https://huggingface.co/AlpYzc/code-llama-13b-turkish-quick-fix} |
|
} |
|
``` |
|
|
|
## ⚠️ Limitations |
|
|
|
- Specialized for n8n workflows - may not perform well on general coding tasks |
|
- Requires significant GPU memory for full model inference |
|
- The LoRA adapter is not standalone; it must be loaded on top of the base model
|
- Output quality depends on prompt specificity |
|
|
|
## 🤝 Contributing |
|
|
|
This model was developed for the n8n community. Feedback and suggestions for improvement are welcome!
|
|
|
--- |
|
|
|
<div align="center"> |
|
|
|
**🚀 Ready to automate your workflows with n8n?** |
|
|
|
[](https://huggingface.co/AlpYzc/code-llama-13b-turkish-quick-fix) |
|
[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_generation.ipynb) |
|
|
|
</div> |
|
|