---
library_name: peft
base_model: codellama/CodeLlama-13b-hf
tags:
- code-generation
- text-generation
- llama
- turkish
- n8n
- workflow
- automation
- fine-tuned
- lora
language:
- en
- tr
pipeline_tag: text-generation
widget:
- text: "Create an n8n workflow that triggers when a webhook receives data:"
example_title: "n8n Webhook Workflow"
- text: '{"name": "HTTP Request", "type": "n8n-nodes-base.httpRequest", "parameters": {'
example_title: "n8n HTTP Node"
- text: "n8n automation: monitor CSV file and send Slack notification:"
example_title: "n8n File Monitor + Slack"
- text: "Build n8n workflow for API data processing:"
example_title: "n8n API Processing"
inference: true
license: llama2
model_type: llama
---
# 🚀 Code Llama 13B - n8n Workflow Generator
This model is a code-generation model fine-tuned from **CodeLlama-13b-hf** and specialized for **n8n workflow automation**.
## 🎯 Specialized Areas
- ✅ **n8n Workflow Creation** - Webhook, HTTP, API workflows
- ✅ **Node Configurations** - JSON node parameters
- ✅ **Automation Logic** - File monitoring, data processing
- ✅ **Integration Patterns** - Slack, email, database integrations
- ✅ **Best Practices** - n8n terminology and syntax
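To make the target format concrete, here is an illustrative sketch of the kind of n8n workflow JSON the model is tuned to produce: a Webhook trigger connected to an HTTP Request node. The `n8n-nodes-base.*` type strings follow n8n's naming convention, but the parameter fields shown are simplified assumptions, not a complete n8n schema.

```python
import json

# A minimal, hand-written example of an n8n-style workflow document:
# two nodes plus a "connections" map wiring the trigger to the next step.
# Parameter names here are simplified for illustration.
workflow = {
    "name": "Webhook to HTTP Request",
    "nodes": [
        {
            "name": "Webhook",
            "type": "n8n-nodes-base.webhook",
            "parameters": {"path": "incoming-data", "httpMethod": "POST"},
        },
        {
            "name": "HTTP Request",
            "type": "n8n-nodes-base.httpRequest",
            "parameters": {"url": "https://example.com/api", "method": "GET"},
        },
    ],
    "connections": {
        "Webhook": {"main": [[{"node": "HTTP Request", "type": "main", "index": 0}]]}
    },
}

print(json.dumps(workflow, indent=2))
```

Generated outputs in this shape can be pasted into the n8n editor or imported as a workflow file.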
## 🚀 Quick Start
### Using the Widget
Try the example prompts in the widget above!
### Using Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-13b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the n8n fine-tuned LoRA adapter
model = PeftModel.from_pretrained(base_model, "AlpYzc/code-llama-13b-turkish-quick-fix")

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("AlpYzc/code-llama-13b-turkish-quick-fix")

# Generate an n8n workflow
prompt = "Create an n8n workflow that triggers when a webhook receives data:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    inputs.input_ids,
    max_new_tokens=150,
    temperature=0.7,
    do_sample=True,  # temperature only takes effect when sampling is enabled
)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
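Note that `generate` returns the prompt followed by the continuation, so the decoded string echoes your input. A small helper (illustrative, independent of the model) to keep only the newly generated text:

```python
def extract_completion(decoded: str, prompt: str) -> str:
    """Return only the newly generated text, dropping the echoed prompt."""
    if decoded.startswith(prompt):
        return decoded[len(prompt):].lstrip()
    return decoded

# Example with a hypothetical decoded output:
sample = (
    "Create an n8n workflow that triggers when a webhook receives data: "
    "1. Add a Webhook node."
)
print(extract_completion(
    sample,
    "Create an n8n workflow that triggers when a webhook receives data:",
))
# → 1. Add a Webhook node.
```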
## 📊 Performance Comparison
| Model | n8n Terms | Workflow Focus | JSON Structure |
|-------|-----------|----------------|----------------|
| **Original CodeLlama** | ⭐⭐ | ⭐⭐ | ⭐⭐ |
| **n8n Fine-tuned** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
## 🎨 Example Outputs
### Input: "Create n8n webhook workflow:"
**Original CodeLlama:**
```
Create a n8n webhook workflow:
1. Add a webhook node
2. Create a webhook url
3. Update the n8n workflow with the webhook url
```
**n8n Fine-tuned:**
```
Create a webhook in n8n:
1. Create a new workflow.
2. Add a webhook node.
3. Copy the URL from the Webhook node to the clipboard.
4. Paste the URL into the N8N_WEBHOOK_URL field in the .env file.
```
## 🛠️ Training Details
- **Base Model**: `codellama/CodeLlama-13b-hf`
- **Method**: LoRA (Low-Rank Adaptation)
- **Training Data**: n8n workflow examples
- **Training Duration**: ~3.3 hours
- **Final Loss**: 0.1577
- **Parameters**: 250M adapter weights
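For intuition on where the adapter's parameter count comes from: LoRA freezes the base weights and, for each targeted matrix of shape (d_in, d_out), trains two low-rank factors adding r·(d_in + d_out) parameters. A back-of-envelope sketch below uses CodeLlama-13B's real dimensions but a hypothetical rank and target-module set, not this adapter's actual training configuration (the final count depends on both choices):

```python
# Back-of-envelope LoRA parameter count.
# hidden and layers match CodeLlama-13B; rank and the choice of target
# modules are assumptions for illustration only.
hidden = 5120   # CodeLlama-13B hidden size
layers = 40     # CodeLlama-13B transformer layers
rank = 64       # hypothetical LoRA rank

# Suppose LoRA targets the four attention projections (q, k, v, o),
# each a hidden x hidden matrix: one adapter adds rank * (d_in + d_out) params.
per_matrix = rank * (hidden + hidden)
total = per_matrix * 4 * layers
print(f"{total / 1e6:.0f}M adapter parameters")
# → 105M adapter parameters
```

Targeting more modules (e.g. the MLP projections) or raising the rank scales the count up accordingly.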
## 🎯 Use Cases
### 1. **n8n Workflow Generation**
```python
prompt = "Create n8n workflow for monitoring file changes:"
# Generates complete n8n workflow with proper nodes
```
### 2. **Node Configuration**
```python
prompt = '{"name": "HTTP Request", "type": "n8n-nodes-base.httpRequest",'
# Generates valid n8n node JSON configuration
```
### 3. **Automation Patterns**
```python
prompt = "n8n automation: CSV processing and Slack notification:"
# Generates multi-step automation workflows
```
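Since the use cases above expect JSON output, a light validity check before importing into n8n can catch truncated or malformed generations. A minimal sketch; the required-key set is an assumption based on common n8n node fields, not an official schema:

```python
import json

# Assumed minimal fields for an n8n node; not an official schema.
REQUIRED_NODE_KEYS = {"name", "type", "parameters"}

def check_node(text: str) -> bool:
    """Return True if text parses as JSON and looks like an n8n node."""
    try:
        node = json.loads(text)
    except json.JSONDecodeError:
        return False
    return isinstance(node, dict) and REQUIRED_NODE_KEYS <= node.keys()

good = (
    '{"name": "HTTP Request", "type": "n8n-nodes-base.httpRequest", '
    '"parameters": {"url": "https://example.com"}}'
)
bad = '{"name": "HTTP Request", "type": "n8n-nodes-base.httpRequest", "parameters": {'
print(check_node(good), check_node(bad))
# → True False
```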
## ⚙️ Model Requirements
- **GPU Memory**: ~26GB (for full model)
- **RAM**: 32GB+ recommended
- **CUDA**: 11.8+
- **Python**: 3.8+
- **Dependencies**: `transformers`, `peft`, `torch`
## 🔗 Related Links
- **Base Model**: [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf)
- **n8n Documentation**: [n8n.io](https://n8n.io)
- **LoRA Paper**: [LoRA: Low-Rank Adaptation](https://arxiv.org/abs/2106.09685)
## 📜 Citation
```bibtex
@misc{code-llama-n8n-2025,
  title  = {Code Llama 13B n8n Workflow Generator},
  author = {AlpYzc},
  year   = {2025},
  url    = {https://huggingface.co/AlpYzc/code-llama-13b-turkish-quick-fix}
}
```
## ⚠️ Limitations
- Specialized for n8n workflows - may not perform well on general coding tasks
- Requires significant GPU memory for full model inference
- LoRA adapter needs base model for functionality
- Output quality depends on prompt specificity
## 🤝 Contributing
This model was developed for the n8n community. Feedback and improvement suggestions are very welcome!
---
<div align="center">

**🚀 Ready to automate your workflows with n8n?**

</div>