AlpYzc committed on
Commit 34a39c4 · verified · 1 Parent(s): ad1bde1

Add enhanced README with widget and metadata

Files changed (1): README.md (+179 −3)

README.md CHANGED
@@ -1,11 +1,187 @@
- # n8n Code Generator - Quick Fix Upload
-
- Usage with the base model:
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from peft import PeftModel

- base_model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-13b-hf")
  model = PeftModel.from_pretrained(base_model, "AlpYzc/code-llama-13b-turkish-quick-fix")
  tokenizer = AutoTokenizer.from_pretrained("AlpYzc/code-llama-13b-turkish-quick-fix")
  ```
---
library_name: peft
base_model: codellama/CodeLlama-13b-hf
tags:
- code-generation
- text-generation
- llama
- turkish
- n8n
- workflow
- automation
- fine-tuned
- lora
language:
- en
- tr
pipeline_tag: text-generation
widget:
- text: "Create an n8n workflow that triggers when a webhook receives data:"
  example_title: "n8n Webhook Workflow"
- text: '{"name": "HTTP Request", "type": "n8n-nodes-base.httpRequest", "parameters": {'
  example_title: "n8n HTTP Node"
- text: "n8n automation: monitor CSV file and send Slack notification:"
  example_title: "n8n File Monitor + Slack"
- text: "Build n8n workflow for API data processing:"
  example_title: "n8n API Processing"
inference: true
license: llama2
model_type: llama
---

# 🚀 Code Llama 13B - n8n Workflow Generator

<div align="center">

![Model Type](https://img.shields.io/badge/Model-LoRA%20Adapter-blue)
![Base Model](https://img.shields.io/badge/Base-CodeLlama%2013B-green)
![Specialization](https://img.shields.io/badge/Specialty-n8n%20Workflows-orange)
![License](https://img.shields.io/badge/License-Llama%202-red)

</div>

This model is a code-generation model fine-tuned from **CodeLlama-13b-hf** and specialized for **n8n workflow automation**.

## 🎯 Specialized Areas

- ✅ **n8n Workflow Creation** - Webhook, HTTP, and API workflows
- ✅ **Node Configurations** - JSON node parameters
- ✅ **Automation Logic** - File monitoring, data processing
- ✅ **Integration Patterns** - Slack, email, and database integrations
- ✅ **Best Practices** - n8n terminology and syntax

## 🚀 Quick Start

### Using the Widget
Try the example prompts in the widget above!

### Using the Code

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-13b-hf",
    torch_dtype=torch.float16,
    device_map="auto"
)

# Attach the n8n fine-tuned adapter
model = PeftModel.from_pretrained(base_model, "AlpYzc/code-llama-13b-turkish-quick-fix")

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("AlpYzc/code-llama-13b-turkish-quick-fix")

# Generate an n8n workflow
prompt = "Create an n8n workflow that triggers when a webhook receives data:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(inputs.input_ids, max_new_tokens=150, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
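Model output typically mixes prose with workflow JSON. As a rough sketch (illustrative only, not part of the model or the n8n API), a small stdlib helper can pull the first balanced JSON object out of the generated text; note it does not handle braces inside JSON strings:

```python
import json

def extract_first_json(text: str):
    """Return the first balanced top-level JSON object found in text, or None."""
    start = text.find("{")
    while start != -1:
        depth = 0
        for i, ch in enumerate(text[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    candidate = text[start:i + 1]
                    try:
                        return json.loads(candidate)
                    except json.JSONDecodeError:
                        break  # balanced but not valid JSON; try the next "{"
        start = text.find("{", start + 1)
    return None

# Hypothetical generated output, for illustration
generated = 'Here is the node: {"name": "Webhook", "type": "n8n-nodes-base.webhook"} done.'
node = extract_first_json(generated)
print(node["type"])  # n8n-nodes-base.webhook
```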

## 📊 Performance Comparison

| Model | n8n Terms | Workflow Focus | JSON Structure |
|-------|-----------|----------------|----------------|
| **Original CodeLlama** | ⭐⭐ | ⭐⭐ | ⭐⭐ |
| **n8n Fine-tuned** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |

## 🎨 Example Outputs

### Input: "Create n8n webhook workflow:"

**Original CodeLlama:**
```
Create a n8n webhook workflow:
1. Add a webhook node
2. Create a webhook url
3. Update the n8n workflow with the webhook url
```

**n8n Fine-tuned:**
```
Create a webhook in n8n:
1. Create a new workflow.
2. Add a webhook node.
3. Copy the URL from the Webhook node to the clipboard.
4. Paste the URL into the N8N_WEBHOOK_URL field in the .env file.
```

## 🛠️ Training Details

- **Base Model**: `codellama/CodeLlama-13b-hf`
- **Method**: LoRA (Low-Rank Adaptation)
- **Training Data**: n8n workflow examples
- **Training Duration**: ~3.3 hours
- **Final Loss**: 0.1577
- **Parameters**: 250M adapter weights

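As background on the LoRA method listed above: each adapted weight matrix is kept frozen and augmented with a trainable low-rank update, so only the two small factor matrices are trained:

```latex
W' = W + \Delta W = W + BA, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k)
```

The ~250M adapter weights reported above correspond to these trained B and A factors across the adapted layers.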
## 🎯 Use Cases

### 1. **n8n Workflow Generation**
```python
prompt = "Create n8n workflow for monitoring file changes:"
# Generates complete n8n workflow with proper nodes
```

### 2. **Node Configuration**
```python
prompt = '{"name": "HTTP Request", "type": "n8n-nodes-base.httpRequest",'
# Generates valid n8n node JSON configuration
```

### 3. **Automation Patterns**
```python
prompt = "n8n automation: CSV processing and Slack notification:"
# Generates multi-step automation workflows
```
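A generated node configuration can be sanity-checked before importing it into n8n. The sketch below is an assumption based on the node snippets in this card, not the full n8n node schema:

```python
import json

# Minimal structural check for a generated n8n node
# (assumed required keys, not the official n8n schema)
REQUIRED_KEYS = {"name", "type", "parameters"}

def is_plausible_node(raw: str) -> bool:
    """Check that raw parses as JSON and carries the expected node keys."""
    try:
        node = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(node, dict) and REQUIRED_KEYS <= node.keys()

generated = '{"name": "HTTP Request", "type": "n8n-nodes-base.httpRequest", "parameters": {"url": "https://example.com"}}'
print(is_plausible_node(generated))  # True
```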

## ⚙️ Model Requirements

- **GPU Memory**: ~26GB (full model in fp16)
- **RAM**: 32GB+ recommended
- **CUDA**: 11.8+
- **Python**: 3.8+
- **Dependencies**: `transformers`, `peft`, `torch`

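The ~26GB GPU figure follows directly from parameter count times bytes per parameter in fp16 (weights only; activations, KV cache, and the adapter add on top of this). A quick back-of-envelope check:

```python
# Rough fp16 weight-memory estimate for the 13B-parameter base model
params = 13_000_000_000
bytes_per_param = 2  # fp16 stores each weight in 2 bytes
weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.0f} GB")  # 26 GB
```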
## 🔗 Related Links

- **Base Model**: [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf)
- **n8n Documentation**: [n8n.io](https://n8n.io)
- **LoRA Paper**: [LoRA: Low-Rank Adaptation](https://arxiv.org/abs/2106.09685)

## 📜 Citation

```bibtex
@misc{code-llama-n8n-2025,
  title={Code Llama 13B n8n Workflow Generator},
  author={AlpYzc},
  year={2025},
  url={https://huggingface.co/AlpYzc/code-llama-13b-turkish-quick-fix}
}
```

## ⚠️ Limitations

- Specialized for n8n workflows - may not perform well on general coding tasks
- Requires significant GPU memory for full-model inference
- The LoRA adapter must be loaded on top of the base model; it is not usable standalone
- Output quality depends on prompt specificity

## 🤝 Contributing

This model was built for the n8n community. Feedback and suggestions for improvement are welcome!

---

<div align="center">

**🚀 Ready to automate your workflows with n8n?**

[![Use with Transformers](https://img.shields.io/badge/🤗%20Transformers-Use%20Model-yellow)](https://huggingface.co/AlpYzc/code-llama-13b-turkish-quick-fix)
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_generation.ipynb)

</div>