---
# @see https://github.com/huggingface/hub-docs/blob/main/modelcard.md
# @see https://huggingface.co/docs/huggingface_hub/guides/model-cards#update-metadata
# @see https://huggingface.co/docs/hub/model-cards#model-card-metadata
version: '0.22'
timestamp: '20250402_012414955_UTC'

model_name: retrain-pipelines Function Caller
base_model: unsloth/Qwen2.5-1.5B
base_model_relation: adapter
library_name: transformers
datasets:
- retrain-pipelines/func_calls_ds
license: apache-2.0
language:
- en
task_categories:
- text2text-generation
tags:
- retrain-pipelines
- function-calling
- LLM Agent
- code
- unsloth
thumbnail: https://cdn-avatars.huggingface.co/v1/production/uploads/651e93137b2a2e027f9e55df/96hzBved0YMjCq--s0kad.png

# @see https://huggingface.co/docs/hub/models-widgets#enabling-a-widget
# @see https://huggingface.co/docs/hub/models-widgets-examples
# @see https://huggingface.co/docs/hub/en/model-cards#specifying-a-task--pipelinetag-
pipeline_tag: text2text-generation
widget:
- text: >-
    Hello
  example_title: No function call
  output:
    text: '[]'
- text: >-
    Is 49 a perfect square?
  example_title: Perfect square
  output:
    text: '[{"name": "is_perfect_square", "arguments": {"num": 49}}]'

mf_run_id: '92'

# @see https://huggingface.co/docs/huggingface_hub/guides/model-cards#include-evaluation-results
# @see https://huggingface.co/docs/huggingface_hub/main/en/package_reference/cards#huggingface_hub.EvalResult
model-index:
- name: retrain-pipelines Function Caller
  results:
  - task:
      type: text2text-generation
      name: Text2Text Generation
    dataset:
      name: retrain-pipelines Function Calling
      type: retrain-pipelines/func_calls_ds
      split: validation
      revision: 8d4dacf6095dc0ef2702d58dfeaa36b730dece00
    metrics:
    - type: precision
      value: 0.7514837980270386
    - type: recall
      value: 0.7512756586074829
    - type: f1
      value: 0.7511526346206665
    - type: jaccard
      value: 0.7329407334327698
---
# retrain-pipelines Function Caller

version 0.22 - 2025-04-02 01:24:14 UTC (retraining source-code | pipeline-card)

Training dataset:
- retrain-pipelines/func_calls_ds v0.20 (8d4dacf - 2025-04-01 18:12:25 UTC)

Base model:
- unsloth/Qwen2.5-1.5B (2d0a015 - 2025-02-06 02:32:14 UTC)
arXiv:
- 2407.10671
This LoRA adapter can, for instance, be used as follows:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch import device, cuda

repo_id = "retrain-pipelines/function_caller_lora"
revision = ""

model = AutoModelForCausalLM.from_pretrained(
    repo_id, revision=revision,
    torch_dtype="auto", device_map="auto")
# note: tokenizers take no torch_dtype/device_map arguments
tokenizer = AutoTokenizer.from_pretrained(
    repo_id, revision=revision)

device = device("cuda" if cuda.is_available() else "cpu")

def generate_tool_calls_list(query, max_new_tokens=400) -> str:
    # the chat template is used here as a plain Python format string
    formatted_query = tokenizer.chat_template.format(query, "")
    inputs = tokenizer(formatted_query,
                       return_tensors="pt").input_ids.to(device)
    outputs = model.generate(inputs,
                             max_new_tokens=max_new_tokens,
                             do_sample=False)
    generated_text = tokenizer.batch_decode(
        outputs, skip_special_tokens=True)[0]
    # strip the echoed prompt, keep only the completion
    return generated_text[len(formatted_query):].strip()

generate_tool_calls_list("Is 49 a perfect square?")
```
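As the widget examples in the metadata suggest, the model's completion is a JSON list of tool calls (an empty list `[]` when no function applies). A minimal sketch of turning that string into Python objects, using a hard-coded sample completion in place of an actual model call:

```python
import json

# Sample raw completion, copied from the "Perfect square" widget example;
# in practice this would come from generate_tool_calls_list(...).
raw = '[{"name": "is_perfect_square", "arguments": {"num": 49}}]'

tool_calls = json.loads(raw)  # -> list of {"name": ..., "arguments": ...} dicts
for call in tool_calls:
    print(call["name"], call["arguments"])
```

From here, each entry can be dispatched to the matching function in your own tool registry.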

Powered by retrain-pipelines 0.1.1 - Run by Aurelien-Morgan-Bot - UnslothFuncCallFlow - mf_run_id : 92