|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- dongsheng/DTA-Tool |
|
base_model: |
|
- meta-llama/Llama-2-13b |
|
--- |
|
|
|
## Model Description |
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
DTA_llama2_7b is the model released with the paper "[Divide-Then-Aggregate: An Efficient Tool Learning Method via Parallel Tool Invocation](https://arxiv.org/abs/2501.12432)".

It is a large language model capable of tool invocation, and it can invoke multiple tools in parallel within a single round.

The tool-calling format it uses is similar to OpenAI's function calling.
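As a minimal illustration (not the official inference script), the sketch below loads the model with Hugging Face `transformers` and prompts it with an OpenAI-style tool schema. The repository id `dongsheng/DTA_llama2_7b`, the prompt layout, and the example tool definitions are assumptions for illustration only; the exact chat/tool template is defined in the GitHub repository linked under Uses.

```python
# Minimal sketch (assumptions): the repo id and the prompt layout below are
# illustrative; consult the official GitHub repository for the exact template.
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dongsheng/DTA_llama2_7b"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# OpenAI-style function/tool definitions (illustrative example tools).
tools = [
    {"name": "get_weather", "description": "Get current weather for a city",
     "parameters": {"type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"]}},
    {"name": "get_time", "description": "Get local time for a city",
     "parameters": {"type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"]}},
]

# A single user turn; the model is expected to emit (possibly parallel) tool calls.
prompt = (
    "Available tools:\n" + json.dumps(tools, indent=2) +
    "\n\nUser: What is the weather and local time in Paris right now?\nAssistant:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```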
|
|
|
## Uses |
|
|
|
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> |
|
The accompanying code is available in our GitHub [repository](https://github.com/Zhudongsheng75/Divide-Then-Aggregate).
|
|
|
## Training Data |
|
|
|
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> |
|
|
|
The training data comes from our specially constructed [DTA-Tool](https://huggingface.co/datasets/dongsheng/DTA-Tool) dataset, which is derived from [ToolBench](https://github.com/OpenBMB/ToolBench).
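For quick inspection, the dataset can be loaded with the Hugging Face `datasets` library. The split name `"train"` is an assumption; consult the dataset card for the actual splits and schema.

```python
# Minimal sketch: load DTA-Tool for inspection.
# Assumption: a "train" split exists; field names follow the dataset card's schema.
from datasets import load_dataset

ds = load_dataset("dongsheng/DTA-Tool", split="train")
print(ds)      # dataset size and column names
print(ds[0])   # one training example (tool-learning dialogue with tool calls)
```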
|
|
|
## Evaluation |
|
|
|
<!-- This section describes the evaluation protocols and provides the results. --> |
|
|
|
### Testing Data |
|
|
|
<!-- This should link to a Dataset Card if possible. --> |
|
|
|
We evaluated the performance of DTA-Llama on [StableToolBench](https://github.com/THUNLP-MT/StableToolBench). |
|
|
|
### Results |
|
|
|
Detailed results on StableToolBench are reported in the [paper](https://arxiv.org/abs/2501.12432).
|
|
|
## Citation |
|
|
|
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
|
```bibtex
@misc{zhu2025dividethenaggregateefficienttoollearning,
      title={Divide-Then-Aggregate: An Efficient Tool Learning Method via Parallel Tool Invocation},
      author={Dongsheng Zhu and Weixian Shi and Zhengliang Shi and Zhaochun Ren and Shuaiqiang Wang and Lingyong Yan and Dawei Yin},
      year={2025},
      eprint={2501.12432},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2501.12432},
}
```